# Representation theory via cohomology of line bundles

Henning Haahr Andersen

Centre for Quantum Mathematics (QM), IMADA, University of Southern Denmark, Odense, Denmark <EMAIL_ADDRESS>

###### Abstract.

Let $G$ be a reductive algebraic group over a field $k$ and let $B$ be a Borel subgroup in $G$. We demonstrate how a number of results on the cohomology of line bundles on the flag manifold $G/B$ have had interesting consequences in the representation theory for $G$. And vice versa. Our focus is on the case where the characteristic of $k$ is positive. In this case both the vanishing behavior of the cohomology modules for a line bundle on $G/B$ and the $G$-structures of the non-zero cohomology modules are still very much open problems. We give an account of the developments over the years, trying to illustrate what is now known and what is still not known today.

Dedicated to the memory of Jim Humphreys

## 1\. Introduction

Let $G$ be a connected reductive algebraic group over an algebraically closed field $k$ and let $B$ be a Borel subgroup in $G$. Then any finite dimensional $B$-module $E$ induces a vector bundle (locally free sheaf) $\mathcal{L}(E)$ on the homogeneous space $G/B$ as follows: For any open subset $U\subset G/B$ the set of sections of $\mathcal{L}(E)$ over $U$ is $\Gamma(U,\mathcal{L}(E))=\\{\varphi:\pi^{-1}(U)\rightarrow E\mid\varphi(xb)=b^{-1}\varphi(x),x\in\pi^{-1}(U),b\in B\\}.$ Here $\pi$ denotes the canonical map $G\to G/B$, and the maps $\varphi$ considered are the regular maps from the open subvariety $\pi^{-1}(U)$ of the affine variety $G$ to the affine space $E\simeq k^{n}$. In particular, the space of global sections $\Gamma(G/B,\mathcal{L}(E))$ has a natural $G$-action given by $g\varphi:x\mapsto\varphi(g^{-1}x),g,x\in G,\varphi\in\Gamma(G/B,\mathcal{L}(E))$. This is also the $G$-module obtained by applying the induction functor $\mathrm{Ind}_{B}^{G}$ to $E$. More generally, we can for any $i\geq 0$ identify the sheaf cohomology module $H^{i}(G/B,\mathcal{L}(E))$ with the module obtained by applying the right derived functor $R^{i}\mathrm{Ind}_{B}^{G}$ to $E$, cf. [43], Proposition I.5.12. We denote this module $H^{i}(E)$ for short.

The aim of this paper is to give - with all deliberate hindsight - an account of the interplay between the representation theory for $G$ and the study of the cohomology modules $H^{i}(E)$. A major role in this study is played by the line bundles, i.e. the case where $E$ is a $1$-dimensional $B$-module. As will become clear there have been interactions in both directions: results in representation theory have been obtained from investigating the cohomology of line bundles on $G/B$, and representation theoretic results have influenced the computations of cohomology modules. Key examples are Borel-Weil-Bott theory, the strong linkage principle, Kempf’s vanishing theorem, Frobenius splitting, and Jantzen type sum formulas. We begin our account with the Borel-Weil-Bott theory dating back to the early 1950’s and follow up by giving some of the important developments - especially in modular representation theory - since then. Of course we can only present a tiny bit of the total work done over this 70-year period. Moreover, our choices are highly biased towards our own interests and our own contributions. Fortunately, there already exist several surveys and books in the area with broader perspectives.
In particular, the well known book by Jantzen, [43], gives an extensive account of many developments in the field (up until its publication in 2003). In our treatment here we have included some proofs and given references for the remaining ones. The proofs we give will often contain simplifications of the original proofs. In other cases we have added further details as we have found appropriate. The paper also contains a number of remarks, which we hope put the developments into perspective. And we mention several problems, which are still open and waiting to be explored.

A famous underlying challenge, namely the problem of determining the irreducible characters for $G$ in characteristic $p>0$, has been a key motivating factor and driving force behind a lot of the work we discuss in this survey. However, we have not included anything on the marvelous recent breakthrough on this problem (and on the related, a priori even harder looking, problem of determining the characters of all indecomposable tilting modules for $G$). For this story we refer to [1] and [53], cf. also the further references in these papers.

A look at the list of publications by J.E. Humphreys reveals that he - over the major part of the period we are treating - had a keen interest in exploring the relations between representation theory and cohomology of line bundles on $G/B$. His very last mathematical note, [40], concerned these topics. In fact, Jim’s questions, results, suggestions, and conjectures over the years have strongly influenced a lot of the work done in this area. This certainly includes my own work, beginning when I was a graduate student at MIT in 1975-77. I’m very happy to dedicate this paper to his memory.

## 2\. The Borel-Weil and Borel-Weil-Bott theorems

The original formulations of the two theorems discussed in this section concerned compact complex Lie groups. In our formulation we have stated the results for connected reductive algebraic groups $G$ (using the notation as in the introduction). As we shall point out more explicitly below, the theorems are strictly characteristic $0$ results.

We shall need a little more notation. Choose a maximal torus $T$ contained in $B$ and let $X=X(T)$ denote the character group for $T$. If $\lambda\in X$ we extend it to $B$ by letting it be trivial on the unipotent radical $U\subset B$. Inside $X$ we have the root system $R$ for $(G,T)$ and we choose the set of positive roots $R^{+}$ to be those of the Borel subgroup opposite to $B$. The set of simple roots in $R^{+}$ we denote by $S$, and the set of dominant weights $X^{+}$ is the set of $\lambda\in X$ for which $\langle\lambda,\alpha^{\vee}\rangle\geq 0$ for all $\alpha\in S$. The Weyl group $N_{G}(T)/T$ for $G$ is denoted $W$. It acts naturally on $X$. Both here and in the rest of this paper we will also use the “dot”-action of $W$ on $X$ given by $w\cdot\lambda=w(\lambda+\rho)-\rho,\lambda\in X,w\in W.$ Here $\rho$ is half the sum of the positive roots. For convenience we assume $\rho\in X$. If $M$ is a $T$-module and $\lambda\in X$ we define the $\lambda$-weight space in $M$ by $M_{\lambda}=\\{m\in M\mid tm=\lambda(t)m,\;t\in T\\}.$ We say that $\lambda$ is a weight of $M$ if $M_{\lambda}\neq 0$. The dimension of $M_{\lambda}$ is called the multiplicity of $\lambda$ as a weight of $M$. Finally, the length function on $W$ corresponding to the set of simple roots in $R^{+}$ is denoted $\ell$.

### 2.1. The Borel-Weil theorem

###### Theorem 2.1.
(The Borel-Weil theorem, [54]) Assume $k$ has characteristic $0$. Then for each $\lambda\in X^{+}$ the $G$-module $H^{0}(\lambda)$ is irreducible. Moreover, any finite dimensional irreducible $G$-module is isomorphic to $H^{0}(\lambda)$ for a unique $\lambda\in X^{+}$.

###### Remark 2.2.

Suppose $\mathrm{char}(k)=p>0$. Then the analogue of Theorem 2.1 is false already for $G=SL_{2}$. In fact, in that case $X^{+}={\mathbb{Z}}_{\geq 0}$ and the module $H^{0}(\lambda)$ is only irreducible for very special values of $\lambda$, namely $\lambda=ap^{n}-1$ for some $0<a<p$ and $n\geq 0$. However, in all characteristics we have the following classification of the finite dimensional irreducible $G$-modules. We use the notation $w_{0}$ for the longest element in $W$.

###### Theorem 2.3.

(Chevalley, see [22]) For each $\lambda\in X^{+}$ the $G$-module $H^{0}(\lambda)$ has a unique irreducible submodule $L(\lambda)$, and if $L$ is an arbitrary finite dimensional irreducible $G$-module then $L$ is isomorphic to $L(\lambda)$ for a unique $\lambda\in X^{+}$. Moreover, all weights $\mu$ of $H^{0}(\lambda)$, respectively $L(\lambda)$, satisfy $w_{0}\lambda\leq\mu\leq\lambda$, and the weight $\lambda$ occurs with multiplicity $1$.

### 2.2. The Borel-Weil-Bott theorem

We shall need the fact that for any $\lambda\in X$ the intersection $(W\cdot\lambda)\cap X^{+}$ is either empty or equal to $\lambda^{+}$ for some unique $\lambda^{+}\in X^{+}$. In the first case we say that $\lambda$ is singular, and in the second that $\lambda$ is regular. We can now formulate the Borel-Weil-Bott theorem (or sometimes just Bott’s theorem) as follows:

###### Theorem 2.4.

(Bott 1957, [18]). Assume that $k$ has characteristic $0$. For $\lambda\in X$ we have $H^{i}(\lambda)=\begin{cases}H^{0}(\lambda^{+})&\text{when $\lambda=w\cdot\lambda^{+}$ for some $w\in W$ and $i=\ell(w)$,}\\\\ 0&\text{otherwise.}\end{cases}$

As indicated this result was obtained by R. Bott in 1957. Later M. Demazure gave first a simple proof [25] and then a very simple proof [27] of Bott’s theorem. An alternative very simple proof can be found in [3], Remark 3.3.

We like to describe Bott’s theorem as containing two parts. Firstly, it describes completely the vanishing behavior of all homogeneous line bundles on $G/B$: those corresponding to regular characters have exactly one non-vanishing cohomology group, and all cohomology vanishes for the line bundles associated to singular characters. Secondly, it proves that each non-vanishing cohomology module is irreducible, and it singles out its highest weight.

In characteristic $p>0$ the first part holds for $SL_{2}$ but for no simple group of higher rank. D. Mumford was the first to find an example of a line bundle with more than one non-vanishing cohomology module. His example was a line bundle on the $3$-dimensional flag variety $SL_{3}/B$. For this flag variety his student L. Griffith later gave a complete description of the vanishing behavior of line bundles in his thesis, cf. [31]. As observed above, the second part of Bott’s theorem fails already for $SL_{2}$ (and hence for all other groups as well). This presents us with two problems:

###### Problem 2.5.

Describe the vanishing behavior for the cohomology of all line bundles on $G/B$.

###### Problem 2.6.

Describe the $G$-module structures of the cohomology modules of all line bundles on $G/B$.

In the following sections we shall give some partial answers to these two problems. As will become clear, complete answers still seem very much out of reach.
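To illustrate in the smallest case what Problem 2.6 is asking, consider the following standard example, which uses only Theorem 2.3 and Steinberg's tensor product theorem recalled in Section 4.1 below. Take $G=SL_{2}$ and identify $X$ with ${\mathbb{Z}}$ so that $X^{+}={\mathbb{Z}}_{\geq 0}$, and let $\lambda=p$. Then $H^{0}(p)$ has dimension $p+1$, while $L(p)\simeq L(1)^{(1)}$ has dimension $2$ by Steinberg's tensor product theorem (see (4.4) below). Since $L(p)$ is the unique irreducible submodule of $H^{0}(p)$ we get a non-split short exact sequence $0\rightarrow L(p)\rightarrow H^{0}(p)\rightarrow L(p-2)\rightarrow 0,$ so that $\mathrm{ch}H^{0}(p)=\mathrm{ch}L(p)+\mathrm{ch}L(p-2)$. Already this smallest example shows that in characteristic $p$ the modules $H^{i}(\lambda)$ need not be irreducible, so that determining their $G$-structure is a genuine problem.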
However, the most important subproblem, namely the problem of determining the composition factors of $H^{0}(\lambda)$ for all $\lambda\in X^{+}$, has now been settled (in the sense that the composition factor multiplicities of $H^{0}(\lambda)$ are expressed in terms of so-called $p$-Kazhdan-Lusztig polynomials) by the recent breakthrough by Riche and Williamson [1], [53] (for primes less than $2h-2$ see also [56]).

Even though we have separated the problem of determining the cohomology modules $H^{i}(\lambda)$, $i\in{\mathbb{Z}}_{\geq 0}$ and $\lambda\in X$, into the two individual problems, Problems 2.5 and 2.6, it will hopefully be clear from the following sections that they are very much interrelated, and we strongly advocate exploring them together. For instance, my proof of the strong linkage principle (see Section 5.1 below) only succeeded when I decided to explore the structure of the cohomology modules without knowing their vanishing behavior.

## 3\. Kempf’s vanishing theorem

An especially important part of Problem 2.5 is to describe the cohomology of dominant line bundles, i.e. line bundles induced by dominant characters. Note that in characteristic zero Bott’s theorem gives that all higher cohomology of such line bundles vanishes. Moreover, it is easy to see that if $\lambda$ is strictly dominant, i.e. $\lambda-\rho\in X^{+}$, then $\mathcal{L}(\lambda)$ is an ample line bundle on $G/B$. Hence for such a character we have in all characteristics that $H^{i}(n\lambda)=0$ for $i>0$ whenever $n\gg 0$. This led to the expectation that the vanishing of the higher cohomology of all line bundles with dominant weights would hold in all characteristics. This turned out to be true:

###### Theorem 3.1.

(Kempf’s vanishing theorem, [46]) $\text{If }\lambda\in X^{+}\text{ then for all }p\text{ we have }H^{i}(\lambda)=0\text{ for all }i>0.$

###### Remark 3.2.

1. (1) The importance of this theorem in modular representation theory is tremendous. First of all, it immediately gives that for $\lambda\in X^{+}$ the character of $H^{0}(\lambda)$ is independent of the characteristic and hence given by the Weyl formula. Using Serre duality this implies that $H^{N}(w_{0}\cdot\lambda)$ is the Weyl module with highest weight $\lambda$. It also reveals that we have ${\mathbb{Z}}$-lattices, i.e. there is a module $H^{0}_{\mathbb{Z}}(\lambda)$, respectively $H^{N}_{\mathbb{Z}}(w_{0}\cdot\lambda)$, for the Chevalley group over ${\mathbb{Z}}$ corresponding to $G$, which is free over ${\mathbb{Z}}$ and has $H^{0}_{\mathbb{Z}}(\lambda)\otimes_{\mathbb{Z}}k\simeq H^{0}(\lambda)$, respectively $H^{N}_{\mathbb{Z}}(w_{0}\cdot\lambda)\otimes_{\mathbb{Z}}k\simeq H^{N}(w_{0}\cdot\lambda)$. In particular, we see that to determine the irreducible characters $\mathrm{ch}L(\lambda)$, $\lambda\in X^{+}$, is equivalent to finding the composition factor multiplicities of all Weyl modules. All approaches to finding the irreducible characters in characteristic $p$ rely on this fact.

2. (2) (on the proof of Kempf’s theorem) G. Kempf first proved this result for $G=SL_{n}$ for all $n$, [45]. Then Lakshmi Bai, C. Musili and C.S. Seshadri extended the result to other classical groups, [47], before Kempf came up with his general proof. The methods of proof in these papers are all algebraic geometric, involving a close analysis of the restrictions of $\mathcal{L}(\lambda)$ to certain Schubert varieties in $G/B$ and $G/P$, $P$ a parabolic subgroup containing $B$.
As a side benefit this also gave important vanishing theorems for the higher cohomology of the restrictions of dominant line bundles to various Schubert varieties.

3. (3) (simple proofs and further work) A few years after Kempf’s general proof the author, see [6] (or Theorem 4.2 below), and independently W. Haboush [33], came up with a much shorter proof of Theorem 3.1. These proofs combine (well known) algebraic geometric facts - in particular the vanishing of higher cohomology of sufficiently high powers of ample line bundles, and the properties of the Frobenius morphism in prime characteristic - with (equally well known) representation theoretic facts, especially the existence and properties of the Steinberg modules (see below). The author realized a few years later [10] that a slight twist of the arguments used in [6] could be used to extend the vanishing result to all Schubert varieties in $G/B$. As corollaries this proved, on the algebraic geometric side, the normality of Schubert varieties and, on the representation theoretic side, the Demazure character formula. Results in the same directions had earlier been explored by M. Demazure, [26] in characteristic $0$, and by V. Lakshmibai, C. Musili and C.S. Seshadri, [47] in general. In particular, it should be mentioned that C.S. Seshadri was the first to prove the normality of all Schubert varieties in all characteristics, see [55].

4. (4) Later, the Frobenius splitting method - invented and developed by Mehta and Ramanathan, [52] - became a “big industry” with a large number of applications in both algebraic geometry and in representation theory. We do not report further on this line of research here but refer instead the reader to the book [20].

## 4\. On the vanishing behavior, Problem 2.5

In this section we try to describe what is known and what is still not known about the vanishing of the cohomology modules $H^{i}(\lambda)$, $i\in{\mathbb{Z}}_{\geq 0},\lambda\in X$. Throughout we assume (unless otherwise said) that the characteristic of $k$ is $p>0$.

### 4.1. Preliminaries

In the following $N$ will denote the number of positive roots. Alternatively, $N$ is the dimension of the flag manifold $G/B$. By Grothendieck vanishing [32], Theorem 3.6.5 we have

(4.1) $H^{i}(\lambda)=0\text{ for all }\lambda\in X\text{ when }i>N.$

This theorem implies that for each $\lambda\in X$ we have only finitely many cohomology modules to worry about. We can further “cut the vanishing problem in half” by using Serre duality:

(4.2) $H^{i}(\lambda)^{*}\simeq H^{N-i}(-\lambda-2\rho)\text{ for all }0\leq i\leq N,\lambda\in X.$

Here, if $M$ is a $G$-module, we use the notation $M^{*}$ for the contragredient dual module. The isomorphism in (4.2) is an isomorphism of $G$-modules. For later use we record also the following consequence

(4.3) $\mathrm{Hom}_{G}(H^{N}(w_{0}\cdot\lambda),H^{0}(\lambda))\simeq k\text{ for all }\lambda\in X^{+}.$

In fact, by (4.2) we have that $H^{N}(w_{0}\cdot\lambda)\simeq H^{0}(-w_{0}\lambda)^{*}$. Hence by Theorem 2.3 $\lambda$ is the highest weight of $H^{N}(w_{0}\cdot\lambda)$, and it occurs with multiplicity $1$.

###### Remark 4.1.

While we know of no representation theoretic proof of the Grothendieck vanishing theorem (4.1), the Serre duality can be obtained “purely representation-theoretically” (and this proof generalizes to the quantum case), see [17], Section 3.2.

Let $n\geq 0$. The $n$’th Steinberg module is $St_{n}$. This is the irreducible $G$-module with highest weight $(p^{n}-1)\rho$, i.e.
$St_{n}=L((p^{n}-1)\rho).$

We denote by $F:G\rightarrow G$ the Frobenius homomorphism on the group scheme $G$. Its kernel is an infinitesimal group scheme denoted $G_{1}$. We define more generally $G_{n}$ to be the kernel of $F^{n}$. If $V$ is an arbitrary $G$-module then $V^{(n)}$ denotes the $n$’th Frobenius twist of $V$. This means that $V^{(n)}$ as a vector space is identical to $V$ but its group action is the composite $G\xrightarrow{F^{n}}G\rightarrow GL(V)$. We also define “untwist” of twisted modules: If $M=V^{(n)}$, i.e. if the restriction to $G_{n}$ of the $G$-action on $M$ is trivial, then we shall write $V=M^{(-n)}$.

A special case of Steinberg’s tensor product theorem, [57], says

(4.4) $St_{n}=St_{1}\otimes St_{1}^{(1)}\otimes\cdots\otimes St_{1}^{(n-1)}.$

This implies that the Steinberg module is a (dual) Weyl module: $St_{n}=H^{0}((p^{n}-1)\rho)\text{ for all }n.$ Note that by (4.4) this statement reduces to the case $n=1$. In that case it follows from the strong linkage principle, [4] (or see Theorem 5.1 below).

Following [14] we shall set $D_{p}(i)=\\{\lambda\in X\mid H^{i}(\lambda)\neq 0\\},\;i\geq 0$ (here we allow $p=0$). Finally, we shall find it convenient to use the following notation $p^{n}\cdot\lambda=p^{n}(\lambda+\rho)-\rho,\;n\geq 0,\,\lambda\in X,$ and $X_{n}=\\{\lambda\in X^{+}\mid\langle\lambda,\alpha^{\vee}\rangle<p^{n}\text{ for all simple roots }\alpha\\}.$ The elements in $X_{n}$ are called the $p^{n}$-restricted weights.

### 4.2. The Frobenius-Steinberg theorem

Using the above notation we have

###### Theorem 4.2.

(The Frobenius-Steinberg theorem, [6], Theorem 2.5). Let $\lambda\in X$. Then we have $G$-module isomorphisms $H^{i}(p^{n}\cdot\lambda)\simeq St_{n}\otimes H^{i}(\lambda)^{(n)}$ for all $i,n\geq 0$.

Note that here we have stated this theorem only for line bundles. It actually holds for all vector bundles on $G/B$ (and is stated and proved in this generality in [6]). Let us point out that the proof follows easily from the following standard facts (about induction to and from the group scheme $G_{n}B$, respectively about the Steinberg modules).

###### Proof.

Let $E$ be a $B$-module. Then clearly $E^{(n)}$ is a $G_{n}B$-module and we have

(4.5) $H^{i}(G_{n}B/B,E)=0\text{ for all }i>0,$

and

(4.6) $H^{i}(E)^{(n)}\simeq H^{i}(G/G_{n}B,E^{(n)})\text{ for all }i\geq 0.$

Let $\lambda\in X$ and write $\lambda=\lambda^{0}+p^{n}\lambda^{1}$ with $\lambda^{0}\in X_{n}$ and $\lambda^{1}\in X$. Then we have an isomorphism of $G_{n}B$-modules

(4.7) $H^{0}(G_{n}B/B,\lambda)\simeq H^{0}(G_{n}B/B,\lambda^{0})\otimes p^{n}\lambda^{1}.$

Moreover, in the special case $\lambda=(p^{n}-1)\rho$ we have a $G_{n}B$-isomorphism

(4.8) $H^{0}(G_{n}B/B,(p^{n}-1)\rho)\simeq{St_{n}}_{|_{G_{n}B}}.$

To obtain the Frobenius-Steinberg theorem (in the form stated above) from these facts we first use (4.5) to see that $H^{i}(p^{n}\cdot\lambda)=H^{i}((p^{n}-1)\rho+p^{n}\lambda)\simeq H^{i}(G/G_{n}B,H^{0}(G_{n}B/B,(p^{n}-1)\rho+p^{n}\lambda)).$ Here according to (4.7) and (4.8) the term $H^{0}(G_{n}B/B,(p^{n}-1)\rho+p^{n}\lambda)$ is isomorphic as $G_{n}B$-module to $St_{n}\otimes p^{n}\lambda$. So using this and the (generalized) tensor identity [43], Proposition I.3.6 we get $H^{i}(G/G_{n}B,H^{0}(G_{n}B/B,(p^{n}-1)\rho+p^{n}\lambda))\simeq St_{n}\otimes H^{i}(G/G_{n}B,p^{n}\lambda).$ We conclude by applying (4.6). ∎
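Before turning to the consequences, here is a small consistency check of Theorem 4.2, using only the identification $St_{m}=H^{0}((p^{m}-1)\rho)$ recorded above. Taking $\lambda=(p^{m}-1)\rho$ we have $p^{n}\cdot(p^{m}-1)\rho=(p^{n+m}-1)\rho$, so the theorem (with $i=0$) gives $St_{n+m}=H^{0}((p^{n+m}-1)\rho)\simeq St_{n}\otimes H^{0}((p^{m}-1)\rho)^{(n)}=St_{n}\otimes St_{m}^{(n)},$ in agreement with Steinberg’s tensor product formula (4.4).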
The first and foremost consequence of Theorem 4.2 is the Kempf vanishing theorem for the higher cohomology of dominant line bundles, see Remark 3.2 (2). However, the result certainly has important applications also for non-dominant line bundles, including the vanishing behavior of their cohomology. The most evident application is the following

###### Corollary 4.3.

Let $i,n\geq 0$ and $\lambda\in X$. Then $\lambda\in D_{p}(i)$ if and only if $p^{n}\cdot\lambda\in D_{p}(i)$.

With a little more effort we can prove, see [6], Proposition 3.3 (using the same notation as in Corollary 4.3)

(4.9) $\lambda\in D_{p}(i)\text{ if and only if }p^{n}\cdot\lambda- X_{n}\subset D_{p}(i).$

The corollary says that $D_{p}(i)$ is stable under dot multiplication by $p^{n}$. As $(p^{n}-1)\rho\in X_{n}$ we see from (4.9) that $D_{p}(i)$ is also stable under usual multiplication by $p^{n}$. This last result is also a consequence of the following result.

###### Corollary 4.4.

([6], Corollary 2.7) Let $\lambda\in X$. Then the Frobenius homomorphism on $G$ induces injective homomorphisms $H^{i}(\lambda)^{(n)}\rightarrow H^{i}(p^{n}\lambda)$ for all $i,n\geq 0$.

###### Remark 4.5.

This corollary holds more generally for all vector bundles on $G/B$. It was conjectured by Cline, Parshall and Scott. I must admit that I didn’t believe their conjecture when CPS mentioned it to me at an Oberwolfach meeting in April 1979, but my efforts to find a counterexample resulted in the discovery of the Frobenius-Steinberg theorem above. Thus I ended up proving the conjecture instead!

Combining Corollary 4.4 with Serre duality gives the analogous statement:

(4.10) $\lambda\in D_{p}(i)\text{ if and only if }p^{n}\cdot\lambda+X_{n}\subset D_{p}(i).$

Now we note that by semi-continuity (alternatively use the universal coefficient theorem, Theorem 5.10) we have inclusions $D_{0}(i)\subset D_{p}(i)\text{ for all }i\geq 0.$ By Bott’s theorem we know $D_{0}(i)=\bigcup_{w\in W,\ell(w)=i}w\cdot X^{+}.$ Combining these two facts with (4.9) and (4.10) we obtain the following result.

###### Proposition 4.6.

For all $i\geq 0$ we have $\bigcup_{n\geq 0,\;\ell(w)=i}(p^{n}\cdot w\cdot X^{+}\pm X_{n})\subset D_{p}(i).$

###### Remark 4.7.

1. (1) It is well known that $D_{p}(0)=X^{+}$ for all $p\geq 0$. By Serre duality we then also have $D_{p}(N)=X^{-}$ where the set of antidominant weights $X^{-}$ is defined by $X^{-}=\\{\lambda\in X\mid-\lambda-2\rho\in X^{+}\\}=w_{0}\cdot X^{+}$. It follows that the inclusion in Proposition 4.6 is an equality when $i=0$ and $i=N$.

2. (2) It will follow from the results in the next section that we have equality in Proposition 4.6 also for $i=1$ and $i=N-1$. However, the results in [8] and [15] show that equality does not hold in general. In fact, we expect it to fail for all $1<i<N-1$ (by the computations in [8], Section 5, and [15], it does so for types $B_{2}$ and $G_{2}$). Nevertheless, Proposition 4.6 contains the best approximation to the (non-)vanishing behavior of the cohomology of line bundles known up to this date.

### 4.3. Non-vanishing of the first cohomology module

We shall now describe those $\lambda\in X$ for which $H^{1}(\lambda)\neq 0$. In other words we determine the set $D_{p}(1)$ from Section 4.2. This result was obtained in [3], Theorem 3.6.a. It thus predates the Frobenius-Steinberg theorem above.
Its proof is based on a detailed examination of the $B$-module structure of $H^{1}(P_{\alpha}/B,\mathcal{L}(\lambda))$, where $\alpha$ is a simple root and $P_{\alpha}$ is the minimal parabolic subgroup containing $B$ corresponding to $\alpha$. The proof also gives information about the $G$-module structures of the non-zero $H^{1}(\lambda)$, see Section 5.3.

###### Theorem 4.8.

Let $\lambda\in X$. Then $H^{1}(\lambda)\neq 0$ if and only if $\lambda\in\bigcup_{\alpha\in S,\;n\geq 0}(p^{n}\cdot s_{\alpha}\cdot X^{+}-X_{n})$.

###### Remark 4.9.

1. (1) The formulation of the non-vanishing criterion above is somewhat different from the way it is stated in [3]. However, it is easy to check that it is equivalent. We have chosen the alternative formulation in order to stay close to the formulation in Section 4.2. The theorem says that for $i=1$ we have equality in Proposition 4.6 (note that in the $i=1$ case we can replace the $\pm$ sign in this proposition by the minus sign alone because $p^{n}\cdot s_{\alpha}\cdot\lambda+X_{n}\subset s_{\alpha}\cdot X^{+}$ for all $\lambda\in X^{+}$).

2. (2) This theorem represents the first progress on describing the vanishing behavior of cohomology of non-dominant line bundles on $G/B$ for a general reductive algebraic group. The only known previous result on this question was Griffith’s treatment of the case $G=SL_{3}$, [31]. Except for Theorem 4.8, the consequences of the Frobenius-Steinberg theorem presented in Section 4.2, and the computations in the rank $2$ cases, [8] and [15], Problem 2.5 is still wide open for the cohomology modules $H^{i}(\lambda)$ with $1<i<N-1$ and $\lambda\notin X^{+}\cup X^{-}$.

The first ingredient in the proof of Theorem 4.8 is the Kempf vanishing theorem. It implies that if $\lambda\in D_{p}(1)$ then there exists a simple root $\alpha$ with $\langle\lambda,\alpha^{\vee}\rangle<0$. This implies that the restriction of $\mathcal{L}(\lambda)$ to $P_{\alpha}/B\simeq{\mathbb{P}}^{1}$ has no cohomology in degrees different from $1$. Therefore, we get $H^{1}(\lambda)\simeq H^{0}(H^{1}(P_{\alpha}/B,\mathcal{L}(\lambda)))$. The proof now comes down to a close inspection of the $B$-module structure of $H^{1}(P_{\alpha}/B,\mathcal{L}(\lambda))$. For details we refer to the proof in [3].

Using Serre duality Theorem 4.8 is equivalent to

###### Corollary 4.10.

Let $\lambda\in X$. Then $H^{N-1}(\lambda)\neq 0$ if and only if $\lambda\in\bigcup_{\alpha\in S,\;n\geq 0}(p^{n}\cdot s_{\alpha}\cdot X^{-}+X_{n})$.

Again this is equivalent to saying that for $i=N-1$ we have equality in Proposition 4.6.

## 5\. On the $G$-module structure. Problem 2.6

Bott’s theorem is a beautiful and complete description of all cohomology modules of line bundles on $G/B$. It gives a full account of both their vanishing behavior and of their $G$-structure. In a sense it is at the same time a disappointment: the higher cohomology modules for non-dominant line bundles present nothing new. In fact, in characteristic zero any non-vanishing higher cohomology module is isomorphic to the (simple) $0$’th cohomology module of the corresponding $W$-dot-conjugated dominant line bundle.

In characteristic $p$ there is much more excitement. As we shall demonstrate in this section, it is for any $p>0$ and any non-dominant $\mu\in X$ a rare exception for $H^{i}(\mu)$ to be isomorphic to some $H^{0}(\lambda)$ with $\lambda$ dominant. Moreover, when $1<i<N-1$ it is not always true that $H^{i}(\mu)$ has simple socle or head.
In fact, there are examples where it turns out that $H^{i}(\mu)$ is decomposable. In other words, just as Sections 4.2-3 revealed that the vanishing behavior is much more intricate in positive characteristic than in characteristic zero, so we shall see in this section that the same is true of the $G$-module structures.

Despite this somewhat mysterious and to a large extent still unknown behavior of the $G$-modules $H^{i}(\mu)$, we shall start out by proving that we can take good advantage of these higher cohomology modules to obtain not only some definite results about them, but in fact also some general results in representation theory.

### 5.1. The strong linkage principle

Let $M$ be a finite dimensional $G$-module. For $\mu\in X^{+}$ we denote by $[M:L(\mu)]$ the composition factor multiplicity of $L(\mu)$ in $M$. Then we can state the strong linkage principle as follows.

###### Theorem 5.1.

(The strong linkage principle, [4]) Let $\lambda\in X$ and $\mu\in X^{+}$. If $[H^{i}(\lambda):L(\mu)]\neq 0$ for some $i\geq 0$, then $\mu\uparrow\lambda^{+}$.

Here $\lambda^{+}$ is the unique element in $W\cdot\lambda\cap(X^{+}-\rho)$. Moreover, we have used the notation $\mu\uparrow\lambda^{+}$ from Jantzen’s book, [43], II.6 to mean that $\mu$ is strongly linked to $\lambda^{+}$, i.e. there exists a sequence $\mu=\mu_{0}\leq\mu_{1}\leq\cdots\leq\mu_{r}=\lambda^{+}$ in $X$ with $\mu_{j-1}=s_{\beta_{j}}\cdot\mu_{j}+n_{j}p\beta_{j}$ for some $\beta_{j}\in R^{+}$ and $n_{j}\in{\mathbb{Z}}$, $j=1,2,\cdots,r$.

Before we give the proof of this theorem we shall say a little about its background. A linkage principle for modular representations of reductive algebraic groups was formulated by D.-N. Verma in [58]. It says that two composition factors $L(\lambda)$ and $L(\mu)$ of an indecomposable module for $G$ always satisfy $\mu\in W_{p}\cdot\lambda$. Here $W_{p}$ denotes the affine Weyl group for $G$, i.e. the group generated by the affine reflections $s_{\beta,r}$ defined by $s_{\beta,r}\cdot\nu=s_{\beta}\cdot\nu+rp\beta,\;\nu\in X,$ where $\beta$ runs through all positive roots, and $r$ through all integers. This linkage principle was proved by J.E. Humphreys for $p>h$, see [34]. V. Kac and B. Weisfeiler improved this result by showing that the linkage principle holds for all $p$ not dividing the index of connection for $R$, see [44]. Also the work of R. Carter and G. Lusztig, [21], implies the principle for type $A_{n}$ for all $p$. We refer to [35], §3 for more details on the background of the linkage principle.

Actually, D.-N. Verma suggested in [58], Conjecture II p. 689, that a stronger principle should hold for Weyl modules. This was proved to be the case by Jantzen for $p\geq h$, first for type $A_{n}$ in [41] and then for arbitrary types in [42]. I named this principle (for all cohomology modules of line bundles on $G/B$) the strong linkage principle and proved it for all $p$ in [4]. The linkage principle is a consequence of the strong linkage principle for Weyl modules, see Corollary 5.7 below.

J.E. Humphreys asked me some time in the late $1970$’s whether the strong linkage principle could be extended to the higher cohomology groups of line bundles. In [2] I had proved this for the cohomology modules of certain very special line bundles, namely those belonging to $p$-alcoves containing $-\rho$ in their closures (for such line bundles I proved in fact that Bott’s theorem holds). In [3] we proved that whenever $H^{1}(\lambda)\neq 0$ it has a simple socle.
This implies that at least the (weak) linkage principle holds for the first cohomology modules. In the fall of 1979 I then tried to extend the linkage principles (weak or strong) to all the higher cohomology modules. The idea was to use various long exact sequences of such higher cohomology modules to deduce the principle from its validity for the $0$’th cohomology modules, i.e. the dual Weyl modules. Suddenly, I realized that not only would these sequences indeed give this extension, but by including the higher cohomology modules in the statement of the strong linkage principle, one could in fact give a short proof - valid for all $p$ and all cohomology - of the principle itself.

We begin with an easy lemma. If $\alpha\in S$ we denote by $P_{\alpha}$ the minimal parabolic subgroup in $G$ containing $B$ and having $\alpha$ as its unique positive root. If $\lambda\in X$ we write $H^{i}_{\alpha}(\lambda)=H^{i}(P_{\alpha}/B,\mathcal{L}(\lambda))$.

###### Lemma 5.2.

Let $\lambda\in X$, $\alpha\in S$, and assume $\langle\lambda+\rho,\alpha^{\vee}\rangle\geq 0$. Then there is for each $i\geq 0$ a natural $G$-homomorphism $c_{\alpha}^{i}(\lambda):H^{i+1}(s_{\alpha}\cdot\lambda)\rightarrow H^{i}(\lambda).$

###### Proof.

As $P_{\alpha}/B\simeq{\mathbb{P}}^{1}$ and $\langle\lambda,\alpha^{\vee}\rangle\geq-1$ we have $H^{j}_{\alpha}(\lambda)=0$ for all $j>0$. By Serre duality we have moreover $H^{1}_{\alpha}(s_{\alpha}\cdot\lambda)\simeq H^{0}_{\alpha}(-s_{\alpha}(\lambda))^{*}$, and $H^{j}_{\alpha}(s_{\alpha}\cdot\lambda)=0$ for all $j\neq 1$. It is then a matter of easy $SL_{2}$-computations to see that we have (up to scalars) a unique $P_{\alpha}$-homomorphism $H^{1}_{\alpha}(s_{\alpha}\cdot\lambda)\rightarrow H^{0}_{\alpha}(\lambda)$. Via the Leray spectral sequence coming from the $P_{\alpha}/B\simeq{\mathbb{P}}^{1}$-fibration $G/B\rightarrow G/P_{\alpha}$ we get that this homomorphism induces corresponding natural homomorphisms $H^{i+1}(s_{\alpha}\cdot\lambda)\rightarrow H^{i}(\lambda)$ for all $i\geq 0$. ∎

###### Remark 5.3.

Let $\langle\lambda,\alpha^{\vee}\rangle=-1$. The above proof shows in particular that for such $\lambda$ we have $H^{j}(\lambda)=0$ for all $j$. In fact, we get more generally for any $P_{\alpha}$-module $V$ that $H^{j}(V\otimes\lambda)=0$ for all $j$ (as the vector bundle on $P_{\alpha}/B$ induced by $V$ is trivial).

We shall now need $2$ long exact sequences involving our cohomology modules. They arise from $4$ short exact sequences of $B$-modules. Assume $\lambda\in X$ and $\alpha\in S$ satisfy $\langle\lambda,\alpha^{\vee}\rangle\geq 0$. Then the first two $B$-sequences are (recall that if $\mu\in X$ we also denote by $\mu$ the $1$-dimensional $B$-module defined by this character)

(5.1) $0\rightarrow K_{\alpha}^{\lambda}\rightarrow H^{0}_{\alpha}(\lambda+\rho)\otimes(-\rho)\rightarrow\lambda\rightarrow 0,$

and

(5.2) $0\rightarrow s_{\alpha}\cdot\lambda\rightarrow K_{\alpha}^{\lambda}\rightarrow V_{\alpha}^{\lambda}\rightarrow 0.$

These sequences are obtained by noticing that $\lambda$, respectively $s_{\alpha}\cdot\lambda$, is the highest, respectively lowest, weight of $H^{0}_{\alpha}(\lambda+\rho)\otimes(-\rho)$. This gives the surjection onto $\lambda$ in (5.1), and we then define $K_{\alpha}^{\lambda}$ to be the kernel of this surjection. Likewise we get the injection in (5.2) and define $V_{\alpha}^{\lambda}$ to be the corresponding cokernel. We now note that by Remark 5.3 the middle term in (5.1) has vanishing cohomology in all degrees.
Hence we get $H^{j}(\lambda)\simeq H^{j+1}(K_{\alpha}^{\lambda})$ for all $j$. Inserting this in the long exact sequence coming from (5.2) we get the long exact $G$-sequence (5.3) $\cdots\rightarrow H^{j+1}(s_{\alpha}\cdot\lambda)\rightarrow H^{j}(\lambda)\rightarrow H^{j+1}(V_{\alpha}^{\lambda})\rightarrow\cdots$ In this sequence the maps $H^{j+1}(s_{\alpha}\cdot\lambda)\rightarrow H^{j}(\lambda)$ are up to non-zero scalars equal to the homomorphisms $c_{\alpha}^{j}(\lambda)$ from Lemma 5.2. To further explore the terms $H^{j+1}(V_{\alpha}^{\lambda})$ in the above sequence we shall need the following $2$ short exact $B$-sequences (5.4) $0\rightarrow C_{\alpha}^{\lambda}\rightarrow V_{\alpha}^{\lambda}\rightarrow I_{\alpha}^{\lambda}\rightarrow 0,$ and (5.5) $0\rightarrow I_{\alpha}^{\lambda}\rightarrow H^{0}_{\alpha}(\lambda+\rho-\alpha)\otimes(-\rho)\rightarrow Q_{\alpha}^{\lambda}\rightarrow 0.$ These $2$ sequences come from the existence of a $B$-homomorphism (an easy $SL_{2}$-computation) $V_{\alpha}^{\lambda}\rightarrow H^{0}_{\alpha}(\lambda+\rho-\alpha)\otimes(-\rho)$ by setting $C_{\alpha}^{\lambda}$, respectively $I_{\alpha}^{\lambda}$, respectively $Q_{\alpha}^{\lambda}$ equal to the kernel, respectively image, respectively cokernel, of this map. ###### Remark 5.4. In characteristic $0$ we have $V_{\alpha}^{\lambda}\simeq H^{0}_{\alpha}(\lambda+\rho-\alpha)\otimes(-\rho)$. This is a key observation in [27] leading Demazure to his very easy proof of Bott’s theorem. Its characteristic $p$ analogue plays likewise an important rôle in our proof here. The middle term in the sequence (5.5) has vanishing cohomology in all degrees (again by Remark 5.3). Arguing as above we then obtain the long exact $G$-sequence (5.6) $\cdots\rightarrow H^{j+1}(C_{\alpha}^{\lambda})\rightarrow H^{j+1}(V_{\alpha}^{\lambda})\rightarrow H^{j}(Q_{\alpha}^{\lambda})\rightarrow\cdots.$ Inspecting the homomorphism $V_{\alpha}^{\lambda}\rightarrow H^{0}_{\alpha}(\lambda+\rho-\alpha)\otimes(-\rho)$ a bit closer we see that the set of weights of both $C_{\alpha}^{\lambda}$ and $Q_{\alpha}^{\lambda}$ is $X_{\alpha}^{\lambda}=\\{s_{\alpha}\cdot\lambda+rp\alpha\mid 0<rp<\langle\lambda+\rho,\alpha^{\vee}\rangle\\}.$ This implies ###### Proposition 5.5. Let $\lambda$ and $\alpha$ be as above and pick $\mu\in X^{+}$. 1. (1) If $L(\mu)$ is a composition factor of either the kernel or the cokernel of $c_{\alpha}^{i}(\lambda)$ for some $i$, then $L(\mu)$ is a composition factor of $H^{j}(\nu)$ for some $j\geq 0$ and $\nu\in X_{\alpha}^{\lambda}$. 2. (2) If $\nu\in X_{\alpha}^{\lambda}$ then $\nu^{+}\uparrow\lambda^{+}$ and $\nu^{+}<\lambda^{+}$. ###### Proof. (1) comes by combining the sequences (5.3) and (5.6). (2) is a weight calculation, see [4], Lemma 5. ∎ We shall now prove the theorem by induction with respect to the strong linkage ordering on $X^{+}-\rho$. When $\lambda\in X^{+}-\rho$ we shall use the notation $SL(<\lambda)=\\{\mu\in X^{+}-\rho\mid\mu\uparrow\lambda\text{ and }\mu<\lambda\\}.$ Let $\lambda\in X^{+}-\rho$ and pick a reduced expression for $w_{0}$, $w_{0}=s_{N}s_{N-1}\cdots s_{1}$. 
Setting $\lambda_{i}=s_{i}s_{i-1}\cdots s_{1}\cdot\lambda$ we have $\lambda_{i}^{+}=\lambda$ for all $i$, and we get the following string of homomorphisms

(5.7) $H^{N}(\lambda_{N})\xrightarrow{c_{N}}H^{N-1}(\lambda_{N-1})\xrightarrow{c_{N-1}}\cdots\xrightarrow{c_{1}}H^{0}(\lambda_{0}).$

Here $c_{i}=c_{\alpha_{i}}^{i-1}(\lambda_{i-1})$ with $\alpha_{i}$ denoting the simple root corresponding to $s_{i}$ (note that $\langle\lambda_{i-1}+\rho,\alpha_{i}^{\vee}\rangle=\langle\lambda^{+}+\rho,s_{1}s_{2}\cdots s_{i-1}(\alpha_{i})^{\vee}\rangle\geq 0$ so that $c_{\alpha_{i}}^{i-1}(\lambda_{i-1})$ exists). Actually, we have such a string for all $j\in{\mathbb{Z}}$:

(5.8) $H^{N+j}(\lambda_{N})\xrightarrow{c_{N}^{j}}H^{N-1+j}(\lambda_{N-1})\xrightarrow{c_{N-1}^{j}}\cdots\xrightarrow{c_{1}^{j}}H^{j}(\lambda_{0}),$

where $c_{i}^{j}=c_{\alpha_{i}}^{i-1+j}(\lambda_{i-1})$. Here we use the convention that $H^{j}(\nu)=0$ when $j$ is negative. When $j$ is positive we notice that the first $j$ terms in (5.8) are $0$ by (4.1).

To start the induction assume that $\lambda$ is minimal in $X^{+}-\rho$ with respect to the strong linkage relation. This means that $SL(<\lambda)=\emptyset$. But then all $X_{\alpha_{i}}^{\lambda_{i-1}}$ are also empty by Proposition 5.5 (2), i.e. the kernels and cokernels of all $c_{i}^{j}$ are $0$. We conclude that in this case the string (5.7) consists of $N$ isomorphisms. This means that $H^{N}(\lambda_{N})\simeq H^{i}(\lambda_{i})\simeq H^{0}(\lambda_{0})$, i.e. $H^{i}(\lambda_{i})=L(\lambda)$ for all $i$. On the other hand, if $j\neq i$ then (5.8) shows that $H^{j}(\lambda_{i})=0$. In particular, the strong linkage principle certainly holds in this case.

Let now $\lambda\in X^{+}-\rho$ be arbitrary and suppose Theorem 5.1 holds for all $\mu\in SL(<\lambda)$. Suppose $L(\mu)$ is a composition factor of $H^{i+j}(\lambda_{i})$ for some $0\leq i\leq N$ and some $j\in{\mathbb{Z}}$. If $L(\mu)$ is a composition factor of one of the kernels or cokernels of one of the homomorphisms in the strings (5.7) and (5.8) we see from Proposition 5.5 (1) that $L(\mu)$ is a composition factor of $H^{t}(\nu)$ for some $t$ and some $\nu\in X_{\alpha_{r}}^{\lambda_{r-1}}$ with $0<r\leq N$. By (2) in the same proposition we have $\nu^{+}\in SL(<\lambda)$ for all such $\nu$, and by our induction hypothesis we get in this case $\mu\uparrow\nu^{+}\uparrow\lambda$. On the other hand, if $L(\mu)$ is not a composition factor of any of these kernels or cokernels then the string (5.8) shows that we cannot have $j\neq 0$, and the string (5.7) reveals that $L(\mu)$ must occur in the image of the composite $c_{1}\circ c_{2}\circ\cdots\circ c_{N}$. But this composite is a homomorphism from $H^{N}(w_{0}\cdot\lambda)$ to $H^{0}(\lambda)$. As $H^{0}(\lambda)$ has simple socle $L(\lambda)$ and (by Serre duality) $H^{N}(w_{0}\cdot\lambda)$ has simple head $L(\lambda)$, we conclude that this image equals $L(\lambda)$. Hence in this case $\mu=\lambda$.

Observe now that when we vary the reduced expression for $w_{0}$ we can for any $w\in W$ realize $w\cdot\lambda$ as one of the $\lambda_{i}$’s in our strings. As $W\cdot(X^{+}-\rho)=X$ we have thus proved Theorem 5.1.
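To make the strong linkage relation concrete in the smallest case, take $G=SL_{2}$ (identifying $X$ with ${\mathbb{Z}}$ as in Remark 2.2) and $\lambda=p$. Writing $\alpha$ for the unique simple root we have $s_{\alpha}\cdot p+p\alpha=p-2$, so $p-2\uparrow p$, and one checks easily that $p$ and $p-2$ are the only dominant weights $\mu\leq p$ with $\mu\uparrow p$. Theorem 5.1 therefore allows only $L(p)$ and $L(p-2)$ as composition factors of $H^{0}(p)$; comparing dimensions (as in the $SL_{2}$ example in Section 2) shows that both actually occur.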
The preprint was never published because it was shortly afterwards overtaken by [6], which requires no restrictions on $p$.

2. (2) Since the publication of [4] there have been some improvements and alternative proofs of the results, see e.g. [59], [60], and [29].

3. (3) It should also be mentioned that we have a close analogue of the strong linkage principle for quantum groups with parameter being a root of $1$, see [13].

### 5.2. Applications

In this section we deduce some consequences of the strong linkage principle and of its proof. The first is

###### Corollary 5.7.

(The linkage principle) Let $M$ be an indecomposable $G$-module. If $L(\lambda)$ and $L(\mu)$ are two composition factors of $M$ then $\mu\in W_{p}\cdot\lambda$.

###### Proof.

It is standard to reduce this corollary to the following vanishing result $\mathrm{Ext}^{1}_{G}(L(\mu),L(\lambda))=0\text{ unless }\mu\in W_{p}\cdot\lambda.$ To check this we may assume that $\mu\not>\lambda$. In fact, if $\mu>\lambda$ we use the duality $\mathrm{Ext}_{G}^{1}(L(\mu),L(\lambda))\simeq\mathrm{Ext}_{G}^{1}(L(\lambda)^{*},L(\mu)^{*})\simeq\mathrm{Ext}_{G}^{1}(L(-w_{0}\lambda),L(-w_{0}\mu))$ to bring us into the desired situation.

We claim that $\mathrm{Ext}_{G}^{1}(L(\mu),H^{0}(\lambda))=0$. Indeed, suppose we have an extension $0\rightarrow H^{0}(\lambda)\rightarrow E\rightarrow L(\mu)\rightarrow 0$. Our assumption implies that $\lambda$ is a maximal weight of $E$. Hence the universal property (also known as Frobenius reciprocity, [43] Proposition I.3.4) gives a splitting $E\rightarrow H^{0}(\lambda)$. Now the short exact sequence $0\rightarrow L(\lambda)\rightarrow H^{0}(\lambda)\rightarrow H^{0}(\lambda)/L(\lambda)\rightarrow 0$ gives a surjection $\mathrm{Hom}_{G}(L(\mu),H^{0}(\lambda)/L(\lambda))\rightarrow\mathrm{Ext}^{1}_{G}(L(\mu),L(\lambda))$. Hence if $\mathrm{Ext}^{1}_{G}(L(\mu),L(\lambda))\neq 0$ we deduce that $L(\mu)$ is a composition factor of $H^{0}(\lambda)$. By Theorem 5.1 this implies that $\mu$ is strongly linked to $\lambda$. In particular, $\mu\in W_{p}\cdot\lambda$. ∎

### 5.3. On the $G$-structure of $H^{1}(\lambda)$

The bound on the composition factors of cohomology modules of line bundles obtained in Theorem 5.1 was the first general result on the $G$-structure of $H^{i}(\lambda)$. It holds for all $G,i,\lambda$, and $p$. Slightly before this result we obtained, however, another general result on the $G$-module structure of $H^{1}(\lambda)$, valid for all $G,\lambda$ and $p$, namely the following (recall that we have already given the complete vanishing behavior of $H^{1}(\lambda)$ in Theorem 4.8).

###### Theorem 5.8.

([3], Theorem 3.5) All non-zero $H^{1}(\lambda)$ have simple socles.

###### Proof.

Here we offer a slightly simpler proof than the original one given in [3], Section 3. As observed in Section 4.3, if $H^{1}(\lambda)\neq 0$ then there exists a simple root $\alpha$ such that $\langle\lambda,\alpha^{\vee}\rangle<-1$ and

(5.9) $H^{1}(\lambda)\simeq H^{0}(G/P_{\alpha},H^{1}_{\alpha}(\lambda)).$

Let $\mu\in X^{+}$. Then we get from (5.9) that $\mathrm{Hom}_{G}(L(\mu),H^{1}(\lambda))\simeq\mathrm{Hom}_{P_{\alpha}}(L(\mu),H^{1}_{\alpha}(\lambda))$. Now we note that under the natural $P_{\alpha}$-homomorphism from $H^{0}(\mu)$ to $H^{0}_{\alpha}(\mu)$ the simple $G$-submodule $L(\mu)\subset H^{0}(\mu)$ maps onto the simple $P_{\alpha}$-submodule $L_{\alpha}(\mu)\subset H^{0}_{\alpha}(\mu)$.
Denoting by $K_{\alpha}(\mu)\subset L(\mu)$ the kernel of this map, we get the short exact sequence of $P_{\alpha}$-modules

(5.10) $0\to K_{\alpha}(\mu)\to L(\mu)\rightarrow L_{\alpha}(\mu)\to 0.$

Here we have $\mathrm{Hom}_{B}(L(\mu),\nu)\simeq\mathrm{Hom}_{G}(L(\mu),H^{0}(\nu))$, and this is non-zero if and only if $\nu=\mu$. Therefore $\mathrm{Hom}_{P_{\alpha}}(L(\mu),H^{1}_{\alpha}(\lambda))$ is non-zero only if $\mu$ belongs to the set of weights $\\{\lambda+\alpha,\lambda+2\alpha,\cdots,\lambda+(-\langle\lambda,\alpha^{\vee}\rangle-1)\alpha\\}$ of $H^{1}_{\alpha}(\lambda)$. But for none of these $\mu$ do $K_{\alpha}(\mu)$ and $H^{1}_{\alpha}(\lambda)$ have any weights in common. This implies that $\mathrm{Hom}_{P_{\alpha}}(K_{\alpha}(\mu),H^{1}_{\alpha}(\lambda))=0$. We therefore get from (5.10) that $\mathrm{Hom}_{P_{\alpha}}(L_{\alpha}(\mu),H^{1}_{\alpha}(\lambda))\simeq\mathrm{Hom}_{P_{\alpha}}(L(\mu),H^{1}_{\alpha}(\lambda))$. So we have reduced the theorem to the $SL_{2}$-case, where it is an easy computation. ∎

###### Remark 5.9.

1. (1) The proof determines the highest weight $\mu$ of $\mathrm{Soc}_{G}(H^{1}(\lambda))$. It also shows that $L(\mu)=\mathrm{Soc}_{G}(H^{1}(\lambda))$ has multiplicity $1$ as a composition factor in $H^{1}(\lambda)$.

2. (2) By Serre duality the theorem is equivalent to the statement that all non-zero $H^{N-1}(\lambda)$ have simple heads.

3. (3) In the next section we shall show that if $\lambda$ is “generic” (see Definition 5.17 below for this condition) inside a Weyl chamber then there is a unique $i$ for which $H^{i}(\lambda)\neq 0$. Moreover, this cohomology module has simple socle and simple head. This result fails badly in general. In fact, already for type $B_{2}$, we gave an example of a $\lambda$ for which $H^{2}(\lambda)$ splits into a direct sum of $2$ (simple) modules, see the last paragraph in [4].

### 5.4. Filtrations and sum formulas for cohomology modules

In Section 5.5 we shall recall from [9] how one can obtain Jantzen-type filtrations and sum formulas for our cohomology modules $H^{i}(\lambda),i\geq 0,\;\lambda\in X$. When we take $i=N$ we recover the Weyl module case proved by Jantzen [42] for $p\geq h$. We sketch the proof, taking advantage also of the methods developed in [16].

First we need to discuss ${\mathbb{Z}}$-versions of the cohomology modules. So we let $G_{\mathbb{Z}}$ denote the Chevalley group corresponding to $G$. This is a connected reductive algebraic group scheme over ${\mathbb{Z}}$ and $G$ is obtained from $G_{\mathbb{Z}}$ by base change from ${\mathbb{Z}}$ to $k$. More generally, if $A$ is any commutative ring we denote by $G_{A}$ the base change of $G_{\mathbb{Z}}$ to $A$ (so in particular $G_{k}=G$). We have likewise the subgroup schemes $T_{A}$ and $B_{A}$ corresponding to $T$ and $B$. Let $M$ be a $B_{A}$-module (like all modules we shall consider $M$ is finitely generated over $A$). Then we denote by $H^{i}_{A}(M)$ the $i$’th cohomology module of the vector bundle on $G_{A}/B_{A}$ associated to $M$. Alternatively, $H_{A}^{i}$ is also the $i$’th right derived functor of induction from $B_{A}$ to $G_{A}$. Then we have the following universal coefficient theorem, cf. [19], Ch §4.

###### Theorem 5.10.

Let $M$ be a $B_{\mathbb{Z}}$-module which is free (of finite rank) over ${\mathbb{Z}}$.
Then for each $i\geq 0$ we have a short exact sequence of $A$-modules $0\to H^{i}_{\mathbb{Z}}(M)\otimes_{\mathbb{Z}}A\to H^{i}_{A}(M\otimes_{\mathbb{Z}}A)\to\mathrm{Tor}_{1}^{\mathbb{Z}}(H^{i+1}_{\mathbb{Z}}(M),A)\to 0.$

In particular, we may in Theorem 5.10 take $M$ to be the rank $1$ ${\mathbb{Z}}$-module ${\mathbb{Z}}$ with $B_{\mathbb{Z}}$-structure given by $\lambda\in X$. We shall abuse notation and write just $\lambda$ for this $B_{\mathbb{Z}}$-module. Moreover, we let $H_{t}^{i}(\lambda)$ denote the torsion part of $H^{i}_{\mathbb{Z}}(\lambda)$ for any $i\geq 0$, and we set $H_{f}^{i}(\lambda)=H_{\mathbb{Z}}^{i}(\lambda)/H_{t}^{i}(\lambda)$, the free quotient of $H_{\mathbb{Z}}^{i}(\lambda)$. Then Theorem 5.10 combined with Bott’s theorem (Theorem 2.4) gives the following results.

###### Corollary 5.11.

Let $\lambda\in X$.

1. (1) If $\lambda$ is singular then $H^{i}_{\mathbb{Z}}(\lambda)=H^{i}_{t}(\lambda)$ for all $i$.

2. (2) If $\lambda$ is regular and $w\in W$ is the unique element with $w\cdot\lambda\in X^{+}$ then $H^{i}_{\mathbb{Z}}(\lambda)=H_{t}^{i}(\lambda)\text{ for all }i\neq\ell(w),$ and $H_{f}^{\ell(w)}(\lambda)\text{ has rank equal to }\dim_{\mathbb{Q}}H^{0}_{\mathbb{Q}}(w\cdot\lambda).$

3. (3) For each $i\geq 0$ we have a short exact sequence $0\to H^{i}_{\mathbb{Z}}(\lambda)\otimes_{\mathbb{Z}}k\to H^{i}(\lambda\otimes_{\mathbb{Z}}k)\to\mathrm{Tor}_{1}^{\mathbb{Z}}(H^{i+1}_{\mathbb{Z}}(\lambda),k)\to 0.$

Combining Theorem 5.10 with Grothendieck vanishing (4.1), respectively Kempf vanishing (Theorem 3.1), we get

###### Corollary 5.12.

Let $\lambda\in X$.

1. (1) $H^{i}_{\mathbb{Z}}(\lambda)=0$ for all $i>N$, and $H^{N}_{\mathbb{Z}}(\lambda)$ is free over ${\mathbb{Z}}$ of rank equal to $\dim_{\mathbb{Q}}H^{0}_{\mathbb{Q}}(w_{0}\cdot\lambda)$. Moreover, $H^{N}_{\mathbb{Z}}(\lambda)=0=H^{N}_{k}(\lambda)$ unless $\lambda\in X^{-}$.

2. (2) If $\lambda\in X^{+}$ then $H^{i}_{\mathbb{Z}}(\lambda)=0$ for all $i>0$ and $H^{0}_{\mathbb{Z}}(\lambda)$ is free of rank equal to $\dim_{\mathbb{Q}}H^{0}_{\mathbb{Q}}(\lambda)$.

We shall need some more notation in order to state and prove the sum formulas. Recall that $p$ is the characteristic of $k$. If $n\in{\mathbb{Z}}$ is non-zero we shall denote by $\nu_{p}(n)$ the highest exponent $s$ of $p$ such that $p^{s}$ divides $n$. Likewise, if $M$ is a finite abelian group of order $|M|$ we write $\nu_{p}(M)=\nu_{p}(|M|)$. Then $\nu_{p}(M)$ is also the length of the $p$-primary part of $M$ as ${\mathbb{Z}}$-module. Suppose $M$ is a finite $T_{\mathbb{Z}}$-module. Then we define $\mathrm{ch}^{p}(M)=\sum_{\lambda\in X}\nu_{p}(M_{\lambda})e^{\lambda}\in{\mathbb{Z}}[X]$ and call this the $p$-character of $M$. Note that $\mathrm{ch}^{p}$ is additive on short exact sequences of finite $T_{\mathbb{Z}}$-modules. Finally, if $M$ is a finite $B_{\mathbb{Z}}$-module then we define $E^{p}(M)=\sum_{i}(-1)^{i}\mathrm{ch}^{p}(H^{i}_{\mathbb{Z}}(M)).$ $E^{p}$ is also additive on short exact sequences of finite $B_{\mathbb{Z}}$-modules. The following proposition is the key to calculating $E^{p}(M)$. For any $\lambda\in X$ we denote by $\chi(\lambda)$ the Weyl character at $\lambda$, i.e. $\chi(\lambda)=\sum_{i=0}^{N}(-1)^{i}\mathrm{ch}H^{i}(\lambda)$. We have $\chi(w\cdot\lambda)=(-1)^{\ell(w)}\chi(\lambda)$ for any $w\in W$.

###### Proposition 5.13.

Let $n\in{\mathbb{Z}}_{>0}$ and $\lambda\in X$. Then $E^{p}(\lambda\otimes_{\mathbb{Z}}{\mathbb{Z}}/n)=\nu_{p}(n)\chi(\lambda).$

###### Proof.
The short exact sequence of $B_{\mathbb{Z}}$-modules $0\to\lambda\xrightarrow{n}\lambda\to\lambda\otimes_{\mathbb{Z}}{\mathbb{Z}}/n\to 0$ gives the long exact cohomology sequence $0\to H^{0}_{\mathbb{Z}}(\lambda)\xrightarrow{n}H^{0}_{\mathbb{Z}}(\lambda)\to H^{0}_{\mathbb{Z}}(\lambda\otimes{\mathbb{Z}}/n)\to H^{1}_{\mathbb{Z}}(\lambda)\xrightarrow{n}H^{1}_{\mathbb{Z}}(\lambda)\to H^{1}_{\mathbb{Z}}(\lambda\otimes{\mathbb{Z}}/n)\to\cdots$

If $\lambda$ is singular, then this sequence consists entirely of finite ${\mathbb{Z}}$-modules and the result is clear (note that $\chi(\lambda)=0$ for $\lambda$ singular). If $\lambda$ is regular, there exists (by Corollary 5.11 (2)) a unique $i$ such that $H^{i}_{\mathbb{Z}}(\lambda)$ is not finite. For this $i$ we have the following diagram

$\begin{array}{ccccccc}H^{i-1}_{\mathbb{Z}}(\lambda\otimes{\mathbb{Z}}/n)&\rightarrow&H^{i}_{t}(\lambda)&\rightarrow&H^{i}_{t}(\lambda)&\rightarrow&Q_{t}\\\\ \downarrow&&\downarrow&&\downarrow&&\downarrow\\\\ H^{i-1}_{\mathbb{Z}}(\lambda\otimes{\mathbb{Z}}/n)&\rightarrow&H^{i}_{\mathbb{Z}}(\lambda)&\rightarrow&H^{i}_{\mathbb{Z}}(\lambda)&\rightarrow&Q\\\\ \downarrow&&\downarrow&&\downarrow&&\downarrow\\\\ 0&\rightarrow&H^{i}_{f}(\lambda)&\rightarrow&H^{i}_{f}(\lambda)&\rightarrow&Q_{f}.\end{array}$

Here the top left vertical arrow is the identity map, the three middle horizontal arrows are multiplication by $n$, and $Q_{t}$, $Q$, and $Q_{f}$ denote the cokernels of multiplication by $n$ on the modules in question.

Now the top sequence in the above diagram is the end of an exact sequence of torsion modules. It gives $\mathrm{ch}^{p}(Q_{t})=\sum_{j\geq 0}(-1)^{j}\mathrm{ch}^{p}(H^{i-1-j}_{\mathbb{Z}}(\lambda\otimes_{\mathbb{Z}}{\mathbb{Z}}/n)).$ Likewise $Q$ is the first term in the exact sequence $0\to Q\to H^{i}_{\mathbb{Z}}(\lambda\otimes_{\mathbb{Z}}{\mathbb{Z}}/n)\to H^{i+1}_{\mathbb{Z}}(\lambda)\to H^{i+1}_{\mathbb{Z}}(\lambda)\to H^{i+1}_{\mathbb{Z}}(\lambda\otimes_{\mathbb{Z}}{\mathbb{Z}}/n)\to\cdots,$ which also consists of torsion modules, cf. Corollary 5.11. Therefore we get $\mathrm{ch}^{p}(Q)=\sum_{j\geq 0}(-1)^{j}\mathrm{ch}^{p}(H^{i+j}_{\mathbb{Z}}(\lambda\otimes_{\mathbb{Z}}{\mathbb{Z}}/n)).$ As the right column in the above diagram is a short exact sequence we deduce that $\mathrm{ch}^{p}(Q_{f})=\mathrm{ch}^{p}(Q)-\mathrm{ch}^{p}(Q_{t})=(-1)^{i}\sum_{j}(-1)^{j}\mathrm{ch}^{p}(H^{j}_{\mathbb{Z}}(\lambda\otimes_{\mathbb{Z}}{\mathbb{Z}}/n))=(-1)^{i}E^{p}(\lambda\otimes_{\mathbb{Z}}{\mathbb{Z}}/n)$. The proposition thus follows as the above formulas show that $Q_{f}=H^{i}_{f}(\lambda)\otimes_{\mathbb{Z}}{\mathbb{Z}}/n$ has $p$-character equal to $\nu_{p}(n)(-1)^{i}\chi(\lambda)$. ∎

Let now $\lambda\in X^{+}$ and set $\Delta(\lambda)=H^{N}(w_{0}\cdot\lambda)\text{ and }\nabla(\lambda)=H^{0}(\lambda).$ These are the Weyl module and the dual Weyl module with highest weight $\lambda$. They both have ${\mathbb{Z}}$-forms, namely $\Delta_{\mathbb{Z}}(\lambda)=H^{N}_{\mathbb{Z}}(w_{0}\cdot\lambda)$, respectively $\nabla_{\mathbb{Z}}(\lambda)=H^{0}_{\mathbb{Z}}(\lambda)$. According to Corollary 5.12 we have $\Delta_{\mathbb{Z}}(\lambda)\otimes_{\mathbb{Z}}{\mathbb{Q}}\simeq H^{N}_{\mathbb{Q}}(w_{0}\cdot\lambda)\simeq H^{0}_{\mathbb{Q}}(\lambda)\simeq\nabla_{\mathbb{Z}}(\lambda)\otimes{\mathbb{Q}}$. Choose a reduced expression $w_{0}=s_{N}\cdots s_{2}s_{1}$ for $w_{0}$ and as in Section 5.1 set $\lambda_{i}=s_{i}s_{i-1}\cdots s_{1}\cdot\lambda$.
We then get a string of homomorphisms analogous to (5.7) $\Delta_{\mathbb{Z}}(\lambda)=H^{N}_{\mathbb{Z}}(\lambda_{N})\to\cdots\to H^{i+1}_{\mathbb{Z}}(\lambda_{i+1})\to H^{i}_{\mathbb{Z}}(\lambda_{i})\to\cdots\to H^{0}_{\mathbb{Z}}(\lambda_{0})=\nabla_{\mathbb{Z}}(\lambda).$ In the notation above $H^{i}_{t}(\lambda_{i})$ is the torsion submodule of $H^{i}_{\mathbb{Z}}(\lambda_{i})$ and $H_{f}^{i}(\lambda_{i})=H^{i}_{\mathbb{Z}}(\lambda_{i})/H^{i}_{t}(\lambda_{i})$ is the free quotient. If $j\neq i$ then $H^{j}_{\mathbb{Z}}(\lambda_{i})$ is a torsion module, cf. Corollary 5.11. Clearly, each of the homomorphisms $c_{\mathbb{Z}}^{i}(\lambda):H^{i}_{\mathbb{Z}}(\lambda_{i})\to H_{\mathbb{Z}}^{i-1}(\lambda_{i-1})$ induces a homomorphism $c_{f}^{i}(\lambda):H_{f}^{i}(\lambda_{i})\to H_{f}^{i-1}(\lambda_{i-1})$. We set $c_{f}(\lambda)=c_{f}^{1}(\lambda)\circ c_{f}^{2}(\lambda)\circ\cdots\circ c_{f}^{N}(\lambda):\Delta_{\mathbb{Z}}(\lambda)\to\nabla_{\mathbb{Z}}(\lambda)$. As in Section 5.1 we see that on the $\lambda$ weight space each $c_{f}^{i}(\lambda)$ is an isomorphism. Hence the same is true for $c_{f}(\lambda)$. This means in particular that up to signs $c_{f}(\lambda)$ is independent of the reduced expression we have chosen for $w_{0}$, and it implies that $c_{f}(\lambda)\otimes_{\mathbb{Z}}k$ is a non-zero homomorphism from $\Delta(\lambda)$ to $\nabla(\lambda)$.

###### Theorem 5.14.

(The Jantzen filtration and sum formula for Weyl modules.) Let $\lambda\in X^{+}$. The homomorphism $c_{f}(\lambda):\Delta_{\mathbb{Z}}(\lambda)\to\nabla_{\mathbb{Z}}(\lambda)$ induces a filtration of $\Delta(\lambda)$ $0=\Delta^{r+1}(\lambda)\subset\Delta^{r}(\lambda)\subset\cdots\subset\Delta^{1}(\lambda)\subset\Delta^{0}(\lambda)=\Delta(\lambda),$ which satisfies

1. (1) $\Delta(\lambda)/\Delta^{1}(\lambda)=L(\lambda)$

2. (2) $\sum_{j=1}^{r}\mathrm{ch}\Delta^{j}(\lambda)=-\sum_{\beta\in R^{+}}\sum_{0<m<\langle\lambda+\rho,\beta^{\vee}\rangle}\nu_{p}(m)\chi(\lambda-m\beta)$.

###### Proof.

We set $\Delta_{\mathbb{Z}}^{j}(\lambda)=c_{f}(\lambda)^{-1}(p^{j}\nabla_{\mathbb{Z}}(\lambda))$. Denoting by $\pi:\Delta_{\mathbb{Z}}(\lambda)\to\Delta(\lambda)$ the natural homomorphism we then define a filtration of $\Delta(\lambda)$ by setting $\Delta^{j}(\lambda)=\mathrm{span}_{k}(\pi(\Delta_{\mathbb{Z}}^{j}(\lambda)))\subset\Delta(\lambda).$ This is clearly a finite filtration, i.e. there exists an $r\geq 0$ such that $\Delta^{r+1}(\lambda)=0$. Moreover, by the observation just above the theorem we see that (1) holds, because $\mathrm{Hom}_{G}(\Delta(\lambda),\nabla(\lambda))=k$ and any non-zero $G$-homomorphism (like $c_{f}(\lambda)\otimes_{\mathbb{Z}}1$) from $\Delta(\lambda)$ to $\nabla(\lambda)$ has image $L(\lambda)$.

To prove (2) we notice first that the left hand side of (2) equals $\mathrm{ch}^{p}(Q)$ where $Q$ is the cokernel of $c_{f}(\lambda)$. This is a well known computation, see e.g. [43], Section II.8.11. Now each $c_{f}^{i}(\lambda)$ is injective, because by Bott’s theorem it becomes an isomorphism after tensoring with ${\mathbb{Q}}$. Hence the additivity of $\mathrm{ch}^{p}$ gives $\mathrm{ch}^{p}(Q)=\sum_{i=1}^{N}\mathrm{ch}^{p}(Q_{f}^{i})$ where $Q_{f}^{i}$ denotes the cokernel of $c_{f}^{i}(\lambda)$.
Letting $Q^{i}$ be the cokernel of $c_{\mathbb{Z}}^{i}(\lambda)$ and $Q^{i}_{t}$ be the cokernel of the restriction of $c_{\mathbb{Z}}^{i}(\lambda)$ to $H_{t}^{i}(\lambda_{i})$ we have $\mathrm{ch}^{p}(Q^{i}_{f})=\mathrm{ch}^{p}(Q^{i})-\mathrm{ch}^{p}(Q_{t}^{i}).$ We shall now compute the terms on the right hand side of this equation by using the techniques from the proof of Proposition 5.13 combined with a ${\mathbb{Z}}$-variation of the methods from Section 5.1. Let $\mu\in X$ and $\alpha\in S$ satisfy $\langle\mu,\alpha^{\vee}\rangle\geq 0$. When working over ${\mathbb{Z}}$ the 4 short exact $B$-sequences in Section 5.1 “simplify” to the following 3 short exact $B_{\mathbb{Z}}$-sequences (using notation analogous to that in Section 5.1) (5.11) $0\to K_{\alpha,{\mathbb{Z}}}^{\mu}\to H^{0}_{\alpha,{\mathbb{Z}}}(\mu+\rho)\otimes_{\mathbb{Z}}-\rho\to\mu\to 0,$ (5.12) $0\to s_{\alpha}\cdot\mu\to K_{\alpha,{\mathbb{Z}}}^{\mu}\to V_{\alpha,{\mathbb{Z}}}^{\mu}\to 0,$ and (5.13) $0\to V_{\alpha,{\mathbb{Z}}}^{\mu}\to H^{0}_{\alpha,{\mathbb{Z}}}(\mu-\alpha+\rho)\otimes_{\mathbb{Z}}-\rho\to Q_{\alpha,{\mathbb{Z}}}^{\mu}\to 0.$ The difference when working over ${\mathbb{Z}}$ is that here the natural $B_{\mathbb{Z}}$-homomorphism $V_{\alpha,{\mathbb{Z}}}^{\mu}\to H^{0}_{\alpha,{\mathbb{Z}}}(\mu-\alpha+\rho)\otimes_{\mathbb{Z}}-\rho$ is injective. We shall need the following observation (an $SL_{2}$-computation) about the torsion module $Q_{\alpha,{\mathbb{Z}}}^{\mu}$. (5.14) The weight spaces of $Q_{\alpha,{\mathbb{Z}}}^{\mu}$ are ${\mathbb{Z}}/j\otimes_{\mathbb{Z}}(\mu-j\alpha),\;j=1,2,\cdots,\langle\mu+\rho,\alpha^{\vee}\rangle-1.$ Just as in Section 5.1 we see from (5.11) that $H^{i}_{\mathbb{Z}}(\mu)\simeq H^{i+1}_{\mathbb{Z}}(K_{\alpha,{\mathbb{Z}}}^{\mu})$ and from (5.13) that $H_{\mathbb{Z}}^{i+1}(V_{\alpha,{\mathbb{Z}}}^{\mu})\simeq H^{i}_{\mathbb{Z}}(Q_{\alpha,{\mathbb{Z}}}^{\mu})$. Therefore we get by combining this with (5.12) the long exact sequence $\cdots\to H^{i+1}_{\mathbb{Z}}(s_{\alpha}\cdot\mu)\to H_{\mathbb{Z}}^{i}(\mu)\to H^{i}_{{\mathbb{Z}}}(Q_{\alpha,{\mathbb{Z}}}^{\mu})\to\cdots.$ We now apply this sequence with $\mu=\lambda_{i}$ and $\alpha=\alpha_{i+1}$, the simple root associated to $s_{i+1}$. 
Remembering that $H^{j+1}_{\mathbb{Z}}(\lambda_{i+1})$ and $H^{j}_{\mathbb{Z}}(\lambda_{i})$ are torsion modules for all $j\neq i$ we see that $Q_{t}^{i+1}$ is the last term in the exact sequence $\cdots\to H^{i}_{t}(\lambda_{i+1})\to H^{i-1}_{t}(\lambda_{i})\to H^{i-1}_{\mathbb{Z}}(Q_{\alpha_{i+1},{\mathbb{Z}}}^{\lambda_{i}})\to H^{i+1}_{t}(\lambda_{i+1})\to H^{i}_{t}(\lambda_{i})\to Q_{t}^{i+1}\to 0,$ whereas $Q^{i+1}$ is the first term in the exact sequence $0\to Q^{i+1}\to H^{i}_{\mathbb{Z}}(Q_{\alpha_{i+1},{\mathbb{Z}}}^{\lambda_{i}})\to H^{i+2}_{t}(\lambda_{i+1})\to H^{i+1}_{\mathbb{Z}}(\lambda_{i})\to\cdots.$ These sequences consist entirely of torsion modules and we get $\mathrm{ch}^{p}(Q_{t}^{i+1})=\sum_{j\geq 0}(-1)^{j}(\mathrm{ch}^{p}(H^{i-j}_{t}(\lambda_{i}))-\mathrm{ch}^{p}(H^{i+1-j}_{t}(\lambda_{i+1}))+\mathrm{ch}^{p}(H^{i-1-j}_{t}(Q_{\alpha_{i+1},{\mathbb{Z}}}^{\lambda_{i}}))),$ while $\mathrm{ch}^{p}(Q^{i+1})=\sum_{j\geq 0}(-1)^{j}(\mathrm{ch}^{p}(H^{i+j}_{t}(Q_{\alpha_{i+1},{\mathbb{Z}}}^{\lambda_{i}}))-\mathrm{ch}^{p}(H^{i+2+j}_{t}(\lambda_{i+1}))+\mathrm{ch}^{p}(H^{i+1+j}_{t}(\lambda_{i}))).$ We get from this $\mathrm{ch}^{p}(Q_{f}^{i+1})=(-1)^{i}(E^{p}(Q_{\alpha_{i+1},{\mathbb{Z}}}^{\lambda_{i}})+\sum_{j\geq 0}(-1)^{j}(\mathrm{ch}^{p}(H^{j}_{t}(\lambda_{i+1}))-\mathrm{ch}^{p}(H_{t}^{j}(\lambda_{i})))).$ By (5.14) and Proposition 5.13 we have $E^{p}(Q_{\alpha_{i+1},{\mathbb{Z}}}^{\lambda_{i}})=\sum_{j=1}^{\langle\lambda_{i}+\rho,\alpha_{i+1}^{\vee}\rangle-1}\nu_{p}(j)\chi(\lambda_{i}-j\alpha_{i+1})=(-1)^{i}\sum_{j=1}^{\langle\lambda+\rho,\beta_{i+1}^{\vee}\rangle-1}\nu_{p}(j)\chi(\lambda-j\beta_{i+1}),$ where $\beta_{i+1}=s_{1}s_{2}\cdots s_{i}(\alpha_{i+1})$. Write now $E_{t}^{p}(i)=\sum_{j\geq 0}(-1)^{j}\mathrm{ch}^{p}(H^{j}_{t}(\lambda_{i})),\;i=0,1,\cdots,N$. Noting that $\\{\beta_{1},\beta_{2},\cdots,\beta_{N}\\}=R^{+}$ we get from the above $\mathrm{ch}^{p}(Q)=\sum_{i=0}^{N-1}\mathrm{ch}^{p}(Q_{f}^{i+1})=-\sum_{\beta\in R^{+}}\sum_{j=1}^{\langle\lambda+\rho,\beta^{\vee}\rangle-1}\nu_{p}(j)\chi(\lambda-j\beta)+\sum_{i=0}^{N-1}(E_{t}^{p}(i+1)-E_{t}^{p}(i)).$ Here the last sum equals $E_{t}^{p}(N)-E_{t}^{p}(0)=0$ because both $H_{\mathbb{Z}}^{N}(\lambda_{N})$ and $H_{\mathbb{Z}}^{0}(\lambda_{0})$ are torsionfree, see Corollary 5.12. Thus we have established (2). ∎ ###### Remark 5.15. In low rank, e.g. rank $2$ or type $A_{3}$, Theorem 5.14 combined with the translation principle [43], Chapter II.7 gives all the irreducible characters. The result has also proved very useful for finding or at least limiting particular composition factor multiplicities in Weyl modules in many other cases, e.g. for small characteristics (note that the above proof requires no restrictions on $p$) or for special highest weights. Consider again $\mu\in X$ and $\alpha\in S$ with $\langle\mu,\alpha^{\vee}\rangle\geq 0$. In addition to the unique (up to sign) non-zero $P_{\alpha,{\mathbb{Z}}}$-homomorphism $c_{\alpha}^{\mu}:H_{\alpha,{\mathbb{Z}}}^{1}(s_{\alpha}\cdot\mu)\to H^{0}_{\alpha,{\mathbb{Z}}}(\mu)$ we have a similar homomorphism $\tilde{c}_{\alpha}^{\mu}:H^{0}_{\alpha,{\mathbb{Z}}}(\mu)\to H_{\alpha,{\mathbb{Z}}}^{1}(s_{\alpha}\cdot\mu)$, which is unique once we require $\tilde{c}_{\alpha}^{\mu}\circ c_{\alpha}^{\mu}=\langle\mu,\alpha^{\vee}\rangle!Id_{H^{1}_{\alpha,{\mathbb{Z}}}(s_{\alpha}\cdot\mu)}$. 
Just like $c_{\alpha}^{\mu}$ gives rise to $G_{\mathbb{Z}}$-homomorphisms $H_{\mathbb{Z}}^{i+1}(s_{\alpha}\cdot\mu)\to H_{\mathbb{Z}}^{i}(\mu)$ for all $i\geq 0$, so the homomorphism $\tilde{c}_{\alpha}^{\mu}$ induces $G_{\mathbb{Z}}$-homomorphisms in the reverse directions. Composing these maps according to an appropriate reduced expression for $w_{0}$ one obtains for each $w\in W$ and $\lambda\in X^{+}$ a $G_{\mathbb{Z}}$-homomorphism $c_{f}(w,\lambda):H_{f}^{\ell(w)}(w\cdot\lambda)\to H_{f}^{N-\ell(w)}(w_{0}w\cdot\lambda).$ Arguing as in the proof of Theorem 5.14 (cf. [9]) we then get ###### Theorem 5.16. (Filtrations and sum formulas of cohomology modules) Let $\lambda\in X^{+}$ and set $r_{\beta}=\min\\{r\in{\mathbb{Z}}_{\geq 0}|p^{r}\geq\langle\lambda+\rho,\beta^{\vee}\rangle\\}$ for each $\beta\in R^{+}$. Then for $w\in W$ the module $H_{f}^{\ell(w)}(w\cdot\lambda)\otimes_{\mathbb{Z}}k$ has a filtration $0=F^{s+1}(w\cdot\lambda)\subset F^{s}(w\cdot\lambda)\subset\cdots\subset F^{1}(w\cdot\lambda)\subset F^{0}(w\cdot\lambda)=H_{f}^{\ell(w)}(w\cdot\lambda)\otimes_{\mathbb{Z}}k$ which satisfies the sum formula $\sum_{j\geq 1}\mathrm{ch}(F^{j}(w\cdot\lambda))=(\sum_{\alpha\in R^{+}\cap w^{-1}R^{+}}r_{\alpha})\chi(\lambda)+\sum_{\beta\in R^{+}}sgn(w(\beta))\sum_{0<m<\langle\lambda+\rho,\beta^{\vee}\rangle}\nu_{p}(m)\chi(\lambda-m\beta)$ $+(-1)^{\ell(w_{0}w)}(E_{t}^{p}(w_{0}w\cdot\lambda)-E_{t}^{p}(w\cdot\lambda)).$ ###### Remark 5.17. Unfortunately, this theorem gives in general only a filtration of a subquotient of $H^{\ell(w)}(w\cdot\lambda)$. Moreover, in contrast to Theorem 5.14 it contains no statement (1) about the top term of the filtration. Finally, in Theorem 5.14 the right hand side of the sum formula is a sum of well known terms (Weyl characters), but in the present theorem the right hand side has two additional terms, which are not known in general. As we shall see in the next section (cf. in particular Corollary 5.21) we can “repair” these deficiencies for generic weights. Also see [11] for further related results on the “torsion part” of $H^{i}(w\cdot\lambda)$. ### 5.5. Cohomology of line bundles with generic weights It turns out that if $\mu$ lies “far away” from the walls of the Weyl chambers in $X$, then the cohomology $H^{i}(\mu)$ behaves much better than in general (both when it comes to its vanishing behavior, and when it concerns its $G$-module structure). In this section we shall briefly discuss this case. We follow [12] where further details and results can be found. Although parts of the statements below are true under milder assumptions we shall (as in [12]) say that a dominant weight is generic if it satisfies the following conditions. ###### Definition 5.18. Let $\lambda\in X^{+}$ and write $\lambda=\lambda^{0}+p^{n}\lambda^{1}$ with $\lambda^{0}\in X_{n}$ and $\lambda^{1}\in X^{+}$. We say that $\lambda$ is generic if it satisfies the conditions $6(h-1)\leq\langle\lambda^{1},\beta^{\vee}\rangle\leq p-6(h-1)\text{ for all }\beta\in R^{+}.$ Note that generic weights exist only when $p>12(h-1)$. If $H$ is a subgroup(scheme) in $G$ and $M$ is an $H$-module we denote by $\mathrm{Soc}_{H}(M)$, respectively by $\mathrm{Hd}_{H}(M)$, the socle, respectively the head, of $M$. When $\lambda\in X^{+}$ and $w\in W$ we denote by $\lambda^{w}$ the weight determined by $\lambda^{w}=(w\cdot\lambda)^{0}+p^{n}w^{-1}\cdot((w\cdot\lambda)^{1})$. ###### Theorem 5.19. ([12], Theorem 2.1 and Theorem 2.2) Let $\lambda\in X^{+}$ be generic. Then for any $w\in W$ we have 1. 
(1) $H^{i}(w\cdot\lambda)=0$ for all $i\neq\ell(w)$, 2. (2) $\mathrm{Soc}_{G_{n}}(H^{\ell(w)}(w\cdot\lambda))=L(\lambda^{w})=\mathrm{Soc}_{G}(H^{\ell(w)}(w\cdot\lambda))$, 3. (3) $\mathrm{Hd}_{G_{n}}(H^{\ell(w)}(w\cdot\lambda))=L(\lambda^{w_{0}w})=\mathrm{Hd}_{G}(H^{\ell(w)}(w\cdot\lambda))$. ###### Remark 5.20. 1. (1) Theorem 5.19 (1) says that the vanishing statement in Bott’s theorem holds in characteristic $p>0$, when $\lambda$ is generic (note that this condition depends on $p$). This was first observed by Cline, Parshall and Scott in the appendix of [23]. 2. (2) The proof of this theorem uses the isomorphisms $H^{i}(\mu)\simeq H^{i}(G/G_{n}B,H^{0}(G_{n}B/B,\mu))$ resulting from (4.5). As all $G_{n}B$-composition factors of $H^{0}(G_{n}B/B,\mu)$ have highest weights lying “close to” $\mu$ we get information about $H^{i}(\mu)$ by studying a $G_{n}B$-composition series of $H^{0}(G_{n}B/B,\mu)$. In addition to the vanishing result in (1) this may also be used to find the $G$-composition factors for $H^{\ell(w)}(w\cdot\lambda)$ in terms of the $G_{n}B$-composition factors of $H^{0}(G_{n}B/B,w\cdot\lambda)$, see [23], Proposition A.1(b) and [12], Theorem 2.1. The vanishing result in Theorem 5.19 (1) ensures that the set of composition factors of $H^{\ell(w)}(w\cdot\lambda)$ is independent of $w$. 3. (3) The proof of the results in (2) and (3) on the socles and heads of $H^{\ell(w)}(w\cdot\lambda)$ relies on the fact that the corresponding indecomposable injective modules for $G_{n}$ have a $G_{n}B$-structure. This is known to be true for $p\geq 2(h-1)$, see [43], Section II.11.11, a condition much weaker than our condition for generic weights to exist. Furthermore, the indecomposable injective $G_{n}$-modules have $G_{n}B$-filtrations with quotients equal to $H^{0}(G_{n}B/B,\mu)$ for certain $\mu\in X$, which are again “close enough” to $w\cdot\lambda$. For details, see the proof of Theorem 2.2 in [12]. 4. (4) J.E. Humphreys conjectured in [37], Conjecture p. 178, that if $\mu$ is in sufficiently general position, then it will always have a unique non-vanishing cohomology module, say $H^{j}(\mu)$, and this module will have simple head and socle. Theorem 5.19 proves this conjecture (with the above generic conditions on $\lambda$ being our way of ensuring that all $\mu\in W\cdot\lambda$ are in “sufficiently general position”). 5. (5) J.E. Humphreys presented in [37], Section 2 several other conjectures on the module structures of the cohomology modules $H^{i}(\mu)$. For instance, he suggested that when $\lambda\in X^{+}$ is in “sufficiently general position” then the $G$-module structure of $H^{\ell(w)}(w\cdot\lambda)$ is “similar” to that of the dual Weyl module $H^{0}(\lambda)$. Note that by (1) it follows that for $\lambda$ generic, all $H^{\ell(w)}(w\cdot\lambda)$ have the same composition factors (counted with multiplicity). Jim suggested that these composition factors are “scrambled” in a predictable way for each $w$. He also speculated that the “missing” occurrences of certain composition factors in $H^{0}(\lambda)$, which one observes when moving $\lambda$ from a sufficiently general position towards one or more walls of the dominant chamber, should be explained by the non-standard behavior of the higher cohomology modules for the various $w\cdot\lambda$. These predictions/speculations are very much still open today (even for very large primes). See [30], [48] and [49] for some partial results. 6. 
(6) For some alternative ways of obtaining explicit descriptions of higher cohomology modules for line bundles on flag varieties see [28], [50], and [51]. In addition to trying to answer some of Jim’s questions and predictions we had another motivation for proving Theorem 5.19, namely we wanted to improve on Theorem 5.16. More precisely, we wanted to address the problems mentioned in Remark 5.17. This consequence of Theorem 5.19 is (we use notation as above as well as from Theorem 5.14): ###### Corollary 5.21. Let $\lambda\in X^{+}$ be generic and suppose $w\in W$. Then $H^{\ell(w)}(w\cdot\lambda)$ has a filtration $0=F^{s+1}(w\cdot\lambda)\subset F^{s}(w\cdot\lambda)\subset\cdots\subset F^{1}(w\cdot\lambda)\subset F^{0}(w\cdot\lambda)=H^{\ell(w)}(w\cdot\lambda)$ which satisfies 1. (1) $H^{\ell(w)}(w\cdot\lambda)/F^{1}(w\cdot\lambda)=L(\lambda^{w_{0}w})$, 2. (2) $\sum_{j\geq 1}\mathrm{ch}(F^{j}(w\cdot\lambda))=(\sum_{\alpha\in R^{+}\cap w^{-1}R^{+}}r_{\alpha})\chi(\lambda)+\\\ \hskip 56.9055pt\sum_{\beta\in R^{+}}sgn(w(\beta))\sum_{0<m<\langle\lambda+\rho,\beta^{\vee}\rangle}\nu_{p}(m)\chi(\lambda-m\beta).$ ## References * [1] P. Achar, S. Makisumi, S. Riche, and G. Williamson, Koszul duality for Kac-Moody groups and characters of tilting modules, J. Amer. Math. Soc. 32 (2019), 261-310. * [2] H.H. Andersen, Cohomology of line bundles on G/B, Ann. Scient. Éc. Norm. Sup. (4) 12 (1979), 82-100. * [3] H.H. Andersen, The first cohomology group of a line bundle on G/B, Invent. Math. 5 (1979), 287-296. * [4] H.H. Andersen, The strong linkage principle, J. reine angew. Math. 315 (1980), 53-59. * [5] H. H. Andersen, A G-equivariant proof of the vanishing theorem for dominant line bundles on G/B, Preprint Inst. Adv. Study (1979), pp.1-13. * [6] H.H. Andersen, The Frobenius morphism on the cohomology of homogeneous vector bundles on G/B, Ann. of Math. 112 (1980), 113-121. * [7] H.H. Andersen, Vanishing theorems and induced representations, J. Alg. 62 (1980), 86 – 100. * [8] H.H. Andersen, On the structure of the cohomology of line bundles on G/B, J. Alg. 71 (1981), 242 – 258. * [9] H.H. Andersen, Filtrations of cohomology modules for Chevalley groups, Ann. Scient. Éc. Norm. Sup. (4) 16 (1983), 495 - 528. * [10] H.H. Andersen, Schubert varieties and Demazure’s character formula, Invent. Math. 79 (1985), 611-618. * [11] H.H. Andersen, Torsion in the cohomology of line bundles on homogeneous spaces for Chevalley groups, Proc. A. M. S. 96 (1986), 537- 544. * [12] H.H. Andersen, On the generic structure of cohomology modules for semisimple algebraic groups . Trans. Amer. Math. Soc. 295 (1986) 397-415. * [13] H. H. Andersen, The strong linkage principle for quantum groups at roots of 1, J. Alg., 260 (2003), 2-15. * [14] H.H. Andersen, Cohomology of line bundles, Algebraic Groups and Homogeneous Spaces. Mehta, V. (red.). Tata Institute of Fundamental Research (2007), 13-36. * [15] H. H. Andersen and M. Kaneda, Cohomology of line bundles on the flag variety for type G2, Journal of Pure and Applied Algebra 216 (7) (2012), 1566 – 1579. * [16] H.H. Andersen and U. Kulkarni) Sum formulas for reductive algebraic groups in: Advances in Mathematics 217, (2008), 419 - 447. * [17] H.H. Andersen and K. Wen, Representations of quantum algebras. The mixed case, J. Reine Angew. Math. 427 (1992), 35 – 50. * [18] R. Bott, Homogeneous vector bundles, Ann. of Math. 66 (1957), 203-248. * [19] N. Bourbaki, Algèbre, Chapitre X, Masson 1980. * [20] M. Brion and S. 
Kumar, Frobenius Splitting Methods in Geometry and Representation Theory, Progress in Mathematics 231 (2005), Birkhauser. * [21] R. Carter and G. Lusztig, On the modular representation theory of the general linear and symmetric groups, Math. Z. 136 (1974), 193 - 242. * [22] C. Chevalley, Théorie des groupes de Lie, Tome II. Groupes algébriques, Actualités Sci. Ind. no 1152. Hermann & Cie., Paris (1951). * [23] E. Cline, B. Parshall and L. Scott, On injective modules for infinitesimal algebraic groups, J. London Math. Soc. (2) 31 (1985), 277 - 291. * [24] C. Curtis, Representations of Lie algebras of classical types with applications to linear groups, J. Math. Mech. 9 (1960), 307-326. * [25] M. Demazure, Une démonstration algébrique d’un théorème de Bott, Invent. Math. 5 (1968), 349 - 356. * [26] M. Demazure, Désingularisation des variétés de Schubert généralisées, Ann. Sci. École Norm. Sup. 7 (1974), 53-88. * [27] M. Demazure, A very simple proof of Bott’s theorem, Invent. Math. 33 (1976), 271 - 272. * [28] S. Donkin, The cohomology of line bundles on the three-dimensional flag variety, J. Alg. 307 (2007), 570–613. * [29] S. Doty, The strong linkage principle, Amer. J. Math. 111 (1989), 135-141. * [30] S. R. Doty and J. B. Sullivan, On the structure of higher cohomology modules of line bundles on G/B, J. Alg. 114 (1988), 286–332. * [31] W.L Griffith, Cohomology of flag varieties in characteristic p, Illinois J. Math., 24 (1980), 452 - 461. * [32] A. Grothendieck, Sur quelques points d’algèbre homologique, Tohuku Math. J. 9 (1957), 119 - 221. * [33] W. Haboush, A short proof of the Kempf vanishing theorem, Invent. Math., 56 (1980), 109 - 112. * [34] J. E. Humphreys, Modular representations of classical Lie algebras and semisimple groups, J. Alg. 19 (1971), 51 - 79. * [35] J. E. Humphreys, Ordinary and Modular Representations of Chevalley Groups, Lect. Notes in Mathematics 528, Springer-Verlag, Berlin-Heidelberg-New York 1976. * [36] J. E. Humphreys, Weyl modules and Bott’s theorem in characteristic p, pp. 474–483, Lie Theories and their Applications, Queen’s Papers in Pure & Appl. Math. 48, Kingston, Ont., 1978. * [37] J.E. Humphreys, Cohomology of G/B in characteristic p, Adv. in Math. 59 (1986), 170–183. * [38] J. E. Humphreys, Cohomology of line bundles on G/B for the exceptional group $G_{2}$, J. Pure Appl. Math. 44 (1987), 227–239. * [39] J. E. Humphreys, Cohomology of line bundles on flag varieties in prime characteristic, pp. 193–204, Proc. Hyderabad Conference on Algebraic Groups, Manoj Prakashan, Madras, 1991. * [40] J. E. Humphreys, Cohomology of line bundles on flag varieties, Note posted on homepage May 2018. * [41] J. C. Jantzen, Sur Characterformel gewisser Darstellungen halbeinfacher Gruppen und Lie Algebren, Math. Z. 140 (1974), 127- 149. * [42] J. C. Jantzen, Darstellungen halbeinfacher Gruppen und kontravariante Formen, J. reine angew. Math. 290 (1977), 117 - 141. * [43] J.C. Jantzen, Representations of Algebraic Groups, Mathematical Surveys and Monographs 107, Second edition, American Mathematical Society (2003). * [44] V. Kac and B. Weisfeiler, Coadjoint action of a semi-simple algebraic group and the center of the enveloping algebra in characteristic $p$ , Indag. Math. 38 (1976), 136 - 151. * [45] G. Kempf, Vanishing theorem for flag manifolds, Amer. J. Math. 98 (1976), 325 - 331. * [46] G. Kempf, Linear systems on homogeneous spaces, Ann. of Math., 103 (1976), 557 - 591. * [47] V. Lakshmibai, C. Musili, C.S. Seshadri, Cohomology of line bundles on $G/B$, Ann. Sci. 
École Norm. Sup. 7 (1974), 88 -132. * [48] Z. Lin, The structure of cohomology of line bundles on $G/B$ for semisimple algebraic groups, J. Alg. 134 (1990), 225–256. * [49] Z. Lin, Socle series of cohomology groups of line bundles on $G/B$, J. Pure Appl. Algebra 72 (1991), 275–294. * [50] L. Liu, On the cohomology of line bundles over certain flag schemes, Journal of Combinatorial Theory, Series A Volume 182 (2021), 105448. * [51] L. Liu and P. Polo, On the cohomology of line bundles over certain flag schemes II, Journal of Combinatorial Theory, Series A Volume 178 (2021), 105352. * [52] V. Mehta and A. Ramanathan, Frobenius splitting and cohomology vanishing for Schubert varieties, Ann. of Math. 122 (1985), 27 - 45. * [53] S. Riche and G. Williamson, A simple character formula. Annales Henri Lebesgue, UFR de Mathématiques - IRMAR (2020), 503 - 535. * [54] J.-p. Serre, Représentations linéaires et espaces homogènes kählériens des groupes de Lie compacts, Séminaire N. Bourbaki, (1954), exp. no 100, 447-454. * [55] C. S. Seshadri, Line bundles on Schubert varieties, in:Vector bundles on algebraic varieties (Bombay, 1984), 499 - 528, Tata Inst. Fund. Res., Bombay (1987). * [56] P. Sobaje, On character formulas for simple and tilting modules, Adv. in Math., 369, (2020), 107172. * [57] R. Steinberg, Representations of algebraic groups, Nagoya Math. J. 22 (1963), 33-56. * [58] D.-n. Verma, Rôle of affine Weyl groups in representation theory of algebraic Chevalley groups and their Lie algebras, pp. 653 - 705 in: I. M. Gelfand (ed.), Lie groups and their Lie algebras, Proc. Budapest 1971, London 1975. * [59] W. Wong, Very strong linkage principle for cohomology of line bundles on $G/B$, J. Alg. 113 (1988), 71 - 80. * [60] W. Wong, Weyl modules for $p$-singular weights, J. Alg. 114 (1988), 357 - 368.
# Transition rates, survival probabilities, and quality of bias from time-dependent biased simulations Karen Palacio-Rodriguez1,2 Hadrien Vroylandt3 Lukas S. Stelzl4,5,6,7 Fabio Pietrucci1 Gerhard Hummer7,8 Pilar Cossio2,7,9<EMAIL_ADDRESS>1Sorbonne Université, Institut de Minéralogie, de Physique des Matériaux et de Cosmochimie, IMPMC, F-75005 Paris, France 2Biophysics of Tropical Diseases Max Planck Tandem Group, University of Antioquia, Medellín, Colombia 3Sorbonne Université, Institut des sciences du calcul et des données, ISCD, F-75005 Paris, France 4Faculty of Biology, Johannes Gutenberg University Mainz, Gresemundweg 2, 55128 Mainz, Germany 5KOMET 1, Institute of Physics, Johannes Gutenberg University Mainz, 55099 Mainz, Germany 6Institute of Molecular Biology (IMB), 55128 Mainz, Germany 7Department of Theoretical Biophysics, Max Planck Institute of Biophysics, Max-von-Laue Straße 3, 60438 Frankfurt am Main, Germany 8Institute for Biophysics, Goethe University Frankfurt, 60438 Frankfurt am Main, Germany 9Center for Computational Mathematics, Flatiron Institute, New York, USA ###### Abstract Simulations with an adaptive time-dependent bias, such as metadynamics, enable an efficient exploration of the conformational space of a system. However, the dynamic information of the system is altered by the bias. With infrequent metadynamics it is possible to recover the transition rate of crossing a barrier, if the collective variables are ideal and there is no bias deposition near the transition state. Unfortunately, for simulations of complex molecules, these conditions are not always fulfilled. To overcome these limitations, and inspired by single-molecule force spectroscopy, we developed a method based on Kramers’ theory for calculating the barrier-crossing rate when a time-dependent bias is added to the system. We assess the quality of the bias parameter by measuring how efficiently the bias accelerates the transitions compared to ideal behavior. We present approximate analytical expressions of the survival probability that accurately reproduce the barrier-crossing time statistics, and enable the extraction of the unbiased transition rate even for challenging cases, where previous methods fail. Kinetic rate constants are of fundamental importance, quantifying the speed of interconversion between metastable states in the description of physical phenomena. From protein folding to nucleation, the estimation of the transition rates allows us to understand the time scales of the events and the mechanistic implications. However, obtaining rate coefficients for rare events is not a trivial task. 
Computer-assisted methods have gained relevance in recent decades to predict kinetic properties [1, 2, 3, 4]. Among these methods, molecular dynamics (MD) simulations have been widely used to study the thermodynamic and kinetic behavior of molecular systems [5, 6, 7]. Processes such as protein folding or ligand binding have been successfully studied by means of MD simulations [8, 9, 10, 11, 12, 13]. However, MD has the limitation that the time scales of many rare events are not accessible by standard simulations, even using powerful supercomputers [14, 15, 16]. Enhanced sampling methods in combination with MD simulations have become useful alternatives for studying events that occur at long timescales [17]. These methods have been developed to accelerate the sampling of the conformational space, typically characterized by rugged landscapes and high energy barriers [18]. Among these methods, metadynamics [19] (MetaD) is an enhanced-sampling technique where the conformational search is accelerated by adding a history-dependent bias potential to the force-field. The biasing potential is a function of collective variables (CVs) chosen to describe the degrees of freedom considered most relevant to the transition mechanism [19, 20]. MetaD has the advantage that, for a converged simulation and appropriate CVs [21], it is possible to directly recover the free energy profile of the system from the MetaD bias [22]. However, a disadvantage of MetaD, as well as other enhanced sampling methods, is that information about the dynamics of the simulated system is corrupted, due to the sampling acceleration [21]. Therefore, it may seem impossible to extract quantitative rate information from such simulations. Nevertheless, several methods have been developed to estimate rate coefficients from enhanced-sampling simulations [15, 23, 24, 25]. Some involve the calculation of diffusion coefficients and the construction of Markov State Models [26, 27, 28, 29, 30, 31, 32, 33, 34]. Among these methods, infrequent metadynamics [28] (iMetaD) has been widely used in recent years [35, 36, 37, 38, 39, 40, 41, 42]. iMetaD is based on transition state theory, and employs an acceleration factor [43, 44] extracted from MetaD simulations. The main idea is to deposit bias infrequently so that no bias is deposited in the region of the transition state (TS). In this way the dynamics of the TS region is not corrupted [28], and it is possible to correct the escape times from a state. In addition to the slow biasing frequency, this approach also requires a small set of CVs that determine the relevant states and pathways of the system [15]. When these conditions are satisfied, the distribution of escape times follows a Poisson behavior [45]. The reliability of the distribution of the rescaled escape times obtained with iMetaD is tested using the Kolmogorov-Smirnov (KS) test, which compares the cumulative distribution function (CDF) of the rescaled (escape) times to that theoretically expected [45]. Despite the usefulness of this approach, a major limitation is that it relies on ideal CVs, where the bias potential is zero at all the dividing surfaces [46, 47]. Modifications of iMetaD have thus been proposed [48, 49]. However, these methods do not directly compute the time-dependent rate or survival probability due to the bias. Here, we take inspiration from dynamic force spectroscopy experiments, where a bias is ramped up with time, similar to the simulations with a dynamic bias. 
For these experiments, accurate kinetic predictions of the force-dependent rates and transition probabilities [50, 51, 52] have been derived. By considering the biasing potential analogous to an external force, we introduce a physical model of barrier-crossing events in time-dependent biased simulations for computing directly the transition statistics. The major advantages are that one can extract the unbiased rate and, at the same time, assess the quality of the CVs in terms of their contribution to the bias acceleration. Figure 1: Kramers time-dependent barrier-crossing rate for biased simulations. (a) Schematic representation of the escape from a bottom-well with a time- dependent bias. $\gamma V_{\mathrm{MB}}(t)$ measures the effective contribution of the added bias-height toward lowering the effective barrier. (b) Examples of $V_{\mathrm{MB}}(t)$ for frequent and infrequent bias- deposition times. The results are, for the double-well potential example, $V_{\mathrm{MB}}(t)$ along $y$ with bias-deposition times $d_{t}=1$ (orange), $5$ (green), and $20$ (blue) ($\times 10^{3}$) steps averaged over multiple runs. For diffusive dynamics, and high barriers, Kramers’ theory [53, 54] is used to calculate the rate (i.e. inverse of mean residence time) to cross a barrier along a coordinate $k_{0}=k_{\mathrm{pre}}e^{-\beta\Delta G^{\ddagger}_{0}}~{},$ (1) where $\Delta G^{\ddagger}_{0}$ is the barrier height (Fig. 1a), $\beta=1/k_{B}T$ is the inverse temperature, and $k_{\mathrm{pre}}$ is the pre-exponential factor that depends on the diffusion coefficient, shape of the bottom-well and barrier-top. In MetaD short-range repulsive functions are deposited at regular time intervals within a low-dimensional CV space. Over time, the bias fills the well. This reduces the effective barrier experienced by the system which is thus crossed more rapidly. To describe this reduction of the barrier height, we use the time-dependent maximum bias (MB) averaged over multiple runs $V_{\mathrm{MB}}(t)=\frac{1}{R}\sum_{r}\max_{t^{\prime}\in[0,t]}\,V^{r}_{B}(t^{\prime})~{},$ (2) where $V^{r}_{B}(t)$ is the instantaneous bias at time $t$ for simulation run $r$, and $R$ is the total number of runs. $V_{\mathrm{MB}}(t)$ is the average maximum height of the biasing potential (i.e., the level of bias added to the bottom-well) up to time $t$. In the case of an ideal CV, $\Delta G^{\ddagger}_{0}-V_{\mathrm{MB}}(t)$ would be the effective time-dependent barrier experienced by the biased system. $V_{\mathrm{MB}}(t)$ depends on the shape of the potential surface, the bias- deposition time ($d_{t}$), and bias-deposition height, among others. In Fig. 1b, we present some examples of $V_{\mathrm{MB}}(t)$ for several $d_{t}$. The blue line shows the case for which iMetaD is valid. We will compute directly the statistics for barrier-crossing times from biased simulations, covering a wider range of $V_{\mathrm{MB}}(t)$, by explicitly taking into account their time dependence. Assuming a quasi-adiabatic bias deposition, we apply Kramers’ theory over the potential presented in Fig. 1a, to calculate the time-dependent rate of escape due to the bias acceleration $k(t)=k_{\mathrm{pre}}e^{-\beta\Delta G^{\ddagger}_{0}+\beta\gamma V_{\mathrm{MB}}(t)}=k_{0}\,e^{\beta\gamma V_{\mathrm{MB}}(t)}~{},$ (3) where $k_{0}$ is the intrinsic rate (Eq. 1), and we are assuming that $k_{\mathrm{pre}}$ does not change due to the bias. 
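To make Eqs. (2) and (3) concrete, the following is a minimal sketch of how $V_{\mathrm{MB}}(t)$ and $k(t)$ could be evaluated once the instantaneous bias $V^{r}_{B}(t)$ of each run has been extracted onto a common time grid; the array shapes, function names, and the use of NumPy are our illustrative assumptions, not part of the original method or code.

```python
import numpy as np

def average_max_bias(v_bias_runs):
    """V_MB(t) of Eq. (2): the running maximum of the instantaneous bias of
    each run, averaged over the R runs. v_bias_runs has shape (R, T)."""
    running_max = np.maximum.accumulate(v_bias_runs, axis=1)
    return running_max.mean(axis=0)

def time_dependent_rate(v_mb, k0, gamma, beta):
    """Kramers time-dependent rate of Eq. (3): k(t) = k0 * exp(beta*gamma*V_MB(t))."""
    return k0 * np.exp(beta * gamma * v_mb)
```

These arrays then enter directly into the survival probability and likelihood discussed below.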
We introduce $\gamma~{}\in[0,1]$ as an additional parameter that measures how much of the bias contributes to the acceleration. For ideal CVs, i.e., where the added bias acts along the direction of the true transition and helps to lower the effective barrier, we expect $\gamma=1$. By contrast, we expect $\gamma=0$ for poorly chosen CVs, i.e., where the bias acts in directions orthogonal to the transition. We illustrate this behavior of $\gamma$ for a cusp-like harmonic double-well potential (shown in Supplementary Text Eq. 1) where parameter $a$ controls the separation between the wells along $x$, $i.e.,$ the quality of the $x$ CV. If $a\approx 0$, $x$ is a poor coordinate because the wells are not separated when projecting along this direction. In this case, if we bias along $x$, we show that $\gamma\approx 2a$ (and therefore $\gamma\rightarrow 0$). In contrast, for this potential, when biasing along the good CV $y$, $\gamma=1$ is derived. Therefore, $\gamma$ is not simply an ad hoc bias factor, but it measures a physical property that can be related to the quality of the CV. Using Kramers’ Time-dependent Rate (KTR), from Eq. 3, we calculate the survival probability by adapting the methods used for the analysis of single- molecule force spectroscopy experiments [50], $S(t)=\exp\left(-\int_{0}^{t}k(t^{\prime})dt^{\prime}\right)=\exp\left(-k_{0}\int_{0}^{t}e^{\beta\gamma V_{\mathrm{MB}}(t^{\prime})}dt^{\prime}\right)\,.$ (4) The survival probability depends on $V_{\mathrm{MB}}(t)$. For example, if the bias presents a logarithmic time dependence, $V_{\mathrm{MB}}(t)=a\log(1+b\,t)$ (e.g., orange line in Fig. 1b), then the survival probability is $S(t)=\exp\left(\frac{k_{0}}{b(\beta\gamma a+1)}\left(1-(1+b\,t)^{\beta\gamma a+1}\right)\right)\,.$ (5) In the Supplementary Text, the analytic expression for a linear time-dependent bias is presented. For more general $V_{\mathrm{MB}}(t)$, we can solve Eq. 4 numerically. Let us assume that there are $M+N$ independent biased simulations that start from the well bottom where $M$ cross the barrier and $N$ remain in the basin. By monitoring $V^{r}_{B}(t)$ over the runs, we can calculate $V_{\mathrm{MB}}(t)$ (Fig. 1b), and use it to calculate $S(t)$ (Eq. 4). The transition rate $k_{0}$ and $\gamma$ can be extracted in two manners: i) by calculating the cumulative distribution function (CDF) using the simulation barrier-crossing times (without rescaling) and fitting the CDF using $1-S(t)$, or ii) by maximizing the likelihood function $\mathcal{L}=\prod_{i\in\mathrm{events}}^{M}\frac{dS(t)}{dt}\Big{|}_{t=t_{i}}\prod_{j\in\mathrm{non- events}}^{N}S(t_{j})~{},$ (6) where $i$ and $j$ account for events and non-events, respectively, $t_{i}$ is the escape time observed in the biased simulation $i$, and $t_{j}$ is the total simulation time for run $j$ that did not transition. To validate the statistics of the simulation barrier-crossing times, bootstrap analysis and KS-tests can be performed using the CDF (see the Supplementary Text). Figure 2: MetaD Monte Carlo simulations on a 2D double-well potential. (a) Two dimensional potential surface, where the bias is deposited along a poor ($x$) or good ($y$) CV. (b) CDF of the simulation barrier-crossing (jump) times for runs with bias along $x$ (bottom) or $y$ (top) for different bias-deposition times ($d_{t}$). Red to blue points go from frequent to infrequent $d_{t}$. Fits of the KTR theory are shown as solid lines. 
(c) Maximum likelihood extracted intrinsic rate ($k_{0}$) normalized by the true rate (calculated from unbiased simulations) as a function of the bias-deposition time for the KTR (black) and iMetaD (red) methods for biases along $x$ (squares) and $y$ (circles). Empty squares indicate cases where the KS-test failed for more than 25% of the bootstrap trials. (d) Maximum likelihood extracted quality of bias ($\gamma$) as a function of the bias-deposition time for biases along $x$ (squares) and $y$ (circles) using the KTR method. Error bars show the standard deviation from bootstrap analysis (see the Supplementary Text). To study the effect of the CVs, we first tested the theory on a 2D double-well potential (Fig. 2a). We ran MetaD Monte Carlo simulations over a good CV ($y$) or a poor CV ($x$) (see the Supplementary Text for details) having a wide range of bias-deposition times. We started $100$ simulations from the lower well and counted a transition when the system reached the top well. Fig. 1b shows examples of the average $V_{\mathrm{MB}}(t)$ over the runs biased along $y$. In Fig. 2b, we show the CDFs for the simulation-jump times and their fits using $1-S(t)$ (from Eq. 4) for both coordinates and the different bias- deposition times. These results show that $x$ is a poor coordinate because it does not accelerate as much the barrier-crossing events. However, in all cases there is good agreement between the theoretical and empirical CDFs. In Supplementary Fig. 1, we show CDFs for the rescaled times using iMetaD (see the Supplementary Text) together with their theoretical fits. We find that along the good CV $y$ this method works well. However, along the poor CV $x$ with frequent bias-deposition times iMetaD fails because its underlying assumptions do not hold [45]. In Fig. 2c, we present the extracted rates from numerical integration of Eq. 4 with maximum likelihood (Eq. 6) and compare them to those extracted with iMetaD by rescaling the times. We find that the KTR method estimates accurate unbiased rates, even for the challenging cases of fast deposition times and poor reaction coordinates. We note that for some of these cases (empty squares in Fig. 2c) the iMetaD KS-test fails, indicating that the iMetaD estimate should not be used (see Supplementary Fig. 1). Interestingly, our KTR theory enables extracting information about the quality of the biased CVs. Fig. 2d shows the extracted $\gamma$ as a function of the bias-deposition time for both coordinates. As expected, biasing along the poor CV leads to a lower $\gamma$. These results indicate that the KTR method is able to extract accurate unbiased rates and assess the quality of a CV. We now apply the KTR theory to study ligand unbinding from all-atom MD simulations. CDK2 is a kinase with abundant structural and pharmacological information available [55]. Due to its important role in the cell cycle, CDK2 has been considered a potential target for anticancer drugs [56]. We studied the unbinding of ligand 03K (ZINC13580440), starting from PDB structure 4EK5. Figure 3: CDK2-ligand unbinding biased simulations. (a) Global view of the protein and binding pocket. We highlight the residues that interact with the ligand inside the binding pocket. (b) CDK2-ligand unbinding trajectories in five example trajectories ($d_{t}=10~{}$ps). The solvation state of the ligand (top) and the distance between the center of mass of the ligand and the center of mass of the binding pocket (bottom) are shown as a function of time. 
(c) Representative snapshots of the different metastable states in the path to unbinding. We follow the crystallographic contacts between the ligand and the binding pocket to define the metastable states. The arrows show the location of the snapshots along the trajectory. $i)$ Bound state: the crystallographic pose and crystallographic contacts are shown. $ii)$ and $iii)$ Before complete unbinding, two intermediate states are formed, where the ligand presents some interactions with residues within the active site. $iv)$ Unbound state: the crystallographic contacts approach to zero, and the ligand is fully solvated with no interactions inside the binding site. For our analysis, we defined the unbound state when the crystallographic-contacts CV is $<0.01$. We used well-tempered metadynamics [57] that modulates the height of the bias as the simulation progresses. A global view of the protein-ligand complex and its interactions is shown in Fig. 3a. We estimated the rate ($k_{0}$) for the ligand unbinding using three sets of MetaD simulations with 50 replicas each of 10, 40 and 300 ns and bias-deposition time of 1, 10 and 100 ps, respectively (see the Supplementary Text for details). We biased two CVs which caused the ligand to unbind during the simulations in most cases. The CV1 ($w$) tracks the solvation state of the ligand and CV2 ($d$) the distance between the center of mass of the ligand and the pocket (see the Supplementary Text). In Fig. 3b, we show representative examples of these CVs as a function of time for five trajectories using a bias deposition time of 10 ps. To clearly identify the states, we monitor an additional CV: the number of crystallographic contacts, $c$, (see the Supplementary Text), without adding bias to it (Fig. 3c). When the main interactions between the ligand and the pocket are broken, the crystallographic contacts approach zero. In our analysis, we considered a transition on the final dissociation event, $i.e.,$ when CV $c$ (measuring the fraction of crystallographic contacts) is less than 0.01. However, note that the unbinding of the ligand from CDK2 involves several intermediate states before the unbound state is reached (Fig. 3c). Figure 4: Barrier-crossing statistics of CDK2-ligand unbinding. (a) CDF of the simulation barrier-crossing (jump) times for the well-tempered metadynamics simulations for different bias-deposition times ($1$ps (red), $10$ps (orange) and $100$ps (blue)). The solid lines show the fits of the KTR method using the analytical $S(t)$ from Eq. 5, starting from the same initial conditions to search for the optimal parameters. The extracted rate $k_{0}$ and quality of bias $\gamma$ are shown with the same color scheme. The KS-tests pass for all bias-deposition time setups. (b) CDF of the iMetaD rescaled times from the same simulations. Attempted fits are shown as solid lines, together with extracted rates. For these cases the KS-test fails. We found that, for all simulation setups, the time dependence of the average maximum bias $V_{\mathrm{MB}}(t)$ was well fitted to a logarithmic function $f(t)=a\log(1+b\,t)$ (see Supplementary Fig. 2). This allowed us to use the analytic expression of $S(t)$ in Eq. 5 (with fitted parameters $a$ and $b$ fixed) to extract $\gamma$ and $k_{0}$, and to perform KS-tests against the analytic S(t) (Eq. 5). In Fig. 4a, we show the empirical CDFs for the simulation barrier-crossing times for the different setups together with the fits of the KTR formalism. 
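As an illustration of this fitting procedure, the sketch below first fits the measured $V_{\mathrm{MB}}(t)$ to the logarithmic form $f(t)=a\log(1+b\,t)$ and then fits the empirical CDF of the observed crossing times with $1-S(t)$, using the analytic $S(t)$ of Eq. 5. The arrays `t_grid`, `v_mb`, `jump_times`, the number of runs `n_runs`, and `beta` (in the same energy units as the bias) are assumed inputs; this is our sketch of the workflow, not the authors' code.

```python
import numpy as np
from scipy.optimize import curve_fit

def log_bias(t, a, b):
    # logarithmic model for the average maximum bias, V_MB(t) = a*log(1 + b*t)
    return a * np.log1p(b * t)

def survival(t, k0, gamma, a, b, beta):
    # analytic survival probability for a logarithmic V_MB(t), Eq. (5)
    c = beta * gamma * a + 1.0
    return np.exp(k0 / (b * c) * (1.0 - (1.0 + b * t) ** c))

# 1) fit the logarithmic model to the measured average maximum bias
(a_fit, b_fit), _ = curve_fit(log_bias, t_grid, v_mb)

# 2) fit k0 and gamma to the empirical CDF of the crossing times;
#    runs without a transition enter through the normalization by n_runs
times = np.sort(jump_times)
ecdf = np.arange(1, times.size + 1) / n_runs
cdf_model = lambda t, k0, gamma: 1.0 - survival(t, k0, gamma, a_fit, b_fit, beta)
(k0_fit, gamma_fit), _ = curve_fit(cdf_model, times, ecdf,
                                   p0=[1.0 / times.max(), 0.5],
                                   bounds=([0.0, 0.0], [np.inf, 1.0]))
```

A KS-test of the fitted CDF against the empirical one, together with a bootstrap over the runs, can then be used to validate the extracted parameters, as done above.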
The estimated $\gamma$ and $k_{0}$, and their errors (extracted from bootstrap analysis of the passed KS-trials) are shown in Supplementary Fig. 3. As expected, $\gamma$ increases for slower deposition times; however, even for the best case its median is around 0.78, indicating that the bias setup and CVs are not perfect. In comparison to the experimental rate $k_{exp}=0.26\pm 0.05s^{-1}$[58], the extracted rates approach the experimental one as the quality of bias increases. For the longest bias- deposition time, the estimate is $k_{0}=0.14\pm 0.03s^{-1}$, which is on the same order of magnitude as the experiment. We consider this a successful result since we are using simulations of maximum length 300 ns to estimate a value that is on the order of seconds. Moreover, the KTR method proves extremely valuable when predicting the underlying statistics of barrier- crossing times in comparison to attempted CDF-fits using the iMetaD rescaled times (Fig. 4b) [28, 45]. Inspired by the methods from the force spectroscopy community [50, 51, 52] that calculate barrier-crossing rates induced by forces acting on single molecules, in this work, we used Kramers’ theory to calculate the time- dependent transition rates and survival probabilities from time-dependent biased simulations. Here, we have used examples from MetaD simulations, however, the KTR method is general for any time-dependent biased simulation along some CVs, such as, adaptive biasing force [59, 60], adiabatic bias MD [61, 62], adaptively biased MD [63, 64], among others. Importantly, our method not only enables estimating the unbiased intrinsic rate but also provides a measure for the effectiveness of the added bias to accelerate the transition (related to the quality of the CVs). This overcomes severe limitations encountered with previous approaches where the bias had to be deposited very infrequently over ideal CVs. There are several points of the KTR theory that can be improved in future work. For example, there could be cases where multiple values of $k_{0}$ and $\gamma$ fit equally well the CDFs, and additional restrictions over these parameters might be required. Automatized methods to determine when the barrier-crossing occurs might be helpful. Moreover, Kramers’ high-barrier approximation or the quasi-adiabatic assumption might breakdown for extremely large biases or deposition times. A generalization to multiple-basins and dimensions would also be useful. P.C. was supported by MinCiencias, University of Antioquia (Colombia), the Max Planck Society (Germany), and Flatiron Institute, a division of the Simons Foundation (USA). The authors thank Attila Szabo, Erik Thiede and Marylou Gabrié for useful discussions, as well as David Silva for code optimization advice. Calculations were performed on the GENCI-IDRIS French national supercomputing facility, under grant number A0090811069. K.P-R. and H.V. contributed equally to this work. ## References * Holenz and Stoy [2019] J. Holenz and P. Stoy, Advances in lead generation, Bioorganic & medicinal chemistry letters 29, 517 (2019). * Rognan [2017] D. Rognan, The impact of in silico screening in the discovery of novel and safer drug candidates, Pharmacology & therapeutics 175, 47 (2017). * Talele _et al._ [2010] T. T. Talele, S. A. Khedkar, and A. C. Rigby, Successful applications of computer aided drug discovery: moving drugs from concept to the clinic, Current topics in medicinal chemistry 10, 127 (2010). * Jorgensen [2004] W. L. 
Jorgensen, The many roles of computation in drug discovery, Science 303, 1813 (2004). * Bernetti _et al._ [2019] M. Bernetti, M. Masetti, W. Rocchia, and A. Cavalli, Kinetics of drug binding and residence time, Annual review of physical chemistry 70, 143 (2019). * Hollingsworth and Dror [2018] S. A. Hollingsworth and R. O. Dror, Molecular dynamics simulation for all, Neuron 99, 1129 (2018). * Ganesan _et al._ [2017] A. Ganesan, M. L. Coote, and K. Barakat, Molecular dynamics-driven drug discovery: leaping forward with confidence, Drug discovery today 22, 249 (2017). * Lindorff-Larsen _et al._ [2011] K. Lindorff-Larsen, S. Piana, R. O. Dror, and D. E. Shaw, How fast-folding proteins fold, Science 334, 517 (2011). * Piana _et al._ [2013] S. Piana, K. Lindorff-Larsen, and D. E. Shaw, Atomic-level description of ubiquitin folding, Proceedings of the National Academy of Sciences 110, 5915 (2013). * Chodera and Noé [2014] J. D. Chodera and F. Noé, Markov state models of biomolecular conformational dynamics, Current opinion in structural biology 25, 135 (2014). * Plattner _et al._ [2017] N. Plattner, S. Doerr, G. De Fabritiis, and F. Noé, Complete protein–protein association kinetics in atomic detail revealed by molecular dynamics simulations and markov modelling, Nature chemistry 9, 1005 (2017). * Tang _et al._ [2020] Z. Tang, S.-H. Chen, and C.-E. A. Chang, Transient states and barriers from molecular simulations and the milestoning theory: Kinetics in ligand–protein recognition and compound design, Journal of chemical theory and computation 16, 1882 (2020). * Wolf _et al._ [2020] S. Wolf, B. Lickert, S. Bray, and G. Stock, Multisecond ligand dissociation dynamics from atomistic simulations, Nature communications 11, 1 (2020). * De Vivo _et al._ [2016] M. De Vivo, M. Masetti, G. Bottegoni, and A. Cavalli, Role of molecular dynamics and related methods in drug discovery, Journal of medicinal chemistry 59, 4035 (2016). * Dickson _et al._ [2017] A. Dickson, P. Tiwary, and H. Vashisth, Kinetics of ligand binding through advanced computational approaches: a review, Current topics in medicinal chemistry 17, 2626 (2017). * Ribeiro _et al._ [2018] J. M. L. Ribeiro, S.-T. Tsai, D. Pramanik, Y. Wang, and P. Tiwary, Kinetics of ligand–protein dissociation from all-atom simulations: Are we there yet?, Biochemistry 58, 156 (2018). * Pietrucci [2017] F. Pietrucci, Strategies for the exploration of free energy landscapes: Unity in diversity and challenges ahead, Reviews in Physics 2, 32 (2017). * Bernardi _et al._ [2015] R. C. Bernardi, M. C. Melo, and K. Schulten, Enhanced sampling techniques in molecular dynamics simulations of biological systems, Biochimica et Biophysica Acta (BBA)-General Subjects 1850, 872 (2015). * Laio and Parrinello [2002] A. Laio and M. Parrinello, Escaping free-energy minima, Proceedings of the National Academy of Sciences 99, 12562 (2002). * Aci-Sèche _et al._ [2016] S. Aci-Sèche, S. Ziada, A. Braka, R. Arora, and P. Bonnet, Advanced molecular dynamics simulation methods for kinase drug discovery, Future medicinal chemistry 8, 545 (2016). * Cavalli _et al._ [2015] A. Cavalli, A. Spitaleri, G. Saladino, and F. L. Gervasio, Investigating drug–target association and dissociation mechanisms using metadynamics-based algorithms, Accounts of chemical research 48, 277 (2015). * Bussi and Laio [2020] G. Bussi and A. Laio, Using metadynamics to explore complex free-energy landscapes, Nature Reviews Physics 2, 200 (2020). * Camilloni and Pietrucci [2018] C. Camilloni and F. 
Pietrucci, Advanced simulation techniques for the thermodynamic and kinetic characterization of biological systems, Advances in Physics: X 3, 1477531 (2018). * Kokh _et al._ [2018] D. B. Kokh, M. Amaral, J. Bomke, U. Grädler, D. Musil, H.-P. Buchstaller, M. K. Dreyer, M. Frech, M. Lowinski, F. Vallee, _et al._ , Estimation of drug-target residence times by $\tau$-random acceleration molecular dynamics simulations, Journal of chemical theory and computation 14, 3859 (2018). * Nunes-Alves _et al._ [2020] A. Nunes-Alves, D. B. Kokh, and R. C. Wade, Recent progress in molecular simulation methods for drug binding kinetics, Current Opinion in Structural Biology 64, 126 (2020). * Hummer [2005] G. Hummer, Position-dependent diffusion coefficients and free energies from bayesian analysis of equilibrium and replica molecular dynamics simulations, New Journal of Physics 7, 34 (2005). * Marinelli _et al._ [2009] F. Marinelli, F. Pietrucci, A. Laio, and S. Piana, A kinetic model of trp-cage folding from multiple biased molecular dynamics simulations, PLoS computational biology 5, e1000452 (2009). * Tiwary and Parrinello [2013] P. Tiwary and M. Parrinello, From metadynamics to dynamics, Physical review letters 111, 230602 (2013). * Stelzl and Hummer [2017] L. S. Stelzl and G. Hummer, Kinetics from replica exchange molecular dynamics simulations, Journal of chemical theory and computation 13, 3927 (2017). * Stelzl _et al._ [2017] L. S. Stelzl, A. Kells, E. Rosta, and G. Hummer, Dynamic histogram analysis to determine free energies and rates from biased simulations, Journal of chemical theory and computation 13, 6328 (2017). * Donati and Keller [2018] L. Donati and B. G. Keller, Girsanov reweighting for metadynamics simulations, The Journal of chemical physics 149, 072335 (2018). * Schäfer and Settanni [2020] T. M. Schäfer and G. Settanni, Data reweighting in metadynamics simulations, Journal of chemical theory and computation 16, 2042 (2020). * Kieninger _et al._ [2020] S. Kieninger, L. Donati, and B. G. Keller, Dynamical reweighting methods for markov models, Current opinion in structural biology 61, 124 (2020). * Linker _et al._ [2020] S. M. Linker, R. G. Weiß, and S. Riniker, Connecting dynamic reweighting algorithms: Derivation of the dynamic reweighting family tree, The Journal of Chemical Physics 153, 234106 (2020). * Tiwary _et al._ [2015] P. Tiwary, V. Limongelli, M. Salvalaglio, and M. Parrinello, Kinetics of protein–ligand unbinding: Predicting pathways, rates, and rate-limiting steps, Proceedings of the National Academy of Sciences 112, E386 (2015). * Tiwary _et al._ [2017] P. Tiwary, J. Mondal, and B. J. Berne, How and when does an anticancer drug leave its binding site?, Science advances 3, e1700014 (2017). * Casasnovas _et al._ [2017] R. Casasnovas, V. Limongelli, P. Tiwary, P. Carloni, and M. Parrinello, Unbinding kinetics of a p38 map kinase type ii inhibitor from metadynamics simulations, Journal of the American Chemical Society 139, 4780 (2017). * Sun _et al._ [2017] H. Sun, Y. Li, M. Shen, D. Li, Y. Kang, and T. Hou, Characterizing drug–target residence time with metadynamics: How to achieve dissociation rate efficiently without losing accuracy against time-consuming approaches, Journal of chemical information and modeling 57, 1895 (2017). * Pramanik _et al._ [2019] D. Pramanik, Z. Smith, A. Kells, and P. 
Tiwary, Can one trust kinetic and thermodynamic observables from biased metadynamics simulations?: Detailed quantitative benchmarks on millimolar drug fragment dissociation, The Journal of Physical Chemistry B 123, 3672 (2019). * Zou _et al._ [2020] R. Zou, Y. Zhou, Y. Wang, G. Kuang, H. Ågren, J. Wu, and Y. Tu, Free energy profile and kinetics of coupled folding and binding of the intrinsically disordered protein p53 with mdm2, Journal of chemical information and modeling 60, 1551 (2020). * Lamim Ribeiro _et al._ [2020] J. M. Lamim Ribeiro, D. Provasi, and M. Filizola, A combination of machine learning and infrequent metadynamics to efficiently predict kinetic rates, transition states, and molecular determinants of drug dissociation from g protein-coupled receptors, The Journal of Chemical Physics 153, 124105 (2020). * Shekhar _et al._ [2021] M. Shekhar, Z. Smith, M. Seeliger, and P. Tiwary, Protein flexibility and dissociation pathway differentiation can explain onset of resistance mutations in kinases, bioRxiv (2021). * Grubmüller [1995] H. Grubmüller, Predicting slow structural transitions in macromolecular systems: Conformational flooding, Physical Review E 52, 2893 (1995). * Voter [1997] A. F. Voter, Hyperdynamics: Accelerated molecular dynamics of infrequent events, Physical Review Letters 78, 3908 (1997). * Salvalaglio _et al._ [2014] M. Salvalaglio, P. Tiwary, and M. Parrinello, Assessing the reliability of the dynamics reconstructed from metadynamics, Journal of chemical theory and computation 10, 1420 (2014). * Dickson [2018] B. M. Dickson, Erroneous rates and false statistical confirmations from infrequent metadynamics and other equivalent violations of the hyperdynamics paradigm, Journal of chemical theory and computation 15, 78 (2018). * Khan _et al._ [2020] S. A. Khan, B. M. Dickson, and B. Peters, How fluxional reactants limit the accuracy/efficiency of infrequent metadynamics, The Journal of Chemical Physics 153, 054125 (2020). * Callegari _et al._ [2017] D. Callegari, A. Lodola, D. Pala, S. Rivara, M. Mor, A. Rizzi, and A. M. Capelli, Metadynamics simulations distinguish short-and long-residence-time inhibitors of cyclin-dependent kinase 8, Journal of chemical information and modeling 57, 159 (2017). * Wang _et al._ [2018] Y. Wang, O. Valsson, P. Tiwary, M. Parrinello, and K. Lindorff-Larsen, Frequency adaptive metadynamics for the calculation of rare-event kinetics, The Journal of chemical physics 149, 072309 (2018). * Hummer and Szabo [2003] G. Hummer and A. Szabo, Kinetics from nonequilibrium single-molecule pulling experiments, Biophysical journal 85, 5 (2003). * Dudko _et al._ [2006] O. K. Dudko, G. Hummer, and A. Szabo, Intrinsic rates and activation free energies from single-molecule pulling experiments, Physical review letters 96, 108101 (2006). * Cossio _et al._ [2016] P. Cossio, G. Hummer, and A. Szabo, Kinetic ductility and force-spike resistance of proteins from single-molecule force spectroscopy, Biophysical journal 111, 832 (2016). * Kramers [1940] H. A. Kramers, Brownian motion in a field of force and the diffusion model of chemical reactions, Physica 7, 284 (1940). * Hänggi _et al._ [1990] P. Hänggi, P. Talkner, and M. Borkovec, Reaction-rate theory: fifty years after kramers, Reviews of modern physics 62, 251 (1990). * Kontopidis _et al._ [2006] G. Kontopidis, C. McInnes, S. R. Pandalaneni, I. McNae, D. Gibson, M. Mezna, M. Thomas, G. Wood, S. Wang, M. D. 
Walkinshaw, _et al._ , Differential binding of inhibitors to active and inactive cdk2 provides insights for drug design, Chemistry & biology 13, 201 (2006). * Shapiro [2006] G. I. Shapiro, Cyclin-dependent kinase pathways as targets for cancer treatment, Journal of clinical oncology 24, 1770 (2006). * Barducci _et al._ [2008] A. Barducci, G. Bussi, and M. Parrinello, Well-tempered metadynamics: a smoothly converging and tunable free-energy method, Physical review letters 100, 020603 (2008). * Dunbar Jr _et al._ [2013] J. B. Dunbar Jr, R. D. Smith, K. L. Damm-Ganamet, A. Ahmed, E. X. Esposito, J. Delproposto, K. Chinnaswamy, Y.-N. Kang, G. Kubish, J. E. Gestwicki, _et al._ , Csar data set release 2012: ligands, affinities, complexes, and docking decoys, Journal of chemical information and modeling 53, 1842 (2013). * Darve _et al._ [2008] E. Darve, D. Rodríguez-Gómez, and A. Pohorille, Adaptive biasing force method for scalar and vector free energy calculations, The Journal of chemical physics 128, 144120 (2008). * Henin _et al._ [2010] J. Henin, G. Fiorin, C. Chipot, and M. L. Klein, Exploring multidimensional free energy landscapes using time-dependent biases on collective variables, Journal of chemical theory and computation 6, 35 (2010). * Marchi and Ballone [1999] M. Marchi and P. Ballone, Adiabatic bias molecular dynamics: a method to navigate the conformational space of complex molecular systems, The Journal of chemical physics 110, 3697 (1999). * Paci and Karplus [1999] E. Paci and M. Karplus, Forced unfolding of fibronectin type 3 modules: an analysis by biased molecular dynamics simulations, Journal of molecular biology 288, 441 (1999). * Babin _et al._ [2008] V. Babin, C. Roland, and C. Sagui, Adaptively biased molecular dynamics for free energy calculations, The Journal of chemical physics 128, 134101 (2008). * Babin _et al._ [2009] V. Babin, V. Karpusenka, M. Moradi, C. Roland, and C. Sagui, Adaptively biased molecular dynamics: An umbrella sampling method with a time-dependent potential, International Journal of Quantum Chemistry 109, 3666 (2009). * Rosta and Hummer [2015] E. Rosta and G. Hummer, Free energies from dynamic weighted histogram analysis using unbiased markov state model, Journal of chemical theory and computation 11, 276 (2015). * Berendsen _et al._ [1995] H. J. Berendsen, D. van der Spoel, and R. van Drunen, Gromacs: a message-passing parallel molecular dynamics implementation, Computer physics communications 91, 43 (1995). * Abraham _et al._ [2015] M. J. Abraham, T. Murtola, R. Schulz, S. Páll, J. C. Smith, B. Hess, and E. Lindahl, Gromacs: High performance molecular simulations through multi-level parallelism from laptops to supercomputers, SoftwareX 1, 19 (2015). * Tribello _et al._ [2014] G. A. Tribello, M. Bonomi, D. Branduardi, C. Camilloni, and G. Bussi, Plumed 2: New feathers for an old bird, Computer Physics Communications 185, 604 (2014). * Lindorff-Larsen _et al._ [2010] K. Lindorff-Larsen, S. Piana, K. Palmo, P. Maragakis, J. L. Klepeis, R. O. Dror, and D. E. Shaw, Improved side-chain torsion potentials for the amber ff99sb protein force field, Proteins: Structure, Function, and Bioinformatics 78, 1950 (2010). * Jorgensen _et al._ [1983] W. L. Jorgensen, J. Chandrasekhar, J. D. Madura, R. W. Impey, and M. L. Klein, Comparison of simple potential functions for simulating liquid water, The Journal of chemical physics 79, 926 (1983). * Wang _et al._ [2006] J. Wang, W. Wang, P. A. Kollman, and D. A. 
Case, Automatic atom type and bond type perception in molecular mechanical calculations, Journal of molecular graphics and modelling 25, 247 (2006). * Wang _et al._ [2004] J. Wang, R. M. Wolf, J. W. Caldwell, P. A. Kollman, and D. A. Case, Development and testing of a general amber force field, Journal of computational chemistry 25, 1157 (2004). * Da Silva and Vranken [2012] A. W. S. Da Silva and W. F. Vranken, Acpype-antechamber python parser interface, BMC research notes 5, 1 (2012). * Bussi _et al._ [2007] G. Bussi, D. Donadio, and M. Parrinello, Canonical sampling through velocity rescaling, J. Chem. Phys. 126, 014101 (2007). * Parrinello and Rahman [1981] M. Parrinello and A. Rahman, Polymorphic transitions in single crystals: A new molecular dynamics method, Journal of Applied physics 52, 7182 (1981).

Supplementary Information: Transition rates, survival probabilities, and quality of bias from time-dependent biased simulations

Karen Palacio-Rodriguez$^{1,2}$, Hadrien Vroylandt$^{3}$, Lukas S. Stelzl$^{4,5,6,7}$, Fabio Pietrucci$^{1}$, Gerhard Hummer$^{7,8}$, Pilar Cossio$^{2,7,9}$

## I Supplementary Text

### I.1 Rate-acceleration factor: quality of bias $\gamma$

Consider the diffusive escape from a 2D double-well potential over a cusp-like barrier. The potential is given by $V(x,y)=\begin{cases}\frac{y^{2}}{2}+\frac{(x-ay)^{2}}{2},&\text{for }y\leq 1,\\ -\infty,&\text{for }y>1.\end{cases}$ (1) The minimum of the potential well is at $x=y=0$, and the barrier for escape at $x=a$ and $y=1$. The barrier height is $\Delta G^{\ddagger}_{0}=\frac{1}{2}$. We consider diffusion on this potential with a uniform and isotropic diffusion coefficient at an inverse temperature $\beta\gg 1$, i.e., in the high-barrier limit. Then, for $a\approx 0$, $y$ is a good CV because it reports on the approach to the barrier, whereas $x$ is a poor CV because it reports on motions orthogonal to the escape over the barrier.

Now, consider that metadynamics is performed with $x$ as the chosen CV. We assume that bias deposition is very slow such that we have quasi-equilibrium conditions. Then, metadynamics flattens the potential of mean force (PMF) along $x$ up to a preset level $\Delta G$. The PMF along $x$ is defined as $e^{-\beta g(x)}=\int^{1}_{-\infty}dy\,e^{-\beta V(x,y)},$ (2) up to an additive constant, chosen such that the minimum is at a PMF value of zero. For small $a$ and $x$, the PMF along $x$ can be approximated as $g(x)\approx\frac{x^{2}}{2(1+a^{2})}.$ (3) For a given metadynamics bias level $\Delta G$, the bias acts on the range $|x|<\sqrt{2(1+a^{2})\Delta G}$. The combined potential including the metadynamics bias is then $U(x,y|\Delta G)=\begin{cases}V(x,y)-g(x),&\text{for }|x|<\sqrt{2(1+a^{2})\Delta G},\\ V(x,y)-\Delta G,&\text{otherwise}.\end{cases}$ (4) The potential $U(x,y)$ has a minimum in the bound well for $|a|<1/\sqrt{2\Delta G-1}$. The minimum is located at $x=\sqrt{2(1+a^{2})\Delta G}$ and $y=a\sqrt{2\Delta G/(1+a^{2})}$. The lowest energy barrier to escape on the combined potential is located at $x=\sqrt{2(1+a^{2})\Delta G}$ and $y=1$. For $|a|<1/\sqrt{2\Delta G-1}$, the height of the barrier on the potential $U(x,y)$ including the metadynamics bias is $\Delta G^{\ddagger}=\frac{1+a[a-\sqrt{8(1+a^{2})\Delta G}+2a\Delta G]}{2}.$ (5) Now if metadynamics has reached a level of $\Delta G=\Delta G^{\ddagger}_{0}=1/2$, i.e., the level of the barrier in the potential $V(x,y)$, the potential well is nominally filled.
However, as the above calculation shows, for $|a|<1/\sqrt{2\Delta G-1}$ a barrier $\Delta G^{\ddagger}$ remains. For small $a$ and $\Delta G=\Delta G^{\ddagger}_{0}=1/2$, the height of this remaining barrier is $\Delta G^{\ddagger}\approx\frac{1}{2}-a.$ (6) If one now uses Kramers' approximation for the rate of escape from the unbiased potential well (Eq. 1 Main Text), $k_{0}=k_{\mathrm{pre}}e^{-\beta\Delta G^{\ddagger}_{0}}$, then with a bias potential filled up along $x$ to a height of $\Delta G=\Delta G^{\ddagger}_{0}$ the rate accelerates to $k_{\mathrm{meta}}=k_{\mathrm{pre}}e^{-\beta\Delta G^{\ddagger}}\approx k_{\mathrm{pre}}e^{-\beta\Delta G^{\ddagger}_{0}(1-2a)}.$ (7) This means that we do not obtain the full boost from metadynamics. In the Main Text, we correct for this reduced boost by the factor $\gamma$. For the problem here, this factor is obtained from $k_{\mathrm{meta}}\approx k_{\mathrm{pre}}e^{-\beta\Delta G^{\ddagger}_{0}(1-2a)}=k_{\mathrm{pre}}e^{-\beta\Delta G^{\ddagger}_{0}(1-\gamma)}.$ (8) Therefore, for the escape from this double well with $a\approx 0$, the quality of bias is $\gamma\approx 2a.$ (9) In other words, for $a\rightarrow 0$, when $x$ is orthogonal to the escape flux, we have no acceleration from metadynamics, and for $0<a\ll 1$ we have only a much smaller acceleration than would be achieved by biasing along a well-chosen CV.

If instead $y$ is chosen as the CV, metadynamics flattens the PMF along $y$. The PMF along $y$ is given, up to a constant, by $G(y)=\frac{y^{2}}{2}.$ (10) The combined potential is $U(x,y|\Delta G)=\begin{cases}V(x,y)-G(y),&\text{for }|y|<\sqrt{2\Delta G},\\ V(x,y)-\Delta G,&\text{otherwise}.\end{cases}$ (11) For $\Delta G=\Delta G^{\ddagger}_{0}=1/2$, the barrier to the exit at $x=a$ and $y=1$ vanishes. Therefore, the choice of $y$ as a CV results in a boost factor $\gamma=1$ for all values of $a$.

### I.2 Analytical $S(t)$ for a linear time-dependent bias

Assuming a linear time dependence of the bias, $V_{\mathrm{MB}}(t)=a\,t$, and using Eq. 4 (Main Text), we derived the following analytical expression for the survival probability $S(t)=\exp\left(\frac{k}{\beta\gamma\,a}\left(1-e^{\beta\gamma\,a\,t}\right)\right),$ (12) where $\gamma$ and $k$ are the quality of bias and the intrinsic transition rate, respectively.

### I.3 Maximization of the likelihood function

Combining the expression of the likelihood from Eq. 6 (Main Text) with the survival function from Eq. 4 (Main Text), we obtain an expression for the log-likelihood $\ln\mathcal{L}(k_{0},\gamma)=-k_{0}\sum_{i}^{M+N}\int_{0}^{t_{i}}e^{\beta\gamma V_{\mathrm{MB}}(t^{\prime})}dt^{\prime}+M\ln k_{0}+\sum_{i\in\mathrm{events}}^{M}\beta\gamma V_{\mathrm{MB}}(t_{i}),$ (13) where $M$ is the number of events and $N$ the number of non-events. Our aim is to extract the parameters $k^{*}_{0}$ and $\gamma^{*}$ that maximize this function.
The derivative with respect to $k_{0}$ of the log-likelihood is $\frac{\partial}{\partial k_{0}}\ln\mathcal{L}(k_{0},\gamma)=-\sum_{i}^{M+N}\int_{0}^{t_{i}}e^{\beta\gamma V_{\mathrm{MB}}(t^{\prime})}dt^{\prime}+\frac{M}{k_{0}}.$ (14) To solve for the optimal intrinsic rate, we set this derivative to zero and obtain $k_{0}$ as a function of $\gamma$, $k^{*}_{0}(\gamma)=\frac{M}{\sum_{i}^{M+N}\int_{0}^{t_{i}}e^{\beta\gamma V_{\mathrm{MB}}(t^{\prime})}dt^{\prime}}.$ (15) Using this function, we search for the optimal parameters $\gamma^{*}$ and $k^{*}_{0}=k^{*}_{0}(\gamma^{*})$ through numerical maximization of $\ln\mathcal{L}(k^{*}_{0}(\gamma),\gamma)$.

### I.4 Rate from iMetaD

In MetaD, the kinetic rates of the system are altered in a complex way because of the acceleration due to the time-dependent bias, constructed as a sum of repulsive Gaussian-shaped functions deposited at regular time intervals within a low-dimensional CV space. Different approaches have been designed to recover unbiased kinetic rates from MetaD using Markov state models [27, 31]. iMetaD, proposed by Tiwary and Parrinello [28], is a simple and commonly applied approach that starts by repeating a series of identical MetaD simulations in which Gaussians are deposited relatively slowly, in order to reduce the probability of biasing the transition-state region. Under these conditions, ideas from Grubmüller [43] and Voter [44] based on TST are exploited to recover the unbiased dynamics of the system by rescaling the simulation time with an acceleration factor that corrects for the bias. In practice, we rescale the times from MetaD by [37] $t_{\mathrm{res}}=\sum_{i=1}^{N_{t}}\mathrm{d}t\>e^{\beta V^{B}_{i}},$ (16) where $\mathrm{d}t$ is the time step, $i$ is the step number, $N_{t}$ is the total number of steps until an event occurred or the simulation stopped, $V^{B}_{i}$ is the instantaneous MetaD biasing potential at step $i$, and $\beta=1/(k_{B}T)$ with $k_{B}$ the Boltzmann constant and $T$ the absolute temperature.

### I.5 Cumulative distribution function and Kolmogorov-Smirnov test

The Kolmogorov-Smirnov test (KS-test) is a non-parametric test that assesses the similarity between one-dimensional probability distributions. In iMetaD, the reliability of the rescaled escape (residence) times is quantitatively assessed by comparing their empirical cumulative distribution function (ECDF) to the theoretical cumulative distribution function (TCDF) of a homogeneous Poisson process using a two-sample KS test [45]. A $p$-value threshold of 0.05 is typically used [45]. The ECDF is fitted to the TCDF of a homogeneous Poisson process, $P(t)=1-\exp({-t/\tau})$. Following Salvalaglio _et al._ [45], a TCDF is built from a large number of sample times (e.g., $10^{6}$) randomly generated according to $P(t)$. Then, a two-sample KS test is applied to assess the similarity between the ECDF and the TCDF, with the null hypothesis that the two sampled distributions share the same underlying distribution. If the $p$-value from the KS-test is higher than 0.05 (ignoring the fact that $\tau$ has been fitted), the null hypothesis is accepted and a Poisson process is considered appropriate to describe the statistics of the process. The rate coefficient is then calculated as the inverse of the average time from the fit of the ECDF, $k_{\mathrm{iMetaD}}=1/\tau$. In contrast, the KTR method does not rescale the times and is therefore not constrained to assume Poissonian statistics.
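To make the fitting procedure of Secs. I.2–I.3 concrete, the following minimal Python sketch (illustrative only, not the distributed KTR code; the event times, non-event times, and bias parameters are hypothetical placeholders) profiles out $k_{0}$ via Eq. (15) and maximizes the profiled log-likelihood of Eq. (13) over $\gamma$, assuming a linear bias $V_{\mathrm{MB}}(t)=a\,t$ so that the time integral and the survival probability of Eq. (12) are analytic.

```python
# Sketch of the KTR maximum-likelihood fit (Eqs. 12-15) for a linear bias
# V_MB(t) = a*t. Times and parameters below are hypothetical placeholders.
import numpy as np
from scipy.optimize import minimize_scalar

beta, a = 1.0, 0.05                       # 1/kBT and bias growth rate (assumed)
t_event = np.array([12.0, 35.0, 60.0])    # times at which a transition occurred
t_noevent = np.array([80.0, 80.0])        # total times of runs without a transition
t_all = np.concatenate([t_event, t_noevent])
M = len(t_event)

def bias_integral(t, gamma):
    """int_0^t exp(beta*gamma*V_MB(t')) dt' for the linear bias."""
    bga = beta * gamma * a
    return (np.exp(bga * t) - 1.0) / bga

def log_likelihood(gamma):
    I = np.sum(bias_integral(t_all, gamma))
    k0 = M / I                                            # profiled-out rate, Eq. (15)
    return -k0 * I + M * np.log(k0) + np.sum(beta * gamma * a * t_event)  # Eq. (13)

res = minimize_scalar(lambda g: -log_likelihood(g), bounds=(1e-3, 1.0), method="bounded")
gamma_star = res.x
k0_star = M / np.sum(bias_integral(t_all, gamma_star))

def survival(t):
    """Analytical S(t) for the linear bias, Eq. (12), at the fitted parameters."""
    bga = beta * gamma_star * a
    return np.exp(k0_star / bga * (1.0 - np.exp(bga * t)))

print(gamma_star, k0_star, survival(50.0))
```

The same structure carries over to other bias shapes (e.g., the logarithmic fit used for the CDK2 simulations) by replacing `bias_integral` with a numerical quadrature of the recorded $V_{\mathrm{MB}}(t)$.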
When the analytical form of $V_{\mathrm{MB}}(t)$ is known (e.g., linear or logarithmic time dependence), the survival probability $S(t)$ can be extracted from Eq. 4 (Main Text), and the TCDF is given by $1-S(t)$. One can perform the KS-test as described above using the new TCDF and comparing it to the ECDF of the simulation jump times, extracting a $p$-value that indicates whether the KS-test passed. We adapted a Matlab code provided by Salvalaglio _et al._ [45] to Python 3.6 for the calculation of $\tau$ using iMetaD and for the KS-test (for both methods, iMetaD and KTR). The histograms of the CDFs are built using a number of bins equal to the total number of simulations (this was tested empirically to obtain the best fits of the CDF for the 2D system). We use a logarithmic scale to create the histograms for iMetaD and a linear scale for KTR.

### I.6 Practical considerations

Bootstrap analysis: To estimate the errors of the extracted parameters, we performed a bootstrapping analysis. The number of bootstrapping samples is 100 for all systems. We use the distribution of each parameter from the bootstrap samples to estimate its error (Fig. 2, Fig. 4a, and SI Fig. 3).

Code availability: The codes to fit $V_{MB}(t)$ (analytically or numerically) and the survival probabilities are available on GitHub: https://github.com/kpalaciorodr/KTR

### I.7 2D double-well potential

To analyse the dependence of the results on the quality of the CVs, we used a 2D double-well potential $U(x,y)=-k_{B}T\ln[e^{(-20\cdot(x-0.2)^{2}-100\cdot(y-0.2)^{2})}+e^{(-20\cdot(x-0.8)^{2}-100\cdot(y-0.8)^{2})}]$ (similar to that in ref. 65). For this potential, the projection of the free energy surface along the $x$ coordinate leads to an underestimation of the barrier between the wells, while the projection along the $y$ coordinate faithfully represents the underlying 2D barrier of around $8\,k_{B}T$ (see Fig. 2a). The Monte Carlo (MC) trial step was drawn randomly from a uniform distribution in the range [-0.005, 0.005] for each CV around the current point, inside a grid between -0.4 and 1.4 for $x$ and $y$. These trial moves were accepted according to the Metropolis criterion. A bias height of 0.04 and a bias width of 0.04 were used. To assess the effect of the CV, the simulations were performed adding bias only along a single coordinate (i.e., either the $x$ or the $y$ coordinate, in independent simulations). The bias deposition time was varied, and 100 simulations were launched for each CV. To obtain the reference rate coefficient $k_{unbias}$ for the double-well potential, we performed 1000 unbiased runs with $3\times 10^{7}$ MC steps. We calculated $k_{unbias}=M/(\sum_{i}t_{i}+\sum_{j}t_{j})$, where $M$ is the total number of events, $t_{i}$ is the escape time observed in run $i$, and $t_{j}$ is the total simulation time for run $j$ that did not have a transition (for the unbiased case, around 30% of the runs had no event).

### I.8 CDK2-ligand unbinding simulations

Well-tempered MetaD simulations [57] were carried out using the GROMACS 2019.4 program [66, 67] patched with PLUMED 2.5.3 [68]. The complex was solvated with a cubic water box, centered at the geometric center of the complex, with at least 2.0 nm between any two periodic images. The AMBER99SB-ILDN [69] force field was used to model the system with the TIP3P water model [70]. The ligand was parameterized using antechamber [71] with GAFF [72]. The parameters found were converted into GROMACS format using ACPYPE [73].
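Returning briefly to the toy model of Sec. I.7 before the all-atom setup continues, the sketch below (illustrative only, not the published code) spells out the biased Monte Carlo protocol: a Metropolis walk on the 2D potential with Gaussian hills deposited along a single CV. The grid, step range, and hill height/width follow the values quoted above, while the deposition interval and the escape criterion are placeholder assumptions.

```python
# Sketch of the 2D double-well Monte Carlo of Sec. I.7 with a metadynamics-like
# bias deposited along x only. The deposition interval `pace` and the escape
# threshold are assumptions of this sketch.
import numpy as np

kBT = 1.0
rng = np.random.default_rng(0)

def potential(x, y):
    return -kBT * np.log(np.exp(-20*(x - 0.2)**2 - 100*(y - 0.2)**2)
                         + np.exp(-20*(x - 0.8)**2 - 100*(y - 0.8)**2))

height, width, pace = 0.04, 0.04, 500    # hill height/width from Sec. I.7; pace assumed
centers = []                             # Gaussian hill centers along x

def bias(x):
    if not centers:
        return 0.0
    c = np.asarray(centers)
    return np.sum(height * np.exp(-(x - c)**2 / (2.0 * width**2)))

x, y = 0.2, 0.2                          # start in the first well
for step in range(1, int(3e7)):          # upper bound; biased runs escape much earlier
    xn, yn = x + rng.uniform(-0.005, 0.005), y + rng.uniform(-0.005, 0.005)
    if -0.4 <= xn <= 1.4 and -0.4 <= yn <= 1.4:             # stay inside the grid
        dU = potential(xn, yn) + bias(xn) - potential(x, y) - bias(x)
        if dU <= 0.0 or rng.uniform() < np.exp(-dU / kBT):  # Metropolis criterion
            x, y = xn, yn
    if step % pace == 0:
        centers.append(x)                # deposit a new hill at the current x
    if x > 0.6 and y > 0.6:              # crude (assumed) definition of the second well
        print("escape at MC step", step)
        break
```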
A minimization was done with the steepest descent algorithm and stopped when the maximum force was $\leq$ 1000 kJ/(mol$\cdot$nm). Periodic boundary conditions were applied. We used the leapfrog algorithm to propagate the equations of motion, and the nonbonding interactions were calculated using a PME scheme with a 1.0 nm cutoff for the real-space part. We performed a 100 ps equilibration in an NVT ensemble using the velocity rescaling thermostat [74], followed by a 100 ps equilibration in an NPT ensemble using the Parrinello-Rahman barostat [75], with a time step of 2 fs. The MD production was performed without restraints, with a time step of 2 fs, in an NPT ensemble at 300.15 K and 1 atm. We chose two CVs for the well-tempered MetaD simulations. The first CV was the solvation state of the ligand ($w$), calculated as the coordination number between two groups, $w=\sum_{i\in A}\sum_{j\in B}w_{ij},$ (17) with $w_{ij}=\frac{1-\Big{(}\frac{r_{ij}-d_{0}}{r_{0}}\Big{)}^{n}}{1-\Big{(}\frac{r_{ij}-d_{0}}{r_{0}}\Big{)}^{m}},$ (18) where $d_{0}=0$, $r_{0}=0.3$, $n=6$ and $m=10$. In the sum of Eq. 17, group A is the center of mass (COM) of the ligand and group B are the oxygen atoms of all water molecules at a distance shorter than 5 Å from the pocket. The second CV was the distance between the binding pocket and the ligand ($d$). We define $d$ as the distance between the COM of the heavy atoms in the ligand and the COM of the $\alpha$-carbons in the binding pocket, i.e., those within 5 Å of the ligand in the binding pose. Well-tempered MetaD [57] was performed with an initial Gaussian height of 1.5 kJ/mol. The width ($\sigma$) of the Gaussians was $\sigma_{w}=0.13$ and $\sigma_{d}=0.02$ nm for the $w$ and $d$ CVs, respectively. We used a bias factor of 15. We performed independent simulations where Gaussians were deposited every 1, 10 and 100 ps, with total simulation times of 10, 40 and 300 ns, respectively. Fifty simulations per Gaussian deposition time were performed. To monitor the escape of the ligand from the binding pocket, we followed (without biasing it) the evolution of an additional CV, the crystallographic contacts ($c$). Using $c$ we can easily distinguish between the bound and unbound states of the ligand. To define $c$ we use an equation analogous to Eq. 18, involving the atoms responsible for the main interactions between the ligand and the binding pocket, i.e., group A are the nitrogen atoms of the ligand that form hydrogen bonds – in the binding pose – with the atoms in group B: O-Glu81, O-Leu83 and O-Gln85. These interactions are shown in Fig. 3a (Main Text).

## II Supplementary Figures

Figure 1: CDFs of rescaled times and fits to Poisson distributions along x and y. CDFs of the rescaled barrier-crossing (jump) times from iMetaD. Runs with bias along $x$ (bottom) or $y$ (top) for different bias-deposition times ($d_{t}$). Red to blue points go from frequent to infrequent $d_{t}$. Fits to the respective Poisson distribution are shown as solid lines [45].

Figure 2: Fits of $V_{MB}(t)$ for the CDK2-ligand unbinding simulations using a logarithmic function. Time-dependent maximum bias $V_{\mathrm{MB}}(t)$ (see Main Text Eq. 2) for the CDK2-ligand unbinding simulations. Solid lines show the fits to a logarithmic function $f(t,a,b)=a\log(1+b\,t)$. The fit for the longest bias-deposition time was up to $200$ ns.

Figure 3: Extracted parameters from CDK2-ligand unbinding simulations using the KTR theory.
(top) Quality of bias $\gamma$ and (bottom) transition rate $k_{0}$ extracted using the KTR method for the CDK2-ligand unbinding simulations with different bias-deposition times. The results show the median and the 25%–75% credible interval of the distribution over bootstrap trials that passed the KS-test (which were more than 50% of the trials in all cases). The experimental value of the transition rate is shown as a dashed black line.
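As a companion to Fig. 1, the following minimal sketch (hypothetical input data, not the distributed analysis code) shows how rescaled iMetaD times (Eq. 16) are fitted to the Poisson TCDF and checked with a two-sample KS test, following Secs. I.4–I.5.

```python
# Sketch of the iMetaD analysis of Secs. I.4-I.5: rescale the MetaD event times
# with the instantaneous bias (Eq. 16), fit the ECDF to P(t) = 1 - exp(-t/tau),
# and compare to a large Poisson sample with a two-sample KS test.
# The bias trajectories below are hypothetical placeholders.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import ks_2samp

dt = 0.002                                    # time between bias records (assumed units)
beta = 1.0 / 2.494                            # 1/kBT at 300 K, energies in kJ/mol
bias_trajs = [np.linspace(0.0, vmax, n) for vmax, n in
              [(8.0, 40_000), (9.0, 55_000), (7.5, 30_000), (8.5, 48_000)]]

t_res = np.array([np.sum(dt * np.exp(beta * vb)) for vb in bias_trajs])  # Eq. (16)

def poisson_cdf(t, tau):
    return 1.0 - np.exp(-t / tau)

t_sorted = np.sort(t_res)
ecdf = np.arange(1, len(t_sorted) + 1) / len(t_sorted)
(tau_fit,), _ = curve_fit(poisson_cdf, t_sorted, ecdf, p0=[t_res.mean()])

sample = np.random.default_rng(1).exponential(tau_fit, size=1_000_000)   # TCDF sample
stat, p_value = ks_2samp(t_res, sample)
print("k_iMetaD =", 1.0 / tau_fit, "; Poissonian:", p_value > 0.05)
```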
$\displaystyle=[a(\bar{x}-x^{1})+\alpha^{1},\ldots,a(\bar{x}-x^{i}),\ldots,a(\bar{x}-x^{N})+\alpha^{N}]^{\operatorname{T}},$ (3.33) $\displaystyle h^{i}(t,\bm{x},y,\bm{z};\bm{\alpha}^{-i})$ $\displaystyle=\frac{\epsilon}{2}(\bar{x}-x^{i})^{2}-\frac{1}{2}(q(\bar{x}-x^{i})-\frac{z^{i}}{\sigma\sqrt{1-\rho^{2}}})^{2},$ (3.34) where $\bm{z}=(z^{0},z^{1},\dots,z^{N})\in\mathbb{R}^{N+1}$. Figures 10–11 show the performance of the DFP algorithm on a ten-player game, using the parameters $a=0.1,\quad q=0.1,\quad c=0.5,\quad\epsilon=0.5,\quad\rho=0.2,\quad\sigma=1,\quad T=1.$ (3.35) The relative squared error (RSE) is defined by $\displaystyle\text{RSE}=\frac{\sum_{\begin{subarray}{c}i\in\mathcal{I}\\ 1\leq j\leq J\end{subarray}}\left(u^{i}(0,\bm{x}_{t_{0}}^{(j)})-\widehat{u}^{i}(0,\bm{x}_{t_{0}}^{(j)})\right)^{2}}{\sum_{\begin{subarray}{c}i\in\mathcal{I}\\ 1\leq j\leq J\end{subarray}}\left(u^{i}(0,\bm{x}_{t_{0}}^{(j)})-\bar{u}^{i}\right)^{2}},\;\text{or }\;\text{RSE}=\frac{\sum_{\begin{subarray}{c}i\in\mathcal{I}\\ 0\leq n\leq N_{T}-1\\ 1\leq j\leq J\end{subarray}}\left(\nabla_{\bm{x}}u^{i}(t_{n},\bm{x}_{t_{n}}^{(j)})-\nabla_{\bm{x}}\widehat{u}^{i}(t_{n},\bm{x}_{t_{n}}^{(j)})\right)^{2}}{\sum_{\begin{subarray}{c}i\in\mathcal{I}\\ 0\leq n\leq N_{T}-1\\ 1\leq j\leq J\end{subarray}}\left(\nabla_{\bm{x}}u^{i}(t_{n},\bm{x}_{t_{n}}^{(j)})-\overline{\nabla_{\bm{x}}u}^{i}\right)^{2}},$ where $\hat{u}^{i}$ is the prediction from the neural networks, and $\bar{u}^{i}$ (resp. $\overline{\nabla_{\bm{x}}u}^{i}$) is the average of $u^{i}$ (resp. $\nabla_{\bm{x}}u^{i}$) evaluated at all the indices $j,n$. To compute the relative error, $J=256$ ground-truth sample paths $\{\bm{x}_{t_{n}}^{(j)}\}_{n=0}^{N_{T}-1}$ are generated using an Euler scheme based on (3.13), (3.28), and (3.29) and the true optimal strategy. Note that the superscript ${(j)}$ here does not denote a player index, but the $j^{th}$ sample path, which is common to all players. In particular, Figure 10 compares the relative squared error as $N_{\text{SGD\_per\_stage}}$ varies from 10 to 400. The convergence of the learning curves with small $N_{\text{SGD\_per\_stage}}$ indicates that each individual problem does not need to be solved very accurately. Furthermore, the fact that the performance is similar under different $N_{\text{SGD\_per\_stage}}$ with the same total budget of SGD updates suggests that the algorithm is insensitive to the choice of this hyperparameter. The final relative squared errors of $u$ and $\nabla u$, averaged from three independent runs of deep fictitious play, are 4.6% and 0.2%, respectively. Figure 11 presents, for each player, one sample path of the optimal state process $X_{t}^{i}$ and the optimal control $\alpha_{t}^{i}$ vs. their approximations $\hat{X}_{t}^{i},\hat{\alpha}_{t}^{i}$ provided by the optimized neural networks.

Figure 10: Linear-quadratic systemic risk example in Section 3.1.3. The relative squared errors of $u^{i}$ (left) and $\nabla u^{i}$ (right) along the training process of deep fictitious play for the inter-bank game. The relative squared errors of $u^{i}(0,\check{\bm{X}}_{0}^{i})$ and $\{\nabla u^{i}(t_{n},\check{\bm{X}}_{n}^{i})\}_{n=0}^{N_{T}-1}$ are evaluated.

Figure 11: Linear-quadratic systemic risk example in Section 3.1.3. A sample path for each player of the inter-bank game with $N=10$. Top: the optimal state process $X_{t}^{i}$ (solid lines) and its approximation $\hat{X}_{t}^{i}$ (circles) provided by the optimized neural networks, under the same realized path of Brownian motion.
Bottom: comparisons of the strategies $\alpha_{t}^{i}$ and $\hat{\alpha}_{t}^{i}$ (dashed lines).

#### 3.1.4 PDE-based deep learning algorithms

For the sake of completeness, we briefly discuss how PDE-based methods can be adapted to solve $N$-player Nash equilibria. As mentioned earlier, such equilibria can be characterized using PDE systems, as seen in (3.15). We recall that this system stems from the combination of the HJB equations for every player's control problem. They are coupled since the value function of each player depends on the other players' actions. Instead of translating this system into an FBSDE system as discussed in Section 3.1.3, one can directly solve the HJB equations using, for instance, the DGM method presented in Section 2.5 for stochastic optimal control problems. The value function for each player is replaced by a neural network. Then, the loss is computed based on the residuals of all the PDEs at random points in space and time. The application is rather straightforward, so we refrain from repeating the DGM algorithm. Technical details of the implementation (e.g., the neural network architecture or the distribution used to sample points) should be discussed on a case-by-case basis. This approach has been used, for instance, in [12, Section 5] by Al-Aradi et al. to solve a system of HJB equations arising in a finite-player game modeling systemic risk introduced by Carmona et al. [86]. This example has already been discussed in the numerical illustration provided in Section 3.1.2.

### 3.2 Mean-field games

Mean-field games, introduced independently by Lasry and Lions in [228, 229, 230] and by Huang, Malhamé and Caines in [192, 191], provide a paradigm to approximate the solutions of stochastic games with a very large number of players. The approximation relies on two main assumptions: anonymity, which means that the players interact only through the population's empirical distribution, and indistinguishability, which means that $(b^{i},\sigma^{i},f^{i},g^{i})$ are the same for all $i$. One can then pass (at least formally) to the limit by letting the number of players grow to infinity. In the asymptotic problem, the influence of each individual player on the rest of the population vanishes, and the Nash equilibrium can be characterized by studying the problem posed to a representative player. Under suitable assumptions, it can be shown that solving this limiting problem provides an approximate equilibrium for the finite-player game. We refer to the notes [67] and the books [47, 84, 85], as well as the references therein, for further background on mean-field games. Mean-field games are usually studied using the notion of mean-field Nash equilibrium. As in finite-player games, this notion intuitively corresponds to a situation in which no player can benefit from a unilateral deviation. This is different from the solution to MFC problems discussed in Section 2.6.2, which can be interpreted as a social optimum, i.e., a situation in which the players collectively choose their control in order to minimize the social cost. Numerical methods for such games have been developed using mostly traditional techniques such as finite difference schemes [5, 2], semi-Lagrangian schemes [73, 75], or methods based on probabilistic approaches [103, 19]. See, e.g., [9, 232] for recent surveys.
However, similarly to control problems or finite-player games, these methods do not scale well in terms of dimensionality and, in particular, they are not very suitable for problems with delay or with common noise. This motivates the development of deep learning methods, some of which are described in [91]. In the sequel, we describe the theoretical framework of mean-field games and then survey recent deep learning methods.

#### 3.2.1 Theoretical background

We start by defining the notion of MFG, and then we discuss how equilibria can be characterized in terms of PDEs, BSDEs, and the so-called master equation.

##### 3.2.1.1 Definition of the problem

Going back to problem (3.2), let us assume that $b^{i},\sigma^{0},\sigma^{i},f^{i},g^{i}$ depend on the rest of the population's states and actions in an anonymous way, i.e., there exist functions $b,\sigma^{0},\sigma,f,g$ such that $\displaystyle b^{i}(t,\bm{X}_{t},\bm{\alpha}_{t})=b(t,X^{i}_{t},\nu^{N}_{t},\alpha^{i}_{t}),\quad\sigma^{0}(t,\bm{X}_{t},\bm{\alpha}_{t})=\sigma^{0}(t,X^{i}_{t},\nu^{N}_{t},\alpha^{i}_{t}),\quad\sigma^{i}(t,\bm{X}_{t},\bm{\alpha}_{t})=\sigma(t,X^{i}_{t},\nu^{N}_{t},\alpha^{i}_{t}),$ (3.36) $\displaystyle f^{i}(t,\bm{X}_{t},\bm{\alpha}_{t})=f(t,X^{i}_{t},\nu_{t}^{N},\alpha^{i}_{t}),\quad g^{i}(\bm{X}_{T})=g(X_{T}^{i},\mu_{T}^{N}),$ where $\nu^{N}_{t}=\frac{1}{N}\sum_{j=1}^{N}\delta_{(X^{j}_{t},\alpha^{j}_{t})}$ is the empirical state-action distribution of the population and $\mu^{N}_{t}=\frac{1}{N}\sum_{j=1}^{N}\delta_{X^{j}_{t}}$ is its first marginal, which corresponds to the state distribution. With a slight abuse of notation, we keep the same notation $\sigma^{0}$ for simplicity. Then the cost associated to a strategy profile $\bm{\alpha}$ is defined as $J^{i}(\bm{\alpha})=\mathbb{E}\left[\int_{0}^{T}f(t,X^{i}_{t},\nu_{t}^{N},\alpha^{i}_{t})\,\mathrm{d}t+g(X_{T}^{i},\mu_{T}^{N})\right],$ (3.37) where the processes $X^{j}$, $j=1,\dots,N$, solve the SDE system $\,\mathrm{d}X_{t}^{j}=b(t,X^{j}_{t},\nu^{N}_{t},\alpha^{j}_{t})\,\mathrm{d}t+\sigma(t,X^{j}_{t},\nu^{N}_{t},\alpha^{j}_{t})\,\mathrm{d}W_{t}^{j}+\sigma^{0}(t,X^{j}_{t},\nu^{N}_{t},\alpha^{j}_{t})\,\mathrm{d}W_{t}^{0},\quad X_{0}^{j}\sim\mu_{0},\quad j\in\mathcal{I},$ (3.38) where the initial positions are i.i.d., with $\nu$ and $\mu$ being, as above, the flows of empirical state-action and empirical state distributions. The influence of a given player on the dynamics and the cost of another player occurs only through the empirical distribution flow $\nu^{N}$. Thus, when $N$ increases, the influence of each player decreases. By symmetry, we can expect that in the limit it is sufficient to study the problem for a single representative player. To formulate the MFG, let $\nu=(\nu_{t})_{0\leq t\leq T}$ be a stochastic distribution flow adapted to the filtration generated by $W^{0}$, which is interpreted as the evolution of the population's state-action configuration. Let $\alpha$ be an open-loop control. A representative player's dynamics are given by $\displaystyle\begin{dcases}\,\mathrm{d}X_{t}^{\nu,\alpha}=b(t,X_{t}^{\nu,\alpha},\nu_{t},\alpha_{t})\,\mathrm{d}t+\sigma(t,X_{t}^{\nu,\alpha},\nu_{t},\alpha_{t})\,\mathrm{d}W_{t}+\sigma^{0}(t,X_{t}^{\nu,\alpha},\nu_{t},\alpha_{t})\,\mathrm{d}W^{0}_{t},\quad t\geq 0,\\ X^{\nu,\alpha}_{0}\sim\mu_{0},\end{dcases}$ (3.39) where $W$ is a standard $m$-dimensional Brownian motion independent of $W^{0}$.
For a representative player, the cost associated to using the control $\alpha$ when the population is given by the distribution flow $\nu=(\nu_{t})_{0\leq t\leq T}$ is defined as $\displaystyle J^{MFG}(\alpha;\nu)=\mathbb{E}\left[\int_{0}^{T}f(t,X_{t}^{\nu,\alpha},\nu_{t},\alpha_{t})\,\mathrm{d}t+g(X_{T}^{\nu,\alpha},\mu_{T})\right],$ (3.40) under the constraint that the process $X^{\nu,\alpha}=(X_{t}^{\nu,\alpha})_{t\geq 0}$ solves the SDE (3.39).

###### Definition 3.7 (Mean-field Nash equilibrium).

Consider the MFG problem introduced above. A pair $(\hat{\nu},\hat{\alpha})$ consisting of a stochastic flow $\hat{\nu}=(\hat{\nu}_{t})_{0\leq t\leq T}$ of probability measures in $\mathcal{P}_{2}(\mathbb{R}^{d})$ adapted to the common noise filtration and an open-loop control $\hat{\alpha}=(\hat{\alpha}_{t})_{t\in[0,T]}$ is a mean-field Nash equilibrium if it satisfies the following two conditions:

1. $\hat{\alpha}$ minimizes $J^{MFG}(\cdot;\hat{\nu})$;

2. for all $t\in[0,T]$, $\hat{\nu}_{t}$ is the probability distribution of $(X_{t}^{\hat{\nu},\hat{\alpha}},\hat{\alpha}_{t})$ conditioned on $W^{0}$.

Note that, in the first condition, $\hat{\nu}$ is fixed when an infinitesimal agent performs their optimization. The second condition ensures that if all the players use the control $\hat{\alpha}$, the law of their individual states and actions is indeed $\hat{\nu}$. The original formulation of MFGs [231] considers interactions through the state distribution only. MFGs with interactions through the joint distribution of states and actions, as presented in the above definition, are sometimes referred to as extended MFGs or MFGs of controls; see, e.g., [147, 148, 70, 211, 236].

###### Remark 3.8.

For a given model (cost functions and dynamics), one can look for a mean-field Nash equilibrium (Definition 3.7) or a mean-field social optimum (Definition 2.13). As already mentioned, the first notion corresponds to a situation in which the agents selfishly minimize their individual cost, while the second notion corresponds to a situation in which the agents cooperate to minimize the social cost. From the mathematical viewpoint, the key difference is that in Definition 3.7 the distribution is fixed when one looks for an optimal control, while in Definition 2.13 the control directly influences the distribution. As a consequence, an MFG (Nash equilibrium) problem is a fixed-point problem, while an MFC (social optimum) problem is an optimization problem (or, more precisely, an optimal control problem for McKean-Vlasov dynamics). In general, the two solutions are different, which leads to a notion of price of anarchy. See, e.g., [153, 87, 71, 232, 17] for more details and comparisons of MFG and MFC solutions. Next, we review several ways to characterize the mean-field Nash equilibrium using analytical and probabilistic techniques.

##### 3.2.1.2 PDE system

For simplicity, let us assume that there is no common noise. We assume that there exists an equilibrium, and we denote by $\hat{\nu}=(\hat{\nu}_{t})_{t\geq 0}$ the associated mean-field flow of distributions.
When considering Markovian controls, one can define the value function $u$ by $u(t,x)=\inf_{\alpha}\mathbb{E}\left[\int_{t}^{T}f(s,X_{s},\hat{\nu}_{s},\alpha_{s})\,\mathrm{d}s+g(X_{T},\hat{\mu}_{T})|X_{t}=x\right].$ (3.41) As in standard OC problems, under suitable conditions $u(t,x)$ solves the HJB equation $\begin{dcases}\partial_{t}u(t,x)+\min_{\alpha\in\mathcal{A}}H(t,x,\hat{\nu}_{t},\nabla_{x}u(t,x),\mathrm{Hess}_{x}u(t,x),\alpha)=0,\\ u(T,x)=g(x,\hat{\mu}_{T}),\end{dcases}$ (3.42) where $H(t,x,\nu,p,q,\alpha)=b(t,x,\nu,\alpha)\cdot p+\frac{1}{2}\mathrm{Tr}(\sigma(t,x,\nu,\alpha)\sigma(t,x,\nu,\alpha)^{\operatorname{T}}q)+f(t,x,\nu,\alpha).$ (3.43) If (3.42) has a classical solution, then the optimal control is given by $\hat{\alpha}(t,x)=\alpha(t,x,\hat{\nu}_{t},\nabla_{x}u(t,x),\mathrm{Hess}_{x}u(t,x)),$ where $\alpha(t,x,\nu,p,q)=\operatorname*{arg\,min}_{\alpha\in\mathcal{A}}H(t,x,\nu,p,q,\alpha).$ The consistency condition for the equilibrium mean-field flow is equivalent to the following: the state distribution flow $\hat{\mu}=(\hat{\mu}_{t})_{t\geq 0}$ solves the Kolmogorov-Fokker-Planck (KFP) PDE $\begin{dcases}\displaystyle\partial_{t}\hat{\mu}(t,x)-\sum_{i,j}\frac{\partial^{2}}{\partial_{x_{i}}\partial_{x_{j}}}\left(\hat{D}_{i,j}(t,x)\hat{\mu}(t,x)\right)+\mathrm{div}\Bigl{(}\hat{\mu}(t,x)\hat{b}(t,x)\Bigr{)}=0,\\ \hat{\mu}(0)=\mu_{0},\end{dcases}$ (3.44) where $\hat{D}(t,x)=\frac{1}{2}\sigma(t,x,\hat{\nu}_{t},\hat{\alpha}(t,x))\sigma(t,x,\hat{\nu}_{t},\hat{\alpha}(t,x))^{\operatorname{T}},\qquad\hat{b}(t,x)=b(t,x,\hat{\nu}_{t},\hat{\alpha}(t,x)),$ (3.45) and the state-action distribution $\hat{\nu}_{t}$ at time $t$ is the push-forward of $\hat{\mu}_{t}$ by $(I_{d},\hat{\alpha}(t,\cdot))$, which we will denote by $\hat{\nu}_{t}=\hat{\mu}_{t}\circ(I_{d},\hat{\alpha}(t,\cdot))^{-1}.$ The forward-backward PDE system (3.42)–(3.44) characterizes the mean-field Nash equilibrium. We refer, e.g., to [211] for the existence of classical solutions to such PDE systems under suitable assumptions.

###### Remark 3.9.

MFC problems also give rise to analogous forward-backward PDE systems, except that the solution $u$ of the backward equation is not interpreted as the value function of an optimal control problem but rather as an adjoint state. We refer to [47, 7] for more details. The KFP equation remains the same, but the HJB equation has one extra term reflecting the fact that the whole population performs the optimization simultaneously. In the presence of common noise, the HJB and KFP equations become stochastic. We will not discuss this system in the sequel, and refer the interested readers to [267] for the derivation of stochastic HJB equations and to [83, 68] for stochastic HJB-KFP systems arising in MFGs (with the state distribution only).

##### 3.2.1.3 FBSDE system

We now review the characterization of MFG equilibria using BSDEs. As for standard OC (see Section 2.1), BSDEs can be used to characterize the value function or its gradient. For simplicity, we assume that there is no common noise.
We further assume that the volatility of the idiosyncratic noise is uncontrolled, in which case $\hat{\alpha}$ is independent of $\mathrm{Hess}_{x}u(t,x)$ and the PDE (3.42) becomes semi-linear: $\partial_{t}u(t,x)+\frac{1}{2}\mathrm{Tr}(\sigma(t,x,\hat{\nu}_{t})\sigma(t,x,\hat{\nu}_{t})^{\operatorname{T}}\mathrm{Hess}_{x}u(t,x))+b(t,x,\hat{\nu}_{t},\hat{\alpha}(t,x,\hat{\nu}_{t},\nabla_{x}u(t,x)))\cdot\nabla_{x}u(t,x)\\\ +f(t,x,\hat{\alpha}(t,x,\hat{\nu}_{t},\nabla_{x}u(t,x)))=0.$ (3.46) Suppose that there exist functions $\mu(t,\nu,x)$ and $h(t,x,\nu,z)$ such that $\tilde{b}(t,\hat{\nu}_{t},x)\cdot\nabla_{x}u(t,x)+h(t,x,\hat{\nu}_{t},\sigma(t,x)^{\operatorname{T}}\nabla_{x}u(t,x))\\\ =b(t,x,\hat{\nu}_{t},\hat{\alpha}(t,x,\hat{\nu}_{t},\nabla_{x}u(t,x)))\cdot\nabla_{x}u(t,x)+f(t,x,\hat{\nu}_{t},\hat{\alpha}(t,x,\nabla_{x}u(t,x))).$ Then the non-linear Feynman-Kac formula (see [265]) gives the following BSDE interpretation of $u(t,x)$: $\begin{dcases}\,\mathrm{d}\mathcal{X}_{t}=\tilde{b}(t,\hat{\nu}_{t},\mathcal{X}_{t})\,\mathrm{d}t+\sigma(t,\hat{\nu}_{t},\mathcal{X}_{t})\,\mathrm{d}W_{t},\quad\mathcal{X}_{0}\sim\mu_{0},\\\ \,\mathrm{d}\mathcal{Y}_{t}=-h(t,\hat{\nu}_{t},\mathcal{X}_{t},\mathcal{Z}_{t})\,\mathrm{d}t+\mathcal{Z}_{t}\,\mathrm{d}W_{t},\quad\mathcal{Y}_{T}=g(\mathcal{X}_{T},\hat{\mu}_{T}),\end{dcases}$ (3.47) by the relation $\mathcal{Y}_{t}=u(t,\mathcal{X}_{t}),\quad\mathcal{Z}_{t}=\sigma(t,\hat{\nu}_{t},\mathcal{X}_{t})^{\operatorname{T}}\nabla_{x}u(t,\mathcal{X}_{t}).$ Moreover, the optimal value is given by $\mathbb{E}[\mathcal{Y}_{0}]=\mathbb{E}[u(0,\mathcal{X}_{0})]$. This BSDE characterizes the value function for a representative player given the mean field flow $\hat{\nu}$. Then, the consistency condition reads: $\hat{\nu}_{t}=\mathcal{L}\left(\mathcal{X}_{t},\hat{\alpha}(t,\mathcal{X}_{t},\hat{\nu}_{t},(\sigma(t,\hat{\nu}_{t},\mathcal{X}_{t})^{\operatorname{T}})^{-1}\mathcal{Z}_{t})\right).$ In the controlled volatility case, the PDE (3.42) is fully nonlinear, and its solution is connected to a solution of the 2BSDE, see [106] and Section 2.1. The Pontryagin stochastic maximum principle provides the connection to the FBSDE. Define the generalized Hamiltonian $\mathcal{H}$ by $\mathcal{H}(t,x,\nu,y,z,\alpha)=b(t,x,\nu,\alpha)y+\mathrm{Tr}(\sigma^{\operatorname{T}}(t,x,\nu,\alpha)z)+f(t,x,\nu,\alpha).$ (3.48) If the Hamiltonian $\mathcal{H}$ is convex in $(x,\alpha)$, and $(X_{t},Y_{t},Z_{t})$ solve $\begin{dcases}\,\mathrm{d}X_{t}=b(t,X_{t},\hat{\nu}_{t},\hat{\alpha}_{t})\,\mathrm{d}t+\sigma(t,X_{t},\hat{\nu}_{t},\hat{\alpha}_{t})\,\mathrm{d}W_{t},\qquad X_{0}\sim\mu_{0},\\\ \,\mathrm{d}Y_{t}=-\nabla_{x}\mathcal{H}(t,X_{t},\hat{\nu}_{t},Y_{t},Z_{t},\hat{\alpha}_{t})\,\mathrm{d}t+Z_{t}\,\mathrm{d}W_{t},\quad Y_{T}=\partial_{x}g(X_{T},\hat{\mu}_{T}),\end{dcases}$ (3.49) such that $\hat{\alpha}$ minimizes $\mathcal{H}$ along $(X_{t},\hat{\nu}_{t},Y_{t},Z_{t})$, then $\hat{\alpha}$ is the optimal control. 
If the value function is smooth enough, then $Y_{t}=\nabla_{x}u(t,X_{t}),\quad Z_{t}=\sigma(t,X_{t},\hat{\nu}_{t},\hat{\alpha})^{\operatorname{T}}\mathrm{Hess}_{x}u(t,X_{t}).$ (3.50) In this case, the consistency condition for the equilibrium mean field flow $\hat{\nu}$ reads $\hat{\nu}_{t}=\mathcal{L}\left(X_{t},\hat{\alpha}(t,X_{t},\hat{\nu}_{t},(\sigma(t,\hat{\nu}_{t},X_{t})^{\operatorname{T}})^{-1}Y_{t},Z_{t})\right).$ When there is common noise, the FBSDE system becomes $\begin{dcases}\,\mathrm{d}X_{t}=b(t,X_{t},\hat{\nu}_{t},\hat{\alpha}_{t})\,\mathrm{d}t+\sigma(t,X_{t},\hat{\nu}_{t},\hat{\alpha}_{t})\,\mathrm{d}W_{t}+\sigma^{0}(t,X_{t},\hat{\nu}_{t},\hat{\alpha}_{t})\,\mathrm{d}W^{0}_{t},\qquad X_{0}\sim\mu_{0},\\\ \,\mathrm{d}Y_{t}=-\nabla_{x}\mathcal{H}(t,X_{t},\hat{\nu}_{t},Y_{t},Z_{t},Z^{0}_{t},\hat{\alpha}_{t})\,\mathrm{d}t+Z_{t}\,\mathrm{d}W_{t}+Z^{0}_{t}\,\mathrm{d}W^{0}_{t},\quad Y_{T}=\partial_{x}g(X_{T},\hat{\mu}_{T}),\end{dcases}$ (3.51) where, compared with (3.48), the definition of $\mathcal{H}$ includes an extra term $\mathrm{Tr}({\sigma^{0}}^{\operatorname{T}}(t,x,\nu,\alpha)z^{0})$. ###### Remark 3.10. MFC problems also lead to analogous FBSDE systems. In the absence of common noise, Pontryagin’s maximum principle is derived for instance in [82] and [1] when the interactions are through the state or the state-action distributions, respectively. This leads to a BSDE with an extra term accounting for the variation of the distribution during the optimization of the control. All the above systems are particular cases of the following generic system of FBSDEs of McKean-Vlasov type (MKV FBSDE for short) $\left\\{\begin{aligned} \,\mathrm{d}X_{t}=\,&B\left(t,X_{t},\mathcal{L}(X_{t},Y_{t},Z_{t}|W^{0}),Y_{t},Z_{t},Z^{0}_{t}\right)\,\mathrm{d}t\\\ &\qquad+\sigma(t,X_{t},\mathcal{L}(X_{t},Y_{t},Z_{t}|W^{0}),Y_{t},Z_{t},Z^{0}_{t})\,\mathrm{d}W_{t}\\\ &\qquad+\sigma^{0}(t,X_{t},\mathcal{L}(X_{t},Y_{t},Z_{t}|W^{0}),Y_{t},Z_{t},Z^{0}_{t})\,\mathrm{d}W^{0}_{t},\\\ \,\mathrm{d}Y_{t}=\,&-F\Big{(}t,X_{t},\mathcal{L}(X_{t},Y_{t},Z_{t}|W^{0}),Y_{t},\sigma^{\operatorname{T}}(t,X_{t},\mathcal{L}(X_{t},Y_{t},Z_{t}|W^{0}),Y_{t},Z_{t},Z^{0}_{t})Z_{t},\\\ &\quad\qquad{\sigma^{0}}^{\operatorname{T}}(t,X_{t},\mathcal{L}(X_{t},Y_{t},Z_{t}|W^{0}),Y_{t},Z_{t},Z^{0}_{t})Z^{0}_{t}\Big{)}\,\mathrm{d}t\\\ &\qquad+Z_{t}\,\mathrm{d}W_{t}+Z^{0}_{t}\,\mathrm{d}W^{0}_{t},\\\ \mathcal{L}(X_{0})&=\mu_{0},\qquad Y_{T}=G(X_{T},\mathcal{L}(X_{T}|W^{0})).\end{aligned}\right.$ (3.52) ###### Remark 3.11. When there is no common noise, $W^{0}$ and $Z^{0}$ are dropped, and the system becomes $\left\\{\begin{aligned} \,\mathrm{d}X_{t}=&B\left(t,X_{t},\mathcal{L}(X_{t},Y_{t},Z_{t}),Y_{t},Z_{t}\right)\,\mathrm{d}t+\sigma(t,X_{t},\mathcal{L}(X_{t},Y_{t},Z_{t}),Y_{t},Z_{t})\,\mathrm{d}W_{t},\\\ \,\mathrm{d}Y_{t}=&-F\Big{(}t,X_{t},\mathcal{L}(X_{t},Y_{t},Z_{t}),Y_{t},\sigma^{\operatorname{T}}(t,X_{t},\mathcal{L}(X_{t},Y_{t},Z_{t}),Y_{t},Z_{t})Z_{t}\Big{)}\,\mathrm{d}t+Z_{t}\,\mathrm{d}W_{t},\\\ \mathcal{L}(X_{0})&=\mu_{0},\qquad Y_{T}=G(X_{T},\mathcal{L}(X_{T})).\end{aligned}\right.$ (3.53) When the interactions are not through the state-action distribution but through the state distribution only, $\mathcal{L}(X_{t},Y_{t},Z_{t})$ is reduced to $\mathcal{L}(X_{t})$. ##### 3.2.1.4 Master equation As mentioned earlier, in the PDE system (3.42)–(3.44), $u$ plays the role of the value function of a representative player when the rest of the population is at equilibrium. 
This function depends explicitly on $t$ and $x$ but, intuitively, a player's value function can also depend on the population distribution. When there is no common noise, this distribution evolves in a deterministic way, so knowing $\mu_{0}$ and $t$, as well as the control used by the population (which is the equilibrium control $\hat{\alpha}$, assuming the population is at equilibrium), is enough to recover $\hat{\mu}(t)$, e.g., by solving the corresponding KFP equation (3.44). However, we can make this dependence explicit by considering a function $\mathcal{U}:[0,T]\times\mathbb{R}^{d}\times\mathcal{P}(\mathbb{R}^{d})\to\mathbb{R}$ such that $\mathcal{U}(t,x,\hat{\mu}(t))=u(t,x),$ (3.54) where $\hat{\mu}=(\hat{\mu}(t))_{t}$ is the mean-field equilibrium distribution flow. This correspondence is even more useful when common noise influences the dynamics of the players. In this case, $u(t,x)$ is a random variable, whereas $\mathcal{U}$ is still a deterministic function and the left-hand side of (3.54) is random only through $\hat{\mu}(t)$. This function $\mathcal{U}$ has been instrumental in proving the convergence of finite-player Nash equilibria towards mean-field Nash equilibria; see [68] for more details. It turns out that, under suitable conditions, $\mathcal{U}$ satisfies the PDE that we will present below, introduced by Pierre-Louis Lions and called the Master equation. It involves partial derivatives with respect to the probability measure argument of $\mathcal{U}$. We say that a function $F:\mathcal{P}(\mathbb{R}^{d})\to\mathbb{R}$ is $\mathcal{C}^{1}$ if there exists a continuous map $\displaystyle\frac{\delta F}{\delta\mu}:\mathcal{P}(\mathbb{R}^{d})\times\mathbb{R}^{d}\to\mathbb{R}$ such that, for any $\mu,\mu^{\prime}\in\mathcal{P}(\mathbb{R}^{d})$, $\lim_{s\to 0^{+}}\frac{F((1-s)\mu+s\mu^{\prime})-F(\mu)}{s}=\int_{\mathbb{R}^{d}}\frac{\delta F}{\delta\mu}(\mu,y)d(\mu^{\prime}-\mu)(y).$ The derivative $\displaystyle\frac{\delta F}{\delta\mu}$ is sometimes referred to as the flat derivative. If $\displaystyle\frac{\delta F}{\delta\mu}$ is of class $\mathcal{C}^{1}$ with respect to the second variable, the intrinsic derivative $\partial_{\mu}F:\mathcal{P}(\mathbb{R}^{d})\times\mathbb{R}^{d}\to\mathbb{R}^{d}$ is defined by $\partial_{\mu}F(\mu,y)=\partial_{y}\frac{\delta F}{\delta\mu}(\mu,y).$ We will write $\partial_{\mu}F(\mu)(y)$ instead of $\partial_{\mu}F(\mu,y)$. For more details, we refer to the lectures of Pierre-Louis Lions [245], as well as [67] and [84, Chapter 5]. We can now present the Master equation. To the best of our knowledge, the theory has not yet been developed for the general MFG model described above. We thus consider the case in which the volatility is not controlled and the interactions are only through the state distribution instead of the state-action distribution. For the sake of brevity, we omit the derivation and refer, e.g., to [85, Section 4.4].
The Master equation is the following backward PDE, posed on the space $[0,T]\times\mathbb{R}^{d}\times\mathcal{P}_{2}(\mathbb{R}^{d})$, $\displaystyle\partial_{t}\mathcal{U}(t,x,\mu)$ $\displaystyle\quad+b(t,x,\mu,\hat{\alpha}(t,x,\mu,\partial_{x}\mathcal{U}(t,x,\mu)))\cdot\partial_{x}\mathcal{U}(t,x,\mu)$ $\displaystyle\quad+\int_{\mathbb{R}^{d}}b(t,v,\mu,\hat{\alpha}(t,v,\partial_{x}\mathcal{U}(t,v,\mu)))\cdot\partial_{\mu}\mathcal{U}(t,x,\mu)(v)d\mu(v)$ $\displaystyle\quad+\frac{1}{2}\mathrm{Tr}\left[(\sigma\sigma^{\operatorname{T}}+\sigma^{0}(\sigma^{0})^{\operatorname{T}})(t,x,\mu)\partial_{xx}^{2}\mathcal{U}(t,x,\mu)\right]$ $\displaystyle\quad+\frac{1}{2}\int_{\mathbb{R}^{d}}\mathrm{Tr}\left[(\sigma\sigma^{\operatorname{T}}+\sigma^{0}(\sigma^{0})^{\operatorname{T}})(t,v,\mu)\partial_{v}\partial_{\mu}\mathcal{U}(t,x,\mu)(v)\right]d\mu(v)$ $\displaystyle\quad+\frac{1}{2}\int_{\mathbb{R}^{2d}}\mathrm{Tr}\left[(\sigma\sigma^{\operatorname{T}}+\sigma^{0}(\sigma^{0})^{\operatorname{T}})(t,v,\mu)\partial_{\mu}^{2}\mathcal{U}(t,x,\mu)(v,v^{\prime})\right]d\mu(v)d\mu(v^{\prime})$ $\displaystyle\quad+\int_{\mathbb{R}^{d}}\mathrm{Tr}\left[(\sigma^{0}(t,x,\mu)(\sigma^{0})^{\operatorname{T}})(t,v,\mu)\partial_{x}\partial_{\mu}\mathcal{U}(t,x,\mu)(v)\right]d\mu(v)$ $\displaystyle\quad+f(t,x,\mu,\hat{\alpha}(t,x,\mu,\partial_{x}\mathcal{U}(t,x,\mu)))=0,$ for $t\in[0,T]$, $x\in\mathbb{R}^{d}$ and $\mu\in\mathcal{P}_{2}(\mathbb{R}^{d})$, and with the terminal condition: for every $x\in\mathbb{R}^{d}$ and $\mu\in\mathcal{P}_{2}(\mathbb{R}^{d})$, $\mathcal{U}(T,x,\mu)=g(x,\mu).$ For more details on the analysis of this PDE, we refer the interested reader to the monographs [68], [85, Chapters 4 to 7], and [100] concerning the existence of classical solutions under suitable conditions. #### 3.2.2 Direct parameterization As discussed earlier (Section 2.2), the direct parameterization approach for optimal control (Section 3.1.2) can be extended to finite-player games. It can also be extended to mean-field games by updating alternatively the control (using the direct parameterization method) and the mean-field instead of updating individually all the player’s controls as in finite-player games. This idea can be applied to various classes of controls (e.g., open-loop and closed-loop ones). To avoid repetition, we discuss below the application of this approach to a class of MFGs with common noise, which forces to use a more general class of controls. As seen in Definition 3.7, a mean-field equilibrium is a standard control problem (corresponding to the first item in the definition) plus a fixed point problem (corresponding to the second item). Motivated by MFG models with common noise, Min and Hu [252] proposed an algorithm called Sig-DFP utilizing the concept of signature in rough path theory [249] and fictitious play from game theory [61, 62]. Signature is used to accurately represent the conditional distribution of the state given the common noise, and fictitious play is used to solve the fixed-point problem and identify the equilibrium [69]. For a path $x:[0,T]\to\mathbb{R}^{d}$, the $p$-variation is defined by $\|x\|_{p}=\left(\sup_{D\subset[0,T]}\sum_{n=0}^{r-1}\|x_{t_{n+1}}-x_{t_{n}}\|^{p}\right)^{1/p},$ (3.55) where $D\subset[0,T]$ denotes a partition $0\leq t_{0}<t_{1}<\ldots<t_{r}\leq T$. Let $T((\mathbb{R}^{d}))=\bigoplus_{k=0}^{\infty}(\mathbb{R}^{d})^{\bigotimes k}$ be the tensor algebra. 
Let $\mathcal{V}^{p}([0,T],\mathbb{R}^{d})$ be the space of continuous mappings from $[0,T]$ to $\mathbb{R}^{d}$ with finite $p$-variation, equipped with the norm $\|\cdot\|_{\mathcal{V}^{p}}=\|\cdot\|_{\infty}+\|\cdot\|_{p}$.

###### Definition 3.12 (Signature).

Let $X\in\mathcal{V}^{p}([0,T],\mathbb{R}^{d})$ be such that the following integral is well defined. The signature of $X$, denoted by $S(X)$, is the element of $T((\mathbb{R}^{d}))$ defined by $S(X)=(1,X^{1},\cdots,X^{k},\cdots)$ with $X^{k}=\int_{0<t_{1}<t_{2}<\cdots<t_{k}<T}\,\mathrm{d}X_{t_{1}}\otimes\cdots\otimes\,\mathrm{d}X_{t_{k}}.$ (3.56)

Denote by $S^{M}(X)$ the truncated signature of $X$ of depth $M$, i.e., $S^{M}(X)=(1,X^{1},\cdots,X^{M})$, which has dimension $\frac{d^{M+1}-1}{d-1}$. In the current setting, $X$ is a semi-martingale, and thus equation (3.56) is understood in the Stratonovich sense. The signature has many nice properties, including the following ones. First, it characterizes paths uniquely up to tree-like equivalence, and the equivalence is removed if at least one dimension of the path is strictly increasing [52]. Therefore, in practice one usually augments the original path $X_{t}$ with the time dimension, i.e., works with $\hat{X}_{t}=(t,X_{t})$, since $S(\hat{X})$ characterizes paths $\hat{X}$ uniquely. Second, terms in the signature present a factorial decay property [248], which implies that a path can be well approximated with just a few terms of the signature (i.e., a small $M$). Last, as a feature map of sequential data, the signature has a universality property [54], which is summarized below. Let $p\geq 1$ and $f:\mathcal{V}^{p}([0,T],\mathbb{R}^{d})\to\mathbb{R}$ be a continuous function. For any compact set $K\subset\mathcal{V}^{p}([0,T],\mathbb{R}^{d})$, if $S(x)$ is a geometric rough path (see [249, Definition 3.13] for a detailed definition) for any $x\in K$, then for any $\epsilon>0$, there exists a linear functional $l$ in the dual space of $T((\mathbb{R}^{d}))$ such that $\sup_{x\in K}|f(x)-\langle l,S(x)\rangle|<\epsilon.$ (3.57) Motivated by the unique characterization of $(W_{s}^{0})_{s\in[0,t]}$ by $S(\hat{W}_{t}^{0})$ and the factorial decay property, one can approximate $\nu_{t}\equiv\mathcal{L}(X_{t},\alpha_{t}|\mathcal{F}_{t}^{0})=\mathcal{L}(X_{t},\alpha_{t}|S(\hat{W}_{t}^{0})),$ (3.58) by $\mathcal{L}(X_{t},\alpha_{t}|S^{M}(\hat{W}_{t}^{0}))$, for $\hat{W}^{0}_{t}=(t,W^{0}_{t})$. In particular, if the mean-field interaction is through moments $\bar{\nu}_{t}=\mathbb{E}[\iota(X_{t},\alpha_{t})|\mathcal{F}_{t}^{0}]$, for some measurable function $\iota$, the approximation can be arbitrarily accurate for sufficiently large $M$; see [252, Lemma 4.1]. Then [252] proposed to use the approximation $\bar{\nu}_{t}\approx\langle\tilde{l},S^{M}(\hat{W}_{t}^{0})\rangle,\;\text{ where }\tilde{l}=\operatorname*{arg\,min}_{\bm{\beta}}\|\bm{y}-\bm{X}\bm{\beta}\|^{2},\;\bm{y}=\{\iota(X_{t}(\omega_{i}),\alpha_{t}(\omega_{i}))\}_{i=1}^{N},\;\bm{X}=\{S^{M}(\hat{W}^{0}_{t}(\omega_{i}))\}_{i=1}^{N},$ (3.59) where $\omega_{i}$ denotes the $i^{th}$ sample path. The rationale behind this approximation is the universality of signatures and the interpretation of ordinary linear regression: the least-squares minimization gives the best possible prediction of $\mathbb{E}[\bm{y}|\bm{X}]$ among linear functions of the features.
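To illustrate the regression step (3.59), the following minimal numpy sketch (illustrative assumptions throughout: depth $M=2$, a scalar toy observable playing the role of $\iota$, and synthetic paths) computes the truncated signature of the time-augmented common-noise path by iterated sums over increments and fits the linear functional $\tilde{l}$ by ordinary least squares; the fitted coefficients can then be reused on a fresh common-noise path.

```python
# Sketch of the signature-regression step (3.59): depth-2 truncated signatures
# of the time-augmented common noise, fitted to a toy observable by ordinary
# least squares. Depth, paths, and the observable are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps, dt = 200, 50, 0.02
t_grid = np.arange(n_steps + 1) * dt

def truncated_sig2(path):
    """Signature up to depth 2 of a piecewise-linear path of shape (steps, d)."""
    inc = np.diff(path, axis=0)                        # increments Delta_1..Delta_n
    lvl1 = inc.sum(axis=0)                             # level 1: total increment
    cum = np.cumsum(inc, axis=0)
    lvl2 = sum(np.outer(cum[k - 1], inc[k]) for k in range(1, len(inc)))  # i<j terms
    lvl2 = lvl2 + 0.5 * sum(np.outer(d, d) for d in inc)                  # i=j terms
    return np.concatenate([[1.0], lvl1, lvl2.ravel()])

# synthetic common-noise paths and a toy conditional observable (stand-in for iota)
W0 = np.hstack([np.zeros((n_paths, 1)),
                np.cumsum(rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps)), axis=1)])
y = 1.0 + 0.5 * W0[:, -1] + 0.1 * rng.normal(size=n_paths)

features = np.stack([truncated_sig2(np.column_stack([t_grid, w])) for w in W0])
l_tilde, *_ = np.linalg.lstsq(features, y, rcond=None)   # least-squares fit of (3.59)
print("in-sample MSE:", np.mean((features @ l_tilde - y) ** 2))

# prediction on an unseen common-noise path
w_new = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n_steps))])
print("prediction:", truncated_sig2(np.column_stack([t_grid, w_new])) @ l_tilde)
```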
Once $\tilde{l}$ is obtained, the prediction on an unseen common noise is efficient: $\bar{\nu}_{t}(\tilde{\omega})\approx\langle\tilde{l},S^{M}(\hat{W}^{0}_{t}(\tilde{\omega}))\rangle$ for any $\tilde{\omega}$ and $t$. Then finding the mean-field equilibrium is broken down into the following steps. We start with an initial value $\bar{\nu}^{(0)}$. Then, we solve the standard control problem (3.40) given $\bar{\nu}^{(0)}$, in the spirit of [171]. From here, we approximate $\bar{\nu}^{(1)}$ via signatures using (3.59), i.e., we compute $\tilde{l}^{(1)}$. These steps are repeated until convergence. The update of $\bar{\nu}_{t}$ from step to step is done by averaging the $\tilde{l}^{(n)}$. The Sig-DFP algorithm thus consists of repeatedly solving (3.39)–(3.40) for a given $\bar{\nu}$ using deep learning in the spirit of [171], and passing the obtained $\bar{\nu}$ to the next iteration by using signatures. A flowchart illustrating the ideas is given in Figure 12.

Figure 12: Flowchart of one iteration in the Sig-DFP Algorithm. Input: idiosyncratic noise $W$, common noise $W^{0}$, initial position $X_{0}$ and vector $\hat{\nu}^{(\mathtt{k}-1)}$ from the last iteration. Output: vector $\hat{\nu}^{(\mathtt{k})}$ for the next iteration.

More precisely, at each step, given a proxy $\hat{\nu}^{(\mathtt{k-1})}$ of the equilibrium distribution $\hat{\nu}$, the problem (3.39)–(3.40) becomes a standard stochastic control problem and is solved by using the direct parameterization approach reviewed in Section 2.2.1: the loss function is the discretized version of (3.40), $\check{X}$ follows the Euler scheme of (3.39) with $\nu$ replaced by $\hat{\nu}^{(\mathtt{k}-1)}$ (and similarly for $f$ and $g$), and the control $\alpha_{t_{n}}$ is parameterized by a neural network of the following form $\alpha_{t_{n}}=\alpha(t_{n},\check{X}_{t_{n}},\hat{\nu}^{(\mathtt{k}-1)}_{t_{n}};\theta),$ (3.60) which takes $\hat{\nu}^{(\mathtt{k}-1)}_{t_{n}}$ as an extra input on top of $(t_{n},\check{X}_{t_{n}})$. The optimizer $\theta^{\ast}$ obtained in this way gives $\alpha_{t_{n}}^{(\mathtt{k})}$, with which the optimized state process paths are simulated. The conditional law, denoted by $\nu^{(\mathtt{k})}$, is approximated using signatures via (3.59). This finishes one iteration of fictitious play. Denoting by $\tilde{\nu}^{(\mathtt{k})}$ the approximation of $\nu^{(\mathtt{k})}$, we then pass $\tilde{\nu}^{(\mathtt{k})}$ to the next iteration via the update $\hat{\nu}^{(\mathtt{k})}=\frac{1}{\mathtt{k}}\tilde{\nu}^{(\mathtt{k})}+\frac{\mathtt{k}-1}{\mathtt{k}}\hat{\nu}^{(\mathtt{k}-1)}$, realized by averaging the coefficients obtained in (3.59). We summarize it in Algorithm 5 in Appendix C; see [252, Appendix B] for the implementation details. We remark that signatures can also be useful for generating multimodal data [253].

###### Remark 3.13 (Theoretical analysis).

In [252, Theorems 4.1 and 4.2], Min and Hu provided a proof of convergence of this algorithm showing that, under suitable assumptions, the difference between the $\mathtt{k}^{th}$ iteration solution and the mean-field equilibrium can be made arbitrarily small, provided that $\mathtt{k}$ is sufficiently large and $\nu^{(\mathtt{k})}$ can be approximated sufficiently well by truncated signatures.

##### Numerical illustration: MFG of optimal consumption and investment.

We consider an extended heterogeneous MFG proposed by [224], where agents interact via both states and controls. The setup is similar to [225] except for including consumption and using power utilities.
Each agent's type is characterized by a random vector $\zeta=(\xi,\delta,\theta,b,\sigma,\sigma^{0},\epsilon)$, and the optimization problem reads $\sup_{\pi,c}\mathbb{E}\biggl{[}\int_{0}^{T}U(c_{t}X_{t}(\Gamma_{t}m_{t})^{-\theta};\delta)\,\mathrm{d}t+\epsilon U(X_{T}m^{-\theta}_{T};\delta)\biggr{]},$ (3.61) where $U(x;\delta)=\frac{1}{1-\frac{1}{\delta}}x^{1-\frac{1}{\delta}}$, $\delta\neq 1$, is the power utility function, and $X_{t}$ follows $\,\mathrm{d}X_{t}=\pi_{t}X_{t}(b\,\mathrm{d}t+\sigma\,\mathrm{d}W_{t}+\sigma^{0}\,\mathrm{d}W_{t}^{0})-c_{t}X_{t}\,\mathrm{d}t,$ (3.62) with $X_{0}=\xi$. The processes $\Gamma_{t}=\exp\mathbb{E}[\log c_{t}|\mathcal{F}^{0}_{t}]$ and $m_{t}=\exp\mathbb{E}[\log X_{t}|\mathcal{F}^{0}_{t}]$ are the mean-field interactions from the control and state processes. Two constraints are imposed: $X_{t}\geq 0$ and $c_{t}\geq 0$. The interpretation of this problem is as follows. Infinitely many agents trade over a common investment horizon $[0,T]$; each invests in a bond (with constant return rate $r$) and a private stock with dynamics $\,\mathrm{d}S_{t}/S_{t}=b\,\mathrm{d}t+\sigma\,\mathrm{d}W_{t}+\sigma^{0}\,\mathrm{d}W_{t}^{0}$, and consumes a fraction $c_{t}$ of his wealth at time $t$. The fraction of wealth invested in $S_{t}$ is denoted by $\pi_{t}$. Assuming $r\equiv 0$ without loss of generality, the wealth process reads (3.62). Each agent then aims to maximize his utility of consumption plus terminal wealth, measured relative to his peers' averages $\Gamma_{t}$ and $m_{t}$. To relate this to the formulation (3.39)–(3.40), $\alpha\equiv(\alpha^{1},\alpha^{2})=(\pi,c)$ is a 2D control with the constraint $\alpha^{2}_{t}\geq 0$, $b(t,x,\nu,\alpha)=b\alpha^{1}x-\alpha^{2}x$, $\sigma(t,x,\nu,\alpha)=\sigma\alpha^{1}x$, $\sigma^{0}(t,x,\nu,\alpha)=\sigma^{0}\alpha^{1}x$, $f=-U$ and $g=-U$. The explicit solutions are derived in [224] and summarized in [252, Appendix D]. For this experiment, we use truncated signatures of depth $M=4$. The optimal controls $(\pi_{t},c_{t})_{0\leq t\leq 1}$ are parameterized by two neural networks $\pi(\cdot;\theta)$ and $c(\cdot;\theta)$, each with three hidden layers of size 64 and taking $(\zeta,t,X_{t},m_{t},\Gamma_{t})$ as inputs, due to the heterogeneous and extended nature of the MFG. Because of the extended mean-field interaction term $\Gamma_{t}$, we propagate two conditional distribution flows, i.e., two linear functionals $\hat{l}^{(\mathtt{k})},\hat{l}_{c}^{(\mathtt{k})}$, during each iteration of fictitious play. Instead of estimating $m_{t},\Gamma_{t}$ directly, we estimate $\mathbb{E}[\log X_{t}|\mathcal{F}^{0}_{t}]$ and $\mathbb{E}[\log c_{t}|\mathcal{F}^{0}_{t}]$ by $\langle\hat{l}^{(\mathtt{k})},S^{4}(W_{t}^{0})\rangle$ and $\langle\hat{l}_{c}^{(\mathtt{k})},S^{4}(W_{t}^{0})\rangle$, and then take exponentials to get $m_{t},\Gamma_{t}$. To ensure the non-negativity of $X_{t}$, we evolve $\log X_{t}$ and then take the exponential to get $X_{t}$. For the optimal consumption, $c(\cdot;\theta)$ is used to predict $\log c_{t}$, and thus $\exp c(\cdot;\theta)$ gives the predicted $c_{t}$. With 600 iterations of fictitious play and a learning rate of 0.1 decaying by a factor of 5 every 200 iterations, the relative $L^{2}$ errors for $\pi_{t},c_{t},m_{t},\Gamma_{t}$ are 0.1126, 0.0614, 0.0279, 0.0121, respectively. Figure 13 compares $X$ and $m$ to their approximations, and plots the maximized utilities.
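A minimal PyTorch sketch of this parameterization is given below (illustrative only, not the code of [252]): two feed-forward networks with three hidden layers of width 64 produce $\pi_{t}$ and $\log c_{t}$ from $(\zeta,t,X_{t},m_{t},\Gamma_{t})$, and the wealth is evolved in log-space so that $X_{t}>0$ by construction; the Itô correction in the log-wealth drift and all numerical values are assumptions of the sketch.

```python
# Sketch of the control parameterization for the consumption-investment MFG:
# networks for pi and log c with inputs (zeta, t, X_t, m_t, Gamma_t), and one
# Euler step of log X_t. Architecture follows the text; everything else
# (Ito-corrected drift, numerical values) is an assumption of this sketch.
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, width=64, depth=3):
    layers, d = [], in_dim
    for _ in range(depth):
        layers += [nn.Linear(d, width), nn.ReLU()]
        d = width
    return nn.Sequential(*layers, nn.Linear(d, out_dim))

zeta_dim = 7                      # zeta = (xi, delta, theta, b, sigma, sigma0, eps)
pi_net = mlp(zeta_dim + 4, 1)     # inputs: zeta, t, X_t, m_t, Gamma_t
logc_net = mlp(zeta_dim + 4, 1)   # predicts log c_t, so c_t = exp(.) >= 0

def euler_step_logX(logX, zeta, t, m, Gamma, dt, dW, dW0):
    b, sigma, sigma0 = zeta[:, 3:4], zeta[:, 4:5], zeta[:, 5:6]
    inp = torch.cat([zeta, t, logX.exp(), m, Gamma], dim=1)
    pi = pi_net(inp)
    c = logc_net(inp).exp()
    drift = pi * b - c - 0.5 * pi**2 * (sigma**2 + sigma0**2)   # Ito term for log X
    return logX + drift * dt + pi * (sigma * dW + sigma0 * dW0)

# one illustrative step for a small batch of heterogeneous agents
batch, dt = 4, 0.01
zeta = torch.rand(batch, zeta_dim)
logX = torch.zeros(batch, 1)
t = torch.zeros(batch, 1)
m = torch.ones(batch, 1)
Gamma = torch.ones(batch, 1)
dW = torch.randn(batch, 1) * dt**0.5
dW0 = (torch.randn(1, 1) * dt**0.5).expand(batch, 1)   # common noise shared by all agents
print(euler_step_logX(logX, zeta, t, m, Gamma, dt, dW, dW0))
```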
Further comparisons with the existing literature, different choices of the truncation depth $M$, and the ability to deal with higher $m_{0}$ are also discussed in [252]. (a) $X_{t}$ (b) $m_{t}=\exp\mathbb{E}(\log X_{t}|\mathcal{F}^{0}_{t})$ (c) Maximized Utility Figure 13: MFG of optimal consumption and investment in Section 3.2.2. Panels (a) and (b) give three trajectories of $X_{t}$ and $m_{t}=\exp\bigl(\mathbb{E}(\log X_{t}|\mathcal{F}^{0}_{t})\bigr)$ (solid lines) and their approximations $\hat{X}_{t}$ and $\hat{m}_{t}$ (dashed lines) using different $(X_{0},W,W^{0})$ from validation data. Panel (c) shows the maximized utility computed using validation data over fictitious play iterations. Parameter choices are: $\delta\sim U(2,2.5),b\sim U(0.25,0.35),\sigma\sim U(0.2,0.4),\theta,\xi\sim U(0,1),\sigma^{0}\sim U(0.2,0.4)$, $\epsilon\sim U(0.5,1)$. #### 3.2.3 BSDE-based deep learning algorithms We now explain how to adapt the Deep BSDE method introduced in [125] and reviewed in Section 2.3.1 to mean-field FBSDEs. We recall that the principle of the method is to use neural networks to approximate $Y_{0}$ and $(Z_{t})_{t\in[0,T]}$ and to train the neural network parameters by relying on Monte Carlo samples until the terminal condition is approximately matched. In the mean-field setting, the same idea can be used to solve forward-backward systems of McKean-Vlasov (MKV) SDEs; see [138, 90, 142, 170]. Let us consider the FBSDE (3.53) in the absence of common noise, with interactions through the state distribution only, and uncontrolled volatility. We rewrite the problem as: minimize over $y_{0}:\mathbb{R}^{d}\to\mathbb{R}^{d}$ and $z:\mathbb{R}_{+}\times\mathbb{R}^{d}\to\mathbb{R}^{d\times m}$ the cost functional $J(y_{0},z)=\mathbb{E}\left[\,\left|Y^{y_{0},z}_{T}-G(X^{y_{0},z}_{T},\mathcal{L}(X^{y_{0},z}_{T}))\right|^{2}\,\right],$ where $(X^{y_{0},z},Y^{y_{0},z})$ solves $\begin{cases}\,\mathrm{d}X^{y_{0},z}_{t}=B\left(t,X^{y_{0},z}_{t},\mathcal{L}(X^{y_{0},z}_{t}),Y^{y_{0},z}_{t}\right)\,\mathrm{d}t+\sigma(t,X^{y_{0},z}_{t})\,\mathrm{d}W_{t},\quad t\geq 0,\\ \,\mathrm{d}Y^{y_{0},z}_{t}=-F\left(t,X^{y_{0},z}_{t},\mathcal{L}(X^{y_{0},z}_{t}),Y^{y_{0},z}_{t},\sigma^{\operatorname{T}}(t,X^{y_{0},z}_{t})z(t,X^{y_{0},z}_{t})\right)\,\mathrm{d}t+z(t,X^{y_{0},z}_{t})\,\mathrm{d}W_{t},\quad t\geq 0,\\ X^{y_{0},z}_{0}\sim\mu_{0},\qquad Y^{y_{0},z}_{0}=y_{0}(X_{0}^{y_{0},z}).\end{cases}$ (3.63) The above problem is an MFC problem if we view $(X^{y_{0},z}_{t},Y^{y_{0},z}_{t})$ as the state and $(y_{0},z)$ as the control. Under suitable conditions, the optimally controlled process $(X,Y)$ solves the MKV FBSDE system (3.53) and vice versa. Then, to implement this method, we can proceed similarly to the method described in Section 2.6.2. The mean-field distribution can be approximated by an empirical distribution based on a finite population of interacting particles. Furthermore, the controls $y_{0}$ and $z$ can be replaced by neural networks, say $y_{\theta}$ and $z_{\omega}$ with parameters $\theta$ and $\omega$ respectively. Time can be discretized using for instance an Euler-Maruyama scheme. We thus obtain a new optimization problem over finite-dimensional parameters that can be adjusted using SGD.
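To fix ideas, here is a minimal sketch of one SGD step of this procedure in dimension one, with the law $\mathcal{L}(X_{t})$ replaced by the empirical mean of a finite batch of interacting particles. It is an illustration under simplifying assumptions rather than the implementation of the cited works; the coefficient functions `B`, `F`, `sig`, `G` and the sampler `mu0_sampler` are placeholders supplied by the user, and `opt` is an optimizer over the parameters of both networks.

```python
import torch

def deep_bsde_mkv_step(y0_net, z_net, B, F, sig, G, opt, mu0_sampler,
                       n_particles=256, n_steps=50, T=1.0):
    """One SGD step of the deep BSDE method for the MKV FBSDE (3.63), 1D sketch."""
    dt = T / n_steps
    x = mu0_sampler(n_particles)                    # X_0 ~ mu_0, shape (N, 1)
    y = y0_net(x)                                   # Y_0 = y_0(X_0)
    for n in range(n_steps):
        t = torch.full_like(x, n * dt)
        m = x.mean() * torch.ones_like(x)           # empirical proxy for L(X_{t_n})
        z = z_net(torch.cat([t, x], dim=1))
        dw = torch.randn_like(x) * dt ** 0.5
        drift_x = B(t, x, m, y)
        drift_y = -F(t, x, m, y, sig(t, x) * z)
        x = x + drift_x * dt + sig(t, x) * dw       # forward Euler-Maruyama step
        y = y + drift_y * dt + z * dw               # propagate Y using the z network
    loss = ((y - G(x, x.mean() * torch.ones_like(x))) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()    # penalize the terminal mismatch
    return float(loss)
```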
###### Remark 3.14 (Theoretical analysis). Motivated by numerical schemes and in particular the above adaptation of the deep BSDE method [125, 174] to MKV FBSDEs, Reisinger, Stockinger and Zhang in [279] analyzed the a posteriori error of approximate solutions based on a discrete time scheme and a finite population of interacting particles [279, Theorems 3.2 and 4.3]. [170] proposed a deep learning method for computing MKV FBSDEs with a general form of mean-field interactions, and proved that the convergence of the numerical solution obtained by the proposed method to the true solution is free of the curse of dimensionality [170, Theorem 3.9] by using a special class of integral probability metrics previously developed in [169]. Although we focus here on the continuous-state space setting, the same strategy can be applied to finite-state MFGs; see, e.g., [23, 22]. ##### Numerical illustration: a linear-quadratic mean-field game in systemic risk problems. We now consider the MFG version of the systemic risk model introduced in Section 1.2, which has been studied in Section 3.1.2 and revisited in Section 3.1.3. This MFG has been analyzed in [80]. Given a mean-field flow $\mu=(\mu_{t})_{t\in[0,T]}$, the log-monetary reserves of a typical bank evolve according to the dynamics: $\,\mathrm{d}X_{t}=[a(\bar{\mu}_{t}-X_{t})+\alpha_{t}]\,\mathrm{d}t+\sigma\left(\rho\,\mathrm{d}W_{t}^{0}+\sqrt{1-\rho^{2}}\,\mathrm{d}W_{t}\right).$ (3.64) The standard Brownian motions $W^{0}$ and $W$ are independent; $W$ stands for the idiosyncratic noise and $W^{0}$ denotes the systemic shock, which is an example of common noise (see also Section 3.2.2). The cost functions $f$ and $g$ appearing in (3.37) take the following form: $f(t,x,\nu,\alpha)=\frac{1}{2}\alpha^{2}-q\alpha(\bar{\mu}-x)+\frac{\epsilon}{2}(\bar{\mu}-x)^{2},\quad g(x,\nu)=\frac{c}{2}(\bar{\mu}-x)^{2},$ (3.65) which depend only on the mean $\bar{\mu}=\mathbb{E}_{X\sim\mu}[X]$ of the first marginal of the state-action distribution $\nu$. It has been shown in [80] that, in the MFG setting, the open-loop equilibrium is the same as the closed-loop Nash equilibrium, and it admits an explicit solution. Furthermore, it can be characterized using an MKV FBSDE system, which we omit for brevity; see [80] for the details. If $\rho=0$, then one can apply directly the method described above, with $y_{0}$ a function of $X_{0}$ and $z$ a function of $(t,X_{t})$. When $\rho>0$, two changes need to be made: first, there is an extra process $Z^{0}$ to be learned, for which we use a neural network approximation as for $Z$; second, we expect the random variables $Z_{t}$ and $Z^{0}_{t}$ to depend not only on $X_{t}$ but also on the past of the common noise. In general, this would mean learning functions of the common noise’s trajectory. However, in the present case, it is enough to rely on finite-dimensional information. Here, we add $\bar{\mu}_{t}$ as an input to the neural networks playing the roles of $Z_{t}$ and $Z^{0}_{t}$, and this is sufficient to learn the optimal solution; see [80]. Figure 14 displays three sample trajectories of $X$ and $Y$, obtained after training the neural networks for $Y_{0},Z$ and $Z^{0}$, by simulating in a forward fashion the trajectories of $X$ and $Y$ using Monte Carlo samples and the same Euler-Maruyama scheme used in the numerical method. One can see that the approximation is better for $X$ than for $Y$, particularly towards the end of the time interval.
This is probably because the BSDE is solved by guessing the initial point instead of starting from the terminal condition, which results in errors accumulating over time. Furthermore, [90] shows that the results improve as the number of time steps, particles, and units in the neural network increases. In the numerical experiments presented here, we used the following parameters: $\sigma=0.5,\rho=0.5,q=0.5,\epsilon=q^{2}+0.5=0.75,a=1,c=1.0$ and $T=0.5$. (a) Trajectory of $X^{i},i=1,2,3$ (b) Trajectory of $Y^{i},i=1,2,3$ Figure 14: Systemic risk MFG example solved by the algorithm described in Section 3.2.3. Left: three sample trajectories of $X$ using the neural network approach (’Deep Solver’, solid lines, in cyan, blue, and green) or using the analytical formula (’benchmark’, dashed lines, in orange, red and purple). Right: three sample trajectories of $Y$ (similar labels and colors). Note that the analytical formula satisfies the true terminal condition, whereas the solution computed by neural networks satisfies it only approximately since the trajectories are generated in a forward way starting from the learned initial condition. #### 3.2.4 PDE-based deep learning algorithms Besides direct parameterization of the controls and BSDE-based methods, it is also possible to adapt PDE-based methods to solve MFGs. In fact, such methods can be used in two different ways. First, one can use them to tackle the forward-backward PDE system characterizing the distribution and the value function (see Section 3.2.1.2). One can also try to solve the master equation (see Section 3.2.1.4). ##### 3.2.4.1 Deep learning for mean-field PDE systems We now consider the PDE systems describing the equilibrium or social optimum in MFG or MFC, respectively. The Deep Galerkin Method (DGM) introduced in [290] and reviewed in Section 2.5 has been adapted to solve such PDE systems; see [12, 89, 283, 66, 244, 232]. We recall that the principle of the method is, for a single PDE, to replace the unknown function by a neural network and to optimize the parameters so as to minimize the residual of the PDE. For the sake of presentation, we consider the MFG PDE system (3.42)–(3.44). In line with the DGM method described in Section 2.5, we proceed as follows. First, the MFG PDE system is rewritten as a minimization problem over the pair consisting of the density and the value function. The loss function is the sum of the two PDE residuals, as well as penalization terms for the initial and terminal conditions. Instead of the whole state space $\mathbb{R}^{d}$, we focus on a compact subset $\tilde{\mathcal{Q}}\subset\mathbb{R}^{d}$. If needed, extra penalization terms taking into account the boundary conditions can be added to the loss function. To be specific, we introduce the following loss function $L(\mu,u)=L^{\text{(KFP)}}(\mu,u)+L^{\text{(HJB)}}(\mu,u),$ (3.66) which is composed of one term for each PDE of the MFG system (3.42)–(3.44). Each term is itself split into two terms: one for the residual inside the domain and one for the initial or terminal condition.
The KFP loss function is $\displaystyle L^{\text{(KFP)}}(\mu,u)$ $\displaystyle=C^{\text{(KFP)}}\left\|\displaystyle\partial_{t}\mu-\sum_{i,j}\frac{\partial^{2}}{\partial x_{i}\partial x_{j}}\left(D_{i,j}\mu\right)+\mathrm{div}\left(\mu b\right)\right\|^{2}_{L^{2}([0,T]\times\tilde{\mathcal{Q}})}+C^{\text{(KFP)}}_{0}\left\|\mu(0)-\mu_{0}\right\|^{2}_{L^{2}(\tilde{\mathcal{Q}})},$ (3.67) with $\nu_{t}=\mu_{t}\circ(I_{d},\alpha(t,\cdot))^{-1}$ where $\alpha(t,x)=\alpha(t,x,\nu_{t},\nabla_{x}u(t,x),\mathrm{Hess}_{x}u(t,x))$, and $D$ and $b$ are defined as: $\displaystyle D(t,x)=\frac{1}{2}\sigma(t,x,\nu_{t},\alpha(t,x))\sigma(t,x,\nu_{t},\alpha(t,x))^{\operatorname{T}},\quad b(t,x)=b(t,x,\nu_{t},\alpha(t,x)).$ (3.68) The HJB loss function is $L^{\text{(HJB)}}(\mu,u)=C^{\text{(HJB)}}\left\|\partial_{t}u+\min_{\alpha\in\mathcal{A}}H(\cdot,\cdot,\nu,\nabla_{x}u,\mathrm{Hess}_{x}u,\alpha)\right\|^{2}_{L^{2}([0,T]\times\tilde{\mathcal{Q}})}+C^{\text{(HJB)}}_{T}\left\|u(T)-g(\cdot,\mu(T))\right\|^{2}_{L^{2}(\tilde{\mathcal{Q}})},$ (3.69) with $H$ defined by (3.43). The weights $C^{\text{(KFP)}},C^{\text{(KFP)}}_{0},C^{\text{(HJB)}},$ and $C^{\text{(HJB)}}_{T}$ are positive constants that are used to tune the importance of each component relative to the other components. If $(\mu,u)$ is a smooth enough solution to the PDE system (3.42)–(3.44), then $L(\mu,u)=0$. From here, the same strategy as in the DGM can be applied: one can look for an approximate solution using a class of parameterized functions for $\mu$ and $u$, replace the $L^{2}$ norms by integrals, and use samples to get Monte Carlo estimates; see [290] and Section 2.5 for more details. ###### Remark 3.15. The same ideas can be applied to a variety of settings such as ergodic MFGs [89], non-separable Hamiltonians [21] or finite-state MFGs [247]. Each case may require some adjustments. For instance, in the case of ergodic MFGs, the initial and terminal conditions are replaced by normalization conditions; see [230]. Furthermore, if the PDE system was initially posed on a bounded domain and the solution had to satisfy boundary conditions, then these extra conditions could be dealt with by adding more penalty terms as in [89] or changing the architecture of the neural networks [66]. ##### Numerical illustration: a mean-field model of optimal execution. We now present an example based on a model of optimal execution. This model is similar to the one studied in Subsection 2.6.2. We consider a population of traders in which each trader wants to liquidate $Q_{0}$ shares of a given stock by a fixed time horizon $T$. At time $t\in[0,T]$, we denote by $S_{t}$ the price of the stock, by $Q_{t}$ the inventory (i.e., number of shares) held by the representative trader, and by $X_{t}$ their wealth. These state variables are subject to the following dynamics $\begin{cases}\mathrm{d}S_{t}=\gamma\bar{\mu}_{t}\,\mathrm{d}t+\sigma\,\mathrm{d}W_{t},\\ \mathrm{d}Q_{t}=\alpha_{t}\,\mathrm{d}t,\\ \mathrm{d}X_{t}=-\alpha_{t}(S_{t}+\kappa\alpha_{t})\,\mathrm{d}t.\end{cases}$ The evolution of the price $S$ is stochastic, reflecting that it cannot be predicted with certainty. The randomness, scaled by $\sigma$, comes in through a standard Wiener process $W$. Furthermore, the drift of $S$ captures the permanent price impact $\gamma\bar{\mu}_{t}$ at time $t$. Here $\gamma>0$ is a multiplicative constant and $\bar{\mu}_{t}$ is the aggregate trading rate of all the traders. The control $\alpha_{t}$ at time $t$ corresponds to the individual rate of trading of the representative trader.
Last, $\kappa>0$ is a constant that represents a quadratic transaction cost. We assume that the representative agent tries to maximize the following quantity, in which the first two terms reflect their payoff while the last two terms capture their risk aversion: $\mathbb{E}\left[X_{T}+Q_{T}S_{T}-A|Q_{T}|^{2}-\phi\int_{0}^{T}|Q_{t}|^{2}\,\mathrm{d}t\right].$ The constants $\phi>0$ and $A>0$ give weights to the penalties for holding inventory through time and at the terminal time, respectively. ###### Remark 3.16. Except for the fact that $\bar{\mu}_{t}$ is here endogenous, this is the model considered in [96], to which a deep learning method has been applied in [237] to approximate the optimal control on real data. In contrast with the model studied in Subsection 2.6.2, the model considered here is not linear-quadratic and the inventory is not directly subject to random shocks. We refer the interested reader to [95] and [94] for more details and variants of these models. Although this problem is formulated with three state variables, we can actually reduce the complexity of the problem in the following way. When $(\bar{\mu}_{t})_{0\leq t\leq T}$ is given, the optimal control of the representative agent can be found by solving an HJB equation. Following [96], the value function $V(t,x,s,q)$ can be decomposed as $V(t,x,s,q)=x+qs+v(t,q)$ for some function $v$ which is a solution to $-\gamma\bar{\mu}q=\partial_{t}v-\phi q^{2}+\sup_{\alpha}\{\alpha\partial_{q}v-\kappa\alpha^{2}\},$ with terminal condition $v(T,q)=-Aq^{2}$. The maximizer in the supremum leads to the optimal control, which can be expressed as $\alpha^{*}_{t}(q)=\frac{\partial_{q}v(t,q)}{2\kappa}$. Plugging this expression into the consistency condition yields that, at equilibrium, the aggregate trading rate is $\bar{\mu}_{t}=\int\alpha^{*}_{t}(q)\mu(t,dq)=\int\frac{\partial_{q}v(t,q)}{2\kappa}\mu(t,dq),$ where $\mu(t,\cdot)$ is the distribution of inventories at time $t$, satisfying the KFP PDE: $\partial_{t}\mu+\partial_{q}\left(\mu\frac{\partial_{q}v(t,q)}{2\kappa}\right)=0,\quad t\geq 0,\qquad\mu(0,\cdot)=\mu_{0}.$ As a consequence, the equilibrium solution of the MFG satisfies $\left\{\begin{aligned} &\quad-\gamma\bar{\mu}q=\partial_{t}v-\phi q^{2}+\frac{|\partial_{q}v(t,q)|^{2}}{4\kappa},\\ &\quad\partial_{t}\mu+\partial_{q}\left(\mu\frac{\partial_{q}v(t,q)}{2\kappa}\right)=0,\\ &\quad\bar{\mu}_{t}=\int\frac{\partial_{q}v(t,q)}{2\kappa}\mu(t,dq),\\ &\quad\mu(0,\cdot)=\mu_{0},\qquad v(T,q)=-Aq^{2}.\end{aligned}\right.$ (3.70) The mean-field coupling between the two equations is non-local since it involves $\bar{\mu}_{t}$, and it combines the population distribution with the HJB solution. The value function, the associated control, and the mean of the distribution can be computed by solving a system of ODEs, which provides a benchmark to test numerical methods. We refer to [70] for more details. Moreover, [12] proposed a further change of variables to simplify the numerical computations and then used the DGM to approximate the solution of the transformed PDE system. Here, to simplify the presentation, we stick to the above PDE system (3.70) and solve it directly using the DGM. The initial and terminal conditions are imposed by penalization. For the non-local term $\bar{\mu}_{t}$, we use Monte Carlo samples to estimate the integral. In the implementation, we used the following values for the parameters: $T=1$, $\sigma=0.3$, $A=1$, $\phi=1$, $\kappa=1$, $\gamma=1$, and a Gaussian initial distribution with mean $4$ and variance $0.3$.
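As an illustration of how the residuals of (3.70) can be turned into a training loss, the following sketch computes a Monte Carlo version of the DGM objective on randomly sampled collocation points; it is not the code used to produce the figures below. The networks `v_net` and `mu_net` take $(t,q)$ as inputs, the non-local term $\bar{\mu}_{t}$ is provided by a separate Monte Carlo estimator `bar_mu_fn`, and all names are hypothetical.

```python
import torch

def dgm_loss_trade_crowding(v_net, mu_net, bar_mu_fn, mu0_pdf,
                            gamma=1.0, kappa=1.0, phi=1.0, A=1.0, T=1.0,
                            n_coll=512, q_lo=0.0, q_hi=8.0,
                            c_hjb=1.0, c_kfp=1.0, c_term=1.0, c_init=1.0):
    """Monte Carlo DGM loss for the PDE system (3.70) (illustrative sketch).

    v_net, mu_net : networks (t, q) -> scalar approximating v and mu.
    bar_mu_fn     : callable t -> estimate of the non-local term bar{mu}_t.
    mu0_pdf       : callable q -> density of the initial inventory distribution.
    """
    t = T * torch.rand(n_coll, 1, requires_grad=True)
    q = q_lo + (q_hi - q_lo) * torch.rand(n_coll, 1, requires_grad=True)
    v, mu = v_net(t, q), mu_net(t, q)

    v_t = torch.autograd.grad(v.sum(), t, create_graph=True)[0]
    v_q = torch.autograd.grad(v.sum(), q, create_graph=True)[0]
    mu_t = torch.autograd.grad(mu.sum(), t, create_graph=True)[0]
    flux = mu * v_q / (2.0 * kappa)
    flux_q = torch.autograd.grad(flux.sum(), q, create_graph=True)[0]

    bar_mu = bar_mu_fn(t)
    # HJB residual: d_t v - phi q^2 + (d_q v)^2 / (4 kappa) + gamma bar_mu q = 0
    res_hjb = v_t - phi * q ** 2 + v_q ** 2 / (4.0 * kappa) + gamma * bar_mu * q
    # KFP residual: d_t mu + d_q( mu d_q v / (2 kappa) ) = 0
    res_kfp = mu_t + flux_q

    # penalties for the terminal condition v(T, q) = -A q^2 and mu(0, .) = mu_0
    q_pen = q_lo + (q_hi - q_lo) * torch.rand(n_coll, 1)
    term_pen = (v_net(torch.full_like(q_pen, T), q_pen) + A * q_pen ** 2) ** 2
    init_pen = (mu_net(torch.zeros_like(q_pen), q_pen) - mu0_pdf(q_pen)) ** 2

    return (c_hjb * (res_hjb ** 2).mean() + c_kfp * (res_kfp ** 2).mean()
            + c_term * term_pen.mean() + c_init * init_pen.mean())
```

In practice, $\bar{\mu}_{t}$ can be refreshed periodically by averaging $\partial_{q}v/(2\kappa)$ over samples drawn from the current estimate of $\mu(t,\cdot)$, in line with the Monte Carlo estimation of the integral mentioned above.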
To ensure that the neural network for the distribution always outputs positive values, we used the exponential function as the activation function on the last layer. Figure 15 shows the evolution of the distribution $m$. Figure 16 shows the control obtained from the neural network approximating the HJB solution. The final distribution is concentrated near $0$, which is consistent with the intuition that the traders need to liquidate. Furthermore, the learned control coincides with the theoretical optimal control that can be computed by solving an ODE system [70]. Figure 15: Trade crowding MFG example in Section 3.2.4.1 solved by DGM. Evolution of the distribution $m$. Left: surface with the horizontal axes representing time and space and the vertical axis representing the value of the density. Right: contour plot of the density with a dashed red line corresponding to the mean of the density computed by the semi-explicit formula. Figure 16: Trade crowding MFG example in Section 3.2.4.1 solved by DGM. Each plot corresponds to the control at a different time step: optimal control $\alpha^{*}$ (dashed line) and learned control (solid line). ##### 3.2.4.2 Deep learning for mean-field master equation We now turn our attention to the question of solving the MFG master equation (Section 3.2.1.4). Intuitively, the main motivation is to be able to approximate the value function of a representative player for any population distribution. This is in contrast with the methods presented above, which are based on controls that are fully decentralized in the sense that they are functions of the time and the state of the agent only. The fact that they do not depend on the population distribution is an advantage in that it simplifies the implementation, but it is also a limitation since the agent is not able to react to new distributions. For example, if the initial distribution is not known, the agent is not able to solve the forward equation and hence she is not able to anticipate the distribution at future times. The presence of common noise in the dynamics poses a similar challenge. For these reasons, being able to approximately solve the master equation is interesting for applications. When the state space is continuous, the distribution is an infinite-dimensional object which is hard to approximate. For simplicity, we will thus focus here on a finite-state setting, in which case the distribution is simply a histogram. The convergence of finite-state MFGs to continuous-state MFGs has been studied for instance in [165]. Even though assuming the state space to be finite resolves the question of approximating the distribution, the master equation is posed on a high-dimensional space if the number of states is large. We will hence rely once again on neural networks to approximate the solution to this equation. Master equation for finite-state MFG. We consider a finite-state MFG model based on the presentation of such models in [84, Section 7.2]. We consider a finite state space $\mathcal{E}=\{e_{1},\dots,e_{d}\}$ and an action space $\mathcal{A}\subseteq\mathbb{R}^{k}$, which can be discrete or continuous. The states can be viewed as one-hot vectors, i.e., as the elements of the canonical basis of $\mathbb{R}^{d}$. Then, the set of probability distributions on $\mathcal{E}$ is the simplex $\{m\in\mathbb{R}^{d}\,|\,m_{i}\geq 0,\ \sum_{i=1}^{d}m_{i}=1\}$, and we will sometimes write $m(x)$ instead of $m(\{x\})$.
The running cost and the terminal cost are denoted by $f:\mathcal{E}\times\mathcal{P}(\mathcal{E})\times\mathcal{A}\to\mathbb{R}$ and $g:\mathcal{E}\times\mathcal{P}(\mathcal{E})\to\mathbb{R}$. The dynamics are given by a jump rate function denoted by $\lambda:\mathcal{E}\times\mathcal{E}\times\mathcal{P}(\mathcal{E})\times\mathcal{A}\to\mathbb{R}$. We denote by $\mathbb{R}^{\mathcal{E}}$ the set of functions from $\mathcal{E}$ to $\mathbb{R}$. In this context, a finite-state MFG equilibrium is a pair $(\hat{m},\hat{\alpha})$ with $\hat{m}:[0,T]\times\mathcal{E}\to\mathbb{R}$ and $\hat{\alpha}:[0,T]\times\mathcal{E}\to\mathcal{A}$ such that 1. $\hat{\alpha}$ minimizes $\displaystyle J^{MFG}_{\hat{m}}:\alpha\mapsto\mathbb{E}\left[\int_{0}^{T}f(X_{t}^{\hat{m},\alpha},\hat{m}(t,\cdot),\alpha(t,X_{t}^{\hat{m},\alpha}))\,\mathrm{d}t+g(X_{T}^{\hat{m},\alpha},\hat{m}(T,\cdot))\right],$ subject to: $X^{\hat{m},\alpha}=(X_{t}^{\hat{m},\alpha})_{t\geq 0}$ is a nonhomogeneous $\mathcal{E}$-valued Markov chain with transition probabilities determined by the $Q$-matrix of rates $q^{\hat{m},\alpha}:[0,T]\times\mathcal{E}\times\mathcal{E}\to\mathbb{R}$ given by $q^{\hat{m},\alpha}(t,x,x^{\prime})=\lambda(x,x^{\prime},\hat{m}(t,\cdot),\alpha(t,x)),\qquad(t,x,x^{\prime})\in[0,T]\times\mathcal{E}\times\mathcal{E},$ (3.71) and $X_{0}^{\hat{m},\alpha}$ has distribution with density $m_{0}$; 2. For all $t\in[0,T]$, $\hat{m}(t,\cdot)$ is the law of $X_{t}^{\hat{m},\hat{\alpha}}$. The Hamiltonian of the problem is defined as $H(x,m,h)=\sup_{\alpha\in\mathcal{A}}-L(x,m,h,\alpha)$ where $L:\mathcal{E}\times\mathcal{P}(\mathcal{E})\times\mathbb{R}^{\mathcal{E}}\times\mathcal{A}\to\mathbb{R}$ denotes the Lagrangian $L(x,m,h,\alpha)=\sum_{x^{\prime}\in\mathcal{E}}\lambda(x,x^{\prime},m,\alpha)h(x^{\prime})+f(x,m,\alpha).$ Under suitable assumptions on the model, the supremum in the definition of $H$ admits a unique maximizer for every $(x,m,h)\in\mathcal{E}\times\mathcal{P}(\mathcal{E})\times\mathbb{R}^{\mathcal{E}}$, which we denote by $\alpha^{*}(x,m,h)=\operatorname*{arg\,max}_{\alpha\in\mathcal{A}}-L(x,m,h,\alpha).$ (3.72) The rates of the $Q$-matrix under the optimal control are denoted by $q^{*}(x,x^{\prime},m,h)=\lambda\big(x,x^{\prime},m,\alpha^{*}(x,m,h)\big),$ where $q^{*}:\mathcal{E}\times\mathcal{E}\times\mathcal{P}(\mathcal{E})\times\mathbb{R}^{\mathcal{E}}\to\mathbb{R}$. Similarly to the continuous setting (see Section 3.2.1), the mean-field Nash equilibrium can be characterized using a forward-backward system of deterministic or stochastic equations. Using the deterministic approach, the optimality conditions take the form of an ODE system (instead of a PDE system as in the continuous space case). The system is composed of a forward ODE for the mean-field $m:[0,T]\times\mathcal{E}\to\mathbb{R}$ and a backward ODE for the value function $u:[0,T]\times\mathcal{E}\to\mathbb{R}$.
Under suitable assumptions (see, e.g., [84, Section 7.2]), there is a unique MFG equilibrium $(\hat{m},\hat{\alpha})$, which is characterized by: $\hat{\alpha}(t,x)=\alpha^{*}(x,\hat{m}(t,\cdot),\hat{u}(t,\cdot)),$ where $\alpha^{*}$ is defined by (3.72) and $(\hat{u},\hat{m})$ solves the forward-backward system $\begin{dcases}\displaystyle 0=-\partial_{t}\hat{u}(t,x)+H(x,\hat{m}(t,\cdot),\hat{u}(t,\cdot)),\quad(t,x)\in[0,T)\times\mathcal{E},\\ 0=\partial_{t}\hat{m}(t,x)-\sum_{x^{\prime}\in\mathcal{E}}\hat{m}(t,x^{\prime})q^{*}(x^{\prime},x,\hat{m}(t,\cdot),\hat{u}(t,\cdot)),\quad(t,x)\in(0,T]\times\mathcal{E},\\ \hat{u}(T,x)=g(x,\hat{m}(T,\cdot)),\qquad\hat{m}(0,x)=m_{0}(x),\quad x\in\mathcal{E}.\end{dcases}$ (3.73) This ODE system can be solved using, for example, techniques discussed in previous sections for forward-backward PDE or SDE systems. However, this assumes that the initial distribution $m_{0}$ is known; when it is unknown, new techniques are required. We thus consider the master equation. As in the continuous space case described in Section 3.2.1.4, the solution to the master equation makes the dependence of $\hat{u}$ on the distribution $\hat{m}$ completely explicit. In the present discrete space setting, the master equation can be written as follows (see, e.g., [84, Section 7.2]) $-\partial_{t}\mathcal{U}(t,x,m)+H(x,m,\mathcal{U}(t,\cdot,m))-\sum_{x^{\prime}\in\mathcal{E}}h^{*}(m,\mathcal{U}(t,\cdot,m))(x^{\prime})\frac{\partial\mathcal{U}(t,x,m)}{\partial m(x^{\prime})}=0,$ (3.74) for $(t,x,m)\in[0,T]\times\mathcal{E}\times\mathcal{P}(\mathcal{E})$, with the terminal condition $\mathcal{U}(T,x,m)=g(x,m)$, for $(x,m)\in\mathcal{E}\times\mathcal{P}(\mathcal{E})$. The function $h^{*}:\mathcal{P}(\mathcal{E})\times\mathbb{R}^{\mathcal{E}}\times\mathcal{E}\to\mathbb{R}$ is defined as $h^{*}(m,u)(x^{\prime})=\sum_{x\in\mathcal{E}}\lambda(x,x^{\prime},m,\alpha^{*}(x,m,u))m(x).$ Besides a simple representation of probability distributions, the fact that the state space is finite has another advantage: we do not need to involve the notions of derivative with respect to a measure discussed in Section 3.2.1.4. Instead, we can rely on standard partial derivatives with respect to the finite-dimensional inputs of $\mathcal{U}$. As a matter of fact, in the above equation, $\displaystyle\frac{\partial\mathcal{U}(t,x,m)}{\partial m(x^{\prime})}$ denotes the standard partial derivative of $\mathbb{R}^{d}\ni m\mapsto\mathcal{U}(t,x,m)$ with respect to the coordinate corresponding to $x^{\prime}$ (recall that $m$ is viewed as a vector of dimension $d$). The analog of (3.54) in the continuous case is $\mathcal{U}(t,x,\hat{m}(t))=\hat{u}(t,x),$ (3.75) where $\hat{m}=(\hat{m}(t))_{t}$ is the mean-field equilibrium distribution flow. Notice that both $\hat{m}$ and $\hat{u}$ implicitly depend on the initial distribution $m_{0}$, but $\mathcal{U}$ does not. The master equation (3.74) is posed on a possibly high-dimensional space since the number $d$ of states can be large. To numerically solve this equation, we can thus rely on deep learning methods for high-dimensional PDEs, such as the DGM introduced in [290] and already discussed above in Sections 2.5 and 3.2.4.1. For the sake of completeness, let us mention that this technique boils down to approximating $\mathcal{U}$ by a neural network, say $\mathcal{U}_{\theta}$ with parameters $\theta$, and using SGD to adjust the parameters $\theta$ such that the residual of (3.74) is minimized and the terminal condition is satisfied.
SGD as described in Algorithm 1 in Appendix A.2 is used, where a sample is $\xi=(t,x,m)\in[0,T]\times\mathcal{E}\times\mathcal{P}(\mathcal{E})$ and the loss function is $\mathfrak{L}(\mathcal{U}_{\theta},\xi)=\left|\partial_{t}\mathcal{U}_{\theta}(t,x,m)-H(x,m,\mathcal{U}_{\theta}(t,\cdot,m))+\sum_{x^{\prime}\in\mathcal{E}}h^{*}(m,\mathcal{U}_{\theta}(t,\cdot,m))(x^{\prime})\frac{\partial\mathcal{U}_{\theta}(t,x,m)}{\partial m(x^{\prime})}\right|^{2}.$ (3.76) ##### Numerical illustration: A Cybersecurity model. Here we present an example of the application of the above method. We consider the cybersecurity model introduced in [216]; see also [84, Section 7.2.3]. Each player owns a computer and her goal is to avoid being infected by malware. The state space is denoted by $\mathcal{E}=\{DI,DS,UI,US\}$, which represents the four possible states in which a computer can be, depending on its protection level – defended (D) or undefended (U) – and on its infection status – infected (I) or susceptible (S) of infection. The player can choose to switch her protection level between D and U. The change is not instantaneous, so the player can only influence the transition rate. We represent by “$1$” the fact that the player has the intention to change her level of protection (be it from D to U or from U to D). On the other hand, “$0$” corresponds to the situation where the player does not try to change her protection level. So the set of possible actions is $\mathcal{A}=\{0,1\}$. When the action is equal to $1$, the change of level of protection takes place at a rate $\rho>0$. A computer in states DS or US might get infected either directly by a hacker or by getting the virus from an infected computer. We denote by $v_{H}q_{inf}^{D}$ (resp. $v_{H}q_{inf}^{U}$) the rate of infection from a hacker if the computer is defended (resp. undefended). We denote by $\beta_{UU}\mu(\{UI\})$ (resp. $\beta_{UD}\mu(\{UI\})$) the rate of infection from an undefended infected computer if the computer under consideration is undefended (resp. defended). Likewise, we denote by $\beta_{DU}\mu(\{DI\})$ (resp. $\beta_{DD}\mu(\{DI\})$) the rate of infection from a defended infected computer if the computer under consideration is undefended (resp. defended). Note that these rates involve the distribution since the probability of getting infected should increase with the number of infected computers in the rest of the population. Last, an infected computer can recover and switch to the susceptible state at rate $q_{rec}^{D}$ or $q_{rec}^{U}$ depending on whether it is defended or not. These transition rates can be summarized in matrix form: for $m\in\mathcal{P}(\mathcal{E}),a\in\mathcal{A}$, $\lambda(\cdot,\cdot,m,a)=\left(\lambda(x,x^{\prime},m,a)\right)_{x,x^{\prime}\in\mathcal{E}}=\begin{pmatrix}\dots&P^{m,a}_{DS\rightarrow DI}&\rho a&0\\ q_{rec}^{D}&\dots&0&\rho a\\ \rho a&0&\dots&P^{m,a}_{US\rightarrow UI}\\ 0&\rho a&q_{rec}^{U}&\dots\end{pmatrix},$ where $\displaystyle P^{m,a}_{DS\rightarrow DI}=v_{H}q_{inf}^{D}+\beta_{DD}m(\{DI\})+\beta_{UD}m(\{UI\}),$ $\displaystyle P^{m,a}_{US\rightarrow UI}=v_{H}q_{inf}^{U}+\beta_{UU}m(\{UI\})+\beta_{DU}m(\{DI\}).$ The dots ($\dots$) on each row stand for the value such that the sum of the coefficients on this row equals $0$. We assume that each player wants to avoid seeing her computer being infected, but protecting a computer costs some resources.
So the running cost is of the form $f(t,x,\nu,\alpha)=-\left[k_{D}\mathbf{1}_{\{DI,DS\}}(x)+k_{I}\mathbf{1}_{\{DI,UI\}}(x)\right],$ where $k_{D}>0$ is a protection cost to be paid whenever the computer is defended, and $k_{I}>0$ is a penalty incurred if the computer is infected. We consider $g\equiv 0$ (no terminal cost). By using the DGM, we train a neural network $\mathcal{U}_{\theta}$ to approximate the solution $\mathcal{U}$ to the master equation. Equation (3.75) provides us with a way to check how accurate this approximation is: we can fix an initial distribution, solve the forward-backward ODE system (which is easy given the initial condition), and then compare the value of the neural network $\mathcal{U}_{\theta}$ evaluated along the equilibrium flow of distributions with the solution to the backward ODE for the value function. To be specific, for every $m_{0}$, we first compute the equilibrium value function $\hat{u}^{m_{0}}$ and the equilibrium flow of distributions $\hat{m}^{m_{0}}$. We then evaluate $\mathcal{U}_{\theta}(t,x,\hat{m}^{m_{0}}(t,\cdot))$ for all $t\in[0,T]$ and check how close it is to $\hat{u}^{m_{0}}(t,x)$ for each of the four possible states $x$. Figures 17–19 show that the two curves (for each state) coincide for at least three different initial conditions. This means that, using the DGM, we managed to train a neural network that accurately represents the value function of a representative player for various distributions at once. In the numerical experiments, we used the following values for the parameters: $\displaystyle\beta_{UU}=0.3,\beta_{UD}=0.4,\beta_{DU}=0.3,\beta_{DD}=0.4,\qquad v_{H}=0.2,\lambda=0.5,$ $\displaystyle q_{rec}^{D}=0.1,q_{rec}^{U}=0.65,q_{inf}^{D}=0.4,q_{inf}^{U}=0.3,\qquad k_{D}=0.3,k_{I}=0.5.$ Figure 17: MFG Cybersecurity example in Section 3.2.4.2. Test case 1: Evolution of the distribution $m^{m_{0}}$ (left) and the value function $u^{m_{0}}$ and $\mathcal{U}(\cdot,\cdot,m^{m_{0}}(\cdot))$ (right) for $m_{0}=(1/4,1/4,1/4,1/4)$. First published in [232] by the American Mathematical Society. Figure 18: MFG Cybersecurity example in Section 3.2.4.2. Test case 2: Evolution of the distribution $m^{m_{0}}$ (left) and the value function $u^{m_{0}}$ and $\mathcal{U}(\cdot,\cdot,m^{m_{0}}(\cdot))$ (right) for $m_{0}=(1,0,0,0)$. First published in [232] by the American Mathematical Society. Figure 19: MFG Cybersecurity example in Section 3.2.4.2. Test case 3: Evolution of the distribution $m^{m_{0}}$ (left) and the value function $u^{m_{0}}$ and $\mathcal{U}(\cdot,\cdot,m^{m_{0}}(\cdot))$ (right) for $m_{0}=(0,0,0,1)$. First published in [232] by the American Mathematical Society. ## 4 Reinforcement Learning All the previous methods rely, in one way or another, on the fact that the cost functions $f$ and $g$ as well as the drift $b$ and the volatility $\sigma$ (_cf._ (2.2)–(2.3)) are known. However, in many applications, coming up with a realistic and accurate model is a daunting task. It is sometimes impossible to guess the form of the dynamics, or the way the costs are incurred. This has motivated the development of so-called model-free methods. Reinforcement learning (RL) theory provides a framework for studying such problems. Intuitively, an agent evolving in an environment can take actions and observe the consequences of her actions: the state of the environment (or her own state) changes, and a cost is incurred by the agent. The agent does not know how the new state and the cost are computed.
The goal for the agent is then to learn an optimal behavior by trial and error. Numerous algorithms have been developed under the topic of RL; see, e.g., the surveys and books [206, 64, 242, 293, 167]. Most of them focus on RL itself, presenting state-of-the-art methods for single-agent or multi-agent problems, and some provide theoretical guarantees on numerical performance. We aim to review its connections to stochastic control and games, as well as the mean-field setting. We shall start by discussing how the problems of Section 2 are formulated as single-agent RL. (The terminology RL comes from the perspective of artificial intelligence/computer science; in the operations research community, it is often called approximate dynamic programming (ADP) [275].) Although we here focus on the traditional presentation of RL in discrete time, let us mention that a continuous-time stochastic optimal control viewpoint on RL has also been studied; see, e.g., [260] for a policy gradient algorithm, and [299, 298] for a mean-variance portfolio problem and for generic continuous time and space problems. Furthermore, [203, 204] studied policy-based and value function-based RL methods. This viewpoint has also been extended in several directions, such as variance reduction techniques [212], risk-aware problems [199] and mean-field games [163, 136]. Since the environment is unknown to the agent, a recurring question that needs to be addressed is the trade-off between exploration and exploitation, whether in the single-agent, multiple-agent, or mean-field settings. Exploitation involves choosing actions that the agent believes will yield the highest immediate rewards based on her current knowledge; equivalently, the agent follows the best-known strategies to maximize her short-term gains. Exploration, on the other hand, means taking actions that the agent has less information about, even if it might not result in the highest immediate rewards. The purpose of exploration is to gather more data and learn about the environment to improve the agent’s long-term performance. Pure exploitation can lead to suboptimal decisions if the agent’s initial knowledge is incomplete or incorrect: she may miss out on potentially better actions. Pure exploration, while informative, can be inefficient and may lead to delayed or reduced rewards, as the agent keeps trying unproven actions. Effective RL algorithms aim to strike a balance between these opposing objectives. Alongside classical approaches like $\epsilon$-greedy and Upper-Confidence-Bound exploration, recent developments have expanded our understanding of managing this trade-off to optimize an agent’s learning and decision-making process in dynamic and uncertain environments. Notable contributions include [298], which elucidates the trade-off through entropy regularization from a continuous-time stochastic control perspective and offers theoretical support for Gaussian exploration, particularly in the context of a linear-quadratic regulator. [163] explores entropy regularization and devises a policy-gradient algorithm for exploration within the mean-field setting. Furthermore, [122] demonstrates that common noise can serve as an exploration mechanism for learning the solution of a mean-field game through a linear-quadratic model. ### 4.1 Reinforcement learning for stochastic control problems Recall the stochastic control problems studied in Section 2; see (2.2)–(2.3).
In this section, we will assume that the agent cannot directly access $b,\sigma,f$ and $g$, but can observe the “next step” information given the current state and control. We consider the time-discretized problem $\displaystyle\check{X}_{t_{n+1}}=\check{X}_{t_{n}}+b(t_{n},\check{X}_{t_{n}},\alpha_{t_{n}})\Delta t+\sigma(t_{n},\check{X}_{t_{n}},\alpha_{t_{n}})\Delta\check{W}_{t_{n}},$ (4.1) $\displaystyle\min_{(\alpha_{t_{n}})_{n=0,\dots,N_{T}-1}}\mathbb{E}\left[\sum_{n=0}^{N_{T}-1}f(t_{n},\check{X}_{t_{n}},\alpha_{t_{n}})\Delta t+g(\check{X}_{T})\right],$ (4.2) where $0=t_{0}<t_{1}<\ldots<t_{N_{T}}=T,\text{ with }t_{n}-t_{n-1}=\Delta t=T/N_{T},$ (4.3) is the temporal discretization of $[0,T]$ as before. By doing so, the system is Markovian, and can be viewed as a Markov decision process (MDP). #### 4.1.1 Markov decision processes Problem (4.1)–(4.2) can be recast as an MDP, which is a tuple $(\mathcal{X},\mathcal{A},p,f,g,N_{T})$, where * • $\mathcal{X}$ is the set of states called the state space; * • $\mathcal{A}$ is the set of actions called the action space; * • $N_{T}<+\infty$ is the time horizon; * • $p:\mathcal{X}\times\mathcal{A}\times\{0,\Delta t,2\Delta t,\dots,T\}\to\mathcal{P}(\mathcal{X})$ is the transition kernel, and $p(\cdot|x,a,t_{n})$ is a probability density function; * • $f:\{0,\Delta t,2\Delta t,\dots,T\}\times\mathcal{X}\times\mathcal{A}\to\mathbb{R}$ is the one-step cost function, and $f(t_{n},x,a)$ is the immediate cost at time $t_{n}$ at state $x$ due to action $a$; * • $g:\mathcal{X}\to\mathbb{R}$ is the terminal cost function, and $g(x)$ is the terminal cost at the final time $T$. A large part of the RL literature focuses on the infinite horizon setting with discounted costs. Furthermore, the state space and action space are usually discrete (and they are often in fact finite), in which case $p(x^{\prime}|x,a,t_{n})=\mathbb{P}(X_{t_{n+1}}=x^{\prime}|X_{t_{n}}=x,\alpha_{t_{n}}=a)$ is the probability to go to state $x^{\prime}$ at time $t_{n+1}$ if at time $t_{n}$ the state is $x$ and the action is $a$. However, for the sake of consistency with the previous sections and the literature on optimal control, we stick to the finite horizon and continuous space setting in this section. In model-free RL, the agent typically uses multiple episodes to learn the control that optimizes (4.2), relying on a simulator of (4.1). In one episode of learning, the agent-environment interaction is as follows: starting with $X_{0}\in\mathcal{X}$, the agent chooses $\alpha_{0}\in\mathcal{A}$, pays a one-step cost $f(0,X_{0},\alpha_{0})\Delta t$ and finds herself in a new state $X_{t_{1}}$; the process continues, forming a sequence: $X_{0},\;\alpha_{0},\;f(0,X_{0},\alpha_{0})\Delta t,\;X_{t_{1}},\;\alpha_{t_{1}},\;f(t_{1},X_{t_{1}},\alpha_{t_{1}})\Delta t,\;\ldots,\;X_{T},\;g(X_{T}).$ (4.4) Under the Euler scheme (4.1), given the state-action pair $(\check{X}_{t_{n}},\alpha_{t_{n}})=(x,a)$ at time $t_{n}$, $X_{t_{n+1}}$ follows a normal distribution $\mathcal{N}(x+b(t_{n},x,a)\Delta t,\sigma^{2}(t_{n},x,a)\Delta t)$. In RL, there are four main components: policy, reward signal, value function, and optionally, a model of the environment. The MDP provides a mathematical framework to describe the agent-environment interface.
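For concreteness, the following sketch shows how the Euler scheme (4.1) and the costs in (4.2) can be wrapped as a simulator that only exposes sampled transitions and realized costs, hiding $b$, $\sigma$, $f$ and $g$ from the learning agent. It is a schematic illustration with hypothetical names, not a specific library interface.

```python
import numpy as np

class EulerControlEnv:
    """Episodic environment generated by the Euler scheme (4.1) and costs (4.2).

    The agent only sees (t_n, X_{t_n}), the sampled next state and the incurred
    cost; the coefficients b, sigma, f, g remain hidden inside the simulator.
    """

    def __init__(self, b, sigma, f, g, x0_sampler, T=1.0, n_steps=50, seed=0):
        self.b, self.sigma, self.f, self.g = b, sigma, f, g
        self.x0_sampler = x0_sampler
        self.dt = T / n_steps
        self.n_steps = n_steps
        self.rng = np.random.default_rng(seed)

    def reset(self):
        self.n = 0
        self.x = self.x0_sampler(self.rng)
        return 0.0, self.x

    def step(self, a):
        t = self.n * self.dt
        dw = self.rng.normal(scale=np.sqrt(self.dt), size=np.shape(self.x))
        cost = self.f(t, self.x, a) * self.dt
        self.x = self.x + self.b(t, self.x, a) * self.dt + self.sigma(t, self.x, a) * dw
        self.n += 1
        done = self.n == self.n_steps
        if done:
            cost += self.g(self.x)          # terminal cost g(X_T) added at the end
        return (self.n * self.dt, self.x), cost, done
```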
A _policy_ $\pi:\{0,\Delta t,2\Delta t,\dots,T\}\times\mathcal{X}\to\mathcal{P}(\mathcal{A})$ maps each time and state to a probability distribution over actions, and $\pi_{t}(a|x)$ describes the probability of choosing action $a$ at state $x$ at time $t$; the action taken is thus in general random. The _value function_ associated to a specific policy $\pi$ is denoted by $V^{\pi}_{t}$ and defined as the expected cost when starting from $x$ at time $t$ and following $\pi$ thereafter, i.e., $V_{t_{n}}^{\pi}(x)=\mathbb{E}^{\pi}\left[\sum_{j=n}^{N_{T}-1}f(t_{j},X_{t_{j}},\alpha_{t_{j}})\Delta t+g(X_{T})|X_{t_{n}}=x\right],$ (4.5) where the superscript $\pi$ over the expectation means that, at each time step, the action is sampled according to $\pi$. Similarly, the _action-value function_ $Q^{\pi}_{t}$ associated to $\pi$ is defined as the expected cost when starting from $x$ at time $t$, taking the action $a$ and then following $\pi$, i.e., $Q^{\pi}_{t_{n}}(x,a)=\mathbb{E}^{\pi}\left[\sum_{j=n}^{N_{T}-1}f(t_{j},X_{t_{j}},\alpha_{t_{j}})\Delta t+g(X_{T})|X_{t_{n}}=x,\;\alpha_{t_{n}}=a\right].$ (4.6) Both functions satisfy the dynamic programming equations (also called Bellman equations), $\displaystyle V^{\pi}_{t_{n}}(x)=\int_{a\in\mathcal{A}}\pi_{t_{n}}(a|x)\int_{x^{\prime}\in\mathcal{X}}p(x^{\prime}|x,a,t_{n})[f(t_{n},x,a)\Delta t+V^{\pi}_{t_{n+1}}(x^{\prime})]\,\mathrm{d}x^{\prime}\,\mathrm{d}a,$ (4.7) $\displaystyle Q_{t_{n}}^{\pi}(x,a)=\int_{x^{\prime}\in\mathcal{X}}p(x^{\prime}|x,a,t_{n})[f(t_{n},x,a)\Delta t+V_{t_{n+1}}^{\pi}(x^{\prime})]\,\mathrm{d}x^{\prime},$ (4.8) with terminal conditions $V_{T}^{\pi}(x)=Q_{T}^{\pi}(x,a)=g(x)$, where we have simplified the subscript $t_{N_{T}}=T$. The goal of RL is to identify the optimal $\pi^{\ast}=(\pi^{\ast}_{t})_{t}$ that minimizes $V_{t}^{\pi}(x)$ for every $t$ and $x\in\mathcal{X}$. To this end, one also works with the _optimal value function_ $V^{\ast}_{t}(x)=\inf_{\pi}V_{t}^{\pi}(x)$ and the _optimal action-value function_ defined as $Q^{\ast}_{t}(x,a)=\inf_{\pi}Q^{\pi}_{t}(x,a)$, which satisfy the optimal Bellman equations $\displaystyle V^{\ast}_{t_{n}}(x)=\inf_{a\in\mathcal{A}}\int_{x^{\prime}\in\mathcal{X}}p(x^{\prime}|x,a,t_{n})[f(t_{n},x,a)\Delta t+V^{\ast}_{t_{n+1}}(x^{\prime})]\,\mathrm{d}x^{\prime},$ (4.9) $\displaystyle Q^{\ast}_{t_{n}}(x,a)=f(t_{n},x,a)\Delta t+\int_{x^{\prime}\in\mathcal{X}}p(x^{\prime}|x,a,t_{n})\inf_{a^{\prime}\in\mathcal{A}}Q^{\ast}_{t_{n+1}}(x^{\prime},a^{\prime})\,\mathrm{d}x^{\prime},$ (4.10) with terminal conditions $V_{T}^{\ast}(x)=Q_{T}^{\ast}(x,a)=g(x)$. Model-free RL aims at computing $\pi^{*}$ without using the knowledge of the transition probability kernel $p$, instead relying on samples of transitions $X_{t_{n+1}}\sim p(x^{\prime}|x,a,t_{n})$. There are primarily two categories of learning methods: value-based methods and policy gradient methods. In the continuous-time framework, the connection between policy evaluation in value-based methods and policy gradient methods has been developed in [203, 204]. ##### 4.1.1.1 Value-based methods For value-based methods, the workflow can be summarized as follows: starting with an arbitrary policy $\pi$, evaluate its value, improve the policy, and repeat until convergence: $\pi_{0}\to V^{\pi_{0}}\to\pi_{1}\to V^{\pi_{1}}\to\ldots\to\pi_{\ast}\to V^{\ast}.$ (4.11) The symbol $\pi_{\mathtt{k}}\to V^{\pi_{\mathtt{k}}}$ denotes a _policy evaluation_, and the symbol $V^{\pi_{\mathtt{k}}}\to\pi_{\mathtt{k}+1}$ denotes a _policy improvement_.
Evaluating a given policy $\pi_{\mathtt{k}}\to V^{\pi_{\mathtt{k}}}$ exactly is not possible since we assume that $p(\cdot|x,a,t_{n})$ is unknown. Temporal-Difference (TD) learning remedies this issue by updating $V^{\pi}_{t_{n}}(x)$ with one sample drawn according to $X_{t_{n+1}}\sim p(x^{\prime}|x,a,t_{n})$: $V^{\pi}_{t_{n}}(X_{t_{n}})\leftarrow V^{\pi}_{t_{n}}(X_{t_{n}})+\beta[f(t_{n},X_{t_{n}},\alpha_{t_{n}})\Delta t+V_{t_{n+1}}^{\pi}(X_{t_{n+1}})-V_{t_{n}}^{\pi}(X_{t_{n}})],$ where $\beta>0$ is a learning rate. This is the simplest TD method, usually denoted by TD(0). To unify TD methods and Monte Carlo (MC) methods, one can view the latter as updating $V^{\pi}_{t_{n}}$ using the entire sequence of observed costs from time $t_{n}$ until the end of the episode $T$. The $n$-step TD method lies in between: it uses the observed costs over the next $n$ steps before bootstrapping with the current value estimate. TD learning can also be applied to action-value functions, for example by using the update rule: $Q^{\pi}_{t_{n}}(X_{t_{n}},\alpha_{t_{n}})\leftarrow Q^{\pi}_{t_{n}}(X_{t_{n}},\alpha_{t_{n}})+\beta[f(t_{n},X_{t_{n}},\alpha_{t_{n}})\Delta t+Q^{\pi}_{t_{n+1}}(X_{t_{n+1}},\alpha_{t_{n+1}})-Q^{\pi}_{t_{n}}(X_{t_{n}},\alpha_{t_{n}})],$ where $X_{t_{n+1}}$ is sampled from (4.1) and $\alpha_{t_{n+1}}$ is sampled from the policy derived from $Q$ plus some randomization, using for instance the $\epsilon$-greedy policy, which picks the currently optimal action with probability $1-\epsilon$ and, with probability $\epsilon$, picks an action uniformly at random. This approach is called SARSA. Then the optimal action-value function $Q^{\ast}$ can be learned as follows: choose $\alpha_{t_{n}}$ according to $Q$ plus $\epsilon$-greedy for some exploration, then update $Q$ using SARSA. This method falls into the category of _on-policy_ algorithms since it evaluates or improves the policy that is used to make decisions. In fact, it uses an $\epsilon$-greedy policy to balance between learning an optimal behavior and behaving non-optimally for exploration, so it learns the value function for a sub-optimal policy that still explores. _Off-policy_ methods, on the contrary, use different policies for evaluation and data generation. _Q-learning_ may be the earliest well-known off-policy algorithm; it directly approximates $Q^{\ast}$ using the update rule: $Q_{t_{n}}(X_{t_{n}},\alpha_{t_{n}})\leftarrow Q_{t_{n}}(X_{t_{n}},\alpha_{t_{n}})+\beta[f(t_{n},X_{t_{n}},\alpha_{t_{n}})\Delta t+\min_{a}Q_{t_{n+1}}(X_{t_{n+1}},a)-Q_{t_{n}}(X_{t_{n}},\alpha_{t_{n}})],$ where the minimum over actions is consistent with the cost-minimization convention of (4.10).
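Before turning to policy gradient methods, the following sketch illustrates the Q-learning update above in the finite-horizon, cost-minimization convention used here. It assumes finite (for instance, discretized) state and action sets and an episodic simulator returning the next state index, the incurred cost (with the terminal cost $g(X_{T})$ folded into the last step), and an end-of-episode flag; it is a schematic illustration rather than the algorithm of a specific reference.

```python
import numpy as np

def q_learning(env, n_states, n_actions, n_steps, n_episodes=5000,
               beta=0.1, eps=0.1, seed=0):
    """Tabular Q-learning for a finite-horizon MDP with costs to be minimized."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_steps, n_states, n_actions))   # Q[n, x, a] ~ Q*_{t_n}(x, a)
    for _ in range(n_episodes):
        x = env.reset()
        for n in range(n_steps):
            # epsilon-greedy exploration around the greedy (cost-minimizing) action
            a = rng.integers(n_actions) if rng.random() < eps else int(np.argmin(Q[n, x]))
            x_next, cost, _ = env.step(a)          # cost = f * dt (+ g at the last step)
            cont = np.min(Q[n + 1, x_next]) if n + 1 < n_steps else 0.0
            Q[n, x, a] += beta * (cost + cont - Q[n, x, a])
            x = x_next
    greedy = Q.argmin(axis=2)                      # greedy policy pi*(t_n, x)
    return Q, greedy
```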
##### 4.1.1.2 Policy gradient methods This section describes some methods that aim at directly learning an optimal policy without deducing it from the value function. They use a parameterized class of policies. We denote by $\pi_{t}(a|x;\theta)$ the probability of taking action $a$ at state $x$ under the parameter $\theta$. In practice, this can be a linear function $\theta^{\operatorname{T}}\mathrm{f}(x,a)$ where $\mathrm{f}(x,a)$ is called the feature vector, or a neural network taking $x$ as input and outputting a probability distribution over actions. Policy gradient methods update the policy parameter $\theta$ based on the gradient of some performance measure $L(\theta)$, with updates of the form $\theta\leftarrow\theta-\beta\widehat{\nabla L(\theta)},$ where $\widehat{\nabla L(\theta)}$ denotes an estimate of $\nabla L(\theta)$ based on Monte Carlo samples. A natural choice of $L(\theta)$ is the value function $V^{\pi_{\theta}}$ we aim to minimize. According to the policy gradient theorem, $\nabla V^{\pi_{\theta}}_{t_{n}}(x)=\mathbb{E}_{\pi}\left[\int_{\mathcal{A}}Q^{\pi}_{t_{n}}(x,a)\nabla\pi_{t_{n}}(a|x;\theta)\,\mathrm{d}a\right].$ Multiplying and dividing the integrand by $\pi_{t_{n}}(a|x;\theta)$, replacing $a$ by a sample $\alpha_{t_{n}}\sim\pi_{t_{n}}(\cdot|X_{t_{n}};\theta)$, and using $\mathbb{E}_{\pi}[G_{t_{n}}|x,a]=Q^{\pi}_{t_{n}}(x,a)$ leads to the REINFORCE algorithm [301], $\theta\leftarrow\theta-\beta G_{t_{n}}\frac{\nabla_{\theta}\pi_{t_{n}}(\alpha_{t_{n}}|X_{t_{n}};\theta)}{\pi_{t_{n}}(\alpha_{t_{n}}|X_{t_{n}};\theta)},$ where $G_{t_{n}}=\sum_{n^{\prime}=n}^{N_{T}-1}f(t_{n^{\prime}},\check{X}_{t_{n^{\prime}}},\alpha_{t_{n^{\prime}}})\Delta t+g(\check{X}_{T})$ denotes the cumulative cost from time $t_{n}$ to $T$. ###### Remark 4.1 (Theoretical analysis). Convergence of policy gradient methods has been studied in various settings. As in the model-based framework, LQ problems have attracted particular interest since the optimal control can be written as a linear function of the state. Global convergence in the infinite horizon setting has been proved by Fazel et al. in [133, Theorems 7 and 9]. This result has been extended in various directions, such as the finite horizon setting in [166], the neural setting [300], problems with entropy regularization [98], and MFC [92], to cite just a few examples. With an additional parameterized value function $V_{t_{n}}(x;\theta^{\prime})$, this leads to the actor-critic algorithm (see, e.g., [293, Section 13.5] or [121]), $\displaystyle\delta_{t_{n}}=f(t_{n},X_{t_{n}},\alpha_{t_{n}})\Delta t+V_{t_{n+1}}(X_{t_{n+1}};\theta^{\prime})-V_{t_{n}}(X_{t_{n}};\theta^{\prime}),$ (4.12) $\displaystyle\theta^{\prime}\leftarrow\theta^{\prime}-\beta^{\prime}\delta_{t_{n}}\nabla_{\theta^{\prime}}V_{t_{n}}(X_{t_{n}};\theta^{\prime}),$ (4.13) $\displaystyle\theta\leftarrow\theta-\beta\delta_{t_{n}}\nabla_{\theta}\ln\pi_{t_{n}}(\alpha_{t_{n}}|X_{t_{n}};\theta).$ (4.14) Both REINFORCE and actor-critic methods mentioned above stochastically select an action $a$ when in state $x$ according to the parameter $\theta$, i.e., $a\sim\pi_{t_{n}}(\cdot|x,\theta)$.
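The following sketch shows how one transition of the actor-critic updates (4.12)–(4.14) can be implemented with automatic differentiation, keeping the cost-minimization convention of this section; the network architectures and the environment interface (which returns the next feature vector, the incurred cost and an end-of-episode flag) are hypothetical.

```python
import torch

def actor_critic_step(policy_net, value_net, opt_actor, opt_critic, inp, env):
    """One transition of the actor-critic updates (4.12)-(4.14) (sketch).

    inp is the current feature tensor (t_n, X_{t_n}); env.step(a) is assumed to
    return the next feature tensor, the cost f*dt (plus g at the end of the
    episode), and a flag indicating whether the episode is over.
    """
    dist = torch.distributions.Categorical(logits=policy_net(inp))
    a = dist.sample()
    inp_next, cost, done = env.step(int(a))

    v = value_net(inp)
    with torch.no_grad():
        v_next = torch.zeros_like(v) if done else value_net(inp_next)
    delta = (cost + v_next - v).detach()            # TD error (4.12)

    critic_loss = delta * v                         # SGD step reproduces (4.13)
    opt_critic.zero_grad(); critic_loss.backward(); opt_critic.step()

    actor_loss = delta * dist.log_prob(a)           # SGD step reproduces (4.14)
    opt_actor.zero_grad(); actor_loss.backward(); opt_actor.step()
    return inp_next, done
```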
For some problems, it is more appropriate to look for a deterministic policy $\alpha_{t_{n}}(x;\theta)\in\mathcal{A}$. To ensure exploration, one can use an off-policy approach: a stochastic policy $\tilde{\pi}_{t_{n}}(a|x)$ is used to choose the action, and a deterministic policy $\alpha_{t_{n}}(x;\theta)$ is learned to approximate the optimal one. An example of such methods is the deterministic policy gradient (DPG) [289], which is an off-policy actor-critic algorithm that learns a deterministic target policy $\alpha_{t_{n}}(x;\theta)$ from an exploratory behavior policy $\tilde{\pi}_{t_{n}}(a|x)$. In particular, a differentiable critic $Q(x,a;\theta^{\prime})$ is used to approximate $Q^{\alpha(\cdot;\theta)}(x,a)$ and is updated via Q-learning: at each step, we sample $\alpha_{t_{n}}$ from $\tilde{\pi}_{t_{n}}(a|x)$ and $\displaystyle\delta_{t_{n}}=f(t_{n},X_{t_{n}},\alpha_{t_{n}})\Delta t+Q_{t_{n+1}}(X_{t_{n+1}},\alpha_{t_{n+1}}(X_{t_{n+1}};\theta);\theta^{\prime})-Q_{t_{n}}(X_{t_{n}},\alpha_{t_{n}};\theta^{\prime}),$ (4.15) $\displaystyle\theta^{\prime}\leftarrow\theta^{\prime}-\beta^{\prime}\delta_{t_{n}}\nabla_{\theta^{\prime}}Q_{t_{n}}(X_{t_{n}},\alpha_{t_{n}};\theta^{\prime}),$ (4.16) $\displaystyle\theta\leftarrow\theta-\beta\delta_{t_{n}}\nabla_{\theta}\alpha_{t_{n}}(X_{t_{n}};\theta)\nabla_{a}Q(X_{t_{n}},a;\theta^{\prime})|_{a=\alpha_{t_{n}}(X_{t_{n}};\theta)}.$ (4.17) When using neural networks to approximate $Q^{\alpha(\cdot;\theta)}$ and the deterministic policy $\alpha_{t_{n}}(x;\theta)$, one can use the Deep DPG (DDPG) algorithm [243], which is based on the same intuition as DPG. For the sake of robustness, it uses the “replay buffer” idea borrowed from the Deep Q-Network (DQN) algorithm; see [256]: the network parameters are learned in mini-batches rather than online by using a replay buffer, so that correlations between samples are kept minimal. Another pair of networks $Q^{\prime}(x,a;\hat{\theta}^{\prime})$ and $\alpha^{\prime}_{t_{n}}(x;\hat{\theta})$ are copied from $Q(x,a;\theta^{\prime})$ and $\alpha_{t_{n}}(x;\theta)$ for calculating the target value, in order to improve stability. At each step, an action $\alpha_{t_{n}}$ is sampled from $\alpha_{t_{n}}(X_{t_{n}};\theta)+\mathcal{N}_{t_{n}}$, where $\mathcal{N}_{t}$ is a noise process for exploration; then the cost $f(t_{n},X_{t_{n}},\alpha_{t_{n}})\Delta t$ and the new state $X_{t_{n+1}}$ are observed and saved to the buffer. A mini-batch of $N$ transitions $(X_{t_{n}},\alpha_{t_{n}},f,X_{t_{n+1}})$ is sampled from the buffer, acting as supervised learning data for the critic $Q(x,a;\theta^{\prime})$. The loss to be minimized is the mean-squared error between $Q_{t_{n}}(X_{t_{n}},\alpha_{t_{n}};\theta^{\prime})$ and the target $f(t_{n},X_{t_{n}},\alpha_{t_{n}})\Delta t+Q^{\prime}_{t_{n+1}}(X_{t_{n+1}},\alpha^{\prime}_{t_{n+1}}(X_{t_{n+1}};\hat{\theta});\hat{\theta}^{\prime})$. The actor network and both copies are updated via $\displaystyle\theta\leftarrow\theta-\beta\frac{1}{N}\sum_{i}\nabla_{a}Q_{t_{n}}(x,a;\theta^{\prime})|_{x=X_{t_{n}}^{i},a=\alpha_{t_{n}}(X_{t_{n}}^{i};\theta)}\nabla_{\theta}\alpha_{t_{n}}(X_{t_{n}}^{i};\theta),$ (4.18) $\displaystyle\hat{\theta}^{\prime}\leftarrow\tau\theta^{\prime}+(1-\tau)\hat{\theta}^{\prime},\quad\hat{\theta}\leftarrow\tau\theta+(1-\tau)\hat{\theta},$ (4.19) where the superscript $i$ indicates the $i^{th}$ sample from the mini-batch, and $\tau\ll 1$ ensures that the target networks slowly track their learned counterparts $\theta$ and $\theta^{\prime}$. #### 4.1.2 Mean-field MDP and reinforcement learning for mean-field control problems We now consider the MFC setting discussed in Section 2.6.2 as an extension of standard OC and we present an RL framework for this setting. MFC can be viewed as an optimal control problem in which a “state” is a population configuration. However, an “action” is not a finite-dimensional object but rather a function providing a control for every individual state.
Intuitively, in discrete time, this yields an MDP of the form $(\mathcal{P}(\mathcal{X}),\mathcal{F}_{\mathcal{A}},\bar{p},\bar{f},\bar{g},N_{T})$, where * • The state space is the set $\mathcal{P}(\mathcal{X})$ of probability measures on $\mathcal{X}$; * • The action space $\mathcal{F}_{\mathcal{A}}$ is a suitable subset of $\mathcal{A}^{\mathcal{X}}$, the set of functions from $\mathcal{X}$ to $\mathcal{A}$; * • The transition kernel is given by $\bar{p}:\\{t_{0},t_{1},\dots,T\\}\times\mathcal{P}(\mathcal{X})\times\mathcal{F}_{\mathcal{A}}\to\mathcal{P}(\mathcal{P}(\mathcal{X})),\quad\bar{p}(\cdot|t,\mu,\bar{a})=\delta_{\int p(\cdot|t,x,\mu,\bar{a}(x))\mu(x)dx}\,,$ meaning that with probability one, the new mean field state is given by one transition of the population distribution. Here $\mu$ represents a population distribution, $\bar{a}$ is an action at the population level, and $\bar{p}(\cdot|t,\mu,\bar{a})$ is the distribution of the next population distribution, which is a Dirac mass at the next population distribution since there is no common noise in the present model; * • The running and terminal cost functions are given by $\bar{f}:\\{t_{0},t_{1},\dots,T\\}\times\mathcal{P}(\mathcal{X})\times\mathcal{F}_{\mathcal{A}}\to\mathbb{R},\qquad\bar{f}(t,\mu,\bar{a})=\int_{x}f(t,x,\mu,\bar{a}(x))\mu(x)\,\mathrm{d}x\,,$ and $\bar{g}:\mathcal{P}(\mathcal{X})\to\mathbb{R},\qquad\bar{g}(\mu)=\int_{x}g(x,\mu)\mu(x)\,\mathrm{d}x.$ Such MDPs have been referred to as mean field MDPs (MFMDP for short) in the literature [140, 93, 159, 160, 259, 35]. These MDPs can be rigorously studied using the tools developed for instance by Bertsekas and Shreve in [50]. Since this problem fits in the framework of MDPs, one can directly apply RL methods in principle. For instance, the Q-function of the MDP naturally satisfies a dynamic programming principle; see [93, 159, 160, 259]. Note that, if there is no common noise (as in the setting presented above), the evolution of the population distribution is purely deterministic. To implement RL methods for MFC, the main difficulties are related to handling the distribution and the class of controls. In particular, we note that * • If $\mathcal{X}$ is finite, then the state of the MDP, namely $\mu$, is a finite-dimensional vector; if $\mathcal{A}$ is also finite, then $\mathcal{F}_{\mathcal{A}}$ can simply be taken as $\mathcal{A}^{\mathcal{X}}$, which is a finite set as well; * • If $\mathcal{X}$ is not finite, then $\mu$ is infinite-dimensional and likewise for the elements of $\mathcal{A}^{\mathcal{X}}$. One simple approach is to discretize $\mathcal{P}(\mathcal{X})$ and $\mathcal{A}^{\mathcal{X}}$, and then use standard RL techniques for finite state, finite action MDPs, such as the ones described in Section 4.1.1. For instance tabular Q-learning has been used e.g. in [93, 160] in the first case above by identifying $\mathcal{P}(\mathcal{X})$ with the simplex $\Delta_{\mathcal{X}}$ in dimension $|\mathcal{X}|$ and by approximating the latter with an $\epsilon$-net. However, this approach does not scale well when the number of states is large or when $\mathcal{X}$ is continuous. In this case, one can use RL methods for continuous state space, such as deep RL methods, see for instance [93]. ###### Remark 4.2 (Theoretical analysis). The convergence of Q-learning for MFMDP has been analyzed in [93] and [160] using tabular or kernel-based methods respectively. The convergence of a policy gradient method for LQ MFC has been proved in [92] based the ideas of [133]. 
For the sake of illustration, we provide an example in a setting where $\mathcal{X}$ is finite. Let $d=|\mathcal{X}|$ be the number of states. As mentioned above, we view $\mathcal{P}(\mathcal{X})$ as the $d$-dimensional simplex $\Delta_{\mathcal{X}}$. In this case, the MFMDP is an MDP over a finite-dimensional continuous state space. To avoid discretizing the space, deep RL methods rely on neural networks to efficiently approximate the value function or the policy. ##### Numerical illustration: A Cybersecurity model revisited. We consider the cybersecurity model introduced in [216] (see also [84, Section 7.2.3]), which we already discussed in Section 3.2.4.2. We revisit this problem from the point of view of MFC, meaning that the players cooperate to jointly minimize the social cost. To be able to tackle this problem using RL, we discretize time using a mesh $\\{t_{n}=n\Delta t,n=0,1,2,\dots,N_{T}\\}$ where $\Delta t=T/N_{T}>0$. The total cost for the whole population is $\displaystyle J(\alpha)=\sum_{n=0}^{N_{T}-1}\bar{f}(\mu_{t_{n}},\alpha(t_{n},\cdot))\Delta t,$ under the constraint that the evolution of the distribution is given by $\mu_{t_{n+1}}=\bar{p}(\mu_{t_{n}},\alpha(t_{n},\cdot))=(\mu_{t_{n}})^{\operatorname{T}}(I+P^{\alpha(t_{n},\cdot),\mu_{t_{n}}}\Delta t),\qquad n=0,1,\dots,N_{T}-1,$ (4.20) with a given initial condition $\mu_{0}$. The population-wise cost function $\bar{f}:\mathcal{P}(\mathcal{X})\times\mathcal{A}^{\mathcal{X}}\to\mathbb{R}$ is defined based on the individual cost function $f$ by $\bar{f}(m,\alpha)=\sum_{x\in\mathcal{X}}f(x,m,\alpha(x))m(x),\qquad(m,\alpha)\in\mathcal{P}(\mathcal{X})\times\mathcal{A}^{\mathcal{X}},$ and $P^{\alpha,m}$ denotes the matrix whose coefficients are given by $P^{\alpha,m}(x^{\prime},x)=\lambda(x^{\prime},x,m,\alpha(x^{\prime})),\qquad(x^{\prime},x,m,\alpha)\in\mathcal{X}\times\mathcal{X}\times\mathcal{P}(\mathcal{X})\times\mathcal{A}^{\mathcal{X}}.$ From this formulation, we see that the problem fits in the framework of MFMDPs, or MDPs with finite horizon and continuous space, the state being the distribution. In [232], the solution is learned using tabular Q-learning after discretizing the simplex: replacing $\mathcal{P}(\mathcal{X})$ by an $\epsilon$-net with a finite number of distributions allows one to replace the MFMDP by a finite-state MFMDP on which tabular RL methods can be applied. This approach is convenient in that tabular methods typically have fewer hyperparameters and, furthermore, convergence results are easier to obtain. However, the main drawback is that such methods do not scale well to very large state spaces. In our case, discretizing the simplex requires a large number of points when the number of states increases. Alternatively, the value function can be approximated directly on the simplex $\mathcal{P}(\mathcal{X})$, without any discretization. For example, we can replace the Q-function by a neural network and employ deep RL techniques to train the parameters. Here we follow the approach proposed in [93] and we focus on deterministic controls. The control and the value function are approximated by neural networks and trained using the DDPG method [243], which has been reviewed in Section 4.1.1.2. Since this method allows the control to take continuous values, we replace $A=\\{0,1\\}$ by $A=[0,1]$ (without changing the transition rate matrix), which amounts to letting the player choose the intensity with which she seeks to change her computer’s level of protection.
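The deterministic dynamics (4.20), which generate the training trajectories used below, can be sketched as follows; the transition-rate matrix here is a generic placeholder standing in for the cybersecurity rates $\lambda$ of [216].

```python
import numpy as np

d, N_T, dt = 4, 50, 0.1                      # four states, time horizon T = N_T * dt

def rate_matrix(alpha, mu):
    """Placeholder transition-rate matrix P^{alpha, mu}; in the actual model the rates
    lambda(x', x, m, alpha(x')) come from the cybersecurity dynamics of [216]."""
    Q = np.full((d, d), 0.1) + 0.2 * np.outer(alpha, mu)
    np.fill_diagonal(Q, 0.0)
    np.fill_diagonal(Q, -Q.sum(axis=1))       # rows sum to zero, as for a generator matrix
    return Q

def rollout(mu0, policy):
    """Generate the deterministic trajectory (4.20): mu_{n+1} = mu_n^T (I + P dt)."""
    mu, traj = np.asarray(mu0, dtype=float), []
    for n in range(N_T):
        alpha = policy(n, mu)                 # action in [0, 1] for each individual state
        mu = mu @ (np.eye(d) + rate_matrix(alpha, mu) * dt)
        traj.append(mu.copy())
    return np.array(traj)

traj = rollout([0.25, 0.25, 0.25, 0.25], policy=lambda n, mu: np.full(d, 0.5))
```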
We aim at learning the solution for various distributions. To train the neural networks, we sample at each iteration a random initial distribution $\mu_{0}$ and generate a trajectory in the simplex by following the dynamics (4.20). Figure 20 displays the evolution of the population when using the learned control starting from five initial distributions of the testing set and one initial distribution of the training set. The testing set of initial distributions is: $\\{(0.25,0.25,0.25,0.25),$ $(1,0,0,0),$ $(0,0,0,1),$ $(0.3,0.1,0.3,0.1),$ $(0.5,0.2,0.2,0.1)\\}$. We see that, in this setting, the distribution always evolves towards a configuration in which there are no defended agents, and the proportions of undefended infected and undefended susceptible agents are roughly $0.43$ and $0.57$, respectively. Figure 20: Cybersecurity MFC model solved with DDPG in Section 4.1.2: Evolution of the population distribution for five initial distributions. ### 4.2 Reinforcement learning for stochastic differential games #### 4.2.1 Multi-agent reinforcement learning (MARL) Multi-agent reinforcement learning (MARL) studies reinforcement learning methods for multiple learners. The main difficulty is that, when several agents learn while interacting, from the point of view of each agent, the environment is non-stationary. Another issue is the question of scalability, which arises when the number of learners is very large. However, for a small number of agents, MARL has led to recent breakthrough results; see, e.g., autonomous driving [287], the game of Go [288], or video games such as StarCraft [297]. Several viewpoints can be adopted. Relying on dynamical systems theory, one approach is to consider that each agent uses a learning algorithm, and to study the resulting behavior of the group of agents viewed as a system evolving in discrete or continuous time. Another approach, based on game theory and closer to the topics discussed in Section 3, is to look for notions of solutions such as Nash equilibria and to design algorithms that let the agents learn such solutions. A typical example is Nash Q-learning, in which every player runs their own version of Q-learning simultaneously with the other players. Each player tries to compute its optimal Q-function, but the optimal policy of player $i$ depends on the policies implemented by the other players. To be specific, consider an $N$-player game as in Section 3.1 but now in discrete time. Note that the problem faced by player $i$ is not an MDP with state $X^{i}$ because the cost and dynamics of player $i$ depend on the other players. Assume the players use a strategy profile $\bm{\pi}=(\pi^{1},\dots,\pi^{N})$. Then the Q-function of player $i$ is: for $\bm{x}=(x^{1},\dots,x^{N})$ and $\bm{a}=(a^{1},\dots,a^{N})$, $Q^{i,\bm{\pi}}_{t_{n}}(\bm{x},\bm{a})=\mathbb{E}^{\bm{\pi}}\left[\sum_{j=n}^{N_{T}-1}f^{i}(t_{j},\bm{X}_{t_{j}},\bm{\alpha}_{t_{j}})\Delta t+g^{i}(\bm{X}_{T})\Big{|}\bm{X}_{t_{n}}=\bm{x},\;\bm{\alpha}_{t_{n}}=\bm{a}\right].$ (4.21) Hu and Wellman proposed in [186] a version of Q-learning for (infinite horizon discounted) $N$-player games, called Nash Q-learning, and identified conditions under which this algorithm converges to a Nash equilibrium. The method can be adapted with deep neural networks, as done for instance in [97]. We refer the interested reader to, e.g., [64, 295, 51, 226, 305, 309, 158] for more details on MARL. Recently, [160, 161] also studied mean-field control RL in a decentralized way using cooperative MARL.
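For a fixed strategy profile, the Q-function (4.21) of player $i$ can in principle be estimated by plain Monte Carlo rollouts; the sketch below assumes access to a hypothetical one-step simulator of the joint dynamics and is only meant to make the definition concrete (Nash Q-learning then combines such evaluations with policy updates).

```python
import numpy as np

def estimate_Q_i(i, n, x, a, policies, simulator, f, g, N_T, dt, n_mc=1000):
    """Monte Carlo estimate of Q^{i,pi}_{t_n}(x, a) as defined in (4.21).

    policies  : list of functions pi^j(step, joint_state) returning player j's action
    simulator : function (step, joint_state, joint_action) -> next joint state
    f, g      : running cost f(i, step, joint_state, joint_action) and terminal cost g(i, joint_state)
    """
    total = 0.0
    for _ in range(n_mc):
        xs, acts, cost = np.array(x, dtype=float), np.array(a, dtype=float), 0.0
        for j in range(n, N_T):
            cost += f(i, j, xs, acts) * dt          # running cost at time t_j
            xs = simulator(j, xs, acts)             # joint state at time t_{j+1}
            if j + 1 < N_T:                         # actions after t_n follow the profile pi
                acts = np.array([policies[k](j + 1, xs) for k in range(len(policies))])
        cost += g(i, xs)                            # terminal cost at time T
        total += cost
    return total / n_mc
```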
#### 4.2.2 Reinforcement learning for mean-field games We now turn our attention to RL methods for MFG. As pointed out in Section 3.2, finding a mean-field Nash equilibrium boils down to (1) finding a control that is optimal for a representative infinitesimal player facing the equilibrium distribution flow, and (2) computing the induced distribution flow, which should match the equilibrium one. These two elements can be tackled alternately, as described in Section 3.1 in the $N$-player case and in Section 3.2.2 in the mean-field case. The first part is a standard optimal control problem, which can thus be tackled using standard RL techniques; see Section 4.1.1. In this setting, we assume that the agent who is learning can repeat experiments of the following form: given the current state, the agent chooses an action (or a sequence of actions), and the environment returns the new state as well as the reward (or a sequence of states and rewards). In the representative player’s MDP, the distribution enters as a parameter that influences the reward and dynamics, but is fixed when the player learns an optimal policy. During such experiments, we generally assume that the population distribution is fixed, and it is updated after a number of iterations; see e.g. [162, 127]. Alternatively, we can assume that it is updated at every iteration but at a slow rate; see e.g. [292, 18, 302]. Most of the literature thus far focuses on tabular methods. A few works have used deep RL methods to compute the best response. For example, DDPG has been used in [127], soft actor-critic (SAC) has been used for a flocking model in [269], while deep Q-learning or some variants of it have been used in [111, 268, 234]. Recently, several works have studied the advantages and the limitations brought by the regularization of the policy through penalization terms in the cost function [14, 111, 163]. We refer to [233] for a survey of learning algorithms and reinforcement learning methods to approximate MFG solutions. ##### Numerical illustration: an example with explicit solution. For the sake of illustration, we consider an MFG model which admits an explicit solution in the continuous-time ergodic setting. The model has been introduced and solved in [13]. The MFG is defined as follows. The state space is the one-dimensional unit torus, i.e., $\mathbb{T}=[0,1]$ with periodic boundary conditions. The action space is $\mathbb{R}$ (or in practice any bounded interval containing $[-2\pi,2\pi]$, which is the range of the equilibrium control). The drift function is $b(x,m,a)=a.$ The running cost is $f(x,m,a)=\tilde{f}(x)+\frac{1}{2}|a|^{2}+\log(m),$ where the first term $\tilde{f}$ encodes spatial preferences for some regions of the domain: $\tilde{f}(x)=-2\pi^{2}\sin(2\pi x)+2\pi^{2}\cos(2\pi x)^{2}-2\sin(2\pi x).$ In the ergodic MFG setting, the objective of an infinitesimal representative player is to minimize $\lim_{T\to+\infty}\frac{1}{T}\mathbb{E}\left[\int_{0}^{T}f(X_{t},\mu_{t}(X_{t}),\alpha_{t}(X_{t}))\,\mathrm{d}t\right],$ where $X$ is controlled by $\alpha$. Here $\mu_{t}$ is assumed to have a density for every $t\geq 0$, and we identify it with its density. So $\mu_{t}(X_{t})$ denotes the value of the density of $\mu_{t}$ at $X_{t}$.
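The alternation between the best-response computation and the distribution update described at the beginning of this subsection can be organized as a simple outer loop. The sketch below uses fictitious-play-style averaging of the distribution flows and leaves the two inner routines (an RL best-response solver and a flow simulator) as hypothetical callables; it is a generic template, not the specific scheme of any cited work.

```python
import numpy as np

def mfg_fixed_point(mu0_flow, best_response, induced_flow, n_iter=100):
    """Generic iterative scheme for an MFG, following the two-step structure above.

    mu0_flow      : initial guess for the mean-field flow (array indexed by time)
    best_response : solver (e.g. an RL routine) returning an optimal policy against a fixed flow
    induced_flow  : simulator returning the state-distribution flow induced by a policy
    """
    mu_bar = np.array(mu0_flow, dtype=float)           # running average of the flows
    for k in range(1, n_iter + 1):
        policy = best_response(mu_bar)                  # step (1): representative player's problem
        mu_k = induced_flow(policy)                     # step (2): distribution induced by the policy
        mu_bar = mu_bar + (mu_k - mu_bar) / (k + 1)     # fictitious-play-style averaging
    return policy, mu_bar
```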
The equilibrium control and the equilibrium mean-field distribution are respectively given by $a^{*}:x\mapsto 2\pi\cos(2\pi x)\;\quad\mbox{and}\;\quad\mu^{*}:x\mapsto\frac{e^{2\sin(2\pi x)}}{\int_{\mathbb{T}}e^{2\sin(2\pi y)}\,\mathrm{d}y}\;.$ To approximate the solution, we use fictitious play [69] combined with a deep RL algorithm that learns the best response at each iteration. The problem is in continuous state and action spaces and admits a deterministic equilibrium control. Hence, following [127], at each iteration, we solve the representative player’s MDP using DDPG [243], reviewed in Section 4.1.1.2. The plots in Figure 21 are borrowed from [127]. The left plot displays the $L^{2}$ distance between the true equilibrium control and the control learnt by the algorithm. The right plot shows the stationary distribution learnt by the algorithm, which is to be compared with the distribution described in [13] for the ergodic problem. Although the two problems are slightly different (one being in the infinite-horizon discounted setting and the other one in the ergodic setting), we can see that the distribution has the same shape, for suitable choices of parameters. We refer to [127] for more details on the implementation and the choice of parameters and hyperparameters for the results shown in Figure 21. Figure 21: MFG described in Section 4.2.2, solved with fictitious play and DDPG. Left: $L^{2}$ error on the analytical control; right: stationary distribution. Results obtained after 125 iterations of fictitious play. ## 5 Conclusion and Future Directions This paper reviews recent developments in machine learning methods for stochastic optimal control and games, with a special focus on emerging applications of deep learning and reinforcement learning to these problems. Despite the rapidly growing number of recent works, many questions remain to be investigated further. We hope this survey will generate more interest in this topic and attract more researchers to work on it. Besides the material already reviewed in this survey, we outline a few research directions below. First, the main goal of this survey was to provide an overview of existing methods, with a harmonized presentation of the types of problems studied in this field. However, the literature still lacks a unified and thorough comparison of all existing methods on common benchmark problems. Indeed, most existing works use different assumptions for their theoretical analysis and, on the numerical side, focus on illustrating the performance of one method on examples which are not necessarily the same as those considered in other works. To better understand in which cases each method is most suitable, it would be important to provide a detailed comparison of the assumptions required for the analysis and to perform rigorous numerical comparisons on common problems. Most of the methods presented here also lack a full theoretical analysis. The mathematical foundations of deep learning are attracting growing interest, and recent results could help analyze the methods described in this paper. The main motivation underlying the use of deep networks is their ability to cope with the curse of dimensionality. However, rigorously phrasing and proving such a statement has only been done in particular cases. Analyzing the generalization capability of neural networks is typically done by splitting the analysis into several types of errors, such as approximation, estimation, and optimization errors.
Bounds on the approximation and estimation errors can generally be obtained based on the regularity of the function to be approximated, which can be difficult in the context of differential games. Furthermore, bounding the optimization error is even more challenging since it involves not only the definition of the game but also the optimization algorithm. Due to these difficulties, estimating these errors remains an open question for most methods discussed in this survey. From a practical viewpoint, an important question related to neural network-based methods is the choice of hyperparameters. The most obvious one is the architecture of the neural network. In many cases, a feedforward fully connected architecture provides good performance (e.g., for deep BSDE, DBDP, Sig-DFP). However, in other cases (e.g., DGM, RNN for problems with delay, as discussed in this survey), ad hoc architectures seem necessary to reach the best results. In any case, architectures undoubtedly play a crucial role in the performance of every deep learning method, and a careful design is, in general, what leads to state-of-the-art results in high dimensions. However, most deep learning methods for differential games presented in this survey have been limited to proofs of concept. As such, exploring more sophisticated architectures is a natural progression towards achieving better numerical performance. Once the neural network architecture is fixed, the next important step is to determine the hyperparameters of the optimization method, such as the initialization of network parameters, the learning rate, and the mini-batch size, which are crucial factors for ensuring fast convergence. However, finding a systematic rule for choosing these hyperparameters a priori remains a challenge. A common approach is to try several values and measure the empirical convergence speed on problems for which the solution is known, using either an analytical formula or another numerical method. This task is complex due to the interdependent influences of hyperparameters. For problems without benchmarks, finding suitable hyperparameters is even more challenging. To the best of our knowledge, the literature does not yet provide a comprehensive understanding of how to choose hyperparameters and measure algorithm performance without benchmark solutions. Although we did not discuss this aspect in the present survey, finding efficient heuristics for hyperparameter tuning is certainly an interesting direction. Another related question is how to assess the convergence of algorithms that compute Nash equilibria in games since the objective is to find a fixed point, not an optimizer. Regarding problems specific to MFGs, a direction that has received little attention thus far is numerical methods that can work even when a common noise affects the entire population. The difficulties that arise numerically are connected to the difficulty of solving such MFGs from a theoretical viewpoint. We have presented the Sig-DFP method to tackle MFGs with common noise, focusing on mean-field interactions through moments. Common noise appears in applications, for instance, in the form of aggregate shocks in macroeconomics. Therefore, it is worth developing further machine learning algorithms to deal with MFGs with common noise and general interactions. Currently, we lack efficient ways to parameterize, represent, and discretize probability measures defined on a continuous-state space.
Another aspect related to concrete applications of the methods presented in this survey pertains to the resources needed to train deep neural networks. For model-based methods and even more for model-free reinforcement learning methods, sophisticated models typically need a vast number of training episodes, leading to two challenges. First, as the model complexity grows, the massive computational cost required to learn the solution becomes prohibitive. Second, for real-world applications, Monte Carlo simulations will be replaced by real data, but we generally have much fewer data points than the number of samples used by most deep learning methods described in this survey. Therefore, it will be very interesting to design more sample-efficient methods and establish sharp estimates of their sample complexity. Last but not least, to the best of our knowledge, the methods presented in this survey have only been applied to relatively simple models for academic research purposes. However, a significant motivation for the development of machine learning methods is to enable us to efficiently solve more realistic optimal control problems and games. We hope that this survey can contribute to fostering interactions between theoretical research and applied research communities, leading to concrete applications in real-world problems. ## Acknowledgement R.H. was partially supported by the NSF grant DMS-1953035, the Faculty Career Development Award, the Research Assistance Program Award, the Early Career Faculty Acceleration funding, and the Regents’ Junior Faculty Fellowship at University of California, Santa Barbara. Some parts of the review paper have been used for teaching special topic graduate classes at the University of California, Santa Barbara, and R.H. appreciates all the feedback from the audience of these classes. R.H. and M.L. are grateful to all their co-authors of the papers mentioned in this review. ## Appendix A Deep Learning Tools In this section, we briefly review neural networks and stochastic gradient descent, which are two of the main tools of modern machine learning. We refer to [182] for a more comprehensive mathematical introduction to deep learning for applied mathematicians and to [150] for more background on deep learning. ### A.1 Neural network architectures We start by introducing the feedforward fully connected architecture, before discussing recurrent neural networks and long short-term memory networks. #### A.1.1 Feedforward fully connected neural networks Feedforward neural networks (FNNs) are the most common type of neural networks. We denote by $\displaystyle\mathbf{L}^{\rho}_{d_{1},d_{2}}=$ $\displaystyle\Big{\\{}\phi:\mathbb{R}^{d_{1}}\to\mathbb{R}^{d_{2}}\,\Big{|}\,\exists(w,\beta)\in\mathbb{R}^{d_{2}\times d_{1}}\times\mathbb{R}^{d_{2}},\forall i\in\\{1,\dots,d_{2}\\},\;\phi(x)_{i}=\rho\Big{(}\beta_{i}+\sum_{j=1}^{d_{1}}w_{i,j}x_{j}\Big{)}\Big{\\}}$ the set of layer functions with input dimension $d_{1}$, output dimension $d_{2}$, and activation function $\rho:\mathbb{R}\to\mathbb{R}$. 
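A layer function from $\mathbf{L}^{\rho}_{d_{1},d_{2}}$ amounts to an affine map followed by a componentwise nonlinearity; a minimal NumPy sketch (here with the ReLU activation recalled just below as an example choice of $\rho$) is:

```python
import numpy as np

def layer(w, beta, rho):
    """Return the layer function phi(x)_i = rho(beta_i + sum_j w_{i,j} x_j)."""
    return lambda x: rho(beta + w @ np.asarray(x, dtype=float))

rho_relu = lambda z: np.maximum(z, 0.0)     # one possible choice of activation function

d1, d2 = 3, 5
rng = np.random.default_rng(0)
phi = layer(rng.standard_normal((d2, d1)), rng.standard_normal(d2), rho_relu)
y = phi([0.2, -1.0, 0.7])                   # a point of R^{d1} mapped to R^{d2}
# Composing several such layers yields the feedforward networks defined next.
```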
Typical choices for $\rho$ are ReLU (positive part), identity, sigmoid, or hyperbolic tangent, $\rho_{\text{ReLU}}(x)=\max\\{x,0\\},\quad\rho_{\text{Id}}(x)=x,\quad\rho_{\text{s}}(x)=\frac{1}{1+e^{-x}},\quad\rho_{\tanh}(x)=\tanh(x).$ (A.1) Building on this notation and denoting by $\circ$ the composition of functions, we define $\displaystyle\mathbf{N}^{\rho,\tilde{\rho}}_{d_{0},\dots,d_{\ell+1}}=$ $\displaystyle\Big{\\{}\phi_{\ell}\circ\phi_{\ell-1}\circ\dots\circ\phi_{0}\,\Big{|}\,(\phi_{i})_{i=0,\dots,\ell-1}\in\bigtimes_{i=0}^{i=\ell-1}\mathbf{L}^{\rho}_{d_{i},d_{i+1}},\phi_{\ell}\in\mathbf{L}^{\tilde{\rho}}_{d_{\ell},d_{\ell+1}}\Big{\\}}\,$ as the set of regression neural networks with $\ell$ hidden layers and one output layer, the activation function of the output layer being $\tilde{\rho}$. The number $\ell$ of hidden layers, the numbers $d_{0}$, $d_{1}$, $\cdots$ , $d_{\ell+1}$ of units per layer, and the activation functions, are the components of what is called the architecture of the network. Once it is fixed, the actual network function $\varphi\in\mathbf{N}^{\rho,\tilde{\rho}}_{d_{0},\dots,d_{\ell+1}}$ is determined by the remaining parameters $\theta=(\beta^{(0)},w^{(0)},\beta^{(1)},w^{(1)},\cdots,\beta^{(\ell-1)},w^{(\ell-1)},\beta^{(\ell)},w^{(\ell)}),$ defining the functions $\phi_{0}$, $\phi_{1}$, $\cdots$ , $\phi_{\ell-1}$ and $\phi_{\ell}$ respectively. Let us denote by $\Theta$ the set of values for such parameters. For each $\theta\in\Theta$, the function computed by the network will be denoted by $\varphi^{\theta}\in\mathbf{N}^{\rho,\tilde{\rho}}_{d_{0},\dots,d_{\ell+1}}$ when we want to stress the dependence on the parameters. To lighten the presentation, we will follow the convention of using vector and matrix notations, and activation functions are implicitly applied coordinate-wise. Then $\varphi^{\theta}(x)=\tilde{\rho}\left(\beta^{(\ell)}+w^{(\ell)}\rho\left(\beta^{(\ell-1)}+w^{(\ell-1)}\rho\left(\dots\beta^{(0)}+w^{(0)}x\right)\right)\right).$ #### A.1.2 Recurrent neural networks Although FNNs are universal approximators, they are not well suited to handling path-dependent properties of the state process, which are important for instance when the stochastic control problem or the game has delay features. The idea of recurrent neural networks (RNNs) [282] is to make use of sequential information, and they thus provide a natural framework for overcoming these issues. In fact, RNNs have already shown great success in, e.g., natural language processing and handwriting recognition [154, 155, 156]. Many variants exist and below we shall focus on one such variant, but the generic architecture can be described as follows: the neural network takes two inputs, $x$ and $h$, and produces two outputs, $y$ and $h^{\prime}$, as follows: $\displaystyle h^{\prime}$ $\displaystyle=\rho\left(\beta^{(1)}+w^{(1,1)}h+w^{(1,2)}x\right),$ $\displaystyle y$ $\displaystyle=\tilde{\rho}\left(\beta^{(2)}+w^{(2)}h^{\prime}\right),$ where $\rho,\tilde{\rho}$ are two activation functions, and the parameters of the neural network are vectors $\beta^{(1)},\beta^{(2)}$ of suitable sizes, and matrices $w^{(1,1)},w^{(1,2)},w^{(2)}$ of suitable sizes.
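The generic recurrent cell just described can be written explicitly; below is a minimal NumPy sketch with arbitrary (randomly initialized) parameters, using tanh and the identity as the two activation functions, and applied recursively to a data sequence as explained next.

```python
import numpy as np

dim_x, dim_h, dim_y = 3, 8, 2
rng = np.random.default_rng(0)
beta1, beta2 = np.zeros(dim_h), np.zeros(dim_y)
w11 = rng.standard_normal((dim_h, dim_h)) * 0.1    # weights acting on the hidden input h
w12 = rng.standard_normal((dim_h, dim_x)) * 0.1    # weights acting on the data input x
w2 = rng.standard_normal((dim_y, dim_h)) * 0.1     # weights of the output map

def rnn_cell(x, h, rho=np.tanh, rho_out=lambda z: z):
    """One application of the recurrent cell: returns (y, h')."""
    h_new = rho(beta1 + w11 @ h + w12 @ x)          # h' = rho(beta^(1) + w^(1,1) h + w^(1,2) x)
    y = rho_out(beta2 + w2 @ h_new)                 # y  = rho~(beta^(2) + w^(2) h')
    return y, h_new

h = np.zeros(dim_h)
for x_k in rng.standard_normal((5, dim_x)):         # run the cell over a short input sequence
    y_k, h = rnn_cell(x_k, h)
```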
Given a sequence of data points $(x_{k})_{k\geq 0}$, which can represent the discrete-time trajectory of the state process, for instance, and an initial input $h_{0}$, an RNN can be used recursively to produce the sequence $(y_{k},h_{k})_{k\geq 1}$ defined by $\displaystyle h_{k}$ $\displaystyle=\rho\left(\beta^{(1)}+w^{(1,1)}h_{k-1}+w^{(1,2)}x_{k-1}\right),$ $\displaystyle y_{k}$ $\displaystyle=\tilde{\rho}\left(\beta^{(2)}+w^{(2)}h_{k}\right).$ Here, $h_{k+1}$ encodes information that is transmitted from iteration $k$ to iteration $k+1$. This information is produced using the previous information $h_{k}$ and the current data point $x_{k}$. It is then used to compute the output $y_{k+1}$ and the future information $h_{k+2}$. Based on the idea of using an architecture in a recurrent way, many generalizations of the above simple neural network have been proposed. We next present one of them. #### A.1.3 Long short-term memory One of the most common types of RNN is the long short-term memory (LSTM) neural network [183]. The advantage of an LSTM is the ability to deal with the vanishing gradient problem and data with lags of unknown duration. An LSTM is composed of a series of units, each of which corresponds to a time step, and each unit consists of a cell $\mathfrak{c}$ and three gates: input gate $\mathfrak{i}$, output gate $\mathfrak{o}$, and forget gate $\mathfrak{f}$. Among these components, the cell keeps track of the information received so far, the input gate captures to which extent new input information flows into the cell, the forget gate captures to which extent the existing information remains in the cell, and the output gate controls to which extent the information in the cell will be used to compute the output of the unit. Given a data sequence $(x_{k})_{k\geq 0}$ and an initial input $h_{0}$, the information flows are $\displaystyle\text{forget gate: }\mathfrak{f}_{k}=\rho_{\text{s}}(W_{f}x_{k}+U_{f}h_{k-1}+b_{f}),$ (A.2) $\displaystyle\text{input gate: }\mathfrak{i}_{k}=\rho_{\text{s}}(W_{i}x_{k}+U_{i}h_{k-1}+b_{i}),$ $\displaystyle\text{output gate: }\mathfrak{o}_{k}=\rho_{\text{s}}(W_{o}x_{k}+U_{o}h_{k-1}+b_{o}),$ $\displaystyle\text{cell: }\mathfrak{c}_{k}=\mathfrak{f}_{k}\odot\mathfrak{c}_{k-1}+\mathfrak{i}_{k}\odot\rho_{\tanh}(W_{c}x_{k}+U_{c}h_{k-1}+b_{c}),$ $\displaystyle\text{output of the }k^{th}\text{ unit: }h_{k}=\mathfrak{o}_{k}\odot\rho_{\tanh}(\mathfrak{c}_{k}),$ where the operator $\odot$ denotes the Hadamard product, $W_{f}$, $W_{i}$, $W_{o}$, $W_{c}$, $U_{f}$, $U_{i}$, $U_{o}$, $U_{c}$, $b_{f}$, $b_{i}$, $b_{o}$ and $b_{c}$ are neural network parameters of compatible sizes, and $\rho_{\text{s}}$ and $\rho_{\tanh}$ are activation functions given in (A.1). #### A.1.4 Expressive power of neural networks The first theoretical results about neural networks date back to 1989, with the works of Cybenko [114] and of Hornik, Stinchcombe and White [185] on the approximation capabilities of feedforward networks within a given function space of interest. Hornik [184] then extended the results to approximating the function’s derivatives, and Leshno, Lin, Pinkus and Schocken [240] proved results under arbitrary nonpolynomial activation functions. These results are referred to as universal approximation theorems. See also [274].
In the past decade, the mathematical theory has been greatly developed, which is complemented by unprecedented advances in highly parallelizable graphics processing units (GPUs), the introduction of new network architectures, and the development of GPU-enabled algorithms. For instance, in terms of approximation theories, other types of neural networks have been investigated, including RNNs [285], convolutional neural networks (CNNs) [312] and graph neural networks (GNNs) [207]. Concerning the expressive power of neural networks, [177] analyzed it from the depth point of view, while [246, 179, 266] considered a width perspective. Several works address the question of how neural networks can cope with the curse of dimensionality; see, e.g., [178, 157, 201, 196]. For further discussion on the mathematical theory of deep learning, we refer to, e.g., [49]. ### A.2 Stochastic gradient descent and its variants The process of adjusting the parameters of a parameterized system, such as a neural network, in order to optimize a loss function is called _training_. Stochastic gradient descent (SGD) is one of the most popular methods to train neural network parameters, for example for the aforementioned FNNs, RNNs and LSTMs. Consider a generic optimization problem: minimize over $\varphi$, $J(\varphi)=\mathbb{E}_{\xi\sim\nu}[\mathfrak{L}(\varphi,\xi)],$ where $\xi$ follows a distribution $\nu$ and $\mathfrak{L}$ is a loss function. Using a neural network with a given architecture as an approximator for $\varphi$, the goal becomes to minimize over $\theta$, $J(\theta)=\mathbb{E}_{\xi\sim\nu}[\mathfrak{L}(\varphi_{\theta},\xi)].$ Even if $\mathfrak{L}$ is known, the loss cannot be computed exactly when $\nu$ is unknown. If one does not have access to $\nu$ but only to samples drawn from $\nu$, one can use SGD, described in Algorithm 1, which relies on an empirical risk minimization problem, $J^{S,N}(\theta)=\frac{1}{N}\sum_{i=1}^{N}\mathfrak{L}(\varphi_{\theta},\xi^{i}),$ where $N$ is the number of training samples of $\xi$ and we denote the sample set by $S=(\xi^{1},\dots,\xi^{N})$.
Algorithm 1: Stochastic Gradient Descent (SGD).
Input: an initial parameter $\theta_{0}\in\Theta$; a mini-batch size $N_{\text{Batch}}$; a number of iterations $M$; a sequence of learning rates $(\beta_{m})_{m=0,\dots,M-1}$.
Output: an approximation of $\theta^{*}$.
For $m=0,1,2,\dots,M-1$: sample a mini-batch of $N_{\text{Batch}}$ samples $S=(\xi^{i})_{i=1,\dots,N_{\text{Batch}}}$, where the $\xi^{i}$ are drawn i.i.d. from $\nu$; compute the gradient $\nabla J^{S,N_{\text{Batch}}}(\theta_{m})$; update $\theta_{m+1}=\theta_{m}-\beta_{m}\nabla J^{S,N_{\text{Batch}}}(\theta_{m})$.
Return $\theta_{M}$.
SGD is generally used with a moderately large mini-batch size, which reduces the computational cost of each iteration and can furthermore help escape local minima. In practice, the choice of the learning rate can be crucial to ensure convergence. A popular way to adjust the learning rate is the Adam method [209], which is summarized in Algorithm 2 and can be viewed as an adaptive, momentum-accelerated SGD. The computation of the gradient $\nabla J^{S,N}(\theta)$ with respect to $\theta$ can be done automatically by libraries such as TensorFlow or PyTorch, which perform this computation efficiently by using backpropagation.
Algorithm 2 (Adam). Input: Stepsize $\alpha$. Exponential decay rates for the moment estimates $\beta_{1},\beta_{2}\in[0,1)$. Initial parameter $\theta_{0}$. Small parameter for numerical stability $\epsilon$.
Output: Approximation of $\theta^{*}$ Initialize first moment vector $\bar{M}_{0}$ and second moment vector $\bar{V}_{0}$ for $m=0,1,2,\dots,M-1$ do Sample a minibatch of $N_{\text{Batch}}$ samples
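To make Algorithm 1 concrete, here is a minimal NumPy sketch of the SGD loop on a toy problem; the quadratic loss $\mathfrak{L}(\varphi_{\theta},\xi)=(\theta-\xi)^{2}$ and the sampling distribution $\nu$ are placeholders, and in practice the gradient would be obtained by backpropagation through a neural network.

```python
import numpy as np

rng = np.random.default_rng(0)
nu_sample = lambda n: rng.normal(loc=2.0, scale=1.0, size=n)   # placeholder distribution nu

def grad_loss(theta, xi):
    """Gradient of the placeholder loss L(phi_theta, xi) = (theta - xi)^2 with respect to theta."""
    return 2.0 * (theta - xi)

def sgd(theta0, n_batch=32, n_iter=1000, lr=lambda m: 0.05):
    theta = theta0
    for m in range(n_iter):
        xi = nu_sample(n_batch)                   # mini-batch of i.i.d. samples from nu
        grad = grad_loss(theta, xi).mean()        # gradient of the empirical risk J^{S, N_Batch}
        theta = theta - lr(m) * grad              # SGD update
    return theta

theta_hat = sgd(theta0=0.0)    # converges to E[xi] = 2.0, the minimizer of this toy loss
```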
# Quantitatively predicting angle-resolved polarized Raman intensity of black phosphorus flakes Tao Liu1,2,‡, Jia-Liang Xie1,2,‡, Yu-Chen Leng1, Heng Wu1,2, Jiahong Wang3, Yang Li3, Xue-Feng Yu3, Miao-Ling Lin1<EMAIL_ADDRESS>Ping-Heng Tan1,2 <EMAIL_ADDRESS>1 State Key Laboratory of Superlattices and Microstructures, Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083, China 2 Center of Materials Science and Optoelectronics Engineering & CAS Center of Excellence in Topological Quantum Computation, University of Chinese Academy of Sciences, Beijing, 100049, China 3 Shenzhen Engineering Center for the Fabrication of Two-Dimensional Atomic Crystals, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China ‡ Contributed equally to this work ###### Abstract In-plane anisotropic layered materials (ALMs), such as black phosphorus (BP), exhibit unique angle-resolved polarized Raman (ARPR) spectroscopy characteristics, as attributed to birefringence, linear dichroism and complex Raman tensor. Moreover, the ARPR intensity profiles of BP flakes deposited on multilayer dielectrics are notably sensitive to their thickness, owing to interference effects. The intricate anisotropic effects present challenges in accurately predicting the ARPR intensity of BP flakes. In this study, we propose a comprehensive strategy for predicting the ARPR intensity of BP flakes by explicitly considering optical anisotropy, encompassing birefringence, linear dichroism, and anisotropic cavity interference effects within multilayered structures. Through this approach, we have identified the intrinsic complex Raman tensors for phonon modes, independent of the BP flake thickness. By leveraging this methodology, we have elucidated the flake thickness-dependent effective complex Raman tensor elements, allowing for precise prediction of the observed ARPR intensity profile for the BP flake. This work provides a profound understanding of ARPR behaviors for ALM flakes. Raman scattering intensity in general depends on the direction of incident laser and collected Raman light relative to the principal axes of the crystal[1], whose polarization vectors are $\textit{{e}}_{\rm i}$ and $\textit{{e}}_{\rm s}$, respectively. The Raman tensor (R with 3$\times$3 tensor elements $R_{uv}$, $u,v=x,y,z$) serves as a crucial component in determining the Raman intensity by $I\propto|\textit{{e}}_{\rm s}\cdot{\rm\textbf{R}}\cdot\textit{{e}}_{\rm i}|^{2}$ (Fig.1(a1))[1]. Once the phonon symmetry is known, the Raman selection rule for the corresponding Raman mode can be experimentally verified[1, 2], determining whether it is observed or not. By altering the direction of $\textit{{e}}_{\rm i}$ and $\textit{{e}}_{\rm s}$ with respect to the crystallographic axes, angle- resolved polarized Raman (ARPR) intensity can be estimated[1, 3, 4, 5, 6]. In most cases, only real Raman tensor is generally involved to deduce a formalism for calculating Raman scattering intensity dependent on the polarization configuration for bulk crystals[3, 4, 5]. However, in H.B. Ribeiro’s pioneering work[6], the unusual ARPR spectra were observed in black phosphorus (BP) flakes (360nm thickness), which can be explained only by considering complex Raman tensor. Then, huge efforts were made to understand the ARPR intensity profile of anisotropic layered materials (ALMs) after taking birefringence and linear dichroism effects into account in detail[6, 7, 5, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18]. 
The fitted R (amplitude ratio and phase difference between two tensor elements) were found to be sensitive to the flake thickness and the layer thickness of substrate dielectrics[6, 7, 8, 9, 10, 11, 12, 13, 14], making it impossible to predict ARPR spectrum for BP flakes with different thickness and on different substrate. In principle, Raman tensor is an inherent parameter for a crystal to understand its Raman spectrum [1, 19, 3], regardless of its volume[20], dimensionality[21, 22] and even its counterpart[20, 23] in multilayer dielectrics. For ALMs, such as BP flakes[24, 25, 26, 27, 28], the birefringence and linear dichroism effects result in depth ($y$)-dependent polarization and intensity of both excitation and scattered light[5]. This leads to $y$-dependent polarization vectors $\textit{{e}}^{\prime}_{\rm s}(y)$ and $\textit{{e}}^{\prime}_{\rm i}(y)$[5], which cannot be approximately treated as constants of $\textit{{e}}_{\rm s}$ and $\textit{{e}}_{\rm i}$ (Fig.1(a2))[5, 29, 30, 31, 32], respectively. Therefore, it is crucial to unveil the intrinsic Raman tensor ${\rm\textbf{R}_{int}}$ correlated with $\textit{{e}}^{\prime}_{\rm s}(y)$ and $\textit{{e}}^{\prime}_{\rm i}(y)$ to estimate Raman intensity at the location of the scattering event at depth $y$ within BP flakes by $I(y)\propto|\textit{{e}}^{\prime}_{\rm s}(y)\cdot{\rm\textbf{R}_{int}}\cdot\textit{{e}}^{\prime}_{\rm i}(y)|^{2}$. Flake-substrate multilayer dielectrics can further modulate the light propagation within BP flakes due to the interference effects of excitation/scattered light[33, 20]. How to extract ${\rm\textbf{R}_{int}}$ in all the experimental scenes to quantitatively predict the ARPR response of BP flakes is thus a major challenge in this field. Figure 1: (a1) Schematic diagram of Raman scattering event expressed by Raman tensor R. (a2) Schematic diagram of Raman scattering event within ALM governed by Rint, the case of oblique incidence for convenience. (b) Optical image of BP flake with $d_{\rm BP}$=139nm measured by AFM (inset). (c) Crystallographic structure of BP from the side and top views. (d) Schematic diagram of ARPR spectroscopy setup. (e) Raman spectra of BP flakes with $d_{\rm BP}$=38nm and 139nm on 90nm-SiO2/Si substrate, when $\textit{{e}}_{\rm i}$ ($\textit{{e}}_{\rm s}$) is along the ZZ and AC axes, and (f) the corresponding ARPR intensity profiles of the $A_{\rm g}^{1}$ and $A_{\rm g}^{2}$ modes, $\lambda_{\rm ex}=$633nm, and the fitted $|c_{\rm eff}|/|a_{\rm eff}|$ and $\Phi_{\rm eff}$ are indicated. In this letter, we have formulated an approach to extract ${\rm\textbf{R}_{int}}$ of BP by analyzing ARPR intensity profiles of BP flakes with different thickness deposited on 90nm-SiO2/Si substrate. The light propagation within a BP flake modulated from optical anisotropy involving birefringence, linear dichroism and interference effects in air/BP/90nm- SiO2/Si multilayers are fully taken into account. The experimental complex ${\rm\textbf{R}_{int}}$ can be used to quantitatively reproduce ARPR intensity profile of BP flakes on 305nm-SiO2/Si substrate without fitting parameters. We also generated contour plots to visualize the correlation between effective Raman tensor elements and the variations in BP flake thickness ($d_{\rm BP}$) and SiO2 layer thickness ($d_{\rm SiO_{2}}$) at three common laser wavelengths. This framework can be extended to other ALM flakes deposited on dielectric substrate to determine the Raman tensors for fully predicting their ARPR response. 
Figure 2: (a1) Schematic diagram for propagation paths of incident laser (blue) and scattered Raman signal (red) in ALM and (a2) the interference effect of incident laser and scattered light within ALM/SiO2/Si multilayer structure, the case of oblique incidence for convenience. $I$(AC)/$I$(ZZ) of (b)$A_{\rm g}^{1}$ and (c) $A_{\rm g}^{2}$ modes with $\lambda_{\rm ex}$=633nm. The solid lines are the fitted results. (d) $|c_{\rm int}|/|a_{\rm int}|$ and (e) $\phi_{\rm int}$ (open circles) and their averaged values (dashed lines) of $A_{\rm g}^{1}$ and $A_{\rm g}^{2}$ modes versus $d_{\rm BP}$. (f) ARPR intensity profiles of $A_{\rm g}^{1}$ and $A_{\rm g}^{2}$ modes in BP/90nm-SiO2/Si with different $d_{\rm BP}$, in which filled circles and pink lines are experimental and calculated results, respectively. (g) Predicted $|c_{\rm eff}|/|a_{\rm eff}|$ and (h) $\Phi_{\rm eff}$ for $A_{\rm g}^{1}$ and $A_{\rm g}^{2}$ modes in BP flakes on 90nm-SiO2/Si substrate. BP flakes were mechanically exfoliated (Section 1 of the Supplementary Materials (SM)) onto SiO2/Si substrates with $d_{\rm SiO_{2}}$=90nm and 305nm. Figure 1(b) shows the optical image of BP flakes with $d_{\rm BP}\sim$139nm, as measured by atomic force microscopy (AFM). BP is a van der Waals semiconductor with strong in-plane anisotropy along the zigzag(ZZ) and armchair(AC) axes, belonging to the orthorhombic symmetry (i.e., $D_{2h}$ point symmetry). We establish the $X$ and $Z$ axes alignment with the ZZ and AC directions[5], respectively (Fig.1(c)). We utilized the Raman setup in Fig.1(d) to measure the ARPR response of BP flakes at normal laser incidence on the basal plane under a parallel polarization configuration (Section 1 of SM). The $\textit{{e}}_{\rm i}$ and $\textit{{e}}_{\rm s}$ relative to the ZZ axis ($\theta$) are controlled by the half-wave plate in the common optical path to measure ARPR intensity of BP flakes, with $\theta=0^{\circ}$ for $\textit{{e}}_{\rm i}(\textit{{e}}_{\rm s})\parallel$ZZ axis and $\theta=90^{\circ}$ for $\textit{{e}}_{\rm i}(\textit{{e}}_{\rm s})\parallel$AC axis. In this case, $\textit{{e}}_{\rm i}$=$\textit{{e}}_{\rm s}$=(cos$\theta$, 0, sin$\theta$). Figure 1(e) plots the Raman spectra of BP flakes with $d_{\rm BP}$=38nm and 139nm for $\textit{{e}}_{\rm i}(\textit{{e}}_{\rm s})\parallel$ZZ and $\textit{{e}}_{\rm i}(\textit{{e}}_{\rm s})\parallel$AC, where two typical Raman modes, i.e., $A_{\rm g}^{1}$ and $A_{\rm g}^{2}$ modes are observed at 362 cm-1 and 466 cm-1, respectively. The Raman intensity ratio of $A_{\rm g}^{1}$ ($A_{\rm g}^{2}$) mode between $\textit{{e}}_{\rm i}(\textit{{e}}_{\rm s})\parallel$ZZ ($I$(ZZ)) and $\textit{{e}}_{\rm i}(\textit{{e}}_{\rm s})\parallel$AC ($I$(AC)) varied with $d_{\rm BP}$, as depicted in the ARPR intensity (Fig.1(f)). The nonzero tensor elements $R_{uv}$ for $A_{\rm g}$ mode are $R_{xx}=a$, $R_{yy}=b$, $R_{zz}=c$. Due to the normal incidence onto the basal plane, only $a$ and $c$ are involved. 
By utilizing R with effective complex tensor elements, $a=|a_{\rm eff}|e^{{\rm i}\Phi_{a}}$ and $c=|c_{\rm eff}|e^{{\rm i}\Phi_{c}}$ ($\Phi_{\rm eff}$=$\Phi_{c}-\Phi_{a}$)[6, 7, 8, 9, 18, 12, 14], one can connect the experimentally measured ARPR intensity with $\textit{{e}}_{\rm i}$ and $\textit{{e}}_{\rm s}$ by $I\propto|\textit{{e}}_{\rm s}\cdot{\rm\textbf{R}}\cdot\textit{{e}}_{\rm i}|^{2}$, i.e., $I\propto|a_{\rm eff}|^{2}{\rm cos}^{4}\theta+|c_{\rm eff}|^{2}{\rm sin}^{4}\theta+2|a_{\rm eff}||c_{\rm eff}|{\rm sin}^{2}\theta{\rm cos}^{2}\theta\cos\Phi_{\rm eff}.$ (1) By fitting the ARPR intensity with Eq. 1, $|c_{\rm eff}|/|a_{\rm eff}|$ and $\Phi_{\rm eff}$ can be obtained. The fitted $\Phi_{\rm eff}$ are different for $A_{\rm g}^{1}$ and $A_{\rm g}^{2}$ modes and vary with $d_{\rm BP}$ for each Raman mode. This contradicts the physical mechanism based on either birefringence[8], linear dichroism effects[6] or anisotropic electron-photon (e-pht) and electron-phonon (e-phn) couplings[9]. If $\Phi_{\rm eff}$ only originates from the impacts of birefringence and linear dichroism on $\textit{{e}}_{\rm i}$ and $\textit{{e}}_{\rm s}$, the $\Phi_{\rm eff}$ for $A_{\rm g}^{1}$ and $A_{\rm g}^{2}$ modes[8] in a BP flake should be equal; if $\Phi_{\rm eff}$ only arises from the anisotropic e-pht and e-phn couplings[9, 18], the relatively fixed electronic band structure[24] for thick (tens of nanometers) BP flakes with different $d_{\rm BP}$ should exhibit a constant $\Phi_{\rm eff}$ for $A_{\rm g}^{1}$ ($A_{\rm g}^{2}$). Thus, the fitted $|c_{\rm eff}|/|a_{\rm eff}|$ and $\Phi_{\rm eff}$ involve the interplay of various anisotropy effects, distinct for different Raman modes and sensitive to $d_{\rm BP}$, making it a challenge to predict the ARPR intensity of ALM flakes. Figure 3: (a,c) Predicted $|c_{\rm eff}|/|a_{\rm eff}|$ and (b,d) $\Phi_{\rm eff}$ for $A_{\rm g}^{1}$ and $A_{\rm g}^{2}$ modes in BP flakes with varied $d_{\rm BP}$ and $d_{\rm SiO_{2}}$ for $\lambda_{\rm ex}$=633nm. (e) Comparison of ARPR intensity profiles between the experimental (filled circles) and predicted (pink lines) results for $A_{\rm g}^{1}$ and $A_{\rm g}^{2}$ modes in BP flakes on 305nm-SiO2/Si substrate, where the predicted curves are calculated by $|c_{\rm eff}|/|a_{\rm eff}|$ and $\Phi_{\rm eff}$ in (a-d). To tell apart various anisotropy effects on ARPR intensity of ALM flakes, as depicted in Fig.2(a1), the Raman scattering processes can be separated into the propagation paths of incident/scattered light and the Raman scattering event at depth $y$. The latter is an inherent physical process governed by the anisotropic e-pht and e-phn couplings represented by ${\rm\textbf{R}_{int}}$ (Section 2 of SM), $I(y)\propto|\textit{{e}}^{\prime}_{\rm s}(y)\cdot{\rm\textbf{R}_{int}}\cdot\textit{{e}}^{\prime}_{\rm i}(y)|^{2}$, in which $\textit{{e}}^{\prime}_{\rm s}(y)$ and $\textit{{e}}^{\prime}_{\rm i}(y)$ are modulated by birefringence and linear dichroism effects[5]. In addition, BP flakes commonly deposited onto SiO2/Si substrate can generate a natural cavity due to the refractive index mismatch between the BP flake and the underlying substrate, where partial reflections of incident and scattered light occur at air/BP, BP/SiO2 and SiO2/Si interfaces (Fig.2(a2)). Multiple reflection and optical interference can further modulate both $\textit{{e}}^{\prime}_{\rm s}(y)$ and $\textit{{e}}^{\prime}_{\rm i}(y)$.
These modulations arising from birefringence, linear dichroism and optical interference effects shows evident in-plane anisotropy, which can be described by the interference factor matrices of the incident laser ($J_{\rm i}(y)$) and Raman signals ($J_{\rm s}(y)$) at varied $y$ using the transfer matrix method (TMM)[20, 5], $\displaystyle J_{\rm i(s)}(y)={\left(\begin{array}[]{ccc}F_{{\rm i(s)}X}(y)&0&0\\\ 0&0&0\\\ 0&0&F_{{\rm i(s)}Z}(y)\end{array}\right)}.$ (2) where $F_{{\rm i(s)}X}(y)$ and $F_{{\rm i(s)}Z}(y)$ are respectively defined as the enhancement factors for incident laser (scattered signal) along $X$ and $Z$ axes, calculated by the TMM (Section 3 and Fig.S1 of SM[20]). Birefringence, linear dichroism and anisotropic interference effects are manifested in the different values of $F_{{\rm i(s)}X}(y)$ and $F_{{\rm i(s)}Z}(y)$ due to the varied complex refractive indexes along $X$ ($\tilde{n}_{X}$) and $Z$ ($\tilde{n}_{Z}$) axes. Thus, $\textit{{e}}^{\prime}_{\rm i}(y)$=$J_{\rm i}(y)\textit{{e}}_{\rm i}$ and $\textit{{e}}^{\prime}_{\rm s}(y)=\textit{{e}}_{\rm s}J_{\rm s}(y)$. And the measured Raman scattered intensity for a given phonon mode from BP flake is the integration of Raman signal over $d_{\rm BP}$, expressed as follows, $I\propto\int_{0}^{d_{\rm BP}}\left|\textit{{e}}_{\rm s}J_{\rm s}(y)\cdot\textbf{R}_{\rm int}\cdot J_{\rm i}(y)\textit{{e}}_{\rm i}\right|^{2}dy.$ (3) We express the nonzero elements of ${\rm\textbf{R}_{int}}$ for $A_{\rm g}$ modes of BP flakes as $R_{xx}=|a_{\rm int}|e^{{\rm i}\phi_{a}}$, $R_{zz}=|c_{\rm int}|e^{{\rm i}\phi_{c}}$ and $\phi_{\rm int}=\phi_{c}-\phi_{a}$, then the Eq.3 becomes: $\displaystyle I\propto\int_{0}^{d_{\rm BP}}|$ $\displaystyle F_{{\rm i}X}(y)F_{{\rm s}X}(y)|a_{\rm int}|e^{{\rm i}\phi_{a}}{\rm cos}^{2}\theta$ (4) $\displaystyle+F_{{\rm i}Z}(y)F_{{\rm s}Z}(y)|c_{\rm int}|e^{{\rm i}\phi_{c}}{\rm sin}^{2}\theta|^{2}dy.$ Accordingly, $I$(ZZ)$\propto\int_{0}^{d_{\rm BP}}\left|F_{{\rm i}X}(y)F_{{\rm s}X}(y)|a_{\rm int}|\right|^{2}dy$ ($\theta=0^{\circ}$) and $I$(AC)$\propto\int_{0}^{d_{\rm BP}}\left|F_{{\rm i}Z}(y)F_{{\rm s}Z}(y)|c_{\rm int}|\right|^{2}dy$ ($\theta=90^{\circ}$). By fitting $I$(AC)/$I$(ZZ) versus $d_{\rm BP}$ (Section 4 of SM), one can obtain $\tilde{n}_{x}$, $\tilde{n}_{z}$ and $|c_{\rm int}|/|a_{\rm int}|$ for BP flakes. We summarized $I$(AC)/$I$(ZZ) with excitation wavelength ($\lambda_{\rm ex}$) of 633nm as a function of $d_{\rm BP}$ for $A_{\rm g}^{1}$ and $A_{\rm g}^{2}$ modes in Fig.2(b,c), respectively. The fitting of $I$(AC)/$I$(ZZ) for these two modes are processed independently. The fitted $\tilde{n}_{X}$ ($\tilde{n}_{Z}$) for $A_{\rm g}^{1}$ and $A_{\rm g}^{2}$ modes are almost identical to each other, whose averaged values are summarized in Table 1. With the fitted $\tilde{n}_{X}$ and $\tilde{n}_{Z}$, $F_{{\rm i(s)}X}(y)$ and $F_{{\rm i(s)}Z}(y)$ can be numerically calculated, and then $|c_{\rm int}|/|a_{\rm int}|$ and $\phi_{\rm int}$ for the $A_{\rm g}^{1}$ and $A_{\rm g}^{2}$ modes in each BP flake can be determined by fitting the corresponding ARPR intensity in Fig.1(f) and Fig.S2 (Section 4 of SM), as illustrated in Figs.2(d,e), respectively. The average $|c_{\rm int}|/|a_{\rm int}|$ and $\phi_{\rm int}$ of the $A_{\rm g}^{1}$ and $A_{\rm g}^{2}$ modes are used to calculate the ARPR intensity profile for BP flakes with varied $d_{\rm BP}$ (pink curves in Fig.2(f)), showing good agreement with the experimental ones. 
The above constant $|c_{\rm int}|/|a_{\rm int}|$ and $\phi_{\rm int}$ imply that the anisotropic e-pht ($H_{\rm e-pht}$) and e-phn coupling ($H_{\rm e-phn}$) matrices are almost unchanged for BP flakes ($d_{\rm BP}$$>$20nm), which can be ascribed to the similar electronic band structure[9]. In addition, $|c_{\rm int}|/|a_{\rm int}|$ for $A_{\rm g}^{1}$ and $A_{\rm g}^{2}$ modes are both larger than 1, indicating a larger $H_{\rm e-pht(s)}\cdot H_{\rm e-phn}\cdot H_{\rm e-pht(i)}$ (Section 2 of SM) along the $Z$ axis than along the $X$ axis owing to the much larger light absorption along the $Z$ axis[24, 9]. The larger $|c_{\rm int}|/|a_{\rm int}|$ of the $A_{\rm g}^{2}$ ($\sim 1.6$) than the $A_{\rm g}^{1}$ ($\sim 1.18$) mode implies that the ratio of $H_{\rm e-phn}$ between the $Z$ and $X$ axes for the $A_{\rm g}^{2}$ mode is $\sim$1.4 times that for the $A_{\rm g}^{1}$ mode. Furthermore, the nonzero $\phi_{\rm int}$ in Rint for both $A_{\rm g}^{1}$ and $A_{\rm g}^{2}$ modes is induced by the anisotropic dielectric function due to the linear dichroism in BP flakes. Table 1: Complex refractive indexes $\tilde{n}$ along $X$(ZZ) and $Z$(AC) axes of BP flakes at $\lambda_{\rm ex}$=633nm, 532nm and 488nm. Wavelength(nm) | $\tilde{n}_{X}$ | $\tilde{n}_{Z}$ ---|---|--- 633 | 4.04+0.03i | 3.95+0.33i 532 | 4.41+0.23i | 4.09+0.67i 488 | 4.82+0.28i | 4.50+0.74i With the above insights into the modulations from birefringence, linear dichroism, anisotropic interference effects, and anisotropic e-pht (e-phn) coupling[9, 18] in the Raman scattering process of BP flakes, we aim to integrate all these effects to derive the formalism of effective elements in R to directly predict their ARPR response. By comparing Eq. 4 with Eq. 1, we can obtain the formalism for R as follows (Section 5 of SM), $\displaystyle\frac{|c_{\rm eff}|}{|a_{\rm eff}|}=\left(\frac{|c_{\rm int}|}{|a_{\rm int}|}\right)\cdot\left(\frac{F_{Z}}{F_{X}}\right)$ (5) $\displaystyle\Phi_{\rm eff}={\rm arccos}\left(\frac{\int^{d_{\rm BP}}_{0}A_{X}A_{Z}{\rm cos}(\varphi_{X}-\varphi_{Z}+\phi_{\rm int})dy}{F_{X}F_{Z}}\right)$ where $F_{X}$=$\sqrt{\int^{d_{\rm BP}}_{0}A_{X}^{2}dy}$ and $F_{Z}$=$\sqrt{\int^{d_{\rm BP}}_{0}A_{Z}^{2}dy}$ with $A_{X}$ and $A_{Z}$ the amplitudes of $F_{{\rm i}X}(y)F_{{\rm s}X}(y)$ and $F_{{\rm i}Z}(y)F_{{\rm s}Z}(y)$, respectively. $\varphi_{X}$ and $\varphi_{Z}$ are defined as the phases of $F_{{\rm i}X}(y)F_{{\rm s}X}(y)$ and $F_{{\rm i}Z}(y)F_{{\rm s}Z}(y)$, respectively. With this analysis, we numerically calculated $|c_{\rm eff}|/|a_{\rm eff}|$ and $\Phi_{\rm eff}$ for $A_{\rm g}^{1}$ and $A_{\rm g}^{2}$ modes in BP flakes on 90nm-SiO2/Si with $\lambda_{\rm ex}$=633nm, as elucidated in Fig.2(g,h). Both $|c_{\rm eff}|/|a_{\rm eff}|$ and $\Phi_{\rm eff}$ are sensitive to $d_{\rm BP}$. With the derived R with effective elements, one can predict the ARPR intensity for BP flakes. Good agreement between the predicted and experimental ARPR intensities for $\lambda_{\rm ex}$=633nm is shown in Fig.S3 of SM. The periodic variations of $|c_{\rm eff}|/|a_{\rm eff}|$ and $\Phi_{\rm eff}$ of BP flakes give rise to periodic changes of the ARPR intensity shape. It is clear that the complicated dependencies of $|c_{\rm eff}|/|a_{\rm eff}|$ and $\Phi_{\rm eff}$ on $d_{\rm BP}$ are the main reason for the challenge in predicting ARPR intensity for BP flakes in previous studies[6, 7, 8, 9, 16, 12]. A similar derivation for R can be applied to other anisotropic ALM flakes to acquire a quantitative prediction of the ARPR response.
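For readers who wish to reproduce such predictions, the chain from the intrinsic tensor to the ARPR profile can be sketched numerically: Eq. 5 gives the effective elements from the depth profiles of the interference factors, and Eq. 1 then gives the angular intensity profile. In the short Python sketch below, the depth profiles are placeholder functions standing in for the actual TMM results, and all numerical values are examples only.

```python
import numpy as np

def effective_elements(y, FX_prod, FZ_prod, ratio_int, phi_int):
    """Eq. 5: |c_eff|/|a_eff| and Phi_eff from the complex depth profiles
    F_iX(y)F_sX(y) and F_iZ(y)F_sZ(y) of the excitation/scattering enhancement factors."""
    A_X, A_Z = np.abs(FX_prod), np.abs(FZ_prod)
    phase_X, phase_Z = np.angle(FX_prod), np.angle(FZ_prod)
    F_X = np.sqrt(np.trapz(A_X ** 2, y))
    F_Z = np.sqrt(np.trapz(A_Z ** 2, y))
    ratio_eff = ratio_int * F_Z / F_X
    cos_phi = np.trapz(A_X * A_Z * np.cos(phase_X - phase_Z + phi_int), y) / (F_X * F_Z)
    return ratio_eff, np.arccos(np.clip(cos_phi, -1.0, 1.0))

def arpr_intensity(theta, ratio_eff, phi_eff):
    """Eq. 1: ARPR intensity (up to a constant prefactor) under parallel polarization."""
    return (np.cos(theta) ** 4 + ratio_eff ** 2 * np.sin(theta) ** 4
            + 2 * ratio_eff * np.sin(theta) ** 2 * np.cos(theta) ** 2 * np.cos(phi_eff))

# Placeholder depth profiles for a d_BP = 100 nm flake (a real calculation uses the TMM factors):
y = np.linspace(0.0, 100e-9, 501)
FX_prod = np.exp(-y / 80e-9 + 1j * 2 * np.pi * y / 60e-9)
FZ_prod = 1.1 * np.exp(-y / 50e-9 + 1j * 2 * np.pi * y / 55e-9)
ratio_eff, phi_eff = effective_elements(y, FX_prod, FZ_prod, ratio_int=1.6, phi_int=np.deg2rad(40))
theta = np.deg2rad(np.arange(0, 361))
I_theta = arpr_intensity(theta, ratio_eff, phi_eff)      # predicted ARPR profile versus angle
```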
Figure 4: $I$(AC)/$I$(ZZ) of (a)$A_{\rm g}^{1}$ and (b) $A_{\rm g}^{2}$ modes of BP flakes on 90nm-SiO2/Si with $\lambda_{\rm ex}$=532nm, (c) the corresponding $|c_{\rm int}|/|a_{\rm int}|$ and (d) $\phi_{\rm int}$ versus $d_{\rm BP}$. (e) Comparison of ARPR intensity profiles of $A_{\rm g}^{1}$ and $A_{\rm g}^{2}$ modes between the experimental (filled circles) and predicted (pink lines) results for BP flakes on 305nm-SiO2/Si with $\lambda_{\rm ex}$=532nm. Owing to the evident optical interference effect for the BP/SiO2/Si multilayer structure, $|c_{\rm eff}|/|a_{\rm eff}|$ and $\Phi_{\rm eff}$ are also sensitive to $d_{\rm SiO_{2}}$. We plot the dependencies of $|c_{\rm eff}|/|a_{\rm eff}|$ and $\Phi_{\rm eff}$ on $d_{\rm SiO_{2}}$ and $d_{\rm BP}$ for $A_{\rm g}^{1}$ and $A_{\rm g}^{2}$ modes in Fig.3(a-d). The $|c_{\rm eff}|/|a_{\rm eff}|$ ratio of the $A_{\rm g}^{1}$ mode consistently remains smaller than that of the $A_{\rm g}^{2}$ mode. Similar behaviors are found for the $\Phi_{\rm eff}$. With these predicted $|c_{\rm eff}|/|a_{\rm eff}|$ and $\Phi_{\rm eff}$, the ARPR intensities for BP flakes on SiO2/Si substrates with different $d_{\rm SiO_{2}}$ are also predictable, as exemplified by the ARPR intensity of BP flakes with $d_{\rm BP}$=84nm, 105nm and 131nm on 305nm-SiO2/Si substrate in Figs.3(e-g). The predicted ARPR intensities well reproduce the measured ones. To validate the universality of the above strategy for effective elements of R to quantitatively predict the ARPR intensity in ALM flakes, we further measured the ARPR intensity for BP flakes under $\lambda_{\rm ex}$=532nm (Fig.4 and Fig.S4-6 of SM) and 488nm (Fig.S7-10 of SM). We summarize the $I$(AC)/$I$(ZZ) for $A_{\rm g}^{1}$ and $A_{\rm g}^{2}$ modes in Figs.4(a,b) for $\lambda_{\rm ex}$=532nm. Similar to the case for $\lambda_{\rm ex}$=633nm, we fit the $I$(AC)/$I$(ZZ) versus $d_{\rm BP}$ by using Eq. 4 to acquire $\tilde{n}_{X}$ and $\tilde{n}_{Z}$ for $\lambda_{\rm ex}$=532nm, as shown in Table 1. By further fitting the ARPR intensity of $A_{\rm g}^{1}$ and $A_{\rm g}^{2}$ modes (Fig.S4 of SM) with Eq. 4, we can obtain the element ratio $|c_{\rm int}|/|a_{\rm int}|$ and phase difference $\phi_{\rm int}$ in Rint, as illustrated in Figs.4(c,d) for $\lambda_{\rm ex}$=532nm. We also calculate the effective tensor element ratio $|c_{\rm eff}|/|a_{\rm eff}|$ and phase difference $\Phi_{\rm eff}$ in R for BP flakes on SiO2/Si substrates with varied $d_{\rm SiO_{2}}$ and $d_{\rm BP}$ based on the averaged values of $|c_{\rm int}|/|a_{\rm int}|$ and $\phi_{\rm int}$ obtained above, which successfully model the observed ARPR intensities of the $A_{\rm g}^{1}$ and $A_{\rm g}^{2}$ modes for BP flakes without any additional fitting parameters, as exemplified by the BP flakes with $d_{\rm BP}$=105nm (and also $d_{\rm BP}$=84nm and 131nm) on the 305nm-SiO2/Si substrate shown in Fig.4(e) (and Fig.S6 of SM). The successful modeling of the ARPR intensities of BP flakes with varying $\lambda_{\rm ex}$ and $d_{\rm SiO_{2}}$ by the predicted effective elements of R suggests the general validity of our proposed strategy for R of ALMs with effective and intrinsic elements. The obtained Rint for different $\lambda_{\rm ex}$ provides a new approach to study the anisotropic e-pht and e-phn coupling in ALM flakes.
For example, the converse variation behaviors, i.e., continuous decrease (increase) in $|c_{\rm int}|/|a_{\rm int}|$ for $A_{\rm g}^{1}$ ($A_{\rm g}^{2}$) mode with $\lambda_{\rm ex}$ decreasing from 633nm to 488nm, imply that the e-phn coupling matrix ratio between AC and ZZ axes for the $A_{\rm g}^{1}$ mode experiences opposite variation with decreasing $\lambda_{\rm ex}$ to that of the $A_{\rm g}^{2}$ mode, due to the comparable e-pht coupling for the $A_{\rm g}^{1}$ and $A_{\rm g}^{2}$ modes under specific excitation. More insights into the anisotropy in e-pht and e-phn couplings could be derived by tracing the Rint under varied laser excitations. In conclusion, the Raman selection rule in BP flakes is influenced by anisotropy in both optics and e-pht/e-phn coupling, manifested through birefringence, linear dichroism, optical interference effects of multi-layer structures, and Rint. Our proposed strategy to delineate the Rint for phonon modes in BP flakes leads to a deeper understanding of the modulation of anisotropic effects on ARPR spectra in ALM flakes. By determining the effective Raman tensor elements of R, we have been able to accurately predict the ARPR intensity profiles of the $A_{\rm g}^{1}$ and $A_{\rm g}^{2}$ modes in BP flakes on various substrates. This research overcomes the challenge in predicting ARPR intensity and provides a comprehensive insight into anisotropy and Raman scattering in ALM flakes. We acknowledge the support from the Ministry of Science and Technology of China (Grant No. 2023YFA1407000), the Strategic Priority Research Program of CAS (Grant No. XDB0460000), National Natural Science Foundation of China (Grant Nos. 12322401, 12127807 and 12393832), CAS Key Research Program of Frontier Sciences (Grant No. ZDBS-LY-SLH004), Beijing Nova Program (Grant No. 20230484301), Youth Innovation Promotion Association, Chinese Academy of Sciences (No. 2023125) and CAS Project for Young Scientists in Basic Research (YSBR-026). ## References * [1] Loudon, R. The raman effect in crystals. _Adv. Phys._ 13, 423–482 (1964). * [2] Cardona, M. (ed.) _Light Scattering in Solids I_ , vol. 8 (Springer-Verlag, Berlin, 1983). * [3] Kranert, C., Sturm, C., Schmidt-Grund, R. & Grundmann, M. Raman tensor formalism for optically anisotropic crystals. _Phys. Rev. Lett._ 116, 127401 (2016). * [4] Kranert, C., Sturm, C., Schmidt-Grund, R. & Grundmann, M. Raman tensor elements of $\beta$-Ga2O3. _Sci. Rep._ 6, 35964 (2016). * [5] Lin, M.-L. _et al._ Understanding angle-resolved polarized raman scattering from black phosphorus at normal and oblique laser incidences. _Sci. Bull._ 65, 1894–1900 (2020). * [6] Ribeiro, H. B. _et al._ Unusual angular dependence of the Raman response in black phosphorus. _ACS Nano_ 9, 4270–4276 (2015). * [7] Kim, J. _et al._ Anomalous polarization dependence of Raman scattering and crystallographic orientation of black phosphorus. _Nanoscale_ 7, 18708–18715 (2015). * [8] Mao, N. _et al._ Birefringence-directed Raman selection rules in 2D black phosphorus crystals. _Small_ 12, 2627–2633 (2016). * [9] Ling, X. _et al._ Anisotropic electron-photon and electron-phonon interactions in black phosphorus. _Nano Lett._ 16, 2260–2267 (2016). * [10] Phaneuf-L’Heureux, A.-L. _et al._ Polarization-resolved raman study of bulk-like and davydov-induced vibrational modes of exfoliated black phosphorus. _Nano Lett._ 16, 7761–7767 (2016). * [11] Zheng, W., Yan, J., Li, F. & Huang, F. Elucidation of ”phase difference” in raman tensor formalism. _Photon. Res._ 6, 709–712 (2018). 
# Reconciling Shannon and Scott with a Lattice of Computable Information

Sebastian Hunt (0000-0001-7255-4465), City, University of London, London, United Kingdom; David Sands (0000-0001-6221-0503), Chalmers University of Technology, Gothenburg, Sweden; Sandro Stucki (0000-0001-5608-8273), Amazon Prime Video, Gothenburg, Sweden

###### Abstract.

This paper proposes a reconciliation of two different theories of information. The first, originally proposed in a lesser-known work by Claude Shannon (some five years after the publication of his celebrated quantitative theory of communication), describes how the information content of channels can be described _qualitatively_, but still abstractly, in terms of _information elements_, where information elements can be viewed as equivalence relations over the data source domain. Shannon showed that these elements have a partial ordering, expressing when one information element is more informative than another, and that these partially ordered information elements form a complete lattice. In the context of security and information flow this structure has been independently rediscovered several times, and used as a foundation for understanding and reasoning about information flow. The second theory of information is Dana Scott’s domain theory, a mathematical framework for giving meaning to programs as continuous functions over a particular topology. Scott’s partial ordering also represents when one element is more informative than another, but in the sense of computational progress – i.e. when one element is a more defined or evolved version of another. To give a satisfactory account of information flow in computer programs it is necessary to consider both theories together, in order to understand not only what information is conveyed by a program (viewed as a channel, à la Shannon) but also how the precision with which that information can be observed is determined by the definedness of its encoding (à la Scott). To this end we show how these theories can be fruitfully combined, by defining _the Lattice of Computable Information_ ($\operatorname{LoCI}$), a lattice of preorders rather than equivalence relations. $\operatorname{LoCI}$ retains the rich lattice structure of Shannon’s theory, filters out elements that do not make computational sense, and refines the remaining information elements to reflect how Scott’s ordering captures possible varieties in the way that information is presented. We show how the new theory facilitates the first general definition of termination-insensitive information flow properties, a weakened form of information flow property commonly targeted by static program analyses.

Keywords: Information Flow, Semantics. CCS Concepts: Theory of computation (Program analysis; Denotational semantics); Security and privacy (Information flow control). PACMPL, Vol. 7, No. POPL, Article 68 (January 2023). DOI: 10.1145/3571740.

## 1\. Introduction

Note to Reader: this paper is not about information theory (Shannon, 1948), but about a theory of information (Shannon, 1953).

### 1.1. What is the Information in Information Flow?

The study of information flow is central to understanding many properties of computer programs, and in particular for certain classes of confidentiality and integrity properties.
In this paper we are concerned with providing a better semantic foundation for studying information flow. The starting point for understanding information flow is to understand information itself. Shannon’s celebrated theory of information (Shannon, 1948) naturally comes to mind, but Shannon’s theory is a theory about _quantities_ of information, and purposefully abstracts from the information itself. In a relatively obscure paper (with around 150 citations, a factor of 1000 fewer than his seminal work on information theory (Shannon, 1948), according to Google Scholar; Rioul et al. (2022) report that all but ten of these actually intended to cite the 1948 paper), Shannon (1953) himself notes:

> $\ldots$ $H(X)$ [the entropy of a channel $X$] can hardly be said to represent the actual information. Thus, two entirely different sources might produce information at the same rate (same $H$) but certainly they are not producing the same information.

Shannon goes on to introduce the term _information elements_ to denote the information itself. The concept of an information element can be derived by considering some channel – a random variable in Shannon’s world, but we can think of it as simply a function $f$ from a “source” domain to some “observation” codomain – and asking what information $f$ produces about its input. Shannon’s idea was to view the information itself as the set of functions which are equivalent, up to bijective postprocessing, with $f$, i.e. $\\{b\circ f\mid\text{$b$ is bijective on the range of $f$}\\}$ – in other words, all the alternative ways in which the information revealed by $f$ might be faithfully represented. Shannon observed that information elements have a natural partial ordering, reflecting when one information element is subsumed by (represents more information than) another, and that any set of information elements relating to a common information source domain can be completed into a _lattice_, with a least upper bound representing any information-preserving combination of information elements, and a greatest lower bound representing the common information shared by two information elements, thus providing the title of Shannon’s note: “A Lattice Theory of Information”. Shannon observes that any such lattice of information over a given source domain can be embedded into a general and well-known lattice, namely the lattice of equivalence relations over that source domain (Ore, 1942). In fact, the most precise lattice of information for a given domain, i.e. the one containing all information elements over that domain, is isomorphic to the lattice of equivalence relations over that domain. In the remainder of this paper we will think in terms of the most precise lattice of information for any given domain. This lattice structure, independently dubbed _the lattice of information_ by Landauer and Redmond (1993), can be used in a uniform way to phrase a large variety of interesting information flow questions, from simple confidentiality questions (is the information in the public output channel of a program no greater than in the public input data?) to arbitrarily fine-grained, potentially conditional information flow policies. The lattice of information, described in more detail in §2, is the starting point of our study.

### 1.2. Shortcomings of the Lattice of Information

The lattice of information provides a framework for reasoning about a variety of information flow properties in a uniform way.
It is natural in this approach to view programs as functions from an input domain to some output domain. But this is where we hit a shortcoming in the lattice of information: program behaviours may be _partial_, ranging from simple nontermination to various degrees of partiality when modelling structured outputs such as streams. While these features can be modelled in a functional way using domain theory (see e.g. (Abramsky and Jung, 1995)), the lattice of information is oblivious to the distinction between degrees of partiality. Towards an example, consider the following two Haskell functions:

    parity1 x = if even x then 1 else 0
    parity2 x = if even x then "Even" else "Odd"

Even though these two functions have different codomains, intuitively they release the same information about their argument, albeit encoded in different ways. In Shannon’s view they represent the same information element. The information released by a function $f$ can be represented simply by its _kernel_ – the smallest equivalence relation that relates two inputs whenever they get mapped to the same output by $f$. It is easy to see that the two functions above have the same kernel. What about programs with partial behaviours? A natural approach is to follow the denotational semantics school, and model nontermination as a special “undefined” value, $\bot$, and more generally to capture nontermination and partiality via certain families of partially ordered sets (_domains_ (Abramsky and Jung, 1995)) and to model programs as continuous functions between domains. Consider this example:

    parity0 x = if even x then 1 else parity0 x

Here the program returns 1 if the input is even, and fails to terminate otherwise. The kernel of (the denotation of) this function is the same as the examples above, which means that it is considered to reveal the same amount of information. But intuitively this is clearly not the case: parity0 provides information in a less useful form than parity1. When the input is odd, an observer of parity0 will remain in limbo, waiting for an output that never comes, whereas an observer of parity1 will see the value 0 and thus learn the parity of the input. The two are only equivalent from Shannon’s perspective if we allow _uncomputable_ postprocessors. (Of course, we are abstracting away entirely from timing considerations here. This is an intrinsic feature of the denotational model, and a common assumption in security reasoning.) Intuitively, parity0 provides information which is consistent with parity1, but the “quality” is lower, since some of the information is encoded by nontermination. Now consider programs A and B, where the input is the value of variable x and the output domain is a channel on which multiple values may be sent. Program A simply outputs the absolute value of x. Program B outputs the same value but in unary, as a sequence of outputs, then silently diverges.

    A: output(abs(x))

    B: y := abs(x);
       for i := 1 to y { output () };
       while True { };

Just as in the previous example, A and B compute functions which have the same kernel, so in the lattice of information they are equivalent. But consider what we can actually deduce from B after observing $n$ output events: we know that the absolute value of x is some value $\geq n$, but we cannot infer that it is exactly $n$, since we do not know whether there are more outputs yet to come, or if the program is stuck in the final loop. By contrast, as soon as we see the output of A, we know with certainty the absolute value of x.
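To make the contrast concrete, the following executable sketch (our illustration, not code from the paper) models the denotations of parity0 and parity1 over a finite test domain, with Nothing standing in for $\bot$; the two denotations induce the same kernel but, anticipating §3, different _ordered_ kernels. All identifiers here are ours.

```haskell
-- Denotations over a finite test domain; Nothing models the undefined value.
type Den = Int -> Maybe Int

parity0, parity1 :: Den
parity0 x = if even x then Just 1 else Nothing   -- "diverges" on odd inputs
parity1 x = if even x then Just 1 else Just 0

dom :: [Int]
dom = [0 .. 7]

-- Unordered kernel: two inputs are related when they are mapped to equal results.
kernel :: Den -> Int -> Int -> Bool
kernel f a b = f a == f b

-- Ordered kernel: a is related to b when f a is below f b in the flat order on Maybe.
kernelLE :: Den -> Int -> Int -> Bool
kernelLE f a b = case (f a, f b) of
  (Nothing, _)     -> True
  (Just x, Just y) -> x == y
  _                -> False

main :: IO ()
main = do
  -- Same LoI element: the unordered kernels agree on every pair of inputs.
  print (and [ kernel parity0 a b == kernel parity1 a b | a <- dom, b <- dom ])     -- True
  -- Different LoCI elements: the ordered kernels disagree (e.g. on odd inputs).
  print (and [ kernelLE parity0 a b == kernelLE parity1 a b | a <- dom, b <- dom ]) -- False
```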
In summary, the lattice of information fails to take into account that information can be encoded at different degrees of definedness. (Here we have drawn, albeit very informally, on foundational ideas developed by Smyth (1983), Abramsky (1987, 1991) and Vickers (1989), which reveal deep connections between domain theory, topology and logics of observable properties.) A second shortcoming addressed in this paper, again related to the lattice of information’s unawareness of nontermination, is its inability to express, in a general, non-ad-hoc way, a standard and widely used weakening of information flow properties to the so-called _termination-insensitive_ properties (Sabelfeld and Sands, 2001; Sabelfeld and Myers, 2003). (When considering programs with stream outputs, they are also referred to as _progress-insensitive_ properties (Askarov and Sabelfeld, 2009).) These properties are weakenings of information flow policies which ignore any information which is purely conveyed by the definedness of the output (i.e. termination in the case of batch computation, and progress in the case of stream-based output).

### 1.3. Contributions

#### Contribution 1: A refined lattice of information

In this paper we present a new abstraction for information, the _Lattice of Computable Information_ ($\operatorname{LoCI}$), which reconciles Shannon’s lattice of information with Scott’s domain ordering (§3). It does so by moving from a lattice of equivalence relations to a lattice of preorder relations, where the equivalence classes of the preorder reflect the “information elements”, and the ordering between them captures the distinction in quality of information that arises through partiality and nontermination (the Scott ordering). Just as with the lattice of information, $\operatorname{LoCI}$ induces an information ordering relation on functions; in this ordering, `parity0` is less than `parity1`, but `parity1` is still equivalent to `parity2`. Similarly, programs $A$ and $B$ above are related but not equivalent. We show that $\operatorname{LoCI}$ is, like the lattice of information, well behaved with respect to various composition properties of functions.

#### Contribution 2: A generalised definition of termination-insensitive noninterference

The lattice of computable information gives us the ability to make finer distinctions about information flow with respect to progress and termination. By modelling this distinction we also have the ability to systematically ignore it; this provides the first uniform generalisation of the definition of termination-insensitive information flow properties (§4). The remainder of the paper begins with a review of the lattice of information (§2), which is followed by our refinement (§3), the treatment of termination-insensitivity (§4), a discussion of related work (§5), and some directions for further work (§6).

## 2\. The Lattice of Information

The lattice of information is a way to abstract the information about a data source $D$ which might be revealed by various functions over that data. Mathematically, it is simply the set of equivalence relations over $D$, ordered by reverse inclusion, a structure that forms a _complete lattice_ (Ore, 1942), i.e. every set of elements in the lattice has a least upper bound and a greatest lower bound. The lattice of information has been rediscovered in several contexts relating to information and information flow, e.g. using partial equivalence relations (PERs) (Hunt, 1991; Hunt and Sands, 1991).
Here we use the terminology from Landauer and Redmond (1993) who call it the _lattice of information_. To introduce the lattice of information let us consider a simple set of values $D=\\{\text{Red},\text{Orange},\text{Green},\text{Blue}\\}$ and the following three functions over $D$: $\displaystyle\text{isPrimary}(c)=\begin{cases}\text{True}&\text{if $c\in\\{\text{Red},\text{Blue}\\}$}\\\ \text{False}&\text{otherwise}\end{cases}\qquad\quad\text{isTrafficLight}(c)=\begin{cases}\text{False}&\text{if $c=\text{Blue}$}\\\ \text{True}&\text{otherwise}\end{cases}$ $\displaystyle\text{primary}(c)=\begin{cases}\text{``The primary colour red''}&\text{if $c=\text{Red}$}\\\ \text{``The primary colour blue''}&\text{if $c=\text{Blue}$}\\\ \text{``Not a primary colour''}&\text{otherwise}\end{cases}$ Now consider the information that each of these functions reveals about its input: _isPrimary_ and _isTrafficLight_ reveal incomparable information about their inputs – for example we cannot define either one of them by postprocessing the result of the other. The function _primary_ , however, not only subsumes both of them, but represents _exactly_ the information that the pair of them together reveal about the input, nothing more, nothing less. The lattice of information (over $D$) makes this precise by representing the information itself as an equivalence relation on the elements of $D$. Elements that are equivalent for a given relation are elements which we can think of as indistinguishable. ###### Definition 0 (Lattice of Information). For a set $D$, the lattice of information over $D$, $\operatorname{LoI}(D)$, is defined to be the lattice $\operatorname{LoI}(D)=(\mathrm{ER}(D),\mathrel{\sqsubseteq_{\text{\tiny$\operatorname{LoI}$}}},\mathrel{\sqcup_{\text{\tiny$\operatorname{LoI}$}}},\mathrel{\sqcap_{\text{\tiny$\operatorname{LoI}$}}})$ where $\mathrm{ER}(D)$ is the set of all equivalence relations over $D$, $P\mathrel{\sqsubseteq_{\text{\tiny$\operatorname{LoI}$}}}Q\mathrel{\stackrel{{\scriptstyle{\scriptscriptstyle\mathrm{def}}}}{{=}}}Q\subseteq P$, the join operation $\mathrel{\sqcup_{\text{\tiny$\operatorname{LoI}$}}}$ is set intersection of relations, and the meet, $\mathrel{\sqcap_{\text{\tiny$\operatorname{LoI}$}}}$, is the transitive closure of the set-union of relations. Note that $\operatorname{LoI}(D)$ is a complete lattice (contains all joins and meets, not just the binary ones) (Ore, 1942). The top element of $\operatorname{LoI}(D)$ is the identity relation on $D$, which we write as $\mathrm{Id}_{D}$, or just $\mathrm{Id}$ when $D$ is clear from context; the bottom element is the relation which relates every element to every other element, which we write as $\mathrm{All}_{D}$, or just $\mathrm{All}$. In the above definitions we consider equivalence relations to be sets of pairs of elements of $D$. Another useful way to view equivalence relations is as partitions of $D$ into disjoint blocks (equivalence classes). Given an equivalence relation $P$ on a set $D$ and an element $a\in D$, let $[a]_{P}$ denote the (necessarily unique) equivalence class of $P$ which contains $a$. Let $[P]$ denote the set of all equivalence classes of $P$. Note that $[P]$ is a partition of $A$. Figure 1. An Example Sublattice of the lattice of Information over $\\{\text{Red},\text{Orange},\text{Green},\text{Blue}\\}$ In Fig. 
1 we present a Hasse diagram of a sublattice of the lattice of information containing the points representing the information provided by the functions above, and visualising the equivalence relations by representing them as partitions. Note that with the partition view, the ordering relation is partition refinement. The full lattice $\operatorname{LoI}\\{\text{Red},\text{Orange},\text{Green},\text{Blue}\\}$ contains 15 elements (known in combinatorics as the _$4^{th}$ Bell number_). ### 2.1. The Information Ordering on Functions To understand the formal connection between the functions and the corresponding information that they release, we use the well-known notion of the _kernel_ of a function: We recall that the _kernel_ of a function $f:D\rightarrow E$ is the equivalence relation $\ker(f)\in\operatorname{LoI}(D)$ which relates all elements mapped by $f$ to the same result: $a\mathrel{\ker(f)}b$ iff $f(a)=f(b)$. Thus the points illustrated in the lattice do indeed correspond to the respective kernels of the functions, and it can readily be seen that $\ker(\text{primary})=\ker(\text{isPrimary})\mathrel{\sqcup_{\text{\tiny$\operatorname{LoI}$}}}\ker(\text{isTrafficLight})$. Note that taking kernels induces an information preorder on any functions $f$ and $g$ which have a common input domain (we write $\mathrm{dom}(f)=\mathrm{dom}(g)$), namely $f\precsim g\mathrel{\mathrm{\ iff\ }}\ker(f)\mathrel{\sqsubseteq_{\text{\tiny$\operatorname{LoI}$}}}\ker(g)$, i.e. $g$ reveals at least as much information about its argument as $f$. Note that this information ordering between functions can be characterised in a number of ways. ###### Proposition 0. For any functions $f$ and $g$ such that $\mathrm{dom}(f)=\mathrm{dom}(g)$ the following are equivalent: 1. (1) $f\precsim g$ 2. (2) $\\{p\circ f\mid\mathrm{codom}(f)=\mathrm{dom}(p)\\}\subseteq\\{p\circ g\mid\mathrm{codom}(g)=\mathrm{dom}(p)\\}$ (where $\mathrm{codom}(f)$ is the codomain of function $f$) 3. (3) There exists $p$ such that $f=p\circ g$ The proposition essentially highlights the fact that the information ordering on functions can alternatively be understood in terms of postprocessing (the function $p$). The set $\\{p\circ f\mid\mathrm{codom}(f)=\mathrm{dom}(p)\\}$ can be viewed as all the things which can be computed from the result of applying $f$. ### 2.2. An Epistemic View In our refinement of the lattice of information we will lean on an _epistemic_ characterisation of the function ordering which focuses on the facts which an observer of the output of a function might learn about its input. ###### Definition 0. For $f:A\rightarrow B$ and $a\in A$, define the $f$-_knowledge set_ for $a$ as: $K_{f}(a)=\\{a^{\prime}\in A\mid f(a)=f(a^{\prime})\\}$ The knowledge set for an input $a$ is thus what an observer who knows the function $f$ can maximally deduce about the input if they only get to observe the result, $f(a)$. For example, $K_{\text{primary}}(\text{Green})=\\{c\mid\text{primary}(c)=\text{primary}(\text{Green})\\}=\\{\text{Green},\text{Orange}\\}$. Note that although we use the terminology “knowledge”, following work on the semantics of dynamic security policies (Askarov and Sabelfeld, 2007; Askarov and Chong, 2012), it is perhaps more correct to think of this as _uncertainty_ in the sense that a smaller set corresponds to a more precise deduction. The point here is that $f\precsim g$ can be characterised in terms of knowledge sets: $g$ will produce knowledge sets which are at least as precise as those of $f$: ###### Proposition 0. 
Let $f$ and $g$ be any two functions with domain $A$. Then $f\precsim g$ iff $K_{g}(a)\subseteq K_{f}(a)$ for all $a\in A$. ### 2.3. Information Flow and Generalised Kernels Although we can understand the information released by a function by considering its kernel as an element of the lattice of information, for various reasons it is useful to generalise this idea. The first reason is that we are often interested in understanding the information flow through a function when just a part of the function’s output is observed. For example, if we want to know whether a function is secure, this may require verifying that the public parts of the output reveal information about at most the non- secret inputs. The second reason to generalise the way we think about information flow of functions is to build compositional reasoning principles. Suppose that we know that a function $f$ reveals information $P$ about its input. Now suppose that we wish to reason about $f\circ g$. In order to make use of what we know about $f$ we need to understand the information flow of $g$ when the output is “observed” through $P$. This motivates the following generalised information flow definition (the specific notation here is taken from (Hunt, 1991; Sabelfeld and Sands, 2001), but we state it for arbitrary binary relations à la logical relations (Reynolds, 1983)): ###### Definition 0. Let $P$ and $Q$ be binary relations on sets $A$ and $B$, respectively. Let $f:A\rightarrow B$. Define: $f:P\Rightarrow Q\mathrel{\mathrm{\ iff\ }}\forall a,a^{\prime}.({a\mathrel{P}a^{\prime}}\quad\text{implies}\quad{{f(a)}\mathrel{Q}{f(a^{\prime})}})$ When $P$ and $Q$ are equivalence relations, these definitions describe information flow properties of $f$ where $P$ describes an upper bound on what can be learned about the input when observing the output “through” $Q$ (i.e. we cannot distinguish $Q$-related outputs). We can read $f:P\Rightarrow Q$ as an information flow typing at the semantic level. As such it can be seen to enjoy natural composition and subtyping properties. Again, we state these in a more general form as we will reuse them for different kinds of relation: ###### Fact 1. The following inference rules are valid for all functions and binary relations of appropriate type: $\displaystyle\frac{\begin{array}[]{c}\;P^{\prime}\subseteq P\;\;\;f:P\Rightarrow Q\;\;\;Q\subseteq Q^{\prime}\end{array}}{\begin{array}[]{c}\;f:{P^{\prime}}\Rightarrow{Q^{\prime}}\end{array}}~{}\text{\emph{Sub}}\hskip 30.00005pt\displaystyle\frac{\begin{array}[]{c}\;f:P\Rightarrow Q\;\;\;g:Q\Rightarrow R\end{array}}{\begin{array}[]{c}\;g\circ f:{P}\Rightarrow{R}\end{array}}~{}\text{\emph{Comp}}$ When these relations are elements of the lattice of information, the conditions $P^{\prime}\subseteq P$ and $Q\subseteq Q^{\prime}$ in the Sub-rule amount to $P^{\prime}\mathrel{\sqsupseteq_{\text{\tiny$\operatorname{LoI}$}}}P$ and $Q\mathrel{\sqsupseteq_{\text{\tiny$\operatorname{LoI}$}}}Q^{\prime}$, respectively. Information flow properties also satisfy weakest precondition and strongest postcondition-like properties. To present these, we start by generalising the notion of kernel of a function: ###### Definition 0 (Generalised Kernel). Let $\operatorname{REL}(A)$ denote the set of all binary relations on a set $A$. 
For any $f:A\to B$, define $f^{\ast}:\operatorname{REL}(B)\to\operatorname{REL}(A)$ as follows: ${x\mathrel{f^{\ast}(R)}y}\mathrel{\mathrm{\ iff\ }}{f(x)\mathrel{R}f(y)}$ We call this the _generalised kernel map_ , since $\ker(f)=f^{\ast}(\mathrm{Id})$. Now, it is evident that $f^{\ast}$ preserves reflexivity, transitivity and symmetry, so restricting $f^{\ast}$ to equivalence relations immediately yields a well defined map in $\operatorname{LoI}$ (Landauer and Redmond (1993) use the notation $f\\#$ for this map). Moreover, we can define a partner $f_{!}$, which operates in the opposite direction and has dual properties (as formalised below): ###### Definition 0. For $f:A\to B$: 1. (1) $f^{\ast}:\operatorname{LoI}(B)\to\operatorname{LoI}(A)$ is the restriction of the generalised kernel map to $\operatorname{LoI}(B)$. 2. (2) $f_{!}:\operatorname{LoI}(A)\to\operatorname{LoI}(B)$ is given by ${f_{!}(P)}={\textstyle{\bigsqcup_{\text{\tiny LoI}}}\\{Q\in\operatorname{LoI}\mid f:P\Rightarrow Q\\}}$. Note that we are overloading our notation here, using $f^{\ast}$ for both the map on $\operatorname{REL}$ and its restriction to $\operatorname{LoI}$. Later, in §3.6, we overload it again (along with $f_{!}$). Our justification for this overloading is that in each case these maps are doing essentially the same thing333This can be made precise, categorically. See §3.7.: $f^{\ast}(Q)$ is the _weakest precondition_ for $Q$ (i.e. the smallest $P$ such that $f:P\Rightarrow Q$) while $f_{!}(P)$ is the _strongest postcondition_ of $P$ (i.e. the largest $Q$ such that $f:P\Rightarrow Q$), where “smallest” and “largest” are interpreted within the relevant lattice ($\operatorname{LoI}$ here, our refined lattice $\operatorname{LoCI}$ later). The following proposition formalises this for $\operatorname{LoI}$ (see Proposition 11 for its $\operatorname{LoCI}$ counterpart): ###### Proposition 0. For any $f:A\to B$, $f^{\ast}$ and $f_{!}$ are monotone and, for any $P\in\operatorname{LoI}(A)$ and $Q\in\operatorname{LoI}(B)$, the following are all equivalent: _(1)_ ${f:P\Rightarrow Q}$ _(2)_ ${{f^{\ast}(Q)}\mathrel{\sqsubseteq_{\text{\tiny$\operatorname{LoI}$}}}P}$ _(3)_ ${Q\mathrel{\sqsubseteq_{\text{\tiny$\operatorname{LoI}$}}}{f_{!}(P)}}$ We have summarised a range of key properties of the lattice of information that make it useful for both formulating a wide variety of information flow properties, as well as proving them in a compositional way. An important goal in refining the lattice of information will be to ensure that we still enjoy properties of the same kind. ## 3\. LoCI: The Lattice of Computable Information Our goal in this section is to introduce a refinement of noninterference which accounts for the difference in quality of knowledge that arises from nontermination, or more generally partiality, for example when programs produce output streams that may at some point fail to progress. We will assume that a program is modelled in a domain-theoretic denotational style, as a continuous function between partially ordered sets. In this setting, the order relation on a set of values models their relative degrees of “definedness”. Simple nontermination is modelled as a bottom element, $\bot$, and in general the ordering relation reflects the evolution of computation. Following Scott, the pioneer of this approach, when one element $d$ is dominated by another $e$, one can think of $e$ as containing _more information_ than $d$. 
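Before refining this picture, it is worth pausing to see the $\operatorname{LoI}$ machinery of §2 in executable form; the sketch below (ours, not the paper’s code, and with all identifiers invented for illustration) represents relations over a finite set as Boolean-valued functions, implements the generalised kernel map $f^{\ast}$, the flow judgement $f:P\Rightarrow Q$ and the $\operatorname{LoI}$ join, and checks the colour example of §2.

```haskell
-- Relations over a finite carrier as Boolean-valued functions.
data Colour = Red | Orange | Green | Blue deriving (Eq, Enum, Bounded, Show)

type Rel a = a -> a -> Bool

colours :: [Colour]
colours = [minBound .. maxBound]

pullback :: (a -> b) -> Rel b -> Rel a            -- f*(Q): the generalised kernel map
pullback f q x y = q (f x) (f y)

kernelOf :: Eq b => (a -> b) -> Rel a             -- ker(f) = f*(Id)
kernelOf f = pullback f (==)

joinLoI :: Rel a -> Rel a -> Rel a                -- join in LoI is intersection of relations
joinLoI p q x y = p x y && q x y

flows :: [a] -> (a -> b) -> Rel a -> Rel b -> Bool   -- the judgement f : P => Q, checked pointwise
flows d f p q = and [ q (f x) (f y) | x <- d, y <- d, p x y ]

isPrimary, isTrafficLight :: Colour -> Bool
isPrimary c      = c == Red || c == Blue
isTrafficLight c = c /= Blue

primary :: Colour -> String
primary Red  = "The primary colour red"
primary Blue = "The primary colour blue"
primary _    = "Not a primary colour"

-- ker(primary) coincides with ker(isPrimary) joined with ker(isTrafficLight):
checkJoin :: Bool
checkJoin = and [ kernelOf primary x y
                    == joinLoI (kernelOf isPrimary) (kernelOf isTrafficLight) x y
                | x <- colours, y <- colours ]
```

Here `checkJoin` evaluates to True, and `flows colours isPrimary (kernelOf primary) (==)` also holds, reflecting that the output of primary determines the output of isPrimary.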
In the domain-theoretic view, a partial element is not a concrete observation or outcome, but a degree of knowledge about a computation. In this sense $\bot$ represents no knowledge – you do not fully observe a nonterminating computation, it may still evolve into some more defined result. Note how this view emphasises how we are abstracting away from time. This also explains the basic requirement that all functions (which will be the denotation of programs) are monotone: if you know more about the input (in Scott’s sense) you know more about the output. In domain theory (a standard reference is (Abramsky and Jung, 1995)) one restricts attention to some subclass of well- behaved partially ordered sets (the _domains_ of the theory), in order that recursive computations may be given denotations as least fixpoints. Being well-behaved in this context entails the existence of suprema of directed sets (and usually a requirement that the domain has a finitary presentation in terms of its compact elements). In this paper we keep our key definitions as general as possible by stating them for arbitrary partially ordered sets, but still requiring that the functions under study are continuous (preserve directed suprema when they exist). We expect that some avenues for future work may require additional structure to be imposed (see §6). ### 3.1. Order-Theoretic Preliminaries A _partial order_ on a set $A$ is a reflexive, transitive and antisymmetric relation on $A$. A _poset_ is a pair $(A,{\sqsubseteq}_{A})$ where $\sqsubseteq$ is a partial order on $A$. We typically elide the subscript on ${\sqsubseteq}_{A}$ when $A$ is clear from the context. The _supremum_ of a subset $X\subseteq A$, if it exists, is the least upper bound $\bigsqcup X$ with respect to $\sqsubseteq_{A}$. A set $X\subseteq A$ is _directed_ if $X$ is non-empty and, for all $x_{1}\in X,x_{2}\in X$, there exists $x^{\prime}\in X$ such that $x_{1}\sqsubseteq x^{\prime}$ and $x_{2}\sqsubseteq x^{\prime}$. For posets $A$ and $B$, a function $f:A\rightarrow B$ is _monotone_ iff $a\sqsubseteq a^{\prime}$ implies $f(a)\sqsubseteq f(a^{\prime})$. A function $f:A\rightarrow B$ is _Scott-continuous_ if, for all $X$ directed in $A$, whenever $\bigsqcup X$ exists in $A$ then $\bigsqcup f(X)$ exists in $B$ and is equal to $f(\bigsqcup X)$. From now on we will simply say continuous when we mean Scott-continuous. Note: 1. (1) Continuity implies monotonicity because $a\sqsubseteq a^{\prime}$ implies both that $\\{a,a^{\prime}\\}$ is directed and that $\bigsqcup\\{a,a^{\prime}\\}=a^{\prime}$, while $\bigsqcup\\{f(a),f(a^{\prime})\\}=f(a^{\prime})$ implies $f(a)\sqsubseteq f(a^{\prime})$. 2. (2) Monotonicity in turn implies that, if $X$ is directed in $A$, then $f(X)$ is directed in $B$. ###### Notation 0. In what follows, we write $f\in[A\to B]$ as a shorthand to mean both that $A$ and $B$ are posets and that $f$ is continuous. ### 3.2. Ordered Knowledge Sets Our starting point is the epistemic view presented in §2.2. Recall that we defined the $f$-knowledge set for an input $a$ to be the set $\\{a^{\prime}\mid f(a^{\prime})=f(a)\\}$, which is what we learn by observing the output of $f$ when the input is $a$. However, as discussed in the introduction to this section, in a domain-theoretic setting, observation of a _partial_ output should be understood as provisional: there may be more to come. This requires us to modify the definition of knowledge set accordingly. 
What we learn about the input when we see a partial output is that the input could be anything which produces that observation, _or something greater_. Hence: ###### Definition 0. For $f\in[A\to B]$ and $a\in A$, define the _ordered $f$-knowledge set_ for $a$ as: $K^{\sqsubseteq}_{f}(a)=\\{a^{\prime}\in A\mid f(a)\sqsubseteq f(a^{\prime})\\}$ Recall (Proposition 4) that the LoI preorder on functions has an alternative characterisation in terms of knowledge sets: the kernel of $g$ is a refinement of (i.e. more discriminating than) the kernel of $f$ just when each knowledge set of $g$ is a subset of (i.e. more precise than) the corresponding knowledge set of $f$. Unsurprisingly however, if we compare continuous functions based on their _ordered_ knowledge sets, the correspondence with LoI is lost. Consider the examples parity0 and parity1 from §1.2. We can model these as functions $f_{0},f_{1}\in[Z\rightarrow Z_{\bot}]$, where $Z$ is discretely ordered (the partial order is just equality) and the lifting $Z_{\bot}$ adds a new element $\bot$ which is $\sqsubseteq$ everything. We have: $f_{0}(x)=\left\\{\begin{array}[]{cl}1&\mbox{ if }x\mbox{ is even}\\\ \bot&\mbox{ if }x\mbox{ is odd}\end{array}\right.$ and $f_{1}(x)=\left\\{\begin{array}[]{cl}1&\mbox{ if }x\mbox{ is even}\\\ 0&\mbox{ if }x\mbox{ is odd}\end{array}\right.$ As discussed previously, these two functions have the same kernel and so are LoI-equivalent. Moreover, in accordance with Proposition 4, it is easy to see that they induce the same knowledge sets: $K_{f_{0}}(x)=K_{f_{1}}(x)$ for all $x\in Z$. However, they do not induce the same _ordered_ knowledge sets. In particular, when $x$ is odd we have $K^{\sqsubseteq}_{f_{1}}(x)=\\{y\in Z\mid y\mbox{ is odd}\\}$ but $K^{\sqsubseteq}_{f_{0}}(x)=Z$. In fact, not only do the two functions induce different ordered knowledge sets, but $f_{1}$ is (strictly) more informative than $f_{0}$, since $K^{\sqsubseteq}_{f_{1}}(x)\subseteq K^{\sqsubseteq}_{f_{0}}(x)$ for all $x$ (and $K^{\sqsubseteq}_{f_{1}}(x)\neq K^{\sqsubseteq}_{f_{0}}(x)$ for some $x$). Our key insight is that it is possible to define an alternative information lattice, one which corresponds exactly with ordered knowledge sets, by using (a certain class of) preorders, in place of the equivalence relations used in LoI. ### 3.3. Ordered Kernels A _preorder_ is simply a reflexive and transitive binary relation. Clearly, every equivalence relation is a preorder, but not every preorder is an equivalence relation. As with equivalence relations, it is possible to present a preorder in an alternative form, as a partition rather than a binary relation, but with one additional piece of information: a partial order on the blocks of the partition. In fact, there is a straightforward 1-1 correspondence between preorders and partially ordered partitions: 1. (1) Given a preorder $Q$ on a set $A$, for each $a\in A$, define ${[}a{]}_{Q}=\\{a^{\prime}\mid{a\mathrel{Q}a^{\prime}}\wedge{a^{\prime}\mathrel{Q}a}\\}$ and ${[}Q{]}=\\{{[}a{]}_{Q}\mid a\in A\\}$. (Note: although we appear to be overloading the notation introduced in §2, the definitions agree in the case that $Q$ is an equivalence relation.) Then define ${[}a_{1}{]}_{Q}\mathrel{{\sqsubseteq}_{Q}}{[}a_{2}{]}_{Q}$ iff $a_{1}\mathrel{Q}a_{2}$. This is a well-defined partial order on ${[}Q{]}$. 2. 
(2) Conversely, given a poset $(\Phi,\sqsubseteq)$, where $\Phi$ is a partition of set $A$, we recover the corresponding preorder on $A$ by defining $a\mathrel{Q}a^{\prime}$ iff ${[}a{]}_{\Phi}\sqsubseteq{[}a^{\prime}{]}_{\Phi}$. For a preorder $Q$, we refer to the equivalence relation with equivalence classes ${[}Q{]}$ as the _underlying_ equivalence relation of $Q$. Clearly, the underlying equivalence relation of $Q$ is just $Q\cap Q^{-1}$. Taking the same path for kernels that we took from unordered to ordered knowledge sets, we arrive at the following definition: ###### Definition 0. Let $(B,\sqsubseteq)$ be a poset. Given $f\in[A\rightarrow B]$, define its _ordered kernel_ $\mathrel{\ker_{\sqsubseteq}(f)}$ to be $f^{\ast}(\sqsubseteq)$, thus $x\mathrel{\ker_{\sqsubseteq}(f)}y$ iff $f(x)\sqsubseteq f(y)$. ###### Proposition 0. $\mathrel{\ker_{\sqsubseteq}(f)}$ is a preorder, and its underlying equivalence relation is $\ker(f)$. Only some preorders are the ordered kernels of continuous functions. For example, if $a\sqsubseteq a^{\prime}$ and $Q$ is the ordered kernel of some continuous $f$, then it must be the case that $a\mathrel{Q}a^{\prime}$, since $a\sqsubseteq a^{\prime}$ implies $f(a)\sqsubseteq f(a^{\prime})$. ###### Definition 0 (Complete Preorder). Let $A$ be a poset and let $Q$ be a preorder on $A$. We say that $Q$ is _complete_ iff, whenever $X$ is directed in $A$ and $\bigsqcup X$ exists: 1. (1) $\forall x\in X.\;x\mathrel{Q}(\bigsqcup X)$ 2. (2) $\forall a\in A.\;(\forall x\in X.\;x\mathrel{Q}a)\mathrel{\text{implies}}{(\bigsqcup X)\mathrel{Q}a}$ Note that part (1) entails that every complete $Q$ contains the domain ordering $(\sqsubseteq)$. It is perhaps more illuminating to see the definition of completeness for $Q$ presented in terms of its corresponding partially ordered partition: ###### Lemma 0. Let $A$ be a poset and let $Q$ be a preorder on $A$. Then $Q$ is complete iff, whenever $X$ is directed in $A$ and $\bigsqcup X$ exists in $A$, $\bigsqcup{\\{{[}x{]}_{Q}\mid x\in X\\}}$ exists in $({[}Q{]},\mathrel{{\sqsubseteq}_{Q}})$ and is equal to ${[}\bigsqcup X{]}_{Q}$. In other words, $Q$ is complete iff the quotient map $(\lambda a.{[}a{]}_{Q}):A\to({[}Q{]},\sqsubseteq_{Q})$ is continuous. To round off this section, we establish that the complete preorders on a poset are just the ordered kernels of all the continuous functions with that domain: ###### Theorem 6. Let $A$ be a poset. Then $Q$ is a complete preorder on $A$ iff there is some poset $B$ and $f\in[A\rightarrow B]$ such that $Q={\mathrel{\ker_{\sqsubseteq}(f)}}$. ###### Proof. The implication from left to right is established by Lemma 5. For the implication right to left, assume $f$ is continuous and let $Q={\mathrel{\ker_{\sqsubseteq}(f)}}$. Let $X$ be directed in $A$ such that $\bigsqcup X$ exists. Then: 1. (1) Let $x\in X$. Since $f$ is monotone, $f(x)\sqsubseteq f(\bigsqcup X)$, thus $x\mathrel{Q}(\bigsqcup X)$. 2. (2) Let $a\in A$ be such that $x\mathrel{Q}a$ for all $x\in X$. Then $f(x)\sqsubseteq f(a)$ for all $x\in X$, hence $(\bigsqcup f(X))\sqsubseteq f(a)$, hence $f(\bigsqcup X)\sqsubseteq f(a)$. Thus $(\bigsqcup X)\mathrel{Q}a$. ∎ ### 3.4. LoCI We now define the lattice of computable information as a lattice of complete preorders, directly analogous to the definition of LoI as a lattice of equivalence relations. In particular, we can rely on the fact that the complete preorders are closed under intersection: ###### Lemma 0. Let $\\{Q_{i}\\}$ be an arbitrary family of complete preorders. 
Then $\bigcap Q_{i}$ is a complete preorder.

###### Definition 0 (Lattice of Computable Information). For a poset $A$, the lattice of computable information over $A$, $\operatorname{LoCI}(A)$, is defined to be the lattice $\operatorname{LoCI}(A)=(\mathrm{PRE}(A),\mathrel{\sqsubseteq_{\text{\tiny LoCI}}},\mathrel{\sqcup_{\text{\tiny LoCI}}})$ where $\mathrm{PRE}(A)$ is the set of all complete preorders on $A$, $P\mathrel{\sqsubseteq_{\text{\tiny LoCI}}}Q\mathrel{\stackrel{{\scriptstyle{\scriptscriptstyle\mathrm{def}}}}{{=}}}Q\subseteq P$, and ${\textstyle{\bigsqcup_{\text{\tiny LoCI}}}}\mathrel{\stackrel{{\scriptstyle{\scriptscriptstyle\mathrm{def}}}}{{=}}}{\bigcap}$.

Since $\operatorname{LoCI}(A)$ has all joins (not just the binary ones), with bottom element $\mathrm{All}_{A}=A\times A$ and top element $\sqsubseteq_{A}$, it also has all meets, and hence is a _complete_ lattice. Meets are not used in what follows so we do not dwell on them further here. As for $\operatorname{LoI}$, we can define a preorder on (continuous) functions based on their ordered kernels: $f\mathrel{\precsim_{\text{\tiny LoCI}}}g$ iff ${\mathrel{\ker_{\sqsubseteq}(f)}}\mathrel{\sqsubseteq_{\text{\tiny LoCI}}}{\mathrel{\ker_{\sqsubseteq}(g)}}$. As claimed above, this corresponds exactly to an ordering of continuous functions based on their ordered knowledge sets:

###### Proposition 0. Let $A$ be a poset and let $f$ and $g$ be any two continuous functions with domain $A$. Then $f\mathrel{\precsim_{\text{\tiny LoCI}}}g$ iff $K^{\sqsubseteq}_{g}(a)\subseteq K^{\sqsubseteq}_{f}(a)$ for all $a\in A$.

### 3.5. An Example LoCI

In this section we describe $\operatorname{LoCI}(V)$ for the simple four-point domain $V$ shown in Fig. 2(a). A Hasse diagram of the lattice structure is shown in Fig. 2(b). On the right we enumerate all the complete preorders on $V$, presented as partially ordered partitions. Note that we write $a$ to mean the singleton block $\\{a\\}$, and $ac\bot$ to mean $\\{a,c,\bot\\}$, etc.

Figure 2. $\operatorname{LoCI}(V)$: (a) the domain $V$; (b) the complete preorders over $V$.

Let us now consider two continuous functions whose ordered kernels are presented here, $f_{1},f_{2}\in[V\rightarrow V]$ where: $f_{1}=\lambda x.a$ and $f_{2}=\lambda x.\left\\{\begin{array}[]{cl}a&\mbox{ if }x=a\\\ c&\mbox{ if }x\in\\{b,c\\}\\\ \bot&\mbox{ if }x=\bot\end{array}\right.$

Since $f_{1}$ is a constant function it conveys no information about its input, so its ordered kernel is the least element, N ($=\mathrm{All}$). The ordered kernel of $f_{2}$ is D: when the input is $a$, the observer learns this exactly; when the input is $b$ or $c$, the observer learns only that the input belongs to $\\{a,b,c\\}$; when the input is $\bot$, the observer (inevitably) learns nothing at all. Thus, in the $\operatorname{LoCI}$ ordering, $f_{2}$ is strictly more informative than $f_{1}$. It is interesting to note, by contrast, that in the Scott-ordering on functions, $f_{1}$ is _maximal_, and strictly more defined than $f_{2}$ (recall that $f\sqsubseteq g$ in the Scott-order iff $f(x)\sqsubseteq g(x)$ for all $x$). In general, the Scott-ordering between functions tells us little or nothing about their relative capacity to convey information about their inputs. This can be viewed as an instance of the _refinement problem_ known from secure information flow (McLean, 1994), where a point in a domain can be viewed as its upper set (all its possible “futures”) and a higher point is then a refinement (a smaller set of futures).

### 3.6.
Information Flow Properties in LoCI We can directly use the notation $f:P\Rightarrow Q$ introduced earlier to express information flow properties for $P$ and $Q$ in $\operatorname{LoCI}$. Since the ordering on relations in $\operatorname{LoCI}$ is still reversed set containment, both the “subtyping” and composition properties stated previously (Fact 1) hold equally well for $\operatorname{LoCI}$ as for $\operatorname{LoI}$. And, as promised, we also have weakest precondition and strongest postcondition properties, provided by appropriate versions of $f^{\ast}$ and $f_{!}$ for continuous $f$ and complete preorders: ###### Definition 0. For $f\in[A\to B]$: 1. (1) $f^{\ast}:\operatorname{LoCI}(B)\to\operatorname{LoCI}(A)$ is the restriction of the generalised kernel map to $\operatorname{LoCI}(B)$. 2. (2) $f_{!}:\operatorname{LoCI}(A)\to\operatorname{LoCI}(B)$ is given by ${f_{!}(P)}={\textstyle{\bigsqcup_{\text{\tiny LoCI}}}\\{Q\in\operatorname{LoCI}\mid f:P\Rightarrow Q\\}}$. (Well-definedness of $f^{\ast}:\operatorname{LoCI}(B)\to\operatorname{LoCI}(A)$ is slightly less immediate than for the $\operatorname{LoI}$ variant, but the key requirement is to show that $f^{\ast}(Q)$ is complete and this follows easily using continuity of $f$.) The $\operatorname{LoCI}$ analogue of Proposition 8 is then: ###### Proposition 0. For any $f\in[A\to B]$, $f^{\ast}$ and $f_{!}$ are monotone and, for any $P\in\operatorname{LoCI}(A)$ and $Q\in\operatorname{LoCI}(B)$, the following are all equivalent: _(1)_ ${f:P\Rightarrow Q}$ _(2)_ ${{f^{\ast}(Q)}\mathrel{\sqsubseteq_{\text{\tiny LoCI}}}P}$ _(3)_ ${Q\mathrel{\sqsubseteq_{\text{\tiny LoCI}}}{f_{!}(P)}}$ ### 3.7. A Category of Computable Information Some of the definitions and properties introduced earlier can be recast in category-theoretic terms through the framework of _Grothendieck fibrations_. In this subsection, we briefly sketch the relevant connections. The subsection is intended as an outline for interested readers rather than a definitive category-theoretic treatment of $\operatorname{LoCI}$ – which is beyond the scope of this paper. The remainder of the paper does not depend on any of the ideas discussed in this subsection, but some notational choices and technical developments are inspired by it. So far we have treated posets $A$, $B$ and continuous functions $f:A\to B$ as a semantic framework, in which we have studied, separately, the information associated with individual domains $A$ via $\operatorname{LoCI}(A)$, and the flow of information over a channel $f$ via $f:-\Rightarrow-$. An alternative approach is to combine the information represented by a preorder $P\in\operatorname{LoCI}(A)$ and its underlying poset $A$ into a single mathematical structure, and to study the overall properties of such _information domains_. ###### Definition 0. An _information domain_ is a pair $(A,P)$ consisting of a poset $A$ and a complete preorder $P\in\operatorname{LoCI}(A)$. An _information-sensitive function_ between information domains $(A,P)$ and $(B,Q)$ is a continuous function $f:A\to B$, such that $f:P\Rightarrow Q$. Information domains and information-sensitive functions form the _category of computable information_ $\mathbf{CoCI}$. Identities and composition are defined via the underlying continuous maps; composition preserves information- sensitivity by Fact 1 (Comp). 
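To make the arrow condition of $\mathbf{CoCI}$ concrete, here is a small finite-carrier sketch (ours, with invented identifiers; completeness of the preorder component is not checked): an information domain packages a carrier with its order and an information preorder, and a map is an arrow when it is both monotone and information-sensitive.

```haskell
-- A finite sketch of information domains and information-sensitive maps,
-- with the poset ordering and the information preorder both given as
-- Boolean-valued relations over an explicit carrier list.
data InfoDom a = InfoDom
  { carrier :: [a]
  , order   :: a -> a -> Bool   -- the poset ordering on the carrier
  , info    :: a -> a -> Bool   -- a (complete) preorder P in LoCI
  }

-- f is an arrow (A,P) -> (B,Q) of CoCI when it is monotone and satisfies f : P => Q.
isArrow :: InfoDom a -> InfoDom b -> (a -> b) -> Bool
isArrow a b f =
     and [ order b (f x) (f y) | x <- carrier a, y <- carrier a, order a x y ]
  && and [ info  b (f x) (f y) | x <- carrier a, y <- carrier a, info  a x y ]
```

Composition of arrows is then just composition of the underlying functions, matching the definition above.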
The category $\mathbf{CoCI}$ and the family of lattices $\operatorname{LoCI}(A)$ are related by a _fibration_ or, to use the terminology coined by Melliès and Zeilberger (2015), by a _type refinement system_. Intuitively, we may think of a poset $A$ as a _type_ , and of an information domain $(A,P)$ as a _refinement_ of $A$. For each type $A$, there is a subcategory of $\mathbf{CoCI}$, called the _fibre over $A$_, whose objects are the refinements of $A$, and which is equivalent to $\operatorname{LoCI}(A)$. Formally, there is a forgetful functor $U$ from $\mathbf{CoCI}$ to the category $\mathbf{PC}$ of posets and continuous functions that maps refinements to their types $U(A,P)=A$ and information-sensitive functions to the underlying continuous maps $U(f)=f$. The fibre $\mathbf{CoCI}_{A}$ over $A$ is the “inverse image” of $A$ under $U$, i.e. the subcategory of $\mathbf{CoCI}$ with objects of the form $(A,P)$ and arrows of the form $\operatorname{id}_{A}:(A,P)\to(A,Q)$, where $P,Q\in\operatorname{LoCI}(A)$. Note that the objects of $\mathbf{CoCI}_{A}$ are uniquely determined by their second component, and that there is an arrow between the pair of objects $(A,P)$ and $(A,Q)$ iff $P\mathrel{\sqsupseteq_{\text{\tiny LoCI}}}Q$. In other words, the category $\mathbf{CoCI}_{A}$ is equivalent to the dual lattice of $\operatorname{LoCI}(A)$, thought of as a complete (and co- complete) posetal category. In line with the terminology of Melliès and Zeilberger, we may call $\mathbf{CoCI}_{A}$ the _subtyping_ lattice over $A$. Furthermore, the functor $U$ is a _bifibration_. Intuitively, this ensures that we can reindex refinements along continuous maps. The formal definition of a bifibration is somewhat involved (see e.g. Melliès and Zeilberger, 2015), but it can be shown, in our setting, to correspond to the existence of weakest preconditions and strongest postconditions as characterised in Proposition 11, plus the following identities $\displaystyle\operatorname{id}_{A}^{\ast}$ $\displaystyle=\operatorname{id}_{\operatorname{LoCI}(A)}$ $\displaystyle(g\circ f)^{\ast}$ $\displaystyle=f^{\ast}\circ g^{\ast}$ $\displaystyle(\operatorname{id}_{A})_{!}$ $\displaystyle=\operatorname{id}_{\operatorname{LoCI}(A)}$ $\displaystyle(g\circ f)_{!}$ $\displaystyle=g_{!}\circ f_{!}$ which are easy to prove. For the last one, rather than showing ${(g\circ f)_{!}}={g_{!}\circ f_{!}}$ directly – which is awkward – it is simpler to show ${(g\circ f)^{\ast}}={f^{\ast}\circ g^{\ast}}$ first, and then use the fact that each $(h^{\ast},h_{!})$ is an adjoint pair. The _cartesian_ and _opcartesian liftings_ of $f:A\to B$ to $(B,Q)$ and $(A,P)$ are then given by $f:(A,f^{\ast}(Q))\to(B,Q)$ and $f:(A,P)\to(B,f_{!}(P))$, respectively. Using the reindexing maps $f^{\ast}$ and $f_{!}$, we can extend the poset- indexed set $\\{\mathbf{CoCI}_{A}\\}_{A\in\mathbf{PC}}$ of fibres over $A$ into a _poset-indexed category_ , that is, a contravariant functor $F:\mathbf{PC}^{\operatorname{op}}\to\mathbf{Cat}$, that maps posets $A$ to fibres $F(A)=\mathbf{CoCI}_{A}$ and whose action on continuous maps $f:A\to B$ is given by $\displaystyle F(f)$ $\displaystyle:\mathbf{CoCI}_{B}\to\mathbf{CoCI}_{A}$ $\displaystyle F(f)$ $\displaystyle(Q)=f^{\ast}(Q)$ Replacing the reindexing map $f^{\ast}$ with $f_{!}$, we obtain a similar, covariant functor $G:\mathbf{PC}\to\mathbf{Cat}$.444The existence of $F$ and $G$ is in fact sufficient to establish that $U$ is a bifibration. 
The family of lattices $\operatorname{LoCI}(A)$ and the category $\mathbf{CoCI}$ fully determine each other: we may obtain $\operatorname{LoCI}(A)$ as the fibres of $\mathbf{CoCI}$ via $U$, and conversely, we may reconstruct the category $\mathbf{CoCI}$ from the indexed category $F$ via the Grothendieck construction $\mathbf{CoCI}=\int F$. Finally, note that the above can also be adapted to the simpler setting of $\operatorname{LoI}$. In that case, types are simply sets, and refinements are _setoids_ , i.e. pairs $(S,R)$ consisting of a set $S$ and an equivalence relation $R\in\operatorname{LoI}(S)$. The relevant fibration is the obvious forgetful functor $U:\mathbf{Setoid}\to\mathbf{Set}$ from the category of setoids and equivalence-preserving maps to the underlying sets and total functions. ### 3.8. A Partial Embedding of LoI into LoCI As discussed earlier, a key advantage of $\operatorname{LoCI}$ in comparison to $\operatorname{LoI}$ is that it distinguishes between functions which have the same (unordered) kernel but which differ fundamentally in what information they actually make available to an output observer, due to different degrees of partiality. But there is another advantage of $\operatorname{LoCI}$: it excludes “uncomputable” kernels, those equivalence relations in $\operatorname{LoI}$ which are not the kernel of _any_ continuous function. Consider the example of $\operatorname{LoCI}(V)$ in Fig. 2(b). Since $V$ has four elements, there are 15 distinct equivalence relations in $\operatorname{LoI}(V)$. Note, however, that $\operatorname{LoCI}(V)$ has only 14 elements. Clearly then there must be at least one equivalence relation which is being excluded by $\operatorname{LoCI}(V)$ (in fact, five elements of $\operatorname{LoI}(V)$ are excluded). Let us settle on some terminology for this: ###### Definition 0. Let $A$ be a poset. Let $R$ be an equivalence relation on $A$ and let $Q$ be a complete preorder on $A$. Say that $Q$ _realises_ $R$ if $R$ is the underlying equivalence relation of $Q$. When such $Q$ exists for a given $R$, we say that $R$ is _realisable_. Note that, by Proposition 3, the underlying equivalence relation of ${\mathrel{\ker_{\sqsubseteq}(f)}}$ is ${\ker(f)}$, so by Theorem 6 it is equivalent to say that $R$ is realisable iff $R$ is the kernel of some continuous function. In $\operatorname{LoCI}(V)$, note that A, B and C all realise the identity relation. Similarly, F, G and J all realise the same equivalence relation as each other. Thus, while $\operatorname{LoCI}(V)$ has 14 elements, together they realise only 10 of the 15 possible equivalence relations over $V$. As an example of a missing equivalence relation, consider the one with equivalence classes $\\{a,b,\bot\\}$, $\\{c\\}$. Recall that a subset $X$ of a poset is _convex_ iff, whenever $x\sqsubseteq y\sqsubseteq z$ and $x,z\in X$, then $y\in X$. Note that $\\{a,b,\bot\\}$ is not convex, but it is easy to see that all equivalence classes in the kernel of a monotone function _must_ be convex. (The convexity test also fails for the four other missing equivalence relations. But convexity alone is not sufficient for realisability, even in the finite case. See §3 below.) When an equivalence relation $S$ _is_ realisable, we can show that there must be a _greatest_ element of LoCI which realises it. Moreover, we can use this realiser to re-express an $\operatorname{LoI}$ property $f:R\Rightarrow S$ as an equivalent $\operatorname{LoCI}$ property. 
To this end, we define a pair of monotone maps which allow us to move back and forth between $\operatorname{LoI}$ and $\operatorname{LoCI}$: ###### Definition 0. For poset $A$ define $\mathrm{Cp}_{A}:\operatorname{LoI}(A)\to\operatorname{LoCI}(A)$ and $\mathrm{Er}_{A}:\operatorname{LoCI}(A)\to\operatorname{LoI}(A)$ by: 1. (1) $\mathrm{Cp}_{A}(R)=\textstyle{\bigsqcup_{\text{\tiny LoCI}}}\\{P\in\operatorname{LoCI}(A)\mid P\supseteq R\\}=\bigcap\\{P\in\operatorname{LoCI}(A)\mid P\supseteq R\\}$ 2. (2) $\mathrm{Er}_{A}(P)$ is the underlying equivalence relation of $P$: $\mathrm{Er}_{A}(P)=P\cap{P^{-1}}$ It is easy to see that both maps are monotone. We will routinely omit the subscripts on $\mathrm{Cp}$ and $\mathrm{Er}$ in contexts where the intended domain is clear. Note that, by definition, $R\in\operatorname{LoI}(A)$ is realisable iff there exists some $P\in\operatorname{LoCI}(A)$ such that $\mathrm{Er}(P)=R$. Now, $\mathrm{Cp}(R)$ is defined above to be the greatest $P\in\operatorname{LoCI}(A)$ such that $P\supseteq R$. But observe that $\mathrm{Er}(P)\mathrel{\sqsubseteq_{\text{\tiny LoCI}}}R$ iff $\mathrm{Er}(P)\supseteq R$, and $\mathrm{Er}(P)=P\cap{P^{-1}}\supseteq R$ iff $P\supseteq R$, since $R$ is symmetric. So we have actually defined $\mathrm{Cp}(R)$ to be the greatest $P\in\operatorname{LoCI}(A)$ such that $\mathrm{Er}(P)\mathrel{\sqsubseteq_{\text{\tiny LoCI}}}R$. The following propositions are immediate consequences: ###### Proposition 0. $R$ is realisable iff $\mathrm{Er}(\mathrm{Cp}(R))=R$ (in which case $\mathrm{Cp}(R)$ is its greatest realiser). ###### Proposition 0. The pair $(\mathrm{Er}_{A},\mathrm{Cp}_{A})$ forms a _Galois connection_ between $\operatorname{LoCI}$ (A) and $\operatorname{LoI}$ (A). That is to say for every $P\in\operatorname{LoCI}(A)$ and every $R\in\operatorname{LoI}(A)$: (GC) ${\mathrm{Er}(P)\mathrel{\sqsubseteq_{\text{\tiny$\operatorname{LoI}$}}}R}\mathrel{\mathrm{\ iff\ }}{P\mathrel{\sqsubseteq_{\text{\tiny LoCI}}}\mathrm{Cp}(R)}$ (See (Erné et al., 1993) for an introduction to Galois connections.) This extends to an encoding of $\operatorname{LoI}$ properties as $\operatorname{LoCI}$ properties: ###### Theorem 17. For all $f\in[A\to B]$, for all $R\in LoI(A)$, for all $Q\in\operatorname{LoCI}(B)$: $f:R\Rightarrow\mathrm{Er}(Q)\mathrel{\mathrm{\ iff\ }}f:\mathrm{Cp}(R)\Rightarrow Q$ ###### Proof. By Propositions 8 and 11, it suffices to show $f^{\ast}(\mathrm{Er}(Q))\mathrel{\sqsubseteq_{\text{\tiny$\operatorname{LoI}$}}}R$ iff $f^{\ast}(Q)\mathrel{\sqsubseteq_{\text{\tiny LoCI}}}\mathrm{Cp}(R)$. First we note that the following holds by an easy unwinding of the definitions: ($\ast$) $f^{\ast}(\mathrm{Er}_{B}(Q))=\mathrm{Er}_{A}(f^{\ast}(Q))$ Then we have: $f^{\ast}(\mathrm{Er}(Q))\mathrel{\sqsubseteq_{\text{\tiny$\operatorname{LoI}$}}}R\mathrel{\mathrm{\ iff\ }}\mathrm{Er}(f^{\ast}(Q))\mathrel{\sqsubseteq_{\text{\tiny$\operatorname{LoI}$}}}R\mathrel{\mathrm{\ iff\ }}f^{\ast}(Q)\mathrel{\sqsubseteq_{\text{\tiny LoCI}}}\mathrm{Cp}(R)$ where the first equivalence holds by ($\ast$ ‣ 17) and the second by (GC). ∎ ###### Corollary 0. If $S$ is realisable then $f:R\Rightarrow S\mathrel{\mathrm{\ iff\ }}f:\mathrm{Cp}(R)\Rightarrow\mathrm{Cp}(S)$. ###### Proof. By Proposition 15, $S$ is realisable iff $S=\mathrm{Er}(\mathrm{Cp}(S))$, so let $Q=\mathrm{Cp}(S)$ in the theorem. ∎ It is interesting to note that Corollary 18 does not require $R$ to be realisable. However, in general, the equivalence does not hold unless $S$ is realisable. 
For a counterexample, consider the three-point lattice $A=\\{0,1,2\\}$ with $0\sqsubset 1\sqsubset 2$, and let $S$ be the equivalence relation with equivalence classes $\\{0,2\\}$ and $\\{1\\}$. The first of these classes is not convex, so $S$ is not realisable. Now consider the property $f:\mathrm{All}\Rightarrow S$. For continuous $f:A\to B$, it is easy to see that this property holds iff $f$ is constant. However, $\mathrm{Cp}(S)=\mathrm{Cp}(\mathrm{All})=\mathrm{All}$ and $f:\mathrm{All}\Rightarrow\mathrm{All}$ holds trivially. (Of course, this is no great loss if the property of interest is actually constancy. The appropriate choice of $S$ in this case is $S=\mathrm{Id}$, which is realisable.) #### 3.8.1. Verifying Realisability Figure 3. An Unrealisable Equivalence Relation We describe a simple necessary condition for realisability, which is also sufficient in the finite case. It is motivated by the following example. Let $R$ be the equivalence relation shown in Fig. 3. The three equivalence classes are clearly convex, but $R$ is not realisable. To see why, suppose that $R$ is the kernel of $f$. There must be distinct elements $x$ and $y$ such that $f(\\{a,b^{\prime}\\})=\\{x\\}$ and $f(\\{b,a^{\prime}\\})=\\{y\\}$. If $f$ is monotone then, since $a\sqsubseteq a^{\prime}$ and $b\sqsubseteq b^{\prime}$, it must be the case that $x\sqsubseteq y\sqsubseteq x$, which contradicts the assumption that $x$ and $y$ are distinct. The example of Fig. 3 generalises quite directly. Given any equivalence relation $R$ on a poset $A$, define $\phi$ as the relation on ${[}R{]}$ which relates two equivalence classes whenever they contain $(\sqsubseteq)$-related elements: ${{[}a{]}_{R}\mathrel{\phi}{[}b{]}_{R}}\mathrel{\mathrm{\ iff\ }}\exists x\in{[}a{]}_{R}.\exists y\in{[}b{]}_{R}.x\sqsubseteq y$. In Fig. 3, unrealisability manifests as a non-trivial cycle in the graph of $\phi$, that is, a sequence ${[}a_{1}{]}_{R}\mathrel{\phi}\cdots\mathrel{\phi}{[}a_{n}{]}_{R}\mathrel{\phi}{[}a_{1}{]}_{R}$ with $n>1$ and such that all ${[}a_{i}{]}_{R}$ are distinct. By the obvious inductive generalisation of the above argument, any monotone $f$ necessarily maps all $a_{i}$ to the same value, thus making $\ker(f)=R$ impossible. So if the graph of $\phi$ contains a non-trivial cycle, $R$ is not realisable. (Note also that this generalises the convexity condition: if any ${[}a{]}_{R}$ is non-convex, there will be a non-trivial cycle with $n=2$.) Conversely, to say that $\phi$ is free of such cycles is just to say that the transitive closure $\phi^{+}$ is antisymmetric. Clearly, $\phi^{+}$ is also reflexive and transitive, thus $B=({[}R{]},\phi^{+})$ is a poset. Let $f:A\to B$ be the map $a\mapsto{[}a{]}_{R}$. Then $f$ is monotone (because ${x\sqsubseteq y}$ implies ${{[}x{]}_{R}\mathrel{\phi}{[}y{]}_{R}}$) and $\ker(f)=R$. In the case that $\sqsubseteq_{A}$ is of finite height, this establishes that $R$ is realisable. ### 3.9. Post Processing In §2 we introduced three equivalent ways of ordering functions, the first based on inclusion of their kernels ($\precsim$), the second in terms of their inter-definability via postprocessing (Proposition 2), and the third in terms of their knowledge sets (Proposition 4). Moving to a setting of posets and continuous functions, we have presented direct analogues of the first of these in terms of ordered kernels ($\mathrel{\precsim_{\text{\tiny LoCI}}}$), and of the third in terms of ordered knowledge sets (Proposition 9). 
However, it turns out that there is no direct analogue of the postprocessing correspondence. To see why, we consider two counterexamples, each built from a pair of functions, which illustrate two essentially different ways in which the postprocessing correspondence fails for $\operatorname{LoCI}$. #### Counterexample 1: Non-Existence of a Monotone Postprocessor Consider a test `isEven1` on natural numbers which simply returns True or False. This can be modelled in the obvious way by a function $\mathrm{isEven1}\in[N\rightarrow\mathrm{Bool}_{\bot}]$, where $N$ is the unordered set of natural numbers and $\mathrm{Bool}_{\bot}$ is the lifted domain of Booleans in Fig. 4(a).

Figure 4. Postprocessing Counterexamples: (a) Codomains; (b) Kernels.

Now consider the following Haskell-style function definition:

    isEven x = if even x then ((), spin) else (spin, ())
      where spin = spin

Tuples in Haskell are both lazy and lifted, so this can be modelled by a function $\mathrm{isEven2}\in[N\rightarrow D]$, where $D$ is the lifted diamond domain in Fig. 4(a). (Haskell has no primitive type of natural numbers, only integers, but for the sake of the example let us assume that the program operates over naturals.) Both these functions have the same kernel (ordered and unordered): it simply partitions $N$ into the sets of even and odd numbers. So $(\mathrm{isEven2}\mathrel{\precsim_{\text{\tiny LoCI}}}\mathrm{isEven1})$ and $(\mathrm{isEven1}\mathrel{\precsim_{\text{\tiny LoCI}}}\mathrm{isEven2})$. We can certainly obtain `isEven2` from `isEven1` by postprocessing: map $\bot$ to $\bot$, map $T$ to $(\ast,\bot)$, and map $F$ to $(\bot,\ast)$. However, there is no continuous postprocessor $p\in[D\rightarrow\mathrm{Bool}_{\bot}]$ such that $\mathrm{isEven1}=p\circ\mathrm{isEven2}$. The problem is that any such $p$ must map $(\ast,\bot)$ to $T$ and $(\bot,\ast)$ to $F$. But then, since $(\ast,\ast)$ is greater than both $(\ast,\bot)$ and $(\bot,\ast)$, $p$ must map $(\ast,\ast)$ to a value greater than both $T$ and $F$, and no such value exists. Note, however, that $(\ast,\ast)$ is not actually in the _range_ of `isEven2`. If $p$ were not required to be monotone, the problem would therefore be easily resolved, since $p$ could arbitrarily map $(\ast,\ast)$ to either $T$ or $F$ (or even to $\bot$). Unfortunately, such a $p$ would not actually be computable. Nonetheless, it is clear that it is indeed _computationally feasible_ to learn exactly the same information from the output of the two functions. For example, we may poll the two elements in the output of `isEven2` in alternation, until one becomes defined; as soon as this happens we will know the parity of the input. This behaviour is clearly implementable in principle, even though it does not define a monotone function in $D\rightarrow\mathrm{Bool}_{\bot}$. (Of course, we cannot implement this behaviour in sequential Haskell, but this is just a limitation of the language.) Conceivably, a slightly more liberal postprocessing condition could be designed to accommodate this and similar counterexamples (allowing postprocessors to be partial, for example). #### Counterexample 2: Non-Existence of a Continuous Postprocessor Consider these two programs:

    S1: if (x == 0) while True output();
        for i := 1 to x - 1 { output () };
        while True { }

    S2: if (x == 0) while True output();
        while True { }

Both programs take a natural number $x$ and produce a partial or infinite stream of units. They can be modelled by functions $S_{1},S_{2}\in[N\rightarrow\Omega]$, where $\Omega$ is the poset illustrated in Fig.
4(a). (In the picture for $\Omega$ we represent each partial stream of units by its length; the limit point $\omega$ represents the infinite stream.) When $x=0$, both programs produce an infinite stream. When $x>0$, S1 produces a stream of length $x-1$, and then diverges; S2 simply diverges immediately. As illustrated in Fig. 4(b), the ordered kernel for $S_{1}$ is isomorphic to $\Omega$, while the ordered kernel for $S_{2}$ is a two-point lattice. Clearly, $S_{2}\mathrel{\precsim_{\text{\tiny LoCI}}}S_{1}$. But there is no continuous $p\in[\Omega\rightarrow\Omega]$ such that $S_{2}={p\circ S_{1}}$. The problem in this case is that $p$ would have to send all the finite elements of $\Omega$ to the bottom point $0$, while sending the limit point $\omega$ to a different value. The key thing to note here is that, although ${[}{\mathrel{\ker_{\sqsubseteq}(S_{1})}}{]}$ contains $\\{0\\}$ as a maximal element (it is the inverse image under S1 of the infinite output stream) an observer of S1 will never actually learn that $x=0$ in finite time. With each observed output event, the observer rules out one more possible value for $x$, but there will always be infinitely many possible values remaining. After observing $n$ output events, the observer knows only that ${x=0}\vee{x>n}$. By contrast, an observer of S2 learns that $x=0$ as soon as the first output event is observed. (On the other hand, when $x>0$, an S2 observer learns nothing at all.) Perhaps the best we can claim is that the $\operatorname{LoCI}$ model is conservative, in the sense that it faithfully captures what an observer will learn “in the limit”. But, as S1 illustrates, sometimes the limit never comes. ## 4\. Termination-Insensitive Properties In this section we turn to the question of how $\operatorname{LoCI}$ can help us to formulate the first general definition of a class of weakened information-flow properties known as _termination-insensitive_ properties (or sometimes, _progress-insensitive_ properties). ### 4.1. What is Termination-Insensitivity? We quote Askarov et al. (2008): > Current tools for analysing information flow in programs build upon ideas > going back to Denning’s work from the 70’s $\langle$(Denning and Denning, > 1977)$\rangle$. These systems enforce an imperfect notion of information > flow which has become known as termination-insensitive noninterference. > Under this version of noninterference, information leaks are permitted if > they are transmitted purely by the program’s termination behaviour (i.e. > whether it terminates or not). This imperfection is the price to pay for > having a security condition which is relatively liberal (e.g. allowing > while-loops whose termination may depend on the value of a secret) and easy > to check. The term _noninterference_ in the language-based security literature refers to a class of information flow properties built around a lattice of security labels (otherwise known as _security clearance levels_) (Denning, 1976), in the simplest case two labels, $H$ (the label for secrets) and $L$ (the label for non-secrets), together with a “may flow” partial order $\prec$, where in the simple case $L\prec H$, expressing that public data may flow to (be combined with) secrets. On the semantic side, for each label $k$ there is a notion of _indistinguishability_ between inputs and, respectively, outputs – equivalence relations which determines whether an observer at level $k$ can see the difference between two different elements. 
These relations must agree with the flow relation in the sense that whenever $j\prec k$ then indistinguishability at level $k$ implies indistinguishability at level $j$. Indistinguishability relations are either given directly, or can be constructed as the kernel of some projection function which extracts the data of classification at most $k$. Thus “ideal” noninterference for a program denotation $f$ can be stated in terms of the lattice of information as a conjunction of properties of the form $f:P_{k}\Rightarrow Q_{k}$, expressing that an output observer at level $k$ learns no more than the level-$k$ input. Without focusing on security policies in particular, we will show how to take any property of the form $f:P\Rightarrow Q$ and weaken it to a property which allows for termination leaks. The key to this is to use the preorder refinement of $Q$ to get a handle on exactly what leaks to allow. The case when $P$ and $Q$ are used to model security levels will just be a specific instantiation. But even for this instantiation we present a new generalisation of the notion of termination sensitivity beyond the two special cases that have been studied in the literature, namely (i) the “batch-job” case when programs either fail to terminate or deliver a result, and (ii) the case when programs output a stream of values. In the recent literature the term _progress-insensitivity_ has been used to describe the latter case, but in this section we will not distinguish these concepts – they are equally problematic for a Denning-style program analysis. Case (i) we will henceforth refer to as _simple termination-insensitive noninterference_; it is relevant when the result domain of a computation is a flat domain. As a simple example of case (i) consider the programs $A$ = `while (h>0) { }` and $B$ = `while (h>0) {h := h-1}`. Assume that $h$ is a secret. Standard information flow analyses notice that the loop condition in each case references variable $h$, but since typical analyses do not have the ability to analyse termination properties of loops, they must conservatively assume that information about $h$ leaks in both cases (when in fact it only leaks for program $A$). This prevents us from verifying the security of any loops depending on secrets. However, a termination-insensitive analysis ignores leaks through termination behaviour and thus both $A$ and $B$ are permitted by termination-insensitive noninterference: such an analysis is more permissive because it allows loops depending on secrets (such as $B$), but less secure because it also allows the leaky program $A$ (which terminates only when $h\leq 0$). Case (ii), progress-insensitivity, is the same issue but for programs producing streams. Consider here two programs which never terminate (thanks to $D$ = `while True { }`): $A^{\prime}$ = `output(1)`; $A$; `output(1)`; $D$ versus $B^{\prime}$ = `output(1)`; $B$; `output(1)`; $D$. Here $B^{\prime}$ is noninterfering while $A^{\prime}$ is not, yet both are permitted by the termination-insensitive condition (aka progress-insensitivity) for stream output defined in e.g. (Askarov et al., 2008). The point of this example is to illustrate that the carrier of the information leak need not be the simple “does it terminate or not” observation; the _cause_ of the leak is nevertheless the same. The definition in (Askarov et al., 2008) is ad hoc in that it is specific to the particular model of computation.
If the computation model is changed (for example, if there are parallel output streams, or if there is a value delivered on termination) then the definition has to be rebuilt from scratch, and there is no general recipe to do this. ### 4.2. Detour: Termination-Insensitivity in the Lattice of Information Before we get to our definition, it is worth considering how termination-insensitive properties might be encoded in the lattice of information directly. The question is how to take an arbitrary property of the form $P\Rightarrow Q$ and weaken it to a termination-insensitive variant $P^{\prime}\Rightarrow Q^{\prime}$. We are not aware of a general approach to this in the literature. In this section we look at a promising approach which works for some specific and interesting choices of $P$ and $Q$, but which we failed to generalise. We will later prove that it cannot be generalised in a way which matches the definition we provide in §4.3. So how might one weaken a property of the form $P\Rightarrow Q$ to allow termination leaks? It is tempting to try to encode this by weakening $Q$ (taking a more liberal relation) – and indeed that is what has been done in typical relational proofs of simple termination-insensitive noninterference, by breaking transitivity and allowing any value in the codomain to be indistinguishable from $\bot$. Our approach in §4.3 can be seen as a generalisation of this approach. But it is useful first to consider how far we can get while remaining within the realm of equivalence relations. Sterling and Harper, in a recent paper on the topic (Sterling and Harper, 2022), say (in relation to a specific work (Abadi et al., 1999) using a relational, semantic proof of noninterference):

> “A more significant and harder to resolve problem is the fact that the indistinguishability relation …cannot be construed as an equivalence relation”

While this seems to be true if we restrict ourselves to solving the problem by weakening $Q$, in fact it _is_ possible to express termination-insensitivity of types (i) and (ii) just using equivalence relations. The trick is not to weaken $Q$, but instead to strengthen $P$. The approach, which we briefly introduce here, is based on Bay and Askarov’s study of progress-insensitive noninterference (Bay and Askarov, 2020). Their idea is to characterise a hypothetical observer who _only_ learns through progress or termination behaviour. In the specific case of (Bay and Askarov, 2020) it is a “progress observer” who sees the length of the output stream, but not the values within it. Let us illustrate this idea in the more basic context of simple termination-insensitive properties. Suppose we want to define a simple termination-insensitive variant of a property of the form $f:\ P\Rightarrow Q$ for some function $f\in[D\rightarrow V_{\bot}]$ where $V$ is a flat set of values. We characterise the termination observer by the relation $T=\\{(\bot,\bot)\\}\cup\\{(\mathrm{lift}(u),\mathrm{lift}(v))\mid u\in V,v\in V\\}$. The key idea is that we modify the property $f:\ P\Rightarrow Q$ not by weakening the observation $Q$, but by strengthening the prior knowledge $P$. We need to express that by observing $Q$ you learn nothing more than $P$ plus whatever you can learn from termination; here “plus” means least upper bound, and “what you learn from termination” is expressed as the generalised kernel of $f$ with respect to $T$, namely $f^{\ast}(T)$.
Thus the simple termination-insensitive weakening of $f:\ P\Rightarrow Q$ is $f:\ P\mathrel{\sqcup_{\text{\tiny$\operatorname{LoI}$}}}{f^{\ast}(T)}\Rightarrow Q.$ The general idea could then be, for each codomain, to define a suitable termination observer $T$. Bay and Askarov did this for the domain of streams to obtain “progress-insensitive” noninterference. We see two reasons to tackle this differently: 1. (1) Reasoning explicitly about $P\mathrel{\sqcup_{\text{\tiny$\operatorname{LoI}$}}}{f^{\ast}(T)}$ is potentially cumbersome, especially since we don’t care _what_ is leaked in a termination-insensitive property. 2. (2) Finding a suitable $T$ that works as intended over an arbitrary domain is not only non-obvious, but, we suspect, not possible in general. In §4.4 we return to point (2) to show that it is not possible to find a definition of $T$ which matches the generalised termination-insensitivity which we now introduce. ### 4.3. Using LoCI to Define Generalised Termination-Insensitivity Here we provide a general method for systematically weakening an $\operatorname{LoI}$ property $f:R\Rightarrow S$ to a termination-insensitive counterpart (we assume $S$ is realisable). The first step is to encode $f:{R\Rightarrow S}$ as the $\operatorname{LoCI}$ property $f:P\Rightarrow Q$, where $P=\mathrm{Cp}(R)$ and $Q=\mathrm{Cp}(S)$, as allowed by Corollary 18. Preorder $Q$ has the same equivalence classes as $S$, but the classes themselves are minimally ordered to respect the domain order; it is precisely this ordering which gives us a handle on the weakening we need to make. As a starting point, consider how simple termination-insensitive noninterference is proven: one ignores distinctions that the observer might make between nontermination and termination. In a relational presentation (e.g. (Abadi et al., 1999)) this is achieved by simply relating bottom to everything (and vice-versa) and not requiring transitivity. What is the generalisation to richer domains (i.e. domains with more “height”)? The first natural attempt comes from the observation that, in a Scott-style semantics, operational differences in termination behaviour manifest denotationally as differences in definedness, i.e. as inequations with respect to the domain ordering. Towards a generalisation, let us start by assuming that $S$ is the identity, so preorder $Q={\mathrm{Cp}(\mathrm{Id})}$ is the top element of $\operatorname{LoCI}$, i.e. it is just the domain ordering. This corresponds to an observer who can “see” everything (but some observations are more definite than others). The obvious weakening of the property $f:P\Rightarrow(\sqsubseteq)$ is to symmetrise $(\sqsubseteq)$ thus: $\\{(d,e)\mid d\sqsubseteq e~{}\text{or}~{}e\sqsubseteq d\\}$ This is “the right thing” for some domains but not all. As an example of where it does _not_ do the right thing, consider the domain $\mathbbm{2}\times\mathbbm{2}$ where $\mathbbm{2}=\mathbbm{1}_{\bot}$, and $\mathbbm{1}=\\{{*}\\}$. This domain contains four elements in a diamond shape. Suppose that a value of this type is computed by two loops, one to produce the first element, and one to produce the second. A termination-insensitive analysis ignores the leaks from the termination of each loop, so our weakening of any desired relation on $\mathbbm{2}\times\mathbbm{2}$ must relate $(\bot,{*})$ and $({*},\bot)$ (and hence termination-insensitivity must inevitably leak all information about this domain). But what do $(\bot,{*})$ and $({*},\bot)$ have in common?
The answer is that they represent computations that might turn out to be the same, should they progress further, i.e. they have an upper bound with respect to the domain ordering. What about when the starting point is an arbitrary $Q\in\operatorname{LoCI}(D)$? The story here is essentially the same, but here we must think of the equivalence classes of $Q$ instead of individual elements, and the relation $Q$ instead of the domain ordering. ###### Definition 0 (Compatible extension). Given two elements $d,d^{\prime}\in D$, and a preorder $Q$ on $D$, we say that $d$ and $d^{\prime}$ are $Q$-_compatible_ if there exists an $e$ such that $d\mathrel{Q}e$ and $d^{\prime}\mathrel{Q}e$. Define $\widetilde{Q}$, the _compatible extension_ of $Q$, to be $\\{(d,d^{\prime})\mid\text{$d$ is $Q$-compatible with $d^{\prime}$}\\}$. For any preorder $Q$, compatible extension has the following evident properties: 1. (1) ${\widetilde{Q}}\supseteq{Q}$ (if $d\mathrel{Q}e$ then $e$ is a witness to the compatibility of $d$ and $e$, since $Q$ is reflexive). 2. (2) ${\widetilde{Q}}$ is reflexive and symmetric (but not, in general, transitive). A candidate general notion of termination-insensitive noninterference is then to use properties of the form $f:P\Rightarrow{\widetilde{Q}}$ where $P$ and $Q$ are complete preorders. This captures the essential idea outlined above, and passes at least one sanity check: $f:P\Rightarrow{\widetilde{Q}}$ is indeed a weaker property than $f:P\Rightarrow Q$ (simply because ${\widetilde{Q}}\supseteq Q$). However, a drawback of this choice is that it lacks a strong composition property. In general, ${f:P\Rightarrow{\widetilde{Q}}}\wedge{g:Q\Rightarrow{\widetilde{R}}}$ does _not_ imply that ${g\circ f}:P\Rightarrow{\widetilde{R}}$. For a counterexample, consider the following function $g\in[A\rightarrow A]$, where $A=\\{0,1,2\\}_{\bot}$: $g(\bot)=\bot$, $g(0)=0$, $g(1)=1$, and $g(2)=\bot$. Let $Q$ be the complete preorder whose underlying equivalence relation is the identity relation but which orders the elements of $A$ in a diamond shape, with $\bot$ at the bottom, $0$ and $1$ (incomparable) in the middle, and $2$ at the top. It is easily checked that $g:Q\Rightarrow{\widetilde{(\sqsubseteq)}}$. Now, since $Q$ has a top element, $\widetilde{Q}$ is just $\mathrm{All}$, so for _every_ $P$ and $f$ of appropriate type, it will hold that $f:P\Rightarrow{\widetilde{Q}}$. But it is _not_ true that ${g\circ f}:P\Rightarrow{\widetilde{(\sqsubseteq)}}$ holds for every $P$ and $f$ (take $P=\mathrm{All}$ and $f=\operatorname{id}$, for example). Clearly, the above counterexample is rather artificial. Indeed, it is hard to see how we might construct a program with denotation $g$ such that a termination-insensitive analysis could be expected to verify $g:Q\Rightarrow{\widetilde{(\sqsubseteq)}}$. Notice that $g$ not only fails to send $Q$-related inputs to $(\sqsubseteq)$-related outputs, it effectively ignores the ordering imposed by $Q$ entirely, in that it fails even to preserve $Q$-compatibility. This suggests a natural strengthening of our candidate notion. We define our generalisation of termination-insensitive noninterference over the lattice of computable information to be “preservation of compatibility”: ###### Definition 0 (Generalised Termination-Insensitivity). Let $f\in[D\rightarrow E]$ and let $P$ and $Q$ be elements of $\operatorname{LoCI}(D)$ and $\operatorname{LoCI}(E)$, respectively.
Define: $f:P\mathrel{{\Rightarrow}^{{\textsc{ti}}}}Q\quad\mathrel{\mathrm{\ iff\ }}\quad f:{\widetilde{P}}\Rightarrow{\widetilde{Q}}$ Crucially, although this is stronger than our initial candidate, it is still a weakening of ${\\_\Rightarrow\\_}$: ###### Lemma 0. Let $f\in[A\rightarrow B]$. Let $P$ and $Q$ be complete preorders on $A$ and $B$, respectively. Then $f:P\Rightarrow Q\quad\text{implies}\quad f:P\mathrel{{\Rightarrow}^{{\textsc{ti}}}}Q.$ ###### Proof. Assume $f:P\Rightarrow Q$ and suppose $x\mathrel{\widetilde{P}}y$. Since $x\mathrel{\widetilde{P}}y$, there is some $z$ such that $x\mathrel{P}z$ and $y\mathrel{P}z$. Since $f:P\Rightarrow Q$, we have ${f(x)}\mathrel{Q}{f(z)}$ and ${f(y)}\mathrel{Q}{f(z)}$, hence ${f(x)}\mathrel{\widetilde{Q}}{f(y)}$. ∎ Furthermore, Definition 2 gives us both compositionality and “subtyping”: ###### Proposition 0. The following inference rules are valid for all continuous functions and elements of $\operatorname{LoCI}$ of appropriate type: $\displaystyle\frac{\begin{array}[]{c}\;P^{\prime}\mathrel{\sqsupseteq_{\text{\tiny LoCI}}}P\;\;\;f:P\mathrel{{\Rightarrow}^{{\textsc{ti}}}}Q\;\;\;Q\mathrel{\sqsupseteq_{\text{\tiny LoCI}}}Q^{\prime}\end{array}}{\begin{array}[]{c}\;f:{P^{\prime}}\mathrel{{\Rightarrow}^{{\textsc{ti}}}}{Q^{\prime}}\end{array}}~{}\text{\emph{SubTI}}\hskip 30.00005pt\displaystyle\frac{\begin{array}[]{c}\;f:P\mathrel{{\Rightarrow}^{{\textsc{ti}}}}Q\;\;\;g:Q\mathrel{{\Rightarrow}^{{\textsc{ti}}}}R\end{array}}{\begin{array}[]{c}\;g\circ f:{P}\mathrel{{\Rightarrow}^{{\textsc{ti}}}}{R}\end{array}}~{}\text{\emph{CompTI}}$ ###### Proof. We rely on the general Sub and Comp rules (Fact 1). For SubTI, the premise for $f$ unpacks to $f:{\widetilde{P}}\Rightarrow{\widetilde{Q}}$ and the conclusion unpacks to $f:\widetilde{P^{\prime}}\Rightarrow\widetilde{Q^{\prime}}$. It suffices then to show that $P^{\prime}\mathrel{\sqsupseteq_{\text{\tiny LoCI}}}P$ implies ${\widetilde{P^{\prime}}}\subseteq{\widetilde{P}}$ (and similarly for $Q,Q^{\prime})$, since we can then apply the general Sub rule directly. So, suppose $P^{\prime}\mathrel{\sqsupseteq_{\text{\tiny LoCI}}}P$, hence $P^{\prime}\subseteq P$, and suppose $x\mathrel{\widetilde{P^{\prime}}}y$. Then, for some $z$, we have $x\mathrel{P^{\prime}}z$ and $y\mathrel{P^{\prime}}z$, thus $x\mathrel{P}z$ and $y\mathrel{P}z$, thus $x\mathrel{\widetilde{P}}y$, as required. For CompTI we observe that it is simply a specialisation of the general Comp rule, since the premises unpack to $f:{\widetilde{P}}\Rightarrow{\widetilde{Q}}$ and $g:{\widetilde{Q}}\Rightarrow{\widetilde{R}}$, while the conclusion unpacks to $g\circ f:\widetilde{P}\Rightarrow\widetilde{R}$. ∎ ### 4.4. Impossibility of a Knowledge-based Definition In this section we return to the question of whether there exists a knowledge- based characterisation which matches our definition of termination- insensitivity, and show why this cannot be the case. Suppose we start with an “ideal” property of the form $f:P\Rightarrow S$, where $S$ is assumed to be realisable (by $\mathrm{Cp}(S)$), and (for simplicity but without loss of generality) $P$ is over a discrete domain (so ${\mathrm{Cp}(P)}=P$). 
The question, which we will answer in the negative, is whether we can construct a “termination observer” $T$ from the structure of the codomain of $f$ such that $f:\ P\mathrel{\sqcup_{\text{\tiny$\operatorname{LoI}$}}}{f^{\ast}(T)}\Rightarrow S\mathrel{\mathrm{\ iff\ }}f:\ P\mathrel{{\Rightarrow}^{{\textsc{ti}}}}{\mathrm{Cp}(S)}$ We build a counterexample based on the following Haskell code:

    data Kite = Body () () | Tail

    spin = spin

    f h = if h then Body () spin else Body spin ()
    g h = if h then Body () spin else Tail

We will use some security intuitions to present the example (since that is the primary context in which termination-insensitivity is discussed). Suppose that we view the input to `f` and `g` as either `True` or `False`, and that this is a secret. We are being sloppy here and ignoring the fact that the input domain is lifted, but that has no consequence on the following.

Figure 5. Domain representing Kite

Now consider the output to be public, and the question is whether `f` and `g` satisfy termination-insensitive noninterference. Standard noninterference in this case would be the property $(\\_:\mathrm{All}\Rightarrow\mathrm{Id})$. Our definition of termination-insensitive noninterference is thus $(\\_:\mathrm{All}\mathrel{{\Rightarrow}^{{\textsc{ti}}}}(\sqsubseteq))$ where $\sqsubseteq$ here is the ordering on the domain corresponding to `Kite`, namely the domain in Fig. 5. By our definition, `f` satisfies termination-insensitive noninterference but `g` does not; the small check sketched below makes this concrete.
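To make the claim checkable, here is a small self-contained sketch using our own finite encoding of the semantic elements of `Kite` (the names `KiteEl`, `leq`, `compatible`, `tiSecure` and the denotations `fDen`, `gDen` are ours, and, as in the discussion above, we restrict attention to well-defined Boolean inputs). It computes the compatible extension of the domain ordering and confirms that the two outputs of `f` have an upper bound while those of `g` do not:

    -- Elements of the lifted domain for Kite (Fig. 5), with Body d1 d2 encoding
    -- the pair components: True stands for "()", False stands for "⊥".
    data KiteEl = Bot | Body Bool Bool | Tail
      deriving (Eq, Show)

    -- The domain ordering: Bot is least, Tail is above only Bot, and Body
    -- values are ordered componentwise (False <= True mirrors ⊥ ⊑ ()).
    leq :: KiteEl -> KiteEl -> Bool
    leq Bot _                   = True
    leq Tail Tail               = True
    leq (Body a b) (Body a' b') = (a <= a') && (b <= b')
    leq _ _                     = False

    allEls :: [KiteEl]
    allEls = Bot : Tail : [ Body a b | a <- [False, True], b <- [False, True] ]

    -- Compatible extension of (⊑): two elements are related iff they have an upper bound.
    compatible :: KiteEl -> KiteEl -> Bool
    compatible x y = any (\e -> leq x e && leq y e) allEls

    -- Denotations of f and g on well-defined Boolean inputs.
    fDen, gDen :: Bool -> KiteEl
    fDen h = if h then Body True False else Body False True   -- Body () ⊥ / Body ⊥ ()
    gDen h = if h then Body True False else Tail

    -- A denotation satisfies All ⇒ti (⊑) iff every pair of its outputs is compatible.
    tiSecure :: (Bool -> KiteEl) -> Bool
    tiSecure den = and [ compatible (den h) (den h') | h <- [False, True], h' <- [False, True] ]

    -- tiSecure fDen == True   (Body () ⊥ and Body ⊥ () share the upper bound Body () ())
    -- tiSecure gDen == False  (Body () ⊥ and Tail have no upper bound)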
This is perhaps not obvious for `f`, because a typical termination-insensitive _analysis_ would reject it anyway, so it is instructive to see a semantically equivalent definition `f'` (assuming well-defined Boolean input) which would pass a termination-insensitive analysis. (One should not be surprised that a program analysis can yield different results on semantically equivalent programs – as Rice’s theorem (Rice, 1953) shows, this is the price to pay for any non-trivial analysis which is decidable and has a semantic soundness condition.)

    f' h = Body (assert h ()) (assert (not h) ())
      where assert b y = seq (if b then () else spin) y

We claim that a semantic definition of termination-insensitive noninterference should accept `f'` (and hence `f`) but reject `g`. The reason for this is a fundamental feature of sequential computation, embodied in programming constructs such as call-by-value evaluation or sequential composition in imperative code. In Haskell, sequential computation is realised by a primitive function `seq`, which computes its first argument and then, if it terminates, returns its second argument. Consider an expression of the form `seq a b` where `a` may depend on a secret, but `b` provably does not. The only way that such a computation reveals information about the secret is if the _termination_ of `a` depends on the secret. This is the archetypal example of the kind of leak that a termination-insensitive analysis ignores.
A particular case of this is the function `assert` in the code above, which leaks the value of its first parameter via (non)termination. For this reason, even when `h` is a secret, the terms `assert h ()` and `assert (not h) ()` are considered termination-insensitive noninterfering (and thus so is `f'`). This example forms the basis of our impossibility claim, the technical content of which is the following: ###### Proposition 0. There is no termination observer $T$ (i.e. an equivalence relation) on the `Kite` domain for which $\texttt{f}:\ \mathrm{All}\mathrel{\sqcup_{\text{\tiny$\operatorname{LoI}$}}}{\texttt{f}^{\ast}(T)}\Rightarrow\mathrm{Id}$ holds, but for which this does not hold for `g`. ###### Proof.
The problem is to define $T$ in such a way that it distinguishes the two `Body` instances in the range of `f` from each other, but does not distinguish a `Body` instance from `Tail`, while still being an equivalence relation. $T$ would either have to (1) relate `Body () ⊥` and `Body ⊥ ()`, or (2) distinguish them and also distinguish one of them from `Tail` (if it related both to `Tail` then, by transitivity and symmetry, it would also relate the two `Body` instances).
Without loss of generality, assume that the pair (`Body () ⊥`, `Tail`) is not in $T$ (otherwise adjust `g` accordingly). In case (1), `f` does not have the property $\mathrm{All}\mathrel{\sqcup_{\text{\tiny$\operatorname{LoI}$}}}{\texttt{f}^{\ast}(T)}\Rightarrow\mathrm{Id}$, because $\texttt{f}^{\ast}(T)$ is $\mathrm{All}$ but `f True` $\neq$ `f False`. In case (2), `g` does have this property, because $\texttt{g}^{\ast}(T)$ is the identity relation. ∎ ### 4.5. Case Study: Nondeterminism and Powerdomains In this section we consider the application of generalised termination-insensitive properties to nondeterministic languages modelled using _powerdomains_ (Plotkin, 1976).
In the first part we instantiate our definition for a finite powerdomain representing a nondeterministic computation over lifted Booleans and illustrate that it “does the right thing”. In the second part we prove that we have an analogous compositional reasoning principle to the function composition property CompTI (Proposition 4), but replacing regular composition with the Kleisli composition of the finite powerdomain monad. #### Example: Termination-Insensitive Nondeterminism We are not aware of any specific studies of termination-insensitive noninterference for nondeterministic languages, and the definitions in this paper were conceived independently of this example, so it provides an interesting case study.

Figure 6. Powerdomain $\raisebox{1.79993pt}{\Large$\wp$}(\mathrm{Bool}_{\bot})$

Suppose we have a nondeterministic program $C$, modelled as a function in $\mathrm{Bool}\rightarrow\raisebox{1.79993pt}{\Large$\wp$}(\mathrm{Bool}_{\bot})$, where $\wp$ is the Plotkin powerdomain constructor and $\mathrm{Bool}=\\{\texttt{True},\texttt{False}\\}$. In the case of powerdomains over finite domains, the elements can be viewed as convex subsets of the underlying domain (see below for more technical details). In this section we only consider such finite powerdomains. $\raisebox{1.79993pt}{\Large$\wp$}(\mathrm{Bool}_{\bot})$, for example, is given in Figure 6. Each element of the powerdomain represents a set of possible outcomes of a nondeterministic computation. Let’s consider the input of some program $C$ to be a secret, and the output public. The property of interest here is what we can call TI-security, i.e., $C:\mathrm{All}\mathrel{{\Rightarrow}^{{\textsc{ti}}}}(\sqsubseteq)$. To explore this property, let us assume an imperative programming language with the following features:
* a choice operator $C_{1}\mid C_{2}$ which chooses nondeterministically to compute either $C_{1}$ or $C_{2}$,
* a Boolean input `x`, and
* an output statement to deliver a final result.
Note how the semantics of ${\mid}$ can be given as set union of values in the powerdomain. Under our definition, the compatible extension of the domain ordering for $\raisebox{1.79993pt}{\Large$\wp$}(\mathrm{Bool}_{\bot})$ relates all the points in the lower diamond to each other. Note that in particular this means that $\\{\bot,\texttt{True}\\}$ and $\\{\bot,\texttt{False}\\}$ are related. This in turn means that the following program $C$ is TI secure:

    while True { } | output x

This looks suspicious, to say the least. A static analysis would never allow such a program.
But our definition says that it is TI-secure, since the denotation of $C$ maps `True` to $\\{\bot,\texttt{True}\\}$ and `False` to $\\{\bot,\texttt{False}\\}$, and these are compatible by virtue of the common upper bound $\\{\bot,\texttt{True},\texttt{False}\\}$. To show that our definition is, nonetheless, “doing the right thing”, we can write $C$ in a semantically equivalent way as:

    ( while x { } ; output False ) | ( while (not x) { }; output True )

Not only is this equivalent, but the insecurity apparent in the first rendition of the program is now invisible to a termination-insensitive analysis. Now we turn to properties relevant to compositional reasoning about generalised termination-insensitivity for nondeterministic programs modelled using finite powerdomains. #### Compositional Reasoning for Finite Powerdomains We review the basic theory of finite Plotkin powerdomains, as developed in (Plotkin, 1976). We then define a natural lifting of complete preorders to powerdomains and show that this yields pleasant analogues (Corollary 15) of the SubTI and CompTI inference rules (Proposition 4) with respect to the powerdomain monad. Note: throughout this section we restrict attention to finite posets, so a preorder is complete iff it contains the partial order of its domain. The Plotkin powerdomain construction uses the so-called Egli-Milner ordering on subsets of a poset, derived from the order of the poset. For our purposes it is convenient to generalise the Egli-Milner definition to arbitrary binary relations: ###### Definition 0 (Egli-Milner extension). Let $R$ be a binary relation on $A$. Then $\mathrm{EM}(R)$ is the binary relation on subsets of $A$ defined by ${X\mathrel{\mathrm{EM}(R)}Y}\mathrel{\mathrm{\ iff\ }}{(\forall x\in X.\exists y\in Y.x\mathrel{R}y)\wedge(\forall y\in Y.\exists x\in X.x\mathrel{R}y)}$ ###### Fact 2. _(1)_ $\mathrm{EM}(\\_)$ is monotone. _(2)_ $\mathrm{EM}(\\_)$ preserves reflexivity, transitivity, and symmetry. The Egli-Milner ordering on subsets of a poset $A$ is then $\mathrm{EM}(\sqsubseteq)$. Note that (2) entails that $\mathrm{EM}(R)$ is a preorder whenever $R$ is a preorder; a small executable sketch of $\mathrm{EM}$ follows.
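A minimal executable rendering of the Egli-Milner extension for the finite case, with relations as Boolean-valued functions and subsets as lists (the helper names are ours, not from any library):

    -- EM(R): every element of xs is R-related to some element of ys, and vice versa.
    em :: (a -> a -> Bool) -> [a] -> [a] -> Bool
    em r xs ys = all (\x -> any (r x) ys) xs && all (\y -> any (`r` y) xs) ys

    -- On the chain 0 ⊑ 1 ⊑ 2 (the usual ordering on Int restricted to [0..2]):
    --   em (<=) [0,2] [0,1,2]  ==  True
    --   em (<=) [0,1,2] [0,2]  ==  True
    -- so these two distinct sets are EM(⊑)-related in both directions: antisymmetry is lost.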
However, since antisymmetry is _not_ preserved, in general $\mathrm{EM}(\sqsubseteq)$ is only a preorder, so to obtain a partial order it is necessary to quotient by the induced equivalence relation. Conveniently, the _convex_ subsets provide a natural canonical representative for each equivalence class: ###### Definition 0 (Convex Closure). The _convex closure_ of $X$ is $\mathrm{Cv}(X)\mathrel{\stackrel{{\scriptstyle{\scriptscriptstyle\mathrm{def}}}}{{=}}}\\{b\in A\mid a\in X,c\in X,a\sqsubseteq b\sqsubseteq c\\}$. ###### Fact 3. _(1)_ $\,\mathrm{Cv}$ is a closure operator. _(2)_ $\,\mathrm{Cv}(X)$ is the largest member of ${[}X{]}_{\mathrm{EM}(\sqsubseteq)}$. ###### Definition 0 (Finite Plotkin Powerdomain). Let $(A,\sqsubseteq)$ be a finite poset. Then the _Plotkin powerdomain_ $\raisebox{1.79993pt}{\Large$\wp$}(A)$ is the poset of all non-empty convex subsets of $A$ ordered by $\mathrm{EM}(\sqsubseteq)$. The union operation is defined by $X\mathrel{\bar{\cup}}Y\mathrel{\stackrel{{\scriptstyle{\scriptscriptstyle\mathrm{def}}}}{{=}}}{\mathrm{Cv}(X\cup Y)}$. The powerdomain constructor is naturally extended to a monad, allowing us to compose functions with types of the form $A\rightarrow\raisebox{1.79993pt}{\Large$\wp$}(B)$. ###### Definition 0 (Kleisli-extension). Let $A,B$ be finite posets. Let $f\in[A\rightarrow\raisebox{1.79993pt}{\Large$\wp$}(B)]$. The Kleisli- extension of $f$ is ${{f}^{\dagger}}\in[\raisebox{1.79993pt}{\Large$\wp$}(A)\rightarrow\raisebox{1.79993pt}{\Large$\wp$}(B)]$ defined by ${{f}^{\dagger}}(X)=\mathrm{Cv}(\bigcup_{x\in X}f(x)).$ ###### Definition 0 (Kleisli-composition). Let $A,B,C$ be finite posets and let $f\in[A\rightarrow\raisebox{1.79993pt}{\Large$\wp$}(B)]$ and $g\in[B\rightarrow\raisebox{1.79993pt}{\Large$\wp$}(C)]$. Then the Kleisli- composition $f;g\in[A\rightarrow\raisebox{1.79993pt}{\Large$\wp$}(C)]$ is ${{g}^{\dagger}}\circ f$. We lift the powerdomain constructor to binary relations in the obvious way: ###### Definition 0. Let $R$ be a binary relation on finite poset $A$. Then $\raisebox{1.79993pt}{\Large$\wp$}(R)$ is the relation on $\raisebox{1.79993pt}{\Large$\wp$}(A)$ obtained by restricting $\mathrm{EM}(R)$ to non-empty convex sets. ###### Lemma 0. If $P$ is a complete preorder on finite poset $A$ then $\raisebox{1.79993pt}{\Large$\wp$}(P)$ is a complete preorder on $\raisebox{1.79993pt}{\Large$\wp$}(A)$. Now, in order to establish our desired analogues of SubTI and CompTI, we must be able to relate $\widetilde{\raisebox{1.79993pt}{\Large$\wp$}(P)}$ to $\widetilde{P}$. The key properties are the following: ###### Lemma 0. Let $R$ be a preorder and let $P$ be a complete preorder. Then: _(1)_ ${\widetilde{\mathrm{EM}(R)}}={\mathrm{EM}(\widetilde{R})}$ _(2)_ $\mathrm{Cv}(X)\mathrel{\widetilde{\raisebox{1.79993pt}{\Large$\wp$}(P)}}\mathrm{Cv}(Y)$ iff $X\mathrel{\widetilde{\mathrm{EM}(P)}}Y$ _(3)_ ${\widetilde{\raisebox{1.79993pt}{\Large$\wp$}(P)}}={\raisebox{1.79993pt}{\Large$\wp$}(\widetilde{P})}$ We then have: ###### Theorem 14. Let $A,B$ be finite posets and let $f\in[A\rightarrow\raisebox{1.79993pt}{\Large$\wp$}(B)]$. Let $P,P^{\prime}$ be a complete preorders on $A$ and let $Q$ be a complete preorder on $B$. 1. (1) If $P^{\prime}\mathrel{\sqsupseteq_{\text{\tiny LoCI}}}P$ then ${\widetilde{\raisebox{1.79993pt}{\Large$\wp$}(P^{\prime})}}\subseteq{\widetilde{\raisebox{1.79993pt}{\Large$\wp$}(P)}}$. 2. 
(2) If $f:P\mathrel{{\Rightarrow}^{{\textsc{ti}}}}\raisebox{1.79993pt}{\Large$\wp$}(Q)$ then ${{f}^{\dagger}}:\raisebox{1.79993pt}{\Large$\wp$}(P)\mathrel{{\Rightarrow}^{{\textsc{ti}}}}\raisebox{1.79993pt}{\Large$\wp$}(Q)$. ###### Proof. 1. (1) By definition $P^{\prime}\mathrel{\sqsupseteq_{\text{\tiny LoCI}}}P$ iff $P^{\prime}\subseteq P$ hence, as argued in the proof of Proposition 4, $P^{\prime}\mathrel{\sqsupseteq_{\text{\tiny LoCI}}}P$ implies ${\widetilde{P^{\prime}}}\subseteq{\widetilde{P}}$. The conclusion then follows by monotonicity of $\mathrm{EM}(\\_)$ and Lemma 13. 2. (2) Assume $f:P\mathrel{{\Rightarrow}^{{\textsc{ti}}}}\raisebox{1.79993pt}{\Large$\wp$}(Q)$. By the definition of ${{f}^{\dagger}}$ and Lemma 13, it suffices to show that $X_{1}\mathrel{\mathrm{EM}(\widetilde{P})}X_{2}$ implies $Z_{1}\mathrel{\mathrm{EM}(\widetilde{Q})}Z_{2}$, where $Z_{i}=\bigcup_{x\in X_{i}}f(x)$. Let $z_{1}\in Z_{1}$, thus $z_{1}\in f(x_{1})$ for some $x_{1}\in X_{1}$. Since $X_{1}\mathrel{\mathrm{EM}(\widetilde{P})}X_{2}$, there is some $x_{2}\in X_{2}$ with $x_{1}\mathrel{\widetilde{P}}x_{2}$. Since $f:P\mathrel{{\Rightarrow}^{{\textsc{ti}}}}\raisebox{1.79993pt}{\Large$\wp$}(Q)$, it follows that $f(x_{1})\mathrel{\widetilde{\raisebox{1.79993pt}{\Large$\wp$}(Q)}}f(x_{2})$, hence by Lemma 13 $f(x_{1})\mathrel{\mathrm{EM}(\widetilde{Q})}f(x_{2})$, hence $z_{1}\mathrel{\widetilde{Q}}z_{2}$ for some $z_{2}\in f(x_{2})\subseteq Z_{2}$. Thus $\forall z_{1}\in Z_{1}.\exists z_{2}\in Z_{2}.z_{1}\mathrel{\widetilde{Q}}z_{2}$. It follows by a symmetrical argument that $\forall z_{2}\in Z_{2}.\exists z_{1}\in Z_{1}.z_{1}\mathrel{\widetilde{Q}}z_{2}$. ∎ ###### Corollary 0. Let $A,B,C$ be finite posets and let $f\in[A\rightarrow\raisebox{1.79993pt}{\Large$\wp$}(B)]$ and $g\in[B\rightarrow\raisebox{1.79993pt}{\Large$\wp$}(C)]$. The following inference rules are valid for all elements of $\operatorname{LoCI}$ of appropriate type: $\displaystyle\displaystyle\frac{\begin{array}[]{c}\;P^{\prime}\mathrel{\sqsupseteq_{\text{\tiny LoCI}}}P\;\;\;f:P\mathrel{{\Rightarrow}^{{\textsc{ti}}}}{\raisebox{1.79993pt}{\Large$\wp$}(Q)}\;\;\;Q\mathrel{\sqsupseteq_{\text{\tiny LoCI}}}Q^{\prime}\end{array}}{\begin{array}[]{c}\;f:{P^{\prime}}\mathrel{{\Rightarrow}^{{\textsc{ti}}}}{\raisebox{1.79993pt}{\Large$\wp$}(Q^{\prime})}\end{array}}~{}\text{}\hskip 30.00005pt\displaystyle\frac{\begin{array}[]{c}\;f:P\mathrel{{\Rightarrow}^{{\textsc{ti}}}}\raisebox{1.79993pt}{\Large$\wp$}(Q)\;\;\;g:Q\mathrel{{\Rightarrow}^{{\textsc{ti}}}}{\raisebox{1.79993pt}{\Large$\wp$}(R)}\end{array}}{\begin{array}[]{c}\;f;g:{P}\mathrel{{\Rightarrow}^{{\textsc{ti}}}}{\raisebox{1.79993pt}{\Large$\wp$}(R)}\end{array}}~{}\text{}$ ## 5\. Related Work Readers of this paper hoping to see a reconciliation of Shannon’s quantitative information theory with domain theory may be disappointed to see that we have tackled a less ambitious problem based on Shannon’s lesser-known qualitative theory of information. Abramsky (2008) discusses the issues involved in combining the quantitative theory of Shannon with the qualitative theory of Scott and gives a number of useful pointers to the literature. As we mentioned in the introduction, Shannon’s paper describing information lattices (Shannon, 1953) is relatively unknown, but a more recent account by Li and Chong (2011) make Shannon’s ideas more accessible (see also (Rioul et al., 2022)). Most later works using similar abstractions for representing information have been made independently of Shannon’s ideas. 
In the security area, Cohen (1977) used partitions to describe varieties of information flow via so-called _selective dependencies_. In an independent line of work, various authors developed the use of the lattice of partial equivalence relations (PERs) to give semantic models to polymorphic types in programming languages e.g. (Coppo and Zacchi, 1986; Abadi and Plotkin, 1990). PERs generalise equivalence relations by dropping the reflexivity requirement, so a PER is just an equivalence relation on a subset of the space in question. An important generalisation over equivalence relations, particularly when used for semantic models of types, is that “flow properties” of the form $f:P\Rightarrow Q$ can expressed by interpreting $P\Rightarrow Q$ itself as a PER over functions, and $f:P\Rightarrow Q$ is just shorthand for $f$ being related to itself by this PER. The connection to information flow and security properties comes via _parametricity_ , a property of polymorphic types which can be used to establish noninterference e.g. (Tse and Zdancewic, 2004; Bowman and Ahmed, 2015). Independent of all of the above, Landauer and Redmond (1993) described the use of the lattice of equivalence relations to describe security properties, dubbing it _a lattice of information_. Sabelfeld and Sands (2001), inspired by the use of PERs for static analysis of dependency (Hunt, 1991; Hunt and Sands, 1991) (and independent of Landauer and Redmond’s work) used PERs over domains to give semantic models of information flow properties, including for more complex domains for nondeterminism and probability, and showed that the semantic properties could be used to prove semantic soundness of a simple type system. Our TI results in § 4.5 mirror the termination-sensitive composition principle for powerdomains given by Sabelfeld and Sands (2001). Hunt and Sands (2021) introduce a refinement of $\operatorname{LoI}$, orthogonal to the present paper, which adds disjunctive information flow properties to the lattice. Li and Zdancewic (2005) use a postprocessing definition of declassification policies in the manner of Proposition 2(2); Sabelfeld and Sands (2009) sketched how this could be reformulated within $\operatorname{LoI}$. Giacobazzi and Mastroeni (2004) introduced _abstract noninterference_ (ANI) in which a security-centric noninterference property is parameterised by abstract interpretations to represent the observational power of an attacker, and the properties to be protected. Hunt and Mastroeni (2005) showed how so-called _narrow_ ANI in (Giacobazzi and Mastroeni, 2004) and some key results can be recast as properties over $\operatorname{LoI}$. In its most general form, Giacobazzi and Mastroeni (2018, Def 4.2) define ANI as a property of a function $f$ parameterised by three upper closure operators (operating on _sets_ of values): an output observation $\rho$, an input property $\phi$ which may flow, and an input property $\eta$ “to protect”. A function $f$ is defined to have abstract noninterference property $\\{\phi,\eta\\}f\\{\rho\\}$ if, for all $x,y$: $\phi(\\{x\\})=\phi(\\{y\\})~{}\text{implies}~{}\rho(\hat{f}(\eta(\\{x\\})))=\rho(\hat{f}(\eta(\\{y\\})))$ where $\hat{f}$ is the lifting of $f$ to sets. Note that this can be directly translated to an equivalent property over the lattice of information, as follows: $\hat{f}\circ\eta^{\prime}:\ker(\phi^{\prime})\Rightarrow\ker(\rho)$ where $\phi^{\prime}(x)=\phi(\\{x\\})$ and $\eta^{\prime}(x)=\eta(\\{x\\})$. 
In the special case that $\eta$ is the identity, this reduces simply to an information flow property of $f$, namely: $f:\ker(\phi^{\prime})\Rightarrow\ker(\rho^{\prime})$ where $\rho^{\prime}(y)=\rho(\\{y\\})$. In the general case, Giacobazzi and Mastroeni (2018) observe that the ANI framework models attackers whose ability to make logical deductions (about the inputs of $f$) is constrained within the abstract interpretation fixed by $\rho$ and $\eta$. Inheriting from the underlying abstract interpretation framework, ANI can be developed within a variety of semantic frameworks (denotational, operational, trace-based, etc.). The lattice of information, either directly or indirectly provides a robust baseline for various quantitative measures of information flow (Malacaria, 2015; McIver et al., 2014). In the context of quantitative information flow, Alvim et al. (2020, Chapter 16) discuss leakage refinement orders for potentially nonterminating probabilistic programs. Their ordering allows increase in “security” or termination. Increase in security here corresponds to decrease in information (a system which releases no information being the most secure). Thus Alvim et al.’s ordering is incomparable to ours: the $\operatorname{LoCI}$ ordering reflects increase in information (decrease in security) or increase in termination. Regarding the question of termination-sensitive noninterference, the first static analysis providing this kind of guarantee was by Denning and Denning (1977). This used Dorothy Denning’s lattice model of information (Denning, 1976). It is worth noting that Denning’s lattice model is a model for security properties expressed via labels, inspired by, but generalising, classical military security clearance levels. As such these are syntactic lattices used to identify different objects in a system and to provide a definition of the intended information flows. But Denning’s work did not come with any actual formal definition of information flow, and so their analysis did not come with any proof of a semantic security property. Such a proof came later in the form of a termination-insensitive noninterference property for a type system (Volpano et al., 1996), intended to capture the essence of Denning’s static analysis. The semantic guarantees for such analysis in the presence of stream outputs was studied by Askarov et al. (2008). There they showed that stream outputs can leak arbitrary amounts of information through the termination (progress) insensitivity afforded by a Denning-style analysis, but also that the information is bound to leak slowly. In recent work, Sterling and Harper (2022) propose a new semantic model for termination-insensitive noninterference properties using more elaborate domain-theoretic machinery. The approach is fundamentally type-centric, adopting sheaf semantics ideas from their earlier work on the semantics of phase distinctions (Sterling and Harper, 2021). The fundamental difference in their work is that it is an _intrinsic_ approach to the semantics of information flow types in the sense of Reynolds (2003), whereby information flow specifications are viewed as an integral part of types, and thus the meaning of the type for a noninterfering function is precisely the semantics of noninterference. This is in contrast with the _extrinsic_ relational models studied here in which information flow properties are characterised by properties carved out of a space of arbitrary functions. 
Though the merger of domains and relations sketched in §3.7 may be considered an intrinsic presentation, the approach of Sterling and Harper goes further: it requires a language of information flow properties to be part of the type language (and the underlying semantics). In their work the class of properties discussed is quite specific, namely those specifiable by a Denning-style (semi)lattice of security labels. The approach is particularly suited to reasoning about systems in the style of DCC (Abadi et al., 1999) in which security labels are part of the programming language itself. Unlike in the present work, the only kind of termination-insensitive noninterference discussed in their paper is the simple case in which a program either terminates or it does not. The most advanced semantic soundness proof of termination-insensitive noninterference (in terms of programming language features) is the recent work of Gregersen et al. (2021). In terms of advanced typing features (combinations of higher-order state, polymorphism, existential and recursive types…) this work is a tour de force, although the notion of termination-sensitivity at the top level is just the simplest kind; any termination-insensitive notions that arise internally through elaborate types are not articulated explicitly. ## 6\. Conclusion and Future Work In this paper we have reconciled two different theories of information: * • Shannon’s lattice model, which gives an encoding-independent view of the information that is released from some data source by a function, and orders one information element above another when it provides more information about the source; and * • Scott’s domain theory, where an “information element” is a provisional representation of the information produced so far by a computational process, and the ordering relation reflects an increase in definedness, or computational progress. Our combination of these models, which we have dubbed the Lattice of Computable Information (as a nod to the fact that Scott’s theory is designed to model computable functions via continuity, even if it does not always do so perfectly) retains the essential features of both theories – it possesses the lattice properties which describe how information can be combined and compared, at the same time as taking into account the Scott ordering in a natural way. We have also shown how the combination yields the first definition, general in its output domain, of what it means to be the termination-insensitive weakening of an arbitrary flow property. We identify some lines of further work which we believe would be interesting to explore: #### New Information Flow Policies using LoCI $\operatorname{LoCI}$ allows the expression of new, more fine-grained information flow properties, but which ones are useful? One example worth exploring relates to noninterference for systems with input streams. In much of the literature on noninterference for such systems there is an explicit assumption that systems are “input total”, so that the system never blocks when waiting for a secret input. Using $\operatorname{LoCI}$ we have the machinery to explore this space without such assumptions – we can formulate what we might call _input termination-sensitive_ and _input termination- insensitive_ properties within $\operatorname{LoCI}$ (without weakening). 
Input termination-sensitive properties are very strong since they assume that the high user might try to sneak information to a low observer via the decision to supply or withhold information, whereas insensitive properties permit upfront knowledge of the number of high inputs consumed, thus ignoring these flows. Another place where $\operatorname{LoCI}$ can prove useful is in _required release_ policies (Chong, 2012), where a minimum amount of information flow is required (e.g. a freedom of information property). In this case we would like to ensure that the information which is released is produced in a “decent” form – i.e. as a maximal element among those $\operatorname{LoCI}$ elements which have the same equivalence classes. This prevents the use of nontermination to obfuscate the information. #### Semantic Proofs of Noninterference We have developed some basic semantic-level tools for compositional reasoning about information flow properties in $\operatorname{LoCI}$, and their termination-insensitive relatives (e.g. Proposition 4 and Corollary 15). It seems straightforward to establish a flow-sensitive variant of the progress- insensitive type system of Askarov et al. (2008) that can be given a semantic soundness proof based on the definition of termination-insensitivity given here (the proof in (Askarov et al., 2008) is not given in the paper, but it is a syntactic proof ). It would be interesting to tackle a more involved language, for example with both input and output streams, and with input termination-sensitive/insensitive variants. It would be important, via such case studies, to further develop an arsenal of properties, established at the semantic level, which can be reused across different proofs for different systems. For languages which support higher-order functions, semantic proofs would call for the ability to build complete preorders on continuous function spaces $[A\rightarrow B]$ by the usual logical relations construction. That is, given complete preorders $P$ and $Q$ on $A$ and $B$, we would like to construct a complete preorder $P\Rightarrow Q$, relating $f$, and $g$ just when $a\mathrel{P}a^{\prime}$ implies $f(a)\mathrel{Q}g(a^{\prime})$. In fact, defined this way, the relation $P\Rightarrow Q$ will in general only be a _partial_ preorder (some elements of $[A\rightarrow B]$ will not be in the relation at all). Promisingly, the results in (Abadi and Plotkin, 1990) suggest that complete partial preorders are well-behaved, yielding a cartesian-closed category. #### Domain Constructors The powerdomain results in §4.5 are limited to finite posets. It would be interesting to extend these results beyond the finite case and, more generally, to see if other domain constructions, including via recursive domain equations, can be lifted to complete preorders. Clearly this will require restriction to an appropriate category of algebraic domains, rather than arbitrary posets. It remains to be seen whether it will also be necessary to impose additional constraints on the preorders. ###### Acknowledgements. Thanks to the anonymous referees for numerous constructive suggestions, in particular connections to category theory that formed the basis of §3.7, and the suggestion to use an example based on powerdomains. Thanks to Andrei Sabelfeld and Aslan Askarov for helpful advice. This work was partially supported by the Swedish Foundation for Strategic Research (SSF), the Swedish Research Council (VR). ## References * (1) * Abadi et al. (1999) M. Abadi, A. Banerjee, N. 
Heintze, and J. Riecke. 1999\. A Core calculus of Dependency. In _Proc. ACM Symp. on Principles of Programming Languages_. 147–160. * Abadi and Plotkin (1990) Martín Abadi and Gordon D. Plotkin. 1990. A PER model of polymorphism and recursive types. _[1990] Proceedings. Fifth Annual IEEE Symposium on Logic in Computer Science_ (1990), 355–365. * Abramsky (1987) Samson Abramsky. 1987\. _Domain Theory and the Logic of Observable Properties_. Ph. D. Dissertation. University of London. * Abramsky (1991) Samson Abramsky. 1991\. Domain theory in logical form. _Annals of Pure and Applied Logic_ 51, 1 (1991), 1–77. https://doi.org/10.1016/0168-0072(91)90065-T * Abramsky (2008) Samson Abramsky. 2008\. Information, processes and games. _J. Benthem van & P. Adriaans (Eds.), Philosophy of Information_ (2008), 483–549. * Abramsky and Jung (1995) Samson Abramsky and Achim Jung. 1995. Domain Theory. In _Handbook of Logic in Computer Science (Vol. 3): Semantic Structures_. Oxford University Press, Inc., USA, 1–168.
# Unsupervised Learning for Computational Phenotyping

Chris Hodapp<EMAIL_ADDRESS>

###### Abstract

With large volumes of health care data comes the research area of computational phenotyping, making use of techniques such as machine learning to describe illnesses and other clinical concepts from the data itself. The “traditional” approach of using supervised learning relies on a domain expert, and has two main limitations: requiring skilled humans to supply correct labels limits its scalability and accuracy, and relying on existing clinical descriptions limits the sorts of patterns that can be found. For instance, it may fail to acknowledge that a disease treated as a single condition may really have several subtypes with different phenotypes, as seems to be the case with asthma and heart disease. Some recent papers instead cite successes using unsupervised learning. This shows great potential for finding patterns in Electronic Health Records that would otherwise be hidden and that can lead to greater understanding of conditions and treatments. This work takes a method derived strongly from Lasko _et al._, implements it in Apache Spark and Python, and generalizes it to laboratory time-series data in MIMIC-III. It is released as an open-source tool for exploration, analysis, and visualization, available at: https://github.com/Hodapp87/mimic3_phenotyping.

###### Index Terms: Big data, Health analytics, Data mining, Machine learning, Unsupervised learning, Computational phenotyping

## I Introduction & Background

The field of _computational phenotyping_[1] has emerged recently as a way of learning more from the increasing volumes of Electronic Health Records available, and the volume of this data ties it in naturally with fields like machine learning and data mining. The “traditional” approach of supervised learning over classifications has two noted problems:
* • It requires the time and attention of a domain expert in order to provide classification information over which to train a model, and this requirement on human attention limits the amount of data available (and, to an extent, its accuracy).
* • It tends to limit the patterns that can be found to what existing classifications acknowledge. If a disease treated as a single condition really has multiple subtypes with different phenotypes, the model will not reflect this - for instance, asthma and heart disease[9].

Some recent papers[13, 9, 5] cite successes with approaches instead using unsupervised learning on time-series data. In Lasko _et al._[9], such an approach applied to serum uric acid measurements was able to distinguish gout and acute leukemia with no prior classifications given in training. Marlin _et al._[13] examines 13 physiological measures from a pediatric ICU (such as pulse oximetric saturation, heart rate, and respiratory rate). Che _et al._[1] likewise uses ICU data, but focuses on certain ICD-9 codes rather than mortality. This approach still has technical barriers. Time-series in healthcare data are frequently noisy, sparse, heterogeneous, or irregularly sampled, and Gaussian processes are commonly employed here to condition the data into a more regular form as a pre-processing step. In [9], Gaussian process regression produces a model which generates a continuous, interpolated time-series providing both predicted mean and variance, to which a two-layer stacked sparse autoencoder is then applied (compared with a five-layer stacked denoising autoencoder in [1], without Gaussian process regression).
The goal undertaken here was to reimplement a combination of some earlier results (focusing mainly on that of Lasko _et al._[9]) using Apache Spark and the MIMIC-III critical care database[6], in a form able to run primarily on a “standard” Spark setup such as Amazon Elastic MapReduce. However, presently it relies on Python in order to use Keras and scikit-learn for feature learning and t-SNE. The software behind this work is also released as an open source tool for accommodating exploration, analysis, and visualization using the techniques described herein. It is available at: https://github.com/Hodapp87/mimic3_phenotyping.

## II Approach & Implementation

The main problems that the implementation tries to address within these parameters are:
* • Loading MIMIC-III data into a form usable from Spark
* • Identifying relevant laboratory tests, admissions, and ICD-9 codes on which to focus
* • Preprocessing the time-series data with Gaussian process regression
* • Using a two-layer stacked sparse autoencoder to perform feature learning
* • Visualizing the new feature space and identifying potential clusters

This section describes the general approach, and the Experimental Evaluation section (Section III) gives specific examples that were tested.

### II-A Loading & Selecting Data

The MIMIC-III database is supplied as a collection of .csv.gz files (that is, comma-separated values, compressed with gzip). By way of spark-csv, Apache Spark 2.x is able to load these files natively as tabular data, i.e. a DataFrame. All work described here used the following tables[6]:
* • LABEVENTS: Timestamped laboratory measurements for patients
* • DIAGNOSES_ICD: ICD-9 diagnoses for patients (per-admission)
* • D_ICD_DIAGNOSES: Information on each ICD-9 diagnosis
* • D_LABEVENTS: Information on each type of laboratory event

The process requires two ICD-9 categories and one LOINC code for a lab test. Admissions are filtered to those which contain a lab time-series of the given LOINC code with at least 3 samples, and which contain either ICD-9 category mutually exclusively (that is, at least one diagnosis of the first ICD-9 category but none of the second, or at least one of the second ICD-9 category and none of the first). As an aid to this process, the tool can produce a matrix in which each row represents an ICD-9 category (starting with the most occurrences and limited to some number), each column represents a likewise ordered LOINC code (http://loinc.org/), and each intersection contains the number of admissions with an ICD-9 diagnosis of that category and a laboratory time-series of that LOINC code. It does not tell whether a pair of ICD-9 categories, mutually excluding each other, may produce enough data, but it may still give a meaningful estimate.
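As a concrete illustration of this selection step, the following PySpark sketch loads the two main tables, applies the mutual-exclusion filter, and builds the admission-count matrix just described. It is illustrative only and is not taken from the released tool; the column names (HADM_ID, ITEMID, ICD9_CODE) follow the MIMIC-III schema, the example ITEMID and ICD-9 categories are those used later in Section III, and for brevity the matrix columns are indexed by ITEMID rather than mapped to LOINC codes.

```python
# Illustrative PySpark sketch (not the released tool's code) of Section II-A:
# cohort filtering and the ICD-9-category x lab-test admission-count matrix.
from pyspark.sql import SparkSession
import pyspark.sql.functions as F

spark = SparkSession.builder.appName("mimic3-selection").getOrCreate()
lab = spark.read.csv("LABEVENTS.csv.gz", header=True, inferSchema=True)
dx = spark.read.csv("DIAGNOSES_ICD.csv.gz", header=True, inferSchema=True)

item_id = 50861               # example lab test (ALT, LOINC 1742-6); see Sec. III
cat_a, cat_b = "428", "571"   # example ICD-9 categories (3-character prefixes)

# Admissions with a time-series of the chosen lab test containing >= 3 samples.
enough = (lab.filter(F.col("ITEMID") == item_id)
             .groupBy("HADM_ID").count()
             .filter(F.col("count") >= 3))

# Keep admissions carrying one of the two ICD-9 categories but not the other.
cats = dx.withColumn("category", F.substring("ICD9_CODE", 1, 3))
flags = (cats.groupBy("HADM_ID")
             .agg(F.max((F.col("category") == cat_a).cast("int")).alias("has_a"),
                  F.max((F.col("category") == cat_b).cast("int")).alias("has_b")))
cohort = enough.join(flags, "HADM_ID").filter(F.col("has_a") + F.col("has_b") == 1)

# Admission-count matrix: rows are ICD-9 categories, columns are lab ITEMIDs,
# entries count admissions having both that diagnosis and that lab time-series.
lab_adm = lab.select("HADM_ID", "ITEMID").distinct()
matrix = (cats.select("HADM_ID", "category").distinct()
              .join(lab_adm, "HADM_ID")
              .groupBy("category").pivot("ITEMID")
              .agg(F.countDistinct("HADM_ID")))
matrix.show()
```

Both the filter and the count matrix reduce to standard DataFrame joins and aggregations, which is why this stage fits naturally within Spark.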
The below shows an example of this matrix, limited to the top 4 LOINC codes (incidentally, all blood measurements) and top 12 ICD-9 categories for space reasons: ICD-9 | LOINC code ---|--- category | 11555-0 | 11556-8 | 11557-6 | 11558-4 427 | 12456 | 12454 | 12458 | 12779 276 | 11392 | 11393 | 11393 | 11962 428 | 11198 | 11195 | 11196 | 11515 401 | 11186 | 11187 | 11188 | 11525 518 | 11386 | 11387 | 11386 | 11545 250 | 8238 | 8238 | 8240 | 8574 414 | 9243 | 9242 | 9242 | 9412 272 | 7733 | 7733 | 7736 | 7919 285 | 6423 | 6421 | 6422 | 6720 584 | 6541 | 6541 | 6542 | 6834 V45 | 4862 | 4860 | 4861 | 5068 599 | 3815 | 3816 | 3815 | 3983 All processing at this stage was done via Spark’s DataFrame operations, aside from the final conversion to an RDD containing individual time-series. ### II-B Preprocessing #### II-B1 Time Warping The covariance function that is used in Gaussian process regression (and explained after this section) contains a time-scale parameter $\tau$ which embeds assumptions on how closely correlated nearby samples are in time. This value is assumed not to change within the time-series - that is, it is assumed to be stationary[9]. This assumption is often incorrect, but under the assumption that more rapidly-varying things (that is, shorter time-scale) are measured more frequently, an approximation can be applied to try to make the time-series more stationary - in the form of changing the distance in time between every pair of adjacent samples in order to shorten longer distances, but lengthen shorter ones[9]. For original distance $d$, the warped distance is $d^{\prime}=d^{1/a}+b$, using $a=3,b=0$ (these values were taken directly from equation 5 of [9] and not tuned further). Thomas Lasko also related in an email that this assumption (that measurement frequency was proportional to how volatile the thing being measured is) is not true for all medical tests. He referred to another paper of his[8] for a more robust approach, however, this is not used here. #### II-B2 Gaussian Process Regression In order to condition the irregular and sparse time-series data from the prior step, Gaussian process regression was used. The method used here is what Lasko _et al._[9] described, which in more depth is the method described in algorithm 2.1 of Rasmussen & Williams[15]. In brief, Gaussian process regression (GPR) is a Bayesian non-parametric, or less parametric, method of supervised learning over noisy observations[3, 9, 15]. It is not completely free-form, but it infers a function constrained only by the mean function (which here is assumed to be 0 and can be ignored) and the covariance function $k(t,t^{\prime})$ of an underlying infinite- dimensional Gaussian distribution. It is not exclusive to time-series data, but $t$ is used here as in this work GPR is done only on time-series data. That covariance function $k$ defines how dependent observations are on each other, and so a common choice is the squared exponential[3]: $k(t,t^{\prime})=\sigma_{n}^{2}\exp\bigg{[}\frac{-(t-t^{\prime})^{2}}{2l^{2}}\bigg{]}$ Note that $k$ approaches a maximum of $\sigma_{n}^{2}$ as $t$ and $t^{\prime}$ are further, and $k$ approaches a minimum of 0 as $t$ and $t^{\prime}$ are closer. Intuitively, this makes sense for many “natural” functions: we expect closer $t$ values to have more strongly correlated function values, and $l$ defines the time scale of that correlation. 
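To make the preprocessing and regression concrete, the following is a small self-contained NumPy sketch; it is illustrative only and is not the released tool's code (which runs under Spark). It applies the time warping $d^{\prime}=d^{1/a}+b$ with $a=3$, $b=0$, standardizes a series, and performs the Cholesky-based regression of algorithm 2.1 (reproduced below) using the squared-exponential covariance shown above; the rational quadratic covariance actually used in this work can be passed in its place. The separate noise term and the example values are assumptions made for illustration.

```python
# Illustrative NumPy sketch of the time warping, standardization, and
# Cholesky-based GP regression described in this section (not the tool's code).
import numpy as np

def warp_times(t, a=3.0, b=0.0):
    """Warp sample times: each gap d between adjacent samples becomes
    d' = d**(1/a) + b, shortening long gaps and lengthening short ones."""
    t = np.asarray(t, dtype=float)
    gaps = np.diff(t) ** (1.0 / a) + b
    return np.concatenate(([t[0]], t[0] + np.cumsum(gaps)))

def sq_exp_kernel(t1, t2, sigma_n=1.0, l=1.0):
    """Squared-exponential covariance k(t, t') on two vectors of times."""
    d2 = (np.asarray(t1)[:, None] - np.asarray(t2)[None, :]) ** 2
    return sigma_n ** 2 * np.exp(-d2 / (2.0 * l ** 2))

def gp_regress(t_train, y_train, t_test, kernel, noise=1e-2):
    """GP regression per algorithm 2.1 of Rasmussen & Williams: returns
    predictive mean, predictive variance, and the log marginal likelihood."""
    n = len(t_train)
    K = kernel(t_train, t_train) + noise * np.eye(n)   # jitter stands in for noise
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    K_star = kernel(t_train, t_test)                   # n x m
    mean = K_star.T @ alpha
    v = np.linalg.solve(L, K_star)
    var = np.diag(kernel(t_test, t_test)) - np.sum(v * v, axis=0)
    log_ml = (-0.5 * y_train @ alpha
              - np.sum(np.log(np.diag(L)))
              - 0.5 * n * np.log(2.0 * np.pi))
    return mean, var, log_ml

# Example: one irregularly sampled lab series (times in days).
t = np.array([0.0, 0.4, 0.6, 3.0, 9.0])
y = np.array([42.0, 55.0, 61.0, 40.0, 35.0])
t_w = warp_times(t)
mu, sd = y.mean(), y.std()                             # saved to undo later
mean, var, log_ml = gp_regress(t_w, (y - mu) / sd,
                               np.linspace(t_w[0], t_w[-1], 50), sq_exp_kernel)
mean = mean * sd + mu                                  # undo standardization
```

The log marginal likelihood returned here is the quantity summed over all training series when tuning the covariance hyperparameters, as described next.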
The rational quadratic function is used here instead as it better models things that may occur on many time scales[9]: $k(t,t^{\prime})=\sigma_{n}^{2}\bigg{[}1+\frac{(t-t^{\prime})^{2}}{2\alpha\tau^{2}}\bigg{]}^{-\alpha}$

Gaussian process regression was implemented from algorithm 2.1 of Rasmussen & Williams[15], copied below:

$\displaystyle L=\textrm{cholesky}(K+\sigma_{n}^{2}I)$ (1)
$\displaystyle\boldsymbol{\alpha}=L^{\top}\backslash(L\backslash\mathbf{y})$ (2)
$\displaystyle\bar{f_{*}}=\mathbf{k_{*}^{\top}}\boldsymbol{\alpha}$ (3)
$\displaystyle\mathbf{v}=L\backslash\mathbf{k_{*}}$ (4)
$\displaystyle\textrm{V}\big{[}f_{*}\big{]}=k(\mathbf{x_{*}},\mathbf{x_{*}})-\mathbf{v}^{\top}\mathbf{v}$ (5)
$\displaystyle\log p(\mathbf{y}|X)=-\frac{1}{2}\mathbf{y}^{\top}\boldsymbol{\alpha}-\sum_{i}\log L_{ii}-\frac{n}{2}\log 2\pi$ (6)

$X$ is the training inputs (an $n\times 1$ matrix for a time-series of $n$ samples), $\mathbf{y}$ is an $n\times 1$ matrix with the values corresponding to $X$, and $\mathbf{x_{*}}$ is a test input (a scalar), for which $\bar{f_{*}}$ and $\textrm{V}\big{[}f_{*}\big{]}$ are the predictions (respectively, predictive mean and variance). $K$ is an $n\times n$ matrix for which $K_{ij}=k(X_{i},X_{j})$, $\mathbf{k_{*}}$ is an $n\times 1$ matrix for which $(\mathbf{k_{*}})_{i}=k(X_{i},\mathbf{x_{*}})$, and $A\backslash B$ (both $A$ and $B$ matrices) is the matrix $x$ such that $Ax=B$. The matrix $L$ and vector $\boldsymbol{\alpha}$ in effect represent the Gaussian process itself (alongside the covariance function and its hyperparameters), and can be reused for any test inputs $\mathbf{x_{*}}$. This implementation also exploits the fact that the algorithm trivially generalizes to multiple $\mathbf{x_{*}}$ in matrix form and produces multiple $\bar{f_{*}}$ and $\textrm{V}\big{[}f_{*}\big{]}$. Line 6 provides the log marginal likelihood of the target values $\mathbf{y}$ given the inputs $X$, and this was the basis for optimizing the hyperparameters of the covariance function. Specifically, in order to optimize the hyperparameters $\sigma_{n}$, $\tau$, and $\alpha$, every individual time-series was transformed with the time-warping described in the prior section, standardized to a mean of 0 and standard deviation of 1 (note that the mean and standard deviation must be saved in order to undo this transformation when interpolating), and then $\sum\log p(\mathbf{y}|X)$ over the entire training set was maximized using a grid search. Hyperparameter optimization is likely the most time-consuming step of processing, and Gaussian process regression is very sensitive to the hyperparameter values. However, this step is also highly parallelizable, and so it was amenable to the distributed nature of Apache Spark (and likely to more efficient methods such as gradient descent). The values (i.e. $\mathbf{y}$) in each individual time-series were standardized to a mean of 0 and standard deviation of 1, and this standardization was then reversed on the interpolated values.

#### II-B3 Interpolation

The remainder of algorithm 2.1 is not reproduced here, but the code directly implemented this method using the rational quadratic function (and hyperparameters given above) as the covariance function. This inferred for each individual time-series a continuous function producing predictive mean and variance for any input $t$ (for which they use the notation $\mathbf{x^{*}}$ for “test input”). As in [9], all of these inferred functions were then evaluated at a regular sampling of time values (i.e.
via the test input $\mathbf{x^{*}}$) with padding added before and after each time series. The sampling frequency and the amount of padding depends on the dataset, and so can be specified when running the tool. In effect, this mapped each individual time-series first to a continuous function, and then to a new “interpolated” time-series containing predicted mean and variance at the sampled times described above. The interpolated time- series first had the reverse transformation applied from their standardization (i.e. the stored mean was added back in), and this was then written to CSV files and used as input to later steps. ### II-C Feature Learning with Autoencoder A stacked sparse 2-layer autoencoder was then used to perform feature learning. The implementation used here was a combination of what was described in [9] (which closely follows the UFLDL Tutorial from Stanford[14]), and François Chollet’s guide[2] on using the Python library Keras to implement autoencoders. Specifically, Keras was used with Theano (GPU-enabled) as the backend; all data was loaded from the prior step with the pandas library. These autoencoders have a fixed-size input and output, and in order to accomodate this, fixed-size contiguous patches were sampled from the interpolated time-series. As in [9], the patch size was set to the total number of padded samples, and patches were sampled uniformly randomly from all contiguous patches. Note that since Gaussian process regression produces both mean and variance predictions, the input and output size of the network are twice the patch size. As in [9], the encoder and decoder layers used sigmoid encoders and linear decoders, and all hidden layers had 100 units. Both layers contained a sparsity constraint (in the form of L1 activity regularization) and L2 weight regularization, and performance appeared very sensitive to their weights. Figure 1: First stage of Keras autoencoder Figure 2: Second stage of Keras autoencoder This network was built up in stages in order to greedy layerwise train[14]. The first stage was the neural network in figure 1; this was trained with the raw input data (at both the input and the output), thus learning “primary” hidden features at the layer encode1. The first decoder layer decode1 was then discarded and the model extended with another encoder and decoder, as in figure 2. This model then was similarly trained on raw input data, but while keeping the weights in layer encode1 constant. That is, only layers encode1 and decode2 were trained, and in effect they were trained on “primary” hidden features (i.e. encode1’s activations on raw input). Following this was “fine- tuning”[14] which optimized all layers, i.e. the stacked autoencoder as a whole, again using the raw input data. The final model then discarded layer decode2, and used the activations of layer encode2 as the learned sparse features. (Elsewhere in the paper, “second-layer learned features” refers to these activations on a given patch of time-series input. “First layer learned features” refers to the activations of encode1.) ### II-D Visualization & Classification The final processing of the tool consists of using the learned sparse features of the prior step (from both the first and second layers) as input into two separate steps: a visualization by way of t-SNE (t-distributed Stochastic Neighbor Embedding), and training a logistic regression classifier. 
Both of these steps used the Python library scikit-learn, and respectively sklearn.manifold.TSNE and sklearn.linear_model.LogisticRegression. ## III Experimental Evaluation As a test of the tool described in this paper, experiments were run on a selection of the data. Particularly, the LOINC code 1742-6, corresponding to MIMIC-III ITEMID of 50861, Alanine Aminotransferase (ALT), was used, and the ICD-9 categories 428 (heart failure) and 571 (chronic liver diseases), corresponding to ALT’s use as a biomarker for liver health and to suggest congestive heart failure. All time-series for this were in international units/liter (IU/L). This were selected to a total of 3,553 unique admissions (1,782 for ICD-9 category 428, and 1,771 for 571), 3,320 patients (1,397 females, 1,923 males), and 34,047 time-stamped samples. 70% of these admissions were randomly selected for the training set, and the remaining 30% for the testing set (2,473 and 1,080 respectively). The interpolated series from Gaussian process regression were padded by 10 samples at the beginning and end and sampled at 0.25 days (thus 2.5 days of padding), producing a total of 198,830 samples. These interpolated time-series were randomly resampled to 7,419 patches of 20 samples long (thus, the neural network used inputs and outputs of 40 nodes). 20% of these were set aside for cross-validation. Figure 3 is an example of a time-series from this data. The solid black line is the “original” time-series, the red line is the warped version, and the blue line is the version after interpolation with Gaussian process regression (with one standard deviation plotted on the surrounding dotted line). Figure 4 is several other randomly-chosen time-series from the data as examples. Figure 3: Time-series: Original, warped, and GPR interpolated Figure 4: Example time-series When training the autoencoder on this data, manual tuning led to an L1 activity regularization (i.e. sparsity constraint) of $10^{-4}$ and L2 weight regularization of $10^{-3}$. An interesting detail which figure 2 of [9] shows is that the effects of the first layer can be visualized directly in the form of its weights (not its activations; this is the result of its training, not of any input). As they form a 40x100 array, each 40-element vector corresponds to a sort of time- series signature which that unit is detecting. Figure 5 shows the corresponding plot of first-layer weights (i.e. encode1) after it is trained on the subset described here. Figure 5: Autoencoder first-layer weights, shown as 100 time-series This shows similar structure as in [9] (including considerable redundancy), but with different sorts of signatures. Particularly, it seems to single out edges and certain kinds of ramps. The t-SNE results, shown below in figures 6 and 7, are inconclusive here. It appears to have extracted some structure (which in the second layer is better- refined), but this structure does not seem to relate clearly with the labels. Figure 6: t-SNE on first-layer learned features Figure 7: t-SNE on second- layer learned features The classifier here is not producing useful results; particularly, it is producing an AUC of 0.5 on both the 1st and 2nd layer features. The reason for the poor performance is not known, but due to this, it was not compared against any “baseline” classifier or expert-feature classifier as in [9]. Overall, further work is needed here. 
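For concreteness, the following is a hypothetical, simplified sketch of the feature-learning and evaluation pipeline of §II-C and §II-D as configured for this experiment; it is not the project's actual code. The layer sizes and regularization weights (100 sigmoid units per hidden layer, L1 activity regularization of $10^{-4}$, L2 weight regularization of $10^{-3}$, 20-sample patches giving 40 inputs, 20% held out for cross-validation) are taken from the text above, while the Keras API details, the stand-in data and labels, and the training settings (optimizer, epochs, batch size) are assumptions.

```python
# Hypothetical reconstruction of the feature-learning and evaluation steps.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers, regularizers
from sklearn.manifold import TSNE
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

patch = 20                       # samples per patch; mean + variance -> 40 inputs
n_in, n_hidden = 2 * patch, 100

def encoder_layer(name):
    # Sigmoid encoder with an L1 activity (sparsity) constraint and L2 weight decay.
    return layers.Dense(n_hidden, activation="sigmoid", name=name,
                        activity_regularizer=regularizers.l1(1e-4),
                        kernel_regularizer=regularizers.l2(1e-3))

X = np.random.rand(7419, n_in)          # stand-in for the GPR-interpolated patches
labels = np.random.randint(0, 2, len(X))  # stand-in for ICD-9 428 vs 571 labels

# Stage 1: train encode1/decode1 on the raw patches (input = output).
enc1 = encoder_layer("encode1")
stage1 = keras.Sequential([enc1, layers.Dense(n_in, activation="linear", name="decode1")])
stage1.compile(optimizer="adam", loss="mse")
stage1.fit(X, X, epochs=50, batch_size=256, validation_split=0.2)

# Stage 2: freeze encode1, train encode2/decode2 on the raw patches.
enc1.trainable = False
enc2 = encoder_layer("encode2")
stacked = keras.Sequential([enc1, enc2,
                            layers.Dense(n_in, activation="linear", name="decode2")])
stacked.compile(optimizer="adam", loss="mse")
stacked.fit(X, X, epochs=50, batch_size=256, validation_split=0.2)

# Fine-tuning: unfreeze everything and train the stacked model as a whole.
enc1.trainable = True
stacked.compile(optimizer="adam", loss="mse")
stacked.fit(X, X, epochs=20, batch_size=256, validation_split=0.2)

# Learned features are the encode2 activations (decode2 is discarded).
encoder = keras.Sequential([enc1, enc2])
features = encoder.predict(X)

# Visualization and a simple classifier over the learned features; a real
# evaluation would score the classifier on the held-out admissions instead.
embedding = TSNE(n_components=2).fit_transform(features)
clf = LogisticRegression(max_iter=1000).fit(features, labels)
print("AUC:", roc_auc_score(labels, clf.predict_proba(features)[:, 1]))
```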
## IV Conclusions & Further Work The tool is released as open source built on openly available libraries and (mostly) open data sources. It was sufficient to produce all diagrams, plots, and analysis in this paper. However, it still needs further experimentation to produce meaningful results, and the intention is that it can be a starting point for this. The examples were restricted by the tool’s use of ICD-9 categories, which may have been too broad to produce meaningful clustering. Generalizing this would be useful, as would other diagnostics to give clues into the feature learning process (such as plots of second-layer learned features). The original goal of running as much as possible within Apache Spark on “standard” infrastructure such as Amazon EMR or Databricks was not fully met. Further integration with Apache Spark still is possible; the autoencoders perhaps could be implemented in DL4J (a native Java library supporting Apache Spark) or Spark’s built-in pyspark support may allow the Keras and scikit- learn code to run directly on that infrastructure via spark-submit. The R language also has many relevant libraries, and SparkR may at some point permit their more seamless use. Several optimizations also may help. hyperparameter optimization is currently done with a grid search, but would more sensibly done with a more intelligent optimization algorithm (such as SGD). The time warping function has parameters that could be tuned, or more extensive changes[8] could be made to try to make the time-series more stationary. Other covariance functions may be more appropriate as well. Some other areas should perhaps be explored further too. One incremental change is in the use of multiple-task Gaussian processes (MTGPs); the work done here handles only individual time-series, while MIMIC-III is rich in parallel time-series that correlate with each other. Ghassemi _et al._[4] explored the use of MTGPs to find a latent representation of multiple correlated time-series, but did not use this representation for subsequent feature learning. Another incremental change is in the use of Variational Autoencoders (VAEs) to learn a feature space that is sufficiently low- dimensional that techniques such as t-SNE are not required for effective visualization. A more extensive change could involve using recurrent neural networks (RNNs). Deep networks such as RNNs such as Long Short-Term Memories (LSTMs) have shown promise in their ability to more directly handle sequences[16] and clinical time-series data, including handling missing data[10, 12, 11]. However, they are primarily used for supervised learning, but could potentially be treated similarly as autoencoders (as in [7]), that is, trained with the same input and output data in order to learn a reduced representation of the input. This approach would avoid some of the need to perform Gaussian Process Regression, however, it still may not cope well with time-series data that is very irregular. ## Acknowledgment Thank you to Dr. Jimeng Sun, Sungtae An, and the other TAs for their time and advice in this project. ## References * [1] Z. Che, D. Kale, W. Li, M. Taha Bahadori, and Y. Liu. Deep Computational Phenotyping. Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 507–516, 2015. * [2] F. Chollet. Building Autoencoders in Keras. https://via.hypothes.is/https://blog.keras.io/building-autoencoders-in-keras.html. * [3] M. Ebden. Gaussian Processes: A Quick Introduction. (August), 2015. * [4] M. 
Ghassemi, T. Naumann, T. Brennan, D. a. Clifton, and P. Szolovits. A Multivariate Timeseries Modeling Approach to Severity of Illness Assessment and Forecasting in ICU with Sparse , Heterogeneous Clinical Data. Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, pages 446–453, 2015. * [5] A. E. W. Johnson, M. M. Ghassemi, S. Nemati, K. E. Niehaus, D. Clifton, and G. D. Clifford. Machine Learning and Decision Support in Critical Care. Proceedings of the IEEE, 104(2):444–466, 2016. * [6] A. E. W. Johnson, T. J. Pollard, L. Shen, L.-W. H. Lehman, M. Feng, M. Ghassemi, B. Moody, P. Szolovits, L. A. Celi, and R. G. Mark. MIMIC-III, a freely accessible critical care database. Scientific data, 3:160035, 2016. * [7] M. Klapper-Rybicka, N. N. Schraudolph, and J. Schmidhuber. Unsupervised Learning in LSTM Recurrent Neural Networks. Icann, pages 674—-681, 2001. * [8] T. A. Lasko. Nonstationary Gaussian Process Regression for Evaluating Clinical Laboratory Test Sampling Strategies. Proceedings of the … AAAI Conference on Artificial Intelligence. AAAI Conference on Artificial Intelligence, 2015(8):1777–1783, jan 2015. * [9] T. A. Lasko, J. C. Denny, and M. A. Levy. Computational Phenotype Discovery Using Unsupervised Feature Learning over Noisy, Sparse, and Irregular Clinical Data. PLoS ONE, 8(6), 2013. * [10] Z. C. Lipton, D. C. Kale, C. Elkan, and R. Wetzell. Learning to Diagnose with LSTM Recurrent Neural Networks. Iclr, pages 1–18, 2015. * [11] Z. C. Lipton, D. C. Kale, and R. Wetzel. Directly Modeling Missing Data in Sequences with RNNs: Improved Classification of Clinical Time Series. 56(2016):1–17, 2016. * [12] Z. C. Lipton, D. C. Kale, and R. C. Wetzell. Phenotyping of Clinical Time Series with LSTM Recurrent Neural Networks. Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 705–714, 2015. * [13] B. M. Marlin, D. C. Kale, R. G. Khemani, and R. C. Wetzel. Unsupervised Pattern Discovery in Electronic Health Care Data Using Probabilistic Clustering Models. * [14] A. Ng, J. Ngiam, C. Y. Foo, Y. Mai, and C. Suen. UFLDL Tutorial. http://deeplearning.stanford.edu/wiki/index.php/UFLDL_Tutorial. * [15] C. E. Rasmussen and C. K. I. Williams. Gaussian processes for machine learning., volume 14. 2004\. * [16] I. Sutskever, O. Vinyals, and Q. V. Le. Sequence to sequence learning with neural networks. Advances in Neural Information Processing Systems (NIPS), pages 3104–3112, 2014.
# Mn(Pt1-xPdx)5P: Isovalent Tuning of Mn Sublattice Magnetic Order Tyler J. Slade,1,2||∗ Ranuri S. Dissanayaka Mudiyanselage,3|| Nao Furukawa,1,2 Tanner R. Smith,1,2 Juan Schmidt,1,2, Lin-Lin Wang,1 Chang-Jong Kang,4,5 Kaya Wei,6 Zhixue Shu,7 Tai Kong,7 Ryan Baumbach,6 Gabriel Kotliar,4 Sergey L. Bud’ko,1,2 Weiwei Xie,3,8 Paul C. Canfield1,2∗ ###### Abstract We report the growth and characterization of MnPd5P, a rare-earth-free ferromagnet, with TC $\approx$ 295 K and planar anisotropy, and conduct a substitutional study with its antiferromagnetic analogue MnPt5P. We provide a solution route to grow large single crystals of MnPd5P and the series Mn(Pt1-xPdx)5P by adding Mn into Pd-P and (Pt1-xPdx)-P based melts. All compounds in the family adopt the layered anti-CeCoIn5 type structure with the space group P4/mmm, and EDS and X-ray diffraction results indicate that MnPt5P and MnPd5P form a complete solid solution. Based on measurements of the temperature- and field-dependent magnetization and resistance, we construct a temperature-composition (T–x) phase diagram for Mn(Pt1-xPdx)5P and demonstrate that the initial antiferromagnetic order found in MnPt5P is extraordinarily sensitive to Pd substitution. At low Pd fractions (x $<$ 0.010), the single antiferromagnetic transition in pure MnPt5P splits into a higher temperature ferromagnetic transition followed first, upon cooling, by a lower temperature ferromagnetic to antiferromagnetic transition and then by a re-entrant antiferromagnetic to ferromagnetic transition at even lower temperatures. The antiferromagnetic region makes up a bubble phase that persists up to x $\approx$ 0.008–0.009 for T $\approx$ 150 K, with all samples x $<$ 0.008 recovering their initial ferromagnetic state upon further cooling to base temperature. Over the same low substitution range we find a non-monotonic change in the room temperature value of the unit cell volume, further suggesting that pure MnPt5P is very close to an instability. Once x $>$ 0.010, Mn(Pt1-xPdx)5P undergoes a only single transition into the ferromagnetic phase. The Curie temperature initially increases rapidly with x, rising from TC $\approx$ 197 K at x = 0.013 to a maximum of TC $\approx$ 312 K for x $\approx$ 0.62, and then falling back to TC $\approx$ 295 K for pure MnPd5P (x = 1.00). Given that Pt and Pd are isoelectronic, this work raises questions as to the origin of the extreme sensitivity of the magnetic ground state in MnPt5P upon introducing Pd. 1Ames National Laboratory, US DOE, Iowa State University, Ames, Iowa 50011, USA 2Department of Physics and Astronomy, Iowa State University, Ames, Iowa 50011, USA 3Department of Chemistry and Chemical Biology, The State University of New Jersey Rutgers, Piscataway, NJ 08854, USA 4Department of Physics and Astronomy, Rutgers University, Piscataway, NJ, 08854, USA 5Department of Physics, Chungnam National University, Daejeon, 34134, South Korea 6National High Magnetic Field Laboratory, Tallahassee, FL, 32310, USA 7Department of Physics, University of Arizona, Tucson, AZ 85721, USA 8Department of Chemistry, Michigan State University, East Lansing, MI, 48824, USA ## 1 Introduction Targeted design of tunable magnetic materials is active and key challenge for the materials chemistry and physics community. Achieving this goal necessitates understanding, at a microscopic level, what chemical and structural features underpin the magnetic properties of a given material. 
Whereas in magnetic semiconductors and insulators, theories based upon super- exchange interactions can often provide a satisfactory explanation of the magnetism,[1, 2] in metallic compounds, it is far more challenging from a theoretical perspective to understand and therefore to predict whether a structure containing transition metals will be paramagnetic, ferromagnetic, or antiferromagnetic.[3] This difficulty is particularly pronounced in the case of low-dimensional and/or itinerant magnetic metals, which show completely different electronic and magnetic properties from magnetic semiconductors.[4, 5, 6, 7, 8] With these challenges in mind, detailed studies on isostructural or chemically similar intermetallic compounds with disparate magnetic properties may yield valuable insight into the physical parameters that ultimately determine the magnetism and hopefully provide guidelines for more targeted design of magnetic materials. Based on our previous experimental work, magnetically active 3d metals (most frequently Cr, Mn, Fe, Co and Ni) occupying voids in complex intermetallic frameworks can give rise to lower-dimensional structures of these magnetic metals. A prominent example is the M(Pt, Pd)5X family, where M = Mn, Fe and X = P, As, Se.[9, 10, 11, 12, 13] These compounds all crystalize in the layered CeCoIn5-type tetragonal structure with the space group P4/mmm ($\\#$123). Despite sharing the same crystal structure, the magnetic properties of the M(Pt, Pd)5X materials are remarkably diverse. MnPt5As is a TC $\approx$ 280 K ferromagnet,[11] whereas the isovalent MnPt5P orders antiferromagnetically below TN $\approx$ 190 K (likely with a small ferromagnetic, q = 0, component in addition to the predominantly antiferromagnetic order).[9, 13] FePt5P is an itinerant antiferromagnet that undergoes three closely spaced transitions between $\approx$ 70-90 K.[10, 13] Lastly, MnPd5Se also shows antiferromagnetic order below TN $\approx$ 80 K, and a spin reorientation is observed upon further cooling below 50 K.[12] The range of properties exhibited by the M(Pt, Pd)5X family suggests that the magnetism in these compounds is extremely sensitive to chemical composition, electron count, and steric considerations. The case of MnPt5P is particularly interesting. As noted above, MnPt5P orders antiferromagnetically, whereas MnPt5As is a $\approx$ 280 K ferromagnet. Very recently, exploratory solution growth of single crystal studies found that isovalent MnPd5P also manifests ferromagnetic order near room temperature,[13] suggesting that both lattice expansion (towards MnPt5As) and contraction (towards MnPd5P) push the antiferromagnetic MnPt5P toward a ferromagnetic ground state. In this work, we present a substitutional study between MnPd5P and MnPt5P to better characterize the magnetism of the end members and to understand how the initially antiferromagnetic state of MnPt5P evolves towards ferromagnetism in MnPd5P. We successfully grow single and polycrystalline samples of MnPd5P and Mn(Pt1-xPdx)5P using both solution growth and solid-state reaction techniques. Phase and structural analysis with X-ray diffraction and energy dispersive spectroscopy indicate that MnPt5P and MnPd5P form a full solid solution of Mn(Pt1-xPdx)5P. Magnetization measurements show that the essentially antiferromagnetic state in pure MnPt5P is extraordinarily sensitive to Pd substitution. 
At Pd concentrations as low as x $<$ 0.01, the single antiferromagnetic transition found for pure MnPt5P splits into a higher temperature ferromagnetic transition followed first, upon cooling, by a lower temperature ferromagnetic to antiferromagnetic transition and then by a re- entry into the ferromagnetic state at lower temperatures. The antiferromagnetic region makes up a bubble phase that persists up to x $\approx$ 0.008–0.009 for T $\approx$ 150 K, with all samples x $<$ 0.008 recovering their initial ferromagnetic state upon further cooling to base temperature. Over the same low substitution range we find a possible non- monotonic change in the room temperature value of the a-lattice parameter and unit cell volume, further suggesting that pure MnPt5P is very close to an instability. When x $>$ 0.010, Mn(Pt1-xPdx)5P undergoes only a single, ferromagnetic, transition, where the Curie temperature initially increases with x and is maximized at $\approx$ 312 K for x = 0.62 Pd before decreasing to $\approx$ 295 K in pure MnPd5P. Considering that for x $>$ 0.01, the rather gradual and non-monotonic TC variation does not track well with lattice parameters, or suggest particular sensitivity to Pd content, the fantastic sensitivity of the antiferromagnetic phase of Mn(Pt1-xPdx)5P for x $<$ 0.01 is remarkable and suggests there is a qualitative change in the material that may well be associated with a change in the Fermi-surface topology, such as a Lifshitz transition. Qualitatively in-line with this proposal, electronic band structure calculations for MnPt5P indeed show several pockets near the Fermi level which are significantly altered in MnPd5P. Direct experiments to probe the electronic structure/density of states at the Fermi level are needed to further understand the extraordinary sensitivity of MnPt5P to Pd alloying. ## 2 Experimental Details ### 2.1 Crystal Growth We prepared and characterized both single crystalline and polycrystalline samples of MnPd5P and Mn(Pt1-xPdx)5P. The Mn(Pt1-xPdx)5P single crystals were grown from (Pt1-xPdx)-P based solutions as follows.[13] Elemental Mn pieces (Puratronic, 99.98$\%$), Pt powder (Engelhard, 99+ $\%$ purity), Pd powder (Engelhard, 99+ $\%$ purity), and red P pieces (Alpha-Aesar, 99.99$\%$) were weighed according to nominal compositions of Mn9Pt71-yPdyP20 (the actual compositions are given in Table 3 in the Appendix) and placed into the bottom of an alumina Canfield crucible set (CCS).[14, 15] The packed CCS were flame sealed into evacuated fused silica ampules that were backfilled with $\approx$ 1/6 atm Ar gas. Using a box furnace, the ampules were slowly warmed to 250°C over 6 h and then to 1180°C over an additional 8 h. After dwelling at 1180°C for 6 h, the furnace was gradually cooled to 800°C (for samples with nominally less than 50$\%$ Pd) or to 830°C (for those with over 50$\%$ Pd) over $\approx$ 100 h. Upon reaching the desired temperature, the excess liquid phase was decanted by inverting the ampules into a specially designed centrifuge with metal rotor and cups.[16] After cooling to room temperature, the ampules and CCS were opened to reveal clusters of metallic plate-like crystals with typical dimensions of $\approx$ 3 mm, and a representative picture of several crystals is shown in the inset to Figure 1c. The polycrystalline samples were prepared from a solid state reaction by sintering pellets with nominal compositions of Mn(Pt1-xPdx)5P (x = 0.2, 0.4, 0.5, 0.6, 0.8, and 1). 
Mn powder (Mangan, 99+$\%$), Pt powder (BTC, 22 mesh, 99.99$\%$), Pd powder (BTC, 200 mesh, 99.95$\%$) and red P powder (BTC, 100 mesh, 99$\%$) were mixed and ground in Mn: Pt/Pd: P = 1:5:1 atomic ratio. The mixture was pressed into a pellet, and the pellet was placed into an alumina crucible and sealed in an evacuated silica tube. The sample tube was then heated to 1050°C at a rate of 40 °C per hour. After annealing for 2 days at 1050°C, the samples were slowly cooled down to room temperature at the speed of 10°C per hour. Both the single- and polycrystalline Mn(Pt1-xPdx)5P samples were stable in moist air. ### 2.2 Phase and Structure Determination Powder X-ray diffraction patterns were obtained using a Rigaku Miniflex-II instrument operating with Cu-K$\alpha$ radiation with $\lambda$ = 1.5406 Å (K$\alpha$1) and 1.5443 Å (K$\alpha$2) at 30 kV and 15 mA. The samples were prepared by grinding a representative number of crystals (5-10) to a fine powder. To determine the lattice parameters, the powder patterns were refined using the Rietveld method with GSAS-II software.[17] To obtain better estimate of the uncertainty in the lattice parameters for samples very dilute in Pd, we collected and refined three separate patterns for samples with x $\leq$ 0.022, and used the standard deviations of the refined values of a, c, and V as error bars (see Figure 1b and 1c). For samples with x $>$ 0.022, the fitting errors from the Rietveld refinements were used as the error bars. Single crystal X-ray diffraction (SCXRD) experiments were conducted in a D8 Quest Eco diffractometer with Mo-K$\alpha$ radiation ($\lambda$ = 0.71073 Å) equipped with Photon II detector. Empirically, we found that the solution grown single crystals of Mn(Pt1-xPdx)5P were not favorable for SCXRD, whereas very small single crystals picked from the sintered pellets (the solid-state reactions) were more suitable and were used for the SCXRD. The samples were mounted on a Kapton loop and measured with an exposure time of 10 s per frame scanning 2$\theta$ width of 0.5°. Structure refinement was performed in SHELXTL package using direct methods and full matrix least-squares on F2 model.[18, 19] Anisotropic thermal parameters for all atoms were refined in SHELXTL. The VESTA software was used to plot the crystal structures.[20] ### 2.3 Scanning Electron Microscopy (SEM) and Elemental Analysis The Pd concentrations (x) in the Mn(Pt1-xPdx)5P single crystals were determined by energy dispersive x-ray spectroscopy (EDS) quantitative chemical analysis using an EDS detector (Thermo NORAN Microanalysis System, model C10001) attached to a JEOL scanning-electron microscope (SEM). The compositions of each crystal were measured at several (3–6) different positions on the crystal’s face (perpendicular to the c-axis), revealing good homogeneity in each crystal. An acceleration voltage of 16 kV, working distance of 10 mm, and take off angle of 35° were used for measuring all standards and samples. A pure MnPt5P single crystal (x = 0.000) was used as a standard for Mn, Pt, and P quantification, and a pure MnPd5P single crystal (x = 1.000) was used as a standard for Pd. The spectra were fitted using NIST- DTSA II Microscopium software.[21] The average compositions and error bars were obtained from these data, accounting for both inhomogeneity and goodness of fit of each spectra. 
Chemical compositions of the polycrystalline Mn(Pt1-xPdx)5P samples were analyzed using a high vacuum Zeiss Sigma Field Emission SEM (FESEM) with Oxford INCA PentaFETx3 Energy-Dispersive Spectroscopy (EDS) system. Spectra were collected for 100 s from multiple areas of the crystals mounted on a carbon tape with an accelerating voltage of 20 keV. Figure 1: (a) Powder X-ray diffraction patterns collected from the solution- grown Mn(Pt1-xPdx)5P crystals. The Pd fractions were determined from EDS. (b) Refined lattice parameters. The inset shows a close up view of the a-lattice parameter for x $<$ 0.04 samples, and the red dashed line is a guide to the eye showing possible non-monotonic behavior. (b) Refined unit cell volume. The upper inset is a zoomed in view of V for x $<$ 0.04 samples and the lower inset show representative crystals (x = 0.064) on a mm grid. In (c) the dashed green line is a line connecting x = 0 and x = 1, showing overall close agreement of Vegard’s law with possible deviation at very low x. The horizontal error bars in (b) and (c) are the standard deviations from the EDS measurements, and the vertical error bars are discussed in the powder diffraction section of the experimental section. ### 2.4 Physical Property Measurements For the single crystals, magnetization measurements were performed in a Quantum Design Magnetic Property Measurement System (MPMS-classic) SQUID magnetometer operating in the DC measurement mode. The magnetic measurements were conducted with the field oriented parallel and perpendicular to the c-axis, where c is axial direction relative to the plate-like crystals. For measurements with H $\perp$ c, the samples were held in place between two plastic straws, and for H $\parallel$ c, the samples were sandwiched from above and below between two plastic discs. A small pinhole was poked in the space between the discs to allow for evacuation. In the latter case, a blank background using the bare discs was first measured and the values subtracted. The temperature dependent resistance of the Mn(Pt1-xPdx)5P single crystals were measured on a Quantum Design Physical Property Measurement System (PPMS) operating in the AC transport mode with an applied current of 3 mA and frequency of 17 Hz. The samples were prepared by cutting the crystals into rectangular bars, and the contacts were made by spot welding 25 $\mu$m thick annealed Pt wire onto the samples in standard four point geometry. After spot welding, a small amount of silver epoxy was painted onto the contacts to ensure good mechanical strength, and typical contact resistances were $\approx$ 1 $\Omega$. ### 2.5 Computational Details Electronic band structure and density of states for MnPt5P and MnPd5P were calculated in density functional theory (DFT)[22, 23] using PBE[24] as the exchange-correlation functional with spin-orbit coupling (SOC) included. All DFT calculations were performed in the Vienna Ab initio Simulation Package (VASP)[25, 26] with a plane-wave basis set and projector augmented wave method.[27] The kinetic energy cutoff was 270 eV. We used a $\Gamma$-centered Monkhorst-Pack[28] (10$\times$10$\times$6) k-point mesh with a Gaussian smearing of 0.05 eV for the primitive tetragonal unit cell. ## 3 Results and Discussion Figure 2: (a) The crystal structure of MnPd5P showing Mn@Pd12 face sharing polyhedral and P layers (b) Unit cell of Mn(Pt1-xPdx)5P indicating the mixture of Pd and Pt. Table 1: Single crystal structure refinement details for MnPd5P at 300(2) K. 
Refined Formula | MnPd5P ---|--- F.W. (g/mol) | 617.91 Space group; Z | P4/mmm; 1 a (Å) | 3.899 (2) c (Å) | 6.867 (4) V (Å3) | 104.42 (9) $\theta$ range (º) | 2.966-34.770 No. reflections; Rint | 578; 0.0609 No. independent reflections | 170 No. parameters | 12 R1: wR2 (I $>$ 2$\sigma$(I)) | 0.0509; 0.1204 Goodness of fit | 1.282 Diffraction peak and hole (e-/Å3) | 2.656; -1.863 Table 2: Atomic coordinates and equivalent isotropic displacement parameters of MnPd5P at 300(2) K. (Ueq is defined as one-third of the trace of the orthogonalized Uij tensor (Å2)) Atoms | Wyckoff | Occ. | x | y | z | U (eq) ---|---|---|---|---|---|--- Pd1 | 4i | 1 | 0 | 1/2 | 0.2948(1) | 0.015(1) Pd2 | 1a | 1 | 0 | 0 | 0 | 0.012(2) Mn3 | 1c | 1 | 1/2 | 1/2 | 0 | 0.022(2) P4 | 1b | 1 | 0 | 0 | 1/2 | 0.016(2) Figure 3: (a) Full temperature–composition phase diagram for Mn(Pt1-xPdx)5P. (b) The low-x region of the phase diagram for x $<$ 0.03. The * next to AFM denotes the small ferromagnetic (q = 0) component to the primarily antiferromagnetic order in the low x samples, and the dashed lines are guides to the eye. In (a), the closed points represent data obtained from the single crystals (sc) and the crossed-open points represent data from the polycrystalline (pc) samples. ### 3.1 Phase, Composition, and Structural Analysis We first analyzed our solution-grown single crystals and sintered pellets with powder X-ray diffraction (PXRD) and energy dispersive spectroscopy (EDS) to determine the phase and assess the degree of Pd incorporation into the Mn(Pt1-xPdx)5P alloys. Figure 1a shows the powder patterns collected for the ground, solution-grown single crystals, and the EDS data are given in Table 3 in the Appendix. For all samples, the experimental PXRD patterns are in excellent agreement with the anticipated reflections for the P4/mmm structure of MnPt5P and MnPd5P. The EDS analysis likewise suggests a monotonic increase in the Pd incorporation into the 1-5-1 matrix as Pt is exchanged for Pd in the starting melts (see Table 3). The lattice parameters determined from Rietveld refinements of the powder patterns are shown in Figure 1b. The a-lattice parameter has a very shallow maximum at x = 0.054, but overall there is little change in a over the full compositional range. This is contrasted by the c-lattice parameter, which decreases monotonically (nearly linearly) as the Pd fraction increases. Because a is nearly invariant with Pd doping, the unit cell volume V, shown in Figure 1c, essentially mirrors the Pd dependence of c, decreasing linearly as the Pd content is raised. The dashed green line in Figure 1c shows a linear fit between the volume of x = 0 and x = 1, and the experimental values closely follow the projected line, indicating that V follows Vegard’s law for a solid solution between MnPt5P and MnPd5P. At very low Pd fraction (x $<$ 0.022) the a-lattice parameter and unit cell volume arguably each have a V-shaped x-dependence, initially decreasing slightly before increasing again (see insets to Figure 1b and 1c). This non-monotonic behavior is more evident in the volume, where a clear deviation from Vegard’s law is observed for x $<$ 0.022. We will return to this anomalous x-dependence at low substitution in the discussion of the magnetic properties. To provide more detailed structural analysis of MnPd5P and the Mn(Pt1-xPdx)5P alloys, we conducted single crystal X-ray diffraction (SCXRD). 
The resulting crystallographic data, including atomic coordinates, site occupancies and equivalent isotropic thermal displacement parameters of MnPd5P, are reported in Table 1 and Table 2, whereas crystallographic information on the Mn(Pt1-xPdx)5P alloys is given in the Appendix in Tables 5 and 6. The results show that MnPd5P and the Mn(Pt1-xPdx)5P compounds crystallize in a tetragonal unit cell with the space group P4/mmm, like the previously reported MnPt5P and MnPt5As. The crystal structure is illustrated in Figures 2a and 2b and consists of layered motifs, with alternating layers of Mn@Pd12 face-sharing polyhedra that span the ab-plane and which are separated by P layers along the c-axis. Consistent with the powder diffraction data, the single crystal XRD confirms that the Mn(Pt1-xPdx)5P alloys maintain the parent lattice structure with the Pt and Pd atoms having mixed occupancy on the two atomic sites 1a and 4i as indicated in Figure 2b. Details on the Pt/Pd distributions on the 1a and 4i sites for each phase are given in Table 6 in the Appendix. The SCXRD data may hint that the Pd atoms have a slight preference for occupying the 4i site over the 1a site; however, given the uncertainties in our refinements, this cannot be supported with confidence, and our data indicate the Pt/Pd mixing is essentially a solid solution. Figure 4: (a) Temperature dependence of M/H for all Mn(Pt1-xPdx)5P single crystals measured at H = 1 kOe. (b) Temperature dependence of M/H for low x Mn(Pt1-xPdx)5P with x $<$ 0.010 measured at H = 50 Oe. Figure 5: (a) Temperature dependent resistance data for 0 $\leq$ x $\leq$ 0.022 Mn(Pt1-xPdx)5P single crystals with the data normalized to R(375 K). For clarity, the R(T) curves are each offset by 0.1. (b) Derivatives of the datasets in (a). The peaks were used to determine transition temperatures and are marked with arrows.[29] (c) and (d) are the respective R(T)/R(375 K) and dR/dT data for 0.033 $\leq$ x $\leq$ 1. Figure 6: Field-dependent magnetization isotherms measured at temperatures corresponding to different parts of the phase diagram for (a) x = 0, (b) x = 0.0013, (c) x = 0.0026, (d) x = 0.0053, (e) x = 0.008, (f) x = 0.009. Note that the x-axis scale for (a) and (b) extends to 10 kOe to observe the metamagnetic transitions. ### 3.2 Magnetic and transport properties of Mn(Pt1-xPdx)5P MnPt5P enters a spin-canted antiferromagnetic state at TN $\approx$ 190 K,[9, 13] and preliminary data collected on MnPd5P indicated that this material becomes ferromagnetic near room temperature.[13] To understand how the magnetic state evolves as Pt is replaced with Pd, we conducted temperature and field dependent magnetization and transport measurements on our Mn(Pt1-xPdx)5P samples, and the results are summarized in the temperature-composition (T–x) phase diagram given in Figure 3. The magnetization and transport data are outlined in Figures 4 and 5, and Figure 6 shows the field dependent magnetization isotherms collected at salient temperatures for x $<$ 0.010 samples. Temperature dependent magnetization (M/H) collected at H = 1 kOe with the field applied within the easy ab-plane (H $\perp$ c) for the Mn(Pt1-xPdx)5P single crystals is presented in Figure 4a, and Figure 4b shows H = 50 Oe data for x $<$ 0.010 samples (see Figure 9 in the Appendix for the anisotropic M(H) results). 
At low Pd substitution (x $<$ 0.010), the initially narrow, antiferromagnetic-like peak observed at 192 K for pure MnPt5P substantially broadens as x increases and forms a plateau-like maximum centered near $\approx$ 180 K, whereas the weak upturn in M/H below 100 K in MnPt5P becomes a sharp increase reminiscent of ferromagnetic ordering. The ferromagnetic-like upturn moves to higher temperatures as x increases, eventually merging with the initial higher temperature transition such that by x = 0.008, the M/H data show a very rapid increase at $\approx$ 193 K followed by a second, subtle, increase beginning at $\approx$ 150 K. For x $>$ 0.010, only a single ferromagnetic transition is observed, and the Curie temperature (determined more precisely from resistance measurements discussed below) increases with Pd alloying to a maximum at 312 K for x = 0.62 before falling gradually back to $\approx$ 295 K for pure MnPd5P. To complement the temperature dependent M/H data, we also measured the resistance of each sample from 1.8–375 K and present the data in Figure 5. The R(T) results for low x samples are given in Figure 5a and the derivatives dR/dT used to assign the transition temperatures are shown in Figure 5b.[29] Corresponding R(T) and dR/dT data for the x $>$ 0.03 samples are shown in Figures 5c and 5d. As expected, all samples have metallic resistance that decreases with cooling, and the R(T) datasets each show a clear kink followed by a rapid drop at the initial magnetic transition temperature T1, characteristic of losing spin-disorder scattering as the samples enter a magnetically ordered state. The residual resistance ratios, RRR = R(375 K)/R(1.8 K), are shown in the Appendix in Figure 10b and are minimized at x = 0.54, consistent with the expectation for stronger scattering associated with the crystallographically disordered Pt and Pd atoms in the alloys. At low Pd fraction, the samples with 0.0013 $\leq$ x $\leq$ 0.0053 show a second transition T2 just below the first, which is suppressed from $\approx$ 187 K at x = 0.0013 to $\approx$ 177 K at x = 0.0053. Further cooling reveals another lower temperature transition T3 that increases with Pd alloying from $\approx$ 71 K for x = 0.0013 to 160 K at x = 0.009. We note that the signature of T2 is lost in the x = 0.008 and x = 0.009 resistance data, which is likely due to the close proximity of T2 and T3 at these compositions. This is consistent with the M/H data (see Figure 4b), which show the lower temperature ferromagnetic feature (T3) essentially merging with the higher temperature transitions. T3 is hysteretic between warming and cooling, implying it is first order (see Figure 10 in the Appendix for a close-up view). The resistance curves for 0.033 $\leq$ x $\leq$ 1 (Figure 5c) only show a single transition that increases rapidly with x and is maximized at $\approx$ 312 K before falling back to 295 K for pure MnPd5P, consistent with the M/H data in Figure 4. Using the transitions identified in the temperature dependent M/H and R(T) data discussed above, we can identify the unique regions of the T–x phase diagram shown in Figure 3. We find that the initial transition at 192 K in pure MnPt5P splits into two transitions, T1 and T2, upon even minute, almost homeopathic, levels of Pd substitution. T1 appears to be ferromagnetic and increases with x to a maximum at 312 K for x = 0.62, whereas T2 decreases gradually with x. 
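The assignment of transition temperatures from the R(T) curves (Figures 5b and 5d) comes down to locating the peak in dR/dT. A minimal sketch of that step is given below; the resistance curve is synthetic and merely stands in for one measured sweep.

```python
import numpy as np

# T (K) and R (ohm) stand in for one measured R(T) sweep; here we fabricate a
# smooth metallic curve with a kink near 190 K purely to illustrate the procedure.
T = np.linspace(2, 375, 1500)
R = 0.02 + 1e-4 * T + 5e-3 / (1.0 + np.exp(-(T - 190.0) / 2.0))

dRdT = np.gradient(R, T)  # numerical derivative dR/dT

# Light boxcar smoothing so the peak position is not set by point-to-point noise.
kernel = np.ones(11) / 11
dRdT_smooth = np.convolve(dRdT, kernel, mode="same")

T_transition = T[np.argmax(dRdT_smooth)]
print(f"Transition temperature from dR/dT peak: {T_transition:.1f} K")
```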
The low-x samples show a final third transition T3 that increases with x and intersects T2 at approximately x $\approx$ 0.009, such that T2 and T3 delineate a “bubble phase” on the phase diagram that extends out to x $\approx$ 0.009. Below the bubble, the low-x samples recover the original ferromagnetic state entered upon cooling through T1. To better determine the type of order found in each region of the phase diagram, Figure 6 presents magnetization isotherms measured at salient temperatures for the x $<$ 0.010 Mn(Pt1-xPdx)5P samples. (Figure 9 shows the M(H) data for higher x samples over a much higher applied field range.) As shown in Figure 6a, the M(H) curves for pure MnPt5P show a series of metamagnetic transitions that shift to lower field as the temperature is lowered, indicating antiferromagnetic order. As outlined in our prior work,[13] the magnetization curves for MnPt5P all show a small saturation and measurable hysteresis at the lowest fields (under 0.2 kOe), implying that the antiferromagnetic order also has a small ferromagnetic (q = 0) component (we use * in Figure 3b to denote the ferromagnetic component to the otherwise primarily antiferromagnetic state). Importantly, despite having a broad upturn in M/H below $\approx$ 100 K (see Figures 4a and 4b), the 2 K M(H) isotherm for MnPt5P is qualitatively the same as the higher temperature datasets, with a clear metamagnetic transition observed at $\approx$ 1 kOe, indicating that MnPt5P remains antiferromagnetic down to at least 2 K, which is consistent with the R(T) data for MnPt5P showing only a single 192 K transition. This said, the very low metamagnetic field, that decreases with decreasing temperature, rather than the more standard increasing with decreasing temperature, strongly suggests a close energetic proximity to an ordered state with a larger ferromagnetic component. Upon introduction of Pd, the M(H) isotherms measured above the “bubble” region of the phase diagram, at temperatures between T1 and T2, show a swift rise in M at low field followed by saturation at above $\approx$ 1 kOe, implying that T1 is a ferromagnetic transition (see 190 K data in Figures 6b–6c and 185 K data in Figure 6d). Within the bubble, between T2 and T3, the M(H) curves all show metamagnetic transitions that generally move to lower fields as the temperature is lowered (for a given value of x). Likewise, the M(H) datasets all show a small but measurable low-field saturation below $\approx$0.2 kOe. Together, this information suggests that the bubble phase is the same spin- canted AFM* state found in pure MnPt5P. Below T3, the M(H) isotherms for Pd containing samples again are characteristic ferromagnetic behavior, with a rapid increase in M at low field followed by saturation at $\approx$ 4.5 $\mu_{\text{B}}$/f.u. When the Pd fraction rises above x $\approx$ 0.008-0.009, the M(H) isotherms show classic easy-plane ferromagnetic behavior at all temperatures below T1 (see Figure 9 in the Appendix for M(H) data for x $>$ 0.010). The M(H) data collected in the ferromagnetic phase (below T3 for x $<$ 0.010 and below T1 for x $>$ 0.010) show very small ($\approx$ 10-20 Oe), almost negligible, hysteresis on raising and lowering the field, implying that the ferromagnetism in Mn(Pt1-xPdx)5P single crystals is very soft. Moreover, the M(H) results show that the Mn(Pt1-xPdx)5P samples have relatively strong magnetic anisotropy where the ab-plane is the easy direction. 
Anisotropy fields H${}_{\text{A}}$ estimated by extrapolating the tangents of the H $\perp$ c and H $\parallel$ c datasets decrease monotonically as the Pd fraction rises (see Figure 9 in the Appendix), from $\approx$ 108 kOe for x = 0.022 Pd to $\approx$ 10 kOe in MnPd5P, which likely reflects the decreasing strength of spin orbit coupling accompanying substitution of the Pt with the lower-Z Pd. ### 3.3 Discussion Figure 7: Electronic band structures calculated for (a) MnPt5P and (b) MnPd5P. The green shading represents the relative projection of Mn-3d orbitals onto the electronic bands. Our magnetic and transport measurements strongly suggest that the energy difference between ferromagnetic and antiferromagnetic states in MnPt5P is exceptionally small, such that even the small perturbation of x $\approx$ 0.02 Pd substitution on the Pt sites is sufficient to stabilize purely ferromagnetic order. At very low, essentially homeopathic, levels of Pd (x $<$ 0.01), both antiferromagnetic and ferromagnetic phases are observed, and the antiferromagnetic state forms a bubble phase spanning pure MnPt5P to x $\approx$ 0.008-0.009. Whereas this magnetic phase diagram, shown in Figure 3b, describes the transitions below 200 K, non-monotonic changes in the a-lattice parameter and unit cell volume are also possibly detected at room temperature over essentially the same range of low x (see Figures 1b and 1c). This suggests that MnPt5P undergoes a transition, with very small Pd substitution, that manifests itself both in the lattice and in the nature of the magnetic interactions. A clear candidate would be a Lifshitz-type transition where the Fermi surface topology changes (i.e. small pockets appear/disappear) with small Pd substitution. Such an electronic transition can lead to changes in the density of states (DOS) at the Fermi energy as well as changes in the generalized electronic susceptibility, $\chi$(q), which governs whether the magnetic order is anti- or ferromagnetic. To explore this possibility, we calculated electronic band structures for paramagnetic MnPt5P and MnPd5P. The band structures are displayed in Figure 7, where the green shading represents the projection of Mn-3d orbitals onto the electronic states. As expected, the calculations indicate both MnPt5P and MnPd5P are metals, with multiple well-dispersed bands crossing the Fermi energy (E${}_{\text{F}}$). In both compounds, most of the bands near E${}_{\text{F}}$ are composed of Mn-3d based states. Most importantly, the band structure for pure MnPt5P has several bands that graze, or come very close to, E${}_{\text{F}}$ at the M, R, and A points in the Brillouin zone, as well as a Dirac-like set of bands along X–M. In MnPd5P, the flat band sections near the M and R points move above E${}_{\text{F}}$, and the band along the X–M direction also moves up in energy. Furthermore, new flat band sections appear near E${}_{\text{F}}$ in MnPd5P along the $\Gamma$–X and A–Z directions. Admittedly, comparison of the end members MnPt5P and MnPd5P represents an extreme perturbation relative to the x $\approx$ 0.02 needed to stabilize purely ferromagnetic order in Mn(Pt1-xPdx)5P; however, the calculations do show that MnPt5P has multiple pockets very near E${}_{\text{F}}$ that are substantially changed in MnPd5P, suggesting a Lifshitz transition is at least plausible in Mn(Pt1-xPdx)5P. 
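For reference, the DFT settings quoted in Section 2.5 (PBE exchange-correlation, SOC, a 270 eV plane-wave cutoff, a $\Gamma$-centered 10$\times$10$\times$6 k-point mesh, and 0.05 eV Gaussian smearing) map onto standard VASP input tags. The sketch below simply writes such input files; it is a reconstruction from the stated parameters, not the authors' actual input deck, and the POSCAR/POTCAR files are assumed to be prepared separately.

```python
from pathlib import Path

# Minimal INCAR/KPOINTS reflecting the settings quoted in Section 2.5.
# Reconstruction for illustration only; not the authors' actual input files.
incar = """SYSTEM  = MnPt5P primitive cell
GGA     = PE        ! PBE exchange-correlation (also selected via PBE POTCARs)
ENCUT   = 270       ! plane-wave kinetic energy cutoff (eV)
ISMEAR  = 0         ! Gaussian smearing
SIGMA   = 0.05      ! smearing width (eV)
LSORBIT = .TRUE.    ! include spin-orbit coupling
"""

kpoints = """Gamma-centered 10x10x6 Monkhorst-Pack mesh
0
Gamma
10 10 6
0 0 0
"""

Path("INCAR").write_text(incar)
Path("KPOINTS").write_text(kpoints)
```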
Subsequent measurements that directly probe the electronic states at E${}_{\text{F}}$, such as the thermopower and Hall effect, would be needed to explore and test this proposal. ## 4 Summary and Conclusions We determined that ferromagnetic MnPd5P adopts the anti-CeCoIn5 structure with the space group P4/mmm and conducted a detailed substitutional study with its isostructural antiferromagnetic analogue MnPt5P. We demonstrate a solution route to grow large single crystals of both MnPd5P and the alloys Mn(Pt1-xPdx)5P. EDS and X-ray diffraction data support the formation of a full Mn(Pt1-xPdx)5P solid solution that maintains the tetragonal anti-CeCoIn5 structure. The magnetic data show that the primarily antiferromagnetic state in pure MnPt5P is extremely sensitive to Pd substitution, and as little as x $\approx$ 0.010 Pd stabilizes purely ferromagnetic order. At low x $<$ 0.010, the single antiferromagnetic transition in MnPt5P splits into a higher temperature ferromagnetic transition, a lower temperature ferromagnetic-to-antiferromagnetic transition, and a still lower temperature antiferromagnetic-to-ferromagnetic transition. The antiferromagnetic region forms a bubble region in the T–x phase diagram that persists up to x $\approx$ 0.008–0.009, and further cooling recovers the original ferromagnetic state as the samples approach base temperature. Room temperature values of the a-lattice parameter and unit cell volume also manifest anomalous behavior for x below $\approx$ 0.010, suggesting that some electronic topological phase transition, such as a Lifshitz transition, may be responsible for the changes in both the magnetic ordering and the structural features. Electronic band structure calculations indicate pure MnPt5P has several pockets close to the Fermi level that could be involved in such a transition. For x $>$ 0.010, the ferromagnetic Curie temperature is substantially enhanced with Pd incorporation to $\approx$ 312 K at x = 0.62 before falling back to 295 K in pure MnPd5P. All Mn(Pt1-xPdx)5P samples have strong magnetic anisotropy in which the ab-plane is the easy direction, and the anisotropy field decreases from $\approx$ 108 kOe for x = 0.022 Pd to $\approx$ 10 kOe for MnPd5P, likely a result of reduced spin orbit coupling. ## 5 Acknowledgements Work at Ames National Laboratory (T.J.S., N.F., T.R.S., J.S., L.L.W., S.L.B., P.C.C.) was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division. Ames National Laboratory is operated for the U.S. Department of Energy by Iowa State University under Contract No. DE-AC02-07CH11358. T.J.S., P.C.C., and L.L.W. were supported by the Center for Advancement of Topological Semimetals (CATS), an Energy Frontier Research Center funded by the U.S. Department of Energy Office of Science, Office of Basic Energy Sciences, through the Ames National Laboratory under its Contract No. DE-AC02-07CH11358 with Iowa State University. The work at Rutgers is supported by a Beckman Young Investigator award and NSF-DMR-2053287. C.-J.K. and G.K. were supported by the U.S. Department of Energy, Office of Science (Basic Energy Science) as a part of the Computational Materials Science Program through the Center for Computational Design of Functional Strongly Correlated Materials and Theoretical Spectroscopy under DOE grant no. DE-FOA-0001276. C.-J.K. also acknowledges support by NRF grant No. 2022R1C1C1008200. 
A portion of this work was performed at the National High Magnetic Field Laboratory, which is supported by National Science Foundation Cooperative Agreement No. DMR-1644779 and the State of Florida. *corresponding authors’ email<EMAIL_ADDRESS><EMAIL_ADDRESS> ||T.J.S. and R.S.D.M. are equally contributing authors. Conflicts of Interest The authors have no conflicts of interest to declare. ## 6 Appendix ### 6.1 EDS analysis of Mn(Pt1-xPdx)5P Table 3: Chemical compositions determined from EDS analysis for the solution- grown single crystals of Mn(Pt1-xPdx)5P. The nominal compositions used for the growth were Mn9Pt71-yPdyP20, and we also give the nominal Pd:Pt fraction (x) in each. The EDS values of x represent the averages of 3-6 scans on each sample and the error bars were obtained considering both the EDS fitting errors and standard deviations of each measurement (see text). Nominal y | Nominal x | EDS x Mn(Pt1-xPdx)5P ---|---|--- 0.25 | 0.0035 | 0.00013 $\pm$ 0.001 0.5 | 0.0070 | 0.003 $\pm$ 0.001 1 | 0.014 | 0.0053 $\pm$ 0.0005 1.5 | 0.021 | 0.0082 $\pm$ 0.0007 1.75 | 0.025 | 0.009 $\pm$ 0.001 2.25 | 0.032 | 0.013 $\pm$ 0.002 3 | 0.042 | 0.022 $\pm$ 0.001 6 | 0.0845 | 0.033 $\pm$ 0.003 9 | 0.13 | 0.0635 $\pm$ 0.003 16 | 0.225 | 0.116 $\pm$ 0.005 23 | 0.32 | 0.188 $\pm$ 0.005 47 | 0.66 | 0.543 $\pm$ 0.004 50 | 0.70 | 0.623 $\pm$ 0.004 Table 4: Chemical compositions determined from EDS analysis for the polycrystalline samples of Mn(Pt1-xPdx)5P. The nominal compositions used for the growth were Mn(Pt1-yPdy)5P. Nominal x Mn(Pt1-xPdx)5P | EDS x Mn(Pt1-xPdx)5P ---|--- 0.2 | 0.188(5) 0.4 | 0.39(2) 0.5 | 0.502(5) 0.6 | 0.589(3) 0.8 | 0.81(1) Table 5: Single crystal structure refinement information for Mn(Pt1-xPdx)5P at 300(2) K. (The standard deviations are indicated by the values in parentheses). The single crystals used for data collection and refinement were picked from the sintered pellets. Loaded composition | MnPd5P | Mn(Pt0.2Pd0.8)5P | Mn(Pt0.4Pd0.6)5P | Mn(Pt0.5Pd0.5)5P | Mn(Pt0.8Pt0.2)5P ---|---|---|---|---|--- Refined Formula | MnPd5P | Mn(Pt0.172(4)Pd0.828(4))5P | Mn(Pt0.454Pd0.546)5P | Mn(Pt0.48Pd0.52)5P | Mn(Pt0.58Pd0.42)5P F.W. (g/mol) | 617.91 | 694.18 | 819.24 | 848.5 | 875.11 Space group; Z | P4/mmm; 1 | P4/mmm; 1 | P4/mmm; 1 | P4/mmm; 1 | P4/mmm; 1 a(Å) | 3.899(2) | 3.894(1) | 3.887(2) | 3.888 (2) | 3.901(3) c(Å) | 6.867(4) | 6.855(1) | 6.853(2) | 6.861(2) | 6.892(4) V (Å3) | 104.42(2) | 103.93(2) | 103.54(4) | 103.73(5) | 104.93(11) $\theta$ range (º) | 2.966-34.770 | 5.236-34.828 | 5.246-34.892 | 2.969-3.874 | 5.919-34.646 No. reflections; R${}_{\text{int}}$ | 578; 0.0609 | 1397; 0.0293 | 999; 0.0444 | 904; 0.0549 | 214; 0.0233 No. independent reflections | 170 | 173 | 166 | 167 | 94 No. parameters | 12 | 14 | 14 | 14 | 14 R${}_{\text{1}}$: wR${}_{\text{2}}$ (I $>$ 2(I)) | 0.0509; 0.1204 | 0.0253; 0.0603 | 0.0429; 0.1026 | 0.0417; 0.1080 | 0.0373; 0.0947 Goodness of fit | 1.282 | 1.346 | 1.331 | 1.429 | 1.154 Diffraction peak and hole (e-/Å3) | 2.656; -1.863 | 2.632; -1.926 | 7.771; -3.765 | 7.323; 5.219 | 2.579; -3.540 Temperature (K) | 300 (2) | 299 (2) | 301 (2) | 300 (2) | 301 (2) Table 6: Atomic coordinates, occupancies and isotropic displacement parameters of Mn(Pt1-xPdx)5P at 300(2) K. (U${}_{\text{eq}}$ is defined as one-third of the trace of the orthogonalized Uij tensor (Å2)). Atom | Wyckoff. | Occ. 
| x | y | z | U${}_{\text{eq}}$ ---|---|---|---|---|---|--- MnPd5P Pd1 | 4i | 1 | 0 | ½ | 0.2948 (1) | 0.015(1) Pd2 | 1a | 1 | 0 | 0 | 0 | 0.012(2) Mn3 | 1c | 1 | ½ | ½ | 0 | 0.022(1) P4 | 1b | 1 | 0 | 0 | ½ | 0.016(2) Mn(Pt0.2Pd0.8)5P Pd1 | 4i | 0.84(2) | 0 | ½ | 0.29385(9) | 0.0058(2) Pt2 | 4i | 0.22(2) | 0 | ½ | 0.2948(1) | 0.0058(2) Pd3 | 1a | 0.78(2) | 0 | 0 | 0 | 0.0046(3) Pt4 | 1a | 0.22(2) | 0 | 0 | 0 | 0.0046(3) Mn3 | 1c | 1 | ½ | ½ | 0 | 0.0128(9) P4 | 1b | 1 | 0 | 0 | ½ | 0.0076(11) Mn(Pt0.4Pd0.6)5P Pd1 | 4i | 0.57(6) | 0 | ½ | 0.29296(17) | 0.0056(4) Pt2 | 4i | 0.43(6) | 0 | ½ | 0.29296(17) | 0.0056(4) Pd3 | 1a | 0.55(6) | 0 | 0 | 0 | 0.0037(6) Pt4 | 1a | 0.45(6) | 0 | 0 | 0 | 0.0037(6) Mn3 | 1c | 1 | ½ | ½ | 0 | 0.020(2) P4 | 1b | 1 | 0 | 0 | ½ | 0.006(2) Mn(Pt0.5Pd0.5)5P Pd1 | 4i | 0.50(8) | 0 | ½ | 0.29274(18) | 0.0078(5) Pt2 | 4i | 0.50(8) | 0 | ½ | 0.29274(18) | 0.0078(5) Pd3 | 1a | 0.42(6) | 0 | 0 | 0 | 0.0049(7) Pt4 | 1a | 0.58(6) | 0 | 0 | 0 | 0.0049(7) Mn3 | 1c | 1 | ½ | ½ | 0 | 0.015(3) P4 | 1b | 1 | 0 | 0 | ½ | 0.011(3) Mn(Pt0.6Pd0.4)5P Pd1 | 4i | 0.43(6) | 0 | ½ | 0.29254(11) | 0.0084(6) Pt2 | 4i | 0.57(6) | 0 | ½ | 0.29254(11) | 0.0084(6) Pd3 | 1a | 0.38(6) | 0 | 0 | 0 | 0.0068(7) Pt4 | 1a | 0.62(6) | 0 | 0 | 0 | 0.0068(7) Mn3 | 1c | 1 | ½ | ½ | 0 | 0.012(3) P4 | 1b | 1 | 0 | 0 | ½ | 0.014(4) Figure 8: EDS values of x Pd in Mn(Pt1-xPdx)5P compared to the nominal x added to the starting crystal growth compositions. The inset shows a closeup of the low-x data, and the dashed line is a linear fit to the data up to nominally x = 0.0845. Figure 9: Field dependent magnetization isotherms measured at 2 K for all Pd containing Mn(Pt1-xPdx)5P (0.022 $\geq$ x $\geq$ 1) single crystals. The small degree of non-linearity observed at low fields for x = 0.022 and x = 0.064 Pd likely is from small misorientation of the sample. The dotted lines show the extrapolation of the tangents used to estimate the anisotropy fields H${}_{\text{A}}$. Figure 10: (a) Close up view of the resistance, normalized to R(375 K), around T3 for x = 0.0026. The R(T) is hysteretic between cooling and warming data implying T3 is first-order. (b) Residual resistance ratios for all Mn(Pt1-xPdx)5P samples (note the log scale for the y-axis). Table 3 lists the nominal compositions Mn9Pt71-yPdyP20 used for the growth of Mn(Pt1-xPdx)5P single crystals and the corresponding values of x determined by EDS from each batch. Note that starting compositions do not correspond to exact Mn(Pt1-xPdx)5P stoichiometry (i.e. y $\neq$ x) because the intention for solution growth is to intersect the liquidus surface for crystalization of Mn(Pt1-xPdx)5P on cooling, not to be directly “on-line”. Table 3 gives the same information for the polycrystalline samples obtained from solid-state reactions. The values of x are the average of multiple scans obtained on each sample and the error bars were determined either by the standard deviations or the EDS fitting error (the fitting error was only used for x = 0.0001). We calculated x relative to the total amount of Pt and Pd detected in each sample, i.e. x = f${}_{\text{Pd}}$/(f${}_{\text{Pt}}$+f${}_{\text{Pd}}$) where f${}_{\textit{Pt}}$(f${}_{\textit{Pd}}$) represent the quantity of Pt(Pd) found in each sample. For both the solution growth single crystals and polycrystalline samples, we observe a monotonic enrichment of the EDS Pd fraction as the starting growth compositions become richer in Pd. 
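The monotonic relation between the nominal and EDS-determined Pd fractions can be quantified with a simple linear fit to the Table 3 values, which is also how the composition of the most dilute sample is estimated in the following paragraph. The sketch below uses an unconstrained least-squares line over the pairs up to nominal x = 0.0845; the exact fitting choices behind Figure 8 (for example, whether the line is forced through the origin) are not stated, so the extrapolated number may differ slightly from the quoted x $\approx$ 0.0013.

```python
import numpy as np

# (nominal x, EDS x) pairs for the solution-grown crystals, taken from Table 3
# and restricted to the fitted range (up to nominally x = 0.0845); the most
# dilute growth is excluded because its Pd content is below the detection limit.
nominal = np.array([0.0070, 0.014, 0.021, 0.025, 0.032, 0.042, 0.0845])
eds = np.array([0.003, 0.0053, 0.0082, 0.009, 0.013, 0.022, 0.033])

# Unconstrained least-squares line; the fit used for Figure 8 may differ.
slope, intercept = np.polyfit(nominal, eds, 1)

# Extrapolate to the most dilute growth (nominal x = 0.0035).
x_estimate = slope * 0.0035 + intercept
print(f"EDS x = {slope:.3f} * nominal + {intercept:.4f}; "
      f"extrapolated x(nominal 0.0035) ~ {x_estimate:.4f}")
```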
For the single crystalline sample with nominal y = 0.25 and EDS x = 0.00013, the Pd fraction is under the detection limit of our instrument (the fitting errors to the EDS spectra were $\approx$ 10 times greater than the detected quantity of Pd). This is unsurprising given the extremely dilute quantity added to the initial melt (y = 0.25, which is $\approx$ 0.35 $\%$ Pd relative to the total quantity of Pt and Pd); however, the significant differences in the magnetic and transport properties between this sample and those of pure MnPt5P imply very small but finite Pd incorporation (see Figures 4–6 in the main text). Given that the detected Pd in this sample is below the resolution of our instrument, to estimate the Pd fraction, we extrapolated the linear trend between nominal and EDS values of x to nominally x = 0.0035 (see the inset to Figure 8), which gives an estimate of x = 0.0013 for the most dilute sample. ### 6.2 Crystallographic information for Mn(Pt1-xPdx)5P Table 5 gives the refinement information and statistics for the single crystal XRD refinements of the Mn(Pt1-xPdx)5P samples. The single crystals used for these measurements were picked from the sintered pellets (solid-state reactions). The atomic positions and isotropic displacement parameters are listed in Table 6. The results indicate that MnPd5P and the Mn(Pt1-xPdx)5P compounds all adopt the layered tetragonal (P4/mmm) anti-CeCoIn5 type structure. The single crystal XRD refinements support mixed occupancy between Pt and Pd on the two atomic sites 1a and 4i, indicating the formation of a Mn(Pt1-xPdx)5P solid solution. As shown in Table 6, the Pd atoms may have a slight preference for occupying the 4i site over the 1a site; however, given the uncertainties of our experiments we are not able to confidently assert a site preference for the Pt or Pd atoms. ### 6.3 Additional Data for Mn(Pt1-xPdx)5P Single Crystals Figure 9 shows the magnetization isotherms measured at 2 K for Mn(Pt1-xPdx)5P where 0.022 $\leq$ x $\leq$ 1. All datasets show field dependencies that are characteristic of ferromagnetic order in which the ab-plane is the easy direction (i.e. H $\perp$ c). The saturation magnetizations are all on the order of 4-4.5 $\mu_{\text{B}}$/f.u., with $\approx$ 10 $\%$ variation amongst the samples, and there is no clear trend in the magnitude of $\mu_{\text{sat}}$ as a function of the Pd fraction. The discrepancies likely represent a combination of weighing errors, uncertainty in x, and intrinsic changes in $\mu_{\text{sat}}$ as x changes. Ultimately, the primary conclusion drawn from Figure 9 is that the x $>$ 0.01 Mn(Pt1-xPdx)5P samples are unambiguously easy-plane ferromagnets below T1. Extrapolating the tangents of the M(H) isotherms in Figure 9 to their intersection point gives an estimate of the anisotropy fields H${}_{\text{A}}$, which are plotted explicitly in Figure 11. We find that H${}_{\text{A}}$ decreases monotonically with increasing Pd fraction, strongly suggesting that the weakening magnetic anisotropy is governed by the decreasing strength of spin orbit coupling accompanying the replacement of Pt atoms with lower-Z Pd. Figure 10a shows the temperature dependent resistance for x = 0.0026 zoomed in around T3, i.e., the lower temperature re-entry into the ferromagnetic state. The resistance is hysteretic between cooling and warming temperature sweeps, showing that T3 is first-order. Figure 10b displays the residual resistance ratios (RRR = R(375 K)/R(1.8 K)) of the Mn(Pt1-xPdx)5P samples. 
We find that the RRR is highest for x = 0 and x = 1 and decreases rapidly when the composition moves away from the pure phases, reaching a minimum value of $\approx$ 3.5 at x = 0.54. The reasonably high RRR values for the pure compounds (115 and 37 for the two end members) indicate high crystal quality, and the swift reduction of RRR as x approaches 0.5 is consistent with stronger scattering of charge carriers accompanying the increasing crystallographic disorder between Pt and Pd atoms in the alloys. Figure 11: Anisotropy fields for Mn(Pt1-xPdx)5P single crystals estimated from the extrapolations of the tangents in Figure 9. ### 6.4 Magnetic Data for Polycrystalline Mn(Pt1-xPdx)5P Figure 12: Temperature dependent M/H measured on pellets of polycrystalline Mn(Pt1-xPdx)5P. Figure 13: Field dependent magnetization isotherms measured at 2 K on pellets of polycrystalline Mn(Pt1-xPdx)5P. Because the crystallographic data for MnPd5P and Mn(Pt1-xPdx)5P were obtained from small single crystals picked from solid state reactions, we also measured the magnetic properties of the sintered pellets to ensure consistency with the solution-grown single crystals discussed in the main text. The magnetic properties of the polycrystalline Mn(Pt1-xPdx)5P (x = 0.2, 0.4, 0.5, 0.6, 0.8, and 1) were measured in a Quantum Design PPMS Dynacool (QD-PPMS) at the National High Magnetic Field Laboratory over a temperature range of 1.8 to 400 K with an applied field of 1 kOe. Additionally, magnetic measurements of the Pt-doped compounds were carried out in a vibrating sample magnetometer (VSM) in a Quantum Design PPMS system over a temperature range of 1.8–600 K with an applied field of 1 kOe. The field dependent magnetization measurements were carried out at several different temperatures between 2 and 350 K and in fields up to 90 kOe. The magnetic data for the polycrystalline Mn(Pt1-xPdx)5P samples are displayed in Figures 12 and 13. Like the single crystals discussed above, the Pd-containing polycrystalline samples, which have 0.2 $\leq$ x $\leq$ 1, also show ferromagnetic behavior where TC is maximized near 312 K for x $\approx$ 0.60 Pd. However, in addition to the Mn(Pt1-xPdx)5P primary phase, the polycrystalline pellets also contained a small fraction of a ferromagnetic impurity, likely MnPt3 (TC $\approx$ 390 K).[30, 31] Owing to the strong response of ferromagnetism to an applied magnetic field, even small ferromagnetic impurities are easily detected in magnetization measurements, and our polycrystalline samples with x $<$ 0.5 all show high temperature (T $>$ 300 K) ferromagnetic-like transitions that are not observed in any of the datasets collected on the single crystals. An analogous MnPd3 phase also exists, but orders antiferromagnetically,[32, 33, 34] which likely explains why the polycrystalline Mn(Pt1-xPdx)5P samples with x $>$ 0.5 do not show evidence for a second transition. As the ordering of MnPt3 could easily be misinterpreted as a second, higher temperature, transition in the Pt-rich samples, the contrast between single and polycrystalline data is an excellent demonstration of the advantages of solution growth, which allows us to produce high quality Mn(Pt1-xPdx)5P crystals free of significant contamination by magnetic impurities. The field dependent magnetization isotherms for the polycrystalline samples are shown in Figure 13 and show soft ferromagnetic behavior below T1 and saturated moments of $\approx$ 4-4.5 $\mu_{\text{B}}$/f.u. at 2 K. 
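Quoting saturation moments in $\mu_{\text{B}}$ per formula unit requires converting the raw magnetometer signal using the sample mass and the formula weight. A minimal sketch of that conversion is shown below; the sample mass and measured moment are hypothetical, while the MnPd5P molar mass is taken from Table 1.

```python
# Convert a raw magnetometer moment (emu) to Bohr magnetons per formula unit.
# The sample mass and measured moment below are hypothetical; the molar mass of
# MnPd5P (617.91 g/mol) is taken from Table 1.
N_A = 6.02214e23      # Avogadro's number (1/mol)
MU_B = 9.274e-21      # Bohr magneton in emu (erg/G)

molar_mass = 617.91   # g/mol, MnPd5P formula weight
sample_mass = 1.0e-3  # g (hypothetical 1.0 mg crystal)
moment_emu = 0.0407   # emu, hypothetical saturated moment at 2 K

formula_units = sample_mass / molar_mass * N_A
moment_per_fu = (moment_emu / MU_B) / formula_units
print(f"Saturation moment: {moment_per_fu:.2f} mu_B per formula unit")
```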
Excluding the transitions from the MnPt3, the intrinsic Curie temperatures of the polycrystalline samples are otherwise in good agreement with those inferred from the data collected on single crystals. ## References * [1] Tomasz Dietl “A Ten-Year Perspective on Dilute Magnetic Semiconductors and Oxides” In _Nature Materials_ 9.12 Nature Publishing Group, 2010, pp. 965–974 * [2] John B Goodenough “Direct Cation–Cation Interactions in Several Oxides” In _Physical Review_ 117.6 APS, 1960, pp. 1442 * [3] German D Samolyuk and Gordon J Miller “Relation Between Chemical Bonding and Exchange Coupling Approaches to the Description of Ordering in Itinerant Magnets” In _Journal of Computational Chemistry_ 29.13 Wiley Online Library, 2008, pp. 2177–2186 * [4] Zaiyao Fei, Bevin Huang, Paul Malinowski, Wenbo Wang, Tiancheng Song, Joshua Sanchez, Wang Yao, Di Xiao, Xiaoyang Zhu and Andrew F May “Two-Dimensional Itinerant Ferromagnetism in Atomically Thin Fe3GeTe2” In _Nature Materials_ 17.9 Nature Publishing Group, 2018, pp. 778–782 * [5] Yuemei Zhang, Gordon J Miller and Boniface PT Fokwa “Computational Design of Rare-Earth-Free Magnets with the Ti3Co5B2-Type Structure” In _Chemistry of Materials_ 29.6 ACS Publications, 2017, pp. 2535–2541 * [6] Pritam Shankhari, Jan P Scheifers, Martin Hermus, Kunio Yubuta and Boniface PT Fokwa “Unexpected Trend Deviation in Isoelectronic Transition Metal Borides A3T5B2 (A = group 4, T = group 9): Ti3Co5B2-vs. Perovskite-Type Studied by Experiments and DFT Calculations” In _Zeitschrift für anorganische und allgemeine Chemie_ 643.21 Wiley Online Library, 2017, pp. 1551–1556 * [7] Bin Chen, JinHu Yang, HangDong Wang, Masaki Imai, Hiroto Ohta, Chishiro Michioka, Kazuyoshi Yoshimura and MingHu Fang “Magnetic Properties of Layered Itinerant Electron Ferromagnet Fe3GeTe2” In _Journal of the Physical Society of Japan_ 82.12 The Physical Society of Japan, 2013, pp. 124711 * [8] Boniface P.. Fokwa, Heiko Lueken and Richard Dronskowski “Rational Design of Complex Borides – One-Electron-Step Evolution from Soft to Semi-Hard Itinerant Ferromagnets in the New Boride Series Ti2FeRu5–nRhnB2 (1 $\leq$ n $\leq$ 5)” In _European Journal of Inorganic Chemistry_ 2011.26, 2011, pp. 3926–3930 DOI: https://doi.org/10.1002/ejic.201100315 * [9] Xin Gui, Ryan A Klein, Craig M Brown and Weiwei Xie “Chemical Bonding Governs Complex Magnetism in MnPt5P” In _Inorganic Chemistry_ 60.1 ACS Publications, 2020, pp. 87–96 * [10] Xin Gui, Madalynn Marshall, Ranuri S Dissanayaka Mudiyanselage, Ryan A Klein, Qiang Chen, Qiang Zhang, William Shelton, Haidong Zhou, Craig M Brown, Huibo Cao, Martha Greenblatt and Weiwei Xie “Spin Reorientation in Antiferromagnetic Layered FePt5P” In _ACS Applied Electronic Materials_ 3 * [11] Xin Gui and Weiwei Xie “Crystal structure, magnetism, and electronic properties of a rare-earth-free ferromagnet: MnPt5As” In _Chemistry of Materials_ 32.9 ACS Publications, 2020, pp. 3922–3929 * [12] Ranuri S Dissanayaka Mudiyanselage, Qiang Zhang, Madalynn Marshall, Mark Croft, Zhixue Shu, Tai Kong and Weiwei Xie “Spin Reorientation in Antiferromagnetic MnPd5Se with an Anti-CeCoIn5 Structure Type” In _Inorganic Chemistry_ 61.9 ACS Publications, 2022, pp. 3981–3988 * [13] Tyler J. Slade and Paul C. Canfield “Use of Refractory-Volatile Element Deep Eutectic Regions to Grow Single Crystalline Intermetallic Compounds” In _Zeitschrift für anorganische und allgemeine Chemie_ 648.15, 2022, pp. 
e202200145 DOI: https://doi.org/10.1002/zaac.202200145 * [14] Paul C Canfield, Tai Kong, Udhara S Kaluarachchi and Na Hyun Jo “Use of Frit-Disc Crucibles for Routine and Exploratory Solution Growth of Single Crystalline Samples” In _Philosophical Magazine_ 96.1 Taylor & Francis, 2016, pp. 84–92 * [15] “Canfield Crucible Sets” Accessed: 2022-03-23, https://www.lspceramics.com/canfield-crucible-sets-2/ * [16] Paul C Canfield “New Materials Physics” In _Reports on Progress in Physics_ 83.1 IOP Publishing, 2019, pp. 016501 * [17] Brian H Toby and Robert B Von Dreele “GSAS-II: the genesis of a modern open-source all purpose crystallography software package” In _Journal of Applied Crystallography_ 46.2 International Union of Crystallography, 2013, pp. 544–549 * [18] George M Sheldrick “Crystal structure refinement with SHELXL” In _Acta Crystallographica Section C: Structural Chemistry_ 71.1 International Union of Crystallography, 2015, pp. 3–8 * [19] Peter Muller, Regine Herbst-Irmer, Anthony Spek, Thomas Schneider and Michael Sawaya “Crystal Structure Refinement: A Crystallographer’s Guide to SHELXL” OUP Oxford, 2006 * [20] Koichi Momma and Fujio Izumi “VESTA: a Three-Dimensional Visualization System for Electronic and Structural Analysis” In _Journal of Applied crystallography_ 41.3 International Union of Crystallography, 2008, pp. 653–658 * [21] Dale E Newbury and Nicholas WM Ritchie “Rigorous Quantitative Elemental Microanalysis by Scanning Electron Microscopy/Energy Dispersive X-Ray Spectrometry (SEM/EDS) with Spectrum Processing by NIST DTSA-II” In _Scanning Microscopies_ 9236, 2014, pp. 90–106 SPIE * [22] Pierre Hohenberg and Walter Kohn “Inhomogeneous Electron Gas” In _Physical Review_ 136.3B APS, 1964, pp. B864 * [23] W. Kohn and L.. Sham “Self-Consistent Equations Including Exchange and Correlation Effects” In _Phys. Rev._ 140 American Physical Society, 1965, pp. A1133–A1138 DOI: 10.1103/PhysRev.140.A1133 * [24] John P. Perdew, Adrienn Ruzsinszky, Gábor I. Csonka, Oleg A. Vydrov, Gustavo E. Scuseria, Lucian A. Constantin, Xiaolan Zhou and Kieron Burke “Restoring the Density-Gradient Expansion for Exchange in Solids and Surfaces” In _Phys. Rev. Lett._ 100 American Physical Society, 2008, pp. 136406 DOI: 10.1103/PhysRevLett.100.136406 * [25] G. Kresse and J. Furthmüller “Efficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set” In _Phys. Rev. B_ 54 American Physical Society, 1996, pp. 11169–11186 DOI: 10.1103/PhysRevB.54.11169 * [26] G. Kresse and J. Furthmüller “Efficiency of ab-initio total energy calculations for metals and semiconductors using a plane-wave basis set” In _Computational Materials Science_ 6.1, 1996, pp. 15–50 DOI: https://doi.org/10.1016/0927-0256(96)00008-0 * [27] P.. Blöchl “Projector augmented-wave method” In _Phys. Rev. B_ 50 American Physical Society, 1994, pp. 17953–17979 DOI: 10.1103/PhysRevB.50.17953 * [28] Hendrik J. Monkhorst and James D. Pack “Special points for Brillouin-zone integrations” In _Phys. Rev. B_ 13 American Physical Society, 1976, pp. 5188–5192 DOI: 10.1103/PhysRevB.13.5188 * [29] Michael E. Fisher and J.. Langer “Resistive Anomalies at Magnetic Critical Points” In _Phys. Rev. Lett._ 20 American Physical Society, 1968, pp. 665–668 DOI: 10.1103/PhysRevLett.20.665 * [30] B. Antonini, F. Lucari, F. Menzinger and A. Paoletti “Magnetization Distribution in Ferromagnetic Mn${\mathrm{Pt}}_{3}$ by a Polarized-Neutron Investigation” In _Phys. Rev._ 187 American Physical Society, 1969, pp. 
611–618 DOI: 10.1103/PhysRev.187.611 * [31] B. Antonini, M. Felici and F. Menzinger “Spin waves and exchange interactions in MnPt3 ferromagnetic alloy” In _Physics Letters A_ 30.5, 1969, pp. 310–311 DOI: https://doi.org/10.1016/0375-9601(69)91013-5 * [32] Hiroshi Sato and Robert Toth “Long-Period Superlattice ${\mathrm{Pd}}_{3}$Mn II and Its Large Tetragonal Distortion” In _Phys. Rev._ 139 American Physical Society, 1965, pp. A1581–A1593 DOI: 10.1103/PhysRev.139.A1581 * [33] E. Kre´n and G. Ka´da´r “Crystal and magnetic structures in the Mn-Pd system near MnPd3” In _Physics Letters A_ 29.6, 1969, pp. 340–341 DOI: https://doi.org/10.1016/0375-9601(69)90160-1 * [34] E. Krén, G. Kádár and M. Márton “Neutron diffraction study of the MnPd3 phase” In _Solid State Communications_ 10.12, 1972, pp. 1195–1198 DOI: https://doi.org/10.1016/0038-1098(72)90942-8
# Curras + Baladi: Towards a Levantine Corpus ###### Abstract This paper presents two-fold contributions: a full revision of the Palestinian morphologically annotated corpus (Curras), and a newly annotated Lebanese corpus (Baladi). Both corpora can be used as a more general Levantine corpus. Baladi consists of around 9.6K morphologically annotated tokens. Each token was manually annotated with several morphological features and using LDC’s SAMA lemmas and tags. The inter-annotator evaluation on most features illustrates 78.5% Kappa and 90.1% F1-Score. Curras was revised by refining all annotations for accuracy, normalization and unification of POS tags, and linking with SAMA lemmas. This revision was also important to ensure that both corpora are compatible and can help to bridge the nuanced linguistic gaps that exist between the two highly mutually intelligible dialects. Both corpora are publicly available through a web portal. Keywords: Arabic morphology, Annotated Corpus, Arabic Dialect, Levantine, Palestinian Arabic, Lebanese Arabic languageresourceLanguage Resources english utf8 Curras + Baladi: Towards a Levantine Corpus Karim El Haff, Mustafa Jarrar††thanks: * Corresponding author.*, Tymaa Hammouda, Fadi Zaraket --- University of Strasbourg, Birzeit University, American University of Beirut <EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS> Abstract content ## 1\. Introduction The processing of the Arabic language is a complex field of research. This is due to many factors, including the complex and rich morphology of Arabic, its high degree of ambiguity, and the presence of several regional varieties that need to be processed while taking into account their unique characteristics. When its dialects are taken into account, this language pushes the limits of NLP to find solutions to problems posed by its inherent nature. It is a diglossic language; the standard language is used in formal settings and in education and is quite different from the vernacular languages spoken in the different regions and influenced by older languages that were historically spoken in those regions. Indeed, Arabic speakers use those local varieties in day-to-day communication. We can distinguish several families of dialects: Moroccan, Egyptian, Sudanese, Levantine, Iraqi and Khaliji (Gulf). Arabic dialects tend to diverge from Modern Standard Arabic (MSA) in terms of phonetics, morphology, syntax and vocabulary. Arabic content was mainly written in MSA. Recently, dialectal content has been increasing massively, especially on social media. MSA is considered among the under-resourced languages by the NLP community [Darwish et al., 2021]. Dialectal Arabic (DA) is even less resourced. The resource gap between MSA and the dialects implies a large margin of error when MSA tools are used against dialectal content [Zbib et al., 2012]. Thus, it is important to build resources and tools to identify dialects in context and to treat Arabic content based on its unique dialectal identity. In this research, we focus on the Lebanese variety of Levantine Arabic, which is used in daily conversations and in the Lebanese media. It is spoken by about 6 million locals, and almost double that number in diaspora. The paper presents a morphologically annotated corpus for Lebanese. The development of the corpus uses texts covering a wide spectrum of subjects and registers. The corpus is designed to be compatible with, and leverage, Curras [Jarrar et al., 2017], the Palestinian corpus with morphological annotations. 
In this way, both corpora can be used as a more general Levantine corpus, especially that the Palestinian dialect represents Southern Levantine and that Lebanese represents Northern Levantine varieties. In addition to providing new Lebanese corpus annotations, we have also revised Curras annotations to ensure compatibility with the LDC’s SAMA tags and lemmas [Maamouri et al., 2010]. In this paper, we present two-fold contributions: 1. 1. Baladi, a Lebanese morphologically annotated corpus, which consists of 9.6K tokens. Each token was manually annotated with prefixes, suffixes, stem, POS tags, MSA and DA lemmatization, English gloss, in addition to other features such as gender, number, aspect, and person. The corpus was annotated mainly using LDC’s SAMA lemmas and tags. The inter-annotator evaluation on most features illustrates 87% agreement using the Cohen's Kappa score [McHugh, 2015]. 2. 2. Revision of Curras, by refining all annotations for accuracy, normalization and unification of POS tags, and linking with SAMA lemmas. This revision was also important to ensure that both corpora are compatible and can together form a more general Levantine corpus. Both, Curras and Baladi, are publicly available online111https://portal.sina.birzeit.edu/curras. The rest of the paper is organized as follows. We overview related work in Section 2. Section 3 describes the Lebanese dialect. In Section 4, we present Corpus Baladi. In Section 5 we present the annotation process and guidelines. Section 6 presents the evaluation and the inter-annotator agreement. In Section 7, we present the revisions we introduce to Curras. Section 8 discusses how we managed to transform Curras into a more Levantine corpus. Finally, Section 9 concludes. ## 2\. Related Work This section reviews efforts to create annotated corpora for Arabic dialects as well as for MSA. ### 2.1. MSA Resources The Penn Arabic Treebank (PATB) [Maamouri et al., 2005] by the Linguistic Data Consortium (LDC) is central to the development of several MSA resources. It enriches newswire text in MSA collected from several news outlets with tokenization, segmentation, lemma, POS and gloss tags annotations along with syntactic trees. PATB uses the morphological tags as defined by the BAMA morphological analyzer [Buckwalter, 2004], which provides vocalized solutions, unique lemmas, prefixes, suffixes, stems, POS tags, and English gloss terms. SAMA [Maamouri et al., 2010] is a substantial improvement and refinement on BAMA as it extends its lexicon and provides several analysis refinements. The Prague Arabic Dependency Tree bank [Hajič et al., 2004] enriched the literature with functional linguistic annotations which in turn lead to the emergence of ElixirFM [Smrž, 2007] The Arabic lexicographic database at Birzeit University [Jarrar and Amayreh, 2019, Alhafi et al., 2019] provides a large set of MSA lemmas, word forms, and morphological features, which are linked with the Arabic Ontology [Jarrar, 2021, Jarrar, 2011] using the W3C LEMON model [Jarrar et al., 2019] ### 2.2. Dialectal Resources The Levantine Arabic Treebank [Maamouri et al., 2006] featured the Jordanian Arabic dialect. Curras [Jarrar et al., 2017, Jarrar et al., 2014] is a more recent Levantine corpus featuring the Palestinian dialect. Large number of textual entries were collected from Facebook, Twitter and scripts of the Palestinian series ``Watan Aa Watar''.Each word in the corpus was then manually annotated with a set of morphological attributes. The corpus contains 56K tokens. 
Earlier, the CALLHOME Egyptian Arabic corpus [Canavan et al., 1997] consisted of transcripts of telephone conversations in Egyptian. CALIMA [Habash et al., 2012] extended ECAL [Kilany et al., 2002], which built on CALLHOME, to provide morphological analysis functionality for the Egyptian dialect. The COLABA project [Diab et al., 2010] collected resources in dialectal Arabic (mainly in Egyptian and Levantine) from online blogs. The effort eventually led to constructing the Egyptian Tree Bank (ARZATB) [Maamouri et al., 2014]. Curras and ARZATB were leveraged as case studies for morphological analysis and disambiguation [Eskander et al., 2016]. YADAC [Al-Sabbagh and Girju, 2012] also focuses on Egyptian dialect identification and provides a multi-genre approach. It is a collection of web blogs, micro blogs, and several Egyptian content discussion forums. MADAR [Bouamor et al., 2014] is an ongoing multi-dialect corpus covering 26 different cities and their corresponding dialects. Other efforts cover Emirati [Ntelitheos and Idrissi, 2017, Khalifa et al., 2018], Tunisian and Algerian [Zribi et al., 2015, Harrat et al., 2014], and Yemeni and Moroccan [Al-Shargi et al., 2016]. Our proposed contributions in this paper are to enrich Curras by (1) providing a Lebanese Levantine extension and by (2) refining and revising Curras entries to better accommodate the general Levantine dialect. ## 3\. Lebanese and Levantine Dialects The Levantine family of dialects can be linguistically split into Northern Levantine, including the Lebanese and Syrian dialects, and Southern Levantine, including Palestinian and Jordanian. During the spread of Arabic from the seventh century onwards, the Levant was a region that mainly spoke Western Aramaic [Skaf, 2015]. Aramaic is a Semitic language continuum spoken mainly during antiquity throughout the Levantine region, where it served as a lingua franca. Aramaic survives today through modern dialects such as Turoyo Syriac and Western Neo-Aramaic spoken in parts of Syria. It also survives more subtly in the noticeable substratum underlying Levantine dialects that differ from MSA on several common linguistic specificities such as phonology, syntax, morphology and lexicon. This motivates using dialect-specific annotations to annotate Levantine dialects. In the sequel, we briefly review the differentiating factors between Levantine dialects and MSA. ### 3.1. Phonological differences Like other Semitic languages, Aramaic and its varieties were written with a 22-letter alphabet (Abjad). When Arabic was spread to the Levant, the Christian populations of the region began to transcribe the Arabic language using this consonantal alphabet, a tradition of Syriac writing known as "Garshouni" [Briquel Chatonnet, 2005]. Due to the lack of some letters compared to the Arabic alphabet, which contains 28 letters, adaptations were made in the Garshouni script, and some Syriac graphemes can represent several phonemes of Arabic, especially among the emphatic letters. Indeed, certain Arabic phonemes were not widely used by Levantine populations, even to this day; a speaker of Lebanese today tends to de-emphasize emphatic letters in Arabic words, as for example in the case of مظلوم> (abused), which is pronounced مظلوم> in MSA and مزلوم> in Lebanese. Another example may be words containing ث> ث such as ثعلب> (fox) that is pronounced تعلب> or سعلب> across the Levant. 
As phonology differs in many situations for Levantine dialect speakers, spelling can vary greatly and can pose a challenge to the processing of those dialects when written. ### 3.2. Syntactical differences A common usage for sentence structure in MSA is the verb-subject-object (VSO) structure. Sentences tend to start with a verb followed by its subject and then its object. Other structure configurations tend to be less frequent. On the other hand, in Levantine dialects, this structure is more flexible as the verb and subject have a natural flow of interchangeable positions. MSA (VSO): أكل الولد التفاحة > LEVANTINE (VSO): أكل الولد التفاحة> LEVANTINE (SVO): الولد أكل التفاحة > In English (SVO): The child ate the apple ### 3.3. Morphological differences Levantine inherits templatic morphology where affixes play an important role from its Semitic roots. Major morphological differences exist when compared to MSA. One of them is the loss of case markings in Levantine. Additionally, there are Levantine-specific morphemes that do not exist in MSA such as عم> عم; the present continuous mark that precedes imperfect verbs to indicate the continuity of the action. Its absence in Levantine indicates that the action is a general truth: أنا عم باكل > (I am eating) أنا باكل> (I eat). MSA has no such entity. Context alone indicates whether the action is continuous or a general truth; أنا آكل> can mean both ``I am eating at the moment'' or ``I factually eat''. Other morphemes include رح> رح and ح> ح that are the future markers in Levantine dialects as opposed to MSA’s س> س and سوف> سوف. Furthermore, the progressive Levantine particle ب> ب (as in باكل> باكل) that is used to indicate imperfective verbs does not exist in MSA. ### 3.4. Differences in lexicon It is also important to notice that the Levantine lexicon is rich with old Aramaic words due to its pre-Arab heritage, as well as foreign loan words due to the Levant's location as a frequent passage of many civilisations. ## 4\. Corpus Collection We manually collected texts written in Lebanese from sources such as Facebook posts, blog posts and traditional poems. We collected a total of 9.6K tokens spanning over 424 sentences. We merged them all into a single text file, with an average of 22 words per sentence. The corpus was chosen based on a critical judgment to include several registers of Lebanese speech, hence the choice to include folk poems, and satirical texts from social networks and blog articles. We avoided text written in Arabizi (Arabic written using proprietary Latin letters) as this is not the goal of our corpus at this phase. As the size of the corpus is relatively small, we performed data collection manually through the retrieval of the transcripts of traditional Lebanese poems زجل> زجل by local poet Jihad Assi, satirical Facebook posts written in the vernacular Lebanese dialect by Mohamad Jaber as well as some blog posts written in the Lebanese dialect (Bel-Lebneene blog). We did not preprocess the text and kept the raw form. As such, we did not perform any unification of letter variations, removal of diacritics, or correction of typos. We based this on the selective quality of our corpora. We then tokenized the raw text. This produced a table with three columns (sentence ID, token ID, and token text). The table was represented in a modern shareable spread sheet tool where each token and its annotations stood on its own separate row. The annotators introduced annotations in separate columns each designated for a specific feature or tag. 
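The tokenization step that turns the raw text into the three-column (sentence ID, token ID, token) table can be sketched as follows. Whitespace splitting and CSV output are assumptions made purely for illustration, since the exact tokenizer and storage format are not specified; the sample sentences are the examples used in Section 3.2.

```python
import csv

# Each element of `sentences` stands for one raw Lebanese sentence from the
# collected texts (here, the example sentences from Section 3.2).
sentences = [
    "أكل الولد التفاحة",
    "الولد أكل التفاحة",
]

# Build (sentence ID, token ID, token) rows; whitespace splitting is an
# assumption, the paper does not specify the tokenizer used.
rows = []
for sent_id, sentence in enumerate(sentences, start=1):
    for tok_id, token in enumerate(sentence.split(), start=1):
        rows.append((sent_id, tok_id, token))

with open("baladi_tokens.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["sentence_id", "token_id", "token"])
    writer.writerows(rows)
```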
## 5\. Annotation Methodology Table 1: A sentence in Lebanese with its full annotations. Translation: ``My eyes are grateful to yours who dwell in my soul as I make my provisions and I shall still, even without my eyes, see the world whole.'' Four linguists carried out the annotation process over a period of ten months. We used AnnoSheet to carry out the manual annotations. AnnoSheet is a Google Sheet that we empowered by developing and adding advanced JavaScript methods to (1) assist the annotation process and (2) validate the proposed annotations. For each row in the AnnoSheet, we have 16 columns: sentence ID, Token ID, token, CODA, prefix, stem, suffix, POS, MSA lemma, dialect lemma, gloss, person, aspect, gender, and number. The annotation guidelines for each of these columns are described in the following subsections. To speed up the annotation process, we uploaded the revised version of Curras annotations (See section 7) into another spreadsheet and allowed the annotator to look up candidate annotations from Curras. The JavaScript lookup method searches Curras and returns the top matching results. The annotator can then select one of the results, and edit the corresponding fields if needed. The annotator also has the option to fill in the annotations directly. To guide and control the quality of the annotations, we implemented several validation triggers in JavaScript to highlight potential mistakes. In addition, for each cell in AnnoSheet, we implemented a customized list, from which the annotator can select values based on the column. The lists are dynamic and they are populated with values that depend on the values in other cells of the same row. We, additionally, developed a Google Colab application to validate all annotations in the AnnoSheet and to flag cells that may require corrections. The validation ran once daily. In this way, the annotators were able to annotate each word, in context, by re-using annotations from Curras, fully or partially, or by entering new annotations. As a Google Sheet, AnnoSheet allowed the annotators to also annotate the corpus cooperatively, write feedback to each other, and skip tokens they are not certain about. Table 1 illustrates a sentence in Lebanese and its full annotations. ### 5.1. Annotation Guidelines This section presents the annotation guidelines for each of the different annotations tasks. #### CODA The CODA tag (التهجئة الاصطلاحية>) of a token signifies the ``correct'' spelling of the token. Instead of annotating the exact token in the corpus, the idea is to unify the different spelling variations of the same word into one CODA spelling, and annotate this CODA. Due to the lack of standardized orthographic spelling rules for Arabic dialects, people tend to write words as they pronounce them; thus, the same word might be written in different ways. In fact, the same person may write the same word in different ways in the same sentence. For example, consider the word meaning 'a lot': which can be written as كثير> كثير and كتير> كتير. The second letter in the first word corresponds to the sound $\theta$. The correct spelling in MSA is the first word with ث>. However, the ث> [th] sound is rarely pronounced in Lebanese and is often replaced by the t> [t] sound or the [s] sound. Similarly, the words بألكن> بألكن and بقلكون> بقلكون are different orthographic variations of the same word, which means ``I tell you''. In this case, we write both in the the same CODA spelling بقلكن> بقلكن. See more examples in Table 2. 
Token | | CODA ---|---|--- بألكن> بألكن | → | بقلكن> بقلكن بقلكون> بقلكون | → | بقلكن> بقلكن طريء> طريء | → | طريق> طريق هايدي> هايدي | → | هيدي> هيدي عيونن> عيونن | → | عيونٌ> عيونٌ Table 2: Example of words and their CODA spelling. Since our goal in this corpus is to transform Curras to be a more general Levantine corpus, we chose to adopt the Palestinian CODA guidelines [Habash et al., 2015] for the Lebanese dialect. We made some modifications and simplifications to be adapted to cover more Levantine regionalisms, as will be discussed in the next sections. It is notable to add that some slight spelling differences exist between Lebanese and Palestinian as the former is a northern Levantine variety while the latter is southern and regional differences exist. The most common examples of this lie in demonstrative pronouns where Palestinian tends to use more emphatic sounds than Lebanese; a masculine ``this'' is said هاظا> هاظا or هادا> هادا in Palestinian, while هَيدا> هَيدا in Lebanese. Another example is the use of م> م in Palestinian to indicate a third person plural where Lebanese uses a ن> ن: ``your house'' is بيتكم> بيتكم in Palestinian and بيتكن> بيتكن in Lebanese. The differences in spelling are due to the differences in pronunciation across the Levant and have no effect over the total mutual intelligibility of the dialects and thus a potential standardized spelling for Northern and Southern Levantine can be seen as the slight differences between British English and American English spelling systems. #### MSA Lemma The MSA Lemma (المدخلة المعجمية الفصحى>) is the MSA lemma of the token. We restricted the choices of MSA lemmas to SAMA lemmas. The AnnoSheet allows the annotator to search the SAMA database and select the target lemma. For tokens that are not derived from an MSA lemma, like بدي> بدي, we chose the closest SAMA lemma (e.g., 1_أَراد>). In case, no matching MSA lemmas are found in the SAMA database, the annotator is allowed to look up lemmas from Birzeit's lexicographic database [Jarrar and Amayreh, 2019], which are linked with the Arabic Ontology [Jarrar, 2021] and represented in the w3C Lemon model [Jarrar et al., 2019]. the annotator may also introduce a new MSA lemma, however, new lemmas are marked with ``_0'', such as (0_يوغا>) or (0_هيفاء>). Similar to SAMA lemmas, noun lemmas should be in the masculine singular form. Plural and feminine are acceptable in case there is no masculine singular. Verb lemmas should be in the past masculine singular 3rd person form. #### Dialect Lemma The dialectal lemma (المدخلة المعجمية العامية>) signifies the semantic value of the token as a lexicon entry. Similar to the MSA lemma, each token in the corpus is tagged with its DA lemma. If a token stems from MSA, then its MSA and DA lemmas are the same. For example, the dialect token بقلك> بقلك, which means ``I tell you'', has the same MSA and DA lemma 1_قال>. Some Levantine lexicon instances differ from MSA and need their own dialectal lemmas. These lemmas potentially do not exist in an ordinary MSA dictionary, due to their likely origin in other languages, notably Aramaic. As an example, the typical Levantine words used to say 'inside' and 'outside' are جوّا> جوّا and برّا> برّا, respectively. These two words are different in MSA: 'inside' is داخل> داخل and 'outside' is خارج> خارج. Table 1 illustrates more examples of Levantine lemmas that are not in MSA, such as عَم> عَم, رَح> رَح, شاف> شاف, لُوما> لُوما. #### Gloss The gloss (المعنى بالانجليزية>) is the meaning of the lemma in English. 
We restrict the glosses to be SAMA glosses if a SAMA lemma is used, or to Curras if available, otherwise we provide it in the same way. #### Stem The stem (الساق>) is the base word after removing suffixes and prefixes from the token. We follow the $\langle$Stem/POS $\rangle$ tagging schema used in Curras and SAMA, where the stem and the POS are separated by '/'. The POS is limited to the exact stem POS tagset found in SAMA. #### Prefixes and Suffixes We follow the prefixes (السوابق>) and suffixes (االواحق>) tagging schema used in Curras and SAMA: $\\{\langle$Prefix1/POS$\rangle+\langle$Prefix2/POS$\rangle\ldots\\}$ and $\\{\langle$Suffix1/POS$\rangle+\langle$Suffix2/POS$\rangle\ldots\\}$. As shown in Table 1 the prefix بـ> in the word بروحي> بروحي is the preposition بـ>/PREP. Multiple prefixes are combined with ``+''. For example, the three prefixes in the word وبالقلب> are: و>/CONJ+بـ>/PREP+الـ>/DET. Suffixes are written in the same way. For example, the suffixes in the word قلتلهن> قلتلهن are: تـ>/PVSUFF_SUBJ:1S+ لـ>/PREP+هن>/PRON_3FP. Prefixes and suffixes are critical when dealing with dialects. This is because the morphological difference between dialects and MSA words is mostly due to different combinations of prefixes and suffixes. Dialects use additional types of prefixes and suffixes that are not used in MSA. The prefix بـ> in بيضلن>, prefix هـ> in هالعيون>, or the suffix ش> in بعرفش>, are examples of affixes that are commonly used in Levantine dialects but are not part of the MSA morphology. To control the quality of our annotations of affixes (i.e., prefixes and suffixes), we extracted the set of all combinations of affixes in Curras and verified them manually (See section 7). This set along with the SAMA combinations of affixes were then uploaded to the AnnoSheet and used to limit the choices of the annotators. Table 3 presents the set of the prefixes used in the revised version of Curras and uploaded into AnnoSheet to be used by the annotators. Prefixes in Palestinian and Lebanese are all in common but there are two exceptions. The ا> in Palestinian can be used as an INTERROG_PART, like in اغنيها> اغنيها, which means ``shall I sing it?''. However, in such cases in Lebanese, the verb is conjugated in its imperfective first person form to express the same meaning by using با> and is said باغنيها> باغنيها. Additionally, all prepositions in Lebanese and Palestinian are the same, except for فـ> which is used only in Palestinian. We would also like to note that there are two prefixes in the corpora that are used only in MSA forms, and not in Palestinian or Lebanese, which are the سـ>/FUT_PART and the لـ>/JUS_PART. They occur in both corpora due to code-switching as this is a common phenomenon in dialects. Table 4 presents the set of suffixes used in the revised version of Curras. The majority of of the suffixes are common to both dialects. However, there seems to be one bold systematic difference between the two dialects and it concerns suffixes used to indicate a plural in the 2nd and 3rd person; هُن>/كُن> is used in Lebanese to always express a gender-neutral plural for the 2nd and 3rd person (e.g., بيتهُن> بيتهُن/بيتكُن> بيتكُن) whereas its Palestinian counterpart uses هِن>/كِن> to mostly express a feminine 2nd and 3rd person plural (e.g., بيتهِن> بيتهِن/بيتكِن> بيتكِن) aligning itself with MSA’s بيتهُنّ> بيتهُنن>/ بيتكُنّ> بيتكُنن>. Nevertheless, the northern Palestinian variety is closer to that of Lebanese and uses هِن>/كِن> while remaining gender-neutral. 
Furthermore, Palestinian uses كو> where Lebanese uses كُن> for the 2nd person plural, respectively: بيتكو> and بيتكُن>. Palestinian also tends to use هم> and كم>, aligning itself with MSA’s بيتهم> and بيتكم>, where Lebanese does not. These occurrences seem to be systematic and may be due to the fact that Lebanese is a Northern Levantine variety while Palestinian is Southern Levantine; such differences are bound to exist in the dialectal continuum, sometimes overlapping in border regions. Table 3: List of prefixes and their POS tags in MSA, Palestinian, and Lebanese, which are used in both corpora. Palestinian-specific prefixes are marked with (*), and Lebanese with (+). Table 4: List of suffixes and their POS tags in the Palestinian and Lebanese corpora (MSA suffixes are excluded as they are numerous). Palestinian-specific suffixes are marked with (*), and Lebanese with (+). #### Part of Speech The part of speech (POS) (قسم الكلام>) concerns the grammatical category of the token. The annotators were limited to selecting the POS from the tagset used in SAMA. #### Person The person (الإسناد>) refers to the person of the annotated word, if applicable. This can be the $1^{st}$, $2^{nd}$ or $3^{rd}$ person, which we represent by the numbers 1, 2 and 3. #### Aspect This column concerns, for verbs, their aspect (صيغة الفعل>). We denote (i) for imperfective verbs (present tense, not completed), (p) for perfective verbs (completed, past tense), and (c) for imperative or command verbs. #### Gender The gender (الجنس>) of the word, if applicable: (m) for masculine, (f) for feminine, and (n) for neutral. #### Number The number (العدد>) of the word, if applicable: (s) for singular, (p) for plural, or (d) for dual (counting two units). ## 6\. Evaluation In this section we evaluate the quality of our annotations for the Lebanese corpus. We performed two evaluations: (i) inter-annotator agreement using Cohen's kappa $\kappa$, and (ii) the F1-score between each annotator and an expert annotator. The results of the two evaluations are summarized in Tables 5 and 6. To compute the inter-annotator agreement, we randomly selected annotated sentences that together consist of 400 tokens, i.e., 4.2% of the corpus. We divided these sentences among our four annotators, such that each annotator re-annotates about 100 tokens that were annotated by another. We used these 400 new annotations to compute the Cohen's kappa $\kappa$ agreement coefficient. The inter-annotator agreement per annotation feature was computed. Table 5 lists the name of each feature and the $\kappa$ metric [Di Eugenio and Glass, 2004]: $\kappa=\frac{p_{o}-p_{e}}{1-p_{e}}$ where $p_{o}$ is the relative observed agreement among annotators and $p_{e}$ is the hypothetical expected agreement. In the second evaluation, an expert went over the 400 tokens and corrected the original annotations, if needed. We used these corrections to compute precision and recall, with the main expert annotator considered as the reference.
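Before listing the expert's correction actions, the following is a minimal sketch of how such per-feature agreement and correction metrics can be computed with the Python sklearn.metrics package (which, as noted below, was used for the reported scores); the toy label lists and the per-feature data handling are illustrative assumptions.

```python
# Sketch of the per-feature agreement/correction metrics (assumption: the
# labels for one feature, e.g. POS, are collected as parallel lists per token;
# the exact data handling in the annotation pipeline may differ).
from sklearn.metrics import cohen_kappa_score, precision_recall_fscore_support

def agreement(labels_a, labels_b):
    # Cohen's kappa between two annotators over the same tokens.
    return cohen_kappa_score(labels_a, labels_b)

def expert_scores(expert_labels, annotator_labels):
    # Precision/recall/F1 of an annotator against the expert reference,
    # weighted by the support of each label value.
    p, r, f1, _ = precision_recall_fscore_support(
        expert_labels, annotator_labels, average="weighted", zero_division=0)
    return p, r, f1

# Illustrative POS labels for five tokens.
annotator_1 = ["NOUN", "VERB", "PREP", "NOUN", "ADJ"]
annotator_2 = ["NOUN", "VERB", "NOUN", "NOUN", "ADJ"]
print(agreement(annotator_1, annotator_2))
print(expert_scores(annotator_1, annotator_2))
```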
The expert annotator performed the following correction actions per feature value: * • Approved a feature value annotation (increments $tp$: true positives for the feature value) * • Approved a missing feature value annotation (increments $tn$: true negatives for the feature value) * • Rejected a feature value annotation (increments $fp$: false positives for the feature value) * • Rejected a missing feature value annotation (increments $fn$: false negatives for the feature value) The precision $\frac{tp}{tp+fp}$ reflects the ratio of the true positives over the sum of true positives and false positives. The recall $\frac{tp}{tp+fn}$ reflects the ratio of the true positives over the sum of true positives and false negatives. Table 6 reports the average precision and recall across all feature values for each feature. We also computed the F1-score based on the precision and recall as: $\text{F1-score}=\frac{2\cdot\text{precision}\cdot\text{recall}}{\text{precision}+\text{recall}}$ The overall kappa $\kappa$, precision, recall, and F1-score for all features are calculated using the Python sklearn.metrics package. We present the average weighted by the support of each label for precision and recall. According to the interpretation of the $\kappa$ score [McHugh, 2015], the stem, prefix, POS, and aspect features scored near perfect agreement (above .81), while the suffix, person, gender, and number features scored substantial agreement (between .61 and .80). The precision and recall scores of the corrected items show values that concur with the $\kappa$ coefficient.

Tag | Values | Agreement | Disagreement | Kappa
---|---|---|---|---
Stem | 178 | 357 | 43 | 0.884
Prefix | 41 | 380 | 20 | 0.860
Suffixes | 55 | 358 | 42 | 0.738
POS | 22 | 340 | 60 | 0.821
Person | 3 | 359 | 41 | 0.629
Aspect | 4 | 384 | 16 | 0.911
Gender | 3 | 337 | 63 | 0.687
Number | 4 | 347 | 53 | 0.741
Overall | | | | 0.785

Table 5: Cohen's kappa coefficient for the inter-annotator agreement per feature.

Feature | Precision | Recall | F1-Score
---|---|---|---
Stem | 0.9036 | 0.8935 | 0.893
Prefixes | 0.964 | 0.95 | 0.955
Suffixes | 0.948 | 0.895 | 0.915
POS | 0.898 | 0.85 | 0.853
Person | 0.928 | 0.898 | 0.910
Aspect | 0.974 | 0.96 | 0.967
Gender | 0.845 | 0.843 | 0.844
Number | 0.881 | 0.868 | 0.873
Overall | 0.918 | 0.894 | 0.901

Table 6: The precision, recall, and F1-score metrics for the main expert corrections. These results reflect some areas of disagreement between the annotators. A notable example is prepositions that have a pronoun attached to them, such as معها>, where no gender or number should be assigned. In such an example, some annotators assigned the gender and number of the suffix ها> to the token مع>. That was corrected to be a gender-less and numberless preposition. Other disagreements are present in instances where the suffix ة> that indicates the feminine gender is not annotated as a suffix but is merged with the stem of the word. Some POS disagreements occur, for example, where gender-less and numberless adverbs (such as كيف>) are annotated as prepositions or interrogative adverbs, which is not a striking disagreement in itself. ## 7\. Curras Revisions In order to ensure compatibility with Curras annotations, tagsets, and lemmas, some revisions to Curras were necessary. Curras consists of 55,889 tokens. Each token was fully annotated with the morphological features that we adopted in section 5.
Since the same token can be used in the same way (i.e., the same features) in different sentences, it is expected that the exact annotations will be repeated. However, we found that this is not always the case in Curras. For example, the same word يدوروا> يدوروا appeared in two different sentences in Curras, with the same meaning “searching for”, but each time with a different MSA lemma: (بَحَث>) and (بحث>); while it should be (1 بَحَثَ۪>). The adverb بس> بس was correctly annotated in all occurrences in Curras; however, in some cases, it was mistakenly assigned with gender; and in some cases, it was annotated with the noun POS. We also found some typos in the tagsets of the stems and affixes. Our goal is to unify and normalize such variations, and then build a list of morphological solutions as clean as possible.We performed the following revision steps: a. Tokenization and POS We developed a POS parser that reads the prefixes, stem, and suffixes in a given solution (i.e., annotations of a token), and returns a validation flag. We carefully inspected solutions that were flagged for review. The POS parser validates the following: (i) no parsing errors in the prefixes, stem, and suffixes, (ii) the transliterations of the prefixes, stem, and suffixes in Buckwalter are correct, (iii) the concatenation of the prefixes, stem, and suffixes corresponds to that of the CODA, (iv) every prefix should be in the predefined set of prefixes, (v) every suffix should be in the predefined set of suffixes, and (vi) every stem POS should be in the SAMA POS tagset. b. Lemmatization Curras originally contained 8,560 unique lemmas. Although Curras was annotated using SAMA lemmas, some of Curras lemmas were incorrectly linked with SAMA lemmas. This was mostly because of partial diacritization of lemmas (e.g., بحث>) or as the lemma subscript is ignored (e.g., بَحَث>). Ignoring diacritics and subscripts makes the lemma ambiguous. Thus, we cannot know, for example, whether it is (1 بَحَثَ۪>), (2 بَحْث>), or (1 بَحْث>). To disambiguate MSA lemmas in Curras and link them with SAMA lemmas, we developed a lemma disambiguator that takes the lemma, POS, and gloss, and tries to reduce the number of choices. In case one lemma is returned, it is then considered the correct SAMA lemma, otherwise undecided. We were able to disambiguate about 5,120 unique lemmas (i.e., 58%) in this way. The remaining undecided 3,560 lemmas were manually disambiguated and linked with SAMA. As a result, the unique number of MSA lemmas in Curras now is 7,313. These include 6,781 that are mapped with SAMA lemmas, and 432 MSA lemmas that are not found in SAMA. We marked the latter with ``_0''. Validating and unifying dialect lemmas was straightforward. In case a dialect lemma has the same letters as the MSA lemma (i.e., ignoring diacritics and subscripts) then it is the same lemma. So, we replace the dialect lemmas with the MSA lemma. Otherwise, a manual verification is performed. As a result, the unique number of dialect lemmas in Curras is 8,510. These include 7,785 lemmas equivalent to MSA lemmas, and 1,012 dialect lemmas that have no corresponding MSA lemmas. We also marked the latter with ``_0''. c. Other features We applied some heuristics in cleaning the Person, Aspect, Gender, and Number features. For example, the Aspect and Person are assigned only to verbs, otherwise they should be ``-''. We compared the Gender and Number with the suffix tags which also indicate gender and number, and corrected mistakes manually when needed. d. 
Generating Unique Solutions We prepared a table with unique annotations from Curras, called the ``Solutions'' table. We reused these solutions to annotate the Lebanese corpus in order to maximize the compatibility between both corpora. To do this, we split Curras into two tables: Tokens and Solutions, with a solution identifier (id) to link them. The Tokens table contains only the token id, token, and solution id. In this manner, the tokens that have the exact same annotations are given the same solution id. The Solutions table contains all annotations after removing the exact redundancies, which results in 16,244 solutions. We considered two solutions to be identical if they have the same CODA, prefixes, stem, suffixes, DA lemma, MSA lemma, Person, Aspect, Gender, and Number. We uploaded the Solutions table into our AnnoSheet and enabled our annotators to look up and re-use annotations from the Solutions table, as described in Section 5. As a result of this effort, we envision that the revised version of Curras, together with the additions from Baladi, the newly built Lebanese corpus, forms a more general Levantine dialect corpus. ## 8\. Discussion: a more Levantine Corpus In this section, we discuss how the Palestinian and Lebanese corpora can be used together as a single, more general Levantine corpus. Not only are they annotated with the same tagsets, as discussed earlier, but adding 9.6K annotated Lebanese tokens to the Palestinian corpus Curras has helped bridge the nuanced linguistic gaps that exist between the two highly mutually intelligible dialects. Those nuances, as discussed earlier in this paper, are notably present in the affixes (i.e., morphology). Indeed, some prefixes and suffixes are typically Palestinian and not habitually used in Lebanese and vice-versa. However, these differences are few and the majority of affixes are common to both dialects (See the differences in section 5.1). Additionally, Lebanese functional words have also been incorporated, solidifying our idea of a more Levantine corpus where the dialectal continuum is taken into account. In fact, the majority of the functional words are common to both dialects. Table 7 presents frequent functional words that are different in the two dialects and the mapping between them. To summarize, both corpora together consist of about 65.2K tokens, covering both Palestinian and Lebanese, annotated using the same guidelines. Table 7: Frequent functional words in Lebanese (right) and Palestinian (left). ## 9\. Conclusions and Future Work In this paper we presented the first morphologically annotated corpus for the Lebanese dialect, consisting of 9.6K tokens. We also presented a revised version of Curras, the Palestinian dialect corpus, of about 55.9K tokens. We also described the various challenges we faced and the measures we took to produce a compatible and more general Levantine corpus, consisting of about 65.2K tokens annotated with rich morphological and semantic information. Still, the evaluation of our annotators’ performance shows a high degree of consistency and agreement. The Lebanese corpus is available for downloading and browsing online. We plan to increase the size of our corpus to cover additional Levantine sub-dialects, especially those of other Levantine areas, most notably some of Syria’s dialectal varieties. We also plan to use this corpus to develop morphological analyzers and word-sense disambiguation systems for Levantine Arabic as we did for MSA (see [Al-Hajj and Jarrar, 2021a, Al-Hajj and Jarrar, 2021b]).
Additionally, we plan to build on the Palestinian and Lebanese dialect lemmas to develop a Levantine-MSA-English Lexicon and extend it with synonyms [Jarrar et al., 2021]. Both Curras and Baladi corpora are also being annotated with named-entities as part of the Wojood NER corpus see [Jarrar et al., 2022]. ## 10\. Acknowledgements This research is partially funded by the Palestinian Higher Council for Innovation and Excellence. The authors also acknowledge the great efforts of many students who helped in the annotation process, especially Tamara Qaimari, Shimaa Hamayel, Dua Shwiki, Asala Hamed, and Ahd Nazeeh. ## 11\. Bibliographical References ## References * Al-Hajj and Jarrar, 2021a Al-Hajj, M. and Jarrar, M. (2021a). Arabglossbert: Fine-tuning bert on context-gloss pairs for wsd. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021), pages 40–48, Online, sep. INCOMA Ltd. * Al-Hajj and Jarrar, 2021b Al-Hajj, M. and Jarrar, M. (2021b). Lu-bzu at semeval-2021 task 2: Word2vec and lemma2vec performance in arabic word-in-context disambiguation. In Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021), pages 748–755, Online, aug. Association for Computational Linguistics. * Al-Sabbagh and Girju, 2012 Al-Sabbagh, R. and Girju, R. (2012). YADAC: Yet another dialectal Arabic corpus. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), pages 2882–2889, Istanbul, Turkey, May. European Language Resources Association (ELRA). * Al-Shargi et al., 2016 Al-Shargi, F., Kaplan, A., Eskander, R., Habash, N., and Rambow, O. (2016). Morphologically annotated corpora and morphological analyzers for Moroccan and sanaani yemeni Arabic. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 1300–1306, Portorož, Slovenia, May. European Language Resources Association (ELRA). * Alhafi et al., 2019 Alhafi, D., Deik, A., and Jarrar, M. (2019). Usability evaluation of lexicographic e-services. In The 2019 IEEE/ACS 16th International Conference on Computer Systems and Applications (AICCSA), pages 1–7. IEE, November. * Bouamor et al., 2014 Bouamor, H., Habash, N., and Oflazer, K. (2014). A multidialectal parallel corpus of Arabic. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 1240–1245, Reykjavik, Iceland, May. European Language Resources Association (ELRA). * Briquel Chatonnet, 2005 Briquel Chatonnet, F. (2005). De l'intérêt de l'étude du garshouni et des manuscrits écrits selon ce système. In L'Orient Chrétien dans l'Empire musulman, en hommage au Professeur Gérard Troupeau, Studia Arabica III, pages 463–475. Editions de Paris. * Buckwalter, 2004 Buckwalter, T. (2004). Buckwalter arabic morphological analyzer version 2.0. LDC2004L02, dec. * Canavan et al., 1997 Canavan, A., Zipperlen, G., and Graff, D. (1997). Callhome egyptian arabic speech. LDC97S45. * Darwish et al., 2021 Darwish, K., Habash, N., Abbas, M., Al-Khalifa, H., Al-Natsheh, H. T., Bouamor, H., Bouzoubaa, K., Cavalli-Sforza, V., El-Beltagy, S. R., El-Hajj, W., Jarrar, M., and Mubarak, H. (2021). A panoramic survey of natural language processing in the arab worlds. Commun. ACM, 64(4):72–81, April. * Di Eugenio and Glass, 2004 Di Eugenio, B. and Glass, M. (2004). The kappa statistic: A second look. Computational Linguistics, 30(1):95–101. 
* Diab et al., 2010 Diab, M., Habash, N., Rambow, O., Altantawy, M., and Benajiba, Y. (2010). Colaba: Arabic dialect annotation and processing. LREC Workshop on Semitic Language Processing, pages 66–74, 01. * Eskander et al., 2016 Eskander, R., Habash, N., Rambow, O., and Pasha, A. (2016). Creating resources for dialectal Arabic from a single annotation: A case study on Egyptian and Levantine. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 3455–3465, Osaka, Japan, December. The COLING 2016 Organizing Committee. * Habash et al., 2012 Habash, N., Eskander, R., and Hawwari, A. (2012). A morphological analyzer for Egyptian Arabic. In Proceedings of the Twelfth Meeting of the Special Interest Group on Computational Morphology and Phonology, pages 1–9, Montréal, Canada, June. Association for Computational Linguistics. * Habash et al., 2015 Habash, N., Jarrar, M., Alrimawi, F., Akra, D., Zalmout, N., Bartolotti, E., and Arar, M. (2015). Palestinian arabic conventional orthography guidelines. Technical report, Birzeit University. * Hajič et al., 2004 Hajič, J., Smrž, O., Petr, Z., Snaidauf, J., and Beška, E. (2004). Prague arabic dependency treebank: development in data and tools. Proc. of the NEMLAR Intern. Conf. on Arabic Language Resources and Tools, 01. * Harrat et al., 2014 Harrat, S., Meftouh, K., Abbas, M., and Smaïli, K. (2014). Building resources for algerian arabic dialects. In INTERSPEECH. * Jarrar and Amayreh, 2019 Jarrar, M. and Amayreh, H. (2019). An arabic-multilingual database with a lexicographic search engine. In The 24th International Conference on Applications of Natural Language to Information Systems (NLDB 2019), volume 11608 of LNCS, pages 234–246. Springer, June. * Jarrar et al., 2014 Jarrar, M., Habash, N., Akra, D., and Zalmout, N. (2014). Building a corpus for palestinian arabic: a preliminary study. In Proceedings of the EMNLP 2014, Workshop on Arabic Natural Language, pages 18–27. Association For Computational Linguistics, October. * Jarrar et al., 2017 Jarrar, M., Habash, N., Alrimawi, F., Akra, D., and Zalmout, N. (2017). Curras: An annotated corpus for the palestinian arabic dialect. Journal Language Resources and Evaluation, 51(3):745–775, September. * Jarrar et al., 2019 Jarrar, M., Amayreh, H., and McCrae, J. P. (2019). Representing arabic lexicons in lemon - a preliminary study. In The 2nd Conference on Language, Data and Knowledge (LDK 2019), volume 2402, pages 29–33. CEUR Workshop Proceedings, May. * Jarrar et al., 2021 Jarrar, M., Karajah, E., Khalifa, M., and Shaalan, K. (2021). Extracting synonyms from bilingual dictionaries. In Proceedings of the 11th International Global Wordnet Conference (GWC2021), pages 215–222. Global Wordnet Association, Jan. * Jarrar et al., 2022 Jarrar, M., Khalilia, M., and Ghanem, S. (2022). Wojood: Nested arabic named entity corpus and recognition using bert. In Proceedings of the International Conference on Language Resources and Evaluation (LREC 2022), Marseille, France, June. * Jarrar, 2011 Jarrar, M. (2011). Building a formal arabic ontology (invited paper). In Proceedings of the Experts Meeting on Arabic Ontologies and Semantic Networks. ALECSO, Arab League, 7. * Jarrar, 2021 Jarrar, M. (2021). The arabic ontology - an arabic wordnet with ontologically clean content. Applied Ontology Journal, 16(1):1–26. * Khalifa et al., 2018 Khalifa, S., Habash, N., Eryani, F., Obeid, O., Abdulrahim, D., and Al Kaabi, M. (2018). 
A morphologically annotated corpus of emirati Arabic. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan, May. European Language Resources Association (ELRA). * Kilany et al., 2002 Kilany, H., Gadalla, H., Arram, H., Yacoub, A., El-Habashi, A., and McLemore, C. (2002). Egyptian colloquial arabic lexicon. LDC99L22, jul. * Maamouri et al., 2005 Maamouri, M., Bies, A., Buckwalter, T., Jin, H., and Mekki, W. (2005). Arabic treebank: Part 3 (full corpus) v 2.0. LDC2005T20, June. * Maamouri et al., 2006 Maamouri, M., Bies, A., Buckwalter, T., Diab, M., Habash, N., Rambow, O., and Tabessi, D. (2006). Developing and using a pilot dialectal Arabic treebank. In Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC'06), Genoa, Italy, May. European Language Resources Association (ELRA). * Maamouri et al., 2010 Maamouri, M., Graff, D., Bouziri, B., Krouna, S., Bies, A., and Kulick, S. (2010). Ldc standard arabic morphological analyzer (sama) version 3.1. LDC2010L01, July. * Maamouri et al., 2014 Maamouri, M., Bies, A., Kulick, S., Ciul, M., Habash, N., and Eskander, R. (2014). Developing an Egyptian Arabic treebank: Impact of dialectal morphology on annotation and tool development. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14), pages 2348–2354, Reykjavik, Iceland, May. European Language Resources Association (ELRA). * McHugh, 2015 McHugh, M. L. (2015). Interrater reliability: the kappa statistic. Biochemia medica, 22. * Ntelitheos and Idrissi, 2017 Ntelitheos, D. and Idrissi, A. (2017). Language growth in child emirati arabic. Perspectives on Arabic Linguistics XXIX, 5:229–248. * Skaf, 2015 Skaf, R. (2015). Le morphème d= en araméen-syriaque : étude d'une polyfonctionalité à plusieurs échelles syntaxiques. Theses, Université Sorbonne Paris Cité ; Università degli studi (Torino, Italia), November. * Smrž, 2007 Smrž, O. (2007). ElixirFM – implementation of functional Arabic morphology. In Proceedings of the 2007 Workshop on Computational Approaches to Semitic Languages: Common Issues and Resources, pages 1–8, Prague, Czech Republic, June. Association for Computational Linguistics. * Zbib et al., 2012 Zbib, R., Malchiodi, E., Devlin, J., Stallard, D., Matsoukas, S., Schwartz, R., Makhoul, J., Zaidan, O. F., and Callison-Burch, C. (2012). Machine translation of Arabic dialects. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 49–59, Montréal, Canada, June. Association for Computational Linguistics. * Zribi et al., 2015 Zribi, I., Ellouze, M., Belguith, L. H., and Blache, P. (2015). Spoken tunisian arabic corpus "stac": Transcription and annotation. Res. Comput. Sci., 90:123–135. lrec2022-bib
# Fast and Accurate Scene Parsing via Bi-direction Alignment Networks ###### Abstract In this paper, we propose an effective method for fast and accurate scene parsing called Bidirectional Alignment Network (BiAlignNet). Previously, one representative work, BiSeNet [1], uses two different paths (Context Path and Spatial Path) to achieve balanced learning of semantics and details, respectively. However, the relationship between the two paths is not well explored. We argue that both paths can benefit each other in a complementary way. Motivated by this, we propose a novel network that aligns two-path information into each other through a learned flow field. To avoid noise and semantic gaps, we introduce a Gated Flow Alignment Module to align both features in a bidirectional way. Moreover, to make the Spatial Path learn more detailed information, we present an edge-guided hard pixel mining loss to supervise the aligned learning process. Our method achieves 80.1% and 78.5% mIoU on the validation and test sets of Cityscapes while running at 30 FPS with full resolution inputs. Code and models will be available at https://github.com/jojacola/BiAlignNet. Index Terms— Bidirectional Alignment Network, Fast and Accurate Scene Parsing ## 1 Introduction (a) Atrous Conv[2] (b) FPN[3] (c) BiSeNet[1] (d) Proposed BiAlignNet Fig. 1: Comparison of different segmentation architectures. 1(a) uses atrous convolution layers to obtain a larger receptive field and a high resolution feature map, but this introduces heavy computational complexity. 1(b) is an FPN-like model. It gets a high resolution feature map by adding top-down and lateral fusions. 1(c) shows the structure of BiSeNet[1]. We propose 1(d) to maximize the mutual utilization of the two paths and to add different supervision according to their priorities. Best viewed in color. Semantic segmentation is a fundamental vision task that aims to classify each pixel in an image correctly. Some earlier approaches [4, 5] use structured prediction operators such as conditional random fields (CRFs) to refine segmentation results. Recent methods for semantic segmentation are predominantly based on FCNs [6]. Current state-of-the-art methods [7, 8, 9] apply atrous convolutions [2] at the last several stages of their networks to yield feature maps with strong semantic representation while at the same time maintaining high resolution, as shown in Fig. 1(a). Moreover, several methods are based on Feature Pyramid Network (FPN)-like [3, 10, 11] models, which leverage a lateral path to fuse feature maps in a top-down manner. In this way, the deep features of the last several layers strengthen the shallow features with high resolution. Therefore, the refined features can keep high resolution while capturing semantic representation, which benefits accuracy, as shown in Fig. 1(b). However, neither design is practical for real-time settings. The former methods [7, 8] require extra computation since the feature maps in the last stages can be up to 64 times larger than those in FCNs. Meanwhile, the latter [10] has a heavier fusion operation in its decoder. For example, on a single GTX 1080Ti GPU, the representative model PSPNet [7] has a frame rate of only 1.6 FPS for $1024\times 2048$ input images. As a consequence, this is very problematic for many time-critical applications, such as autonomous driving and robot navigation, which desperately demand real-time online data processing.
There are several specifically designed real-time semantic segmentation models [12, 13, 1, 14] that address the above issues. However, these methods cannot match the segmentation quality of the accurate models. The representative BiSeNet works [1, 14] propose to use two different paths for learning spatial details and coarse context information respectively, as shown in Fig. 1(c). However, they have not explicitly explored the interaction between the two data flows. We believe these two data flows contain complementary content that can benefit each other. In this paper, we propose a new network architecture for real-time scene parsing settings. As shown in Fig. 1(d), the two paths interact with each other through specifically designed modules before fusion. Motivated by a recent alignment module [15], which deforms the entire feature map using a learned flow field, we propose a Gated Flow Alignment Module to avoid noise during fusion, since the two paths contain diverse information. The proposed module is light-weight and can be inserted into each path before fusion. The features are aligned to each other through the learned flow fields. Moreover, to make the spatial path learn detailed information, we supervise it using an edge-guided hard pixel mining loss [16] to further improve the performance. We term our network BiAlignNet for short. Finally, we evaluate BiAlignNet on two datasets, i.e., Cityscapes [17] and CamVid [18]. The results demonstrate the effectiveness of the proposed components. Specifically, our method improves the original BiSeNet baseline by about 2% mIoU on the test set of Cityscapes with only a 3 FPS drop. Our method can achieve 78.5% mIoU while running at 32 FPS on a single 1080Ti without acceleration. ## 2 Method Fig. 2: Overview of BiAlignNet. The context path is in the blue box. The spatial path is in the green box. The orange part represents the bidirectional alignment. Best viewed in color. We present the overall network architecture in Fig. 2. BiAlignNet includes the following three parts: two pathways, namely the Spatial Path and the Context Path, and a Bidirectional Alignment that uses the Gated Flow Alignment Module to align features in both directions. Finally, we specially design the loss functions explained in Sec. 2.3 to supervise the different kinds of information in the two paths. ### 2.1 Spatial Path and Context Path We briefly review the spatial and context paths in BiSeNet [1]. The spatial path is designed to capture the low-level information from the input image. We only use shallow layers to preserve spatial details. It consists of only three convolution layers with batch normalization and ReLU. Each layer has a stride of 2, so the final feature map of the spatial path is $\frac{1}{8}$ of the input size. The context path is responsible for extracting high-level information using a deeper network with more downsampling operations. For implementation, we employ the lightweight DFNet [19] backbone series for the context path. The Pyramid Pooling Module (PPM) [7], which has shown a strong ability to capture contextual information, is also added to our model. All backbones have four stages of residual blocks, and the first layer of each stage has a stride of 2. Thus, the final output of the context path is $\frac{1}{32}$ of the input size. ### 2.2 Bidirectional Alignment In this section, we present a Gated Flow Alignment Module (GFAM) to align features with each other. The original FAM [15] was proposed to align adjacent features in the decoder.
However, directly using such a module may lead to inferior results because of the huge semantic gap between the two paths. Thus, we plug a gate into the FAM to avoid the noises and highlight the important information. Suppose $\mathbf{F}_{s}$ is the source feature, and we want to align the information from $\mathbf{F}_{s}$ to target feature $\mathbf{F}_{t}$. Inspired by original FAM [15], we first generate a flow field grid $G$: $G=conv(cat(\mathbf{F}_{s}||\mathbf{F}_{t})),$ (1) where $\mathbf{F}_{s}$ and $\mathbf{F}_{t}$ can be features from the spatial path and the context path respectively, and vice versa. The feature map that has a smaller size is bilinearly upsampled to reach the same size as the larger one. After flow field grid generation, we adopt a pixel-wise gate to emphasize the important part in current data flow: $\hat{G}=\sigma(conv(\mathbf{F}_{t}))\odot G,$ (2) where $\hat{G}$ is the gated flow field grid, $\sigma$ means the sigmoid layer and $\odot$ represents element-wise product. Each position $p$ in target feature $\mathbf{F}_{t}$ can be mapped to a position $p^{\prime}$, according to the values in gated flow field grid $\hat{G}$. Note that the mapping result is not an integer, so the value at $\mathbf{F}_{t}(p^{\prime})$ is interpolated by the values of the 4-neighbors $\mathcal{N}\left(p^{\prime}\right)$ (top-left, top-right, bottom-left, and bottom-right): $\hat{\mathbf{F}_{t}}\left(p\right)=\sum_{i\in\mathcal{N}\left(p^{\prime}\right)}w_{p}\mathbf{F}_{t}(p^{\prime}),$ (3) where $w_{p}$ is the bilinear kernel weights estimated by the distance of warped grid, $\hat{\mathbf{F}_{t}}$ is the target feature aligned with information from source feature $\mathbf{F}_{s}$. In BiAlignNet, we take both spatial feature and context feature as source features to align with each other bidirectionally. In this way, different pieces of information can complement each other, as shown in the orange box of Fig. 2. ### 2.3 Loss Function The spatial path gives priority to spatial details while context path focuses on high-level semantic context. To force spatial path to focus on detailed information, we introduce an edge-guided hard pixel indicator map $d$ to supervise the learning. $d$ is predicted from the spatial path feature and normalized by a sigmoid layer. Since most of the fine information are concentrated in the boundaries, the edge map $b$ is derived from the segmentation labels through algorithm [20] which retrieves contours from the binary image. We utilize the edge map $b$ to guide the prediction of indicator $d$. As for context path, we use cross-entropy loss with online hard example mining (OHEM) [16, 1]. We jointly supervise two paths with a loss function $L$: $L=L_{spatial}(d,b,s,g)+L_{context}(s,g),$ (4) where $s$ is the predicted segmentation output of the model and $g$ is the ground truth segmentation labels, and $L_{context}$ is the OHEM loss. $L_{spatial}$ is calculated from the following equation. $L_{spatial}=\lambda L_{bce}(d,b)+L_{hard}(s,g,d),$ (5) $L_{hard}=-\frac{1}{K}\sum_{i=1}^{N}\mathbbm{1}\left[s_{i,g_{i}}<t_{K}\&d_{i}>t_{b}\right]\cdot\log s_{i,g_{i}},$ (6) where $L_{bce}$ is the binary cross-entropy loss for edge-guided hard pixel indicator $d$, $L_{hard}$ mines the hard pixels with high probability in $d$ and calculate the cross-entropy loss. $N$ is the total number of pixels. $\mathbbm{1}[x]=1$ if $x=1$ otherwise 0. First Eq. 6 filters the positions that have a higher probability than threshold $t_{b}$=0.8 in $d$. 
Then it picks positions within top $K$ losses, where $t_{K}$ is the threshold for top $K$ loss. Empirically, we set $\lambda=25$ to balance the losses in all experiments. In this way, the spatial path learns more detailed information during the training. ## 3 Experiment ### 3.1 Datasets We carry out experiments on Cityscapes and Camvid datasets. Cityscapes [17] is a large street scene dataset which contains 2,975 fine-annotated images for training, 500 images for validation and a testing set without annotations of 1,525 images. All images in this dataset have a high resolution of 1,024$\times$2,048. CamVid [18] is another road scene dataset. This dataset contains 367 training images, 101 validation images and 233 testing images with a resolution of $720\times 960$. ### 3.2 Speed and Accuracy Analysis Implementation Details. Our experiments are done with the PyTorch framework. We use stochastic gradient descent (SGD) with a batch size of 16 and a momentum of 0.9 and weight decay of 5e-4. The initial learning rate is 0.01 with a ”poly” learning rate strategy in which the initial rate is multiplied by $\left(1-\frac{\text{ iter }}{\text{total\\_iter}}\right)^{0.9}$. As for data augmentation, we randomly horizontally flip the images and randomly resize them with a scale of [0.5, 2.0], and crop images to a size of 1024$\times$1024 (720$\times$720 for CamVid). We use the single scale inference and report the speed with one 1080Ti GPU. Table 1: Comparison on Cityscapes val and test set with state-of-the-art real- time models. Notation: $\gamma$ is the downsampling ratio corresponding to the original $1024\times 2048$ resolution, for example, $\gamma=0.75$ means the model’s input size is $768\times 1536$. ”*” noted methods and ours are tested on single 1080Ti GPU. Method | $\gamma$ | Backbone | mIoU ($\%$) | #FPS | #Params ---|---|---|---|---|--- val | test ENet [21] | 0.5 | - | - | 58.3 | 60 | 0.4M ESPNet [22] | 0.5 | ESPNet | - | 60.3 | 132 | 0.4M ESPNetv2 [23] | 0.5 | ESPNetv2 | 66.4 | 66.2 | 80 | 0.8M ERFNet [24] | 0.5 | - | 70.0 | 68.0 | 41.9 | - BiSeNetv1 [1]∗ | 0.75 | Xception39 | 69.0 | 68.4 | 175 | 5.8M ICNet [12] | 1.0 | PSPNet50 | - | 69.5 | 34 | 26.5M CellNet [25] | 0.75 | - | - | 70.5 | 108 | - DFANet [13] | 1.0 | Xception A | - | 71.3 | 100 | 7.8M BiSeNetv2 [14]∗ | 0.5 | - | 73.4 | 72.6 | 28 | - DF1-Seg [19]∗ | 1.0 | DFNet1 | - | 73.0 | 100 | 8.55M BiSeNetv1 [1]∗ | 0.75 | ResNet18 | 74.8 | 74.7 | 35 | 12.9M DF2-Seg [19]∗ | 1.0 | DFNet2 | - | 74.8 | 68 | 18.88M SwiftNet [26]∗ | 1.0 | ResNet18 | 75.4 | 75.8 | 39.9 | 11.8M FC-HarDNet [27]∗ | 1.0 | HarDNet | 77.4 | 76.0 | 35 | 4.1M SwiftNet-ens [26]∗ | 1.0 | - | - | 76.5 | 18.4 | 24.7M BiAlignNet | 0.75 | DFNet2 | 76.8 | 75.4 | 50 | 19.2M BiAlignNet | 1.0 | DFNet2 | 78.7 | 77.1 | 32 | 19.2M BiAlignNet† | 0.75 | DFNet2 | 79.0 | 76.9 | 50 | 19.2M BiAlignNet† | 1.0 | DFNet2 | 80.1 | 78.5 | 32 | 19.2M * • †Mapillary dataset used for pretraining. Result Comparison. Table 1 shows the results of our method compared to other state-of-the-art real-time methods. Our method with an input size of $768\times 1536$ can get the best trade-off between accuracy and speed. When input with the whole image, BiAlignNet still runs in real time and gets 78.7% mIoU and 77.1% mIoU on val and test, which outperforms all the methods listed above. After pre-training on Mapillary [28] dataset, our BiAlignNet gains 1.4% improvement. We also apply our method with different light-weight backbones on CamVid dataset and report comparison results in Table 2. 
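For readers who want a concrete picture of the Gated Flow Alignment Module from Sec. 2.2, the following PyTorch-style sketch implements Eqs. (1)-(3); the channel width, the single-convolution flow and gate heads, the flow channel ordering, and the grid_sample-based warping are our assumptions and may differ from the released implementation.

```python
# Minimal PyTorch-style sketch of the Gated Flow Alignment Module (Eqs. 1-3).
# The channel width, the single-convolution flow and gate heads, the choice of
# flow channel ordering (x-offset, y-offset), and the grid_sample-based warping
# are assumptions; the released BiAlignNet code may differ in these details.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GFAM(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Eq. (1): predict a 2-channel flow field from the concatenated features.
        self.flow_conv = nn.Conv2d(2 * channels, 2, kernel_size=3, padding=1)
        # Eq. (2): pixel-wise gate computed from the target feature.
        self.gate_conv = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, f_s, f_t):
        # Bilinearly upsample the smaller feature to the target's spatial size.
        if f_s.shape[-2:] != f_t.shape[-2:]:
            f_s = F.interpolate(f_s, size=f_t.shape[-2:], mode="bilinear",
                                align_corners=False)
        flow = self.flow_conv(torch.cat([f_s, f_t], dim=1))       # Eq. (1)
        flow = torch.sigmoid(self.gate_conv(f_t)) * flow          # Eq. (2)

        # Eq. (3): warp f_t by the gated flow with bilinear sampling.
        _, _, h, w = f_t.shape
        ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
        base = torch.stack((xs, ys), dim=-1).float().to(f_t.device)  # (h, w, 2)
        grid = base.unsqueeze(0) + flow.permute(0, 2, 3, 1)          # add offsets
        # Normalize coordinates to [-1, 1] as required by grid_sample.
        gx = 2.0 * grid[..., 0] / max(w - 1, 1) - 1.0
        gy = 2.0 * grid[..., 1] / max(h - 1, 1) - 1.0
        return F.grid_sample(f_t, torch.stack((gx, gy), dim=-1),
                             mode="bilinear", align_corners=True)
```

In BiAlignNet, one such module is applied in each direction (spatial-to-context and context-to-spatial) before the two features are fused.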
BiAlignNet also achieves state-of-the-art performance on the CamVid. Visualization. In Fig. 3, we visualize flow fields from two directions. Flow from the spatial path to the context path (Column b) contains more detailed information and Column c that is from the context path, includes more high- level information. Thus, different features are aligned to each other under the guidance of learned flow field. Fig. 3(d) shows that BiAlignNet outperforms BiSeNet (Column e) on boundaries and details. Fig. 4 gives more insights into the proposed GFAM module and the hard pixel mining supervision. As shown in Column b, gates from the spatial path assign higher scores on image details. It confirms that the gate in GFAM can filter the noise and highlight the significant part in the flow field. Fig. 4(c) and (d) visualize hard pixels used in $L_{hard}$ and the predicted indicator map by the spatial path. They are consistent with the fact that edge-guided hard pixel mining pays more attention to fine-grained objects and edges that are difficult to separate. Table 2: Comparison on the CamVid test set with previous state-of-the-art real-time models. Method | Backbone | mIoU ($\%$) | #FPS ---|---|---|--- DFANet B [13] | - | 59.3 | 160 SwiftNet [26] | ResNet18 | 63.33 | - DFANet A [13] | - | 64.7 | 120 ICNet [12] | ResNet-50 | 67.1 | 34.5 BiSeNetv1 [1] | ResNet18 | 68.7 | 60 BiSeNetv2 [14] | - | 72.4 | 60 BiSeNetv$\text{2}^{*}$ [14] | - | 76.7 | 60 BiAlignNet | DFNet1 | 68.9 | 85 BiAlignNet | DFNet2 | 72.3 | 65 BiAlignNe$\text{t}^{*}$ | DFNet2 | 77.1 | 65 * • * Cityscapes dataset used for pretraining. Fig. 3: Visualization of learned flow field and segmentation output. Column (a) lists three exemplary images. Column (b) and (c) show the flow field in two directions, spatial to context and context to spatial correspondingly. Column (d) and (e) show the comparison between BiAlignNet and BiSeNet. Best viewed on screen and zoom in. Fig. 4: Visualization of flow gate, hard examples in spatial loss and predicted edges. Column (a) lists input images. Column (b) shows the gate map from spatial path to context path. Column (c) shows the hard examples in $L_{hard}$. Column (d) illustrates the predicted hard pixel indicator map from the spatial path. Best viewed on screen and zoom in. ### 3.3 Ablation Study We carry out ablation studies on each component of BiAlignNet in this section. As shown in Table 3, our proposed module only introduces a very small amount of computation. Ablation for bidirectional alignment. We argue that insufficiently feature fusion leads to low performance in previous BiSeNet. As we can see in Table 3, compared to the baseline that simply concatenates two feature maps, bidirectional alignment with GFAM can improve performance by 2.4%. Moreover, the alignments in two directions show the synergistic effects with each other. The performance increase brought by bidirectional alignment is more than the two one-way models. Also, the simple gate mechanism in GFAM results in a 0.8% performance increase. Ablation for the spatial loss. We expect two paths to learn different contents from the input, especially the spatial path. Thus, we enhance the detail supervision in the spatial path through the specially designed spatial loss with a hard pixel mining indicator. After adding the spatial loss, the performance has improved by 0.9%. This proves the effectiveness of the designed spatial loss function. Table 3: Ablation Study. 
We show the effectiveness of each component in BiAlignNet with DFNet2 on validation set of Cityscapes. CP: Context Path; SP: Spatial Path; GFAM: Gated Flow Alignment Module; FAM: original Flow Alignment Module; $\xrightarrow{}$: Alignment direction; SL: Spatial Loss. Method | mIoU ($\%$) | $\Delta$ ($\%$) | #GFLOPs ---|---|---|--- CP + SP (baseline) | 75.4 | - | 108 CP + SP + GFAM (CP$\xrightarrow{}$SP) | 76.5 | 1.1$\uparrow$ | 108.37 CP + SP + GFAM (SP$\xrightarrow{}$CP) | 76.6 | 1.2$\uparrow$ | 108.36 CP + SP + FAM (bidirection) | 77.0 | 1.6$\uparrow$ | 108.72 CP + SP + GFAM (bidirection) | 77.8 | 2.4$\uparrow$ | 108.73 CP + SP + GFAM (bidirection) + SL | 78.7 | 3.3$\uparrow$ | 108.73 ## 4 Conclusion In this paper, we propose a Bidirectional Alignment Network (BiAlignNet) for fast and accurate scene parsing. With the bidirectional alignment and specific supervision in each pathway, the low-level spatial feature can be deeply fused with the high-level context feature. Comparative experiments are performed to show the effectiveness of our proposed components over the baseline models. BiAlignNet also achieves a considerable trade-off between segmentation accuracy and the inference speed. ## References * [1] Changqian Yu, Jingbo Wang, Chao Peng, Changxin Gao, Gang Yu, and Nong Sang, “Bisenet: Bilateral segmentation network for real-time semantic segmentation,” in ECCV, 2018. * [2] Fisher Yu and Vladlen Koltun, “Multi-scale context aggregation by dilated convolutions,” ICLR, 2016. * [3] Tsung-Yi Lin, Piotr Dollár, Ross B. Girshick, Kaiming He, Bharath Hariharan, and Serge J. Belongie, “Feature pyramid networks for object detection,” in CVPR, 2017. * [4] Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L. Yuille, “Semantic image segmentation with deep convolutional nets and fully connected CRFs,” ICLR, 2015. * [5] Xi Li and Hichem Sahbi, “Superpixel-based object class segmentation using conditional random fields,” in ICASSP, 2011. * [6] Jonathan Long, Evan Shelhamer, and Trevor Darrell, “Fully convolutional networks for semantic segmentation,” in CVPR, 2015. * [7] Hengshuang Zhao, Jianping Shi, Xiaojuan Qi, Xiaogang Wang, and Jiaya Jia, “Pyramid scene parsing network,” in CVPR, 2017. * [8] Jun Fu, Jing Liu, Haijie Tian, Zhiwei Fang, and Hanqing Lu, “Dual attention network for scene segmentation,” arXiv preprint arXiv:1809.02983, 2018. * [9] Yi Zhu, Karan Sapra, Fitsum A. Reda, Kevin J. Shih, Shawn Newsam, Andrew Tao, and Bryan Catanzaro, “Improving semantic segmentation via video propagation and label relaxation,” in CVPR, 2019. * [10] Alexander Kirillov, Ross Girshick, Kaiming He, and Piotr Dollar, “Panoptic feature pyramid networks,” in CVPR, 2019. * [11] Olaf Ronneberger, Philipp Fischer, and Thomas Brox, “U-net: Convolutional networks for biomedical image segmentation,” MICCAI, 2015. * [12] Hengshuang Zhao, Xiaojuan Qi, Xiaoyong Shen, Jianping Shi, and Jiaya Jia, “Icnet for real-time semantic segmentation on high-resolution images,” in ECCV, 2018. * [13] Hanchao Li, Pengfei Xiong, Haoqiang Fan, and Jian Sun, “Dfanet: Deep feature aggregation for real-time semantic segmentation,” in CVPR, 2019. * [14] Changqian Yu, Changxin Gao, Jingbo Wang, Gang Yu, Chunhua Shen, and Nong Sang, “Bisenet v2: Bilateral network with guided aggregation for real-time semantic segmentation,” arXiv preprint arXiv:2004.02147, 2020. 
* [15] Xiangtai Li, Ansheng You, Zhen Zhu, Houlong Zhao, Maoke Yang, Kuiyuan Yang, Shaohua Tan, and Yunhai Tong, “Semantic flow for fast and accurate scene parsing,” in ECCV, 2020. * [16] Abhinav Shrivastava, Abhinav Gupta, and Ross Girshick, “Training region-based object detectors with online hard example mining,” in CVPR, 2016. * [17] Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele, “The cityscapes dataset for semantic urban scene understanding,” in CVPR, 2016. * [18] Gabriel J Brostow, Julien Fauqueur, and Roberto Cipolla, “Semantic object classes in video: A high-definition ground truth database,” Pattern Recognition Letters, vol. 30, no. 2, pp. 88–97, 2009. * [19] Xin Li, Yiming Zhou, Zheng Pan, and Jiashi Feng, “Partial order pruning: for best speed/accuracy trade-off in neural architecture search,” in CVPR, 2019. * [20] Satoshi Suzuki and Keiichi Abe, “Topological structural analysis of digitized binary images by border following,” Computer Vision, Graphics, and Image Processing, vol. 30, no. 1, pp. 32–46, 1985. * [21] Adam Paszke, Abhishek Chaurasia, Sangpil Kim, and Eugenio Culurciello, “Enet: A deep neural network architecture for real-time semantic segmentation,” arXiv preprint arXiv:1606.02147, 2016. * [22] Sachin Mehta, Mohammad Rastegari, Anat Caspi, Linda Shapiro, and Hannaneh Hajishirzi, “Espnet: Efficient spatial pyramid of dilated convolutions for semantic segmentation,” in ECCV, 2018. * [23] Sachin Mehta, Mohammad Rastegari, Linda Shapiro, and Hannaneh Hajishirzi, “Espnetv2: A light-weight, power efficient, and general purpose convolutional neural network,” in CVPR, 2019. * [24] Eduardo Romera, Jose M. Alvarez, Luis Miguel Bergasa, and Roberto Arroyo, “Erfnet: Efficient residual factorized convnet for real-time semantic segmentation,” IEEE Transactions on Intelligent Transportation Systems, vol. 19, no. 1, pp. 263–272, 2017. * [25] Yiheng Zhang, Zhaofan Qiu, Jingen Liu, Ting Yao, Dong Liu, and Tao Mei, “Customizable architecture search for semantic segmentation,” in CVPR, 2019. * [26] Marin Orsic, Ivan Kreso, Petra Bevandic, and Sinisa Segvic, “In defense of pre-trained imagenet architectures for real-time semantic segmentation of road-driving images,” in CVPR, 2019. * [27] Ping Chao, Chao-Yang Kao, Yu-Shan Ruan, Chien-Hsiang Huang, and Youn-Long Lin, “Hardnet: A low memory traffic network,” in ICCV, 2019. * [28] Gerhard Neuhold, Tobias Ollmann, Samuel Rota Bulo, and Peter Kontschieder, “The mapillary vistas dataset for semantic understanding of street scenes,” in ICCV, 2017.
# GROOT: Learning to Follow Instructions by Watching Gameplay Videos Shaofei Cai1,2, Bowei Zhang3, Zihao Wang1,2, Xiaojian Ma5, Anji Liu4, Yitao Liang1 Team CraftJarvis 1Institute for Artificial Intelligence, Peking University 2School of Intelligence Science and Technology, Peking University 3School of Electronics Engineering and Computer Science, Peking University 4Computer Science Department, University of California, Los Angeles 5Beijing Institute for General Artificial Intelligence (BIGAI) <EMAIL_ADDRESS> [email protected],liuanji<EMAIL_ADDRESS>Corresponding Author. ###### Abstract We study the problem of building a controller that can follow open-ended instructions in open-world environments. We propose to follow reference videos as instructions, which offer expressive goal specifications while eliminating the need for expensive text-gameplay annotations. A new learning framework is derived to allow learning such instruction-following controllers from gameplay videos while producing a video instruction encoder that induces a structured goal space. We implement our agent GROOT in a simple yet effective encoder- decoder architecture based on causal transformers. We evaluate GROOT against open-world counterparts and human players on a proposed Minecraft SkillForge benchmark. The Elo ratings clearly show that GROOT is closing the human- machine gap as well as exhibiting a 70% winning rate over the best generalist agent baseline. Qualitative analysis of the induced goal space further demonstrates some interesting emergent properties, including the goal composition and complex gameplay behavior synthesis. Code and video can be found on the website https://craftjarvis-groot.github.io. Figure 1: Through the cultivation of extensive gameplay videos, GROOT has grown a rich set of skill fruits (number denotes success rate; skills shown above do not mean to be exhaustive; kudos to the anonymous artist). ## 1 Introduction Developing human-level embodied agents that can solve endless tasks in open- world environments, such as Minecraft (Johnson et al., 2016; Fan et al., 2022), has always been a long-term goal pursued in AI. Recent works have explored using Large Language Models (LLMs) to generate high-level plans, which guide the agent to accomplish challenging long-horizon tasks (Wang et al., 2023b; a; Zhu et al., 2023). However, a major gap between these LLM-based agents and generalist agents that can complete endless amounts of tasks is the capability of their low-level controllers, which map the plans to motor commands. Recently developed controllers are only capable of completing a predefined and narrow set of programmatic tasks (Lin et al., 2021; Baker et al., 2022; Cai et al., 2023), which hinders LLM-based planning agents from unleashing their full potential. We attribute the limitation of these low- level controllers to how the goal is specified. Specifically, existing controllers use task indicator (Yu et al., 2019), future outcome (Chen et al., 2021; Lifshitz et al., 2023), and language (Brohan et al., 2022) to represent the goal. While it is easy to learn a controller with some of these goal specifications, they may not be expressive enough for diverse tasks. Taking future outcome goals as an example, an image of a desired house clearly lacks procedural information on how the house was built. 
One exception is language, but learning a controller that can receive language goals is prohibitively expensive, as it requires a huge number of trajectory-text pairs whose text precisely depicts the full details of the gameplay, preventing such methods from scaling up to more open-ended tasks. Having observed the limitations of goal specification in prior works, this paper seeks to find a balance between the capacity of goal specification and the cost of controller learning. Concretely, we propose to specify the goal as a reference gameplay video clip. While such video instruction is indeed expressive, there are two challenges: 1) How can the controller understand the actual goal being specified, since the video itself can be ambiguous, i.e., a goal space or video instruction encoder has to be learned? 2) How to ultimately map such a goal to actual motor commands? To this end, we introduce a learning framework that simultaneously produces a goal space and a video-instruction-following controller from gameplay videos. The fundamental idea is casting the problem as future state prediction based on past observations: * • The predicting model needs to identify which goal is being pursued from the past observations, which requires a good goal space (induced by a video instruction encoder); * • Since the transition dynamics model is fixed, a policy that maps both the state and the recognized goal to an action is also needed by the predicting model when rolling out the future state predictions. Effectively, this results in the goal space and control policy we need. We introduce a variational learning objective for this problem, which leads to a combination of a cloning loss and a KL regularization loss. Based on this framework, we implement GROOT, an agent with an encoder-decoder architecture that solves open-ended Minecraft tasks by following video instructions. The video encoder is a non-causal transformer that extracts the semantic information expressed in the video and maps it to the latent goal space. The controller policy is a decoder module implemented by a causal transformer, which decodes the goal information in the latent space and translates it into a sequence of actions given the environment states in an autoregressive manner. To comprehensively evaluate an agent’s mastery of skills, we designed a benchmark called Minecraft SkillForge. The benchmark covers six common Minecraft task groups: collect, build, survive, explore, tool, and craft, testing the agent’s abilities in resource collection, structure building, environmental understanding, and tool usage, across a total of 30 tasks. We calculate Elo ratings among GROOT, several counterparts, and human players based on human evaluations. Our experiments show that GROOT is closing the human-machine gap and outperforms the best baseline by 150 points (or a 70% winning rate) in an Elo tournament system. Our qualitative analysis of the induced goal space further demonstrates some interesting emergent properties, including goal composition and complex gameplay behavior synthesis. To sum up, our main contributions are as follows: * • Starting from maximizing the log-likelihood of future states given past ones, we derive learning objectives that lead to a good goal space and, ultimately, an instruction-following controller learned from gameplay videos. This provides theoretical guidance for the agent architecture design and model optimization.
* • Based on our proposed learning framework, we implemented a simple yet efficient encoder-decoder agent based on causal transformers. The encoder is responsible for understanding the goal information in the video instruction while the decoder as the policy emits motor commands. * • On our newly introduced benchmark, Minecraft SkillForge, GROOT is closing the human-machine gap and surpassing the state-of-the-art baselines by a large margin in the overall Elo rating comparison. GROOT also exhibits several interesting emergent properties, including goal composition and complex gameplay behavior synthesis. ## 2 Preliminaries and Problem Formulation Reinforcement Learning (RL) concerns the problem in which an agent interacts with an environment at discrete time steps, aiming to maximize its expected cumulative reward (Mnih et al., 2015; Schulman et al., 2017; Espeholt et al., 2018). Specifically, the environment is defined as a Markov Decision Process (MDP) $\langle\mathcal{S},\mathcal{A},\mathcal{R},\mathcal{P},d_{0}\rangle$, where $\mathcal{S}$ is the state space, $\mathcal{A}$ is the action space, $\mathcal{R}:\mathcal{S}\times\mathcal{A}\rightarrow\mathbb{R}$ is the reward function, $\mathcal{P}:\mathcal{S}\times\mathcal{A}\rightarrow\mathcal{S}$ is the transition dynamics, and $d_{0}$ is the initial state distribution. Our goal is to learn a policy $\pi(a|s)$ that maximizes the expected cumulative reward $\mathbb{E}[\sum_{t=0}^{\infty}\gamma^{t}r_{t}]$, where $\gamma\in(0,1]$ is a discount factor. In goal-conditioned RL (GCRL) tasks, we are additionally provided with a goal $g\in\mathcal{G}$ (Andrychowicz et al., 2017; Ding et al., 2019; Liu et al., 2022; Cai et al., 2023). And the task becomes learning a goal-conditioned policy $\pi(a|s,g)$ that maximizes the expected return $\mathbb{E}[\sum_{t=0}^{\infty}\gamma^{t}r_{t}^{g}]$, where $r_{t}^{g}$ is the goal-specific reward achieved at time step $t$. Apart from being a new type of RL task, GCRL has been widely studied as a pre-training stage toward conquering more challenging environments/tasks (Aytar et al., 2018b; Baker et al., 2022; Zhang et al., 2022). Specifically, suppose we are provided with a good goal-condition policy, the goal can be viewed as a meta-action that drives the agent to accomplish various sub-tasks, which significantly simplifies tasks that require an extended horizon to accomplish. Further, when equipped with goal planners, we can achieve zero- or few-shot learning on compositional tasks that are beyond the reach of vanilla RL algorithms (Huang et al., 2022; Wang et al., 2023b; a; Zhu et al., 2023). At the heart of leveraging such benefits, a key requirement is to have a properly-defined goal space that (i) has a wide coverage of common tasks/behaviors, and (ii) succinctly describes the task without including unnecessary information about the state. Many prior works establish goal spaces using guidance from other modalities such as language (Hong et al., 2020; Stone et al., 2023; Cai et al., 2023) or code (Wang et al., 2023a; Huang et al., 2023). While effective, the requirement on large-scale trajectory data paired with this auxiliary information could be hard to fulfill in practice. Instead, this paper studies the problem of simultaneously learning a rich and coherent goal space and the corresponding goal-conditioned policy, given gameplay videos, i.e., sequences of states $\\{s_{0:T}^{(i)}\\}_{i}$ collected using unknown policies. 
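To fix the interface before deriving the learning objective, the sketch below shows the two components to be learned in their simplest possible form. The module names, feature dimensions, and the mean-pooling stand-in for a transformer are illustrative assumptions made for this sketch only, not GROOT's actual architecture (which is described in Section 4).

```python
# Minimal, illustrative stand-ins for the two learnable components:
# a video (goal) encoder q_phi(g | s_{0:T}) and a goal-conditioned policy pi(a | s, g).
import torch
from torch import nn

class ToyVideoEncoder(nn.Module):
    """Maps a sequence of frame features to the parameters of a Gaussian over latent goals."""
    def __init__(self, frame_dim: int = 512, goal_dim: int = 64):
        super().__init__()
        self.mu = nn.Linear(frame_dim, goal_dim)
        self.log_std = nn.Linear(frame_dim, goal_dim)

    def forward(self, frames: torch.Tensor):          # frames: (B, T, frame_dim)
        pooled = frames.mean(dim=1)                    # stand-in for a non-causal transformer
        return self.mu(pooled), self.log_std(pooled)   # parameters of q_phi(g | s_{0:T})

class ToyGoalConditionedPolicy(nn.Module):
    """pi(a_t | s_t, g): maps the current frame feature and a goal embedding to action logits."""
    def __init__(self, frame_dim: int = 512, goal_dim: int = 64, n_actions: int = 10):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(frame_dim + goal_dim, 256), nn.ReLU(),
                                 nn.Linear(256, n_actions))

    def forward(self, frames: torch.Tensor, goal: torch.Tensor):  # (B, T, F), (B, G)
        goal = goal.unsqueeze(1).expand(-1, frames.shape[1], -1)
        return self.net(torch.cat([frames, goal], dim=-1))        # (B, T, n_actions)
```

Section 3 derives the objective that couples these two pieces; Section 4 replaces the stand-ins with a non-causal transformer encoder and a Transformer-XL policy.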
## 3 Goal Space Discovery via Future State Prediction This section explains our learning framework: discovering a “good” goal space as well as a video instruction following controller through the task of predicting future states given previous ones. We start with an illustrative example in Minecraft (Johnson et al., 2016). Imagine that an agent is standing in a grassland holding an axe that can be used to chop the tree in front of them. Suppose that in the gameplay videos, players either go straight to chop the tree or bypass it to explore the territory. In order to predict future frames, it is sufficient to know (i) which goal (chop the tree or bypass the tree) is being pursued by the agent, and (ii) what will happen if the agent chooses a particular option (i.e., the transition dynamics). Since the latter is irrelevant to the past observations, we only need to capture the goal information, i.e., whether the agent decides to chop the tree or bypass it. Therefore, the task of establishing a comprehensive yet succinct goal space can be interpreted as predicting future states while conditioning on the transition dynamics of the environment. Formally, our learning objective is to maximize the log-likelihood of future states given past ones: $\log{p}_{\theta}(s_{t+1:T}|s_{0:t})$. Defining $g$ as a latent variable conditioned on past states (think of it as the potential goals the agent is pursuing given past states), the evidence lower-bound of the objective given the variational posterior ${q}_{\phi}(g|s_{0:T})$ is the following (see Appendix A for the derivation of this and the following equations): $\displaystyle\log{p}_{\theta}(s_{t+1:T}|s_{0:t})$ $\displaystyle=\log\sum_{g}{p}_{\theta}(s_{t+1:T},g|s_{0:t})$ $\displaystyle\geq\mathbb{E}_{g\sim{q}_{\phi}(\cdot|s_{0:T})}\left[\log{p}_{\theta}(s_{t+1:T}|s_{0:t},g)\right]-D_{\mathrm{KL}}\left({q}_{\phi}(g|s_{0:T})\parallel{p}_{\theta}(g|s_{0:t})\right),$ where $D_{\mathrm{KL}}(\cdot\|\cdot)$ denotes the KL-divergence. Next, we break down the first term (i.e., ${p}(s_{t+1:T}|s_{0:t},g)$) into components contributed by the (unknown) goal-conditioned policy $\pi(a|s,g)$ and the transition dynamics ${p}(s_{t+1}|s_{0:t},a_{t})$: $\displaystyle\log{p}_{\theta}(s_{t+1:T}|s_{0:t},g)$ $\displaystyle=\sum_{\tau=t}^{T-1}\log\sum_{a_{\tau}}\pi_{\theta}(a_{\tau}|s_{0:\tau},g)\cdot{p}_{\theta}(s_{\tau+1}|s_{0:\tau},a_{\tau})$ $\displaystyle\geq\sum_{\tau=t}^{T-1}\mathbb{E}_{a_{\tau}\sim{p}_{\theta}(a_{\tau}|s_{0:\tau+1})}\big{[}\log\pi_{\theta}(a_{\tau}|s_{0:\tau},g)+C\big{]},$ where the constant $C$ contains terms that depend solely on the environment dynamics and are irrelevant to what we want to learn (i.e., the goal space and the goal-conditioned policy). Bringing this back to the original objective, we have $\displaystyle\log{p}(s_{t+1:T}|s_{0:t})\geq\underbrace{\sum_{\tau=t}^{T-1}\mathbb{E}_{g\sim{q}_{\phi}(\cdot|s_{0:T}),a_{\tau}\sim{p}_{\theta}(\cdot|s_{0:\tau+1})}\left[\log\pi_{\theta}(a_{\tau}|s_{0:\tau},g)\right]}_{\text{behaviour cloning}}-\underbrace{D_{\mathrm{KL}}\left({q}_{\phi}(g|s_{0:T})\parallel{p}_{\psi}(g|s_{0:t})\right)}_{\text{goal space constraint (KL regularization)}},$ where $q_{\phi}(\cdot|s_{0:T})$ is implemented as a video encoder that maps the whole state sequence into the latent goal space, and $p(\cdot|s_{0:\tau+1})$ is the inverse dynamic model (IDM) that predicts the actions required to achieve a desired change in the states, which is usually a pre-trained model.
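As a concrete, deliberately simplified illustration of this bound, the snippet below computes the two terms for one batch: a behaviour-cloning term over IDM-labelled actions and a KL term between the posterior video encoder and a prior encoder that only sees past states. The tensor names and shapes are assumptions made for the sketch; the default KL weight of 0.01 follows Table 2 in Appendix C, and the loss actually used for training is Eq. (3) in Section 4.3.

```python
# Hedged sketch of the derived objective: behaviour cloning under a KL goal-space constraint.
import torch
import torch.nn.functional as F

def groot_style_loss(action_logits,              # (B, T, A): pi_theta(a_t | s_{0:t}, g), with g sampled
                                                 # from q_phi via the reparameterization trick
                     actions,                    # (B, T) long: contractor / IDM action labels
                     post_mu, post_log_std,      # (B, G): q_phi(g | s_{0:T}), encoder sees the full clip
                     prior_mu, prior_log_std,    # (B, G): p_psi(g | s_{0:t}), prior sees only past states
                     kl_weight: float = 0.01):
    # Behaviour-cloning term: negative log-likelihood of the labelled actions.
    bc = F.cross_entropy(action_logits.flatten(0, 1), actions.flatten(0, 1))

    # KL( N(post_mu, post_std^2) || N(prior_mu, prior_std^2) ) for diagonal Gaussians,
    # summed over goal dimensions and averaged over the batch.
    post_var, prior_var = (2 * post_log_std).exp(), (2 * prior_log_std).exp()
    kl = 0.5 * ((post_var + (post_mu - prior_mu) ** 2) / prior_var
                - 1.0 + 2 * (prior_log_std - post_log_std)).sum(-1).mean()

    return bc + kl_weight * kl
```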
Thus, the objective can be explained as jointly learning a video encoder and a goal- controller policy through behavior cloning under succinct goal space constraints. ## 4 GROOT Architecture Design and Training Strategy Figure 2: Our GROOT agent architecture. Left: In the training stage, a video encoder (non-causal transformer) learns to extract the semantic meaning and transfer the video (state sequence) into the goal embedding space. A goal- conditioned policy (causal transformer) is learned to predict actions following the given instructions. We learn the agent using behavior cloning under a KL constraint. Right: During the inference, any reference video is passed into the video encoder to generate the goal embeddings that drives the policy to interact with the environment. This section illustrates how to create an agent (we call it GROOT) that can understand the semantic meaning of a reference video and interact with the environment based on the aforementioned learning framework. According to the discussion in Section 3, the learnable parts of GROOT include the video encoder and the goal-conditioned policy. Recently, Transformer (Vaswani et al., 2017) has demonstrated effectiveness in solving sequential decision- making problems (Parisotto et al., 2019; Chen et al., 2021; Brohan et al., 2022). Motivated by this, we implement GROOT with transformer-based encoder- decoder architecture, as shown in Figure 2. The video encoder is a non-causal transformer that extracts semantic information and generates goal embeddings. The policy is a causal transformer decoder that receives the goal embeddings as the instruction and autoregressively translates the state sequence into a sequence of actions. Next, we describe how each module is constructed together with the training strategy. ### 4.1 Video Encoder The video encoder includes a Convolutional Neural Network (CNN) to extract spatial information from image states $s_{1:T}$ and a non-causal transformer to capture temporal information from videos. Specifically, we use a CNN backbone to extract visual embeddings $\\{x_{1:T}\\}$ for all frames. Additionally, motivated by Devlin et al. (2019), we construct a set of learnable embeddings (or summary tokens), represented as $\\{c_{1:N}\\}$, to capture the semantic information present in the video. The visual embeddings and summary tokens are passed to a non-causal transformer, resulting in the output corresponding to the summary tokens as $\\{\hat{c}_{1:N}\\}$ $\displaystyle x_{1:T}$ $\displaystyle\leftarrow\texttt{Backbone}(s_{1:T}),$ (1) $\displaystyle\hat{c}_{1:N}$ $\displaystyle\leftarrow\texttt{Transformer}(\left[x_{1:T},c_{1:N}\right]).$ Similar to VAE (Kingma & Welling, 2013), we assume that the latent goal space follows a Gaussian distribution, hence we use two fully connected layers, $\mu(\cdot)$ and $\sigma(\cdot)$, to generate the mean and standard deviation of the distribution, respectively. During training, we use the reparameterization trick to sample a set of embeddings $\\{g_{1:N}\\}$ from the distribution, where $g_{t}\sim\mathcal{N}(\mu(\hat{c}_{t}),\sigma(\hat{c}_{t}))$. During inference, we use the mean of the distribution as the goal embeddings, i.e. $g_{t}\leftarrow\mu(\hat{c}_{t})$. Figure 3: Results on Minecraft SkillForge benchmark. Left: Tournament evaluation of GROOT assessed by human players. GROOT performs better than state-of-the-art Minecraft agent STEVE-1. A 150-score gap corresponds to a $70\%$ probability of winning. Middle: Winning rate of GROOT v.s. 
other agents on specific task categories. Colors from red to blue denote a decrease in the winning rate. Apart from the human player, GROOT surpasses all other baselines. Right: Success rate on 9 representative tasks. GROOT champions process-oriented tasks, such as dig three down and fill one up () and build snow golems (). ### 4.2 Decoder as Policy To introduce our policy module, we start with VPT (Baker et al., 2022), a Minecraft foundation model trained with standard behavioral cloning. It is built on Transformer-XL (Dai et al., 2019) that can leverage long-horizon historical states and predict the next action seeing the current observation. However, the vanilla VPT architecture does not support instruction input. To condition the policy on goal embeddings, we draw the inspiration from Flamingo (Alayrac et al., 2022), that is, to insert _gated cross-attention dense_ layers into every Transformer-XL block. The keys and values in these layers are obtained from goal embeddings, while the queries are derived from the environment states $\displaystyle\hat{x}^{(l)}_{1:t}$ $\displaystyle\leftarrow\texttt{GatedXATTN}(\textit{kv}=g_{1:N},\textit{q}=x^{(l-1)}_{1:t};\theta_{l}),$ (2) $\displaystyle x^{(l)}_{1:t}$ $\displaystyle\leftarrow\texttt{TransformerXL}(\textit{qkv}=\hat{x}^{(l)}_{1:t};\theta_{l}),$ $\displaystyle\hat{a}_{t}$ $\displaystyle\leftarrow\texttt{FeedForward}(x^{(M)}_{t}),$ where the policy reuses the visual embeddings extracted by the video encoder, i.e., $x^{(0)}_{1:t}=x_{1:t}$, the policy consists of $M$ transformer blocks, $\theta_{l}$ is the parameter of $l$-th block, $\hat{a}_{t}$ is the predicted action. Since our goal space contains information about how to complete a task that is richer than previous language-conditioned policy (Cai et al., 2023; Lifshitz et al., 2023), the cross-attention mechanism is necessary. It allows the GROOT to query the task progress from instruction information based on past states, and then perform corresponding behaviors to complete the remaining progress. ### 4.3 Training and Inference The training dataset can be a mixture of Minecraft gameplay videos and offline trajectories. For those videos without actions, an inverse dynamic model (Baker et al., 2022) can be used to generate approximate actions. Limited by the computation resources, we truncated all the trajectories into segments with a fixed length of $T$ without using any prior. We denote the final dataset as $\mathcal{D}=\\{(x_{1:T},a_{1:T})\\}_{M}$, where $M$ is the number of trajectories. We train GROOT in a fully self-supervised manner while the training process can be viewed as self-imitating, that is, training GROOT jointly using behavioral cloning and KL divergence loss $\displaystyle\mathcal{L}(\theta,\phi)=\mathbb{E}_{(s,a)\sim\mathcal{D}}\left[\frac{\lambda_{bc}}{T}\sum_{t}-\log\pi_{\theta}(a_{t}|s_{1:t})+\lambda_{kl}\sum_{\tau}D_{KL}\left(q_{\phi}(g|s_{0:T})\parallel{p}_{\psi}(g|s_{0:\tau})\right)\right],$ (3) where $\lambda_{bc},\lambda_{kl}$ are tradeoff coefficients, ${q}_{\phi}$ is the visual encoder, ${p}_{\psi}$ is a prior video encoder with the same architecture. More implementation details can be found in the Appendix C. ## 5 Result ### 5.1 Performance on Mastering Minecraft Skills Minecraft SkillForge Benchmark. In order to comprehensively evaluate the mastery of tasks by agents in Minecraft, we created a diverse benchmark called Minecraft SkillForge. 
It covers 30 tasks from 6 major categories of representative skills in Minecraft, including collect, explore, craft, tool, survive, and build. For example, the task “dig three down and fill one up” in the build category asks the agent to first dig three blocks of dirt and then use the dirt to fill the space above; the task of “building a snow golem” requires the agent to sequentially stack 2 snow blocks and 1 carved pumpkin. We put the details of this benchmark in Appendix I. Apart from some relatively simple or common tasks such as “collect wood” and “hunt animals”, the other tasks require the agent to perform multiple steps in succession. We compare GROOT with the following baselines: (a) VPT (Baker et al., 2022), a foundation model pre-trained on large-scale YouTube data, with three variants: VPT (fd), VPT (bc), and VPT (rl), indicating the vanilla foundation model, the behavior-cloning finetuned model, and the RL finetuned model; (b) STEVE-1 (Lifshitz et al., 2023), an instruction-following agent finetuned from VPT, with two variants: STEVE-1 (visual) and STEVE-1 (text), which receive visual and text instructions, respectively. It is worth noting that GROOT was trained from scratch. Figure 4: t-SNE visualization of the goal space. Each color corresponds to a specific video category. Left: Space of the randomly initialized video encoder. All the videos are entangled together. Middle: Space of GROOT trained with self-supervised learning. The videos are clustered based on their semantics. Right: Videos synthesized by concatenation. The concatenated videos lie between their source videos. Human Evaluation with Elo Rating. We evaluated the relative strength of agents by running an internal tournament and reporting their Elo ratings, as in Mnih et al. (2015). Before the tournament, each agent is required to generate 10 videos of length 600 on each task. Note that all the reference videos used by GROOT are generated from another biome to ensure generalization. Additionally, we also invited 3 experienced players to do these tasks under the same settings. After the video collection, we asked 10 players to judge the quality of each pair of sampled videos from different agents. Considering the diversity of tasks, we designed specific evaluation criteria for every task to measure the quality of rollout trajectories. For example, in the task of “build snow golem”, we rank the completion degree of the task in ascending order: no blocks placed, one type of block placed, two types of blocks placed, and snow golem built successfully. After 1500 comparisons, the Elo ratings converged as in Figure 3 (left). Although there is a large performance gap compared with human players, GROOT has significantly surpassed the current state-of-the-art STEVE-1 series and the condition-free VPT series on the overall tasks. Additional details are in Appendix H. In Figure 3 (middle), we compare GROOT with the other baselines in terms of winning rate on the six task groups. We found that except for the craft tasks, where STEVE-1 (visual) outperforms our model, GROOT achieves state-of-the-art results. In particular, GROOT outperforms the other baselines by a large margin on build and tool. For build, the goal space needs to contain more detailed procedural information, which is the disadvantage of methods that use future outcomes as the goal. Moreover, such tasks are distributed sparsely in the dataset, or are even absent from it, which requires the agent to have strong generalization ability.
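For readers less familiar with the rating scheme, the bookkeeping behind such a tournament can be sketched as follows; the K-factor and initial rating are conventional Elo defaults, not values reported here.

```python
# Minimal Elo update from pairwise human judgements (illustrative defaults).
def expected_score(r_a: float, r_b: float) -> float:
    """Probability that A beats B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def update(ratings: dict, a: str, b: str, score_a: float, k: float = 32.0) -> None:
    """score_a: 1.0 if A's rollout was judged better, 0.0 if worse, 0.5 for a tie."""
    e_a = expected_score(ratings[a], ratings[b])
    ratings[a] += k * (score_a - e_a)
    ratings[b] += k * ((1.0 - score_a) - (1.0 - e_a))

ratings = {"GROOT": 1500.0, "STEVE-1 (visual)": 1500.0, "VPT (bc)": 1500.0, "Human": 1500.0}
update(ratings, "GROOT", "STEVE-1 (visual)", 1.0)   # one comparison in which GROOT's video won
```

Under this model a 150-point gap gives an expected score of $1/(1+10^{-150/400})\approx 0.70$, which is the 150-point / 70% winning-rate correspondence quoted above.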
As for the craft group, GROOT is not clearly superior, especially on the “crafting table” task. We attribute this to the fact that such tasks are widely represented in the dataset, so future-outcome prompts allow STEVE-1 to achieve a high success rate. Programmatic Evaluation. To quantitatively compare the performance of the agents, we selected 9 representative tasks out of the 30 and report the success rates of GROOT, STEVE-1 (visual), and VPT (bc) on these tasks in Figure 3 (right). We found that, based on the success rates on tasks such as dye and shear sheep, enchant sword, smelt food, use bow, sleep, and lead animals, GROOT has already reached a level comparable to that of human players ($100\%$). However, the success rates for the build snow golem and build obsidian tasks are only $60\%$ and $50\%$. By observing the generated videos, we found that GROOT cannot precisely identify the items in the Hotbar (such as buckets, lava buckets, snow blocks, and pumpkin heads), resulting in a low probability of switching to the correct item. STEVE-1 has the same problem. This may be because the current training paradigm lacks strong supervisory signals at the image level. Future work may introduce auxiliary tasks such as visual question answering (VQA) to help alleviate this phenomenon. For more details, please refer to Appendix F. Figure 5: Comparison of using raw and concatenated reference videos as conditions. Left: Collected wood in the forest biome. Right: Killed mobs in the plains biome. “concat” denotes that the reference video is [chop trees, hunt animals]. Figure 6: Ablation study on the KL loss. After being jointly trained with the KL loss, GROOT collects $2\times$ more seagrass underwater and $1.5\times$ more wood in the forest, while the difference is not as pronounced on the use bow task. ### 5.2 Properties of Learned Goal Space This section studies the properties of the learned goal space. We used the t-SNE algorithm (van der Maaten & Hinton, 2008) to visualize the clustering of reference videos encoded in the goal space, as in Figure 4. We selected 7 kinds of videos, including craft items, combat enemies, harvest crops, hunt animals, chop trees, trade with villagers, and mine ores. These videos are sampled from the contractor data (Baker et al., 2022) according to the meta information (details are in Appendix E). Each category contains 1k video segments. As a control group, in Figure 4 (left), we show the initial goal space of the video encoder (with a pre-trained EfficientNet-B0 (Tan & Le, 2019) as the backbone) before training. We found that the points are entangled together. After being trained on offline trajectories, as in Figure 4 (middle), the encoder understands reference videos well and clusters them according to their semantics. This shows that our self-supervised training strategy is effective at learning behavior-relevant task representations. Inevitably, there are still some videos from different categories entangled together. We attribute this to possible overlap in the behaviors performed in these videos. For example, chop trees and harvest crops both rely on a sequence of “attack” actions. Condition on Concatenated Videos. We also study the possibility of conditioning the policy on concatenated videos. First, we collect 3 kinds of source videos, including chop trees, hunt animals, and trade with villagers.
We randomly sampled two videos from the chop trees and hunt animals sources, then downsampled and concatenated them into a synthetic video, denoted as [chop trees, hunt animals]. By the same token, we can obtain [hunt animals, trade with villagers]. We visualize these videos together with the source videos in Figure 4 (right). We found that the source videos lie far away from each other, while the concatenated videos are distributed between their source videos. Based on this intriguing phenomenon, we infer that concatenated videos may prompt GROOT to solve both tasks simultaneously. To verify this, we evaluated GROOT on three kinds of reference videos, i.e., chop trees, hunt animals, and [chop trees, hunt animals]. We launched GROOT in the forest and in the animal plains, respectively. The collected wood and killed mobs are reported in Figure 5. We found that although the concatenated video may not be as effective as the raw video in driving the agent to complete a single task (about $60\%$ of the performance of the raw video), it does possess the ability to drive the agent to perform multiple tasks. This is an important ability. As discussed in Wang et al. (2023b), the high-level planner will sometimes propose multiple candidate goals, and it is efficient if the low-level controller can automatically determine which one to accomplish based on the current observation. Figure 7: Results on solving the challenging obtain diamond task. The vertical dashed lines represent the time when a certain item is first obtained. Left: GROOT first dug down to a depth of 12 and then mined horizontally to obtain diamonds, with an average success rate of $16\%$. Right: STEVE-1 quickly dug down to the specified depth but struggled to maintain its height. Ablation on KL Divergence Loss. To investigate the role of the KL loss in training, we evaluated GROOT (w/ KL) and its variant (w/o KL) on three tasks: collect seagrass, collect wood, and use bow. As shown in Figure 6, we found that introducing the KL constraint improved agent performance by $2\times$ and $1.5\times$ on the first two tasks, whereas there was no significant effect on the use bow task. This may be because the first two tasks require the agent to generalize the corresponding skills to different terrains (e.g., locating trees in the environment for collecting wood and sinking to specific locations for collecting seagrass). This puts higher demands on the agent’s ability to generalize in the goal space, which is exactly the role played by the KL loss. The use bow task is relatively simple in comparison because it only requires charging and shooting the arrow, without considering environmental factors. ### 5.3 Combining Skills for Long-horizon Tasks In this section, we explore whether GROOT can combine skills to solve long-horizon tasks, which is key to its integration with a high-level planner. Take the task of mining diamonds as an example: prior knowledge tells us that diamond ores are generally distributed between the 7th and 14th levels underground, and the probability of them appearing at other depths is almost zero. Therefore, the agent needs to first dig down to the specified depth (12) and then keep mining horizontally. To achieve this, we designed two reference videos, each $128$ frames long. One describes the policy of starting from the surface and digging down, and the other demonstrates the behavior of horizontal mining. We show an example in Figure 7 (left).
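Concretely, the two-stage prompting used for this task can be sketched as a simple rollout loop that swaps the goal embedding partway through the episode. The environment interface, the fixed switching step, and the step budget below are illustrative assumptions; the paper does not specify how the switch is triggered.

```python
# Hedged sketch: driving the video-conditioned policy with two reference clips,
# switching from "dig down" to "mine horizontally" during a long-horizon episode.
def rollout_with_goal_switch(env, policy, encode_video,
                             dig_down_clip, mine_horizontally_clip,
                             switch_step=1_000, max_steps=12_000):
    goals = [encode_video(dig_down_clip), encode_video(mine_horizontally_clip)]
    obs, history = env.reset(), []
    for t in range(max_steps):
        goal = goals[0] if t < switch_step else goals[1]   # swap the instruction
        history.append(obs)
        action = policy.act(history, goal)                 # pi(a_t | s_{0:t}, g)
        obs, done = env.step(action)
        if done:
            break
    return history
```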
In the beginning, GROOT quickly digs down to the specified depth and then switches to horizontal mining mode. It maintains the same height for a long time and finds diamonds at around 11k steps. In addition, we compared with STEVE-1 (visual) under the same setting in Figure 7 (right). After switching to the horizontal mining prompt, STEVE-1 maintains its height for a short time before getting stuck in the bedrock layer (unbreakable in survival mode), greatly reducing the probability of finding diamonds. This indicates that our goal space is expressive enough to specify the way of mining, and that the policy can follow the instructions persistently and reliably. In contrast, STEVE-1, which relies on future outcomes as a condition, was unable to maintain its depth, despite attempts with various visual prompts. We conducted 25 experiments each with GROOT and STEVE-1, with success rates of $16\%$ and $0\%$ for finding diamonds. Additional details are in Appendix G. ## 6 Related Works Pre-train Policy on Offline Data. Pre-training neural networks on internet-scale data has been demonstrated to be a very effective training paradigm in Computer Vision (Radford et al., 2021; Kirillov et al., 2023) and Natural Language Processing (Devlin et al., 2019; Brown et al., 2020). Recently, researchers have tried to transfer this success to the field of decision-making, either by pre-training visual representations or by directly distilling a policy from offline data. For the former, Aytar et al. (2018a); Zakka et al. (2021); Bruce et al. (2023) leveraged the temporal information present in videos as a supervision signal to learn visual representations. The representations are used to generate intrinsic rewards for boosting downstream policy training, which still requires expensive online interactions with the environment. Schmidhuber (2019); Srivastava et al. (2019); Chen et al. (2021) leveraged scalable offline trajectories to train policies by conditioning them on cumulative rewards. However, these methods require clear task definitions and reward labels, which makes them hard to apply to open worlds, where the tasks are infinite and the reward is open-ended. More generally, Chang et al. (2020); Zhang et al. (2022); Baker et al. (2022) used a pre-trained inverse dynamic model to label actions for gameplay videos and directly distilled the policy with imitation learning. Condition Policy on Goal Space. Researchers have explored many goal modalities, such as language (Khandelwal et al., 2021), image (Du et al., 2021), and future video (Xie et al., 2023), to build a controllable policy. Brohan et al. (2022) collected a large-scale dataset of trajectory-text pairs and trained a transformer policy to follow language instructions. Although language is a natural instruction interface, the cost of collecting paired training data is high. As a solution, Majumdar et al. (2022) resorted to hindsight relabeling to first train a policy conditioned on a target image and then aligned text to the latent image space, which greatly improves training efficiency. Lifshitz et al. (2023) took this paradigm a big step further by replacing the target image with a 16-frame future video and reformulating the modality alignment problem as training a prior over latent goals given text. Build Agents in Minecraft.
As a challenging open-world environment, Minecraft is attracting increasing researchers to develop AI agents on it, which can be divided into plan-oriented (Wang et al., 2023b; a) and control-oriented methods (Baker et al., 2022; Cai et al., 2023; Lifshitz et al., 2023) based on their emphasis. Plan-oriented agents aim to reason with Minecraft knowledge and decompose the long-horizon task into sub-tasks followed by calling a low- level controller. Control-oriented works follow the given instructions and directly interact with the environments using low-level actions (mouse and keyboard). Baker et al. (2022) pre-trained the first foundation model VPT in Minecraft using internet-scale videos. Although it achieves the first obtaining diamond milestone by fine-tuning with RL, it does not support instruction input. Lifshitz et al. (2023) created the first agent that can solve open-ended tasks by bridging VPT and MineCLIP (Fan et al., 2022). However, its goal space is not expressive enough and prevents it from solving multi-step tasks. ## 7 Limitation and Conclusion Although GROOT has demonstrated powerful capabilities in expressing open-ended tasks in the form of video instructions, training such a goal space remains highly challenging. We found that GROOT is quite sensitive to the selection of reference videos, which we attribute to the fact that the goal space trained from an unsupervised perspective may not be fully aligned with the human intention for understanding the semantics of the reference video. Therefore, it would be a promising research direction in the future to use RLHF (Ziegler et al., 2019) and SFT (supervised fine-tuning, Sanh et al. (2021)) to align the pre-trained goal space with human preference. In conclusion, we propose a paradigm for learning to follow instructions by watching gameplay videos. We argue that video instruction is a good form of goal space that not only expresses open-ended tasks but can also be trained through self-supervision. Based on this, we built an encoder-decoder transformer architecture agent named GROOT in Minecraft. Without relying on any annotated data, GROOT demonstrated extraordinary instruction-following ability and crowned the Minecraft SkillForge benchmark. Additionally, we also showed its potential as a planner downstream controller in the challenging obtain diamond task. We believe that this training paradigm has strong generalization and hope to see its application to more complex open-world environments. ## References * Alayrac et al. (2022) Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katie Millican, Malcolm Reynolds, Roman Ring, Eliza Rutherford, Serkan Cabi, Tengda Han, Zhitao Gong, Sina Samangooei, Marianne Monteiro, Jacob Menick, Sebastian Borgeaud, Andy Brock, Aida Nematzadeh, Sahand Sharifzadeh, Mikolaj Binkowski, Ricardo Barreira, Oriol Vinyals, Andrew Zisserman, and Karen Simonyan. Flamingo: a visual language model for few-shot learning. _ArXiv_ , abs/2204.14198, 2022. URL https://api.semanticscholar.org/CorpusID:248476411. * Andrychowicz et al. (2017) Marcin Andrychowicz, Dwight Crow, Alex Ray, Jonas Schneider, Rachel Fong, Peter Welinder, Bob McGrew, Joshua Tobin, P. Abbeel, and Wojciech Zaremba. Hindsight experience replay. _ArXiv_ , abs/1707.01495, 2017. URL https://api.semanticscholar.org/CorpusID:3532908. * Aytar et al. (2018a) Yusuf Aytar, Tobias Pfaff, David Budden, Tom Le Paine, Ziyun Wang, and Nando de Freitas. 
Playing hard exploration games by watching youtube. In _Neural Information Processing Systems_ , 2018a. URL https://api.semanticscholar.org/CorpusID:44061126. * Aytar et al. (2018b) Yusuf Aytar, Tobias Pfaff, David Budden, Tom Le Paine, Ziyun Wang, and Nando de Freitas. Playing hard exploration games by watching youtube. In _Neural Information Processing Systems_ , 2018b. URL https://api.semanticscholar.org/CorpusID:44061126. * Baker et al. (2022) Bowen Baker, Ilge Akkaya, Peter Zhokhov, Joost Huizinga, Jie Tang, Adrien Ecoffet, Brandon Houghton, Raul Sampedro, and Jeff Clune. Video pretraining (vpt): Learning to act by watching unlabeled online videos. _ArXiv_ , abs/2206.11795, 2022. URL https://api.semanticscholar.org/CorpusID:249953673. * Brohan et al. (2022) Anthony Brohan, Noah Brown, Justice Carbajal, Yevgen Chebotar, Joseph Dabis, Chelsea Finn, Keerthana Gopalakrishnan, Karol Hausman, Alexander Herzog, Jasmine Hsu, Julian Ibarz, Brian Ichter, Alex Irpan, Tomas Jackson, Sally Jesmonth, Nikhil J. Joshi, Ryan C. Julian, Dmitry Kalashnikov, Yuheng Kuang, Isabel Leal, Kuang-Huei Lee, Sergey Levine, Yao Lu, Utsav Malla, Deeksha Manjunath, Igor Mordatch, Ofir Nachum, Carolina Parada, Jodilyn Peralta, Emily Perez, Karl Pertsch, Jornell Quiambao, Kanishka Rao, Michael S. Ryoo, Grecia Salazar, Pannag R. Sanketi, Kevin Sayed, Jaspiar Singh, Sumedh Anand Sontakke, Austin Stone, Clayton Tan, Huong Tran, Vincent Vanhoucke, Steve Vega, Quan Ho Vuong, F. Xia, Ted Xiao, Peng Xu, Sichun Xu, Tianhe Yu, and Brianna Zitkovich. Rt-1: Robotics transformer for real-world control at scale. _ArXiv_ , abs/2212.06817, 2022. URL https://api.semanticscholar.org/CorpusID:254591260. * Brown et al. (2020) Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, T. J. Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeff Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. _ArXiv_ , abs/2005.14165, 2020. URL https://api.semanticscholar.org/CorpusID:218971783. * Bruce et al. (2023) Jake Bruce, Ankit Anand, Bogdan Mazoure, and Rob Fergus. Learning about progress from experts. In _International Conference on Learning Representations_ , 2023. URL https://api.semanticscholar.org/CorpusID:259298702. * Cai et al. (2023) Shaofei Cai, Zihao Wang, Xiaojian Ma, Anji Liu, and Yitao Liang. Open-world multi-task control through goal-aware representation learning and adaptive horizon prediction. _2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_ , pp. 13734–13744, 2023. URL https://api.semanticscholar.org/CorpusID:256194112. * Chang et al. (2020) Matthew Chang, Arjun Gupta, and Saurabh Gupta. Semantic visual navigation by watching youtube videos. _ArXiv_ , abs/2006.10034, 2020. URL https://api.semanticscholar.org/CorpusID:219721405. * Chen et al. (2021) Lili Chen, Kevin Lu, Aravind Rajeswaran, Kimin Lee, Aditya Grover, Michael Laskin, P. Abbeel, A. Srinivas, and Igor Mordatch. Decision transformer: Reinforcement learning via sequence modeling. In _Neural Information Processing Systems_ , 2021. URL https://api.semanticscholar.org/CorpusID:235294299. * Dai et al. (2019) Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Carbonell, Quoc Le, and Ruslan Salakhutdinov. 
Transformer-xl: Attentive language models beyond a fixed-length context. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , Jan 2019. doi: 10.18653/v1/p19-1285. URL http://dx.doi.org/10.18653/v1/p19-1285. * Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. _ArXiv_ , abs/1810.04805, 2019. URL https://api.semanticscholar.org/CorpusID:52967399. * Ding et al. (2019) Yiming Ding, Carlos Florensa, Mariano Phielipp, and P. Abbeel. Goal-conditioned imitation learning. _ArXiv_ , abs/1906.05838, 2019. URL https://api.semanticscholar.org/CorpusID:189762519. * Du et al. (2021) Heming Du, Xin Yu, and Liang Zheng. Vtnet: Visual transformer network for object goal navigation. _ArXiv_ , abs/2105.09447, 2021. URL https://api.semanticscholar.org/CorpusID:234790212. * Espeholt et al. (2018) Lasse Espeholt, Hubert Soyer, Rémi Munos, Karen Simonyan, Volodymyr Mnih, Tom Ward, Yotam Doron, Vlad Firoiu, Tim Harley, Iain Dunning, Shane Legg, and Koray Kavukcuoglu. Impala: Scalable distributed deep-rl with importance weighted actor-learner architectures. _ArXiv_ , abs/1802.01561, 2018. URL https://api.semanticscholar.org/CorpusID:3645060. * Fan et al. (2022) Linxi (Jim) Fan, Guanzhi Wang, Yunfan Jiang, Ajay Mandlekar, Yuncong Yang, Haoyi Zhu, Andrew Tang, De-An Huang, Yuke Zhu, and Anima Anandkumar. Minedojo: Building open-ended embodied agents with internet-scale knowledge. _ArXiv_ , abs/2206.08853, 2022. URL https://api.semanticscholar.org/CorpusID:249848263. * Guss et al. (2019) William H. Guss, Brandon Houghton, Nicholay Topin, Phillip Wang, Cayden Codel, Manuela M. Veloso, and Ruslan Salakhutdinov. Minerl: A large-scale dataset of minecraft demonstrations. In _International Joint Conference on Artificial Intelligence_ , 2019\. URL https://api.semanticscholar.org/CorpusID:199000710. * Hong et al. (2020) Yicong Hong, Qi Wu, Yuankai Qi, Cristian Rodriguez-Opazo, and Stephen Gould. Vln-bert: A recurrent vision-and-language bert for navigation. _2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_ , pp. 1643–1653, 2020. URL https://api.semanticscholar.org/CorpusID:227228335. * Huang et al. (2022) Wenlong Huang, F. Xia, Ted Xiao, Harris Chan, Jacky Liang, Peter R. Florence, Andy Zeng, Jonathan Tompson, Igor Mordatch, Yevgen Chebotar, Pierre Sermanet, Noah Brown, Tomas Jackson, Linda Luu, Sergey Levine, Karol Hausman, and Brian Ichter. Inner monologue: Embodied reasoning through planning with language models. In _Conference on Robot Learning_ , 2022. URL https://api.semanticscholar.org/CorpusID:250451569. * Huang et al. (2023) Wenlong Huang, Chen Wang, Ruohan Zhang, Yunzhu Li, Jiajun Wu, and Li Fei-Fei. Voxposer: Composable 3d value maps for robotic manipulation with language models. _ArXiv_ , abs/2307.05973, 2023. URL https://api.semanticscholar.org/CorpusID:259837330. * Johnson et al. (2016) Matthew Johnson, Katja Hofmann, Tim Hutton, and David Bignell. The malmo platform for artificial intelligence experimentation. In _International Joint Conference on Artificial Intelligence_ , 2016\. URL https://api.semanticscholar.org/CorpusID:9953039. * Khandelwal et al. (2021) Apoorv Khandelwal, Luca Weihs, Roozbeh Mottaghi, and Aniruddha Kembhavi. Simple but effective: Clip embeddings for embodied ai. _2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_ , pp. 14809–14818, 2021. 
URL https://api.semanticscholar.org/CorpusID:244346010. * Kingma & Welling (2013) Diederik P. Kingma and Max Welling. Auto-encoding variational bayes. _CoRR_ , abs/1312.6114, 2013. URL https://api.semanticscholar.org/CorpusID:216078090. * Kirillov et al. (2023) Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C. Berg, Wan-Yen Lo, Piotr Dollár, and Ross B. Girshick. Segment anything. _ArXiv_ , abs/2304.02643, 2023. URL https://api.semanticscholar.org/CorpusID:257952310. * Lifshitz et al. (2023) Shalev Lifshitz, Keiran Paster, Harris Chan, Jimmy Ba, and Sheila A. McIlraith. Steve-1: A generative model for text-to-behavior in minecraft. _ArXiv_ , abs/2306.00937, 2023. URL https://api.semanticscholar.org/CorpusID:258999563. * Lin et al. (2021) Zichuan Lin, Junyou Li, Jianing Shi, Deheng Ye, Qiang Fu, and Wei Yang. Juewu-mc: Playing minecraft with sample-efficient hierarchical reinforcement learning. _arXiv preprint arXiv:2112.04907_ , 2021. * Liu et al. (2022) Minghuan Liu, Menghui Zhu, and Weinan Zhang. Goal-conditioned reinforcement learning: Problems and solutions. _ArXiv_ , abs/2201.08299, 2022. URL https://api.semanticscholar.org/CorpusID:246063885. * Majumdar et al. (2022) Arjun Majumdar, Gunjan Aggarwal, Bhavika Devnani, Judy Hoffman, and Dhruv Batra. Zson: Zero-shot object-goal navigation using multimodal goal embeddings. _ArXiv_ , abs/2206.12403, 2022. URL https://api.semanticscholar.org/CorpusID:250048645. * Mnih et al. (2015) Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin A. Riedmiller, Andreas Kirkeby Fidjeland, Georg Ostrovski, Stig Petersen, Charlie Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. _Nature_ , 518:529–533, 2015. URL https://api.semanticscholar.org/CorpusID:205242740. * Parisotto et al. (2019) Emilio Parisotto, H. Francis Song, Jack W. Rae, Razvan Pascanu, Çaglar Gülçehre, Siddhant M. Jayakumar, Max Jaderberg, Raphael Lopez Kaufman, Aidan Clark, Seb Noury, Matthew M. Botvinick, Nicolas Manfred Otto Heess, and Raia Hadsell. Stabilizing transformers for reinforcement learning. In _International Conference on Machine Learning_ , 2019. URL https://api.semanticscholar.org/CorpusID:204578308. * Radford et al. (2021) Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. In _International Conference on Machine Learning_ , 2021. URL https://api.semanticscholar.org/CorpusID:231591445. * Sanh et al. (2021) Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Fevry, Jason Alan Fries, Ryan Teehan, Stella Biderman, Leo Gao, Tali Bers, Thomas Wolf, and Alexander M. Rush. Multitask prompted training enables zero-shot task generalization, 2021\. 
* Schmidhuber (2019) Juergen Schmidhuber. Reinforcement learning upside down: Don’t predict rewards - just map them to actions. _ArXiv_ , abs/1912.02875, 2019. URL https://api.semanticscholar.org/CorpusID:208857600. * Schulman et al. (2017) John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. _ArXiv_ , abs/1707.06347, 2017. URL https://api.semanticscholar.org/CorpusID:28695052. * Silver et al. (2016) David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, L. Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Vedavyas Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy P. Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. Mastering the game of go with deep neural networks and tree search. _Nature_ , 529:484–489, 2016. URL https://api.semanticscholar.org/CorpusID:515925. * Srivastava et al. (2019) Rupesh Kumar Srivastava, Pranav Shyam, Filipe Wall Mutz, Wojciech Jaśkowski, and Jürgen Schmidhuber. Training agents using upside-down reinforcement learning. _ArXiv_ , abs/1912.02877, 2019. URL https://api.semanticscholar.org/CorpusID:208857468. * Stone et al. (2023) Austin Stone, Ted Xiao, Yao Lu, Keerthana Gopalakrishnan, Kuang-Huei Lee, Quan Ho Vuong, Paul Wohlhart, Brianna Zitkovich, F. Xia, Chelsea Finn, and Karol Hausman. Open-world object manipulation using pre-trained vision-language models. _ArXiv_ , abs/2303.00905, 2023. URL https://api.semanticscholar.org/CorpusID:257280290. * Tan & Le (2019) Mingxing Tan and Quoc V. Le. Efficientnet: Rethinking model scaling for convolutional neural networks. _ArXiv_ , abs/1905.11946, 2019. URL https://api.semanticscholar.org/CorpusID:167217261. * Tkachenko et al. (2020-2022) Maxim Tkachenko, Mikhail Malyuk, Andrey Holmanyuk, and Nikolai Liubimov. Label Studio: Data labeling software, 2020-2022. URL https://github.com/heartexlabs/label-studio. Open source software available from https://github.com/heartexlabs/label-studio. * van der Maaten & Hinton (2008) Laurens van der Maaten and Geoffrey E. Hinton. Visualizing data using t-sne. _Journal of Machine Learning Research_ , 9:2579–2605, 2008\. URL https://api.semanticscholar.org/CorpusID:5855042. * Vaswani et al. (2017) Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In _NIPS_ , 2017. URL https://api.semanticscholar.org/CorpusID:13756489. * Wang et al. (2023a) Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi (Jim) Fan, and Anima Anandkumar. Voyager: An open-ended embodied agent with large language models. _ArXiv_ , abs/2305.16291, 2023a. URL https://api.semanticscholar.org/CorpusID:258887849. * Wang et al. (2023b) Zihao Wang, Shaofei Cai, Anji Liu, Xiaojian Ma, and Yitao Liang. Describe, explain, plan and select: Interactive planning with large language models enables open-world multi-task agents. _ArXiv_ , abs/2302.01560, 2023b. URL https://api.semanticscholar.org/CorpusID:256598146. * Xie et al. (2023) Zhihui Xie, Zichuan Lin, Deheng Ye, Qiang Fu, Wei Yang, and Shuai Li. Future-conditioned unsupervised pretraining for decision transformer. In _International Conference on Machine Learning_ , 2023. URL https://api.semanticscholar.org/CorpusID:258947476. * Yu et al. (2019) Tianhe Yu, Deirdre Quillen, Zhanpeng He, Ryan C. Julian, Karol Hausman, Chelsea Finn, and Sergey Levine. 
Meta-world: A benchmark and evaluation for multi-task and meta reinforcement learning. _ArXiv_ , abs/1910.10897, 2019. URL https://api.semanticscholar.org/CorpusID:204852201. * Zakka et al. (2021) Kevin Zakka, Andy Zeng, Peter R. Florence, Jonathan Tompson, Jeannette Bohg, and Debidatta Dwibedi. Xirl: Cross-embodiment inverse reinforcement learning. In _Conference on Robot Learning_ , 2021. URL https://api.semanticscholar.org/CorpusID:235368061. * Zhang et al. (2022) Qihang Zhang, Zhenghao Peng, and Bolei Zhou. Learning to drive by watching youtube videos: Action-conditioned contrastive policy pretraining. In _European Conference on Computer Vision_ , 2022. URL https://api.semanticscholar.org/CorpusID:250626771. * Zhu et al. (2023) Xizhou Zhu, Yuntao Chen, Hao Tian, Chenxin Tao, Weijie Su, Chenyuan Yang, Gao Huang, Bin Li, Lewei Lu, Xiaogang Wang, Y. Qiao, Zhaoxiang Zhang, and Jifeng Dai. Ghost in the minecraft: Generally capable agents for open-world environments via large language models with text-based knowledge and memory. _ArXiv_ , abs/2305.17144, 2023. URL https://api.semanticscholar.org/CorpusID:258959262. * Ziegler et al. (2019) Daniel M. Ziegler, Nisan Stiennon, Jeff Wu, Tom B. Brown, Alec Radford, Dario Amodei, Paul Christiano, and Geoffrey Irving. Fine-tuning language models from human preferences. _ArXiv_ , abs/1909.08593, 2019. URL https://api.semanticscholar.org/CorpusID:202660943. ## Appendix ## Appendix A Derivation In this section, we detail how we derive the final objective. Recall that the goal is to maximize the log-likelihood of future states given past ones: $\log p(s_{t+1:T}|s_{0:t})$. Using Bayes’ theorem and the Jensen’s inequality, we have: $\displaystyle\log\ p(s_{t+1:T}|s_{0:t})$ $\displaystyle=\log\sum_{z}p(s_{t+1:T},z|s_{0:t}),$ (4) $\displaystyle=\log\sum_{z}\frac{p(s_{t+1:T},z|s_{0:t})\ q(z|s_{0:T})}{q(z|s_{0:T})},$ (5) $\displaystyle\geq\mathbb{E}_{z\sim q(z|s_{0:T})}\big{[}\log\ p(s_{t+1:T},z|s_{0:t})-\log\ q(z|s_{0:T})\big{]},$ (6) $\displaystyle=\mathbb{E}_{z\sim q(z|s_{0:T})}\big{[}\log\ p(s_{t+1:T}|s_{0:t},z)+\log\ p(z|s_{0:t})-\text{log}\ q(z|s_{0:T})\big{]},$ (7) $\displaystyle=\mathbb{E}_{z\sim q(z|s_{0:T})}\big{[}\log\ p(s_{t+1:T}|s_{0:t},z)\big{]}+\mathbb{E}_{z\sim q(z|s_{0:T})}\big{[}\log\frac{\ p(z|s_{0:t})}{\ q(z|s_{0:T})}\big{]},$ (8) $\displaystyle=\mathbb{E}_{z\sim q(z|s_{0:T})}\big{[}\log\ p(s_{t+1:T}|s_{0:t},z)\big{]}-D_{\text{KL}}\big{(}q(z|s_{0:T})\parallel p(z|s_{0:t})\big{)}.$ (9) We break down $p(s_{t+1:T}|s_{0:t},z)$ into components: goal-conditioned policy $\pi(a_{\tau}|s_{0:\tau+1})$ and the transition dynamics $p(s_{t+1}|s_{0:t},a_{t})$, we have $\displaystyle p(s_{t+1:T}|s_{0:t},z)=\prod_{\tau=t}^{T-1}\big{(}\sum_{a_{\tau}}\pi(a_{\tau}|s_{0:\tau},z)\cdot p(s_{\tau+1}|s_{0:\tau},a_{\tau})\big{)}.$ (10) Furthermore, using Jensen’s inequality, $\log p(s_{t+1:T}|s_{0:t},z)$ can be written as $\displaystyle\log\ p(s_{t+1:T}|s_{0:t},z)$ $\displaystyle=\sum_{\tau=t}^{T-1}\log\sum_{a_{\tau}}\pi(a_{\tau}|s_{0:\tau},z)\cdot p(s_{\tau+1}|s_{0:\tau},a_{\tau}),$ (11) $\displaystyle=\sum_{\tau=t}^{T-1}\log\sum_{a_{\tau}}\pi(a_{\tau}|s_{0:\tau},z)\cdot\frac{p(a_{\tau}|s_{0:\tau},s_{\tau+1})\cdot p(s_{\tau+1}|s_{0:\tau})}{p(a_{\tau}|s_{0:\tau})},$ (12) $\displaystyle\geq\sum_{\tau=t}^{T-1}\mathbb{E}_{a_{\tau}\sim p(a_{\tau}|s_{0:\tau},s_{\tau+1})}\big{[}\text{log}\ \pi(a_{\tau}|s_{0:\tau},z)+C\big{]},$ (13) where the constant $C=\log p(s_{\tau+1}|s_{0:\tau})-\log p(a_{\tau}|s_{0:\tau})$ depends solely on the environment dynamics and are 
irrelevant to what we want to learn (i.e., the goal space and the goal- conditioned policy), we have: $\displaystyle\mathbb{E}_{z\sim q(z|s_{0:T})}\big{[}\text{log}\ p(s_{t+1:T}|s_{0:t},z)\big{]}$ $\displaystyle\geq\mathbb{E}_{z\sim q(z|s_{0:T})}\big{[}\sum_{\tau=t}^{T-1}\mathbb{E}_{a_{\tau}\sim p(a_{\tau}|s_{0:\tau},s_{\tau+1})}\big{[}\text{log}\ \pi(a_{\tau}|s_{0:\tau},z)\big{]}\big{]},$ (14) $\displaystyle=\sum_{\tau=t}^{T-1}\mathbb{E}_{z\sim q(z|s_{0:T}),a_{\tau}\sim p(a_{\tau}|s_{0:\tau},s_{\tau+1})}\big{[}\text{log}\ \pi(a_{\tau}|s_{0:\tau},z)\big{]}.$ (15) Thus, we derived the evidence lower-bound of $\log p(s_{t+1:T}|s_{0:t})$ as follows $\displaystyle\log p(s_{t+1:T}|s_{0:t})\geq\sum_{\tau=t}^{T-1}\mathbb{E}_{z\sim q(z|s_{0:T}),a_{\tau}\sim p(a_{\tau}|s_{0:\tau+1})}\big{[}\log\pi(a_{\tau}|s_{0:\tau},z)\big{]}-D_{\text{KL}}\big{(}q(z|s_{0:T})\parallel p(z|s_{0:t})\big{)}.$ (16) ## Appendix B Minecraft Environment Figure 8: Examples of Minecraft environment. Tasks from top to bottom, from left to right are building houses, planting wheat, fishing, brewing a potion, mining diamond ores, and combating the ender dragon, respectively. Minecraft is an extremely popular sandbox game that allows players to freely create and explore their world. This game has infinite freedom, allowing players to change the world and ecosystems through building, mining, planting, combating, and other methods (shown in Figure 8). It is precisely because of this freedom that Minecraft becomes an excellent AI testing benchmark (Johnson et al., 2016; Baker et al., 2022; Fan et al., 2022; Cai et al., 2023; Lifshitz et al., 2023; Wang et al., 2023b; a). In this game, AI agents need to face situations that are highly similar to the real world, making judgments and decisions to deal with various environments and problems. Therefore, Minecraft is a very suitable environment to be used as AI testing benchmark. By using Minecraft, AI researchers can more conveniently simulate various complex and diverse environments and tasks, thereby improving the practical value and application of AI technology. We use the combination of 1.16.5 version MineRL (Guss et al., 2019) and MCP- Reborn111https://github.com/Hexeption/MCP-Reborn as our testing platform, which is consistent with the environment used by VPT (Baker et al., 2022) and STEVE-1 (Lifshitz et al., 2023). Mainly because this platform preserves observation and action space that is consistent with human players to the fullest extent. On the one hand, this design brings about high challenges, as agents can only interact with the environment using low-level mouse and keyboard actions, and can only observe visual information like human players without any in-game privileged information. Therefore, the AI algorithms developed on this platform can have higher generalization ability. On the other hand, this also presents opportunities for us to conduct large-scale pre-training on internet-scale gameplay videos. ### B.1 Observation Space The visual elements included in our observation space are completely consistent with those seen by human players, including the Hotbar, health indicators, player hands, and equipped items. The player’s perspective is in the first person with a field of view of $70$ degrees. The simulator first generates an RGB image with dimensions of $640\times 360$ during the rendering process. 
Before inputting to the agent, we resize the image to $224\times 224$ to enable the agent to clearly see item icons in the inventory and important details in the environment. When the agent opens the GUI, the simulator also renders the mouse cursor normally. The RGB image is the only observation that the agent can obtain from the environment during inference. It is worth noting that to help the agent see more clearly in extremely dark environments, we have added a night vision effect for the agent, which increases the brightness of the environment during nighttime. ### B.2 Action Space Our action space is almost identical to that of humans, except for actions that involve inputting strings. It consists of two parts: the mouse and the keyboard. The mouse movement is responsible for changing the player’s camera perspective and moving the cursor when the GUI is opened. The left and right buttons are responsible for attacking and using items. The keyboard is mainly responsible for controlling the agent’s movement. We list the meaning of each action in the Table 1. To avoid predicting null action, we used the same joint hierarchical action space as Baker et al. (2022), which consists of button space and camera space. Button space encodes all combinations of keyboard operations and a flag indicating whether the mouse is used, resulting in a total of 8461 candidate actions. The camera space discretizes the range of one mouse movement into 121 actions. Therefore, the action head of the agent is a multi-classification network with 8461 dimensions and a multi-classification network with 121 dimensions. Table 1: Action space descriptions from Minecraft wiki (https://minecraft.fandom.com/wiki/Controls). Index | Action | Human Action | Description ---|---|---|--- 1 | Forward | key W | Move forward. 2 | Back | key S | Move backward. 3 | Left | key A | Strafe left. 4 | Right | key D | Strafe right. 5 | Jump | key Space | Jump. When swimming, keeps the player afloat. 6 | Sneak | key left Shift | Slowly move in the current direction of movement. When used in conjunction with the attack function in the GUI, it can swap items between inventory and Hotbar. When used with the craft function, it crafts the maximum possible number of items instead of just one. 7 | Sprint | key left Ctrl | Move quickly in the direction of current motion. 8 | Attack | left Button | Destroy blocks (hold down); Attack entity (click once). 9 | Use | right Button | Put down the item being held or interact with the block that the player is currently looking at. Within the GUI, pick up a stack of items or place a single item from the stack that is being held by the mouse. 10 | hotbar.[1-9] | keys 1 - 9 | Selects the appropriate hotbar item. When in the inventory GUI, swap the contents of the inventory slot under the mouse pointer and the corresponding hotbar slot. 11 | Yaw | move Mouse X | Turning; aiming; camera movement.Ranging from -180 to +180. 12 | Pitch | move Mouse Y | Turning; aiming; camera movement.Ranging from -180 to +180. ## Appendix C Implementation Details ### C.1 Model Architecture The video encoder consists of a convolutional neural network backbone and a non-causal transformer. Inspired by Brohan et al. (2022), we adopted the EfficientNet (Tan & Le, 2019) as the backbone. Specifically, we use its variant EfficientNet-B0 for efficiency, which takes in images of size $224\times 224$ and extracts a feature vector of shape $7\times 7\times 1280$, where $7\times 7$ denotes the spatial dimensions. 
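As an illustration of this per-frame feature extraction, the snippet below uses the torchvision implementation of EfficientNet-B0 (our assumption for the sketch; the paper does not say which implementation it uses) to produce the $7\times 7\times 1280$ map and flatten it into 49 spatial tokens per frame, which is what the shallow pooling transformer described next consumes together with a learnable [sp] token.

```python
# Illustrative per-frame backbone features with torchvision's EfficientNet-B0.
import torch
from torchvision.models import efficientnet_b0

backbone = efficientnet_b0(weights=None).features   # keep the conv trunk, drop the classifier
frames = torch.randn(8, 3, 224, 224)                # a small batch of (B*T) resized frames
with torch.no_grad():
    feats = backbone(frames)                        # (8, 1280, 7, 7)
tokens = feats.flatten(2).transpose(1, 2)           # (8, 49, 1280) spatial tokens per frame
print(tokens.shape)
```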
## Appendix C Implementation Details

### C.1 Model Architecture

The video encoder consists of a convolutional neural network backbone and a non-causal transformer. Inspired by Brohan et al. (2022), we adopt EfficientNet (Tan & Le, 2019) as the backbone. Specifically, we use its EfficientNet-B0 variant for efficiency, which takes in images of size $224\times 224$ and extracts a feature map of shape $7\times 7\times 1280$, where $7\times 7$ denotes the spatial dimensions. To adaptively enhance the important visual information, we use a shallow transformer to pool the feature map along the spatial dimension. To fuse global visual features, we construct an additional learnable embedding [sp], concatenate it with the $49$ spatial features, and obtain a token sequence of length $50$. After processing by the transformer, the output at the [sp] token is taken as the pooled visual feature, whose dimension is $d_{hid}=1024$. To capture the temporal features of the video, we remove the code related to the causal mask in minGPT (https://github.com/karpathy/minGPT) and obtain a non-causal transformer.

The policy decoder consists of 4 identical blocks, where each block contains a Flamingo _gated-attention dense layer_ (Alayrac et al., 2022) and a Transformer-XL block (Dai et al., 2019). The Transformer-XL block maintains a recurrence memory of the past $128$ key-value pairs to memorize long-horizon history states. We directly use the Transformer-XL implementation of Baker et al. (2022) with one simple modification: before passing states into the policy decoder, we add the previous action to the state embedding at each timestep. Notably, we find this modification very useful, especially when the policy is trained from scratch, as it not only accelerates training but also makes the predicted actions more consistent and smooth. Additional hyperparameters can be found in Table 2.

Table 2: Hyperparameters for training GROOT.

Hyperparameter | Value
---|---
Optimizer | AdamW
Weight Decay | 0.001
Learning Rate | 0.0000181
Warmup Steps | 2000
Number of Workers | 4
Parallel Strategy | ddp
Type of GPUs | NVIDIA RTX 4090Ti, A40
Parallel GPUs | 8
Accumulate Gradient Batches | 8
Batch Size/GPU (Total) | 2 (128)
Training Precision | bf16
Input Image Size | $224\times 224$
CNN Backbone | EfficientNet-B0
Encoder Transformer | minGPT (w/o causal mask)
Decoder Transformer | TransformerXL
Number of Encoder Blocks | 8
Number of Decoder Blocks | 4
Hidden Dimension | 1024
Number of Condition Slots | 1
Trajectory Chunk Size | 128
Attention Memory Size | 256
Weight of BC Loss | 1
Weight of KL Loss | 0.01

### C.2 Inference

To generate reference videos, we invited three human players to play each task according to the task description. Each person was asked to produce two videos, so we prepared six videos for each task in total. We then selected the video most relevant to the task description from the six and cropped its first 128 frames into a new video, which was used to instruct GROOT to complete the task. In addition, we selected a 16-frame segment that best expressed the task information from these six videos as the visual prompt for STEVE-1 (visual). This ensures fairness in comparison.

During inference, we found that in some tasks, such as build obsidian, GROOT's behavior was mixed with the intention of traveling around. We believe this is a bias introduced during training. We draw inspiration from STEVE-1 (Lifshitz et al., 2023) and subtract this bias in the action logits space before sampling the action. Specifically, we run two models at the same time, where one model's condition is the specific task video and the other model's condition is a 128-frame video of traveling freely in the environment. The input observations for the two models are exactly the same. At each time step, we subtract a certain proportion of the action logits predicted by the latter model from the action logits of the former model before using the Gumbel-Softmax trick to sample the action. We found that this technique can significantly improve the success rate of tasks such as build obsidian and enchant sword.
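As a rough illustration of this conditional debiasing step (not the authors' code), the two sets of logits could be combined as in the following sketch; the subtraction proportion `alpha` and the function names are assumptions, since the paper does not report them here.

```python
import torch
import torch.nn.functional as F

def debiased_sample(task_logits: torch.Tensor,
                    travel_logits: torch.Tensor,
                    alpha: float = 0.5,
                    tau: float = 1.0) -> torch.Tensor:
    """Subtract a fraction of the 'travel freely' logits from the task-conditioned logits,
    then sample an action index with the Gumbel-Softmax trick.

    `alpha` (the subtraction proportion) is a hypothetical hyperparameter.
    """
    logits = task_logits - alpha * travel_logits
    one_hot = F.gumbel_softmax(logits, tau=tau, hard=True)  # hard sample as a one-hot vector
    return one_hot.argmax(dim=-1)

# Example with the 8461-way button logits produced by the two conditioned models.
button_action = debiased_sample(torch.randn(1, 8461), torch.randn(1, 8461))
```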
## Appendix D Dataset Details

### D.1 Contractor Data

The contractor data is a Minecraft offline trajectory dataset provided by Baker et al. (2022) (https://github.com/openai/Video-Pre-Training), which is annotated by professional human players and used for training the inverse dynamics model. In this dataset, human players play the game while the system records the image sequence $\{s_{1:T}\}_{M}$, action sequence $\{a_{1:T}\}_{M}$, and metadata $\{e_{1:T}\}_{M}$ generated by the players. Excluding frames containing empty actions, the dataset contains $1600$M frames with a duration of approximately $2000$ hours. The metadata records the events triggered by the agent in the game at each time step and includes three types: craft item, pickup, and mine block, which represent the agent crafting items using the GUI, picking up dropped items, and destroying blocks at the current time step, respectively. When training the model, we use all trajectories provided by the contractor data but do not include any metadata. We only use the metadata to retrieve relevant trajectory segments during the visualization of the goal space.

## Appendix E t-SNE Visualization Details

This section details how the videos are sampled for visualization. The selected videos are categorized into seven groups: craft items, combat enemies, harvest crops, hunt animals, chop trees, trade with villagers, and mine ores. Generally, each group contains two types of videos, each with $1000$ data points sampled. The sampling method retrieves the time when a certain event occurs in the metadata and goes back $128$ frames from that time to obtain a video segment that is $128$ frames long. We illustrate the video configurations in Table 3. For example, for the combat enemies group, taking "combat zombies" as an example, we retrieve all moments when the event "pickup:rotten_flesh" occurs, because killed zombies drop rotten flesh, which can then be picked up by players. By inspecting the sampled segments, we found that this method yields videos that are consistent with the descriptions.
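A minimal sketch of this event-based sampling procedure is shown below. The metadata layout and helper names are assumptions made for illustration, not the released data schema.

```python
from typing import Dict, List, Tuple

def sample_segments(events: Dict[str, List[Tuple[int, str]]],
                    target_event: str,
                    segment_len: int = 128) -> List[Tuple[str, int, int]]:
    """Collect (trajectory_id, start, end) windows that end at a target metadata event.

    `events` maps a trajectory id to (timestep, event_name) pairs; this structure is assumed.
    """
    segments = []
    for traj_id, traj_events in events.items():
        for t, name in traj_events:
            if name == target_event and t >= segment_len:
                # Go back `segment_len` frames from the event time.
                segments.append((traj_id, t - segment_len, t))
    return segments

# e.g. "combat zombies" clips end at a rotten-flesh pickup event.
zombie_clips = sample_segments(events={"traj_0": [(500, "pickup:rotten_flesh")]},
                               target_event="pickup:rotten_flesh")
```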
Table 3: Sample videos from the contractor data (Baker et al., 2022) for the goal space visualization.

Group | Video Description | Event in Metadata
---|---|---
craft items | craft wooden_pickaxe with crafting_table | craft_item:wooden_pickaxe
craft items | craft iron_pickaxe with crafting_table | craft_item:iron_pickaxe
combat enemies | combat zombies | pickup:rotten_flesh
combat enemies | combat spiders | pickup:spider_eye
harvest crops | harvest wheat | mine_block:wheat
harvest crops | harvest melon | mine_block:melon
hunt animals | hunt sheep | pickup:mutton
hunt animals | hunt cow | pickup:beef
chop trees | chop oak trees | mine_block:oak_log
chop trees | chop birch trees | mine_block:birch_log
trade with villagers | trade with villagers for emerald | craft_item:emerald
trade with villagers | trade with villagers for enchanted_book | craft_item:enchanted_book
mine ores | mine coal ores with pickaxe | mine_block:coal_ore
mine ores | mine iron ores with pickaxe | mine_block:iron_ore

## Appendix F Programmatic Evaluation Details

In this section, we elaborate on when each episode is regarded as successful. For the dye and shear sheep task, both dyeing the sheep and shearing its wool must be successfully performed to be considered a success. For the use bow task, firing the arrow after charging it to the maximum degree is required. For the sleep task, placing the bed and spending the night on it are required. For the smelt task, placing the furnace and dragging coal and mutton into the designated slots are required. For the lead task, successfully tethering at least one animal is considered a success. For the build obsidian task, pouring a water bucket and a lava bucket so that they fuse is required. For the enchant task, placing the enchantment table, putting a diamond sword and lapis lazuli into the slots, and clicking the enchanting option are required. For the dig down three fill one up task, the agent must first vertically break three dirt blocks below and then use one dirt block to seal the area above. For the build snow golems task, placing 2 snow blocks and 1 carved pumpkin head in order and triggering the creation of a snow golem are required.

## Appendix G Combining Skills Experimental Details

First, we introduce the experimental environment selected for our study. The agent is summoned in the plains biome, holding a diamond pickaxe, and granted night vision status so that it can see the various ores underground. At the beginning of each episode, we set the agent's condition to dig down. When the agent descends to a depth below 12 layers, the condition automatically switches to horizontal mining. Each episode lasts 12,000 frames, which is equivalent to 10 minutes in the real world. For GROOT, the reference videos for both dig down and horizontal mining were recorded by a human player. For STEVE-1, we invited the same player to carefully record the prompt videos. It is worth noting that while we could easily prompt STEVE-1 to dig down, it was difficult to keep it in the horizontal mining condition. This made STEVE-1 prone to falling into the bedrock layer and getting stuck. Finally, we did not observe STEVE-1 finding any diamonds in the 25 experiments, which can be attributed to the inability of its goal space to encode details such as horizontal mining.
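The following is a minimal sketch of the depth-triggered condition switch described above. The environment and policy interfaces (`env.step`, `info["y"]`, `policy.act`) and the reference-video names are hypothetical stand-ins for illustration, not the actual MineRL or GROOT APIs.

```python
# Hedged sketch of the combining-skills control loop (Appendix G); all names are placeholders.
DIG_DOWN = "dig_down_reference.mp4"
HORIZONTAL_MINING = "horizontal_mining_reference.mp4"
SWITCH_DEPTH = 12          # switch condition once the agent is below layer 12
EPISODE_FRAMES = 12_000    # 10 minutes at 20 frames per second

def run_episode(env, policy):
    condition = DIG_DOWN
    obs = env.reset()
    depth = 64  # assumed starting surface height
    for _ in range(EPISODE_FRAMES):
        if depth < SWITCH_DEPTH:
            condition = HORIZONTAL_MINING   # automatic switch below 12 layers
        action = policy.act(obs, condition)
        obs, reward, done, info = env.step(action)
        depth = info.get("y", depth)        # hypothetical depth field in the env info
        if done:
            break
```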
## Appendix H Elo Rating System Details

The Elo rating system is widely adopted for evaluating the skill levels of multiple players in two-player games, such as Chess and Go (Silver et al., 2016). In this section, we elaborate on how we introduce human evaluation and use the Elo rating system to measure the relative performance of agents on the Minecraft SkillForge benchmark. In the Elo rating system, each agent's skill level is represented by a numerical rating. We repeatedly let agents play against each other in pairs. Specifically, in each game, we sample a task and two agents, denoted as Agent A and Agent B. Then, we randomly sample a trajectory for each agent on the designated task. The two trajectories are assigned to a human annotator, who selects the more task-relevant one. We implement the annotating system with Label Studio (Tkachenko et al., 2020-2022), as shown in Figure 9. We consider the agent that produced the selected trajectory to be the winner; assume it is Agent A. After each round, we update the scores of Agent A and Agent B as follows:

$\displaystyle R_{A}\leftarrow R_{A}+K\cdot\frac{1}{1+10^{(R_{A}-R_{B})/400}},$ (17)
$\displaystyle R_{B}\leftarrow R_{B}-K\cdot\frac{1}{1+10^{(R_{A}-R_{B})/400}},$

where $K$ is the update factor, which we set to 8. After calculating the agents' scores, we anchor VPT (bc) at 1500 points and shift the scores of the other agents accordingly. Based on the Elo ratings, we can easily measure the relative win rate for each pair of agents. The win rate of Agent A over Agent B can be represented as $\frac{1}{1+10^{(R_{B}-R_{A})/400}}$. For example, the win-rate ratio between two agents with a score difference of $100$ points is $64\%:36\%$, and a score difference of $200$ points implies $76\%:24\%$.

Figure 9: Example of the annotating system for human evaluation.
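A small sketch of this pairwise update, a direct transcription of Eq. (17) with $K=8$, is shown below; anchoring VPT (bc) to 1500 points is applied afterwards as described.

```python
def elo_update(r_winner: float, r_loser: float, k: float = 8.0) -> tuple:
    """One pairwise Elo update following Eq. (17): the winner gains k times (1 - its expected win probability)."""
    delta = k / (1.0 + 10.0 ** ((r_winner - r_loser) / 400.0))
    return r_winner + delta, r_loser - delta

def win_rate(r_a: float, r_b: float) -> float:
    """Expected win rate of A over B implied by the ratings."""
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

# A 200-point gap implies roughly a 76% : 24% win-rate split, as stated above.
assert abs(win_rate(1700, 1500) - 0.76) < 0.01
```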
## Appendix I Minecraft SkillForge Benchmark Details

In this section, we detail the benchmark titled "Minecraft SkillForge", which incorporates a wide spectrum of tasks prevalent within Minecraft. Our aim is to ensure that every task provides a meaningful evaluation of a specific skill that an AI agent might possess. We categorize these tasks into six groups: collect, explore, craft, tool, survive, and build. In the following subsections, we provide a detailed introduction to each of them. The "Description" field provides a brief description of the task, the "Precondition" field outlines the initial settings of the testing environment for the task, the "SkillAssessed" field indicates which aspect(s) of the agent's ability are being assessed by the task, and the "Evaluation" field describes the quality evaluation metric for task completion (based on which human players judge the quality of two rollout videos).

### I.1 Collect

The tasks in the collect category of our benchmark are specifically designed to evaluate an AI agent's resource acquisition proficiency and spatial awareness. The agent should not only be adept at identifying and gathering specific resources but also possess the acumen to navigate through varied environments while being aware of its surroundings and the tools at its disposal.

Figure 10: Examples of tasks in the collect category.

Task: collect dirt Description: Collect dirt from the surface. Precondition: Spawn the player in the plains biome. SkillAssessed: Basic terrain understanding and the ability to differentiate between surface-level blocks. Evaluation: Run away. < Look down. < Dig down. < Break the dirt on the surface. Task: collect grass Description: Remove weeds on the surface. Precondition: Spawn the player in the plains biome. SkillAssessed: Surface navigation and comprehension of vegetation blocks. Evaluation: Run away. < Break the grass block. < Break a large field of grass blocks. Task: collect wood Description: Cut down trees to collect wood. Precondition: Spawn the player in the forest biome with an iron_axe in its hand. SkillAssessed: Recognition of tree structures, efficient utilization of tools, and block harvesting capability. Evaluation: Run away. < Approach trees. < Chop the tree and collect logs. Task: collect seagrass Description: Dive into the water and collect seagrass. Precondition: Spawn the player near the sea. SkillAssessed: Water navigation, diving mechanics understanding, and underwater block interaction. Evaluation: Walk on the land. < Swim on the water. < Dive into the water. < Break seagrass blocks. Task: collect wool Description: Dye and shear the sheep for wool. Precondition: Spawn the player in the plains biome with a shear (mainhand) and a stack of blue_dye (offhand), 5 sheep near the player. SkillAssessed: Interaction with entities, tool and item application, and sequential action execution. Evaluation: Ignore the sheep. < Dye the sheep. < Shear the sheep. < First dye then shear the sheep. Listing 1: The environment configuration and evaluation metric for collect series tasks.

### I.2 Explore

The tasks in the explore category of our benchmark are devised to evaluate an AI agent's navigation proficiency, understanding of diverse environments, and intrinsic motivation for exploration. Through these tasks, we gauge an agent's ability to actively traverse, understand, and interact with varied elements of the Minecraft world, and its propensity to unravel mysteries and challenges posed by the environment.

Figure 11: Examples of tasks in the explore category.

Task: run and explore Description: Run and explore. Precondition: Spawn the player in the plains biome. SkillAssessed: Stamina utilization and distance-based exploration. Evaluation: Exploring as far as possible. Task: climb the mountain Description: Climb the mountain. Precondition: Spawn the player in the stone shore biome and near the mountain. SkillAssessed: Vertical navigation, terrain adaptation, and goal-oriented movement. Evaluation: Run away and ignore the mountain. < Approach the mountain. < Climb the mountain. < Climb to the top of the mountain. Task: mine horizontally Description: Mine horizontally underground. Precondition: Spawn the player in a deep cave with an iron_pickaxe in the hand. SkillAssessed: Underground navigation, tool utilization, and spatial reasoning in confined spaces. Evaluation: Run away. < Break the stone. < Dig down. < Mine horizontally. Task: travel by boat Description: Travel on a wooden boat through water. Precondition: Spawn the player near the sea with a wooden boat in the hand. SkillAssessed: Aquatic travel, tool placement, and boat maneuverability. Evaluation: Did not place the boat. < Place the boat on the water. < Board the boat. < Row in the water. Task: explore the treasure Description: Rush into a villager's home, open a chest, and acquire the treasure. Precondition: Spawn the player in front of a villager's house. SkillAssessed: Interaction with structures, curiosity-driven exploration, and object acquisition. Evaluation: Ignore the house and run away. < Open the door. < Enter the house. < Open the chest.
< Acquire the treasure. Listing 2: The environment configuration and evaluation metric for explore series tasks. ### I.3 Craft The tasks under the craft category in our benchmark have been designed to shed light on an AI agent’s prowess in item utilization, the intricacies of Minecraft crafting mechanics, and the nuances of various game mechanic interactions. These tasks provide a detailed examination of an agent’s capability to convert materials into functional items and harness the game’s various crafting and enhancement mechanics. Figure 12: Examples of tasks in craft category. ⬇ Task: craft the crafting_table Description: Open inventory and craft a crafting table. Precondition: Spawn the player in the plains biome with a stack of oak_planks in the inventory. SkillAssessed: Inventory management and basic crafting. Evaluation: Open the inventory. < Click on the recipe button. < Click on the crafting_table. < Drag the crafting_table into the inventory. Task: craft ladders Description: Place the crafting table and open it to craft ladders. Precondition: Spawn the player in the plains biome with a crafting_table in its main hand and a stack of oak_planks in the inventory. SkillAssessed: Advanced crafting using crafting stations and recipe navigation. Evaluation: Place the crafting_table on the surface. < Open the crafting_tabe. < Click on the recipe book. < Click on the ladder. < Drag the ladder into the inventory. Task: enchant sword Description: Place an enchanting table and use it to enchant a diamond sword. Precondition: Spawn the player in the plains biome with an enchanting table in its main hand, 3 diamond swords, and 3 stacks of lapis_lazuli in the inventory. SkillAssessed: Tool enhancement using enchantment stations and decision-making in choosing enchantments. Evaluation: Place the enchanting_table on the surface. < Open the enchanting_table. < Place the lapis_lazuli or diamond sword. < Place the lapis_lazuli and diamond sword. < Choose any enchantment. Task: smelt food Description: Place a furnace and use it to smelt food. Precondition: Spawn the player in the plains biome with a furnace table in its main hand, 3 stacks of mutton, and 3 stacks of coal in the inventory. SkillAssessed: Food processing using a smelting furnace, raw material to product conversion, and patience in awaiting outcomes. Evaluation: Place the furnace on the surface. < Open the furnace. < Place raw meat or coal. < Place both raw meat and coal. < Wait for the raw meat to be cooked. < Take out cooked meat. Task: cut stone Description: Place a stonecutter and use it to cut stones. Precondition: Spawn the player in the plains biome with a stonecutter in its main hand, 6 stacks of stones in the inventory. SkillAssessed: Tool enhancement using enchantment stations and decision-making in choosing enchantments. Evaluation: Place the stonecutter on the surface. < Open the stonecutter. < Place the stones. < Select a target type of stone. < Drag stones to the inventory. Listing 3: The environment configuration and evaluation metric for craft series tasks. ### I.4 Tool The tasks within the Tool category of our benchmark are designed to deeply investigate an AI agent’s capabilities in tool utilization, precision in tool handling, and contextual application of various tools to carry out specific tasks. This category provides insights into the agent’s skill in wielding, using, and exploiting tools optimally within different Minecraft scenarios. Figure 13: Examples of tasks in tool category. 
⬇ Task: use bow Description: Draw a bow and shoot. Precondition: Spawn the player in the plains biome with a bow in the mainhand and a stack of arrows in the inventory. SkillAssessed: Precision, tool handling, and projectile mastery. Evaluation: Just run. < Draw the bow and shoot the arrow. < Hold the bow steady and charge up the shot before releasing the arrow. Task: set fires Description: Set fires on the trees. Precondition: Spawn the player in the forest biome with a flint_and_steel in its main hand. SkillAssessed: Environment manipulation and controlled chaos creation. Evaluation: Attack the tree. < Start a fire with the flint_and_steel. < Go wild with the fire. Task: lead animals Description: Use rein to tie up the animals. Precondition: Spawn the player in the plains biome with a stack of leads in its main hand. Spawn 5 sheep and 5 cows near the player’s position. SkillAssessed: Entity interaction, tool application on moving entities, and livestock Evaluation: Ignore the animals and run away. < Use the rein to tie up animals. Task: carve pumpkins Description: Place the pumpkins and carve pumpkins with shears. Precondition: Spawn the player in the plains biome with a shear in its main hand and a stack of pumpkins in the inventory. SkillAssessed: Block placement, block modification, and crafting interaction. Evaluation: Just run. < Place the pumpkin on the surface. < Use the shear to carve it. < Get a carved pumpkin. Task: use trident Description: Fly the trident on a rainy day. Precondition: Spawn the player in the plains biome with a trident in the main hand, which is enchanted with riptide. The weather is rain. SkillAssessed: Weather-adaptive tool utilization, motion dynamics, and advanced weapon handling. Evaluation: Just run. < Use the trident to break the block. < Use the trident for quick movement. < Charge to throw the trident farther. Listing 4: The environment configuration and evaluation metric for tool series tasks. ### I.5 Survive The tasks embedded within the survive category of our benchmark aim to analyze an AI agent’s ability to ensure its own survival, adeptness in combat scenarios, and its capability to interact with the environment in order to meet basic needs. Survival, being a core aspect of Minecraft gameplay, necessitates an intricate balance of offensive, defensive, and sustenance- related actions. This category is structured to ensure a thorough evaluation of these skills. Figure 14: Examples of tasks in survive category. ⬇ Task: hunt animals Description: Hunt animals on the plains. Precondition: Spawn the player in the plains biome with an iron sword in the main hand. Spawn 5 sheep and 5 cows near the player’s position. SkillAssessed: Predator instincts, combat efficiency, and sustenance acquisition. Evaluation: Ignore animals and run away. < Hurt animals. < Kill animals. Task: combat enemies Description: Fight the enemy spider. Precondition: Spawn the player in the plains biome with a diamond sword in its main hand and a suite of diamond equipment. Spawn 3 spiders in front of the player. SkillAssessed: Self-defense, offensive combat strategy, and equipment utilization. Evaluation: Ignore spiders and run away. < Hurt spiders. < Kill spiders. Task: use shield Description: Use a shield to ward off zombies. Precondition: Spawn the player in the plains biome with a shield in its main hand and a suite of diamond equipment. Spawn 3 zombies in front of the player. SkillAssessed: Defensive tactics, tool application in combat, and strategic protection. 
Evaluation: Ignore zombies and run away. < Use the shield to protect itself. Task: plant wheats Description: Use an iron_hoe to till the land and then plant wheat seeds. Precondition: Spawn the player in the plains biome with an iron hoe in its main hand, and a stack of wheat seeds in the off hand. SkillAssessed: Land cultivation, planting proficiency, and sustainable resource creation. Evaluation: Just run away. < Till the land. < Plant the wheats. Task: sleep on the bed Description: Place the bed on the surface and sleep. Precondition: Spawn the player in the plains biome with a white bed in its main hand. SkillAssessed: Self-preservation, understanding of day-night cycle implications, and use of utilities for rest. Evaluation: Just run away. < Place the bed on the surface. < Sleep on the bed. Listing 5: The environment configuration and evaluation metric for survive series tasks. ### I.6 Build The tasks within the build category of our benchmark are devised to evaluate an AI agent’s aptitude in structural reasoning, spatial organization, and its capability to interact with and manipulate the environment to create specific structures or outcomes. Building is an integral component of Minecraft gameplay, requiring an intricate interplay of planning, creativity, and understanding of block properties. Figure 15: Examples of tasks in build category. ⬇ Task: build pillar Description: Build a pillar with dirt. Precondition: Spawn the player in the plains biome with a stack of dirt in the main hand. SkillAssessed: Vertical construction and basic structure formation. Evaluation: Just run away. < Look down. < Jump and place the dirt. < Pile the dirt into a few pillars. < Make a really high pillar. Task: dig three down and fill one up Description: Dig three dirt blocks and fill the hole above. Precondition: Spawn the player in the plains biome. SkillAssessed: Ground manipulation and depth perception. Evaluation: Just run away. < Look down. < Dig down three dirt blocks. < Raise the head. < Raise the head and use dirt to fill the hole. Task: build gate Description: Build an archway gate. Precondition: Spawn the player in the plains biome with a stack of oak_planks in the main hand. SkillAssessed: Symmetry, planning, and aesthetic construction. Evaluation: Place no plank. < Build 1 pillar. < Build 2 pillars. < Build an archway gate. Task: build obsidian Description: Make obsidian by pouring a water bucket and a lava bucket. Precondition: Spawn the player in the plains biome with two water buckets and two lava buckets in the Hotbar. SkillAssessed: Material transformation, understanding of in-game chemistry, and precise pouring. Evaluation: Just run away. < Pour water or lava. < Pour both liquids. < Pour into a mold to make obsidian. Task: build snow golems Description: Build snow golems by placing two snow blocks and one carved pumpkin. Precondition: Spawn the player in the plains biome with two stacks of snow blocks and two stacks of carved pumpkins in the Hotbar. SkillAssessed: Entity creation, sequential block placement, and combination of multiple materials. Evaluation: Place no block. < Place at least one kind of block. < Place both kinds of blocks. < Build a snow golem. Listing 6: The environment configuration and evaluation metric for build series tasks.
# Armada: A Robust Latency-Sensitive Edge Cloud in Heterogeneous Edge-Dense Environments

Lei Huang, Zhiying Liang, Nikhil Sreekumar, Cody Perakslis, Sumanth Kaushik Vishwanath, Abhishek Chandra, Jon Weissman
University of Minnesota, Twin Cities, Minneapolis, Minnesota, USA 55455
huan1397, liang772, sreek012, perak005, kaush047, chandra<EMAIL_ADDRESS>

###### Abstract.

Edge computing has enabled a large set of emerging edge applications by exploiting data proximity and offloading latency-sensitive and computation-intensive workloads to nearby edge servers. However, supporting edge application users at scale in wide-area environments poses challenges due to limited point-of-presence edge sites and constrained elasticity. In this paper, we introduce Armada: a densely-distributed edge cloud infrastructure that explores the use of dedicated and volunteer resources to serve geo-distributed users in heterogeneous environments. We describe the lightweight Armada architecture and optimization techniques including performance-aware edge selection, auto-scaling and load balancing on the edge, fault tolerance, and in-situ data access. We evaluate Armada in both real-world volunteer environments and emulated platforms to show how common edge applications, namely real-time object detection and face recognition, can be easily deployed on Armada serving distributed users at scale with low latency.

Keywords: edge computing, resource management, proximity, latency-sensitive, heterogeneity, Armada

## 1\. Introduction

Edge computing, a computing paradigm that brings computation closer to data sources and end-users, has enabled the deployment of emerging edge-native applications (Satyanarayanan et al., 2019a; Chen et al., 2017). With 5G accelerating the first network hop and the rapid rollout of public edge infrastructure, edge computing is starting to play a significant role in the computing landscape (Satyanarayanan et al., 2019b). The emerging edge-native applications, including AR/VR, cognitive assistance, and autonomous vehicles, are latency-sensitive and compute-intensive. Offloading workload from devices to powerful edge servers that can run complex machine learning algorithms is necessary to overcome device-side limitations. The demand for these applications will increase rapidly and will require the edge to be highly available and scalable. However, elasticity is a well-known limitation of edge resources (Wang et al., 2019). A burst of incoming workload can easily overwhelm an edge site, causing service performance degradation. Furthermore, widely geo-distributed users require wide edge availability with full coverage of geographical locations to provide low-latency edge access. These requirements cannot be satisfied by single providers with limited point-of-presence and capacity in today's edge infrastructure deployments (Amazon, 2021a, b; Microsoft, 2021; Google, 2021). Edge platforms that exploit edge resources from multiple providers have been proposed in both industry (Mutable, 2021; EDJX, 2021; MobiledgeX, 2021) and academia (Şenel et al., 2021) to enlarge the edge coverage. However, they are built on top of dedicated resources with a sparsely-distributed resource model: users from a certain geographic location only have one or few nearby edge options that can provide a low-latency response. Overload can easily happen since dedicated resources are physically limited and lack scaling capabilities.
With the advent of powerful personal computers and devices, we believe the necessary compute power is already closer to the users. Volunteer- based underused personal devices can be organized and coordinated at scale to resolve resource limitations on the edge. In this paper, we introduce Armada, a robust latency-sensitive edge cloud that explores the use of both dedicated and volunteer resources to support low-latency computation offloading. Armada uses a densely-distributed resource model: users from a certain geographic location can have multiple nearby options to offload computations. Specifically, we explore the following challenges: * • How to select edge nodes to obtain low end-to-end latency in heterogeneous environments? * • How to achieve edge scalability with multiple loosely-coupled and resource- constrained edge nodes? * • How to guarantee continuous service in volunteer environments with high node churn and failure rate? * • How to minimize latency overhead for data persistence and consistency on edge? Armada implements auto-scaling service deployment mechanisms based on real- time user demand and distribution, and uses a user-side performance probing strategy as a key idea to guide service selection and load balancing among multiple edge nodes. The service deployment mechanisms incorporate several factors that affect performance, including user/data geo-location, edge server load, and network latency. User-side probing employs multiple, flexibly maintained client-to-edge connections that provide fault tolerance by enabling immediate connection switch to alternate edge nodes upon node failure. In addition, we introduce an edge-native storage layer to support low-latency data access when data and processing states cannot persist locally on volatile compute resources. In this paper, we focus on the system and implementation aspects of Armada. We show how real-time inference, a common latency-sensitive and computation- intensive application category, can be easily deployed on Armada and serve geo-distributed users with low latency. Then we take a closer look at system scalability, fault tolerance and data access performance in both real-world volunteer environments and emulation environments. The evaluation shows that Armada achieves a 33% - 52% reduction in average user end-to-end latency with high concurrent demand compared to locality-based and dedicated-resources-only approaches. ## 2\. Armada Overview In this section, we describe the heterogeneous edge-dense environment and give an overview of Armada design goals and system architecture. Then we discuss the application type that Armada supports. ### 2.1. Heterogeneous Edge-Dense Environment Logical proximity, defined as low-latency high-bandwidth communication channels between edge servers and users, is usually provided by a LAN, on- premise networking infrastructures, and increasingly 5G technologies. However, special-purpose networking and compute resources on the edge are highly constrained in availability and scalability. In Figure 1, we show that nearby general-purpose resources in heterogeneous WAN environments (Edge-tier-2) can also provide low-latency benefits when Edge-tier-1 resources are not available or overloaded. We include both dedicated local public servers and volatile volunteer resources in Edge-tier-2 to enlarge the edge presence. Therefore, the resource limitation on edge can be resolved with the help of abundant volunteer edge nodes densely distributed around users, namely edge-dense environments. 
The heterogeneity of Edge-tier-2 resources is twofold. First, connections from users to edge servers in WAN environments are highly diverse in terms of local ISPs and underlying networking infrastructure. Depending on how users connect to the network, the actual number of routing hops and the latency to the same edge server can diverge significantly. Second, accessible compute resources in nearby areas come from multiple providers and individuals. The heterogeneous capacity and hardware can lead to different processing performance, which is on the critical path of user requests and thus affects the end-to-end latency. Volunteer resources amplify such heterogeneity by introducing more edge access points and increasing the system entropy.

Figure 1. RTT latency in heterogeneous edge-dense environment

### 2.2. Design Goals

Armada is designed with the following goals in mind:

* • Support for low-latency computation offloading at scale with densely distributed edge resources: While one edge server is limited by its capacity, many loosely coupled but densely distributed edge nodes can coordinate with each other to provision nearby users at scale. Armada is designed to manage resource-constrained but abundantly distributed edge nodes to support scalable low-latency computation offloading. As a result, applications deployed on Armada are able to automatically scale and obtain more resources in a specific region if more users are present.
* • Locality-based service deployment: Service deployment should be based on fine-grained geographical specifications to reduce networking latency. Multiple replicas of the service (we use the terms service replica and task interchangeably in this paper) should be deployed on different edge nodes to guarantee edge availability and capacity in specified regions. Changes in the currently active users should also dynamically guide the service placement to fit the real-time user distribution. Furthermore, new service deployment should be optimized for short startup time to start serving users in a timely manner.
* • Performance-aware service selection in heterogeneous environments: Geographical proximity is not strictly equivalent to low RTT latency. Multiple factors together determine edge performance, including network/compute resource heterogeneity and availability. Given a list of nearby edge nodes running replicas of the application service, Armada should identify the best-performing edge access point for each user to offload the computation. This edge selection process should also handle load balancing across all users to achieve overall lower latency.
* • Ease of use: Armada interfaces should be easy to use for both application developers and resource contributors. In particular, developers should use the Armada SDK with minimum code modifications to their applications for deployment. Moreover, resource contributors should be able to register their nodes quickly with lightweight components and an isolated runtime.
* • Fault tolerance: Armada must ensure fault tolerance for Armada users in the presence of high node churn due to volatile, unreliable, and unpredictable volunteer resources. Armada users must be guaranteed continuous service and experience zero downtime when a node fails or leaves.
* • In-situ edge storage: Armada should provide a native storage layer on the edge (Sreekumar et al., 2020) to support low-latency data access.
The storage layer should be reliable and independent from the volatile compute layer to persist the data for stateful and data-intensive applications. Also, flexible duplication and consistency policies should be supported for different application requirements. ### 2.3. Armada Architecture Figure 2. Armada system architecture Figure 2 shows the Armada system architecture. Armada consists of geo- distributed nodes that donate their compute and/or storage resources, along with a set of global and central services hosted on dedicated, stable nodes. Both Armada system components and Armada-hosted applications are encapsulated in Docker containers for ease of use and fast deployment. Docker itself provides a lightweight, isolated runtime and abstractions over underlying resources for edge nodes, which is a good option for shipping the code easily to volunteer-based heterogeneous environments. Armada resources and services together constitute the following major components (described in Section 3): * • Beacon: Beacon is the global entry point for all interactions with Armada central services. It will forward requests to corresponding handler components, including application deployment requests, user connection requests and resource registration requests. * • Application Manager: Application manager maintains the states of submitted applications in Armada and manages the application lifecycle. It globally controls, operates, and monitors all application tasks running on different edge nodes, and processes initial user connecting requests. It also handles auto-scaling based on real-time user demand. * • Compute Layer: Compute layer manages dedicated and volunteer compute resources in Armada. It includes Spinner, the compute resource manager and Captain, the compute node. The Spinner handles compute node registration, health check and resource allocation for task deployment requests sent by the Application manager. The Captain manages the local heterogeneous resources through the Docker engine API and processes user workloads. * • Storage Layer: Storage layer manages dedicated and volunteer storage resources in Armada. It includes Cargo manager, the storage resource manager and Cargo, the storage node. The Cargo manager handles storage node registration, health check, maintains metadata and executes storage policies for data-dependent applications. The Cargo manages the local heterogeneous storage resources using the Docker volume and persists data on the edge supporting low-latency access for nearby users. ### 2.4. Armada Applications Armada applications are long-running edge services using Armada resources for low-latency computation offloading. It includes a server-side program submitted to Armada for application-specific processing, and a client-side program used by application users to discover the service and offload computations. Armada deploys multiple replicas of the server-side program (tasks) to guarantee availability and scalability. Moreover, the client-side program uses Armada SDK to help application users locate the nearby service access points and establish direct communication channels. In Armada, we focus on the scenario where application users are co-located with the processing data, such as AR users sending out video streams for real-time processing. However, we also support external data upload from other data sources to the Armada storage layer, providing low-latency data access for running services. 
In Armada, volunteer resources are assumed to be unstable, volatile, and dynamic, with high node churn in heterogeneous environments. Guaranteeing immediate recovery and continuous service upon node failure requires that application clients immediately switch connections to other service replicas and continue processing without waiting for failed node recovery. Therefore, no hard user state or dependencies may be maintained on the server side for Armada applications. Application developers should either modify the application to maintain hard states and execution contexts on the client side, or use the Armada storage layer through the Armada storage SDK to persist the data with minimal latency overhead.

## 3\. Armada System Components

### 3.1. Beacon

Beacon is the entry point of contact for all initial interactions with Armada. It exposes interfaces for application developers to deploy edge services and monitor service status, for application users to query service access points, and for resource contributors to register edge nodes. Requests with different purposes are forwarded to different handler services, i.e., the Application manager, Spinner, and Cargo manager, for further processing. Beacon provides the central public access point for different entities to establish initial connections with Armada components.

### 3.2. Application Manager

Application manager (AM) handles service deployment requests from application developers and service discovery requests from application users. AM also monitors user demand and user distribution to make service auto-scaling decisions. Each service in Armada contains multiple replicas, namely tasks, deployed on distributed edge nodes. AM globally controls and monitors all replicas of the service through task-oriented APIs exposed by the compute layer (Section 3.3). In this way, Armada decouples the application-level management from the underlying edge resource layer. The three major modules of AM are described as follows.

Table 1. Service deployment interface (* denotes optional parameters)

Parameter | Description
---|---
Image | Docker image for the application service
Compute_Req | Compute resource requirements
Sched_Policy* | Optional customized scheduling policy
Location | Coordinate(s) for expected user distribution
Need_Storage | Whether persistent edge storage is required
Storage_Req* | Storage requirements: capacity, consistency policy, and data source

Service deployment. The initial service deployment request includes the parameters shown in Table 1. Service deployers only need to specify the resources required per replica without worrying about the number of replicas and their distribution. AM initially deploys a minimum of three replicas to guarantee fault tolerance through the Spinner task deployment API. More replicas are then automatically spawned based on actual user demand and distribution (discussed later under auto-scaling). For all deployed tasks, AM periodically requests the underlying resource layer to collect real-time updates including running status, current load, and resource utilization. If the Need_Storage field is true, AM sends storage resource requirements to the Cargo manager (Section 3.4) to allocate persistent edge storage capacity associated with the service.
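As a concrete, hypothetical illustration of the parameters in Table 1, a deployment request for an object detection service might look like the following sketch; the field values and the dictionary encoding are our own assumptions, not Armada's actual request format.

```python
# Hypothetical service deployment request mirroring Table 1 (illustrative only).
deploy_request = {
    "Image": "registry.example.com/object-detection:latest",  # assumed image name
    "Compute_Req": {"cpu_cores": 2, "memory_mb": 2048},
    "Sched_Policy": None,                # optional customized scheduling policy
    "Location": [(44.97, -93.23)],       # expected user distribution as (lat, lon)
    "Need_Storage": False,               # this service keeps no server-side state
    "Storage_Req": None,
}
```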
Service discovery and selection. AM maintains the metadata and states of all deployed service replicas. Application users need to query AM for nearby access points before establishing direct communication channels. However, networking performance is nondeterministic in heterogeneous wide-area environments, and different hardware leads to different processing speeds. In addition, non-Armada networking traffic and workloads are unpredictable in practical volunteer environments, which also causes performance fluctuations at random periods. There are no unified criteria that address all of the above heterogeneities and system dynamics at the same time. We argue that periodic end-to-end latency probing is the only effective way to deterministically identify the best-performing edge node in real time. In Armada, we propose a 2-step approach for application clients to accurately select low-latency service access points. AM implements the first step of this approach by generating the service candidate list, and application clients finish the second step by performing the probing tests and making the final decision (Section 4).

Algorithm 1 Service Selection Step-1

Input: $Loc, NetType, \dots$
Output: $CandidateList$
1: function ServiceSelect($Loc, NetType, \dots$)
2:   $LocalServices \leftarrow geoProximitySearch(Loc)$
3:   for $i \leftarrow 1$ to $LocalServices.len()$ do
4:     $EdgeNetType \leftarrow LocalServices[i].NetType$
5:     $LocalServices[i].Score \leftarrow LocalServices[i].Resources * weight1 + netAffiliation(EdgeNetType, NetType) * weight2 + \dots$
6:   end for
7:   $CandidateList \leftarrow TopNSort(TopN, LocalServices)$
8:   return $CandidateList$
9: end function

Algorithm 1 shows how to generate the service candidate list using the user information as input. The candidate list is a small subset of service replicas that are likely to provide low-latency responses for a specific user. The considered factors include geo-proximity, resource utilization of the service replica (to detect overload), and the optionally-specified network affiliation between edge nodes and users. In geoProximitySearch(), we apply GeoHash (Balkić et al., 2012) with reduced precision to identify a wider-range geographical area, so relatively far-away edge nodes are evaluated in the same way as closer edge nodes, avoiding the exclusion of better-performing options from the candidate list in heterogeneous environments. TopN (line 7) is the length of the candidate list. A larger TopN value leads to higher accuracy but also higher overhead during the performance probing step. We use a TopN of 3 to obtain moderate overhead and sufficient accuracy.

Service auto-scaling. AM handles the auto-scaling of the service based on real-time user demand and distribution. The initial three service replicas are deployed in the expected locations (Table 1) without having actual users connected. When users join, AM asynchronously associates user locations with new task deployment requests sent to Spinner. The Spinner scheduler then tries to incrementally allocate more edge resources in the specified locations to deliver better edge performance. With the help of Spinner scheduling policies (Section 3.3), AM auto-scaling requests can adapt to both higher user demand and wider user distribution by deploying more replicas in overloaded locations and spawning replicas in new locations. In Armada, scalability is thus achieved at both the service deployment and the user service selection level to better allocate edge resources and balance user workloads, achieving higher average performance.

### 3.3. Armada Compute Layer

Armada compute layer manages dedicated and volunteer compute resources to execute latency-sensitive and computation-intensive edge services.
It contains Spinner, the compute resource manager, and Captains, the geo-distributed edge compute nodes in Armada.

#### 3.3.1. Spinner

Table 2. Spinner interfaces

Interface | Input/Output | Description
---|---|---
Task_Deploy | Task_Metadata / Status, Task_ID | Application manager sends a task deployment request to Spinner.
Task_Status | Task_ID / Task_Status | Application manager queries the runtime status of the task.
Task_Cancel | Task_ID / Status | Application manager notifies Spinner to remove a task.
Captain_Join | Node_Metadata / Status | A new Captain registers itself into the system.
Captain_Update | Captain_Updates / _ | Captain sends heartbeats to Spinner reporting status updates.
New_Policy | Schedule_Policy / Status | Register a new scheduling policy.

Spinner manages edge compute resources in Armada and runs the Armada scheduler that allocates edge resources and deploys tasks. Table 2 shows the Spinner interfaces, including task-oriented APIs for the Application manager to operate on tasks and APIs for Captains to register and report status. Spinner acts as the bridge between Armada applications and the underlying edge compute resources. Spinner handles the Task_Deploy request through the Armada scheduler. Given the task image, resource requirements, target location, and optional custom scheduling policies, the Armada scheduler uses a series of node filters followed by sorting policies to effectively select edge nodes in heterogeneous environments. We consider four types of policies:

* • Locality-based. The geo-proximity filter is the fundamental policy for identifying nearby edge nodes. Based on the density of edge nodes at target locations, the proximity range can be dynamically modified to limit the number of selected edge nodes.
* • Resource-aware. Spinner monitors the resource utilization (CPU and memory availability) of all edge nodes. The resource-aware sorting policy sorts the edge nodes based on the required compute power and actual availability.
* • Docker-aware. The startup time of Docker containers (Zheng et al., 2018) can cause a high delay during the auto-scaling process when new service replicas need to be deployed very fast. Docker image layers with the same digest ID can be reused to reduce the downloading time of new images (Fu et al., 2020). We use the Docker-aware sorting policy to identify edge nodes on which tasks are faster to deploy, based on identical Docker layers.
* • Customized. Application deployers can define custom filter and sorting policies to guide service scheduling. For example, network types and dedicated/volunteer resource preferences can be specified to sort or filter edge nodes. Data-dependent workloads can also specify policies that use data sources to guide node selection.

Filter policies are applied sequentially to remove unqualified Captains, while all sorting policies are used collectively to determine the final sorting order. Each sorting policy is subject to a weight, defined as how significantly the policy affects latency performance. The weighted score decides the final selected Captain for each Task_Deploy request. Note that Spinner also notifies unselected Captains to prefetch the task images if possible, accelerating future task deployment by reducing the image downloading time.
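The following is a minimal sketch of this filter-then-weighted-sort scheduling idea; the node attributes, weights, and function names are illustrative assumptions rather than Spinner's actual implementation.

```python
from typing import Callable, Dict, List, Tuple

def schedule(captains: List[Dict],
             filters: List[Callable[[Dict], bool]],
             sorters: List[Tuple[Callable[[Dict], float], float]]) -> Dict:
    """Pick a Captain: apply filter policies sequentially, then rank by a weighted score.

    `sorters` is a list of (scoring_function, weight) pairs; the highest total score wins.
    """
    candidates = [c for c in captains if all(f(c) for f in filters)]
    if not candidates:
        raise RuntimeError("no Captain satisfies the filter policies")
    def score(c: Dict) -> float:
        return sum(weight * fn(c) for fn, weight in sorters)
    return max(candidates, key=score)

# Hypothetical policies: a locality filter plus resource-aware and Docker-aware sorters.
node = schedule(
    captains=[{"geohash": "9zvx", "free_cpu": 0.6, "has_layers": True},
              {"geohash": "9zvy", "free_cpu": 0.9, "has_layers": False}],
    filters=[lambda c: c["geohash"].startswith("9zv")],           # locality-based
    sorters=[(lambda c: c["free_cpu"], 0.7),                      # resource-aware
             (lambda c: 1.0 if c["has_layers"] else 0.0, 0.3)],   # Docker-aware
)
```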
#### 3.3.2. Captain

Captain (the term refers both to the edge compute node and to the controller container running on it) is an edge compute node in Armada. It listens to task operation instructions from Spinner, manages the container lifecycle locally through Docker engine APIs, and discovers nearby edge storage capacity for data-related tasks using the Cargo manager (Section 3.4). Captain isolates the Armada runtime from the host environment and exposes edge services for direct connections with nearby users. Captain also reports local resource utilization, task running status, and image repository information periodically to Spinner.

### 3.4. Armada Storage Layer

Armada storage layer maintains dedicated and volunteer storage resources in Armada. It enables edge services and applications to persist data on the edge with low-latency access. The Armada storage layer consists of two components: Cargo manager, the storage resource manager, and Cargos, the geo-distributed storage nodes.

#### 3.4.1. Cargo Manager

Cargo manager manages edge storage resources in Armada. Table 3 shows the Cargo manager interfaces: for Cargos to join and report status, for the Application manager to allocate storage resources, and for Captains to discover nearby data access points. The Cargo manager also spawns data replicas to guarantee fault tolerance and low-latency data access for geo-distributed services. Data persistence is achieved on the edge with redundant data replicas and flexible data consistency policies. The three main modules of the Cargo manager are described as follows:

Table 3. Cargo manager interfaces

Interface | Input/Output | Description
---|---|---
Cargo_Join | Cargo_Metadata / Status | A new Cargo registers itself into the system.
Cargo_Update | Cargo_Updates / _ | Cargo sends heartbeats to Cargo manager reporting status updates.
Store_Register | Storage_Req / Status | Application manager registers storage capacity for an edge service.
Cargo_Discover | Captain_Info / Status | Captain queries nearby data access points.

Storage registration: Application manager sends the Store_Register request to the Cargo manager during the service deployment phase (Section 3.2) if the application requires persistent edge storage. The Store_Register request contains the service identifier, the capacity requirement for each data replica, the consistency policy, and the data source for original data uploading. We initially allocate resources and deploy three data replicas on three Cargos to guarantee availability and fault tolerance. The Cargo selection is based on the locations and storage requirements given by the service deployment request.

Data access point selection: Cargo manager maintains the metadata and states of all data replicas for an edge service. After the storage registration, Captain sends Cargo_Discover requests during the task deployment phase to help tasks find nearby data access points. A 2-step approach similar to that used in service selection is applied to overcome the network heterogeneity and locate the best-performing data access point. First, a candidate list is generated by the Cargo manager based on the geo-proximity between the Captain and the Cargos holding the data replicas. Optional factors like network affiliation can also be specified to help rank the candidates. Second, the Captain performs data access probing to identify the fastest access point. The additional candidates in the list are used to handle fault tolerance through an immediate connection switch upon Cargo failures.

Storage auto-scaling: Initially, a service is allocated three storage replicas.
When more service replicas are spawned to satisfy higher user demand and wider user distribution, the storage layer should also adaptively scale to guarantee low-latency data access for the geo-distributed service replicas. We employ the same idea as in the service auto-scaling process. When new service replicas are deployed, the Cargo manager asynchronously creates new data replicas on Cargos geo-proximate to the services. Since more replicas lead to higher resource usage and data consistency overhead, the Cargo manager collects the data access probing feedback from Captains to carefully evaluate the need to spawn new data replicas.

#### 3.4.2. Cargo nodes

Cargo is an edge storage node in Armada. It handles data I/O operations and the propagation of updates to replicas depending on the type of consistency. Each Cargo node is aware of at most three replica Cargo nodes corresponding to the application data. The updates made to one Cargo node are propagated in a cascading manner to all the replicas if more data replicas are spawned to meet user demand. Table 4 describes the Armada storage SDK used by server-side application programs to interact with the storage layer. With Captains locating nearby data access points, the Armada storage SDK helps tasks transparently communicate with nearby Cargos.

Table 4. Armada storage SDK

Function | Input/Output | Description
---|---|---
Init_Cargo | Cargo_App_Metadata / Status | Establish a connection with a Cargo node.
Write | Write_Data / Write_Status | Write data to the Cargo node.
Read | Read_Data / Read_Status | Read data from the Cargo node.
Close_Cargo | _ / Status | Close the connection to the Cargo node after use.

## 4\. Application Client

Application client is the user-side program of Armada applications. It contains the application-specific logic and uses the Armada client SDK to help application users locate service access points. The application client plays an important role in coordinating with Armada system components to achieve latency-sensitive service selection, scalability, and fault tolerance. We describe the performance probing and multi-connection strategies, which are the core building blocks of the Armada client SDK, and discuss how they are applied to deliver Armada's benefits.

Performance probing, as discussed in Section 3.2, is the second step in the service selection process. Application clients first obtain the service candidate list through the Beacon interface and then establish connections to each candidate for probing tests. The candidate with the lowest end-to-end latency is selected to start offloading the actual workload. More importantly, the 2-step service selection process is performed periodically and asynchronously in the background to adapt to system dynamics. If the selected node is overloaded or a closer node joins the system later, application clients can always identify the change and switch to a better edge node if necessary. As a result, load balancing is handled automatically, since overload negatively affects the performance probing results. A far-away edge node can be selected if a closer node delivers worse performance due to overload. Therefore, latency-driven performance probing balances the load and improves edge scalability.

Multi-connection strategy is used to achieve fault tolerance and guarantee continuous service. Each application client maintains multiple connections to different candidate edge nodes and uses this redundancy to prepare for potential server failures.
Multi-connection strategy is used to achieve fault tolerance and guarantee continuous service. Each application client maintains multiple connections to different candidate edge nodes and uses this redundancy to prepare for potential server failures. Since all connections are already established and the data being processed is independent of the server (Section 2.4), there is no additional overhead in switching connections from a failed node to a working node. Candidate nodes obtained from the service selection process are already sorted by performance; therefore, the second-best candidate is selected to maintain low-latency responses.

Application developers write the application client program using the Armada client SDK, which lets them integrate the above functionalities with minimal code modifications. We currently support the gRPC protocol in Golang, and around 10 lines of code are added to apply the changes in our experiment applications.

## 5\. Real-time Inference on Armada

We implement two real-time inference workloads to evaluate Armada's performance. Real-time object detection and face recognition are critical building blocks in commonly used applications like augmented reality, cognitive assistance, and security surveillance. They are both computation-intensive and latency-sensitive, and thus require offloading the computation to powerful servers and obtaining processing results in a timely manner. First, we use an object detection workload to demonstrate the workflow of the Armada computing layer. Second, the face recognition workload (Kagami, 2021) showcases the coordination between the computing and storage layers when an Armada application needs persistent edge storage.

### 5.1. Real-time Object Detection

Figure 3 shows the workflow of real-time object detection in Armada. In the service deployment phase (Figure 3 (a)), service deployers first contact Beacon in step (1) to submit the application along with its requirements to Armada. The Application manager receives this request in step (2) and initiates three task deployment requests sent to Spinner in step (3). Spinner then calls the Armada scheduler to find available edge nodes and place the tasks in step (4). In the end, the task deployment status and service deployment status are updated back to the deployers in steps (5) - (8). In Figure 3 (b), when users request the object detection service in Armada, they first query the system for service access points in steps (1) - (4), and then start sending video frames for object detection in step (5). Note that TopN connections are maintained using the candidate list obtained from the service selection process.

Figure 3. Object detection workflow in Armada

Figure 4. Face recognition workflow in Armada

### 5.2. Face Recognition

Figure 4 shows the workflow of real-time face recognition in Armada. In the service deployment phase (Figure 4 (a)), service deployers first submit the application along with its requirements for both compute and storage resources (1) - (2). Then the Application manager contacts the Cargo manager to register the storage requirement of the service (3). The Cargo manager selects three Cargos and allocates the required storage resources for three data replicas. The three Cargos then use the specified data source to pull the initial pre-labeled face datasets used to recognize people during real-time inference. In steps (4) - (5), tasks are sent to the compute layer for deployment. To connect tasks with nearby data access points, Captains query the Cargo manager in step (6). Given the candidate list of access points, tasks can directly interact with the selected data replicas in step (7) using the Armada storage SDK. In the end, the task and service deployment status are updated back to the deployers in steps (8) - (11).
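To make the storage-side interaction in step (7) concrete, here is a minimal Go sketch of how a face recognition task might use the SDK functions of Table 4. The Go types and signatures below are assumptions modeled on the table (Init_Cargo establishes the connection, Close_Cargo releases it), not the SDK's actual API, and the descriptor layout follows the format used later in the evaluation (Section 6.5).

```go
package task

// FaceDescriptor mirrors the <ID (8 bytes), vector (128 * 8 bytes)> layout
// used in the evaluation; the Go representation is an assumption.
type FaceDescriptor struct {
	ID     uint64
	Vector [128]float64
}

// CargoConn is an assumed Go view of the SDK functions in Table 4;
// it is obtained via Init_Cargo and released via Close_Cargo (Close).
type CargoConn interface {
	// Read sends a detected descriptor to the Cargo replica and asks for a
	// matching labeled face, if one exists.
	Read(query FaceDescriptor) (matchID uint64, found bool, err error)
	// Write inserts a newly labeled descriptor into the replica.
	Write(d FaceDescriptor) error
	Close() error
}

// recognize follows the read-followed-by-write pattern: query first, and
// persist the descriptor when the face is not yet known.
func recognize(c CargoConn, d FaceDescriptor) (uint64, error) {
	id, found, err := c.Read(d)
	if err != nil {
		return 0, err
	}
	if found {
		return id, nil // recognized from the pre-labeled dataset
	}
	if err := c.Write(d); err != nil {
		return 0, err
	}
	return d.ID, nil // newly labeled face stored for future recognition
}
```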
Figure 4 (b) shows the workflow when face recognition clients request the service. In steps (1) - (4), clients first query the system for service access points, and then start sending video frames for face recognition in step (5). For any faces detected during processing, tasks query the data replicas in Cargos for face recognition (6). The read requests send detected faces to the Cargo, searching for matching people, and the write requests insert newly labeled faces into the persistent data store for future recognition.

## 6\. Evaluation

We evaluate Armada in both real-world edge environments and emulation platforms in the cloud. The real-world experiment explores Armada's performance in fine-grained, small geographical areas (regions within a city). The emulation experiment explores wider geographical areas (regions across nearby cities). We first use a computation-only workload, object detection, to demonstrate Armada's service selection, scalability, and fault tolerance performance. Then we use a face recognition workload to explore the storage layer performance when a persistent store is required.

### 6.1. Experimental Setup

Table 5 shows the underlying hardware used for both the real-world and emulation experiments. Note that the last column shows the per-frame processing time of the real-time object detection application on this hardware.

Node | Processor | Processing
---|---|---
V1 | Intel® Core™ i7-9700, 8 cores | 24ms
V2 | Intel® Core™ i7-2720, 6 cores | 32ms
V3 | Intel® Core™ i9-8950HK, 6 cores | 31ms
V4 | Intel® Core™ i5-8250U, 4 cores | 45ms
V5 | Intel® Core™ i5-5250U, 2 cores | 49ms
D6 | Intel® Xeon® CPU E5-2620 v3, 24 cores | 30ms×4
Cloud | t2.large, 4 cores | 34ms

(a) Real-world experiment

Node | Type | Location | Processing
---|---|---|---
A | t2.2xlarge, 8 cores | City_A | 23ms
B | t2.large, 4 cores | City_B | 34ms
C | t2.small, 2 cores | City_C | 58ms
Cloud | t2.large, 4 cores | Cloud | 34ms

(b) Emulation experiment

Table 5. Hardware used and frame processing performance

#### 6.1.1. Real-world Environment

We set up the real-world experiment environment around our University campus. As shown in Table 5 (a), a combination of both dedicated and volunteer resources is used. Volunteer nodes V1 - V5 are located within 5 miles of the campus, and a powerful University server D6 located on campus is considered the dedicated edge node. While the dedicated node has more compute power and better network connectivity, the volunteer nodes are set up with heterogeneous compute and networking performance contributed by actual volunteers around the campus. The dedicated node D6 can hold four service replicas in parallel, with each of them processing video at 30ms/frame. Figure 5 shows the benefits of exploiting volunteer resources from one user's perspective. Volunteer nodes can deliver similar or even better performance compared to the dedicated edge node.

#### 6.1.2. Emulation Environment

Due to physical limits, we use the emulation environment to explore Armada's performance on a wider geographical scale. We use the network emulation platform Netropy (Technologies, 2021) in AWS to emulate WAN connectivity for three nearby cities, City_A, City_B and City_C, which are about 100 - 150 miles away from each other. Three edge nodes A, B and C are located at these three locations, as shown in Table 5 (b).

#### 6.1.3. Baselines

We use geo-proximity, dedicated-edge-only and cloud scenarios for comparison with Armada.
* • Geo-proximity: In the geo-proximity scenario, we force all users to connect to the geographically closest edge node, a typical edge selection policy for identifying a low-latency edge access point.
* • Dedicated-edge-only: In the dedicated-edge-only scenario, we assume that only limited dedicated edge resources are available, which is common in today's edge infrastructure deployments. As shown in Table 5 (a), we use one powerful dedicated node as compared to 5 resource-constrained volunteer nodes to maintain a reasonable ratio between the availability of dedicated and volunteer resources. We show the benefits of exploiting volunteer resources by comparing against the dedicated-edge-only scenario.
* • Cloud: We show the cloud performance as the baseline compared to the other scenarios. We use the closest AWS service region, US East, to deploy the services and assume that the cloud has unlimited scalability with increasing user demand.

Figure 5. CDF of end-to-end latency for different servers

### 6.2. Latency-Sensitive Service Selection

We set up three users, C1, C2 and C3, in the real-world experiment. They are located around the campus with heterogeneous networking performance to the different edge nodes. We also set up three users, User_A, User_B and User_C, in the emulation platform and configure them to be at the same locations as nodes A, B and C with the corresponding real-world WAN networking performance. Table 6 shows the pairwise end-to-end latency for the object detection application. The bold underlined values refer to the service access point selected by Armada for each user.

Client | V1 | V2 | V3 | V4 | V5 | D6 | Cloud
---|---|---|---|---|---|---|---
C1 | 38 | 47 | 49 | 65 | 72 | 42 | 107
C2 | 43 | 35 | 56 | 58 | 61 | 45 | 102
C3 | 49 | 50 | 45 | 59 | 71 | 42 | 112

(a) End-to-end latency (ms) in real-world environment

Client | A | B | C | Cloud
---|---|---|---|---
User_A | 31 | 63 | 89 | 108
User_B | 63 | 47 | 83 | 102
User_C | 51 | 68 | 58 | 111

(b) End-to-end latency (ms) in emulation environment

Table 6. Latency-sensitive service selection in Armada

Table 6 (a) and (b) show that users in both the real-world and emulation environments can identify the heterogeneity of the environment and select the best-performing node to offload the workload. In Table 6 (b), User_C selects the farther node A due to local resource limitations at node C.

### 6.3. Scalability and Load Balancing

We explore Armada's scalability under high user demand and wide user distribution. We evaluate the average end-to-end latency of the object detection application with a varying number of users and edge nodes.

#### 6.3.1. Performance over increasing user demand

We recruit 15 users around the campus (within 5 miles) with heterogeneous networks to act as object detection clients in the real-world experiment. With edge resources from the five volunteer nodes and one dedicated node shown in Table 5 (a), the 15 users incrementally start requesting the service. We record the average end-to-end latency at three time slots, when there are five, ten and 15 concurrent users. Figure 6 shows the average user performance using Armada as well as the other baselines.

Figure 6. Performance over increasing user demand

Armada shows promising scalability: 33% faster than the geo-proximity scenario and 52% faster than the dedicated-edge-only scenario at #clients = 15 in our experimental setup. First, locality-based service selection ignores network heterogeneity and quickly leads to performance degradation caused by overload.
Second, dedicated edge resources are limited in points of presence and elasticity. High concurrent user demand can easily overload an edge site, as shown in Figure 6, where the dedicated-edge-only scenario is even worse than the cloud performance at #clients = 15.

#### 6.3.2. Performance over wide user distribution

In this emulation experiment, we explore Armada's scalability and load balancing behavior in wide-area settings.

Varying no. of users with a fixed set of edge nodes: In Figure 7, with static edge nodes A, B and C as described in Table 5 (b), we incrementally add users to different cities and observe the average latency for users at each city. Each subfigure shows the user distribution, and the notation table shows the users' edge selection results in Armada. Figure 7 (a), as an example, has one user at City_A, one user at City_B and no users at City_C. The City_A user selects node A and the City_B user selects node B for processing. We also show the latency of locality-based edge selection and the cloud as comparisons with Armada. Figure 7 (b) shows that the user at City_C selects node A for processing, since node A is more powerful and performs better than the local node C. Figure 7 (c) shows that when two local users are present at City_A, the user at City_C switches back to the local node C, since node A is fully loaded serving local users. Figure 7 (d) shows that when node C is already serving a local user, the second user selects the farther node A after performance probing comparisons. Note that the average performance for users at City_A in Figure 7 (b) and (d) is worse than with the locality-based approach, because the local node A serves additional users from other cities.

Varying no. of edge nodes with a fixed set of users: In Figure 8, with three static users at the three cities, we incrementally add edge nodes and observe the user performance. The subfigure captions show the edge node distribution in this case. Figure 8 (b) shows that a new node at City_A improves the performance of all three users in the different cities. Figure 8 (c) shows that a new node at City_B further improves the performance of all three users: the user at City_B switches to the local node B and releases resources on node A. Figure 8 (d) shows that a new node at City_C does not affect the performance, because the more powerful node A delivers better performance to the user at City_C.

Figure 7. End-to-end latency: varying no. of users with a fixed set of edge nodes. Each subfigure shows performance under a different user distribution. The notation table shows the users' edge selection results in Armada.

Figure 8. End-to-end latency: varying no. of edge nodes with a fixed set of users. Each subfigure shows performance under a different edge node distribution. The notation table shows the users' edge selection results in Armada.

#### 6.3.3. Fast auto-scaling and Captain registration

We also explore the task deployment speed during the service auto-scaling process. Figure 9 (a) shows the average task deployment time under different strategies. When multiple edge nodes satisfy the task deployment requirements, Armada uses the image prefetch and Docker-aware policies discussed in Section 3.3.1 to reduce the deployment time (a simplified sketch of the Docker-aware idea is given below). Compared to random selection and anti-affinity selection (Kubernetes, 2021), a common approach to avoid workload similarities, Armada achieves faster task deployment. Armada can also keep expanding its capacity with the help of volunteer nodes.
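Since Section 3.3.1 is not reproduced here, the following Go sketch only illustrates the intuition behind a Docker-aware placement rule: among feasible candidates, prefer nodes that already cache the container image, because avoiding an image pull usually dominates task deployment time on edge nodes. The types, fields, and scoring rule are our own assumptions, not Armada's implementation.

```go
package scheduler

// Node describes an edge node candidate for task placement; the fields are
// illustrative assumptions.
type Node struct {
	ID           string
	HasImage     bool  // container image (or its base layers) already cached
	FreeCPUMilli int64 // remaining CPU after currently running tasks
}

// pickNode returns the preferred node for a task needing cpuMilli CPU:
// nodes that already hold the image win; ties are broken by free capacity.
func pickNode(candidates []Node, cpuMilli int64) (Node, bool) {
	var best Node
	found := false
	for _, n := range candidates {
		if n.FreeCPUMilli < cpuMilli {
			continue // cannot host the task at all
		}
		if !found ||
			(n.HasImage && !best.HasImage) ||
			(n.HasImage == best.HasImage && n.FreeCPUMilli > best.FreeCPUMilli) {
			best, found = n, true
		}
	}
	return best, found
}
```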
In Figure 9 (b), we measure the Captain registration time and the resource usage during idle time to explore Captain's lightweight characteristics. It shows that Captain registration is 57% and 86% faster than K3s (K3s, 2021) and K8s (Brewer, 2015) registration, respectively, and that Captain has lower resource usage during idle time. Note that we only record the time used by the node registration modules in K3s and K8s for a fair comparison.

Figure 9. Fast auto-scaling and Captain registration

### 6.4. Fault Tolerance

Armada uses the user-driven multi-connection strategy to guarantee continuous service across edge failures. We evaluate Armada's fault tolerance in the real-world experiment environment with the object detection workload. Figure 10 (a) shows the end-to-end latency for continuous video frames from a single user's perspective. When the currently connected edge node suddenly fails or leaves the system, the Armada client can immediately switch to a backup node and avoid service downtime, in contrast to a server re-connect approach. In Figure 10 (b), we manually fail edge nodes one by one and observe the average end-to-end latency of ten static users after each failure. The service is always guaranteed to be continuous in this experiment. As a comparison, we develop an Edge-to-Cloud approach where the end user immediately switches to the cloud upon node failure. The value on top of each data point (say 8/10) shows the number of users still connected to the edge after each node failure. With all edge nodes failed, both the Edge-to-Cloud and Armada approaches show cloud performance at the end. However, Armada shows a lower average latency, since the affected users switch to alternative edge nodes for low-latency processing.

Figure 10. End-to-end latency over node churn. The ratios over the data points show the number of users that are still connected to edge nodes.

### 6.5. Performance of Storage Layer

We use the face recognition workload to evaluate the storage layer performance in the real-world experiment. In the following experiments, we focus on the communication between tasks and Cargos. We therefore configure TopN to 1 to simplify the compute layer workflow; in this case, each application client only connects to one task. We explore the effects of the Cargo selection strategy, storage fault tolerance, and different consistency policies. The same set of resources described in Table 5 (a) is used, each with 2GB of persistent storage capacity. In addition, each data replica initially uploaded to a Cargo contains 1000 labeled face descriptors (Learned-Miller, 2014) in the format of $<$ID (8 bytes), vector (128 * 8 bytes)$>$ pairs. We focus on three workloads for evaluation:

Read-only workload: 1000 face images are used as the task input video frames for real-time recognition. The task processes each image, detects the face and generates a unique face descriptor. Then the task queries the Cargo to find the matching descriptor along with the face ID. The read latency includes the time to connect to the Cargo and the query processing time. The tasks do not buffer labeled faces locally, so that the Armada storage layer performance is explored thoroughly.

Write-only workload: 1000 new face images are used as the task input video frames. We configure the task to detect faces and directly write the new face descriptors with face IDs into the Cargo data replica. The write latency includes the time to connect to the Cargo and to perform the write.
Read-followed-by-write workload: 1000 new face images are used as the task input video frames. For each image, the task first sends a read request to query the Cargo and then writes the new face descriptor into the Cargo when the read request cannot recognize the face.

#### 6.5.1. Cargo selection

We explore the Cargo selection results using the read-only workload. Nodes V1, V2, D6 and Cloud are registered as four Cargos, and V3, V4 and V5 are used as Captains to run three face recognition tasks. We also configure three users co-located with the three Captains for simplicity. Table 7 shows the Cargo selection result and the pairwise read latency. We can see that the Cargo selection strategy can identify the environmental heterogeneity and select the best-performing data access point for each data-dependent task.

Task | Cargo_V1 | Cargo_V2 | Cargo_D6 | Cloud
---|---|---|---|---
Task_V3 | 21 | 25 | 31 | 61
Task_V4 | 25 | 23 | 33 | 64
Task_V5 | 42 | 38 | 18 | 60

Table 7. Cargo selection

#### 6.5.2. Storage fault tolerance

We demonstrate the storage fault tolerance behavior using the same experimental setup described in Section 6.5.1. In this experiment, we only focus on the read latency from Task_V5's perspective. Figure 11 shows that Task_V5 can immediately switch to Cargo_V2 upon Cargo_D6 failure. Thus, the Armada storage layer can guarantee continuous low-latency data access for edge services, compared to a Cloud-backup scenario. This experiment also shows the benefits of exploiting volunteer resources when dedicated edge resources are not available.

Figure 11. Continuous Cargo service on the edge

#### 6.5.3. Effect of Consistency

We run the three workloads to explore the effect of different consistency policies in Armada. We also separate the performance of dedicated Cargos, volunteer Cargos, and the cloud to illustrate the benefits of exploiting volunteer resources for edge storage. We set up three configurations using dedicated Cargos, volunteer Cargos, and Cloud-located Cargos for both the strong and eventual consistency scenarios. All edge nodes and users are loosely coupled with each other in real-world heterogeneous environments. As shown in Figure 12 and Figure 13, we record the data I/O latency across the different configurations, consistency policies, and workload types. Figures 12 (a) and 13 (a) show that strong and eventual consistency have similar read latency, since no data propagation is required for the read-only workload. Figures 12 (b) and 13 (b) show that strong consistency on volunteer Cargos can cause higher latency than the cloud, since volunteer nodes are loosely coupled, leading to high data propagation overhead. Similar to the write-only workload, Figures 12 (c) and 13 (c) show that strong consistency has higher overhead caused by synchronized data propagation. Based on the above, volunteer Cargos in Armada exhibit performance similar to dedicated Cargos when using eventual consistency. This also demonstrates the benefits of utilizing volunteer edge storage over the cloud for low-latency data access.

Figure 12. Read-Write latency for Strong Consistency

Figure 13. Read-Write latency for Eventual Consistency

## 7\. Related Work

Several research projects have investigated the utilization of volunteer resources for both compute and storage (Anderson, 2020; Pouwelse et al., 2005; Mengistu and Che, 2019).
Nebula (Ryden et al., 2014) is a geo-distributed edge cloud that uses volunteer resources alongside an otherwise dedicated resource system to provide a data-intensive computing infrastructure for intensive computation and data storage, relying on a NaCl sandbox. The NaCl sandbox has limited memory space and computation, which prevents it from running compute-intensive applications. Ad Hoc Cloud System (McGilvary et al., 2015) and cuCloud (Mengistu et al., 2018) are volunteer systems that harvest resources from sporadically available volunteer nodes; however, they lack locality- or performance-aware mechanisms. Some groups have investigated running compute-intensive tasks on edge nodes based on MapReduce (Carson et al., 2019; Costa et al., 2012). These studies aim to handle resource allocation and data durability; however, they are mainly designed for heavy computation, with less concern for data storage. In industry, K3s (K3s, 2021) is a lightweight version of Kubernetes (Brewer, 2015), specifically designed for edge or IoT scenarios. KubeEdge (Xiong et al., 2018) leverages computing resources from the cloud and the edge to coordinate both environments. However, these systems are still oriented toward central cluster management, without optimizations for heterogeneous resources and locality.

Storage at the edge can be categorized into offload (data offloaded to the edge and synced with the cloud), aggregate (data collected from multiple devices to the edge) and P2P (data generated by one device and shared with another) (Baccarelli et al., 2017; Naranjo et al., 2018). Most of the existing storage systems focus on the offload and aggregate models. P2P storage is not explored much due to concerns about data security and synchronization difficulties across unreliable devices. CloudPath (Mortazavi et al., 2017) uses PathStore (Mortazavi et al., 2018), an eventually consistent datastore with persistent data on the cloud and partial replicas on the edge. The store may have degraded performance when new data is queried frequently. SessionStore (Mortazavi et al., 2020) is a hierarchical datastore that guarantees session consistency using session-aware reconciliation algorithms built on top of Cassandra (Lakshman and Malik, 2010), and hence supports client mobility to an extent. DataFog (Gupta et al., 2018) is an IoT data management infrastructure that places replicas based on spatial locality, addresses sudden surges in demand using a location-aware load balancing policy, and evicts and compresses data based on temporal relevance. However, it does not support network-proximity-based node selection. FogStore (Mayer et al., 2017) is a geo-distributed key-value infrastructure that places replicas based on the latency of data access. Also, to ensure fault tolerance, similar to DataFog, one of the replicas in FogStore is kept at a remote location. However, it does not take into account the limited storage capacities of heterogeneous storage nodes.

## 8\. Conclusion

We presented the design of Armada, a densely distributed edge cloud infrastructure running on dedicated and volunteer resources. The lightweight Armada architecture and its system optimization techniques were described, including performance-aware edge selection, auto-scaling and load balancing on the edge, fault tolerance, and in-situ data access. We illustrated how Armada serves geo-distributed users in heterogeneous environments. An evaluation was performed in both real-world volunteer environments and emulated platforms.
Compared to the locality-based approach and dedicated-resource-only scenario, Armada shows a 32% - 52% reduction in average end-to-end latency. We will formulate a service/data placement problem to identify suitable nodes for deploying services and storing data for the next step. We also plan to carry out an online churn analysis to quantify the volunteer node stability, which will play an essential part in the placement process. Furthermore, we will also explore different policies like service/data migration and dynamic replication with fine-grained consistency to support mobility in the future Armada version. ## References * Satyanarayanan et al. [2019a] Mahadev Satyanarayanan, Guenter Klas, Marco Silva, and Simone Mangiante. The seminal role of edge-native applications. In _2019 IEEE International Conference on Edge Computing (EDGE)_ , pages 33–40, 2019a. doi: 10.1109/EDGE.2019.00022. * Chen et al. [2017] Zhuo Chen, Wenlu Hu, Junjue Wang, Siyan Zhao, Brandon Amos, Guanhang Wu, Kiryong Ha, Khalid Elgazzar, Padmanabhan Pillai, Roberta Klatzky, Daniel Siewiorek, and Mahadev Satyanarayanan. An empirical study of latency in an emerging class of edge computing applications for wearable cognitive assistance. In _Proceedings of the Second ACM/IEEE Symposium on Edge Computing_ , SEC ’17, New York, NY, USA, 2017. Association for Computing Machinery. ISBN 9781450350877. doi: 10.1145/3132211.3134458. URL https://doi.org/10.1145/3132211.3134458. * Satyanarayanan et al. [2019b] Mahadev Satyanarayanan, Wei Gao, and Brandon Lucia. The computing landscape of the 21st century. In _Proceedings of the 20th International Workshop on Mobile Computing Systems and Applications_ , HotMobile ’19, page 45–50, New York, NY, USA, 2019b. Association for Computing Machinery. ISBN 9781450362733. doi: 10.1145/3301293.3302357. URL https://doi-org.ezp3.lib.umn.edu/10.1145/3301293.3302357. * Wang et al. [2019] Junjue Wang, Ziqiang Feng, Shilpa George, Roger Iyengar, Padmanabhan Pillai, and Mahadev Satyanarayanan. Towards scalable edge-native applications. In _Proceedings of the 4th ACM/IEEE Symposium on Edge Computing_ , SEC ’19, page 152–165, New York, NY, USA, 2019. Association for Computing Machinery. ISBN 9781450367332. doi: 10.1145/3318216.3363308. URL https://doi.org/10.1145/3318216.3363308. * Amazon [2021a] Amazon. Aws local zones, 2021a. URL https://aws.amazon.com/about-aws/global-infrastructure/localzones/. * Amazon [2021b] Amazon. Aws wavelength, 2021b. URL https://aws.amazon.com/wavelength/. * Microsoft [2021] Microsoft. Azure edge zones, 2021. URL https://azure.microsoft.com/en-us/solutions/low-latency-edge-computing/. * Google [2021] Google. Global mobile edge cloud, 2021. URL https://cloud.google.com/blog/topics/inside-google-cloud/google-cloud-unveils-strategy-telecommunications-industry. * Mutable [2021] Mutable. Mutable, 2021. URL https://mutable.io/. * EDJX [2021] EDJX. Edjx, 2021. URL https://edjx.io/. * MobiledgeX [2021] MobiledgeX. Mobiledgex, 2021. URL https://mobiledgex.com/. * Şenel et al. [2021] Berat Can Şenel, Maxime Mouchet, Justin Cappos, Olivier Fourmaux, Timur Friedman, and Rick McGeer. Edgenet: A multi-tenant and multi-provider edge cloud. In _Proceedings of the 4th International Workshop on Edge Systems, Analytics and Networking_ , EdgeSys ’21, page 49–54, New York, NY, USA, 2021. Association for Computing Machinery. ISBN 9781450382915. doi: 10.1145/3434770.3459737. URL https://doi-org.ezp3.lib.umn.edu/10.1145/3434770.3459737. * Sreekumar et al. 
[2020] Nikhil Sreekumar, Abhishek Chandra, and Jon Weissman. Position paper: Towards a robust edge-native storage system. In _2020 IEEE/ACM Symposium on Edge Computing (SEC)_ , pages 285–292. IEEE, 2020. * Balkić et al. [2012] Zoran Balkić, Damir Šoštarić, and Goran Horvat. Geohash and uuid identifier for multi-agent systems. In _KES International Symposium on Agent and Multi-Agent Systems: Technologies and Applications_ , pages 290–298. Springer, 2012. * Zheng et al. [2018] Chao Zheng, Lukas Rupprecht, Vasily Tarasov, Douglas Thain, Mohamed Mohamed, Dimitrios Skourtis, Amit S Warke, and Dean Hildebrand. Wharf: Sharing docker images in a distributed file system. In _Proceedings of the ACM Symposium on Cloud Computing_ , pages 174–185, 2018. * Fu et al. [2020] Silvery Fu, Radhika Mittal, Lei Zhang, and Sylvia Ratnasamy. Fast and efficient container startup at the edge via dependency scheduling. In _3rd $\\{$USENIX$\\}$ Workshop on Hot Topics in Edge Computing (HotEdge 20)_, 2020. * Kagami [2021] Kagami. go-face, 2021. URL https://github.com/Kagami/go-face. * Technologies [2021] Apposite Technologies. Netropy emulator, 2021. URL https://www.apposite-tech.com/products/netropy/. * Kubernetes [2021] Kubernetes. Kubernetes: Affinity and anti-affinity, 2021. URL https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity. * K3s [2021] K3s. K3s: Lightweight kubernetes, 2021. URL http://k3s.io. * Brewer [2015] Eric A Brewer. Kubernetes and the path to cloud native. In _Proceedings of the sixth ACM symposium on cloud computing_ , pages 167–167, 2015. * Learned-Miller [2014] Gary B. Huang Erik Learned-Miller. Labeled faces in the wild: Updates and new reporting procedures. Technical Report UM-CS-2014-003, University of Massachusetts, Amherst, May 2014. * Anderson [2020] David P Anderson. Boinc: a platform for volunteer computing. _Journal of Grid Computing_ , 18(1):99–122, 2020\. * Pouwelse et al. [2005] Johan Pouwelse, Paweł Garbacki, Dick Epema, and Henk Sips. The bittorrent p2p file-sharing system: Measurements and analysis. In _International Workshop on Peer-to-Peer Systems_ , pages 205–216. Springer, 2005. * Mengistu and Che [2019] Tessema M Mengistu and Dunren Che. Survey and taxonomy of volunteer computing. _ACM Computing Surveys (CSUR)_ , 52(3):1–35, 2019\. * Ryden et al. [2014] Mathew Ryden, Kwangsung Oh, Abhishek Chandra, and Jon Weissman. Nebula: Distributed edge cloud for data intensive computing. In _2014 IEEE International Conference on Cloud Engineering_ , pages 57–66. IEEE, 2014. * McGilvary et al. [2015] Gary A McGilvary, Adam Barker, and Malcolm Atkinson. Ad hoc cloud computing. In _2015 IEEE 8th International Conference on Cloud Computing_ , pages 1063–1068. IEEE, 2015. * Mengistu et al. [2018] Tessema M Mengistu, Abdulrahman M Alahmadi, Yousef Alsenani, Abdullah Albuali, and Dunren Che. cucloud: Volunteer computing as a service (vcaas) system. In _International Conference on Cloud Computing_ , pages 251–264. Springer, 2018. * Carson et al. [2019] Kyle Carson, John Thomason, Rich Wolski, Chandra Krintz, and Markus Mock. Mandrake: Implementing durability for edge clouds. In _2019 IEEE International Conference on Edge Computing (EDGE)_ , pages 95–101. IEEE, 2019. * Costa et al. [2012] Fernando Costa, Joao Nuno Silva, Luís Veiga, and Paulo Ferreira. Large-scale volunteer computing over the internet. _Journal of Internet Services and Applications_ , 3(3):329–346, 2012. * Xiong et al. [2018] Ying Xiong, Yulin Sun, Li Xing, and Ying Huang. 
Extend cloud to edge with kubeedge. In _2018 IEEE/ACM Symposium on Edge Computing (SEC)_ , pages 373–377. IEEE, 2018. * Baccarelli et al. [2017] Enzo Baccarelli, Paola G Vinueza Naranjo, Michele Scarpiniti, Mohammad Shojafar, and Jemal H Abawajy. Fog of everything: Energy-efficient networked computing architectures, research challenges, and a case study. _IEEE access_ , 5:9882–9910, 2017. * Naranjo et al. [2018] Paola G Vinueza Naranjo, Enzo Baccarelli, and Michele Scarpiniti. Design and energy-efficient resource management of virtualized networked fog architectures for the real-time support of iot applications. _The journal of Supercomputing_ , 74(6):2470–2507, 2018. * Mortazavi et al. [2017] Seyed Hossein Mortazavi, Mohammad Salehe, Carolina Simoes Gomes, Caleb Phillips, and Eyal de Lara. Cloudpath: A multi-tier cloud computing framework. In _Proceedings of the Second ACM/IEEE Symposium on Edge Computing_ , pages 1–13, 2017. * Mortazavi et al. [2018] Seyed Hossein Mortazavi, Bharath Balasubramanian, Eyal de Lara, and Shankaranarayanan Puzhavakath Narayanan. Pathstore, a data storage layer for the edge. In _Proceedings of the 16th Annual International Conference on Mobile Systems, Applications, and Services_ , pages 519–519, 2018. * Mortazavi et al. [2020] Seyed Hossein Mortazavi, Mohammad Salehe, Bharath Balasubramanian, Eyal de Lara, and Shankaranarayanan PuzhavakathNarayanan. Sessionstore: A session-aware datastore for the edge. In _2020 IEEE 4th International Conference on Fog and Edge Computing (ICFEC)_ , pages 59–68. IEEE, 2020. * Lakshman and Malik [2010] Avinash Lakshman and Prashant Malik. Cassandra: a decentralized structured storage system. _ACM SIGOPS Operating Systems Review_ , 44(2):35–40, 2010. * Gupta et al. [2018] Harshit Gupta, Zhuangdi Xu, and Umakishore Ramachandran. Datafog: Towards a holistic data management platform for the iot age at the network edge. In _$\\{$ USENIX$\\}$ Workshop on Hot Topics in Edge Computing (HotEdge 18)_, 2018. * Mayer et al. [2017] Ruben Mayer, Harshit Gupta, Enrique Saurez, and Umakishore Ramachandran. Fogstore: Toward a distributed data store for fog computing. In _2017 IEEE Fog World Congress (FWC)_ , pages 1–6. IEEE, 2017.
# Against the “nightmare of a mechanically determined universe”: Why Bohm was never a Bohmian Flavio Del Santo Group of Applied Physics, University of Geneva, 1211 Geneva, Switzerland; and Constructor University, Geneva, Switzerland Gerd Christian Krizek Department Applied Mathematics and Physics, University of Applied Sciences Technikum Wien, 1200 Vienna, Austria ###### Abstract David Bohm has put forward the first deterministic interpretation of quantum physics, and for this he seems to be regarded as a champion of determinism by physicists (both his contemporaries and the supporters of his interpretation, the so-called “Bohmians”) as well as by historians of physics. The standard narrative is that he underwent a “conversion” from being a supporter of Bohr to being a staunch determinist, due to his interaction with Einstein and his commitment to Marxism. Here we show that Bohm actually upheld with continuity throughout his career some philosophical tenets that included a strong rejection of mechanistic determinism. As such, we conclude that Bohm was never a Bohmian and that his philosophical views have been largely misinterpreted. > _“Why on earth are they calling it Bohmian mechanics? Haven’t they read a > word I have written?!”_ > > David Bohm (reported by Basil Hiley) ## 1 Introduction David Bohm (1917-1992) went down in history as the physicist who achieved the impossible by providing an alternative deterministic interpretation of quantum mechanics [1, 2].111 Bohm himself referred to his interpretation as “alternative interpretation”[1, 2, 3], as “causal interpretation”[4, 5], and as “quantum potential interpretation”. In the literature it is referred to as “Ontological interpretation” [6, 7], “De Broglie-Bohm causal interpretation”[8], or “De Broglie-Bohm Pilot-Wave Theory”, “Bohmian Mechanics” [9, 10], or “Bohm theory” [11, 12]. The variety of terminologies reflects different stances and views of Bohm’s collaborators and successors which deviate in some cases substantially from Bohm’s own ideas and whose discussion would go beyond the scope of this work. Acclaimed or blamed therefore as a champion of determinism, he was (and still is) regarded by many as a cure against the claims of the Copenhagen school that quantum mechanics necessarily requires a completely novel way of looking at the world. According to this narrative, Bohm restored the seemingly lost comfort of mechanistic determinism, which had characterized physics for centuries, and his work seems therefore animated by a certain intellectual conservatism (see, e.g., [13]). Here, we show that it was far from his intention to try to go back to an old pre-quantum paradigm. Bohm’s views on philosophy of physics have instead been explicitly aimed, with continuity throughout his whole career, at demolishing certain established views that he perceived as limiting and dogmatic. As we shall see, one of these was the concept of mechanism, a form of reductionism which Bohm regarded as the > assumption that the great diversity of things that appear in all of our > experience, every day as well as scientific, can all be reduced completely > and perfectly to nothing more than consequences of the operation of an > absolute and final set of purely quantitative laws determining the behaviour > of a few kinds of basic entities or variables. ([3], p. 37). In this effort, Laplacian determinism was regarded by Bohm as the first and foremost expression of mechanism, and he thus searched for alternatives throughout his whole life. 
As noted by Nobel laureate Roger Penrose, “there can be few physicists who have delved into the philosophical implications of their subject as has David Bohm” [14]. It is indeed possible to identify at least three fundamental tenets in David Bohm’s philosophy of physics, namely: (i) realism, (ii) causality, and (iii) anti-mechanism. Here we will not deal with Bohm’s realism, which has already been the subject of numerous studies, and it is undisputed that Bohm was committed to (some form of) realism (see, e.g., [15, 16, 17], and references therein). On the other hand, we will focus on the latter two tenets, which have been astonishingly misunderstood in most of the vast literature devoted to Bohm’s thought and his intellectual legacy. In particular, the term causality has commonly been assumed to be a synonym of determinism, a mistake unfortunately still present in the literature in both physics and philosophy to date. Furthermore, Bohm always opposed mechanism, which, we stress again, has its most striking example (but not the only one) in determinism. It is the main aim of this paper to clarify some of Bohm’s original philosophical stances by demolishing certain established misconceptions around his commitment to determinism, which, we cannot emphasize enough, was never present in his thought. It is a peculiar case that a scholar to whom so many historical and philosophical studies have been devoted has been so misrepresented. Bohm’s sustained rejection of determinism was only partly acknowledged in [18], and new important evidence was made available thanks to the publication of a collection of letters in [19]. Moreover, one of us (F.D.S.) already pointed out in [15] that Bohm’s commitment to determinism was secondary to his commitment to realism. The same thesis was then put forward in [16]. Here, we show that Bohm’s position was more radical than this: not only was determinism not his philosophical priority, but he actually always opposed it.

In section 2, we will recollect the standard narrative about Bohm’s ideas. Albeit with some variations, there indeed seems to be a consensus that Bohm’s main philosophical concern was to retrieve determinism in modern physics (at least at a certain stage of his working life). We will strongly counter, in section 3, this standard narrative with a more accurate account of the actual philosophical views of David Bohm, focusing on his take on causality and (non)determinism. We will show that one of Bohm’s main commitments was always anti-mechanism, a position that he had understood very early to be incompatible with determinism. This is what actually led him to initially (partly) support the indeterministic doctrine of Copenhagen, which, however, he abandoned when he realized that randomness is another, for him unacceptable, form of mechanism. Hence, his commitment to determinism—stemming from his celebrated alternative interpretation—is only ostensible. Bohm’s anti-mechanistic position led him to develop a dialectic philosophical view of an unlimited number of levels of description of reality that can be neither deterministic nor fully random, but still allow either of these descriptions to exist at different levels. We will here mainly focus on the period of the 1950s, because it is in that decade that Bohm allegedly underwent a change from being a supporter of Bohr to becoming a determinist, and then supposedly abandoned this debate altogether as his commitment to Marxism faded away.
To avoid further misinterpretations on our part, we will favor quoting as much as possible from Bohm’s original writings rather than presenting our own summaries and analyses. Moreover, in the interest of conciseness, but without the risk of decontextualizing the quotations, we will provide more extended excerpts in the form of appendices, where the interested reader can find further evidence in support of the thesis put forward in the main text. We hope that letting Bohm speak for himself will finally bring justice to some aspects of his complex and original way of conceiving physics.

## 2 The standard narrative: Bohm’s alleged commitment to determinism

After World War II, the practice of physics underwent a drastic change. The foundational debate that had characterized the early days of quantum physics gave way to a pragmatic approach, the so-called “shut up and calculate”, oriented towards applications often of a military nature [20]; the debate over the interpretation of the quantum formalism seemed to be settled for good. It was only a handful of physicists (and a few philosophers) scattered all over the world who started reviving the uneasiness towards the orthodox interpretation proposed by the school of Copenhagen (see Refs. [20, 21, 22, 23, 24]). Among them, David Bohm was a link between the old generation of critics—such as Albert Einstein, who played an active role in his intellectual life, Erwin Schrödinger, or (the early) Louis de Broglie—and the new underground culture concerned with quantum foundations to come.

After completing his PhD with Robert Oppenheimer at Berkeley in the 1940s and a post at the Institute for Advanced Study in Princeton, in 1951 Bohm fell victim to the witch-hunt of McCarthyism because of his adherence to Marxism; this led him to a life of exile: first to Brazil, then to Israel, and finally to the UK, where he spent the rest of his life (see [25, 17] for biographies of Bohm). Although his research in the group of Oppenheimer was mainly about plasma physics, it is there that Bohm started getting interested in foundational problems of quantum theory, as he later recalled: “When I went to work with J. Robert Oppenheimer, I found a more congenial spirit in his group. For example, I was introduced to the work of Niels Bohr and this stimulated my interest, especially in the whole question of the oneness of the observer and the observed.” (cited in [17], p. 1. See also [25], Ch. 4).

Bohr, together with Werner Heisenberg and others, was not only among the founding fathers of quantum theory but also the initiator of the so-called Copenhagen interpretation thereof. The latter maintains that quantum mechanics necessarily leads to abandoning certain fundamental precepts of classical physics, among which determinism, and instead to embracing the genuinely probabilistic nature of quantum phenomena. Bohm went so deep in his reflections about quantum theory and its foundations that, in 1951, he published the textbook Quantum Theory [26], fully in the spirit of the Copenhagen interpretation. Shortly after the publication, indeed, Bohm himself stated about his book: “a clear presentation of Bohr’s point of view (the first clear, if I may boast a little).” (Letter from Bohm to Miriam Yevick; Letter 66, Folder C117, January 23, 1952. In [19], p. 235.) However, in the very same year, on July 5th, Bohm submitted a seminal work (published in two parts [1, 2]) wherein he presented the first consistent alternative interpretation of the quantum formalism.
He introduced the initial position of quantum particles as a “hidden variable” that, if known, would lead to deterministic trajectories similar to the familiar ones of classical mechanics (but guided by a genuinely additional quantum part in the potential). So far, these are mere historical facts. Based on these, however, a standard narrative about David Bohm has crystallized, which can be summarized as follows: in the span of around a year, Bohm underwent a dramatic shift in his philosophical agenda, moving one of his tenets from indeterminism to determinism. This narrative is not only popularized among physicists in the sort of working history that hovers in the community, but has been advocated by most historians, too. This is however not surprising, since admittedly it prima facie seems a rational account of the facts.

A more thorough historical reconstruction, proposed among other works in the recent comprehensive biography of Bohm by Olival Freire Jr. [17], tells a more nuanced story. First of all, it points out that already in his 1951 book [26] Bohm had placed some hints of his uneasiness with Copenhagen, such as endorsing ontological realistic assumptions (see [17], pp. 48-51). Moreover, historians tend to add a third phase in which Bohm supposedly distanced himself again from determinism at the end of the 1950s, concurrently with his dropping of Marxism. This double shift, also in relation to Marxism, was strongly emphasized already by Pylkkänen [27], and also Freire, although more cautiously, endorses a similar position: “Indeed, the connection between the break with Marxism and abandonment of determinism in science, particularly in physics, and not only in society, in Bohm’s thoughts is just a guess, albeit a plausible one.” ([17], p. 123). At any rate, the main point of the standard narrative is essentially present also in these more informed accounts.

The historical question that naturally arises then is: why did Bohm go through such a drastic and abrupt change from an adherent of the school of Copenhagen, i.e. a doctrine explicitly advocating the failure of determinism, to a novel deterministic interpretation? (And, possibly, why did he give up determinism again a few years later?) That is, what caused the sudden “conversion” of Bohm from an open supporter of indeterminism to a staunch determinist (and perhaps back)? Numerous studies have tried to answer this question ([28, 25, 18, 27, 29, 19, 17]), apparently quite successfully, despite a few minor details that are still the subject of historical debate. But what if the question was the wrong one in the first place? What if determinism had never been a desideratum for Bohm and, rather, this change was not about his worldview but simply reflected different phases of Bohm’s experimentation in his attempt to achieve a physical theory that would satisfy his main philosophical tenets? In section 3, we will, in fact, defend this thesis, that is, that Bohm always upheld an anti-mechanistic view that was clearly incompatible with determinism alone. Before doing that, in the remainder of this section, we will continue summarizing the standard narrative, or rather, its reply to the main question it poses.

There is an almost absolute consensus on the fact that the two elements that played the major role in Bohm’s turn towards determinism have been, on the one hand, his encounter with Einstein, and, on the other, his Marxist views.
This twofold explanation is by now well-established among historians, who mostly debate about the extent of one or the other influences (possibly, concurrently with Bohm’s political prosecution; see [29]). This reconstruction was already put forward by the illustrious historian and philosopher of physics Max Jammer, according to a late recollection of Bohm himself: > Stimulated by his discussion with Einstein and influenced by an essay which, > as he told the present author, was “written in English” and “probably by > Blokhintsev or some other Russian theorist like Terletzkii,” and which > criticized Bohr’s approach, Bohm began to study the possibility of > introducing hidden variables. ([28] p. 279)222Note however, that there is a > controversy about the value of this statement because there were no English > translations available of either Blokhintsev’s or some other Terletzkii’s > works at the time of Bohm’s “conversion”. See [17], Section 3.4.2. It is indeed well-known that Einstein had opposed Bohr’s views since the early days of quantum theory and his attempt to maintain determinism, summarized by the motto “God does not play dice”, has entered the popular culture. However, while Einstein was invariably troubled by the abandonment of realism (and possibly of locality and localizability) implied by Bohr and his school, there are quite incontrovertible evidences that determinism was not Einstein’s main philosophical concern [15], and even less so in his late years. Actually, in 1953, in a letter to his friend Max Born, he stated: “I have written a little nursery song about physics, which has startled Bohm and de Broglie a little. It is meant to demonstrate the indispensability of your statistical interpretation of quantum mechanics […] This may well have been so contrived by that same ‘non-dice-playing God’ who has caused so much bitter resentment against me, not only amongst the quantum theoreticians but also among the faithful of the Church of the Atheists” (Einstein, A. to Born, M, 12 Oct 1953 [30]). In the light of this, we can conjecture that the impact that Einstein had on Bohm at the time of their encounter at Princeton in the early 1950s, was probably that of casting doubt on the Copenhagen interpretation, and suggesting that one could search for an alternative. However, it does not seem likely that he directly pushed Bohm towards determinism, let alone hidden variable that he never supported (see [15]). As for whether and to what extent Marxism has been a guiding principle for Bohm in developing his deterministic hidden variable interpretation, the question is subtler. This has been considered in detail by Forstner [31, 29], and partly by Peat [25], Freire [17], and Talbot [19]. Bohm surely agreed with the ontology supported by Marx and Engels, namely, a materialistic philosophy (or naturalism) which “says that the sole reality is the natural world, and this world is made up solely of matter” and “material things are not dependent for their existence or nature on any mind or minds”, thus implying realism (from A. W. Wood, cited in [19], p. 24). Moreover Marx and Engels put together this materialistic view and the dialectic of Hegel, which turned into the main guiding philosophy of Marxism, i.e., dialectical materialism. While dialectical materialism applied in a scientific context deals primarily with the nature of the world, it is in the Marxist analysis of the progress of history and society, historical materialism, that one finds determinism as a main characteristic. 
In fact, for Marx it is the mode of production and the struggle between social classes that necessarily determine historical change. As explained by Freire [17], it is objectively difficult to know to which Marxist writings Bohm had access, and therefore which parts of that philosophy had a concrete impact on his scientific and philosophical views. However, we will see in section 3 that it is the dialectic aspect of Marxism (and partly the materialist one, as far as realism is concerned) that seems to have played the major role in the views about philosophy of science that guided Bohm, rather than the deterministic character of historical materialism. As a matter of fact, Bohm was already a Marxist when he published his book [26] in which he endorsed the view of Bohr, so it does not seem to make sense to attribute his alleged conversion towards determinism to his adherence to Marxism. We will show, on the contrary, that his interest in Bohr actually stemmed, at least partly, from Marxism. This should be regarded as Bohm’s first attempt to get away from a mechanistic philosophy in a dialectic (i.e. Marxist) spirit.

Historians are not the only ones who have misconceived Bohm’s point of view. The idea that Bohm’s first and foremost concern was that of restoring determinism at any cost was surely always widespread among physicists too. Starting with the contemporaries who were supportive of him—like Einstein, Louis de Broglie, and several Marxist physicists, in particular Jean-Pierre Vigier—and closely followed by his critics, they all emphasized Bohm’s commitment to determinism: the former as a merit and the latter as an untenable conservative attitude (see [17], Chapters 4.2-4.5, for the early reactions to Bohm’s hidden variable model).333Incidentally, it should be recalled that Bohm’s interpretation did not receive the praise that he expected and that he might have deserved. Even Einstein, who supported Bohm in his career and considered him a very talented physicist, stated that Bohm’s way of restoring determinism “seems too cheap” (see [15]). There are several hypotheses about why this has been the case, related to the Zeitgeist of post-war physics, Bohm’s political views, the authority of the Copenhagen school, etc. (See [21, 17, 25, 13]). It was only in more recent years that the so-called Bohmian mechanics found new momentum in a sub-community of scholars interested in the foundations of quantum physics (see [9, 32, 10]). Bohm’s close collaborators also rediscovered Bohm’s original interpretation and encouraged further works closer to Bohm’s non-mechanistic ideas (see [33], [34], [6]). As a matter of fact, due to his hidden variable model, Bohm started being regarded as a staunch determinist.

## 3 An alternative narrative: Bohm against mechanistic determinism

### 3.1 Indeterminism in Bohm’s book Quantum Theory (1951) and beyond

As we have previously recalled, the first work of Bohm in which he manifestly deals with foundational questions is his 1951 book on quantum theory [26]. It is generally known, as we have discussed, that this book takes an approach close to the orthodox view of Copenhagen. Note that in doing so, Bohm was not blindly following the mainstream; rather, he was actively looking for ways to provide quantum mechanics with solid and understandable physical foundations, against the widespread pragmatic acceptance of an uninterpreted abstract formalism.
He therefore saw in the thought of Bohr an attractive philosophy because it was provided with two main features: the principle of complementarity, and irreducible probability (i.e. nondeterminism). In the former he saw elements of dialectics, which we claim was Bohm’s main influence from Marxism. In fact, this is a first attempt, that Bohm was to develop in greater detail in the following years (see below), to apply the ideas of Engels who, in his Dialectics of Nature, “is especially opposed to attempts at mechanical reductionism” [19]. In the context of quantum physics, this is the fact that it is the interaction between two qualitatively different descriptions (the classical and the quantum ones) to determine reality, forming something qualitatively new not according to necessity. This also satisfied Bohm’s antireductionist convictions because the classical world ought to lie outside of the quantum domain as a primitive and cannot be in general fully reduced to a quantum description. As for the acceptance of objective chance (i.e., potentialities), he saw in this the most natural possibility to abandoning the view of mechanistic determinism. Later Bohm abandoned this approach, but he remained sympathetic to potentialities (see section 3.5). In a letter to at that time his girlfriend Hanna Loewy, presumably in 1950, Bohm explicitly clarified his motivations for having taken a Bohrian approach in his book: > I just got another idea on the quantum theory also. It is based on the fact > that at the microscopic level, the quantum theory deals only with > potentialities. For example, the quantum theory describes the probability > that an electron can realise its potentiality for a given position. But to > realise this potentiality, it must interact with some large scale > (classical) system, such as an apparatus which measures position. It is only > at the large scale that definite and well-defined events can exist. […] > Thus, the quantum theory presupposes the validity of classical concepts at > the classical level. This means that one does not deduce the classical > theory from the quantum theory, but that the two work together to describe > the whole system. This is in contrast to most theories in physics, in which > we analyse all large scale phenomena in terms of the small scale components. > Here, we see that at the large scale level, new (classical) phenomena > appear, which are not contained logically in the small scale phenomena > alone. In other words, the behaviour of the whole system cannot be reduced > to a description of the relationship of all its parts, since, new properties > appear in a large aggregate, not contained at all in the behaviour of the > microscopic systems. (Letter from Bohm to Hanna Loewy; Letter 1. Folder C37, > not dated. [February-May, 1950?], [19], p. 99). Moreover, soon after the publication of the book, he explained to his friend, the mathematician Miriam Yevick, why he got interested in Bohr: > All I knew was that there was one school, which utterly repelled me, in > which one was supposed to introduce abstract mathematical postulates, and be > satisfied if the calculations agreed with experiment. Against this, Bohr’s > school seemed to be a big improvement, because at least he tried to explain > the physical meaning of the theory. Moreover, there was an element of > dialectics in Bohr’s point of view which attracted me. 
> It seemed progressive because it broke the old mechanist materialist determinism, which left no room for growth and development of something new. (Bohm to Miriam Yevick; Letter 65. Folder C117, dated: Jan 7, 1952, [19], p. 227; extended quotation in Appendix 4.3).

Note that at the time when he wrote this letter, Bohm was a staunch Marxist and, most remarkably, had already completed his work on deterministic hidden variables, and yet he was evidently criticizing mechanistic materialist determinism.

As far as its content is concerned, Bohm's book is an excellent technical manual of quantum mechanics and, although it endorses the view of the Copenhagen school, it is already possible to pin down where the main philosophical concerns of its author lie: causality is already his main focus, together with his rejection of mechanism. However, at this stage, he explicitly endorses indeterminism as a way out of mechanism, a view that was soon to change when he realised that indeterminism, too, can be mechanistic.

We have recalled in the previous section that Freire [17] already noticed that a first element that distances Bohm from the Copenhagen school is that in his 1951 book he looks for a realist account of nature. Another main difference with Copenhagen becomes manifest with regard to causality. While for Heisenberg "quantum mechanics proves the invalidity of the law of causality" [35] [Footnote 4: The original German phrase reads: "so wird durch die Quantenmechanik die Ungültigkeit des Kausalgesetzes".], for Bohm causality was an absolutely indispensable tenet. However, he makes it very clear in his book that while maintaining causality he wants to escape determinism. Hence, a first major distinction, surely not well understood at that time (and alas not even today in most physics circles), is the conceptual difference between causality and determinism. This is also at the center of misunderstandings in the historical literature when referring to Bohm's later views, for instance in Freire's words: "Soon both David Bohm and his critics were using "causal interpretation" to label his approach to quantum theory, clarifying Bohm's ambition to restore a kind of determinism analogous to that of classical mechanics." ([17], p. 63).

In his 1951 book, Bohm actually advocates a causally non-deterministic nature of physical laws, in terms of tendencies (as we will see later, this is closely related to Popper's view in terms of propensities; see section 3.5):

> we wish to call attention to the fact that, even in very early times, two alternative general types of causal laws appeared. One of these involved the notion of complete determinism; the other involved the notion of causes as determining general tendencies but not determining the behavior of a system completely. ([26], Ch. 8, Sect. "Completely Deterministic vs. Causal Laws as Tendencies.")

Bohm goes as far as to brilliantly show that actually the determinism of classical physics makes the concept of causality redundant:

> It is a curiously ironical development of history that, at the moment causal laws obtained an exact expression in the form of Newton's equations of motion, the idea of forces as causes of events became unnecessary and almost meaningless. The latter idea lost so much of its significance because both the past and the future of the entire system are determined completely by the equations of motion of all the particles, coupled with their positions and velocities at any one instant of time.
Thus, we can no more say that the > future is caused by the past than we can say that the past is caused by the > future. […] > > Thus, classical theory leads to a point of view that is prescriptive and not > causal. ([26], Ch. 8, Sect. “Classical Theory Prescriptive and not Causal”.) Hence, he saw a way out of the effective lack of causality in a completely deterministic theory in terms of the tendencies or potentialities entailed by (the Copenhagen interpretation of) quantum physics: > With the advent of quantum theory, the idea of complete determinism was > shown to be wrong and was replaced by the idea that causes determine only a > statistical trend, so that a given cause must be thought of as producing > only a tendency toward an effect. […] ([26], Ch. 8, Sect. “New Properties of > Quantum Concepts : Approximate and Statistical Causality”.) > > Thus, in terms of our new concept, matter should be regarded as having > potentialities for developing either comparatively well-defined causal > relationships between comparatively poorly defined events or comparatively > poorly defined causal relationships between comparatively well-defined > events, but not both together. ([26], Ch. 8, Sect. “Relation between Space > Time and Causal Aspects of Matter”.) We have thus seen why Bohm became aligned with Bohr in the first place, namely, to find a suitable alternative to mechanistic determinism that precluded a sensible concept of causality, which was for Bohm a crucial assumption for a physical theory. However, he soon realized that Bohr’s philosophy was not as satisfactorily as he previously had sensed because it indeed contained a dialectical approach but not as much of materialism as he would have wanted: > After I had written the book, I finally began to grasp the full meaning of > the theory, and could see that it leads inevitably to a form of > (dialectical) idealism. But this was not so clear when I started, because of > the general confusion in the literature. (Bohm to Miriam Yevick; Letter 65. > Folder C117, dated: Jan 7, 1952, [19], p. 227); extended quotation in > Appendix 4.3). And again: > I notice that you call me “a disciple of Einstein”. This is not very > accurate. Actually I was a strong “Bohrian” and wrote my book under the > assumption (later proved wrong) that the principle of Complementarity was a > materialist point of view. It certainly is very dialectical, but I did not > see at that time that it is not materialist. After writing my book, I sent a > copy to Einstein. He called me up asking to discuss the book, especially the > Section on the paradox of EPR, which he liked very much. He thought I gave > Bohr’s point of view the most convincingly possible presentation, but he > still refused to accept it. He then argued for some time, and he ended up > convincing me that his objections were not answered. I thought about it for > a while, becoming more convinced all the time that he was right. Finally I > decided to look for a causal interpretation within few weeks, I hit upon the > idea which I published, not knowing about de Broglie’s work until later. It > took me 10 hours of work, distributed over 2 months to convince Einstein > that it made sense, but he actually never liked it. He only thought it was > good to propose it to break out the present stagnant situation in physics. > (Bohm to Schatzman; Letter A1.15. 
September 7, 1952, [23], p. 335)

### 3.2 Against determinism, despite hidden variables (1952)

In exactly the same period when his book [26] was appearing, Bohm was formulating his alternative, deterministic interpretation in terms of hidden variables. Given his clear motivation recalled in the previous section, why did he do that? Bohm must have found himself in a strange position when he managed to conceive a consistent model based on hidden variables that restored determinism. He clearly wanted to prove something that was considered impossible by the founding fathers of the theory, in particular John von Neumann, who had allegedly proven that a hidden variable completion of quantum mechanics was in principle impossible. [Footnote 5: On the history of von Neumann's impossibility proof, see [36].] Moreover, Bohm wanted to prove that Bohr and Heisenberg's view was not necessarily the ultimate description of reality. It should be stressed that at that time no other interpretation of quantum physics was known besides (slightly different understandings of) the Copenhagen one, so, probably stimulated by his novel awareness of the limits of Bohr's interpretation and by the discussions with Einstein, he explicitly looked for an alternative interpretation. According to Hiley, indeed, Bohm "was not a deterministic man, he used causality. […] He was not bound to it [determinism]. David Bohm always used to say to me: 'I am making a proposal'. So, all this people think he had rigid views. He didn't have rigid views. He was always making proposals, because he thought he never fully got to the bottom of quantum mechanics." [37].

In fact, although Bohm stresses in his papers that the "'hidden' variables determine the precise results of each individual measurement process" [2], repeatedly acknowledging very clearly the deterministic character of his model, he certainly never adopted a fundamental ontology merely made of particles plus their deterministic dynamics guided by the wave function. This is something that his followers, the so-called Bohmians (see footnote 1), have instead assumed, namely, considering Bohm's proposal as the ultimate description of reality, much against the view of Bohm himself. In fact, the germ of Bohm's way out of the mechanical determinism entailed by his proposal (see further) is already expressed, although quite subtly, in the conclusion of his second paper on hidden variables [2], when he states:

> This hypothesis is based on the simple assumption that the world as a whole is objectively real and that, as far as we now know, it can correctly be regarded as having a precisely describable and analyzable structure of unlimited complexity. The pattern of this structure seems to be reflected completely but indirectly at every level […]. We should never expect to obtain a complete theory of this structure, because there are almost certainly more elements in existence than we possibly can be aware of at any particular stage of scientific development. Any specified element, however, can in principle ultimately be discovered, but never all of them.

Indeed, at least since 1951, most likely when he was still in Princeton (see [19], footnote 48, p. 31), Bohm started developing a new philosophy based on the concept of having different levels of description, each of which can be either deterministic or indeterministic, but each of them giving only a partial account of reality.
His ontology was thus made up of the wholeness of the different levels of qualitatively different entities. However, he postulated the number of levels to be infinite, thereby making it fundamentally impossible to have mechanism, and in particular determinism:

> Because of the existence of an infinite number of levels, the deterministic laws of order at each level probably follow only as a result of conditions of chaos existing at lower levels. If the lower-level conditions of chaos could be altered, then the very framework of description of the higher level laws would also have to be altered. Thus, we are led to a more dynamic concept of the laws of nature; for because of their infinite complexity, richness, and depth, the applicability even of certain very general forms of laws at a particular level may depend on conditions at other levels, which are in principle subject to our prediction and control. This experience should ultimately be repeated at any given level, however deep, as our knowledge is extended. (Bohm to Miriam Yevick; Letter 58. Folder C116, dated: Nov 23 [1951], [19], p. 205)

Note that this idea, while it kept being refined, remained essentially unchanged throughout Bohm's transition from the period of his 1951 book to his hidden variable proposal, and reached its main expression in the book Causality and Chance [3], published in 1957 (see section 3.4). For instance, after he had already completed his hidden variable interpretation, he wrote to Yevick:

> The "things" at each level, are made up of smaller "elements" at a more fundamental level, and it is the motion of these more fundamental elements (not usually directly visible to us, except with the aid of elaborate scientific research) which causes the appearance and disappearance of the "things" existing at a higher level. These more fundamental "elements" however, cannot be permanent, but must be made up of still more fundamental "elements" and so on ad infinitum. (Bohm to Miriam Yevick; Letter 65. Folder C117, dated: Jan 7, 1952, [19], p. 227; extended quotation in Appendix 4.1)

Bohm also points out his position on the need for infinite levels to his collaborator Schatzman in a letter from 1952:

> It is most likely that not even the substratum particles could be indestructible and unanalysable. Instead, there is probably another substratum below this (of a qualitatively different kind most probably) and so on ad infinitum. Thus, we should have an infinite series of qualitatively different levels of laws. Any finite number of levels can always be understood by humanity, but never all of them. ([23], p. 351; extended quotation in Appendix 4.2)

And soon after his letter to Miriam Yevick in January, he wrote what is one of the most important quotations from the whole collection of known writings of David Bohm, because it unambiguously states that he could not accept mechanistic determinism, even in the period when he was promoting his hidden variable model:

> Most of the errors of both the positivist and the 19th century "mechanical" materialists spring from an implicit assumption that the laws of nature will some day finally be understood in terms of a limited number of hypotheses. From this comes the nightmare of a mechanically determined universe that follows an inevitable course. To avoid this nightmare, positivists and idealists have given up causality and assumed a "spontaneous" (i.e., uncaused) element in physical processes.
> The concept of a limitless number of levels […] provides a motive power for continual development & growth. Moreover, the nightmare of complete determinism is avoided. Although each level is causal, the totality of levels cannot ever be taken into account. Thus, as a matter of principle, we say that complete determinism could not even be conceived of, yet, each level can be determined. Here, we part company with the believers in "spontaneity" for we say that what appears to be spontaneous is caused by factors, in principle, knowable, but now hidden to us. But to be able to say this without implying complete determinism, we must assume an unlimited number of levels. (Bohm to Miriam Yevick; Letter 73. Folder C118, dated: Rec Mar 31 [1952], [19], pp. 254-55; extended quotation in Appendix 4.4)

It is now clear that Bohm did not undergo a conversion from indeterminism (à la Copenhagen) to determinism (with hidden variables), as the standard narrative implies. He actually stayed faithful to his tenets of realism and causality, and his shift was merely that of realising that Bohr's approach was not enough to achieve what he had in mind. So it seems that his philosophical theory of the infinite levels was conceived to "cure" his own model of the "nightmare" of determinism.

One should also remark that this idea of unlimited levels is very much in the spirit of dialectics, and indeed this is the most Marxist trait in Bohm's work. As pointed out by Talbot, such a connection is perhaps less abstract than one could think, drawing directly from the work of Engels: "especially in the Dialectics of Nature, Engels introduces the idea of levels, or what he calls 'forms of motion'. […] Engels is especially opposed to attempts at mechanical reductionism, which 'blots out the specific character' and 'qualitative difference' of non-mechanistic forms of motion." ([19], p. 25). For Bohm this dialectic view of nature is a way to maintain a non-trivial form of causality, understood as the possibility of creating new things that do not arise out of necessity, contrary to the mechanistic view. In a letter to his friend, the American physicist Melba Phillips, Bohm spelled out this connection in detail:

> Also an important additional aspect of causality needs to be discussed in more detail —namely— causality as a means of determining the mode of being of qualitatively new things, which grow out of the old things. The basic aspect of mechanism is that (as in an idealized machine) the universe is conceived of as made of basic elements (particles, fields, or what have you) which simply interact according to fixed rules, and which themselves never change as a result of the processes in which they take part. […] However, the concept of the infinity of levels shows that there need exist in nature no such thing as a basic element which never changes. Thus, causal laws not only determine the future in a mechanical sense; i.e., in the sense of determining quantitative changes in the arrangements of entities whose intrinsic character is fixed. The causal laws also tell when qualitative changes will occur and may define the characteristics of the new entities that can come into being. Thus, causality is a broader concept than that of mechanical determinism. […] A "mechanistic" attitude toward science however, tends to limit the growth of our concepts in an arbitrary and dogmatically conceived way.
> Such a mechanistic attitude refers not only, however, to the mechanistic determinists, but also to the "mechanistic indeterminists", who insist that in the quantum of action, we have reached an ultimate, indivisible, and unanalyzable entity, which will never be found to have a structure understandable in terms of a deeper level. (Bohm to Melba Phillips. Letter 43. Folder C48, dated: Oct 13, 1953, [19], p. 164; extended quotation in Appendix 4.5).

In the following years, Bohm kept developing his philosophy of the infinite levels, sharpening the distinction between causality and deterministic mechanism, advocating the former and strongly opposing the latter. Causality is for Bohm the possibility of creating new qualitative entities in a non-trivial sense, i.e. without being able to reduce everything to a finite collection of basic elements that cannot change and that are subject to fixed laws:

> Now, at first sight, it may seem that we could eliminate the large-scale level by analyzing it in terms of its basic molecular motions. And if there were a finite number of levels, this would be true. But if there are an infinite number, then each level stands on a footing that is, in the long run, as basic as that of any other. For every level has below it a deeper one. Indeed, matter can be regarded as made up of the totality of all levels. Each level makes its own specific contribution to the totality. (Bohm to Melba Phillips. Letter 46. Folder C48, dated: March 15, 1954, [19], p. 170; extended quotation in Appendix 4.6).

Let us now stop for a moment and go back to the standard narrative. Freire makes a case that

> in the 1950s Bohm did indeed promote the recovery of determinism. In 1951, before the term 'causal interpretation' had gained currency in the debates on Bohm's proposal, he himself emphasized it in his first letter to the French astrophysicist and Marxist Évry Schatzman, while looking for allies, such as Jean-Pierre Vigier and Louis de Broglie, to get support for his proposal: "My position in these physical questions is that the world along with all observers who are part of it is objectively real and in principle precisely definable (with arbitrarily high accuracy), and subject to precise causal laws that apply in each individual case and not only statistically." ([17], p. 65).

There seems to be a tension between the statements of Bohm here. However, one can hypothesize that his actual point of view on determinism is the one that emerges from the letters to his intimate friends, i.e., a staunch anti-mechanistic position. Thus, these letters seem to be a more trustworthy source than a first contact with somebody whose support Bohm was seeking. He probably tamed his more complex philosophical positions and tailored his letters to his interlocutors, highlighting the deterministic aspect in his interactions with Schatzman and later with Vigier in order to find common ground with these more "traditional" Marxists, who definitely prized determinism (see Appendix 4.9). Moreover, note that in the quoted letter to Schatzman, Bohm stresses the causal aspect of his proposal, which, as clarified above, does not necessarily mean determinism.

### 3.3 An indeterministic causal model by Bohm and Vigier (1954)

So far, the evidence that Bohm was against determinism even during the years in which he devised and promoted his hidden variable model is limited to private correspondence.
However, in 1954, Bohm published a paper with Vigier—Model of the causal interpretation of quantum theory in terms of a fluid with irregular fluctuations [5]—that is a first attempt to put into practice the ideas of a model of causal interpretation which is however fundamentally non-deterministic, due to different levels of description. In fact, therein Bohm and Vigier postulate a field that is described by a fluid of density $|\psi|^{2}$, which is then able to recover the standard quantum mechanics > by introducing the hypothesis of a very irregular and effectively random > fluctuation in the motions of the fluid. […] Such random fluctuations are > evidently consistent within the framework of the causal interpretation of > the quantum theory. Thus, there are always random perturbations of any > quantum mechanical system which arise outside that system. [5] They indeed clarify that “the causal interpretation of the quantum theory permits an unlimited number of new physical models” and that their proposed “model is an extension of the causal interpretation of the quantum theory already proposed, which provides a more concrete physical image of the meaning of our postulates than has been available before, and which suggests new properties of matter that may exist at deeper levels.” [5]. Here causal means the possibility of explaining the theory in terms of a sub-quantum level (the fluid) that accounts for the higher quantum level. Note that, contrarily to the first hidden variable model [1, 2], this model is based on fundamental random fluctuations, thereby dispelling even more the doubt that Bohm was a committed determinist: “In the model that we have proposed here, however, the statistical fluctuation in the results of such [quantum] measurements are shown to be ascribable consistently to an assumed deeper level of irregular motion”. It is interesting to notice that while the postulated fluctuations of the fluid are considered to be (at this level of description) genuinely indeterministic, Bohm and Vigier think of these fluctuation as having a certain structure in terms of potentialities: “The fact that the mean density remains equal to $|\psi|^{2}$, despite the effects of the random fluctuations, implies then that a systematic tendency must exist for fluid elements to move toward regions of high mean fluid density.” The ontological basis of this new indeterministic model and how it relates to Bohm’s philosophy of the infinite levels is explained by Bohm in correspondence with Einstein: > “The general idea is that at a level more fundamental than that of quantum > mechanics, there is a field which satisfies causal laws. This field is, > however, in a state of statistical fluctuations. These fluctuations are > somehow described by the $\Psi$ field.” (Bohm to Einstein ; Letter 16. page > 5 Folder C14, February 3, 1954, [38], p. 5). > > My own point of view is that below the quantum theory there exists a sub > quantum-mechanical level of continuous and causally determined motion, and > that the quantum theory is related to the sub-quantum mechanical level, more > or less as ordinary Brownian motion is related to the atomic level. In other > words, events at the atomic level are contingent on the (in general > irregular) motions of some as yet unknown but qualitatively new kind of > entity, existing below the atomic level. 
> As a result, the relationships between things, that can be defined at the atomic level will be characterized by the laws of chance, since they will be determined only in terms of some quasi-ergodic type of motion of new kinds of entities existing at the lower level. (Bohm to Einstein; Letter 21. Folder C15, dated: November 14, 1954, [39])

Einstein's replies may seem surprising to those who still believe that he was also a committed determinist at any cost, because they show once more that he was dissatisfied with Bohm's first (deterministic) hidden variable model: "I am glad that you are deeply immersed seeking an objective description of the phenomena and that you feel that the task is much more difficult as you felt hitherto." (Einstein to Bohm; Letter 17. Folder C14, February 10, 1954, [38]). And again: "In the last years several attempts have been made to complete quantum theory as you have also attempted. But it seems to me we are still quite remote from a satisfactory solution of the problem." (Einstein to Bohm; Letter 20. Folder C15, dated: October 28, 1954, [39])

Bohm did not develop this approach further; he most likely regarded it, too, as merely a proposed first step towards his philosophy of levels of description. He did, however, come back to a stochastic causal interpretation, together with Hiley, in the 1980s [40, 41].

### 3.4 Causality and Chance in Modern Physics (1957)

It is around the same period that Bohm started thinking not only that either a deterministic or an indeterministic description was possible at every level of an infinite series, but that both individual laws and statistical laws are necessary for a causal interpretation:

> The picture which I propose is this: The totality of causal laws includes both statistical and individual laws. We start with this totality as our basic reality. […] The fundamental reality is that of matter in being and in process of change, or of becoming, as it may more accurately be called. (Bohm to Miriam Yevick. Letter 121. Folder C124, dated: Sept 10 1954, [19], pp. 419-22).

These dialectic ideas grew into a book, Causality and Chance, which Bohm published in 1957 [3]. Therein, Bohm identifies two types of causal laws (both considered fundamental): simple causal laws that connect past and future one-to-one (i.e. deterministic ones), and more general ones that are one-to-many (i.e. that do not lead to a unique evolution but only to an array of possibilities):

> [L]et us note that the one-to-many character of a causal law has no essential relationship to a lack of knowledge on our part concerning the additional causal factors to which the more precise details of the effect can be traced. […] In other words, a one-to-many law represents an objectively necessary causal connection, but in this case, what is necessary is that the effect remain within certain bounds; and not, as in simpler types of causal laws, that the effect be determined uniquely. ([3], p. 17).

And again, Bohm clarifies, as he always maintained (cf. 3.1), that causality is a more general concept than that of necessity (i.e., determinism):

> We see, then, that it is appropriate to speak about objectively valid laws of chance, which tell us about a side of nature that is not treated completely by the causal laws alone. Indeed, the laws of chance are just as necessary as the causal laws themselves. [Footnote:] Thus necessity is not to be identified with causality, but is instead a wide category. ([3], p. 23).
Furthermore, Bohm here again stresses the fact that objective chance should be interpreted as a potentiality, i.e., a property of the system and its causal conditions:

> On the basis of the above considerations, we are then led to interpret the probability of, for example, a given result in the game of dice as an objective property associated with the dice that are being used and with the process by which they are thrown ([3], p. 27; extended quotation in Appendix 4.8)

Note that this example is exactly the same as the one used by Karl Popper [42] when he introduced the propensity interpretation (see section 3.5), again showing the compatibility between Bohm and a worldview based both on causality and on indeterminism.

Beyond causality, a large part of Bohm's 1957 book [3] is devoted to defending another of his main tenets, namely, anti-mechanism. However, while he remained convinced that determinism is an unacceptable form of mechanism, there is a fundamental difference with respect to his book on quantum theory [26]. Here, in fact, Bohm does not consider randomness alone a way out of mechanism:

> The point of view described above evidently renounces an important aspect of the various forms of the mechanistic philosophy that appeared from the sixteenth through the nineteenth centuries; namely, their determinism. But in doing this, it has conserved and in fact enhanced the central and most essential characteristic of this philosophy; namely, the assumption that everything in the whole universe can be reduced completely and perfectly to nothing more than the effects of a set of mechanical parameters undergoing purely quantitative changes. […]
>
> The question of what constitutes a mechanistic philosophy, therefore, cuts across the problems of determinism and indeterminism. For this reason, we shall call the philosophy described in this section by the name of "indeterministic mechanism" ([3], pp. 62-63).

Bohm's criticism of mechanism (and thereby of determinism) does not spare his own hidden variable interpretation, which he again considers an unsatisfactory physical model, whose main feature, he stresses, is consistency:

> While our theory can be extended formally in a logically consistent way by introducing the concept of a wave in a 3N-dimensional space, it is evident that this procedure is not really acceptable in a physical theory, and should at least be regarded as an artifice that one uses provisionally until one obtains a better theory in which everything is expressed once more in ordinary three-dimensional space. ([3], p. 117)

Finally, in his Causality and Chance, Bohm for the first time publicly defends his philosophical view of the infinite levels of description as the main alternative to mechanism, be it deterministic or indeterministic (see Appendix 4.8 for relevant quotations). As noted already by Freire [17], this marks Bohm's entry into the philosophical debate and would allow him to engage with prominent philosophers of science, the likes of Paul Feyerabend and Karl Popper (see further). However, these ideas of infinite levels were not appreciated by his more traditional Marxist followers, who saw in them the undermining of determinism: a positive feature for Bohm and an unacceptable price for them. This is the case of Évry Schatzman and Vigier, who wrote to Bohm: "We may be wrong, but we do not agree at all with your ideas about the different levels of reality.
It seems to us that it is a formal interpretation of the famous sentence of Lenin, in Materialism and Empiriocriticism, about the different levels of reality" (quoted in [17], p. 108).

To conclude, in Causality and Chance Bohm synthesizes the main philosophical tenets that had been present in his writing since the beginning, but in a quite scattered way. Therein, Bohm defends, for the first time systematically, causality in its broadest sense, advocating the fundamental necessity of both individual laws and statistical laws, depending on the context. Moreover, he firmly rejects mechanism, not only in the form of determinism (which he had already done for many years), but also in its indeterministic form. Finally, Bohm opposes mechanism with a dialectic philosophy of infinite levels of description that he had developed throughout the 1950s.

As for physics proper, in 1957 Bohm published with his student Yakir Aharonov a paper in which he rejects his own 1952 model, not on the basis of determinism but on that of nonlocality: "It must be admitted, however, that this quantum potential seems rather artificial in form […] that it implies instantaneous interactions between distant particles, so that it is not consistent with the theory of relativity." [43]. Bohm thus kept proposing his dialectical views of different levels, similarly to the paper with Vigier [5], looking for a "deeper subquantum-mechanical level" [43].

It is interesting to notice that, still at this stage, Bohm's views were completely misunderstood. Louis de Broglie, who wrote the foreword of his Causality and Chance, for instance, keeps attributing to Bohm the great merit of giving hope to those who look for a deterministic hidden variable explanation of quantum theory: "It is possible that looking into the future to a deeper level of physical reality we will be able to interpret the laws of probability and quantum physics as being the statistical results of the development of completely determined values of variables which are at present hidden from us. It may be that the powerful means we are beginning to use to break up the structure of the nucleus and to make new particles appear will give us one day a direct knowledge which we do not now have of this deeper level." ([3], p. x). This goes completely against what Bohm conveys in his book, making one wonder whether people like de Broglie were actually reading Bohm's works or whether they just imposed on him what they wished to hear.

Towards the end of the 1950s Bohm abandoned Communism, following the revelations of Stalin's crimes by Nikita Khrushchev in 1956 (see [17]). As already recalled, this has been identified in the literature as the main motivation for abandoning his commitment to determinism. But as we have shown, such an alleged commitment to determinism was never present in the first place, and his dialectic attitude remained an important factor in his philosophy. However, probably due to the frustration of being continuously misunderstood, Bohm's engagement with different models of the causal interpretation became sparser. Actually, after his move to the UK, first to Bristol and then to London, he engaged more and more in the philosophical debate, becoming friends with Paul Feyerabend, Karl Popper and Stephen Körner, and he kept his interpretational considerations away from his physics colleagues. Hiley joined Bohm at Birkbeck College in London in 1961 and, as a matter of fact, they spent "ten years without actually talking about the causal interpretation" [37].
As recalled by Hiley [37], it was only in the 1970s that two of Bohm's students, Chris Dewdney and Chris Philippidis, "rediscovered" the hidden variable papers [1, 2] and went to Hiley to ask why he and Bohm were not discussing these important results. Hiley replied "because it is all wrong", but when questioned further, he realized that he did not actually know why; he had only picked up what everybody was saying. And when he finally read Bohm's original papers thoroughly, he understood that nothing was wrong and motivated the students to use the computer to calculate the trajectories of particles using Bohm's model. This marks the revival of Bohm's hidden variables (see also [17], Ch. 6.1), a revival in which Bohm himself, however, obviously did not take part. Actually, when approached by Dewdney and Philippidis, "Bohm himself […] admitted that he had made a tactical error in his original presentation of the theory. The term hidden variables, he said, created the wrong impression, and the papers themselves were too rigid and deterministic." ([25], p. 266).

In the following decades Bohm dedicated his work to a holistic approach that continued the ideas from his work on the causal interpretation of quantum theory. The purpose of Bohm's original proposal in the light of his new ideas was later explained by him in the following way:

> To show that it was wrong to throw out hidden variables because they could not be imagined, it was therefore sufficient to propose any logically consistent theory that explained the quantum mechanics, through hidden variables, no matter how abstract and hypothetical it might be. Thus, the existence of even a single consistent theory of this kind showed that whatever arguments one might continue to use against hidden variables, one could no longer use the argument that they are inconceivable. Of course, the specific theory that was proposed was not satisfactory for general physical reasons, but if one such theory is possible, then other and better theories may also be possible, and the natural implication of this argument is 'Why not try to find them?' ([44], p. 104)

His scientific program was based on quantum field theory as a way to approach the concept of the infinite levels he had already pointed out in his early works. His philosophical ideas remained consistent with his early works in their rejection of mechanistic ideas:

> As we have seen, relativity theory requires continuity, strict causality (or determinism) and locality. On the other hand, quantum theory requires noncontinuity, non-causality and non-locality. So the basic concepts of relativity and quantum theory directly contradict each other. […]
>
> What is very probably needed instead is a qualitatively new theory, from which both relativity and quantum theory are to be derived as abstractions, approximations and limiting cases. The basic notions of this new theory evidently cannot be found by beginning with those features in which relativity and quantum theory stand in direct contradiction. The best place to begin is with what they have basically in common. This is undivided wholeness. Though each comes to such wholeness in a different way, it is clear that it is this to which they are both fundamentally pointing. To begin with undivided wholeness means, however, that we must drop the mechanistic order. ([44], p.
223)

### 3.5 Propensities and the causal interpretation

Bohm had been in touch with Popper since at least 1959 (for the relationship between them, see [24] and references therein). It is exactly in that period that Popper—who was advocating fundamental indeterminism in physics even at the classical level—developed a new interpretation of probabilities as objective physical properties, i.e., propensities or tendencies for a system to produce an outcome [42]. Here we would like to stress that although Bohm never actually pursued a program based on potentialities, he hinted at it on several occasions (see above). As we have seen, he endorsed that view in his Quantum Theory [26] and hinted, in his paper with Vigier [5], that the statistical behavior of quantum mechanics constrains the tendency of the sub-quantum fluid. Looking at Bohm's correspondence with Popper, we find explicit support of this view: "I feel that what you have to say about propensities make a genuine contribution to clarifying the issue that you discuss" (Bohm to K. Popper on March 15th 1967. PA, Popper's Archives, Box/Folder: 84/19. AAU, Klagenfurt (Austria)/Hoover Institution, Stanford (California) [45]). This was not appreciated by Popper himself, who should be listed among the many who misinterpreted Bohm, attributing to him a strong commitment to determinism. In fact, when Popper published his book on the foundations of quantum theory in 1982 [46], although praising Bohm for striving for realism, he harshly criticized him for being a determinist. Bohm replied to him, emphasizing once again that he was not committed to determinism and explicitly acknowledging for the first time, to our knowledge, that his view on the causal interpretation can be regarded in terms of potentialities:

> "I certainly think that a realistic interpretation of physics is essential. I think also that I understand your propensity interpretation of probability and I have no objections against it. […]. However, I feel that you have not properly understood my own point of view, which is much less different from yours than is implied in your book. Firstly I am not wedded to determinism. It is true that I first used a deterministic version of […] quantum theory. But later, with Vigier, a paper was written, in which we assumed that the movement of the particle was a stochastic process. Clearly that is not determinism. Indeed, we can regard the stochastic movement of the particle as affected by a field of propensities, in accordance with your ideas […] The key question at issue is therefore not that of determinism vs. indeterminism. I personally do not feel addicted to determinism […].
>
> [W]hat is real has a being independent of the consciousness of the observer. John Bell has used the term "beable" to describe such an independent reality. From the point of view of realism, the main criticism of the orthodox interpretation of the quantum theory is that it has no room in it for "beables". […] I introduced the notion that the "beables" of the quantum theory are the particles and the wavefunction (which contains information about the propensities). Along with Vigier, I can say that the "beables" are themselves conditioned by such propensities.
> What are called the observables of quantum theory are then potentialities of the "beables", realized according to a context, which in current physics, is determined by the experimental arrangement (though in nature, similar contexts will still exist without the intervention of human being). […] My proposal has been that the "beables" are particles (moving stochastically), along with the wave function. (Bohm to K. Popper 13.07.1984. Box/Folder: 278/2. AAU, Klagenfurt (Austria)/Hoover Institution, Stanford (California) [47])

## 4 Discussion and conclusion

In this paper, we have shown that Bohm was always against mechanism and therefore determinism. We have rebutted the historical narrative according to which one can identify an early period when Bohm was a supporter of Bohr, a later period when he was a committed determinist (influenced by Einstein and by Marxism), and finally a period, after his break with Marxism, in which determinism ceased to be a main concern of his. On the contrary, Bohm's philosophical tenets never changed throughout his whole life: he was always committed to developing a realistic, causal, non-mechanistic view of physics. This led him to develop a new dialectical philosophy composed of infinite levels of description that guided him in his work for the following decades. As such, Bohm would never have accepted determinism, at any stage of his life. In a slogan, Bohm was never a Bohmian.

Although the content of this paper has a mostly historical scope, it may also concern the physicists and philosophers who have proclaimed themselves Bohmians. It is undeniably true that Bohm provided the first deterministic hidden variable model of quantum theory. And yet, we just want to stress that for him this was nothing more than a model, a proof of principle that it was possible to do what was considered fundamentally unattainable. However, at the same time, this was for him most unsatisfactory, for it betrayed one of his deepest convictions about nature, namely, that a basic ontology of particles moved around by deterministic laws cannot be the end of the story. Therefore, the many scholars who today support Bohmian mechanics at face value, giving it an ontological role, should be aware that they are advocating a worldview that stems from what its original proposer considered a mere model which could not satisfy the basic standards of acceptability for a physical theory (except internal consistency). Now, while this is obviously a logically acceptable position, they should be aware that they are going directly against the fundamental views of Bohm, and therefore cannot in any way appeal to his authority. This separation between the original thought of Bohm and those who adopted his model was so striking that, shortly before his death, when he became aware of Sheldon Goldstein and Detlev Dürr's work on his ideas, Bohm bitterly confessed to his main collaborator Basil Hiley: "why on earth are they calling it Bohmian mechanics? Haven't they read a word I have written?" [37].

So, concerning determinism, Bohm finds himself in a position comparable (fortunately with fewer ethical implications) to that of Einstein with respect to the atomic bomb: it is a historical fact that it was Einstein who suggested to US President Franklin Roosevelt that research on nuclear weapons be undertaken in order to preempt Nazi Germany from achieving the same threat. However, for his whole life—before and after—Einstein was a committed pacifist.
Similarly, it is a historical fact that Bohm developed a deterministic interpretation of quantum theory. However, for his whole life—before and after—he was a committed anti-determinist. Invoking Bohm to defend deterministic views of physics is like invoking Einstein to promote nuclear weapons. ### Acknowledgements The authors would like to thank Basil Hiley for taking time for an interview and valuable discussions. We also would like to express our thanks to Emma Illingworth from the David Bohm Archive at Birbeck Library for her support during our research. ## References * [1] David Bohm. A suggested interpretation of the quantum theory in terms of” hidden” variables. I. Physical review, 85(2):166, 1952. * [2] David Bohm. A suggested interpretation of the quantum theory in terms of” hidden” variables. II. Physical review, 85(2):180, 1952. * [3] David Bohm. Causality and chance in modern physics. University of Pennsylvania Press, 1957. * [4] David Bohm. Proof that probability density approaches $|\psi|^{2}$ in causal interpretation of the quantum theory. Physical Review, 89(2):458, 1953. * [5] David Bohm and Jean-Pierre Vigier. Model of the causal interpretation of quantum theory in terms of a fluid with irregular fluctuations. Physical Review, 96(1):208, 1954. * [6] D. Bohm and B. Hiley. The Undivided Universe An Ontological Interpretation of Quantum Theory. Routledge, 1993. * [7] Paavo Pylkkänen. Quantum theory, active information and the mind-matter problem. In Contextuality from quantum physics to psychology, pages 325–334. World Scientific, 2016. * [8] Peter R Holland. The quantum theory of motion: an account of the de Broglie-Bohm causal interpretation of quantum mechanics. Cambridge university press, 1995. * [9] Detlef Dürr and Stefan Teufel. Bohmian mechanics: the physics and mathematics of quantum theory. Springer Science & Business Media, 2009. * [10] Tim Maudlin. Philosophy of Physics: Quantum Theory. Princeton University Press, 2019. * [11] Basil J Hiley. Non-commutative geometry, the bohm interpretation and the mind-matter relationship. In AIP Conference Proceedings, volume 573, pages 77–88. American Institute of Physics, 2001. * [12] Michael Esfeld, Mario Hubert, Dustin Lazarovici, and Detlef Dürr. The ontology of Bohmian mechanics. The British Journal for the Philosophy of Science, page axt019, 2013\. * [13] James T Cushing. Quantum mechanics: historical contingency and the Copenhagen hegemony. University of Chicago Press, 1994. * [14] Basil. J. Hiley. David Joseph Bohm. 20 December 1917—27 October 1992. Biographical Memoirs of Fellows of the Royal Society, 43:107–131, 1997. * [15] Flavio Del Santo. Striving for realism, not for determinism: Historical misconceptions on Einstein and Bohm. APS News, May 2019. * [16] Marij van Strien. Bohm’s theory of quantum mechanics and the notion of classicality. Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics, 71:72–86, 2020. * [17] Olival Freire Junior. David Bohm: A life dedicated to understanding the quantum world. Springer Nature, 2019. * [18] Mara Beller. Quantum dialogue: The making of a revolution. University of Chicago Press, 1999. * [19] Chris Talbot. David Bohm: Causality and chance, letters to three women. Springer, 2017. * [20] David Kaiser. How the hippies saved physics: science, counterculture, and the quantum revival. WW Norton & Company, 2011. * [21] Olival Freire Junior. The quantum dissidents: rebuilding the foundations of quantum mechanics (1950-1990). 
Springer, 2014. * [22] Angelo Baracca, Silvio Bergia, and Flavio Del Santo. The origins of the research on the foundations of quantum mechanics (and other critical activities) in Italy during the 1970s. Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics, 57:66–79, 2017. * [23] Virgile Besson. L’interprétation causale de la mécanique quantique: biographie d’un programme de recherche minoritaire (1951–1964). PhD thesis, Université de Lyon; Universidade federal da Bahia, 2018\. * [24] Flavio Del Santo. Karl Popper’s forgotten role in the quantum debate at the edge between philosophy and physics in 1950s and 1960s. Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics, 67:78–88, 2019. * [25] David F Peat. Infinite potential: The life and times of David Bohm, 1997. * [26] David Bohm. Quantum theory. Prentice-Hall, Inc., New York, 1951. * [27] Paavo Pylkkänen. Bohm-Biederman Correspondence: David Bohm and Charles Biederman: Vol. I.: Creativity and Science. Routledge, 1999. * [28] Max Jammer. Philosophy of Quantum Mechanics. the interpretations of quantum mechanics in historical perspective. Wiley: New York, 1974. * [29] Christian Forstner. The early history of David Bohm’s quantum mechanics through the perspective of Ludwik Fleck’s thought-collectives. Minerva, 46(2):215–229, 2008. * [30] Max Born, Hedwig Born, Irene Born, and Albert Einstein. The Born-Einstein letters: correspondence between Albert Einstein and Max and Hedwig Born from 1916 to 1955. McMillan: London, 1971. * [31] Christian Forstner. Dialectical Materialism and the Construction of a New Quantum Theory: David Joseph Bohm, 1917–1992. Max-Planck-Institut für Wissenschaftsgeschichte, 2005. * [32] Detlef Dürr, Sheldon Goldstein, and Nino Zanghì. Quantum physics without quantum philosophy. Springer Science & Business Media, 2012. * [33] Angelo Baracca, David J Bohm, Basil J Hiley, and Allan EG Stuart. On some new notions concerning locality and nonlocality in the quantum theory. Il Nuovo Cimento B (1971-1996), 28(2):453–466, 1975. * [34] Chris Philippidis, Chris Dewdney, and Basil J Hiley. Quantum interference and the quantum potential. Nuovo Cimento B, 52(1):15–28, 1979. * [35] Werner Heisenberg. Über den anschaulichen Inhalt der quantentheoretischen Kinematik und Mechanik. Springer, 1985. * [36] Dennis Dieks. Von neumann’s impossibility proof: Mathematics in the service of rhetorics. Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics, 60:136–148, 2017. * [37] Flavio Del Santo and Gerd Christian Krizek. Interview to Basil Hiley. Birkbeck College, London (UK), 19 Jan 2019. * [38] David Bohm. Correspondence. Papers and correspondence of David Joseph Bohm 1917-1992, letter C14. Birkbeck Library Archives and Special Collections, University of London. GB 1832 BOHM/C, 1954. * [39] David Bohm. Correspondence. Papers and correspondence of David Joseph Bohm 1917-1992, letter C15. Birkbeck Library Archives and Special Collections, University of London. GB 1832 BOHM/C, 1954. * [40] David Bohm. Non-locality in the stochastic interpretation of the quantum theory. In Annales de l’IHP Physique théorique, volume 49(3), pages 287–296, 1988. * [41] David Bohm and Basil J Hiley. Non-locality and locality in the stochastic interpretation of quantum mechanics. Physics Reports, 172 (3):93–122, 1989. * [42] Karl R Popper. The propensity interpretation of probability. 
The British journal for the philosophy of science, 10(37):25–42, 1959. * [43] David Bohm and Yakir Aharonov. Discussion of experimental proof for the paradox of Einstein, Rosen, and Podolsky. Phys. Rev., 108:1070–1076, Nov 1957. * [44] David Bohm. Wholeness and the Implicate Order. Routledge, 1980. * [45] David Bohm. Letter 84/19: D. Bohm to K. Popper on March 15th 1967. PA, Popper’s Archives, Box/Folder: 84/19. AAU, Klagenfurt (Austria)/Hoover Institution, Stanford (California), 1967. * [46] Karl R Popper and William Bartley III. Vol. III of the postscript to the logic of scientific discovery: Quantum theory and the schism in physics, 1982. * [47] David Bohm. Letter 278/2: D. Bohm to K. Popper 13.07.1984. PA, Popper’s Archives, Box/Folder: 278/2. AAU, Klagenfurt (Austria)/Hoover Institution, Stanford (California), 1984. ## Appendix A – Excerpts from correspondences of D. Bohm ### 4.1 Excerpt of a letter from Bohm to Miriam Yevick (January 7, 1952) Letter 65. Folder C117, dated: Jan 7, 1952, [19], p. 227. Now, to retain the concept of matter, we must above all retain the idea that in some aspects at least, matter is indestructible and uncreatable. How then do we explain the prevalence of change and the transiency of material things? This is done by the notion of endless transformation. The “things” at each level, are made up of smaller “elements” at a more fundamental level, and it is the motion of these more fundamental elements (not usually directly visible to us, except with the aid of elaborate scientific research) which causes the appearance and disappearance of the “things” existing at a higher level. These more fundamental “elements” however, cannot be permanent, but must be made up of still more fundamental “elements” and so on ad infinitum. Thus, we can see that every “thing” that exists may at some time come into existence and later go out of existence, but there is always a deeper level, in terms of which this change can be viewed rationally as a transformation of a more elementary form of matter, which is not itself basically altered in this particular transformation. Nevertheless, no single “thing” is uncreatable or indestructible. Only matter as a whole in its infinity of properties and potentialities is eternal. ### 4.2 Excerpt of a letter from Bohm to Schatzman; (not dated, 1952) Letter A1.20, not dated, 1952. [23], p. 351. For quantum mechanics has show, that ”empty” space a strongly fluctuating electromagnetic fields and more important still, a very high density ( infinite according to the present inadequate theories) of negative energy electrons, protons and neutrons. If one adopts the new interpretation of the quantum mechanics, there is no choice but co suppose chat these particles are really in existence. One therefore has been back to the old notion of a material substratum filling all space. As a have said, this substratum is very dense, much denser than any other form of matter. In fact, matter as it is usually called, would be only a disturbance in the uniform background of substratum. Light waves, etc. would also be disturbances of the substratum. The mysterious ”annihilation” and ”creation” of material particles could now be understood naturally; for with the [ ?] of energy, the substratum could be made non-uniform as a spreading wave. These two forms of energy could be transformed into each other when we look out at the sky, space appears to be almost empty, because light waves are scattered only by inhomogeneities in space. 
Similarly material particles are likewise inhomogeneities propagated freely in a uniform background. Thus, to a naive way of looking, space appears empty, a similar phenomenon appears in connection with the theory of metals. As you know, an electron will go through a very dense metal without being scattered as long as the crystal lattice is perfectly regular. Only non- uniformities in the lattice will scatter the electron. A naive observer (for example a positivist) would conclude from this evidence that a metal consists of empty space, with a very thin haze of ”matter” . I would like to add one point here. It is most likely that not even the substratum particles could be indestructible and unanalysable. Instead, there is probably another substratum below this ( of a qualitatively different kind most probably) and so on ad infinitum. Thus, we should have an infinite series of qualitatively different levels of laws. Any finite number of levels can always be understood by humanity, but never all of them. Thus, ·we can understand more vividly a number of dialectical principles, for example, many people are puzzled by the dialectical assertion that matter must be eternal ( i.e. no creation). The answer is that at any particular level, the forms of matter as a whole, in its infinite number of properties and inter -connections is eternal. Secondly, consider the statement of dialectics chat ”a thing is not equal to itself” . this we understand by the [ ? ] that a materiel ”thing” contains an infinity of properties whereas the concepts usually defining what the thing ”is” cover only a finite number of these properties. Thus, a thing is not only ”what it is” but also a large nun1ber of other things, which will manifest themselves later ; or in other words in ”what is coming to be”. Moreover, the levels not taken into account in the usual definition of the ”theory” will generally produce effects that are in contradiction with the permanent existence of this ”thing” . ### 4.3 Excerpt of a letter from Bohm to Miriam Yevick (January 23, 1952) Letter 66. Folder C117, dated: Jan 23, 1952, [19], p. 235: [I]t is essential to think that things are not only “what they are known to be”, but also a whole list of different things connected with the infinite number of levels not known to us. These other things may be thought of roughly as “what is coming into being” since it is in the future form of the thing that the underlying factors will ultimately manifest themselves. […] As in the structure of “elementary” forms of matter human beings contain an infinite number of at present unknown (or poorly known) levels of complexity of behavior. This fact has two important implications: (1) The most obvious, that by scientific study, we may ultimately learn to control some of the factors at any particular level, and thus to produce startling changes in human nature (including even ourselves) (2) Before this can be done, the different levels will manifest themselves in that people cannot correctly be regarded as “being only what they are”, but that they can also undergo fundamental transformations of character with changing conditions. […] As for the book [[26]], you must try to imagine the situation when I wrote it. You suggest that I may have had some dishonesty, perhaps some desire to please the “big shots” in writing it, and that this led me to back up the usual interpretation of the quantum theory. 
You must remember several things however: (1) When I wrote this book, there did not exist anywhere a clear statement of the basis of the theory. There existed some books which made ridiculous abstract mathematical postulates that no one could possibly understand, and there were other discussions, such as those of Bohr, which aimed at discussing the physics, but in an incredibly vague way. A student at Princeton once told me that Bohr’s statements not only cancelled out with regard to their meaning in the first order, but also with regard to connotation in the second order. It was therefore necessary to go to the third order to find what Bohr meant. When I first started to study this subject 15 years ago, it fascinated me and puzzled me. I had no reason to suspect that the “big shots” had muddled up the subject, since after all, had they not been astonishingly successful in predicting experiment after experiment? Above all, I never got over being puzzled by the theory. When I started the book, I was in no position to see through the matter, because I still hadn’t made complete sense of it. All I knew was that there was one school, which utterly repelled me, in which one was supposed to introduce abstract mathematical postulates, and be satisfied if the calculations agreed with experiment. Against this, Bohr’s school seemed to be a big improvement, because at least he tried to explain the physical meaning of the theory. Moreover, there was an element of dialectics in Bohr’s point of view which attracted me. It seemed progressive because it broke the old mechanist materialist determinism, which left no room for growth and development of something new. After I had written the book, I finally began to grasp the full meaning of the theory, and could see that it leads inevitably to a form of (dialectical) idealism. But this was not so clear when I started, because of the general confusion in the literature. If you tried to read other books, you wouldn’t be able to say that you see through this stuff, just because the other books leave things just vague enough so that you don’t know quite what you are seeing through. In writing this book, I hope that I have not only clarified the issues for myself, but perhaps for other people too. I suspect that a clear presentation of Bohr’s point of view (the first clear one, if I may boast a little) will do more to favor the causal interpretation than to favor Bohr’s interpretation. Now with my new point of view, I can see an infinitely better way to get out of the trap of mechanistic determinism; namely through the concept of an unlimited number of causal levels. I would call Bohr’s point of view “static dialectics”. This is because it is a form of “slinging the lingo” in which the dialectically opposing concepts are made just vague enough so that the contradictions between them are avoided. Thus, one is not faced with the necessity of seeking new concepts that synthesise the opposites, and the dynamic aspects of dialectics (i.e. the contradictions leading to something new at another level) are lost. Finally, I should say that I wrote the book in a spirit of struggle against the obscurantist notion that nature can from now on be understood only in terms of abstract mathematical postulates. The struggle was well worth while, since it led me to a new point of view. ### 4.4 Excerpt of a letter from Bohm to Miriam Yevick (March 31, 1952) Letter 73. Folder C118, dated: Rec Mar 31 [1952], [19], pp. 
254-55: I think that the explicit recognition of a limitless number of levels would be a big step forward in science. Most of the errors of both the positivist and the 19th century “mechanical” materialists spring from an implicit assumption that the laws of nature will some day finally be understood in terms of a limited number of hypotheses. From this comes the nightmare of a mechanically determined universe that follows an inevitable course. To avoid this nightmare, positivists and idealists have given up causality and assumed a “spontaneous” (i.e., uncaused) element in physical processes. […] The concept of a limitless number of levels suggests, however that the work of science is never finished and leads one at each level to seek the contradictions which can [unreadable] at the next level etc. Thus it provides a motive power for continual development & growth. Moreover, the nightmare of complete determinism is avoided. Although each level is causal, the totality of levels cannot ever be taken into account. Thus, as a matter of principle, we say that complete determinism could not even be conceived of, yet, each level can be determined. Here, we part company with the believers in “spontaneity” for we say that what appears to be spontaneous is caused by factors, in principle, knowable, but now hidden to us. But to be able to say this without implying complete determinism, we must assume an unlimited number of levels. It is the unlimited number of levels which give matter its “non- mechanical” aspects, for if the analysis of physical laws could ever be completed, the theory would either be deterministic + “mechanical”, or “indeterministic” and “spontaneous”. Another interesting point – if there are an infinite number of levels, we can expect that all existing limitations (such as speed of light and uncertainty principle) can be overcome with the aid of more fundamental levels. Thus, by the use of causal laws, humanity can move toward freedom. Whereas, in the ignorance of causal laws, humanity is enslaved either to determinism or to “spontaneity”, which, being pure accident, is just as tyrannical. One other point, a distinction between “determinism” and “causality”. Although both words have roughly the same meaning, their implications are different. For causality implies (a) that if you know the causes, you can predict the effects. (b) That if you change the causes, you can change the effects in a predictable way. But determinism implies only predictability. In fact, with complete determinism, it would be impossible for us ever to change anything. Now, if there are a finite number of levels, then complete causality obviously implies complete determinism. But if there are an infinite number, then the two concepts part company. For we can have complete causality at every level, in the sense that we can use this causality to change the world in a predictable way,with the error in the predictions dependent only on our level of knowledge; whereas we can in no sense conceive of the world as completely determined. In this connection, note that the statement that new things can come into existence is consistent with causality, only if what is already in existence has an infinite number of levels. For if we have a finite number of causal levels, then the future is already contained logically in the present, but not if we have an infinite number. 
The appearance of qualitatively new things with time is possible with an infinite number, because the effects of the limitless number of lower levels can always surge up into a higher level (and vice versa) producing qualitative [missing words] describable as a rearrangement of things already in existence. ### 4.5 Excerpt of a letter from Bohm to Melba Phillips (October 13, 1953) Letter 43. Folder C48, dated: Oct 13, 1953, [19], p. 164: Also an important additional aspect of causality needs to be discussed in more detail – namely – causality as a means of determining the mode of being of qualitatively new things, which grow out of the old things. The basic aspect of mechanism is that (as in an idealized machine) the universe is conceived of as made of basic elements (particles, fields, or what have you) which simply interact according to fixed roles, and which themselves never change as a result of the processes in which they take part. Naturally, every physical theory has some non-mechanistic aspects. For example, in the field theory, new entities (waves+particle — like singularities) can arise out of the interconnections of the basic field elements through the field equations (especially if the latter are non-linear). Also in a particle theory, new entities can arise out of interactions. […] Nevertheless, the basic elements in such theories are usually conceived of as fixed and eternal. However, the concept of the infinity of levels shows that there need exist in nature no such thing as a basic element which never changes. Thus, causal laws not only determine the future in a mechanical sense; i.e., in the sense of determining quantitative changes in the arrangements of entities whose intrinsic character is fixed. The causal laws also tell when qualitative changes will occur and may define the characteristics of the new entities that can come into being. Thus, causality is a broader concept than that of mechanical determinism. It contains limited mechanical determinism as a special case. Indeed, the concept of causality is continually evolving with the development of science and other aspects of human activity, so that the potential richness of this concept has no limit. In other words, we may expect future generations to discover more and more aspects of the concept of causality, thus transforming this concept in a way that we have at present no inkling of. Yet these changes will not be arbitrary, but will instead grow in a definite way out of the efforts to solve real problems presented by the successive levels of reality that we shall be able to reach. A “mechanistic” attitude toward science however, tends to limit the growth of our concepts in an arbitrary and dogmatically conceived way. Such a mechanistic attitude refers not only, however, to the mechanistic determinists, but also to the “mechanistic indeterminists”, who insist that in the quantum of action, we have reached an ultimate, indivisible, and unanalyzable entity, which will never be found to have a structure understandable in terms of a deeper level. In fact, the quantum of action presents many aspects of the ultimate particles of the atomists, so that the insistence that the quantum will never be analyzed is as mechanistic as a theory of point particles following determined orbits. 
Similarly, the insistence that chance+probability are not subject to a causal analysis at a deeper level constitutes a mechanistic attitude toward these things, since chance+probability are conceived of as existing in themselves and functioning under all possible circumstances according to fixed rules. […] According to the mechanistic indeterminists, it is fixed by an equally mechanical “chance” which is conceived of as absolute and not itself capable of change or development. We may make an analogy of a man who is offered the possibility of 100 different ways of being executed. The deterministic school of executioners would choose the way according to certain definite factors, e.g., the chemical concentration of the blood, the wave \- length of the light emitted from his skin, etc. The indeterministic school would chose the way by spinning a roulette wheel. The non-mechanistic school would seek a qualitative change - i.e., to find a way to escape execution, taking advantage of all factors, both “determinate” and “chance”. So the essential point is that because of the infinite complexity and depth of the laws governing the nature of matter, no preassigned scheme of things can remain adequate forever, not even if it is restricted to being a general framework or outline. But this is just what most people find it difficult to accept – perhaps because our society requires us to accept the idea that a certain general form of social organization is inevitable, although within this general framework, we may make various quantitative changes, either by chance, or by determinate rule, as we please, as long as nothing essential is ever changed. […] My own opinion is that the synthesis will eventually have to be on a still deeper level and will have to introduce new kinds of entities that are neither particles nor fields, of which we have only a vague idea at present. ### 4.6 Excerpt of a letter from Bohm to Melba Phillips (March 15, 1954) Letter 46. Folder C48, dated: March 15, 1954, [19], p. 170: First of all, it is necessary to sharpen the distinction between causality and mechanism (or deterministic mechanism). Mechanism is characterized by two fundamental aspects: (1) Everything is made of certain basic elements which themselves never change in essence (i.e., qualitatively). (2)All that these elements can do is to undergo some quantitative change according to some fixed laws of change. For example, if they are bodies, they can move in space. If they are fields, they can change their numerical values, etc. But the basic elements themselves never undergo qualitative change. If we postulate an infinity of levels, then we make a step beyond mechanism. For the elements existing at each level are made of still smaller elements in motion (i.e., changing quantitatively), and the mode of being of the higher level elements arises out of the motions of the lower level elements. Thus, there are no elements that can never change. Indeed, even if we have a finite number of levels, some qualitative change is possible within a mechanistic theory. For example, with atoms in chaotic motion, we obtain new large scale properties, such as pressure, temperature, etc., new entities, such as gas, liquid, solid, and qualitative changes between them. Now, at first sight, it may seem that we could eliminate the large-scale level by analyzing it in terms of its basic molecular motions. And if there were a finite number of levels, this would be true. 
But if there are an infinite number, then each level stands on a footing that is, in the long run, as basic as that of any other. For every level has below it a deeper one. Indeed, matter can be regarded as made up of the totality of all levels. Each level makes its own specific contribution to the totality. Of course, each level finds an image in others, so that one can deduce many properties of a given level by studying other levels. Yet, there may be properties that cannot so be deduced. Not only may these properties be peculiar to a given level, but they may involve “crossing” of levels. […] Now, a mechanical law is characterized by the fact that it specifies a rule governing quantitative changes of elements that are fixed in nature. A more general causal law may express the conditions governing qualitative change. But if it does this, it must do something else that a mechanical law is never called upon to do. It must not only determine the mode of change, but also the mode of being of the elements when they are not changing. A mechanical law simply postulates a certain fixed and eternal mode of being of the elements, so that there is a sharp separation between the laws of change and the mode of being of the elements. A more general causal law does not make such a sharp separation. Thus, in the theory of evolution, the principle of natural selection enables us to say something about the mode of being of the various forms of life, in terms of their past history of evolution, struggle for survival, etc. Similarly, in embryology, one can in part, understand the characteristic properties of an animal at a given stage of development in terms of its past history which helped make it what it now is. Thus, a more general causal law may be historical in form. By this, I mean that the very mode of being of the elements which enter into the laws is a necessary consequence of the causal laws governing the whole chain of development.[…] A causal law may express the necessity of a fundamental qualitative change, so that what develops may have something new in it. This something new arise[s] as a necessary consequence of what is old, and yet it is not just a rearrangement or a quantitative change of the old elements. ### 4.7 Excerpt of a letter from Bohm to Miriam Yevick (September 10, 1954) Letter 121. Folder C124, dated: Sept 10 1954, [19], p. 419-22: The picture which I propose is this: The totality of causal laws includes both statistical and individual laws. We start with this totality as our basic reality. Then, we may take various views of this totality, some of which stress the individual aspect of the laws, and some of which stress the statistical aspect. But there is no such thing as a perfect individual law, because there are always fluctuations and errors coming from what has been left out. […] We start with the idea of a real world, which is in a continual process of change and development. We must now find means of analyzing this change and development. To begin, we seek those aspects that have a relative permanence. Over a short period of time, these aspects may be idealized and abstracted as having a being, conceived of as static. But like the mathematical point, the notion of a property or an aspect of things as having such a static and complete being is only a simplifying abstraction. In reality it does not have such static being, as is shown by the fact that it changes after some time. 
The fundamental reality is that of matter in being and in process of change, or of becoming, as it may more accurately be called. […] We note that causal laws are relationships between various aspects of reality at different times. Depending on which aspects that we find are necessary, possible, or convenient to relate, we will have different kinds of causal laws, some more nearly statistical and some more nearly individual. But the essential point is that one and the same system simultaneously obeys individual and statistical laws. […] Thus, we do not regard the world as made of certain fixed eternal basic elements, satisfying corresponding laws. […] [S]tatistical laws are not purely a matter of convenience and practicability. Moreover every level of individual law ultimately has some deeper statistical basis. A more accurate statement of the problem is thus: Both for reasons of practical convenience and for reasons of principle, we study statistical aggregates in their own right. […] What must be stressed however is that individual and statistical laws are abstractions as limiting cases of laws in general, and that there remains before us the problem of formulating more general types of laws that could connect these two limiting cases in a continuous and rationally understandable way. ## Appendix B – Excerpts from the writings of D. Bohm ### 4.8 Excerpts from Causality and Chance (1957) Evidently, then, the applicability of the theory of probability to scientific and other statistical problems has no essential relationship either to our knowledge or to our ignorance. Rather, it depends only on the objective existence of certain regularities that are characteristic of the systems and processes under discussion, regularities which imply that the long run or average behaviour in a large aggregate of objects or events is approximately independent of the precise details that determine exactly what will happen in each individual case. On the basis of the above considerations, we are then led to interpret the probability of, for example, a given result in the game of dice as an objective property associated with the dice that are being used and with the process by which they are thrown, a property that can be defined independently of the question of whether or not we know enough to predict what will happen in each individual throw. (p. 27) When we study any particular set of processes within one of its relatively autonomous contexts, we discover that certain relationships remain constant under a wide range of changes of the detailed behaviour of the things that enter into this context. Such constancy is interpreted not as a coincidence, but rather as an objective necessity inherent in the nature of the things we are studying. These necessary relationships are then manifestations of the causal laws applying in the context in question. These laws do not have to determine a given effect uniquely. Instead, they may (in the case of one-to- many relationships) determine only that the effect must remain within a certain range of possibilities. (p. 29) Now, as we shall see in this chapter and in other parts of the book, the mechanistic philosophy has taken many specific forms throughout the development of science. 
The most essential aspects of this philosophy seem to the author, however, to be its assumption that the great diversity of things that appear in all of our experience, every day as well as scientific, can all be reduced completely and perfectly to nothing more than consequences of the operation of an absolute and final set of purely quantitative laws determining the behaviour of a few kinds of basic entities or variables. (p. 37) The essential change brought in by this new point of view was the introduction of an element of arbitrariness into the theory. One still thought of the universe as a gigantic mechanical system with the property that everything in it can in principle be reduced completely and perfectly to nothing more than the results of purely quantitative changes taking place in suitable mechanical parameters. But instead of having its behaviour determined completely in terms of definite laws governing these parameters, this universal system could continually be subject to irregular alterations in the course of its motion. […] For we now see that there is a whole level in which chance fluctuations are an inseparable part of the mode of being of things, so that they must be interwoven into the fabric of the theory of this level in a fundamental way. Thus, we have been led to take an important step beyond the classical notion of chance as nothing more than the effects of contingencies that modify the boundary conditions or introduce randomly fluctuating external forces in a way that is not predictable within the context of interest, but which play no essential part in the formulation of the basic laws that apply within such a context. If we stopped at this point, however, we should, as we have seen in the previous chapter, merely have switched from deterministic to indeterministic mechanism. To avoid indeterministic mechanism, we must suppose that, in their turn, the chance fluctuations come from something else. Since, as Heisenberg and Bohr have shown so well, there is no room in the quantum domain for anything to exist in which these fluctuations might originate, it is clear that to find their origin we must go to some new domain. […] Of course, if one were now to make the assumption that these new laws would surely be nothing more than purely causal laws, one would then fall back into deterministic mechanism, while the similar assumption that they were surely nothing more than laws of probability would throw one back into indeterministic mechanism. On-the other hand, we have in the proposals made in this chapter avoided both these dogmatic and arbitrary extremes, since we have considered, as the situation demanded, the possibility that there are new features to the causal laws (a “quantum force” not appearing at higher levels) as well as to the laws of chance (random fluctuations originating in the sub- quantum mechanical level). Of course, as we have indicated in Section 5, we do not regard our earlier proposals as providing a completely satisfactory and definitive interpretation of the laws of the quantum domain. The basic reason is, in a sense, that the fundamental concepts considered in the theory (waves and particles in interaction) are still very probably too close to those applying in the classical domain to be appropriate to a completely new domain such as that treated in the quantum theory. (pp. 
126-127) Actually, however, neither causal laws nor laws of chance can ever be perfectly correct, because each inevitably leaves out some aspect of what is happening in broader contexts. […] Thus, we are led to regard these two kinds of laws as effectively furnishing different views of any given natural process, such that at times we may need one view or the other to catch what is essential, while at still other times, we may have to combine both views in an appropriate way. But we do not assume, as is generally done in a mechanistic philosophy, that the whole of nature can eventually be treated completely perfectly and unconditionally in terms of just one of these sides, so that the other will be seen to be inessential, a mere shadow, that makes no fundamental contribution to our representation of nature as a whole. (p. 143) ## Appendix C – Excerpts from the secondary literature about D. Bohm ### 4.9 Excerpt from Freire, O. Jr, David Bohm: A life dedicated to understanding the quantum world Évry Schatzman, who was the intermediary for Bohm to contact Vigier, wrote to Bohm: “Any physical theory should be completely deterministic, because an affirmation of the dialectical materialism is that there is an objective reality and that this reality is cognizable, that we can built an image of that reality in our mind”. Schatzman was far from modest about the work which was being done by Bohm and Vigier, comparing it to Marx’s works: “We should be grateful to people like Vigier, like you, who have with tenacity devoted their efforts to the rebuilding of the quantum theory on its feet, just like the dialectic of Hegel, which had to be put back on its feet!” However, if the Marxist background was the cement, the collaboration between Bohm and Vigier blossomed in a fruitful scientific collaboration. ([21], p. 91)
# Multi-Level Feedback Generation with Large Language Models for Empowering Novice Peer Counselors

Alicja Chaszczewicz, Raj Sanjay Shah*, Ryan Louie*, Bruce A. Arnow, Robert Kraut, Diyi Yang
Stanford University, Georgia Institute of Technology, Carnegie Mellon University
*These authors contributed equally to this work

###### Abstract

Realistic practice and tailored feedback are key processes for training peer counselors with clinical skills. However, existing mechanisms of providing feedback largely rely on human supervision. Peer counselors often lack mechanisms to receive detailed feedback from experienced mentors, making it difficult for them to support the large number of people with mental health issues who use peer counseling. Our work aims to leverage large language models to provide contextualized and multi-level feedback to empower peer counselors, especially novices, at scale. To achieve this, we co-design with a group of senior psychotherapy supervisors to develop a multi-level feedback taxonomy, and then construct a publicly available dataset with comprehensive feedback annotations of 400 emotional support conversations. We further design a self-improvement method on top of large language models to enhance the automatic generation of feedback. Via qualitative and quantitative evaluation with domain experts, we demonstrate that our method minimizes the risk of potentially harmful and low-quality feedback generation, which is desirable in such high-stakes scenarios.

## 1 Introduction

Realistic practice and tailored feedback are key processes for training peer counselors with clinical skills. Providing feedback could significantly enhance peer counselor skills, thereby improving support quality and benefiting many seeking help online Ali et al. (2015). However, it is often time-consuming and costly for counseling supervisors to provide detailed feedback Atkins et al. (2014) to beginner peer counselors. Without appropriate guidance, peer counselors might develop biased or even inappropriate helping skills without being aware of it, based on their own experiences.

Figure 1: Example conversation excerpt taken from the ESConv dataset Liu et al. (2021), annotated using our feedback taxonomy. Feedback components (appropriateness, goal definition and alignment, areas for improvement, alternative goal-aligned response) are demonstrated on one utterance of the peer counselor’s response (in blue). Optionally, one can also provide positive reinforcement by highlighting categories the peer counselor excelled at.

What can we do to provide detailed feedback to a large number of novice peer counselors at scale? In this work, we explore whether large language models (LLMs) can be used to provide contextualized feedback to empower peer counselors in training. Numerous recent studies have explored the feasibility of applying computational techniques to differentiate between low- and high-quality counseling automatically Pérez-Rosas et al. (2019); Imel et al. (2019); Sharma et al. (2020); Flemotomos et al. (2021); Min et al. (2022); Shen et al. (2022); Wu et al. (2023); Fang et al. (2023); Sharma et al. (2023); Hsu et al. (2023); Chiu et al. (2024). In doing so, prior work mostly provides numeric feedback to counselors about how well a particular skill is used. Some recent studies provide utterance-level suggestions of responses to use according to appropriate helping skills Hsu et al.
(2023), or alternatives for more empathetic responses Sharma et al. (2023). Yet, little attention is given to developing automatic feedback that closely mirrors how clinical supervisors provide feedback to novice counselors. To this end, we co-designed a feedback framework with senior psychotherapy supervisors to reflect the content and delivery of feedback they give to novice counselors. Concretely, we conducted a contextual inquiry (Karen and Sandra, 2017) with supervisors engaging in a representative task of providing feedback on a transcript of an emotional support conversation Liu et al. (2021) as if they were communicating the feedback to a novice counselor. We then developed a multi-level feedback framework by modeling the common patterns at different granularity observed in interviews and important feedback dimensions highlighted in textbooks and training for foundational active listening skills Hill (2009); 7Cups (2023). With this multi-level feedback framework presented in Figure 1, we introduce a publicly available dataset of conversations enriched with comprehensive feedback annotations, building upon an existing public emotional support conversations dataset ESConv Liu et al. (2021). Specifically, we leverage a model-in-the-loop annotation paradigm where GPT-4 and counseling domain experts work together to produce the annotations for 400 conversations. To enable transparent model development, especially for a high-stakes domain like counseling, we fine-tuned the open-source Llama-2 model to generate multi-level feedback. We further introduce a simple but effective self-improvement method to forecast how specific feedback might improve subsequent interaction and use this forecast information to supervise feedback generation. Unlike general natural language generation tasks, we aim at optimizing feedback generation for worst-case performance since failures (e.g., generating poor advice) matter more in this high-stakes scenario. Using both quantitative evaluation and qualitative evaluation with domain experts, we demonstrate that our approach generates high-quality feedback and significantly boosts the worst-case performance on multi-level feedback generation compared to baselines.

In summary, this paper makes the following contributions:

* We propose a novel and comprehensive multi-level feedback framework for training peer counseling skills co-designed with senior psychotherapy supervisors.
* We constructed and make publicly available _FeedbackESConv_ (we will release our code at https://github.com/SALT-NLP/counseling-feedback), a dataset of 400 emotional support conversations with multi-level feedback annotated by domain experts and GPT-4.
* We enhanced a fine-tuned LLM for multi-level feedback using a simple but effective self-improvement method to forecast how specific feedback might improve subsequent interaction and further use such signal to supervise the feedback generation.
* We conducted extensive evaluations with domain experts to demonstrate the effectiveness of our method and find that, compared to baselines, it significantly boosts the worst-case performance on multi-level feedback generation.

| | Numerical scoring of response quality | Suggestion of response or alternate response | Response evaluation across multiple peer counseling skills categories | Goal-oriented natural language explanations |
|---|---|---|---|---|
| Pérez-Rosas et al. (2019) | ✓ | ✗ | ✓ | ✗ |
| Tanana et al. (2019); Imel et al. (2019) | ✓ | ✗ | ✓ | ✗ |
| Sharma et al. (2020) | ✓ | ✗ | ✗ | ✗ |
| Flemotomos et al. (2021) | ✓ | ✗ | ✓ | ✗ |
| Min et al. (2022) | ✓ | ✗ | ✗ | ✗ |
| Shen et al. (2022) | ✗ | ✓ | ✗ | ✗ |
| Sharma et al. (2023) | ✗ | ✓ | ✗ | ✗ |
| Hsu et al. (2023) | ✗ | ✓ | ✓ | ✗ |
| Chiu et al. (2024)* | ✓ | (✓) | ✓ | ✗ |
| Our work | ✓ | ✓ | ✓ | ✓ |

Table 1: Categorization of previously proposed approaches aimed at evaluating or enhancing the quality of emotional support conversations. "Numerical scoring of response quality" indicates whether a study applied a binary or continuous scale for quality assessment. "Response evaluation across multiple peer counseling skills categories" indicates whether the feedback mechanism incorporated a multidimensional structure (more than two dimensions). "Suggestion of response" examines if the approach includes generating potential peer counselor answers. "Goal-oriented natural language explanation" indicates whether the system offers natural language conversation goals and explains how errors it identified can be aligned to these goals. *Chiu et al. (2024) is concurrent work focusing on evaluating the quality of LLM-based therapy simulations.

## 2 Related Work

### 2.1 Automated Feedback for Peer Counseling

There have been different approaches to build automated methods that help peer counselors evaluate and improve their skills, ranging from scoring-based methods (e.g., measures of empathy; the use of counseling-specific dialogue acts Sharma et al. (2020); Min et al. (2022); Pérez-Rosas et al. (2019); Tanana et al. (2019); Flemotomos et al. (2021); Chiu et al. (2024)) to automatically generated suggestions for alternative responses Shen et al. (2022); Sharma et al. (2023); Hsu et al. (2023). Rather than taking a technical perspective focusing on the feedback systems that _can_ be built with scoring or response generation methods, we posit that one can design better automated feedback methods for peer counseling training by understanding and mirroring the existing ways supervisors deliver feedback to novices. Thus, in this work, we take a collaborative design approach with senior psychotherapy supervisors who are experienced in giving tailored feedback to novice counselors. We translate their input into the peer counseling domain and use it to inform the construction of our multi-level feedback taxonomy. Our co-design reveals that post-session feedback for peer counseling _encompasses and extends beyond_ scoring and suggestions for improving the quality of individual responses. Most differently, it emphasizes that each response should be based on the counseling goals it should serve at the specific point in the session. Incorporating contextualized _goals_ into the feedback structure provides a purpose-led orientation compared to previous approaches (see Table 1). Natural language goal descriptions are especially valuable since providing explanations is more beneficial for learning than simply giving the correct answer (Butler et al., 2013).

### 2.2 Generation Capabilities of LLMs

Past work explored the capabilities of LLMs in generating natural-language feedback across various domains. Wang et al. (2023a) explore the use of LLMs like GPT-4 and GPT-3.5 in math tutoring to deliver high-quality feedback to remediate student mistakes. Liang et al. (2023) employ GPT-4 for generating comprehensive reviews for research papers. These varied applications demonstrate the adaptability and potential of LLMs to generate feedback across educational and professional settings.
Unlike past work that builds feedback systems directly on top of GPT-4, we seek to enable the transparent development of open-source feedback models for the domain of peer counseling. Thus, we first develop an annotated dataset of feedback which is co-annotated by domain experts and GPT-4 using our multi-level feedback taxonomy, and then fine-tune the open-source Llama2-13B model using this feedback dataset. The effectiveness of LLM feedback, and of LLM generated outputs more broadly, can be undermined by undesired and inconsistent behaviors, including hallucination, unfaithful reasoning, and toxic content. A promising approach to rectify these flaws is using self-correction or self-improvement techniques, in which a source of automated feedback, either produced by the LLM itself or some external system, can prompt or guide the LLM to fix problems in its output Pan et al. (2023). Self-correction methods can be categorized into training-time, generation-time, and post-hoc corrections. Our self-improvement method is most related to training-time self-corrections. For example, Huang et al. (2023) used self-consistency Wang et al. (2023b) and chain of thought (CoT) prompting to select best generations for further supervised fine-tuning (SFT) on reasoning tasks. Ye et al. (2023) fine-tuned LLama models with self-feedback and revision data generated by ChatGPT to enable the model to self-revise its outputs. Concurrent to our work, Yuan et al. (2024) uses iterative LLM-as-a-Judge Zheng et al. (2023) prompting to obtain self-rewards and perform direct preference optimization Rafailov et al. (2023) to perform model alignment to the preferences from this self-reward. In our work, undesirable and inconsistent LLM feedback generation may include poor goal identification or utterance-level rewrites that are inconsistent with the conversation goals. To mitigate this, we developed a training-time self-improvement method that relies on the fine-tuned LLM itself to provide automated scoring feedback on candidate outputs; this allows it to select preferred generations upon which the feedback model can be further preference- tuned. ## 3 Feedback Framework Given the crucial role of human supervision and tailored contextual feedback in the peer counselors training process Borders and Brown (2005); Bernard and Goodyear (1998); Gonsalvez and Milne (2010); Rønnestad and Skovholt (2013), we collaborated with senior psychotherapy supervisors (each with over 20 years of experience) to develop an automated feedback system that is aligned with best peer counseling practices. Together, we co-designed a multi-level feedback framework for peer counselor training. Four one-hour co-design sessions with these senior supervisors revealed that initial training of novice therapists emphasizes foundational active listening skills and that these are generic skills common to all therapy approaches, including peer counseling Watkins Jr and Milne (2014); Laska et al. (2014); Wampold (2015); Cuijpers et al. (2019). Details about the co-design process including research questions, key themes, and the outcomes are given in Appendix B. Via our co-design, we found that the structure of the supervisors’ feedback spans different levels: it often starts with positive reinforcement, followed by a line-by-line analysis of session transcripts; for any utterances needing improvement, supervisors clarified the session goals, identified categories of skills that could be improved, and voiced alternative responses that would achieve the goals. 
### 3.1 Multi-Level Feedback Taxonomy

Building upon our co-design sessions, we derive a multi-level feedback framework that reflects the components of senior psychotherapy supervisors’ feedback and trains foundational listening skills that are relevant to peer counseling; see Figure 1. This taxonomy has five key components:

1. Appropriateness indicates whether a peer counselor’s response in a given context is appropriate and aligned with foundational active listening best practices. No further feedback will be provided if the response is appropriate.
2. Goal and Alignment. Unlike casual conversations, peer counseling is goal-oriented, with each question or statement purpose-driven. This component defines what the counselor’s goal in this part of the conversation should be and how the response can be changed to improve the alignment to this goal.
3. Areas for Improvement. Re-iterating with domain experts and consulting mental health literature Hill (2009); 7Cups (2023), we identify eight widely-used categories of effective communication for the peer counseling context: Reflections, Questions, Suggestions, Validation, Self-disclosure, Empathy, Professionalism, Structure. Areas for Improvement highlights the set of categories that the counselor needs to further improve.
4. Alternative Goal-Aligned Response suggests an alternative response that aligns with the predefined goals and improves over the highlighted areas that need improvement, for a given context.
5. Positive Reinforcement (optional) highlights a set of concrete categories, as defined in Areas for Improvement, that the peer counselor excels at.

Our multi-level feedback taxonomy, co-designed with senior psychotherapy supervisors, is the first of its kind to resemble how supervisors deliver feedback to counselors post-session. Unlike previous methods that only did one or the other, it uniquely combines evaluating responses and suggesting alternatives. Furthermore, Goal and Alignment is a unique component of the taxonomy, which explains how to improve alignment to a session-level goal.

## 4 FeedbackESConv Dataset

In order to develop an automatic model that provides contextualized feedback at multiple levels, we use the feedback taxonomy to annotate peer counseling conversations. Given the sensitive nature of peer counseling data and the involved ethical implications, we chose a publicly available counseling dataset _ESConv_ Liu et al. (2021) as our starting point, which contains a large number of emotional support conversations. ESConv was collected on a crowd-sourcing platform, thus requiring quality control. We performed a manual review to filter out conversations that were either low quality or irrelevant to peer counseling (refer to Appendix C for the comprehensive filtering criteria). We divided the obtained dataset into three parts: a dataset with 400 conversations for further annotation by domain experts; a dataset of 150 conversations (Preferences QESconv) used for obtaining self-scored preference pairs as described in Section 5; and a test dataset of 67 conversations.

### 4.1 Domain Experts

To obtain high-quality annotation, we take a user-centered approach by working with domain experts who have mental health expertise and hands-on practice experience. We recruited domain experts from the Upwork platform by using a selective hiring process (see Appendix D for the hiring criteria).
Our final annotator group consisted of two experts – both with over 10 years of experience in professional mental health practice (one was a Certified Chemical Dependency Counselor and the other an Associate Professional Clinical Counselor). | FeedbackESConv --- Number of sessions | 400 | Number of utterances | 8179 | Number of appropriate utterances | 4721 | (57.7%) Number of inappropriate utterances | 3458 | (42.3%) Avg. length of alternative response | 28.3 | Avg. length of goal alignment | 36.6 | Categories | - | + Reflections | 616 | 831 Questions | 1431 | 1995 Suggestions | 1159 | 259 Validation | 901 | 1774 Self-disclosure | 558 | 614 Empathy | 1185 | 3313 Professionalism | 279 | 462 Structure | 333 | 1030 Table 2: FeedbackESConv: Statistics describing the number and average length of feedback annotations at different levels, as well the breakdown of highlighted categories for Areas of Improvement (-) and Positive Reinforcement (+). ### 4.2 Model-in-the-loop Co-annotation Recent work has shown that LLMs can offer a certain amount of facilitation for data annotation Li et al. (2023). Thus, to facilitate the annotation process, we leverage a _model-in-the-loop annotation paradigm_ , with GPT-4 and domain experts working together on the annotation task – the approach we later refer to as GPT-4+Expert. Before doing so, we rigorously compare the effectiveness of this co-annotation paradigm, where we set up a comparison of two approaches: generation of initial pre-annotations by GPT-4 and the subsequent refinement by experts, and annotations solely produced by experts. A full GPT-4 based annotation was technically possible, however, it was impossible to ensure feedback correctness and relevance without human supervision. Our results (see Appendix G) show that in 80.8% of cases, feedback created with GPT-4 pre-annotations is either preferred by experts (61.1%) or there is no strong preference either way (19.7%). This demonstrates the domain expert’s preference for the model-in-the-loop co-annotation paradigm. As a result, during the annotation process, we use GPT-4 for the initial feedback annotation and then ask our experts to re-work these annotations. We prompt (see Appendix LABEL:sec:appendix_prompt) GPT-4 with detailed definitions of each of the feedback components (defined in Section 3.1) and provide in-context examples containing feedback discussed with senior psychotherapy supervisors. We provided domain experts with a detailed annotation guide (see Appendix LABEL:sec:appendix_annotation) with definitions and examples of each feedback component as described in our multi-level feedback taxonomy, to get them familiar with the task. This co-annotation produces annotations of over 400 emotional support conversations. We provide the detailed dataset statistics with the breakdown of highlighted categories for Areas of Improvement (-) and Positive Reinforcement (+) in Table 2. ## 5 Model We leverage the resulting FeedbackESConv dataset to develop models that can generate contextualized feedback at different levels for peer counseling. To enable transparent model development, we build upon the open-source Llama-2 model and introduce a simple but effective self-improvement method to generate multi-level feedback, as described below. 
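To make the annotation format concrete, the sketch below shows one way a single annotated counselor utterance in FeedbackESConv could be represented in code. The field names, the skill list, and the toy conversation are our own illustrative assumptions based on the taxonomy in Section 3.1, not the released dataset schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# The eight skill categories listed in Section 3.1.
SKILL_CATEGORIES = [
    "Reflections", "Questions", "Suggestions", "Validation",
    "Self-disclosure", "Empathy", "Professionalism", "Structure",
]

@dataclass
class UtteranceFeedback:
    """One annotated counselor utterance (illustrative field names only)."""
    context: List[str]                      # preceding seeker/counselor turns
    utterance: str                          # the counselor response being assessed
    appropriate: bool                       # component 1: appropriateness
    goal_alignment: Optional[str] = None    # component 2: goal and how to align to it
    areas_for_improvement: List[str] = field(default_factory=list)   # component 3
    alternative_response: Optional[str] = None                       # component 4
    positive_reinforcement: List[str] = field(default_factory=list)  # component 5

# Toy example (invented content, not taken from ESConv):
example = UtteranceFeedback(
    context=["Seeker: I failed my exam and I feel like a complete failure."],
    utterance="You should just study harder next time.",
    appropriate=False,
    goal_alignment="Acknowledge and validate the seeker's feelings before moving to advice.",
    areas_for_improvement=["Empathy", "Suggestions"],
    alternative_response="That sounds really discouraging. Would you like to tell me more about how you are feeling?",
)
```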
### 5.1 Problem Definition Formally, we define the task of feedback generation based on our multi-level feedback framework as: (1) given the peer counselor’s utterance $U_{i}$ and a context of the peer counselor-seeker conversation $c_{i}$, decide if the peer counselor’s response is appropriate or needs further improvement by setting $y_{i}$ to $\mathrm{true}$ or $\mathrm{false}$, respectively. (2) If the response is classified as needing improvement, provide goal and alignment $goal_{i}$ (text), areas for improvement $ar-_{i}$ (list) and an alternative goal-aligned response $A_{i}$ (text). (3) Optionally, provide positive reinforcement or good areas $ar+_{i}$ (list) for this utterance as a form of positive reinforcement. We represent the feedback generation model as $\mathcal{M}$. ### 5.2 Self-improvement via Forecasting Figure 2: Illustration of the self-scoring mechanism – Phase 1 of the self- improvement method. The first step is to generate $n$ alternative answers for a given conversation utterance $U_{i}$. By substituting an alternative answer for the original utterance and passing it back to the model we obtain the probability of the alternative answer being marked as appropriate. These scores can be used to create preference pairs for further alignment. The specifics of our multi-level feedback framework allow us to suggest a self-improvement method for $\mathcal{M}$ that does not require any teacher model or additional costly expert data annotation. On a high level, we take advantage of the fact that both response quality assessment ($y$) and alternative answer ($A$) are part of our feedback taxonomy. By substituting an alternative answer for the original utterance, our method uses the feedback model once again to forecast how generated alternative answers will be assessed. This forecast operation estimates the quality of the originally generated feedback and can then be used to guide further alignment of the model. This self-improvement method has the potential to generalize to other scenarios since it applies to any model that jointly assigns binary $y_{i}$ ($\mathrm{false}$ or $\mathrm{true}$) label and suggests improvements for $y_{i}=\mathrm{false}$. Concretely, to enable the self-improvement method with forecasting, we create self-scored preference pairs of feedback generations. To achieve that, we first establish a self-scoring function (Phase 1) and then use sample generations to choose the ones with maximum and minimum scores to form a pair (Phase 2). The model is then aligned to those self-scored preferences (Phase 3). ##### Phase 1: Self-scoring The goal is to establish a self-scoring function $\sigma$. Our feedback framework is designed in such a way that an alternative answer $A_{i}$ is part of the output of the model $\mathcal{M}(U_{i})$. Hence, we can feed back the alternative answer $A_{i}$ to the original utterance $U_{i}$ and substitute it for the originally provided answer and obtain $U_{i}^{*}$ (Figure 2). This constitutes a self-assessment loop because we can evaluate the quality of $U_{i}^{*}$ by once again passing it to $\mathcal{M}$. The proposed score is the probability of obtaining feedback labeled as appropriate ($y_{i}=true$) for the refined utterance $U_{i}^{*}$. In summary, a feedback generation is assigned a high score if after following the advice (i.e. modifying the peer counselor’s response in the suggested way) the probability of $y_{i}=true$ is high for this altered context. 
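The following sketch illustrates this Phase 1 forecasting loop, assuming the `UtteranceFeedback` record from the earlier sketch and a hypothetical `feedback_model` wrapper that exposes sampling and the probability of the appropriate label; neither wrapper is part of the paper's released code.

```python
def self_score_candidates(feedback_model, context, original_utterance, n_samples=10):
    """Phase 1 (sketch): score sampled feedback by forecasting how the model
    itself would judge the alternative response it proposed.

    `feedback_model.sample_feedback` and `feedback_model.prob_appropriate` are
    assumed convenience wrappers around the fine-tuned LLM.
    """
    scored = []
    for _ in range(n_samples):
        # Draw one multi-level feedback candidate for the original utterance.
        candidate = feedback_model.sample_feedback(context, original_utterance)
        if candidate.appropriate or candidate.alternative_response is None:
            continue  # no rewrite was suggested, so there is nothing to forecast
        # Substitute the suggested alternative for the original utterance (U*) and
        # read off the probability that the model would now label it appropriate.
        score = feedback_model.prob_appropriate(context, candidate.alternative_response)
        scored.append((score, candidate))
    return scored
```

The returned (score, candidate) pairs are the raw material Phase 2 uses to pick the highest- and lowest-scoring generations.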
This self-scoring mechanism is a proxy of feedback quality, as we assume that good feedback will lead to good alternative answers. ##### Phase 2: Preference Pairs Building on the self-scoring mechanism from Phase 1, these self-scores are obtained for a set of samples of $\mathcal{M}$ for the same utterance $U_{i}$. Samples with the maximum and minimum scores are indexed with $\omega_{i}$ and $\alpha_{i}$, respectively. If the probability that the original utterance $U_{i}$ receives feedback labeled as appropriate is below 0.5 (indicating that further improvement is required), a preference pair is formed using samples $\omega_{i}$ and $\alpha_{i}$. As a robustness check to assess whether these preference pairs are aligned with human judgment, we asked domain experts to annotate 20 test conversations with minimum and maximum score samples. They preferred the utterance with the higher score $63.0\%$ of the time, had no preference $28.9\%$, and only preferred the utterance with the lower score 8.1% of the time. Method | $\mathcal{M_{\text{SFT}}}$ | $\mathcal{M_{\text{SFT}}}$ \+ new data | $\mathcal{M_{\text{SFT}}}$ \+ best scores | $\mathcal{M_{\text{self-improvement}}}$ ---|---|---|---|--- Mean Score Overall | 0.968 | 0.967 | 0.971 | 0.983* Mean Score Worst 1% | 0.28 | 0.28 | 0.38* | 0.56* Mean Score Worst 5% | 0.64 | 0.64 | 0.69* | 0.81* Figure 3: Baselines comparisons. Table presents means of automatically- computed quality scores (as defined in Section 5.2) for three baselines and the self-improvement method. The comparison is shown for three different groups: overall and for the worst 1% and 5% of the generations. * denotes statistically significant (p < 0.01) improvements over the $\mathcal{M_{\text{SFT}}}$ baseline based on t-test and Mann–Whitney U test. Plots present score distributions. ##### Phase 3: Alignment The last step is to further align the model with Direct Preference Optimization (DPO) Rafailov et al. (2023) to the preference pairs obtained from Phase 2. This technique contrasts high and low quality generations and encourages the model to produce generations similar to the ones marked as preferred. We align $\mathcal{M}$ on the Preferences QESconv dataset introduced in Section 4. The resulting model is $\mathcal{M}_{\text{Self- imp}}$. ### 5.3 Baselines $\mathcal{M_{\text{SFT}}}$ baseline222Training details can be found in Appendix H.. To evaluate the self-improvement via forecasting method, we compare it with a supervised fine-tuned Llama2 13B model baseline, denoted as $\mathcal{M_{\text{SFT}}}$. To understand whether the different phases in the self-improvement method are essential, we compare it with two additional baseline ablation conditions: $\mathcal{M_{\text{SFT}}}$ \+ new data. We apply the $\mathcal{M_{\text{SFT}}}$ model to obtain feedback generations for the additional data Preferences QESConv that $\mathcal{M_{\text{Self-imp}}}$ uses. We use those generations for further supervised fine-tuning. The goal here is to determine if self-scoring gives value beyond simply fine-tuning on additional generations on new data used by $\mathcal{M_{\text{Self-imp}}}$. $\mathcal{M_{\text{SFT}}}$ \+ best scores. We follow the self-scoring procedure, but instead of creating a single preference pair, we generate multiple scored samples and choose the one with the highest score for further fine-tuning the $\mathcal{M_{\text{SFT}}}$ model. The aim is to see whether alignment to preference pairs gives improvement compared to fine-tuning to the highest scored generation. 
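Below is a minimal sketch of Phases 2 and 3 under the same assumptions: the pair-building rule mirrors the 0.5 threshold described above, and the loss is the standard DPO objective from Rafailov et al. (2023) that the alignment step optimizes. The helper names are ours, and in practice one would typically rely on an existing DPO trainer implementation rather than hand-rolling the loss.

```python
import torch.nn.functional as F

def make_preference_pair(scored_candidates, prob_original_appropriate):
    """Phase 2 (sketch): form a (chosen, rejected) feedback pair from the
    highest- and lowest-scoring samples, but only when the original counselor
    utterance itself likely needs improvement (P(appropriate) < 0.5)."""
    if prob_original_appropriate >= 0.5 or len(scored_candidates) < 2:
        return None
    ordered = sorted(scored_candidates, key=lambda pair: pair[0])
    (_, rejected), (_, chosen) = ordered[0], ordered[-1]
    return {"chosen": chosen, "rejected": rejected}

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Phase 3 (sketch): standard DPO objective (Rafailov et al., 2023).
    Arguments are summed token log-probabilities of the chosen/rejected
    feedback under the policy being tuned and the frozen reference model."""
    chosen_margin = policy_chosen_logp - ref_chosen_logp
    rejected_margin = policy_rejected_logp - ref_rejected_logp
    return -F.logsigmoid(beta * (chosen_margin - rejected_margin)).mean()
```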
## 6 Evaluation and Results In Section 6.1, we compare the quality of feedback generated with $\mathcal{M}_{\text{Self-imp}}$ vs. those generated with baseline models via automatic scores and domain-expert ratings. After validating the improved feedback quality of $\mathcal{M}_{\text{Self-imp}}$ over baselines, in Section 6.2, we compare its feedback to the feedback co-annotated by GPT-4+Experts (approach described in Section 4.2) to understand if the $\mathcal{M}_{\text{Self-imp}}$ model matches in quality. ### 6.1 Comparing $\mathcal{M}_{\text{Self-imp}}$ with Baselines We use the automatically-computed quality scores (as defined in Section 5.2) as one way to evaluate the performance of our self-improvement method against baselines333To ensure a fair comparison, we perform scoring using the same base $\mathcal{M_{\text{SFT}}}$ model.. For each model, we generate 10 samples of feedback for each counselor utterance in 67 test conversations resulting in 8090 data points. Our results are reported in Figure 3. Over all feedback generations, the mean quality score is highest for $\mathcal{M}_{\text{Self- imp}}$, where the difference compared to $\mathcal{M}_{\text{SFT}}$ is statistically significant. In the context of peer counseling, unlike typical natural language generation tasks where average performance is key, our focus is on minimizing the chance of producing poor or unhelpful feedback, prioritizing the worst-case scenario. We illustrate this with an example of both low quality and high quality feedback in Figure 4. Figure 4: Example of feedback response of very poor quality. The model incorrectly provided feedback that the peer counselor response is generally good. Although the model properly outlined the intended goal of peer’s counselor reply, the proposed alternative fails to align with this goal and repeats the same errors. A representation of what constitutes high-quality feedback generation for this specific instance is provided for clarity. As shown in the table in Figure 3, in the worst 5% and 1% of generated feedback, the quality scores for the $\mathcal{M}_{\text{Self-imp}}$ model are significantly higher than the baselines. For the bottom 1% of samples, the mean score increases from 0.28 for $\mathcal{M}_{\text{SFT}}$ to 0.56 for $\mathcal{M}_{\text{Self-imp}}$, indicating a reasonable shift from inappropriate to appropriate feedback. Figure 5: Expert quality assessments for the worst 1% of generations. The statistically significant shift of scores to the right ($p$ < 0.01) shows the self-improvement method was judged to be of higher quality than the $\mathcal{M}_{\text{SFT}}$ baseline, with mean score improving from 2.61 (Below Acceptable) to 3.16 (Above Acceptable). Automatically-computed quality scores enable observations of improvements on the aggregate distribution level. To affirm that our proposed method $\mathcal{M}_{\text{Self-imp}}$ enhances the quality of feedback in the worst- case scenario, we defer to the gold standard of evaluation: the judgment of domain experts. We conducted the following experiment. We asked domain experts to rate the feedback quality of the bottom 1% of generations using a 5-point Likert scale for $\mathcal{M}_{\text{SFT}}$ and $\mathcal{M}_{\text{Self-imp}}$. As shown in the bottom of Figure 5, generations rated as Very Poor were almost all eliminated by the use of the $\mathcal{M}_{\text{Self-imp}}$ method, to less than 1% of the ratings. 
Moreover, we see consistent growth of the proportion of generations marked as Acceptable, Good or Very Good. One author further conducted a qualitative investigation of the worst 1% of feedback. We observe that feedback from $\mathcal{M}_{\text{SFT}}$ can often suggest alternative answers with slight rephrasing that do not resolve the core issue, whereas $\mathcal{M}_{\text{Self-imp}}$ exhibits fewer of these errors. Together, the results from these two experiments suggest that for the worst generations, $\mathcal{M}_{\text{Self-imp}}$ improves feedback quality as measured both by automatically-computed quality scores and domain expert ratings. ### 6.2 Comparing $\mathcal{M}_{\text{Self-imp}}$ with GPT-4+Expert Feedback Aspect | $\mathcal{M_{\text{self-imp}}}$ | GPT-4 + Expert ---|---|--- Selection for Feedback | 4.20 | 4.18 Strengths Identification | 3.68 | 3.95 Improvement Areas Selection | 4.28 | 4.3 Goal Description Quality | 4.3 | 4.43 Rationale for Alternatives | 4.33 | 4.45 Quality of Alternatives | 4.03 | 4.38∗ Feedback Style | 4.45 | 4.55 Feedback Helpfulness | 4.15 | 4.48∗ Overall | 4.10 | 4.35∗ Table 3: Experts’ conversation level evaluation of eight aspects of feedback quality for $\mathcal{M_{\text{Self-imp}}}$ and the reference GPT-4+Expert annotations. Results based on a test sample of 20 conversations. * denotes statistically significant difference under t-test ($p<0.05$). We further assessed feedback quality at the conversation level and compared feedback generated by $\mathcal{M}_{\text{Self-imp}}$, against the GPT-4+Expert annotations. Domain-experts evaluated the quality of feedback along eight aspects that cover the components of the multi-level feedback taxonomy. Results (Table 3) indicate that the $\mathcal{M}_{\text{Self-imp}}$ model’s feedback quality approaches the reference standard of GPT-4+Expert annotations across 6 out of 8 feedback aspects, with a median overall quality rating of _4 - Good_. We note significant differences in the _Quality of Alternatives_ and overall _Feedback Helpfulness_. Nevertheless, we find that experts agree (4 or 5 on the Likert-scale) in 90% of conversations that the feedback generated by $\mathcal{M}_{\text{Self-imp}}$ would be helpful in the training process of novice peer counselors (100 % of GPT-4 + Experts annotations are considered helpful). These results validate how $\mathcal{M}_{\text{Self-imp}}$, a model based on Llama-13B trained using our self-improvement method, can match the GPT-4+expert reference annotations across many aspects, while highlighting aspects of the multi-level feedback taxonomy that future modeling work can improve. Example feedback generations are in Appendix LABEL:sec:appendix_generations. ## 7 Conclusions We introduced a multi-level feedback framework for training counseling skills by co-designing with senior psychotherapy supervisors, constructed a public dataset of counseling conversation with feedback annotations, and proposed a simple but effective self-improvement method for feedback generation. We demonstrate through qualitative and quantitative evaluation that our method minimizes the risk of low-quality feedback generation and generates feedback that domain experts find useful. ## Limitations In this work, we first co-designed with senior psychotherapy supervisors a feedback framework and then developed a LLM model that can automatically generate advice for novice peer counselors. 
Although the framework covers multiple aspects of active listening, it is not enumerative and might not cover all possible feedback dimensions relevant to the complex peer counseling context. While we consider the way in which the feedback is delivered (and specifically evaluate the feedback style – whether it was delivered "in a friendly but professional way"), we do not tailor our feedback to a specific trainee in a personalized way. In professional training of therapists, supervisors alter their feedback style to optimize feedback delivery: “But in addition I have a take on who is this person I’m supervising. And what are they like as a person? And do they listen to me or not? And how can I say it differently so they can hear it?” Our feedback dataset, which we used for training of our model, was built on a public dataset of emotional support conversations. This allows us to make our data publicly available. However it was built upon conversations between crowd workers who have only received very abbreviated training. While the training covers a broad range of counseling skills, it is unclear whether these crowd- sourced conversations might generalize to conversations among peer counselors and seekers or other similar counseling contexts. Although we involved human experts (senior psychotherapy supervisors and domain experts with counseling expertise) at every stage of the development process and system evaluation, we acknowledge that the opinions and judgments from this small group of domain experts might not represent a broader population of psychotherapy supervisors or mental health practitioners, as well as the ways in which they coach novice peer counselors. ## Ethics Statement This study has been approved by the Institutional Review Boards (IRB) at the authors’ institutions. All the researchers involved in this study have completed CITI Program certifications on responsible code of conduct in research. We have compensated domain experts fairly for their time, going beyond minimum wage in the United States. The purpose of this paper is to develop a model that generates feedback for novice peer counselors with limited or no access to human supervision. The system should not be regarded as a substitute for expert feedback. Importantly, while our self-improvement method aims to limit the risk of poor feedback generations (e.g., giving inappropriate advice), this risk is not fully eliminated. It is therefore important to treat model-generated advice only as potential guidance and discard it if necessary, based on trainee judgment. For potential uses of this feedback generation system, we will design consent form to disclose potential risks of our system, and will also advocate for practitioners to centrally host and log the content generated by our system so that it can be audited to determine whether there are any problematic behaviors in the system use. ## References * 7Cups (2023) 7Cups. 2023. 7cups verfiers team mock chat guide: Discussing points that need improvement. * Ali et al. (2015) Kathina Ali, Louise Farrer, Amelia Gulliver, Kathleen M Griffiths, et al. 2015. Online peer-to-peer support for young people with mental health problems: a systematic review. _JMIR mental health_ , 2(2):e4418. * Arnold (2014) Kyle Arnold. 2014. Behind the mirror: Reflective listening and its tain in the work of carl rogers. _The Humanistic Psychologist_ , 42(4):354–369. * Atkins et al. (2014) David C Atkins, Mark Steyvers, Zac E Imel, and Padhraic Smyth. 2014. 
Scaling up the evaluation of psychotherapy: evaluating motivational interviewing fidelity via statistical text classification. _Implementation Science_ , 9(1):1–11. * Beck (2020) Judith S Beck. 2020. _Cognitive behavior therapy: Basics and beyond_. Guilford Publications. * Bernard and Goodyear (1998) Janine M Bernard and Rodney K Goodyear. 1998. _Fundamentals of clinical supervision_. Allyn & Bacon. * Borders and Brown (2005) L DiAnne Borders and Lori L Brown. 2005. The new handbook of counseling supervision. * Bugental et al. (2001) James FT Bugental, J Fraser Pierson, and Kirk J Schneider. 2001. _The handbook of humanistic psychology: Leading edges in theory, research, and practice_. Sage Publications. * Butler et al. (2013) Andrew C Butler, Namrata Godbole, and Elizabeth J Marsh. 2013. Explanation feedback is better than correct answer feedback for promoting transfer of learning. _Journal of Educational Psychology_ , 105(2):290. * Chen and Yang (2020) Jiaao Chen and Diyi Yang. 2020. Multi-view sequence-to-sequence models with conversational structure for abstractive dialogue summarization. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 4106–4118. * Chiu et al. (2024) Yu Ying Chiu, Ashish Sharma, Inna Wanyin Lin, and Tim Althoff. 2024. A computational framework for behavioral assessment of llm therapists. _arXiv preprint arXiv:2401.00820_. * Choi (2000) Freddy YY Choi. 2000. Advances in domain independent linear text segmentation. In _Proceedings of the 1st North American chapter of the Association for Computational Linguistics conference_ , pages 26–33. * Cooper et al. (2020) David Cooper, Keong Yap, Maureen O’Brien, and India Scott. 2020. Mindfulness and empathy among counseling and psychotherapy professionals: A systematic review and meta-analysis. _Mindfulness_ , 11:2243–2257. * Cuijpers et al. (2019) Pim Cuijpers, Mirjam Reijnders, and Marcus JH Huibers. 2019. The role of common factors in psychotherapy outcomes. _Annual review of clinical psychology_ , 15(1):207–231. * Day and Sparacio (1980) Robert W Day and Richard T Sparacio. 1980. Structuring the counseling process. _Personnel & Guidance Journal_, 59(4). * Fang et al. (2023) Anna Fang, Wenjie Yang, Raj Sanjay Shah, Yash Mathur, Diyi Yang, Haiyi Zhu, and Robert Kraut. 2023. What makes digital support effective? how therapeutic skills affect clinical well-being. _arXiv preprint arXiv:2312.10775_. * Flemotomos et al. (2021) Nikolaos Flemotomos, Victor R Martinez, Zhuohao Chen, Torrey A Creed, David C Atkins, and Shrikanth Narayanan. 2021. Automated quality assessment of cognitive behavioral therapy sessions through highly contextualized language representations. _PloS one_ , 16(10):e0258639. * Gilardi et al. (2023) Fabrizio Gilardi, Meysam Alizadeh, and Maël Kubli. 2023. Chatgpt outperforms crowd-workers for text-annotation tasks. _arXiv preprint arXiv:2303.15056_. * Gonsalvez and Milne (2010) Craig J Gonsalvez and Derek L Milne. 2010. Clinical supervisor training in australia: A review of current problems and possible solutions. _Australian Psychologist_ , 45(4):233–242. * Henretty and Levitt (2010) Jennifer R Henretty and Heidi M Levitt. 2010. The role of therapist self-disclosure in psychotherapy: A qualitative review. _Clinical psychology review_ , 30(1):63–77. * Hill (2009) Clara E Hill. 2009. _Helping skills: Facilitating, exploration, insight, and action_. American Psychological Association. * Hsu et al. 
(2023) Shang-Ling Hsu, Raj Sanjay Shah, Prathik Senthil, Zahra Ashktorab, Casey Dugan, Werner Geyer, and Diyi Yang. 2023. Helping the helper: Supporting peer counselors via ai-empowered practice and feedback. _arXiv preprint arXiv:2305.08982_. * Huang et al. (2023) Jiaxin Huang, Shixiang Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, and Jiawei Han. 2023. Large language models can self-improve. In _Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing_ , pages 1051–1068, Singapore. Association for Computational Linguistics. * Imel et al. (2019) Zac E Imel, Brian T Pace, Christina S Soma, Michael Tanana, Tad Hirsch, James Gibson, Panayiotis Georgiou, Shrikanth Narayanan, and David C Atkins. 2019. Design feasibility of an automated, machine-learning based feedback system for motivational interviewing. _Psychotherapy_ , 56(2):318. * James et al. (2010) Ian Andrew James, Rachel Morse, and Alan Howarth. 2010. The science and art of asking questions in cognitive therapy. _Behavioural and Cognitive Psychotherapy_ , 38(1):83–93. * Karen and Sandra (2017) Holtzblatt Karen and Jones Sandra. 2017. Contextual inquiry: A participatory technique for system design. In _Participatory design_ , pages 177–210. CRC Press. * Kuzman et al. (2023) Taja Kuzman, Nikola Ljubešić, and Igor Mozetič. 2023. Chatgpt: beginning of an end of manual annotation? use case of automatic genre identification. _arXiv preprint arXiv:2303.03953_. * Laska et al. (2014) Kevin M Laska, Alan S Gurman, and Bruce E Wampold. 2014. Expanding the lens of evidence-based practice in psychotherapy: a common factors perspective. _Psychotherapy_ , 51(4):467. * Li et al. (2023) Minzhi Li, Taiwei Shi, Caleb Ziems, Min-Yen Kan, Nancy Chen, Zhengyuan Liu, and Diyi Yang. 2023. CoAnnotating: Uncertainty-guided work allocation between human and large language models for data annotation. In _Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing_ , pages 1487–1505, Singapore. Association for Computational Linguistics. * Liang et al. (2023) Weixin Liang, Yuhui Zhang, Hancheng Cao, Binglu Wang, Daisy Ding, Xinyu Yang, Kailas Vodrahalli, Siyu He, Daniel Smith, Yian Yin, et al. 2023. Can large language models provide useful feedback on research papers? a large-scale empirical analysis. _arXiv preprint arXiv:2310.01783_. * Linehan (1997) Marsha M Linehan. 1997. Validation and psychotherapy. * Liu et al. (2021) Siyang Liu, Chujie Zheng, Orianna Demasi, Sahand Sabour, Yu Li, Zhou Yu, Yong Jiang, and Minlie Huang. 2021. Towards emotional support dialog systems. In _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)_ , pages 3469–3483, Online. Association for Computational Linguistics. * Min et al. (2022) Do June Min, Verónica Pérez-Rosas, Kenneth Resnicow, and Rada Mihalcea. 2022. PAIR: Prompt-aware margIn ranking for counselor reflection scoring in motivational interviewing. In _Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing_ , pages 148–158, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. * Moyers et al. (2014) TB Moyers, JK Manuel, D Ernst, T Moyers, J Manuel, D Ernst, and C Fortini. 2014. Motivational interviewing treatment integrity coding manual 4.1 (miti 4.1). _Unpublished manual_. * Moyers et al. (2016) Theresa B Moyers, Lauren N Rowell, Jennifer K Manuel, Denise Ernst, and Jon M Houck. 
2016. The motivational interviewing treatment integrity code (miti 4): rationale, preliminary reliability and validity. _Journal of substance abuse treatment_ , 65:36–42. * OpenAI (2023) OpenAI. 2023. Gpt-4 technical report. * Pan et al. (2023) Liangming Pan, Michael Saxon, Wenda Xu, Deepak Nathani, Xinyi Wang, and William Yang Wang. 2023. Automatically correcting large language models: Surveying the landscape of diverse self-correction strategies. _arXiv preprint arXiv:2308.03188_. * Pérez-Rosas et al. (2019) Verónica Pérez-Rosas, Xinyi Wu, Kenneth Resnicow, and Rada Mihalcea. 2019. What makes a good counselor? learning to distinguish between high-quality and low-quality counseling conversations. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 926–935, Florence, Italy. Association for Computational Linguistics. * Rafailov et al. (2023) Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D Manning, and Chelsea Finn. 2023. Direct preference optimization: Your language model is secretly a reward model. _arXiv preprint arXiv:2305.18290_. * Rautalinko et al. (2007) Erik Rautalinko, Hans-Olof Lisper, and Bo Ekehammar. 2007. Reflective listening in counseling: effects of training time and evaluator social skills. _American journal of psychotherapy_ , 61(2):191–209. * Rønnestad and Skovholt (2013) Michael Helge Rønnestad and Thomas M Skovholt. 2013. _The developing practitioner: Growth and stagnation of therapists and counselors_. Routledge. * Shah et al. (2022) Raj Sanjay Shah, Faye Holt, Shirley Anugrah Hayati, Aastha Agarwal, Yi-Chia Wang, Robert E Kraut, and Diyi Yang. 2022. Modeling motivational interviewing strategies on an online peer-to-peer counseling platform. _Proceedings of the ACM on Human-Computer Interaction_ , 6(CSCW2):1–24. * Sharma et al. (2023) Ashish Sharma, Inna W Lin, Adam S Miner, David C Atkins, and Tim Althoff. 2023. Human–ai collaboration enables more empathic conversations in text-based peer-to-peer mental health support. _Nature Machine Intelligence_ , 5(1):46–57. * Sharma et al. (2020) Ashish Sharma, Adam Miner, David Atkins, and Tim Althoff. 2020. A computational approach to understanding empathy expressed in text-based mental health support. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 5263–5276, Online. Association for Computational Linguistics. * Shen et al. (2022) Siqi Shen, Veronica Perez-Rosas, Charles Welch, Soujanya Poria, and Rada Mihalcea. 2022. Knowledge enhanced reflection generation for counseling dialogues. In _Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 3096–3107, Dublin, Ireland. Association for Computational Linguistics. * Sripada et al. (2005) Somayajulu Sripada, Ehud Reiter, and Lezan Hawizy. 2005. Evaluation of an nlg system using post-edit data: Lessons learnt. In _Proceedings of the Tenth European Workshop on Natural Language Generation (ENLG-05)_. * Tanana et al. (2019) Michael J Tanana, Christina S Soma, Vivek Srikumar, David C Atkins, and Zac E Imel. 2019. Development and evaluation of clientbot: Patient-like conversational agent to train basic counseling skills. _Journal of medical Internet research_ , 21(7):e12529. * Taori et al. (2023) Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023. Stanford alpaca: An instruction-following llama model. 
https://github.com/tatsu-lab/stanford_alpaca. * Terry et al. (2017) Gareth Terry, Nikki Hayfield, Victoria Clarke, and Virginia Braun. 2017. Thematic analysis. _The SAGE handbook of qualitative research in psychology_ , 2:17–37. * Touvron et al. (2023) Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. _arXiv preprint arXiv:2307.09288_. * Wampold (2015) Bruce E Wampold. 2015. How important are the common factors in psychotherapy? an update. _World Psychiatry_ , 14(3):270–277. * Wang et al. (2023a) Rose E Wang, Qingyang Zhang, Carly Robinson, Susanna Loeb, and Dorottya Demszky. 2023a. Step-by-step remediation of students’ mathematical mistakes. _arXiv preprint arXiv:2310.10648_. * Wang et al. (2023b) Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V Le, Ed H Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2023b. Self-consistency improves chain of thought reasoning in language models. In _International Conference on Learning Representations_. * Watkins Jr and Milne (2014) C Edward Watkins Jr and Derek L Milne. 2014. _The Wiley international handbook of clinical supervision_. John Wiley & Sons. * Wei et al. (2022) Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. _Advances in Neural Information Processing Systems_ , 35:24824–24837. * Wilcoxon (1992) Frank Wilcoxon. 1992. Individual comparisons by ranking methods. In _Breakthroughs in Statistics: Methodology and Distribution_ , pages 196–202. Springer. * Wolf et al. (2020) Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations_ , pages 38–45, Online. Association for Computational Linguistics. * Wu et al. (2023) Zixiu Wu, Simone Balloccu, Vivek Kumar, Rim Helaoui, Diego Reforgiato Recupero, and Daniele Riboni. 2023. Creation, analysis and evaluation of annomi, a dataset of expert-annotated counselling dialogues. _Future Internet_ , 15(3). * Ye et al. (2023) Seonghyeon Ye, Yongrae Jo, Doyoung Kim, Sungdong Kim, Hyeonbin Hwang, and Minjoon Seo. 2023. Selfee: Iterative self-revising llm empowered by self-feedback generation. Blog post. * Yuan et al. (2024) Weizhe Yuan, Richard Yuanzhe Pang, Kyunghyun Cho, Sainbayar Sukhbaatar, Jing Xu, and Jason Weston. 2024. Self-rewarding language models. _arXiv preprint arXiv:2401.10020_. * Zheng et al. (2023) Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. 2023. Judging LLM-as-a-judge with MT-bench and chatbot arena. In _Thirty-seventh Conference on Neural Information Processing Systems Datasets and Benchmarks Track_. ## Appendix A Evaluation areas Table 4 presents specific example mistakes which peer counselors can make. These are grouped into 8 categories with definitions aligned with mental health literature. 
Reflections | This skill involves repeating or rephrasing clients’ statements to identify and acknowledge their feelings. This technique helps clarify the client’s emotions and encourages them to explore these feelings further. ---|--- References: | Bugental et al. (2001); Rautalinko et al. (2007); Arnold (2014); Hill (2009); Moyers et al. (2014, 2016); Pérez-Rosas et al. (2019); Beck (2020); Shah et al. (2022) Example mistakes: | Not reflecting, drawing conclusions from the helper’s experience without listening to what the seeker is saying and checking it out with them; Making assumptions beyond what was said; Copying the seeker’s words exactly; Stating feelings too definitely rather than tentatively (e.g., "you obviously feel X" vs. "I wonder if you feel X"); Becoming repetitive, not varying the format of restatements (e.g., I’m hearing you feel sad, I’m hearing you have some thoughts about X, I’m hearing you …); Labeling feelings inaccurately; Not capturing the most salient feeling; Reflecting on many feelings at the same time; Being judgmental; Focusing on the feelings of others and not the seeker; Reflecting when the seeker is resistant to expressing feelings and reflection might add more pressure. Questions | Questions in peer counseling can be formulated either as inquiries (e.g., "How do you feel about that?") or as prompts (e.g., "Tell me more about your feelings on that"), provided to aid the client in understanding or examining their emotions. References: | Bugental et al. (2001); Hill (2009); James et al. (2010); Moyers et al. (2014, 2016); Beck (2020); Shah et al. (2022) Example mistakes: | Making questions too focused in situations in which they should be more open-ended; Trying to cover everything instead of focusing on one aspect; Asking questions without a clear intention/goal; Not encouraging expression of feelings; Not exploring the details of the situation the seeker is coming with; Not asking the seeker to check the facts ("tell me what data you have that supports that", "do you have any evidence that you’d be X if you did Y?"); Asking questions without empathy; Asking lengthy or multiple questions at once; Turning the attention to other people instead of the seeker (i.e., asking what person X did, instead of asking how the seeker felt about X’s behavior); Asking too many closed-questions interviewing instead of exploring. Suggestions | This technique involves offering specific directives or advice that clients can apply outside the counseling sessions. References: | Bugental et al. (2001); Hill (2009); Moyers et al. (2014, 2016); Beck (2020); Shah et al. (2022) Example mistakes: | Giving too much or premature advice, answers, or solutions; Telling people what to do, giving direct advice "you should"; Imposing beliefs or personal values on seekers; Trying to debate with the seeker and convince them of the helper’s point of view. Validation | Validation goes beyond simply acknowledging a client’s feelings. It actively affirms their experiences and perspectives as understandable and worthy of respect, even if the counselor may not personally share their viewpoints. References: | Linehan (1997); Bugental et al. (2001); Hill (2009); Moyers et al. (2014, 2016); Beck (2020) Example mistakes: | Not letting the seeker know that their feelings are normal; Validating invalid (e.g., validating opinions or seeker’s biases); Helper not being there, paying attention to what the seeker brings to the conversation. 
Self-disclosure | Sharing of personal experiences can create a sense of empathy and connection, reducing the client’s feeling of isolation. This approach is balanced to avoid overshadowing the client’s emotions or introducing irrelevant personal details. References: | Henretty and Levitt (2010); Bugental et al. (2001); Hill (2009); Moyers et al. (2014, 2016); Beck (2020); Shah et al. (2022) Example mistakes: | Not turning the focus back to the seeker immediately; Making self-disclosure too long or too complex; Disclosing too much information; Talking too much and not letting the seeker talk more. Empathy | This skill involves understanding the client’s emotions and sharing in their experience, offering a sense of being truly seen and heard. This deeper connection allows counselors to guide clients toward self-discovery and provide targeted support. References: | Bugental et al. (2001); Hill (2009); Beck (2020); Cooper et al. (2020); Sharma et al. (2020) Example mistakes: | [Empathetic Emotional Reactions] Not expressing warmth, compassion, concern, or similar feelings towards the seeker in situations in which it would be appropriate; [Empathetic Interpretations] Not communicating an understanding of the seeker’s experiences and feelings in situations in which it would be appropriate; [Empathetic Explorations] Not making an attempt to explore the seeker’s experiences and feelings in situations in which it would be appropriate; Expressing empathy but without maintaining a professional attitude; Expressing sympathy instead of empathy. Professionalism | Professionalism refers to setting clear boundaries and using appropriate language and communication style. References: | Bugental et al. (2001); Hill (2009) Example mistakes: | Overusing slang; Being overly professional and formal, which results in robotic-style conversations; Using vocabulary that expresses too much closeness. Structure | This skill assists the counselor and client in guiding the conversation effectively, ensuring productive use of time, and covering essential topics. A basic structure, while flexible to individual needs, provides both parties with a sense of security and direction. References: | Day and Sparacio (1980); Bugental et al. (2001); Hill (2009); Moyers et al. (2014, 2016); Beck (2020) Example mistakes: | [beginning] Not establishing a collaborative agenda and a friendly emotional rapport; [middle] Having too many topics on the table at the same time, not focusing on the main problem ("keep it simple"); [end] Not summarizing what the person is going to take away from the conversation; [end] Lack of clear, actionable items or insights for the seeker after the conversation. Table 4: Examples of evaluation areas in peer counseling communication grouped into 8 categories: Reflections, Questions, Suggestions, Validation, Self- disclosure, Empathy, Professionalism, Structure. ## Appendix B Interviews with senior experts To understand the nature of feedback in professional training, we conducted multiple interviews with three senior psychotherapists with over 20 years of direct supervision experience of novice therapists. We first understood the common practices of feedback-giving sessions and then engaged with supervisors on a representative task of providing feedback on a transcript of an emotional support conversation to simulate the process of communicating feedback to a psychotherapist student. 
The interviews focus on the following questions, insights from which guided the framework design process: * – R1: What are important skills for novice counselors? * – R2: How are these skills learned and what is the role of feedback in the learning process? * – R3: What is the structure of this feedback? We transcribed all audio recordings of the interviews. Then, using a thematic coding Terry et al. (2017) approach, we analyzed the interview transcripts to identify key themes and patterns across the data. We then studied how those inform our research questions. #### R1: What are important skills for novice counselors? Beginner psychotherapy skills involve increasing the depth of self-description of the support seeker’s problems. Experienced psychotherapists can perceive nuances and undertones in conversations that beginners might miss. > "I think an experienced psychotherapist can hear some, can hear some things > or pick up on some things that a novice therapist maybe won’t that are > between the lines." (Supervisor 1) The main objective for beginners is not necessarily about adhering to a particular model but mastering basic foundational skills. > "I’m thinking that with the beginning novice therapist it’s less the model > than sort of basic foundational skills that we, I think, we’re trying to > teach" Our experts often referred to “Helping Skills: Facilitating Exploration, Insight, and Action” textbook by Carla Hill, who has devised a system categorizing these essential helping skills. The initial training phase focuses on foundational listening skills, which are also crucial for peer counselors to master. #### R2: How are these skills learned and what is the role of feedback in the learning process? Early-stage students undergo training in foundational counseling skills like listening, empathy, and asking open-ended questions. > "Students in the beginning, they, they take certain classes on what I might > call basic foundational counseling skills, how to listen, how to be > empathetic, how to, you know, ask open-ended questions. There’s a list. You > know, there’s a list of skills" (Supervisor 1) After their first year, novice therapists undergo a practicum experience where their sessions are taped and reviewed for feedback on foundational skills. > "At the end of their first year, they go for their first clinical > experience. We call it practicum experience. And their sessions are taped, > and their supervisor goes over those tapes with them and gives them feedback > on their, you know, on how they’re doing on those basic skills." (Supervisor > 1) It’s beneficial for novices to bring session transcripts, as these provide clear evidence of their actions and their consequences. These tapes and transcripts allow both the supervisor and the novice to study the impact of the therapist’s actions on the patient. > "But also I like them to bring a transcript. Because then I can go show > them. See what you did here led to this, which led to this, and this is what > you should do instead." (Supervisor 2) #### R3: What is the structure of this feedback? When providing feedback to the novice therapist, the experts emphasized the importance of positive reinforcement by starting with what the counselor did well. > "I generally start out with. What they’re doing well" (Supervisor 2) > "Well, I’d say this is pretty good overall, so I’d give positive feedback > first." (Supervisor 3) They would then gently introduce areas for improvement. 
The two crucial skills are making proper reflections and asking good open-ended questions. However, many other areas were mentioned by the experts as they analyzed the provided conversation transcripts. > "paraphrasing is a main thing. It’s just a couple of, I think that’s an > important piece. You ask the person a question or they start and then you > just kind of repeat what they say […] asking open questions is another > really good one thing that people learn to do" (Supervisor 3) Using transcripts like the discussed one can be an effective teaching tool, prompting the therapist to think of alternative responses. > "I would teach it by using a transcript like this. And then I’d say [… ] > what other kinds of things can you think of that if I said them to you, > you’d be more likely to really sink into what it is you’re trying to come > and talk about?" (Supervisor 3) Counseling should be goal-focused, each question or statement should have a goal. > "[…] what were your goals right? What were your goals in making these > questions or suggestions or statements? And I would have have them try and > think about it." (Supervisor 3) When going back to the transcript, the expert analyzed it line by line, stopping at each of the helper’s responses and giving feedback on it. > "Counselor says “she gave you a lot of meeting and filled your time fondly”. > OK, So she’s interpreting his statement rather than pulling out more of his > statement." (Supervisor 2) Crucially, the experts point out that the delivery of feedback should be in a manner that ensures the counselor doesn’t feel criticized. > "How do they deliver it so that the therapist can hear it? And how does the > therapist work with the patient? There are two communications going on > there" (Supervisor 2) Senior supervisors were compensated $150/hour. ## Appendix C ESConv filtering Figure 6: QESConv distribution of the number of utterances in conversations. Figure 7: QESConv distribution of the number of words in helper’s utterances. Figure 8: QESConv distribution of the number of words in seeker’s utterances. We manually analyze the conversations in ESConv Liu et al. (2021) (CC BY-NC 4.0 license) and filter the ones that meet the following criteria: * – Conversation not on topic * – Conversation referring in big part to MTurk * – Conversation not serious: making jokes, etc. * – Ungrammatical * – Chatting mostly about the current situation COVID, not a specific problem (i.e., exchanging news, vaccination discussions, etc.) * – Mostly meta-conversation (“sorry, are you there, I have not seen your message”) * – Generic topic chat: hobbies, having a dog, looking for job advice In this way we select 400 conversations for the QESconv dataset. We further remove many conversation-finishing artifacts by searching for keywords “survey”, “quit”, “we need to chat”,“button” and manually removing those from utterances. For example: “can you press quit first, I can’t do it from my end” , “I think we need to chat a bit more in order to wrap things up”, “please remember to take the survey :)”, “Is there a quit/finish button on your end?”. The final dataset has in total 11.3K utterances (distribution shown in Figure 6, with average utterance length equal 21.4 words (distribution for helper in Figure 7 and seeker in Figure 8). ## Appendix D Domain experts hiring process Based on the submitted applications and conducted interviews, we choose a group of six experts. 
We then conduct a pilot study in which we ask the experts to annotate a single conversation based on our annotation guide describing the feedback framework (Section 3) and our annotation interface (Appendix LABEL:sec:appendix_annotation). Based on adherence to the guide and projected time availability, we establish a group of three self-validated (at least 4/5 on the Likert scale – for details see Appendix E) experts – all with over 10 years of practical experience in professional mental health (for example as a Certified Chemical Dependency Counselor, Licensed Marriage and Family Therapist, or Associate Professional Clinical Counselor). Upon further quality tests for the final data annotation scheme, we narrow down the group to two experts who consistently validate the quality of each other’s annotations on the final annotation task (see Appendix G.1). Our annotators are US-based. Domain experts were compensated $30/hour. We informed them of the purpose of the study and the potential risks. ## Appendix E Pilot quality validation We observe variability in feedback among experts, but we confirm with senior supervisors that this is to be expected since each practitioner may focus on different counseling components. Since there is no gold-standard feedback (even identifying areas for improvement cannot simply be framed as a multi-label classification problem, since different areas can be highlighted and there is no single correct set), evaluating the annotation quality is challenging and requires human expertise. We therefore perform a pilot self-validation study in which each expert judged the quality of the other experts’ annotations on a 5-point Likert scale. In an experiment involving three experts (the third expert was later excluded at the co-annotation stage), each was tasked with evaluating the annotations made by the others for a single conversation. The assessment was based on a five-point Likert scale: 1. Completely Irrelevant: The feedback is unrelated to the task. 2. Slightly Relevant: The feedback has minimal relevance, lacking depth or specificity. 3. Moderately Relevant: The feedback is partially relevant, covering some, but not all, key aspects. 4. Highly Relevant: The feedback addresses most key aspects effectively. 5. Exceptionally Relevant: The feedback is comprehensive, insightful, and offers actionable suggestions. Even though the annotations varied, the experts found different ways of giving valid feedback: “I think the other annotators and I emphasized things in slightly different ways. For example, one was more focused on clarity and the other was more focused on validation.” They all rated each other’s annotations at least 4/5, validating the overall annotation quality (see Table 5). All evaluations were blind, i.e., we did not reveal the source of the annotations. Evaluator | Annotator A | Annotator B | Annotator C ---|---|---|--- Expert A | - | 4/5 | 4/5 Expert B | 5/5 | - | 4/5 Expert C | 4/5 | 5/5 | - Table 5: Quality validation pilot results. ## Appendix F Potential of LLMs for providing feedback We explore whether LLMs could help in the annotation process within the feedback framework we have defined. This is a topic of empirical investigation in its own right. LLMs have been used for annotation Gilardi et al. (2023); Kuzman et al. (2023) and co-annotation Li et al. (2023), and GPT-3.5 and GPT-4 models excel at classification tasks related to client/therapist behaviors Chiu et al. (2024); however, our task is much more open-ended, requiring the generation of natural-language rationales based on a deep understanding of the specialized feedback framework. We experiment with Llama2-70b chat Touvron et al. (2023), and the GPT-3.5 Turbo and GPT-4 models OpenAI (2023). While all models give reasonable feedback when prompted with a short generic statement (example simple prompt: “Act as a supervisor of novice helpers in the mental health context. Give feedback to the helper on their last response in the conversation below.”), the feedback is not focused, and is the most generic for the Llama model. When provided with a detailed definition of our framework, we find Llama unable to parse the framework guidelines, which both GPT-3.5 and GPT-4 manage to do. However, we find in early experiments that GPT-3.5 produces feedback of significantly inferior quality to the human annotations; we therefore proceed with GPT-4, which showed high potential, as our base model. ### F.1 GPT-4 prompting While the most straightforward approach would be to use an API call to annotate each $U_{i}$, it would be very expensive given the usage of the GPT-4 model and the number of tokens in the instruction (>2k). Annotating the full conversation at once would be the most efficient option, but we notice a significant degradation of quality in annotations of the final helper’s utterances. Therefore, we annotate overlapping chunks of 5 helper’s utterances per conversation; since the chunks overlap, we discard the feedback for the first two utterances of each chunk, which lack sufficient context. ### F.2 GPT-4 quality pilot Similar to the setting described in Appendix E, we follow up with a GPT-4 quality pilot by annotating ten conversations with GPT-4 and asking the experts for a 5-point Likert scale evaluation (one overall score for the ten conversations). The results are presented in Table 6. Some experts pointed out that the language sometimes seems “stuffy” and “medical”, which led us to refine the prompt with an additional language consideration: “Use professional and friendly language when giving feedback. Focus on what is most beneficial to hear.” The final prompt can be found in Appendix LABEL:sec:appendix_prompt. Evaluator | GPT-4 ---|--- Expert A | 5/5 Expert B | 5/5 Expert C | 5/5 Table 6: Quality validation pilot results for GPT-4 generated annotations. ## Appendix G Expert-only vs. GPT-4+expert annotations All experts annotated a set of ten conversations. The sets were different so that later evaluations are not biased by comparison to oneself, i.e., “this is not good because I did something else” (due to the subjective nature of this task, there is no single correct way of annotating). Additionally, the experts annotated another set of ten conversations, this time refining GPT-4 feedback (refinement instructions in Appendix LABEL:sec:appendix_refinement). Each expert then evaluated the quality of annotations made by the other experts with and without GPT-4 default feedback (7 questions about the feedback components, 5-point Likert scale) and compared at the utterance level whether the expert-only annotation or the GPT-4+expert annotation is preferred (or there is no significant difference) (assessment instructions in Appendix LABEL:sec:appendix_comparison). ### G.1 Do experts consistently validate themselves? Experts A and B get high ratings, even without GPT-4 pre-annotation.
While expert C initially demonstrated the ability to produce high-quality annotations, there appears to be some inconsistency in maintaining the same level of quality across an entire batch of conversations (scores below Acceptable rating). Figure 9 presents the average and median score of each expert rated by every other expert. Figure 10 presents how each expert overall (averaged over the raters) was scored in each of the questions asking about different feedback components – the list of questions is attached at the end of the Appendix LABEL:sec:appendix_comparison presenting the interface used to conduct the study. Experts A and B consistently achieve scores around 4 which translates to Good quality. Experts C fails to exceed the Acceptable rating for all questions across the board. Moreover, their answers are also subject to the highest variation in the score in majority of cases. Figure 9: Tables presenting average and median score for quality of annotations for experts A, B and C. Each entry in the table shows how a particular expert (row) rated another expert’s annotation (column). The summary of each column provides the overall quality score of the expert’s annotations. Figure 10: Figure presents average score with standard deviation for each expert A, B, C broken down by 7 questions used to assess the quality of experts annotations. The dotted line marks the Acceptable rating (3). ### G.2 Do annotations benefit from GPT-4 usage? With Expert C excluded as a rater, we compare the average annotations quality score of Expert A and B with and without GPT-4 pre-annotations (same setting as in the validation pilot - 7 questions and 5-point Likert scale). The average score assessing the annotations’ quality improves when GPT-4 is used for pre-annotations (Table 7). Moreover, the standard deviation of the scores decreases. Taking these factors combined, the results point to higher and more consistent quality of annotations when GPT-4 is used. Annotation method | Average score ---|--- Expert-only | $3.54\pm 0.81$ GPT-4 + Expert | $3.96\pm 0.62$ Table 7: Comparison of the average score of annotations’ quality averaged over the experts without and with GPT-4 pre-annotations. Additionally, GPT-4 + Expert is strongly preferred on the utterance level (see Figure 11). Presented with two annotations, one with and the other without GPT-4 pre-annotations, raters in the majority of cases (61.1 %) prefer the ones with pre-annotations. In 19.7% of cases, they are indifferent, and in 19.3% of cases, they prefer annotations without GPT-4 pre-annotations. Figure 11: Diagram presenting the distribution of whether the raters (Experts A and B) prefer annotations with or without GPT-4 pre-annotations. The figure presents percentages for three options: GPT-4 Win (green) – 61.1%, Tie (blue) – 19.7%, and GPT-4 Loss (red) – 19.3%. Qualitatively, when experts refine annotations, they tend to add extra feedback components, for instance, adding an extra goal over the one already pointed out by GPT-4. They sometimes rephrase goal/alternative response chunks that can be improved (e.g., making the question more open-ended). Those were thus not only due to fixing errors but also aim to refine and follow individual preferences Sripada et al. (2005). We hypothesize that GPT-4 + Expert are preferred since they allow experts to focus on what is most important and refining parts where GPT-4 failed. This reduces the work burden of writing everything from scratch. 
Quantitatively, we conduct the Wilcoxon test Wilcoxon (1992) on the conversation ratings, and GPT-4+Expert conversations obtain statistically significantly better ratings (p<0.05). The win/loss rate is also statistically significant (Wilcoxon and Binomial tests, p<0.05). Examples of GPT-4+Expert vs. Expert-only annotations are provided in Appendix LABEL:sec:appendix_preference. Based on the above pilot results, we continue annotating QESConv with Experts A and B. ## Appendix H Fine-tuning experimental setup To curate a fine-tuning dataset, we leverage our FeedbackESConv data. To format each training datapoint we follow Alpaca-style instruction formatting Taori et al. (2023). Each datapoint contains as output the feedback annotations from FeedbackESConv for the utterance $U_{i}$, with the goal & alignment parts preceding the alternative answer, to provide “explanations” that guide the generation process Wei et al. (2022). The input is the conversation context $\mathrm{c}_{i}$. To find the part of the conversation that provides relevant context, we follow Chen and Yang (2020) by segmenting the conversation with the C99 algorithm Choi (2000) applied to utterance embeddings. We embed the utterances using the HuggingFace Wolf et al. (2020) transformer model all-MiniLM-L6-v2. We define the relevant context for each utterance as all past utterances in the current and previous segments. We fine-tune and align for 3 epochs. We use a single A100 GPU for the experiments. Overall, our computational budget amounted to approximately 130 GPU hours. The annotation guide, the GPT-4 prompt, the refinement and comparison instructions, the preference examples, and example generations are included as supplementary PDF pages (figures/guide.pdf, figures/GPT4_prompt.pdf, figures/guide_refinement.pdf, figures/guide_comparison.pdf, figures/preference.pdf, figures/example_generations.pdf).
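As a rough illustration of the context-selection step described in Appendix H, the sketch below embeds utterances with the all-MiniLM-L6-v2 model via the sentence-transformers package and substitutes a simple similarity-drop heuristic for the C99 segmenter; the function names, the threshold value, and the heuristic itself are assumptions made for illustration only, not the paper's implementation.

```python
# Minimal sketch of Appendix H context selection (illustrative stand-in for C99).
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")

def segment_boundaries(utterances, drop_threshold=0.3):
    """Start a new segment wherever adjacent-utterance similarity drops below a threshold."""
    emb = encoder.encode(utterances, normalize_embeddings=True)
    boundaries = [0]
    for i in range(1, len(utterances)):
        if float(np.dot(emb[i - 1], emb[i])) < drop_threshold:
            boundaries.append(i)
    return boundaries  # indices where a new segment starts

def relevant_context(utterances, i):
    """All past utterances in the current and the previous segment, as defined in Appendix H."""
    starts = segment_boundaries(utterances)
    seg_of = int(np.searchsorted(starts, i, side="right")) - 1
    prev_start = starts[max(seg_of - 1, 0)]
    return utterances[prev_start:i]
```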
$^{1}$Department of Applied Physics and Astronomy, University of Sharjah, UAE. $^{2}$Laboratoire de Physique des Rayonnements, Badji Mokhtar University, B. P. 12, 23000 Annaba, Algeria. $^{3}$Department of Physics, University of Khartoum, PO Box 321, Khartoum 11115, Sudan. $^{4}$School of Physics and Institute for Collider Particle Physics, University of the Witwatersrand, Johannesburg, Wits 2050, South Africa. # The scale invariant scotogenic model: CDF-II $W$-boson mass and the 95 GeV excesses Amine Ahriche$^{1}$, Mohamed Lamine Bellile$^{2}$, Mohammed Omer Khojali$^{3,4}$, Mukesh Kumar$^{4}$, Anza-Tshildzi Mulaudzi$^{4}$ ###### Abstract The anomalies observed in the $W$ mass measurement at the CDF-II experiment and the excesses seen around 95 GeV at the Large Hadron Collider (LHC) motivate this work, in which we investigate and constrain the parameter space of the Scale Invariant Scotogenic Model with a Majorana dark matter candidate. The scanned parameters are chosen to be consistent with the dark matter relic density and with the signal strength rates of the observed excesses at $\sim 95$ GeV in the $\gamma\gamma$, $b\bar{b}$ and $\tau^{+}\tau^{-}$ final states. Furthermore, the model’s viable parameters can be probed in di-Higgs production both at the LHC and at future $e^{+}e^{-}$ colliders. ###### Keywords: Dilaton, Majorana dark matter, signal strength modifiers & di-Higgs production. ## 1 Introduction In the Standard Model (SM), the mass of the $W$-boson is a fundamental parameter, and precise measurements of this mass are crucial for testing the model’s predictions. The measurements reported by CDF-II CDF:2022hxs show a significant discrepancy between the measured $W$-boson mass ($M_{W}^{\rm CDF}=80.4335\pm 0.0094$ GeV) and the mass predicted by the SM, $m_{W}=80.357\pm 0.006$ GeV ParticleDataGroup:2022pth . This discrepancy corresponds to roughly 7 standard deviations. The $W$-boson is a carrier of the weak interaction, and any deviation of its properties, including its mass, from the SM predictions has important implications, potentially indicating the presence of new physics beyond the Standard Model (BSM). Note that a recent measurement from ATLAS ATLAS:2023fsi ($M_{W}^{\rm ATLAS}=80.370\pm 0.019$ GeV) shows no deviation from the SM expectation. Excluding the recent measurement from CDF-II CDF:2022hxs , the current world average from experiments yields $M_{W}^{\rm avg.}=80.377\pm 0.012$ GeV, based on measurements at LEP-2 ALEPH:2013dgf , the Tevatron CDF:2012gpf ; D0:2013jba , and the LHC ATLAS:2017rzl ; LHCb:2021bjt . In searches for a light scalar Higgs boson, the CMS and ATLAS experiments at the Large Hadron Collider (LHC) reported local excesses of 2.9$\sigma$ and 1.7$\sigma$, respectively, at 95.4 GeV in the di-photon ($\gamma\gamma$) invariant mass spectrum of the Run 2 dataset CMS:2018cyk ; CMS:2023yay ; ATLAS:2023jzc . In Higgs boson ($H$) production via the Higgsstrahlung process $e^{+}e^{-}\to ZH$ with $H\to b\bar{b}$, an excess of 2.3$\sigma$ has been observed in the mass range 95 GeV $<m_{H}<$ 100 GeV at the LEP collider experiments LEPWorkingGroupforHiggsbosonsearches:2003ing ; OPAL:2002ifx .
CMS also reported another local excess in the light-Higgs boson searches in the $\tau^{+}\tau^{-}$ final state with a significance of 3.1$\sigma$ which is compatible with the aforementioned excesses CMS:2022goy . A recent study estimates the global significance of the excesses at 95 GeV to be 3.8$\sigma$ Bhattacharya:2023lmu . The notable discovery of the Higgs boson at the LHC ATLAS:2012yve ; CMS:2012qbp marks the completion of the SM’s foundation. Nevertheless, the observed anomalies mentioned above open new avenues for considering and constraining BSM physics. Several such studies are being considered in Refs. Biekotter:2022abc ; Botella:2022rte ; Escribano:2023hxj ; Borah:2023hqw ; Abouabid:2023mbu . Despite its success, the SM has left many questions unanswered, including the hierarchy problem, the nature of dark matter (DM), and the smallness of neutrino masses. Among the extensions of the SM that address these three problems simultaneously is the Scale Invariant Scotogenic Model (SI-SCM) Ahriche:2016cio . In this framework, the SM is extended by a real scalar singlet, three Majorana singlet fermions and an inert scalar doublet. The real scalar singlet develops a vacuum expectation value (vev) to assist in the radiatively induced electroweak symmetry breaking (EWSB), à la Coleman Coleman:1973jx . Here, we have two CP-even scalars whose tree-level eigenmasses are 0 and 125 GeV which correspond to a dilaton and a SM-like Higgs, respectively. When considering the radiative corrections (RCs), two scenarios are possible: (1) the dilaton mass squared acquires a positive nonzero value, $m_{D}<m_{H}$, and the Higgs mass remains $m_{H}=125$ GeV (light dilaton case); and (2) the zero mass value shifts to 125 GeV due to the RCs, and the 125 GeV tree-level eigenstate becomes a heavy scalar, with $m_{S}>m_{H}$, referred as the Pure Radiative Higgs Mass (PRHM) case Ahriche:2021frb . In this setup, the new Yukawa interactions that couple the Majorana singlet fermions and the inert scalar doublet to the lepton doublets induce a neutrino mass at one-loop level similar to the minimal scotogenic model Ma:2006km . The DM candidate here could be either a scalar (the lightest neutral inert scalar), resembling the case of the inert Higgs model extended by a real scalar Khojali:2022squ , or the lightest Majorana singlet fermion Soualah:2021xbn . The Majorana DM scenario in this model differs from the minimal scotogenic model, as DM annihilation occurs additionally into all SM fermions and gauge bosons via processes mediated by the Higgs boson and dilaton. This makes the new Yukawa coupling restricted only by the requirements of neutrino oscillation data and lepton flavour constraints. Here, in the setup, we investigate whether the dilaton scalar field could address the 95 GeV excess mentioned previously while considering theoretical and experimental constraints and requirements, including the DM relic density and direct detection, and the $W$-boson mass values measured by CDF-II. In addition, we would like to investigate the impact of all these assumptions and constraints on the di-Higgs production at the LHC (and at future $e^{+}e^{-}$ colliders) with $\sqrt{s}=14$ TeV (500 GeV). This work is organized as follows: Section 2 is dedicated to presenting the SI-SCM model, describing EWSB, and discussing various theoretical and experimental constraints. Next, in Section 3, we delve into the discussion and formulation of $W$ mass corrections and the 95 GeV signal strength modifiers in the SI-SCM model. 
The di-Higgs production mechanism is detailed in Section 4, and our numerical results are presented and discussed in Section 5. We conclude our work in Section 6. ## 2 Model and Framework In the SI-SCM, the SM is extended by one inert doublet scalar, $S$, three singlet Majorana fermions $N_{i}$ ($i=1,2,3$), and one real singlet scalar $\phi$ to assist the radiative EWSB, as shown in Table 1. The model is assigned by a global $Z_{2}$ to make the lightest $Z_{2}$-odd field stable, which plays the DM candidate role. The Lagrangian contains the following terms $\displaystyle\mathcal{L}\supset$ $\displaystyle-\;\\{g_{i,\alpha}\overline{N_{i}^{c}}S^{\dagger}L_{\alpha}+\mathrm{h.c}\\}-\frac{1}{2}y_{i}\phi\overline{N_{i}^{c}}\,N_{i}-V(\mathcal{H},S,\phi),$ (1) where, $g_{i,\alpha}$ and $y_{i}$ are new Yukawa couplings; $L_{\beta}$ are ($\ell_{\alpha R}$) the left-handed lepton doublet (right-handed leptons); the Greek letters label the SM flavours, $\alpha,\,\beta\in\\{e,\,\mu,\,\tau\\}$; the SM Higgs and the inert scalar doublets are parameterised as: $\mathcal{H}^{T}=\Big{(}\chi^{+},(h+i\,\chi^{0})/\sqrt{2}\Big{)}$ and $S^{T}=\Big{(}S^{+},(S^{0}+i\,A^{0})/\sqrt{2}\Big{)}$, respectively (where $\chi^{+}$ and $\chi^{0}$are Goldstone bosons). The most general SI scalar potential that obeys the $Z_{2}$ symmetry is given by $\displaystyle V(\mathcal{H},\,S,\,\phi)$ $\displaystyle=\frac{1}{6}\lambda_{H}(\left|\mathcal{H}\right|^{2})^{2}+\frac{\lambda_{\phi}}{24}\phi^{4}+\frac{\lambda_{S}}{2}|S|^{4}$ $\displaystyle+\frac{\omega}{2}|\mathcal{H}|^{2}\phi^{2}+\frac{\kappa}{2}\,\phi^{2}|S|^{2}+\lambda_{3}\,|\mathcal{H}|^{2}|S|^{2}$ $\displaystyle+\lambda_{4}\,|\mathcal{H}^{\dagger}S|^{2}+\left\\{\frac{\lambda_{5}}{2}(\mathcal{H}^{\dagger}S)^{2}+h.c.\right\\},$ (2) The first term in eq. (1) and the last term in eq. (2) are responsible for generating neutrino mass via the one-loop diagrams as illustrated in Fig. 1. Gauge group | $S$ | $N_{i}$ | $\phi$ | $X_{SM}$ ---|---|---|---|--- $SU(2)_{L}$ | 2 | 1 | 1 | $U(1)_{Y}$ | -1 | 0 | 0 | $Z_{2}$ | -1 | -1 | 1 | 1 Table 1: The field charges under the symmetry $Z_{2}$, where $X_{SM}$ denotes all SM fields. Figure 1: The neutrino mass is generated in the SI-scotogenic model at one-loop level. The neutrino mass matrix element Merle:2015gea can be written as $m_{\alpha\beta}^{(\nu)}=\sum_{i}g_{i,\alpha}g_{i,\beta}\Lambda_{i}=\left(g^{T}\cdot\varLambda\cdot g\right)_{\alpha\beta}$, which permits us to estimate the new Yukawa couplings using to the Casas-Ibarra parameterization Casas:2001sr , where lepton flavour violating (LFV) bounds on the branching ratios of $\ell_{\alpha}\to\ell_{\beta}\gamma$ and $\ell_{\alpha}\to\ell_{\beta}\ell_{\beta}\ell_{\beta}$ should be fulfilled. Here, the EWSB is triggered by the RCs where the counter-term $\delta\lambda_{H},\,\delta\lambda_{\phi},\,\delta\omega$ corresponding to terms in eq. (2), are chosen to fulfil the tadpole conditions and one of the CP-even eigenmasses matches the 125 SM-like Higgs and the other corresponds to light Higgs (PRHM case) or a heavy Higgs (light dilaton case). After the EWSB ($\langle h\rangle=\upsilon,\,\langle\phi\rangle=x$), we obtain two CP-even eigenstates as $H=c_{\alpha}\leavevmode\nobreak\ h-s_{\alpha}\leavevmode\nobreak\ \phi$ and $D=s_{\alpha}\leavevmode\nobreak\ h+c_{\alpha}\leavevmode\nobreak\ \phi$, where $H$ denotes the 125 $\mathrm{GeV}$ Higgs, $D$ is the dilaton scalar whose mass should be around $m_{D}=95.4\,\mathrm{GeV}$ in this setup; and $\alpha$ is the Higgs-dilaton mixing angle. 
Here, the RCs in both the PRHM and light dilaton cases ensure that the mixing angle $\alpha$ lies within the range allowed by the measurements of the Higgs couplings to gauge bosons. Detailed discussions on these conditions can be found in Ahriche:2021frb . Vacuum stability must be ensured by requiring the coefficient of the term $\phi^{4}\log\phi$ to be positive, since this term (rather than the $\phi^{4}$ term) is the leading term of the scalar potential at large field values, where $\phi$ refers to any direction in the $h-\phi$ plane. Since all field dependent squared masses can be written as $m_{i}^{2}(h,\phi)=\frac{1}{2}(\alpha_{i}h^{2}+\beta_{i}\phi^{2})$, the vacuum stability conditions can be written as $\sum_{i}n_{i}\alpha_{i}^{2}>0$ and $\sum_{i}n_{i}\beta_{i}^{2}>0$, with $n_{i}$ the multiplicity of the field “$i$” (counted negatively for fermionic degrees of freedom). In addition to these conditions, the quartic couplings in eq. (2) must fulfil the perturbative unitarity conditions Ahriche:2021frb . In this model, the DM candidate could be fermionic (the lightest Majorana fermion, $N_{1}$) or a scalar (the lightest among $S^{0}$ and $A^{0}$). In the case of a scalar DM, the situation matches that of the singlet-extended inert doublet model Khojali:2022squ , where the co-annihilation effect should be considered in order to have a viable parameter space. In the minimal scotogenic model with Majorana DM, the DM annihilation occurs via $t$-channel diagrams mediated by the inert fields, so the values of the Yukawa couplings $g_{i,\alpha}$ are constrained by the relic density; the smallness of the neutrino masses can then be achieved only in the case of extreme $S^{0}-A^{0}$ mass degeneracy, i.e., by imposing a very small value for $\lambda_{5}\sim\mathcal{O}(10^{-10})$ Ahriche:2020pwq . However, in the scale-invariant version, new $s$-channels mediated by the Higgs boson or the dilaton exist, which allows the $g_{i,\alpha}$ Yukawa couplings to take values in the whole perturbative range. Also, it is worth noting that, in contrast to many Majorana dark matter models, the dark matter here couples to quarks at tree level. This feature underscores the significance of direct detection constraints on the parameter space Soualah:2021xbn . ## 3 $M_{W}$ measurements & 95 GeV excesses The mass of the $W$-boson can be calculated as a function of the oblique parameters $\Delta S$, $\Delta T$, and $\Delta U$, and is given by: $\displaystyle M_{W}=m_{W}\,\Bigg{[}1+$ $\displaystyle\frac{\alpha}{c_{W}^{2}-s_{W}^{2}}\times$ $\displaystyle\Big{(}-\frac{1}{2}\Delta S+c_{W}^{2}\Delta T+\frac{c_{W}^{2}-s_{W}^{2}}{4s_{W}^{2}}\Delta U\Big{)}\Bigg{]}^{\frac{1}{2}},$ (3) where $c_{W}=\cos\theta_{W}$ and $s_{W}=\sin\theta_{W}$, with $\theta_{W}$ being the weak mixing angle. 
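As a quick numerical illustration of eq. (3), the Python sketch below evaluates the $M_{W}$ prediction for given values of the oblique parameters (whose model expressions are listed next). The reference values for $m_{W}$, $s_{W}^{2}$ and $\alpha$ are standard electroweak inputs inserted here as assumptions; they should be replaced by the inputs of the actual electroweak fit used in the analysis.

```python
import numpy as np

# Reference electroweak inputs (assumed standard values; replace with the fit actually used).
MW_SM = 80.357          # GeV, SM reference prediction for the W mass
SW2   = 0.2315          # sin^2(theta_W)
ALPHA = 1.0 / 128.0     # electromagnetic coupling near the Z pole

def m_w(delta_s, delta_t, delta_u, mw=MW_SM, s2=SW2, a=ALPHA):
    """W-boson mass from the oblique parameters, following eq. (3)."""
    c2 = 1.0 - s2
    shift = (a / (c2 - s2)) * (-0.5 * delta_s + c2 * delta_t + (c2 - s2) / (4.0 * s2) * delta_u)
    return mw * np.sqrt(1.0 + shift)

# A positive Delta T (with Delta S = Delta U = 0) pushes M_W above the SM reference value.
for dt in (0.0, 0.05, 0.10, 0.15):
    print(f"Delta T = {dt:.2f}  ->  M_W = {m_w(0.0, dt, 0.0):.3f} GeV")
```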
The oblique parameters in the SI-SCM model are given by Grimus:2008nb $\displaystyle\varDelta S$ $\displaystyle=\frac{1}{24\pi}\Big{\\{}\left(2s_{W}^{2}-1\right)^{2}G\left(m_{S^{\pm}}^{2},m_{S^{\pm}}^{2},m_{Z}^{2}\right)$ $\displaystyle+G\left(m_{S^{0}}^{2},m_{A^{0}}^{2},m_{Z}^{2}\right)+\log\left(\frac{m_{S^{0}}^{2}m_{A^{0}}^{2}}{m_{S^{\pm}}^{4}}\right)$ $\displaystyle+s_{\alpha}^{2}\left[\log\frac{m_{D}^{2}}{m_{H}^{2}}-\hat{G}\left(m_{H}^{2},m_{Z}^{2}\right)+\hat{G}\left(m_{D}^{2},m_{Z}^{2}\right)\right]\Big{\\}},$ (4) $\displaystyle\varDelta T$ $\displaystyle=\frac{1}{16\pi s_{W}^{2}m_{W}^{2}}\times$ $\displaystyle\Big{\\{}F\left(m_{S^{\pm}}^{2},m_{S^{0}}^{2}\right)+F\left(m_{S^{\pm}}^{2},m_{A^{0}}^{2}\right)-F\left(m_{S^{0}}^{2},m_{A^{0}}^{2}\right)$ $\displaystyle+3s_{\alpha}^{2}\left[F\left(m_{W}^{2},m_{H}^{2}\right)-F\left(m_{Z}^{2},m_{H}^{2}\right)-F\left(m_{W}^{2},m_{D}^{2}\right)\right.$ $\displaystyle+F\left(m_{Z}^{2},m_{D}^{2}\right)]\Big{\\}},$ (5) $\displaystyle\varDelta U$ $\displaystyle=\frac{1}{24\pi}\Big{\\{}G\left(m_{S^{\pm}}^{2},m_{S^{0}}^{2},m_{W}^{2}\right)+G\left(m_{S^{\pm}}^{2},m_{A^{0}}^{2},m_{W}^{2}\right)$ $\displaystyle-\left[2s_{W}^{2}-1\right]^{2}G\left(m_{S^{\pm}}^{2},m_{S^{\pm}}^{2},m_{Z}^{2}\right)-G\left(m_{S^{0}}^{2},m_{A^{0}}^{2},m_{Z}^{2}\right)$ $\displaystyle+s_{\alpha}^{2}\Big{[}\hat{G}\left(m_{D}^{2},m_{W}^{2}\right)-\hat{G}\left(m_{D}^{2},m_{Z}^{2}\right)-\hat{G}\left(m_{H}^{2},m_{W}^{2}\right)$ $\displaystyle+\hat{G}\left(m_{H}^{2},m_{Z}^{2}\right)\Big{]}\Big{\\}},$ (6) where the one-loop functions $G,F$ and $\hat{G}$ can be found in Grimus:2008nb . Note: Subsequent to the CDF-II results, several research groups have adjusted their fits for the oblique parameters $\Delta S$, $\Delta T$, and $\Delta U$ in the context of electroweak precision measurements CentellesChulia:2022vpz ; Flacher:2008zq ; Asadi:2022xiy , examining their potential effects on BSM physics. The oblique parameter $\varDelta T$ quantifies the contribution of new physics at low energies, while $\varDelta S$ probes contributions at different energy scales. In order to analyze whether the SI-SCM model can yield a shift in the prediction for $M_{W}$ that is compatible with the experimental measurements and simultaneously provide a possible explanation of the observed $\gamma\gamma$, $b\bar{b}$ and $\tau^{+}\tau^{-}$ excesses, we perform a $\chi^{2}$ analysis, quantifying the agreement between theoretically predicted signal rates $\mu_{X\bar{X}}$ ($X=\gamma,b,\tau$) and the experimentally observed values $\mu_{X\bar{X}}^{\rm exp}$. Experimentally, it was determined that the excesses at $\sim 95$ GeV were best described by assuming the signal rates of a scalar resonance given by CMS:2018cyk ; CMS:2023yay ; ATLAS:2023jzc ; LEPWorkingGroupforHiggsbosonsearches:2003ing ; OPAL:2002ifx ; CMS:2022goy $\displaystyle\left.\begin{array}[]{l}\mu_{\gamma\gamma}^{\mathrm{exp}}=0.27_{-0.09}^{+0.10},\\\ \\\ \mu_{b\bar{b}}^{\mathrm{exp}}=0.117\pm 0.057,\\\ \\\ \mu_{\tau\tau}^{\mathrm{exp}}=1.2\pm 0.5\end{array}\right\\},$ (12) where the signal strengths are defined as the cross-section times the branching ratios divided by the corresponding predictions for the hypothetical SM Higgs boson at the same mass, and the experimental uncertainties are given at the 1$\sigma$ level. 
The theoretically predicted values for $\mu_{X\bar{X}}$ can be simplified as $\displaystyle\mu_{X\bar{X}}$ $\displaystyle=\frac{\sigma(gg\to D)\cdot\mathcal{B}(D\to X\bar{X})}{\sigma^{\rm SM}(gg\to H)\cdot\mathcal{B}^{\rm SM}(H\to X\bar{X})}$ $\displaystyle=\rho_{X}\big{(}1-{\cal B}(D\to X_{\rm BSM})\big{)},$ (13) where $X_{\rm BSM}=N_{i}N_{k},S^{0}S^{0},A^{0}A^{0}$; $\sigma(gg\to D)$ and $\mathcal{B}(D\to X\bar{X})$ are the ggF dilaton production cross-section and the branching ratio into the final state $X\bar{X}$, respectively. Here, $\sigma^{\rm SM}(gg\to H)$ and $\mathcal{B}^{\rm SM}(H\to X\bar{X})$ are the corresponding SM quantities evaluated at a Higgs-boson mass of $m_{H}\to m_{D}$ HevayHiggs . Here, we have $\displaystyle\rho_{\gamma}$ $\displaystyle=\left|1+\frac{\upsilon}{2}\frac{\lambda_{DS^{\pm}S^{\mp}}}{m_{S^{+}}^{2}}\frac{A_{0}^{\gamma\gamma}\Big{(}\frac{m_{D}^{2}}{4m_{S^{+}}^{2}}\Big{)}}{A_{1}^{\gamma\gamma}\Big{(}\frac{m_{D}^{2}}{4m_{W}^{2}}\Big{)}+\frac{4}{3}A_{1/2}^{\gamma\gamma}\Big{(}\frac{m_{D}^{2}}{4m_{t}^{2}}\Big{)}}\right|^{2},$ $\displaystyle\rho_{\tau}$ $\displaystyle=\rho_{b}=s_{\alpha}^{2},$ (14) where $\lambda_{DS^{\pm}S^{\mp}}=s_{\alpha}\upsilon\lambda_{3}+\kappa c_{\alpha}x$ is the scalar triple coupling of the dilaton with charged scalars; and the loop functions $A_{0,1,1/2}^{\gamma\gamma}$ are given in Djouadi:2005gi . The total dilaton width is $\Gamma_{\rm tot}^{D}=s_{\alpha}^{2}\Gamma_{\rm tot}^{D,\,\rm SM}+\Gamma(D\to X_{\rm BSM})$; if the channels $D\to X_{\rm BSM}$ are closed ($m_{D}<\min(M_{i}+M_{k},2m_{S^{0}},2m_{A^{0}})$), then $\mu_{X\bar{X}}=\rho_{X}$. In order to assess the combined description of the three excesses, we define the total $\chi^{2}$ function as $\chi_{\rm tot}^{2}=\sum_{X=\gamma,\tau,b}\frac{\left(\mu_{X\bar{X}}-\mu_{X\bar{X}}^{\rm exp}\right)^{2}}{\left(\Delta\mu_{X\bar{X}}^{\rm exp}\right)^{2}},$ (15) where the results for the three channels in which the excesses were observed are treated as independent measurements. If one considers $\rho_{\gamma}\sim 1$, then, for the experimental values given in eq. (12), the function in eq. (15) reaches its minimum $\left(\chi_{\rm tot}^{2}\right)^{(\min)}=7.708$ at $(s_{\alpha}^{2}=0.11$, ${\cal B}(D\to X_{\rm BSM})=69.7\%)$. This scenario becomes plausible only if the channel $D\to{\rm inv.}$ is accessible (‘inv.’ denotes invisible particles such as $N_{i}N_{k}$), i.e., $m_{DM}=M_{1}<m_{D}/2$, as will be confirmed later. In the following numerical analysis (Section 5), we will consider parameter points as providing a good description of the excesses if they account for the combined effect of the three excesses at the level of 1$\sigma$ or better. ## 4 The di-Higgs production In the SM, the measurement of di-Higgs ($HH$) production is intriguing not only because it enables the determination of the Higgs-boson self-interaction but also because it contributes to understanding EWSB. Non-resonant $HH$ production at the LHC occurs primarily through the dominant gluon fusion (ggF) mode and the sub-dominant vector-boson fusion (VBF) mode. The cross section for $HH$ production at next-to-next-to-leading order (NNLO), including finite top-quark-mass effects in the ggF mode, is $\sigma_{\rm ggF}^{\rm SM}=31.05^{+2.1}_{-7.2}$ fb Dawson:1998py ; Borowka:2016ehy ; Baglio:2018lrj ; deFlorian:2013jea ; Shao:2013bz ; deFlorian:2015moa ; Grazzini:2018bsd ; Baglio:2020wgt . 
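To illustrate how the best-fit point quoted above arises from eqs. (13)-(15), here is a minimal grid scan over the two effective quantities that control the signal strengths in the limit $\rho_{\gamma}\simeq 1$, namely $s_{\alpha}^{2}$ and ${\cal B}(D\to X_{\rm BSM})$. The upper limit $s_{\alpha}^{2}\leq 0.11$ is imposed here as an assumption, reflecting the Higgs signal-strength bound used later in the scan; this is a sketch of the statistical combination only, not of the full model parameter scan.

```python
import numpy as np

# Experimental inputs of eq. (12), with symmetrized 1-sigma uncertainties.
MU_EXP  = {"gamma": 0.27, "b": 0.117, "tau": 1.2}
SIG_EXP = {"gamma": 0.10, "b": 0.057, "tau": 0.5}

def chi2_tot(sa2, br_bsm, rho_gamma=1.0):
    """Eq. (15) with mu_XX = rho_X (1 - BR(D -> X_BSM)), eq. (13), and rho_b = rho_tau = s_alpha^2."""
    mu_th = {"gamma": rho_gamma * (1.0 - br_bsm),
             "b":     sa2 * (1.0 - br_bsm),
             "tau":   sa2 * (1.0 - br_bsm)}
    return sum((mu_th[x] - MU_EXP[x])**2 / SIG_EXP[x]**2 for x in MU_EXP)

# Grid scan; s_alpha^2 is capped at 0.11 (Higgs signal-strength constraint, see Section 5).
sa2_grid = np.linspace(0.0, 0.11, 111)
br_grid  = np.linspace(0.0, 0.95, 951)
best = min((chi2_tot(a, b), a, b) for a in sa2_grid for b in br_grid)
print(best)    # approximately (7.71, 0.11, 0.697), matching the quoted minimum
```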
In the VBF mode, at next-to-next-to-next-to-leading order (N3LO), the cross section is $\sigma^{SM}_{\rm VBF}=1.73\pm 0.04$ fb Baglio:2012np ; Frederix:2014hta ; Ling:2014sne ; Dreyer:2018rfu ; Dreyer:2018qbw for $m_{h}=125$ GeV at $\sqrt{s}=13$ TeV. The smallness of the $HH$ production cross section in ggF mode at leading order (LO) results from the negative interference between the box and triangle Feynman diagrams; it can be decomposed into three contributions Ahriche:2014cpa ; Baouche:2021wwa ; Ahriche:2021frb : $\sigma^{\rm SM}(HH)=\sigma_{B}+\sigma_{T}+\sigma_{BT},$ (16) where $\sigma_{B}=70.1$ fb represents the box contribution, $\sigma_{T}=9.66$ fb corresponds to the triangle contribution, and $\sigma_{BT}=-49.9$ fb accounts for their interference Spira:1995mt . In the SI-SCM framework, non-resonant $HH$ production through ggF mode includes an additional triangle Feynman diagram mediated through the dilaton field $D$. Therefore, the $HH$ production cross-section can be expressed as follows: $\sigma(HH)=\zeta_{1}\sigma_{B}+\zeta_{2}\sigma_{T}+\zeta_{3}\sigma_{BT},$ (17) where the coefficients $\zeta_{i}$ in this model are modified with respect to the SM as Ahriche:2014cpa $\displaystyle\left.\begin{array}[]{l}\zeta_{1}=c_{\alpha}^{4},\\\ \\\ \zeta_{2}=\left|c_{\alpha}\frac{\lambda_{HHH}}{\lambda_{HHH}^{\text{SM}}}+s_{\alpha}\frac{\lambda_{HHD}}{\lambda_{HHH}^{\text{SM}}}\frac{s-m_{H}^{2}+im_{H}\Gamma_{H}}{s-m_{D}^{2}+im_{D}\Gamma_{D}}\right|^{2},\\\ \\\ \zeta_{3}=c_{\alpha}^{2}\Re\left(c_{\alpha}\frac{\lambda_{HHH}}{\lambda_{HHH}^{\text{SM}}}+s_{\alpha}\frac{\lambda_{HHD}}{\lambda_{HHH}^{\text{SM}}}\frac{s-m_{H}^{2}+im_{H}\Gamma_{H}}{s-m_{D}^{2}+im_{D}\Gamma_{D}}\right)\end{array}\right\\},$ (23) where $\lambda_{HHH}^{\text{SM}}$ is the Higgs triple coupling in the SM; $\sqrt{s}$ is the center-of-mass collision energy, which we take to be $\sqrt{s}=14$ TeV at the LHC. The expression for the one-loop triple Higgs coupling in the SM is Kanemura:2004mg : $\lambda_{HHH}^{\text{SM}}\simeq\frac{3m_{H}^{2}}{\upsilon}\left[1-\frac{m_{t}^{4}}{\pi^{2}\upsilon^{2}m_{H}^{2}}\right],$ (24) where $m_{t}$ is the top quark mass. Interestingly, a direct measurement of the triple Higgs-boson self-coupling is achievable through resonant $HH$ production at a future $e^{+}e^{-}$ collider. This involves double Higgsstrahlung processes with $W$ or $Z$ bosons, as well as through $WW$ or $ZZ$ fusion. In the case of double Higgsstrahlung ($e^{+}e^{-}\to HHZ$) production at $\sqrt{s}=500$ GeV, the production cross-section can be expressed as in eq. (17) using the same coefficients in eq. (23) and the cross-section contributions given as $\sigma_{B}=0.0837$ fb, $\sigma_{T}=0.01565$ fb, and $\sigma_{BT}=0.05685$ fb Ahriche:2021frb . Figure 2: In plot (a), we display the maximum Yukawa couplings $\max(|g_{i,\alpha}|)$ as functions of the masses of the charged inert doublet $m_{S^{\pm}}$ and the neutral scalar $m_{S^{0}}$. In plot (b), the DM annihilation cross-section $\sigma_{\rm DM}$ is presented as a function of $m_{\rm DM}$ and the freeze-out parameter $x_{f}$. Plot (c) illustrates the $M_{W}$ prediction in the SI-SCM model with respect to the scalar mixing angle $s_{\alpha}^{2}$ and $m_{S^{\pm}}$. The horizontal color bands represent the $M_{W}$ measurements at different experiments at a $2\sigma$ level: the green band corresponds to PDG, the blue band to LEP, the red band to CDF, the grey band to ATLAS, the cyan band to the world average, and the yellow band to the SM value. 
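Returning to eqs. (16), (17) and (23) above, the short sketch below evaluates the rescaled di-Higgs cross section for an assumed benchmark using the quoted LO decomposition $\sigma_{B}$, $\sigma_{T}$, $\sigma_{BT}$. The trilinear couplings, the widths and the fixed value used for $s$ are placeholders; in particular, $s$ is treated here as a single reference scale, whereas a full computation would integrate over the partonic invariant mass.

```python
import numpy as np

# LO decomposition of the SM ggF di-Higgs cross section quoted above (fb).
SIGMA_B, SIGMA_T, SIGMA_BT = 70.1, 9.66, -49.9

def sigma_HH(c_a, s_a, lam_hhh, lam_hhd, m_H, G_H, m_D, G_D, s_hat):
    """Eq. (17) with the coefficients zeta_i of eq. (23); trilinears in units of lambda_HHH^SM."""
    prop = (s_hat - m_H**2 + 1j * m_H * G_H) / (s_hat - m_D**2 + 1j * m_D * G_D)
    amp = c_a * lam_hhh + s_a * lam_hhd * prop
    zeta1 = c_a**4
    zeta2 = abs(amp)**2
    zeta3 = c_a**2 * amp.real
    return zeta1 * SIGMA_B + zeta2 * SIGMA_T + zeta3 * SIGMA_BT

s_hat = (2 * 125.0)**2    # fixed reference scale ~ (2 m_H)^2 (assumption, see caveat above)

# SM limit: no mixing and lambda_HHH = lambda_HHH^SM, so zeta_1 = zeta_2 = zeta_3 = 1.
print(sigma_HH(1.0, 0.0, 1.0, 0.0, 125.0, 4.1e-3, 95.4, 1e-2, s_hat))   # ~29.9 fb

# Illustrative non-SM benchmark (all numbers are assumptions, not scan results).
sa2 = 0.10
print(sigma_HH(np.sqrt(1 - sa2), np.sqrt(sa2), 0.9, 0.3, 125.0, 4.1e-3, 95.4, 1e-2, s_hat))
```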
Figure 3: (a) The plot illustrates the signal strengths of the three excesses: $\gamma\gamma,\ \tau\tau,\ b\bar{b}$. (b) In this plot, the signal strengths of $\gamma\gamma$ and $b\bar{b}$ are presented as functions of the total $\chi^{2}_{\rm tot}$. (c) The third plot displays the dilaton invisible branching ratio versus the scalar mixing angle $s_{\alpha}^{2}$, presented alongside the total $\chi^{2}_{\rm tot}$. Figure 4: The di-Higgs production cross-sections (a) through ggF mode at the LHC with $\sqrt{s}=14$ TeV and (b) at future $e^{-}e^{+}$ colliders with $\sqrt{s}=500$ GeV versus the scalar mixing angle $s_{\alpha}^{2}$, where the palette shows the singlet vev $x$ in GeV. The black dashed lines represent the SM predictions. ## 5 Numerical Results and Discussion In this section, our attention is directed towards the parameter space corresponding to the dilaton mass window of 94 GeV $<m_{D}<$ 97 GeV. We systematically consider various theoretical and experimental constraints, including perturbativity, perturbative unitarity, Higgs boson di-photon and invisible decay channels, LEP negative searches, and electroweak precision tests. In addition, we require the DM relic density to match the observed value and the DM direct detection constraints to be satisfied Soualah:2021xbn . Within this framework, the total Higgs signal strength measured at the LHC, $\mu_{{\rm tot}}\geq 0.89$ at 95% confidence level (C.L.) ATLAS:2016neq , implies a condition on the Higgs-dilaton mixing, expressed as $\mu_{{\rm tot}}=c_{\alpha}^{2}\times(1-\mathcal{B}_{\rm inv.})\geq 0.89$. Here, the Higgs invisible branching ratio is constrained by ATLAS to be $\mathcal{B}_{\rm inv.}=\mathcal{B}(H\to N_{i}N_{k})<0.11$ ATLAS:2020kdi . Additionally, we ensure that the chosen values for the model’s free parameters, including the inert masses ($m_{S^{0}}$, $m_{A^{0}}$, $m_{S^{\pm}}$), Majorana masses ($M_{i}$), scalar coupling $\lambda_{3}$, and the singlet vev $x$, correspond to values of the new Yukawa couplings $g_{i,\alpha}$ that satisfy the neutrino oscillation data and LFV constraints. Through a random numerical scan adhering to the aforementioned constraints and conditions, we consider 5.7k benchmark points (BPs) that satisfy the 95 GeV excess within a 95% C.L., meaning the function in eq. (15) should yield $\chi^{2}_{\rm tot}<11.34$. The viable parameter space fulfilling the various conditions and constraints is illustrated in Fig. 2. Fig. 2a reveals that the assumptions of $m_{D}\approx 95$ GeV and $\chi^{2}_{\rm tot}<11.34$ lead to inert masses exceeding 300 GeV. Additionally, the new Yukawa couplings are an order of magnitude smaller than the perturbative limit, in contrast to general cases in the SI-SCM Ahriche:2016cio ; Soualah:2021xbn . It is noteworthy that the DM mass must be smaller than $m_{D}/2$, with the branching ratio ${\cal B}(D\to\text{inv.})$ lying between 50% and 85%, in order to satisfy the requirement $\chi^{2}_{\rm tot}<11.34$ (Fig. 2b). It should also be mentioned that the DM annihilation cross section and the freeze-out parameter $x_{f}=m_{DM}/T_{f}$ ($T_{f}$ is the freeze-out temperature at which the DM decouples from the thermal bath) take typical values for a Weakly Interacting Massive Particle (WIMP) DM candidate. 
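For orientation, the statement that these are "typical WIMP values" can be quantified with the generic textbook freeze-out estimate below. This is not the relic-density computation performed in the paper (which uses the full model); the formula, the conversion factor and the inputs for $g_{*}$ and $x_{f}$ are standard approximations quoted here as assumptions.

```python
import numpy as np

# Generic textbook freeze-out estimate (not the paper's relic-density computation), used only
# to illustrate the "typical WIMP values" statement above.
M_PL         = 1.22e19    # Planck mass in GeV
GEV2_TO_CM3S = 1.17e-17   # 1 GeV^-2 of <sigma v> expressed in cm^3/s
G_STAR       = 100.0      # relativistic degrees of freedom at freeze-out (assumed)
X_F          = 20.0       # x_f = m_DM / T_f, a typical freeze-out value (assumed)

def sigma_v_for_relic(omega_h2=0.12, xf=X_F, gstar=G_STAR):
    """<sigma v> (cm^3/s) needed for a given relic abundance, using the approximate relation
    Omega h^2 ~ 1.07e9 GeV^-1 * x_f / (sqrt(g*) * M_Pl * <sigma v>)."""
    sv_gev2 = 1.07e9 * xf / (np.sqrt(gstar) * M_PL * omega_h2)
    return sv_gev2 * GEV2_TO_CM3S

print(f"<sigma v> ~ {sigma_v_for_relic():.1e} cm^3/s")   # ~2e-26 cm^3/s, the canonical WIMP scale
```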
Regarding the CDF-II anomaly, the model’s viable parameters can accommodate any of the $W$-boson mass measurements, as shown in Fig. 2c. Importantly, in this model, the correction $\varDelta m_{W}$ is strictly positive, driven by the fact that $\varDelta T$ is always positive and dominates over the values of $\varDelta S$ and $\varDelta U$. In Fig. 3, the signal strength modifiers and the observables relevant to the $95.4$ GeV scalar candidate are presented. Notably, the excesses $\mu_{\gamma\gamma,b\bar{b}}$ can be simultaneously addressed, while the excess $\mu_{\tau\tau}$ exhibits suppressed values. As illustrated in Fig. 3b, the three excesses can be accommodated at a 99% C.L., given that $8.02<\chi^{2}_{\rm tot}<11.34$. A preference for maximal values of both the scalar mixing, $s^{2}_{\alpha}\approx 0.11$, and ${\cal B}(D\to\rm inv.)\approx 70\%$ becomes evident when the three excesses are matched simultaneously. If the di-tau excess is analyzed by ATLAS and/or re-analyzed by CMS with additional data, and the measured $\mu_{\tau\tau}^{\rm exp}$ is relaxed to a smaller value of around 0.6-0.7, addressing the three excesses within this model becomes feasible. In Fig. 4, we present the di-Higgs production cross section at both (a) the LHC with $\sqrt{s}=14$ TeV and (b) future $e^{-}e^{+}$ colliders with $\sqrt{s}=500$ GeV versus the scalar mixing angle $s_{\alpha}^{2}$. As seen in Fig. 4a, di-Higgs production at the LHC shows no enhancement, but a reduction of 65% is possible for benchmark points with a smaller singlet vev $x$ and non-suppressed scalar mixing. In contrast, at $e^{+}e^{-}$ colliders, the double Higgsstrahlung cross-section varies between a reduction of 20% and an enhancement of 70% with respect to the SM cross-section (Fig. 4b). While our results indicate that the SI-SCM model aligns with current studies on collider anomalies, including the CDF-II $W$-mass anomaly and the 95 GeV excess Escribano:2023hxj ; Borah:2023hqw , it should be noted that the muon anomalous magnetic moment, $(g-2)_{\mu}$, cannot be addressed in this model, since its single contribution is negative and therefore insufficient to match the measured value. To accommodate the observed $(g-2)_{\mu}$ anomaly, an extension of the SI-SCM through the incorporation of additional scalar components may be necessary. Such an extension could enable the model to account for the measured value of $(g-2)_{\mu}$. ## 6 Conclusion In response to anomalies in the measurement of the $W$-mass at CDF-II, along with an observed excess around $\sim 95$ GeV at LEP, CMS, and ATLAS, we conducted a study to address these issues within the framework of the SI-SCM model. This model, characterized by radiatively induced EWSB, not only accounts for light neutrino masses but also proposes a Majorana dark matter candidate, with predictions well within reach of collider experiments. We identified a viable parameter space where inert scalar masses explain the $W$-mass anomaly and a light dilaton with $m_{D}\sim 95$ GeV is compatible with the observed excesses in that mass region. Under these assumptions, the dilaton may need to decay invisibly into the Majorana singlet fermions. Over this parameter space, di-Higgs production at the LHC shows no enhancement compared to the SM. However, at $e^{+}e^{-}$ colliders, the double Higgsstrahlung cross-section varies between a reduction of 20% and an enhancement of 70% with respect to the SM cross-section. ###### Acknowledgements. A.A. and M.L.B. 
were funded by the University of Sharjah under the research projects No 21021430107 “Hunting for New Physics at Colliders” and No 23021430135 “Terascale Physics: Colliders vs Cosmology”. ## References * (1) T. Aaltonen et al. [CDF], Science 376, no.6589, 170-176 (2022) * (2) R. L. Workman et al. [Particle Data Group], PTEP 2022, 083C01 (2022) * (3) [ATLAS], ATLAS-CONF-2023-004. * (4) S. Schael et al. [ALEPH, DELPHI, L3, OPAL and LEP Electroweak], Phys. Rept. 532, 119-244 (2013) [arXiv:1302.3415 [hep-ex]]. * (5) T. Aaltonen et al. [CDF], Phys. Rev. Lett. 108, 151803 (2012) [arXiv:1203.0275 [hep-ex]]. * (6) V. M. Abazov et al. [D0], Phys. Rev. D 89, no.1, 012005 (2014) [arXiv:1310.8628 [hep-ex]]. * (7) M. Aaboud et al. [ATLAS], Eur. Phys. J. C 78, no.2, 110 (2018) [erratum: Eur. Phys. J. C 78, no.11, 898 (2018)] [arXiv:1701.07240 [hep-ex]]. * (8) R. Aaij et al. [LHCb], JHEP 01, 036 (2022) [arXiv:2109.01113 [hep-ex]]. * (9) A. M. Sirunyan et al. [CMS], Phys. Lett. B 793, 320-347 (2019) [arXiv:1811.08459 [hep-ex]]. * (10) [CMS], CMS-PAS-HIG-20-002. * (11) [ATLAS], ATLAS-CONF-2023-035. * (12) R. Barate et al. [LEP Working Group for Higgs boson searches, ALEPH, DELPHI, L3 and OPAL], Phys. Lett. B 565, 61-75 (2003) [arXiv:hep-ex/0306033 [hep-ex]]. * (13) G. Abbiendi et al. [OPAL], Eur. Phys. J. C 27 (2003), 311-329 [arXiv:hep-ex/0206022 [hep-ex]]. * (14) A. Tumasyan et al. [CMS], JHEP 07 (2023), 073 [hep-ex]]. * (15) S. Bhattacharya, G. Coloretti, A. Crivellin, S. E. Dahbi, Y. Fang, M. Kumar and B. Mellado, [arXiv:2306.17209 [hep-ph]]. * (16) G. Aad et al. [ATLAS], Phys. Lett. B 716 (2012), 1-29 [arXiv:1207.7214 [hep-ex]]. * (17) S. Chatrchyan et al. [CMS], Phys. Lett. B 716 (2012), 30-61 [arXiv:1207.7235 [hep-ex]]. * (18) T. Biekötter, S. Heinemeyer and G. Weiglein, Eur. Phys. J. C 83, no.5, 450 (2023) [arXiv:2204.05975 [hep-ph]]. * (19) F. J. Botella, F. Cornet-Gomez, C. Miró and M. Nebot, Eur. Phys. J. C 82, 915 (2022) [arXiv:2205.01115 [hep-ph]]. * (20) P. Escribano, V. M. Lozano and A. Vicente, [arXiv:2306.03735 [hep-ph]]. * (21) D. Borah, S. Mahapatra, P. K. Paul and N. Sahu, [arXiv:2310.11953 [hep-ph]]. * (22) H. Abouabid, A. Arhrib, R. Benbrik, M. Boukidi and J. E. Falaki, [arXiv:2302.07149 [hep-ph]]. * (23) A. Ahriche, K. L. McDonald and S. Nasri, JHEP 06, 182 (2016) [arXiv:1604.05569 [hep-ph]]. * (24) S. R. Coleman and E. J. Weinberg, Phys. Rev. D 7 (1973), 1888-1910 * (25) A. Ahriche, Nucl. Phys. B 982 (2022), 115896 [arXiv:2110.10301 [hep-ph]]. * (26) E. Ma, Phys. Rev. D 73 (2006), 077301 [arXiv:hep-ph/0601225 [hep-ph]]. * (27) M. O. Khojali, A. Abdalgabar, A. Ahriche and A. S. Cornell, Phys. Rev. D 106 (2022) no.9, 095039 [arXiv:2206.06211 [hep-ph]]. * (28) R. Soualah and A. Ahriche, Phys. Rev. D 105 (2022) no.5, 055017 [arXiv:2111.01121 [hep-ph]]. * (29) A. Merle and M. Platscher, Phys. Rev. D 92 (2015) no.9, 095002 [arXiv:1502.03098 [hep-ph]]. * (30) J. A. Casas and A. Ibarra, Nucl. Phys. B 618 (2001), 171-204 [arXiv:hep-ph/0103065 [hep-ph]]. * (31) A. Ahriche, A. Jueid and S. Nasri, Phys. Lett. B 814 (2021), 136077 [arXiv:2007.05845 [hep-ph]]. * (32) W. Grimus, L. Lavoura, O. M. Ogreid and P. Osland, Nucl. Phys. B 801 (2008), 81-96 [arXiv:0802.4353 [hep-ph]] * (33) S. Centelles Chuliá, R. Srivastava and S. Yadav, Mod. Phys. Lett. A 38 (2023) no.7, 2350049 [arXiv:2206.11903 [hep-ph]]. * (34) H. Flacher, M. Goebel, J. Haller, A. Hocker, K. Monig and J. Stelzer, Eur. Phys. J. C 60 (2009), 543-583 [erratum: Eur. Phys. J. C 71 (2011), 1718] [arXiv:0811.0009 [hep-ph]]. * (35) P. 
Asadi, C. Cesarotti, K. Fraser, S. Homiller and A. Parikh, Phys. Rev. D 108 (2023) no.5, 055026 [arXiv:2204.05283 [hep-ph]]. * (36) The LHC Higgs Working Group: https://twiki.cern.ch/twiki/bin/view/LHCPhysics/LHCHWG * (37) A. Djouadi, Phys. Rept. 457 (2008), 1-216 [arXiv:hep-ph/0503172 [hep-ph]]. * (38) S. Dawson, S. Dittmaier and M. Spira, Phys. Rev. D 58, 115012 (1998) [arXiv:hep-ph/9805244 [hep-ph]]. * (39) S. Borowka, N. Greiner, G. Heinrich, S. P. Jones, M. Kerner, J. Schlenk, U. Schubert and T. Zirke, Phys. Rev. Lett. 117, no.1, 012001 (2016) [erratum: Phys. Rev. Lett. 117, no.7, 079901 (2016)] [arXiv:1604.06447 [hep-ph]]. * (40) J. Baglio, F. Campanario, S. Glaus, M. Mühlleitner, M. Spira and J. Streicher, Eur. Phys. J. C 79, no.6, 459 (2019) [arXiv:1811.05692 [hep-ph]]. * (41) D. de Florian and J. Mazzitelli, Phys. Rev. Lett. 111, 201801 (2013) [arXiv:1309.6594 [hep-ph]]. * (42) D. Y. Shao, C. S. Li, H. T. Li and J. Wang, JHEP 07, 169 (2013) [arXiv:1301.1245 [hep-ph]]. * (43) D. de Florian and J. Mazzitelli, JHEP 09, 053 (2015) [arXiv:1505.07122 [hep-ph]]. * (44) M. Grazzini, G. Heinrich, S. Jones, S. Kallweit, M. Kerner, J. M. Lindert and J. Mazzitelli, JHEP 05, 059 (2018) [arXiv:1803.02463 [hep-ph]]. * (45) J. Baglio, F. Campanario, S. Glaus, M. Mühlleitner, J. Ronca and M. Spira, Phys. Rev. D 103, no.5, 056002 (2021) [arXiv:2008.11626 [hep-ph]]. * (46) J. Baglio, A. Djouadi, R. Gröber, M. M. Mühlleitner, J. Quevillon and M. Spira, JHEP 04, 151 (2013) [arXiv:1212.5581 [hep-ph]]. * (47) R. Frederix, S. Frixione, V. Hirschi, F. Maltoni, O. Mattelaer, P. Torrielli, E. Vryonidou and M. Zaro, Phys. Lett. B 732, 142-149 (2014) [arXiv:1401.7340 [hep-ph]]. * (48) L. S. Ling, R. Y. Zhang, W. G. Ma, L. Guo, W. H. Li and X. Z. Li, Phys. Rev. D 89, no.7, 073001 (2014) [arXiv:1401.7754 [hep-ph]]. * (49) F. A. Dreyer and A. Karlberg, Phys. Rev. D 99, no.7, 074028 (2019) [arXiv:1811.07918 [hep-ph]]. * (50) F. A. Dreyer and A. Karlberg, Phys. Rev. D 98, no.11, 114016 (2018) [arXiv:1811.07906 [hep-ph]]. * (51) N. Baouche, A. Ahriche, G. Faisel and S. Nasri, Phys. Rev. D 104 (2021) no.7, 075022 [arXiv:2105.14387 [hep-ph]]. * (52) A. Ahriche, A. Arhrib and S. Nasri, Phys. Lett. B 743 (2015), 279-283 [arXiv:1407.5283 [hep-ph]]. * (53) M. Spira, [arXiv:hep-ph/9510347 [hep-ph]]. * (54) S. Kanemura, Y. Okada, E. Senaha and C. P. Yuan, Phys. Rev. D 70 (2004), 115002 [arXiv:hep-ph/0408364 [hep-ph]]. * (55) G. Aad et al. [ATLAS and CMS], JHEP 08 (2016), 045 [arXiv:1606.02266 [hep-ex]]. * (56) [ATLAS], ATLAS-CONF-2020-052.
# Efficient Spectrum Sharing Between Coexisting OFDM Radar and Downlink Multiuser Communication Systems Jia Zhu, Yifeng Xiong, Junsheng Mu, Ronghui Zhang and Xiaojun Jing ###### Abstract This paper investigates the problem of joint subcarrier and power allocation in the coexistence of radar and multi-user communication systems. Specifically, in our research scenario, the base station (BS) provides information transmission services for multiple users while ensuring that its interference to a separate radar system does not affect the radar’s normal operation. To this end, we propose a subcarrier and power allocation scheme based on orthogonal frequency division multiplexing (OFDM). The original problem, which involves multivariate fractional programming and binary variables, is highly non-convex. To handle this complexity, we relax the binary constraint by introducing a penalty term, in a way that does not affect the optimal solution. Then, by integrating multiple power variables into one matrix, the original problem is reformulated as a multi-ratio fractional programming (FP) problem, and finally a quadratic transform is employed to convert the non-convex problem into a sequence of convex problems. The numerical results indicate the performance trade-off between the multi-user communication system and the radar system; notably, in the presence of radar interference, the communication performance does not improve once the transmit power exceeds a certain threshold. This provides a useful insight for the energy-efficient design of the system. ###### Index Terms: Radar-communication coexistence, sum rate, resource allocation, OFDM. ## I Introduction Radar-Communication Coexistence (RCC) has become a trend in future wireless system development, as both radar and communication systems evolve towards higher frequency bands, larger antenna arrays and miniaturisation, becoming increasingly similar in hardware architecture, channel characteristics and signal processing [1, 2, 3, 4]. A well-designed coexistence scheme can then provide both high quality wireless communication services and reliable radar sensing capabilities. This motivates the study of resource allocation (RA), especially the spectrum sharing between communication and radar systems [5, 6, 7]. Spectrum sharing requires a judicious allocation to mitigate interference and optimize resource utilization for RCC. Due to the difference in the functional purposes and performance metrics of communication and radar systems, their performance cannot be optimized using a unified utility function. Instead, one typically maximizes the performance of the radar (resp. communication) system under constraints that guarantee the communication (resp. radar) function and respect the resource budget. To accomplish the desired objectives, current research methods can be broadly classified into three categories. The first is a radar-centric design, which achieves coexistence by limiting the interference of the radar system on the coexisting communication system [8, 9]. In a similar vein, communication-centric approaches have been proposed in several recent studies, which aim at eliminating radar interference through a priori knowledge or receiver design [10, 11, 12]. Finally, the third category jointly optimizes the coexisting systems to ensure that both communication and radar performance are satisfactory [13, 14, 15, 16]. 
In this paper, we consider spectrum sharing between a single base station and a radar system, both using OFDM, where the base station serves multiple communication users simultaneously. In related research on RCC spectrum sharing, either only the spectrum resources or only the power is optimized, and the presence of multiple users is often ignored when joint optimization is considered. Unlike previous methods, our algorithm not only focuses on the performance trade-off between radar and communication systems, but also considers the multi-user situation. To this end, we design an effective and practical RA scheme that satisfies the radar performance requirement while maximizing the multi-user sum rate. The main contributions of this work are summarized as follows: * • Different from the existing research work on spectrum sharing between radar and communication systems, we focus on resource allocation when a multi-user communication system and radar sensing coexist. * • The formulated optimization problem is highly non-convex due to the presence of coupled variables and binary variables. We eliminate the binary variables using a continuous reformulation that yields identical solutions, and then transform the non-convex problem into a sequence of convex problems via the quadratic transform. * • The numerical simulation results demonstrate the effectiveness of the algorithm. Interestingly, we observe that under the radar performance constraint and interference, the communication sum rate does not improve as the allocated power grows beyond a certain threshold; this provides useful insights for energy-efficient design in practical applications. The remainder of this paper is organized as follows. In Section II, we describe the system model and the optimization formulation. The subcarrier and power allocation algorithm is described in Section III. We evaluate the performance of the proposed algorithm by simulations in Section IV. Finally, we conclude the paper in Section V. ## II System Descriptions and Problem Formulation ### II-A System Descriptions Figure 1: Diagram for co-existence of OFDM radar and downlink communication systems. As depicted in Figure 1, we consider a scenario where communication and radar coexist, in which both the communication system and the radar system employ OFDM waveforms with $N$ subcarriers. The BS provides service to $K$ downlink communication users (CUs). The channels are assumed to be stationary over the observation period, and perfect channel state information for both the communication and radar channels is obtained in advance. The radar steers its beam towards the potential target area according to the acquired a priori knowledge, so the radar signal does not interfere with the CUs directly, but only indirectly through target scattering. For the downlink communication system, CUs not only receive communication signals from the BS, but also receive interference signals from the radar system in the same frequency band. In particular, for CU $k$, the received signal can be represented as $\bm{y}_{k}^{c}=\sum_{n=1}^{N}f_{n,k}\left(x_{k}P_{n}^{c}h_{n,k}^{2}+x_{r}P_{n}^{r}s_{n,k}^{2}+m_{k}\right)$ (1) where $f_{n,k}$ is the subcarrier allocation indicator: $f_{n,k}=1$ indicates that subcarrier $n$ is assigned to CU $k$, and $f_{n,k}=0$ otherwise. $\bm{p}^{c}=[P_{1}^{c},P_{2}^{c},\dots,P_{N}^{c}]^{T}$ is the transmit power vector of the communication system, where $P_{n}^{c}$ is the power allocated to subcarrier $n$. 
$\bm{p}^{r}=[P_{1}^{r},P_{2}^{r},\dots,P_{N}^{r}]^{T}$ is the transmit power vector of the radar system, where $P_{n}^{r}$ is the power allocated to subcarrier $n$. $h_{n,k}$ is the channel gain from the BS to user $k$ on subcarrier $n$. $s_{n,k}$ is the interference channel gain from the radar transmitter to communication receiver $k$ on subcarrier $n$. $x_{k}$ is the symbol transmitted on subcarrier $n$ to CU $k$; the symbol streams $x_{k}$ are statistically independent with distribution $\mathcal{CN}(0,1)$. $x_{r}$ is the radar symbol transmitted on subcarrier $n$; the radar symbols $x_{r}$ are also statistically independent with distribution $\mathcal{CN}(0,1)$. $m_{k}$ denotes the additive noise at CU $k$, assumed to be distributed as $\mathcal{CN}(0,\sigma_{c,k}^{2})$. With these definitions, the achievable data rate of CU $k$ on subcarrier $n$ is given by $R_{n,k}=f_{n,k}\log_{2}\left(1+\frac{h_{n,k}^{2}P_{n}^{c}}{s_{n,k}^{2}P_{n}^{r}+\sigma_{c,k}^{2}}\right)$ (2) Summing over the subcarriers, the total rate of CU $k$ is $R_{k}=\sum_{n=1}^{N}f_{n,k}\log_{2}\left(1+\frac{h_{n,k}^{2}P_{n}^{c}}{s_{n,k}^{2}P_{n}^{r}+\sigma_{c,k}^{2}}\right)$ (3) The received data at the radar receiver can be expressed as $\bm{y}_{r}=\sum_{n=1}^{N}(x_{r}P_{n}^{r}g_{n}^{2}+\sum_{k=1}^{K}f_{n,k}x_{k}P_{n}^{c}u_{n}^{2}+m_{r})$ (4) where $g_{n}$ is the channel gain of the radar system on subcarrier $n$, $u_{n}$ is the interference channel gain from the BS to the radar receiver on subcarrier $n$, and $m_{r}$ denotes the additive noise at the radar receiver, assumed to be distributed as $\mathcal{CN}(0,\sigma_{r}^{2})$. To ensure the normal operation of the radar, the signal-to-interference-plus-noise ratio (SINR) at the radar receiver must not be lower than a specified threshold, $\mathrm{SINR}=\frac{\sum_{n=1}^{N}g_{n}^{2}P_{n}^{r}}{\sum_{n=1}^{N}(\sum_{k=1}^{K}u_{n}^{2}f_{n,k}P_{n}^{c}+\sigma_{r}^{2})}\geq\mu.$ (5) We have thus obtained the signal model for the communication system serving multiple users and for the radar system sensing a single target. Next, we formulate the resource allocation task as an optimization problem and solve for its optimal solution. ### II-B Optimization Problem Formulation We choose the sum rate of the CUs as the optimization metric, while ensuring that the SINR of the radar system is above a preset threshold and that the power constraints of the systems are satisfied. The optimization problem is formulated as follows: $\displaystyle\max\ \ \sum_{k=1}^{K}\sum_{n=1}^{N}f_{n,k}\log_{2}\left(1+\frac{h_{n,k}^{2}P_{n}^{c}}{s_{n,k}^{2}P_{n}^{r}+\sigma_{c,k}^{2}}\right)$ (6a) $\displaystyle\mathrm{s.t.}\ \ f_{n,k}\in\\{0,1\\},\forall n\in[1,2,\dots,N],\forall k\in[1,2,\dots,K]$ (6b) $\displaystyle\quad\ \ \sum_{k=1}^{K}f_{n,k}\leq 1,\forall n\in[1,2,\dots,N]$ (6c) $\displaystyle\quad\ \ \mathrm{SINR}\geq\mu,$ (6d) $\displaystyle\quad\ \ \sum_{k=1}^{K}\sum_{n=1}^{N}f_{n,k}P_{n}^{c}\leq P_{c}^{\max},$ (6e) $\displaystyle\quad\ \ \sum_{n=1}^{N}P_{n}^{r}\leq P_{r}^{\max},$ (6f) $\displaystyle\quad\ \ 0\leq P_{n}^{c}\leq P_{c},\forall n\in[1,2,\dots,N]$ (6g) $\displaystyle\quad\ \ 0\leq P_{n}^{r}\leq P_{r},\forall n\in[1,2,\dots,N]$ (6h) Constraints (6b) and (6c) ensure that each subcarrier is allocated to at most one CU. Constraint (6d) enforces the minimum SINR required for radar sensing. $P_{c}^{\max}$ in (6e) and $P_{r}^{\max}$ in (6f) are the maximum transmit powers of the communication and radar transmitters, respectively. 
Constraints (6e) and (6f) guarantee that the transmit powers of the communication and radar transmitters cannot go beyond their maximum limits. $P_{c}$ and $P_{r}$ represent the peak power constraints of communication subcarriers and radar subcarriers, respectively. It should be highlighted that constraints (6g) and (6h) have the effect of preventing the concentration of system power on one or a few subcarriers, thus avoiding the loss of the frequency diversity advantage and the decrease of distance resolution in multi-carrier systems[17, 18], as well as preventing subcarrier interference caused by excessive peak power[19], which is practical and necessary. Problem (6) is a mixed-integer non-convex optimization problem and is seemingly intractable. In particular, the non-convex combinatorial objective function (6a), the non-convex constraint (6d) and the binary selection constraint (6b) are the main obstacles for the design of the resource allocation algorithm. Nevertheless, despite these challenges, in the next section, we will provide an efficient algorithm yielding a near-optimal solution to problem (6). ## III Maximization Sum Rate based Allocation Design In this section, we reformulate problem (6) by applying FP [20]. Firstly, we relax the binary variable $f_{n,k}$ to a continuous variable and introduce a penalty term to ensure that the optimal solution of (6) is not altered. Then, we merge the two types of variables into one matrix and use FP to solve problem (6). ### III-A Equivalent continuous reformulation Firstly, an auxiliary variable $w_{n,k}=f_{n,k}P^{c}_{n}\in\left(0,P^{c}_{n}\right)$ is introduced to make the problem statement more concise. $w_{n,k}=P^{c}_{n}$ indicates that subcarrier $n$ is allocated to CU $k$ with power $P^{c}_{n}$. By allowing $w_{n,k}$ to take continuous values in $\left(0,P^{c}_{n}\right)$, the communication rate (2) may be rewritten as $\displaystyle R_{n,k}=\log_{2}\left(1+\frac{h_{n,k}^{2}f_{n,k}P_{n}^{c}}{s_{n,k}^{2}P_{n}^{r}+\eta\sum_{i\neq k}^{K}h_{n,k}^{2}f_{n,i}P_{n}^{c}+\sigma_{c,k}^{2}}\right)$ $\displaystyle=\log_{2}\left(1+\frac{h_{n,k}^{2}w_{n,k}}{s_{n,k}^{2}P_{n}^{r}+\eta\sum_{i\neq k}^{K}h_{n,k}^{2}w_{n,i}+\sigma_{c,k}^{2}}\right),$ (7) where $\eta\sum_{i\neq k}^{K}h_{n,k}^{2}w_{n,i}$ is a penalty term representing the interference caused by subcarrier multiplexing. In particular, if the constraints (6b) and (6c) are satisfied, the value of the penalty term is zero. In fact, the optimal solutions of the relaxed problem always have zero penalty terms for appropriate choices of $\eta$, as indicated by the following proposition: ###### Proposition 1 Optimization problems (6) and (8) are equivalent for all feasible solutions when $\eta\geq 1/2$. $\displaystyle\max\limits_{w_{n,k},P_{n}^{r}}\ \ \sum_{k=1}^{K}\sum_{n=1}^{N}\log_{2}\left(1+\frac{h_{n,k}^{2}w_{n,k}}{s_{n,k}^{2}P_{n}^{r}+\eta\sum_{i\neq k}^{K}h_{n,k}^{2}w_{n,i}+\sigma_{c,k}^{2}}\right)$ (8a) $\displaystyle\mathrm{s.t.}\ \ \mathrm{SINR}\geq\mu$ (8b) $\displaystyle\quad\ \ \sum_{k=1}^{K}\sum_{n=1}^{N}w_{n,k}\leq P_{c}^{\max},$ (8c) $\displaystyle\quad\ \ \sum_{n=1}^{N}P_{n}^{r}\leq P_{r}^{\max}$ (8d) $\displaystyle\quad\ \ 0\leq w_{n,k}\leq P_{c},\forall n\in[1,2,\dots,N],\forall k\in[1,2,\dots,K]$ (8e) $\displaystyle\quad\ \ 0\leq P_{n}^{r}\leq P_{r},\forall n\in[1,2,\dots,N]$ (8f) ###### Proof: Assume the total communication transmit power allocated to subcarrier $n$ is $W_{n}$, i.e., $\sum_{k=1}^{K}w_{n,k}=W_{n}$. 
Denote by $\delta_{n,k}\triangleq\sum_{i\neq k}^{K}w_{n,i}$ the power allocated to the other users on subcarrier $n$. The communication rate of user $k$ on subcarrier $n$ in (2) can be rewritten as $\displaystyle R_{n,k}$ $\displaystyle=\log_{2}\left(1+\frac{h_{n,k}^{2}w_{n,k}}{s_{n,k}^{2}P^{r}_{n}+\eta h_{n,k}^{2}\delta_{n,k}+\sigma_{c,k}^{2}}\right)$ $\displaystyle=\log_{2}\left(1+\frac{W_{n}-\delta_{n,k}}{s_{n,k}^{2}h_{n,k}^{-2}P^{r}_{n}+\eta\delta_{n,k}+\sigma_{c,k}^{2}h_{n,k}^{-2}}\right).$ (9) Let us first consider the scenario that there are only two users ($K=2$). We are interested in the condition under which the following holds $\displaystyle\log_{2}\left(1\\!+\\!\frac{W_{n}}{\zeta_{n,1}}\right)\\!\geq$ $\displaystyle\\!\log_{2}\left(1+\frac{W_{n}-\delta_{n,1}}{\zeta_{n,1}\\!+\\!\eta\delta_{n,1}}\right)\\!+$ $\displaystyle\\!\log_{2}\left(1\\!+\\!\frac{\delta_{n,1}}{\zeta_{n,2}+\eta(W_{n}-\delta_{n,1})}\right),$ (10) namely that it is better not to share the power between the two users, where $\zeta_{n,1}=\frac{s_{n,1}^{2}}{h_{n,1}^{2}}P^{r}_{n}+\frac{\sigma_{c,1}^{2}}{h_{n,1}^{2}}$ and $\zeta_{n,2}=\frac{s_{n,2}^{2}}{h_{n,2}^{2}}P^{r}_{n}+\frac{\sigma_{c,2}^{2}}{h_{n,2}^{2}}$, and $\zeta_{n,1}\leq\zeta_{n,2}$. After some algebra, we see that (10) holds whenever (11) holds. The equality is clearly achieved when $\delta_{n,1}=0$. Next, we wish to investigate the condition under which (11) holds for all $\delta_{n,1}\geq 0$. To this end, it suffices to show that $f^{\prime}(\delta_{n,1})\leq 0,~{}\forall\delta_{n,1}\geq 0$. Taking the derivative of $f(\delta_{n,1})$ with respect to $\delta_{n,1}$, we have (12). By inspection, (13) holds. In other words, as long as $\frac{-2\eta(W_{n}-\delta_{n,1})+2\eta\delta_{n,1}+W_{n}-2\delta_{n,1}-\zeta_{n,2}+\zeta_{n,1}}{(\eta\delta_{n,1}+\zeta_{n,1})(\zeta_{n,1}(W_{n}-\delta_{n,1})+\zeta_{n,2})}\leq 0,$ the condition $f^{\prime}(\delta_{n,1})\leq 0,~{}\forall\delta_{n,1}\geq 0$ will certainly be satisfied. Thus, we see that a sufficient condition for $f^{\prime}(\delta_{n,1})\leq 0,~{}\forall\delta_{n,1}\geq 0$ is $\eta\geq\frac{1}{2}+\frac{\zeta_{n,1}-\zeta_{n,2}}{W_{n}-2\delta_{n,1}}.$ (14) Since the term $\frac{\zeta_{n,1}-\zeta_{n,2}}{W_{n}-2\delta_{n,1}}\leq 0$, we may conclude that $\eta\geq\frac{1}{2}$ is sufficient for (10) to hold for any $\delta_{n,1}\geq 0$. We may extend the result to the case of $K=3$. By viewing $W_{n}-\delta_{n,1}$ as the total power (denoted by $\widetilde{W}_{n}$), we are interested in the condition under which the following holds $\displaystyle\log_{2}\left(1\\!+\\!\frac{\widetilde{W}_{n}}{\widetilde{\zeta}_{n,1}}\right)\\!\geq$ $\displaystyle\\!\log_{2}\left(1+\frac{\widetilde{W}_{n}-\delta_{n,2}}{\widetilde{\zeta}_{n,1}\\!+\\!\eta\delta_{n,2}}\right)\\!+$ (15) $\displaystyle\\!\log_{2}\left(1\\!+\\!\frac{\delta_{n,2}}{\widetilde{\zeta}_{n,2}+\eta(\widetilde{W}_{n}-\delta_{n,2})}\right),$ where $\zeta_{n,1}+(1/2)\delta_{n,1}$ and $\zeta_{n,2}+(1/2)\delta_{n,1}$ are viewed as the noise-plus-interference terms (denoted by $\widetilde{\zeta}_{n,1}$ and $\widetilde{\zeta}_{n,2}$, respectively, with $\widetilde{\zeta}_{n,1}\leq\widetilde{\zeta}_{n,2}$). After some algebra, we see that (15) holds whenever (16) holds. Similarly to the case $K=2$, the equality is clearly achieved when $\delta_{n,2}=0$. Next, we wish to investigate the condition under which (16) holds for all $\delta_{n,2}\geq 0$. To this end, it suffices to show that $f^{\prime}(\delta_{n,2})\leq 0,~{}\forall\delta_{n,2}\geq 0$. 
Taking the derivative of $f(\delta_{n,2})$ with respect to $\delta_{n,2}$, we can derive the equivalent sufficient condition $\eta\geq\frac{1}{2}+\frac{\widetilde{\zeta}_{n,1}-\widetilde{\zeta}_{n,2}}{\widetilde{W}_{n}-2\delta_{n,2}}.$ (17) Since the term $\frac{\widetilde{\zeta}_{n,1}-\widetilde{\zeta}_{n,2}}{\widetilde{W}_{n}-2\delta_{n,2}}\leq 0$, we may conclude that $\eta\geq\frac{1}{2}$ is sufficient for (15) to hold for any $\delta_{n,2}\geq 0$. By employing the method of mathematical induction, the previous arguments can be reused to show that allocating power to $(k+1)$ users is never better than the best strategy that allocates power to $k$ users, and hence allocating power exclusively to a single user is always the optimal choice. As a conclusion, the optimization problems (6) and (8) are equivalent whenever $\eta\geq 1/2$. ∎ $\frac{W_{n}}{\zeta_{n,1}}\geq f(\delta_{n,1})=\frac{\delta_{n,1}(W_{n}-\delta_{n,1})+(W_{n}-\delta_{n,1})(\zeta_{n,2}+\eta(W_{n}-\delta_{n,1}))+\delta_{n,1}(\zeta_{n,1}+\eta\delta_{n,1})}{(\zeta_{n,1}+\eta\delta_{n,1})(\zeta_{n,2}+\eta(W_{n}-\delta_{n,1}))}$ (11) $\begin{split}f^{\prime}(\delta_{n,1})=&\frac{-2\eta(W_{n}-\delta_{n,1})+2\eta\delta_{n,1}+W_{n}-2\delta_{n,1}-\zeta_{n,2}+\zeta_{n,1}}{(\eta\delta_{n,1}+\zeta_{n,1})(\zeta_{n,1}(W_{n}-\delta_{n,1})+\zeta_{n,2})}+\\\ &\frac{\eta\left(\eta(W_{n}-\delta_{n,1})^{2}+\delta_{n,1}(\eta\delta_{n,1}+\zeta_{n,1})+\zeta_{n,2}(W_{n}-\delta_{n,1})+\delta_{n,1}(W_{n}-\delta_{n,1})\right)}{(\eta\delta_{n,1}+\zeta_{n,1})(\eta(W_{n}-\delta_{n,1})+\zeta_{n,2})^{2}}-\\\ &\frac{\eta\left(\eta(W_{n}-\delta_{n,1})^{2}+\delta_{n,1}(\eta\delta_{n,1}+\zeta_{n,1})+\zeta_{n,2}(W_{n}-\delta_{n,1})+\delta_{n,1}(W_{n}-\delta_{n,1})\right)}{(\eta\delta_{n,1}+\zeta_{n,1})^{2}(\eta(W_{n}-\delta_{n,1})+\zeta_{n,2})}\end{split}$ (12) $\begin{split}&\frac{\eta\left(\eta(W_{n}-\delta_{n,1})^{2}+\delta_{n,1}(\eta\delta_{n,1}+\zeta_{n,1})+\zeta_{n,2}(W_{n}-\delta_{n,1})+\delta_{n,1}(W_{n}-\delta_{n,1})\right)}{(\eta\delta_{n,1}+\zeta_{n,1})(\eta(W_{n}-\delta_{n,1})+\zeta_{n,2})^{2}}-\\\ &\frac{\eta\left(\eta(W_{n}-\delta_{n,1})^{2}+\delta_{n,1}(\eta\delta_{n,1}+\zeta_{n,1})+\zeta_{n,2}(W_{n}-\delta_{n,1})+\delta_{n,1}(W_{n}-\delta_{n,1})\right)}{(\eta\delta_{n,1}+\zeta_{n,1})^{2}(\eta(W_{n}-\delta_{n,1})+\zeta_{n,2})}\leq 0.\end{split}$ (13) $\frac{\widetilde{W}_{n}}{\widetilde{\zeta}_{n,1}}\geq f(\delta_{n,2})=\frac{\delta_{n,2}(\widetilde{W}_{n}-\delta_{n,2})+(\widetilde{W}_{n}-\delta_{n,2})(\widetilde{\zeta}_{n,2}+\eta(\widetilde{W}_{n}-\delta_{n,2}))+\delta_{n,2}(\widetilde{\zeta}_{n,1}+\eta\delta_{n,2})}{(\widetilde{\zeta}_{n,1}+\eta\delta_{n,2})(\widetilde{\zeta}_{n,2}+\eta(\widetilde{W}_{n}-\delta_{n,2}))}$ (16) ### III-B Sequential convex relaxation Although we have relaxed the binary variable into a continuous variable, the existence of the coupled variables $w_{n,k}$ and $P^{r}_{n}$ in (8) makes it still a non-convex problem. A common approach to solving (8) is alternating optimization: by fixing one variable and optimizing the other, the original problem is decomposed into two sub-problems. The disadvantage of this method is that each decomposed sub-problem is still a non-convex optimization problem, which has high computational complexity and for which the optimal solution is difficult to obtain. Inspired by [21], we collect the variables $w_{n,k}$ and $P^{r}_{n}$ to be optimized into a single matrix variable $\bm{P}$, which avoids alternating optimization: only the matrix variable needs to be updated to obtain the solution of the problem. 
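Before specifying this reformulation, the claim of Proposition 1 can be spot-checked numerically: for $\eta\geq 1/2$, splitting the per-subcarrier power between two users never yields a larger rate than assigning all of it to the better user, i.e., inequality (10) holds. The sketch below is only a sanity check of the algebra above, with randomly drawn placeholder values for $W_{n}$, $\zeta_{n,1}$ and $\zeta_{n,2}$.

```python
import numpy as np

rng = np.random.default_rng(0)
eta = 0.5
worst_gap = np.inf

for _ in range(10_000):
    W = rng.uniform(0.1, 10.0)                    # total communication power on the subcarrier
    z1, z2 = np.sort(rng.uniform(0.1, 5.0, 2))    # zeta_{n,1} <= zeta_{n,2} (placeholder values)
    delta = rng.uniform(0.0, W)                   # power diverted to the second user
    lhs = np.log2(1.0 + W / z1)                   # give all power to the better user
    rhs = (np.log2(1.0 + (W - delta) / (z1 + eta * delta))
           + np.log2(1.0 + delta / (z2 + eta * (W - delta))))
    worst_gap = min(worst_gap, lhs - rhs)

print(worst_gap >= -1e-12)    # True: sharing never beats exclusive allocation in the sampled cases
```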
Specifically, we define $\bm{e}_{k}=[\bm{0}_{k-1};1;\bm{0}_{K+1-k}]^{T}$, $\bm{\alpha}_{n,k}=\frac{h_{n,k}^{2}}{\sigma_{c,k}^{2}}\bm{e}_{k}$, $\bm{\beta}_{n,k}=\frac{s_{n,k}^{2}}{\sigma_{c,k}^{2}}\bm{e}_{k}$, $\bm{\xi}_{n}=\frac{g_{n}^{2}}{\sigma_{r}^{2}}\bm{e}_{K+1}$, $\bm{\gamma}_{n}=[\frac{u_{n}^{2}}{\sigma_{r}^{2}},\dots,\frac{u_{n}^{2}}{\sigma_{r}^{2}},0]^{T}$ and $\bm{P}=[\bm{w}_{1};\bm{w}_{2};\dots;\bm{w}_{K};\bm{p}^{r}]$ is a $(K+1)\times N$ matrix, with $\bm{w}_{k}=[w_{1,k},w_{2,k},\dots,w_{N,k}]^{T}$. We rewrite the rate in (7) and the SINR in (5) as $R_{k}(\bm{P})\\!=\\!\\!\sum_{n=1}^{N}\log_{2}\\!\left(\\!1\\!+\\!\frac{\bm{\alpha}_{n,k}^{T}\bm{P}\bm{v}_{n}}{\bm{\beta}_{n,k}^{T}\bm{P}\bm{v}_{n}\\!+\\!\eta\\!\sum_{i\neq k}^{K}\\!\bm{\alpha}_{n,i}^{T}\bm{P}\bm{v}_{n}\\!+\\!1}\\!\right),$ (18) $\mathrm{SINR}(\bm{P})=\frac{\sum_{n=1}^{N}\bm{\xi}_{n}^{T}\bm{P}\bm{v}_{n}}{\sum_{n=1}^{N}(\bm{\gamma}_{n}^{T}\bm{P}\bm{v}_{n}+1)},$ (19) where $\bm{v}_{n}$ is an $N$-dimensional vector with $\bm{v}_{n}(j)=1$ when $j=n$ and $\bm{v}_{n}(j)=0$ otherwise. Then, (8) can be rewritten as $\displaystyle\max\limits_{\bm{P}}\ \ \sum_{k=1}^{K}R_{k}(\bm{P})$ (20a) $\displaystyle\mathrm{s.t.}\quad\mathrm{SINR}(\bm{P})\geq\mu,\ \text{(8c)--(8f)}$ (20b) Problem (20) remains a challenging non-convex problem due to the strong interdependence of the transmit power levels of different subcarriers, as reflected in the interference terms of the SINR. We adopt the quadratic transform proposed in [20] to address multiple-ratio FP problems. By performing a quadratic transform on each SINR term, we obtain the following reformulation $\displaystyle\max\limits_{\bm{P},\bm{Y}}\ \ Q(\bm{P},\bm{Y})$ (21a) $\displaystyle\mathrm{s.t.}\quad\mathrm{SINR}(\bm{P})\geq\mu,\ \text{(8c)--(8f)}$ (21b) where $\displaystyle Q(\bm{P},\bm{Y})=\sum_{k=1}^{K}\sum_{n=1}^{N}\log_{2}\left(1+2y_{n,k}\sqrt{\bm{\alpha}_{n,k}\bm{P}\bm{v}_{n}}\right.$ (22) $\displaystyle\left.-y_{n,k}^{2}\left(\bm{\beta}_{n,k}\bm{P}\bm{v}_{n}+\eta\sum_{i\neq k}^{K}\bm{\alpha}_{n,i}\bm{P}\bm{v}_{n}+1\right)\right),$ and where $y_{n,k}=[\bm{Y}]_{n,k}$ is the auxiliary variable introduced by the quadratic transform for each CU $k$ on subcarrier $n$. We update $y_{n,k}$ and $\bm{P}$ in an iterative fashion. The optimal $y_{n,k}$ for fixed $\bm{P}$ is $y_{n,k}^{*}=\frac{\sqrt{\bm{\alpha}_{n,k}^{T}\bm{P}\bm{v}_{n}}}{\bm{\beta}_{n,k}^{T}\bm{P}\bm{v}_{n}+\eta\sum_{i\neq k}^{K}\bm{\alpha}_{n,i}^{T}\bm{P}\bm{v}_{n}+1}$ (23) Then, finding the optimal $\bm{P}$ for fixed $y_{n,k}$ is a convex problem and can be solved by off-the-shelf convex optimization solvers.
Algorithm 1: Joint Design Algorithm
---
Input: $h_{n,k}$, $s_{n,k}$, $u_{n}$, $g_{n}$, $\eta$, $m_{k}$, $m_{r}$, $\bm{p}^{c}$, $\bm{p}^{r}$.
Output: Communication power $\bm{p}^{c}$, Radar power $\bm{p}^{r}$.
Initialization: Initialize $\bm{p}^{c}$, $\bm{p}^{r}$ and $\eta$ to feasible values.
Repeat
1. Update $y_{n,k}$ via the closed-form expression (23).
2. Update $\bm{P}$ by solving the reformulated convex optimization problem (21) for fixed $y_{n,k}$.
until convergence.
## IV Simulation Results We consider a scenario where one BS serves 5 CUs randomly distributed within the cell. The main simulation parameters are listed in Table I. 
TABLE I: Simulation Parameters
Parameters | Values
---|---
Number of subcarriers | $128$
Carrier frequency | $2.4$ GHz
Cell radius | $800$ m
Noise variance $\sigma_{c,k}^{2}$ | $-105$ dB
Noise variance $\sigma_{r}^{2}$ | $-105$ dB
Maximum transmit power $P_{c}^{\max}$ | $50$ dBm
Maximum transmit power $P_{r}^{\max}$ | $45$ dBm
Maximum subcarrier power $P_{c}$ | $30$ dBm
Maximum subcarrier power $P_{r}$ | $30$ dBm
Shadowing distribution | Log-normal
Shadowing standard deviation | $8$ dB
Pathloss model | WINNER II [22]
Figure 2: Sum rate versus SINR under different values of $P_{c}^{\max}$ Figure 2 shows the sum rate (in bits per channel use, bpcu) versus radar SINR when $P_{c}^{\max}=[40,42,44,46,48]$ dBm and $P_{c}=P_{r}=30$ dBm. According to the results shown in Figure 2, the sum rate achieved by the proposed algorithm is a nonlinearly decreasing function of the radar SINR threshold. The reason for such a result is that the transmitted signal of the radar system interferes with the communication system, and as the minimum SINR required by the radar system increases, the interference to the communication system becomes more severe. The dotted line in the figure shows the communication rate in the absence of radar interference. It can be seen that, when there is no radar interference, the sum rate is higher than when radar interference is present with the same total communication power. On the other hand, when the radar SINR becomes large, the total communication power has little effect on the total rate, and the curves tend to coincide. Figure 3: Sum rate versus the maximum power of the communication system with different radar SINR constraints. Figure 3 shows the sum rate versus the total communication power when $\mathrm{SINR}=[12,16,20,24,28]$ dB and $P_{r}^{\max}=45$ dBm. We observe that the sum rate increases as the total communication power increases under different SINR constraints. However, an interesting observation is that, although the total communication power keeps increasing, the sum rate converges to a constant beyond a certain power threshold. The larger the radar SINR is, the lower the threshold will be. This indicates that the radar SINR constraint prevents the sum rate from increasing unboundedly as the total power increases. This result implies that the power of the communication system should be allocated judiciously under a given radar SINR constraint to avoid wasting power. Figure 4: Sum rate versus the maximum power of a single radar subcarrier with different radar SINR constraints. Figure 4 shows the sum rate versus the maximum per-subcarrier radar power $P_{r}$ when $P_{c}^{\max}=50$ dBm and $P_{c}=30$ dBm. Figure 4 indicates that the maximum power of a single radar subcarrier is positively correlated with the achievable sum rate of the communication system, and this holds under different SINR constraints. It can be concluded that the smaller the radar SINR constraint, the weaker the impact of the maximum per-subcarrier radar power on the sum rate. Figure 5: Sum rate versus the maximum power of a single communication subcarrier with different radar SINR constraints. In contrast to Figure 4, Figure 5 shows the impact of changing the maximum transmit power of a single communication subcarrier on the sum rate while keeping the remaining variables fixed. Overall, the change in the maximum transmission power of a single communication subcarrier has less impact on the sum rate than changing the maximum transmission power of a single radar subcarrier. 
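To make the per-iteration quantities of Algorithm 1 concrete, the sketch below implements the closed-form auxiliary update of eq. (23) and the surrogate objective of eq. (22), and verifies that, at the optimal $y_{n,k}$, the surrogate equals the true sum rate of eq. (18). The convex update of $\bm{P}$ is left to an off-the-shelf solver and is not reproduced here; all channel values and dimensions are random placeholders, with a common noise variance for simplicity.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 8, 3                                    # toy dimensions (placeholders)
h2 = rng.rayleigh(1.0, (N, K))**2              # |h_{n,k}|^2, BS -> CU k
s2 = 0.1 * rng.rayleigh(1.0, (N, K))**2        # radar -> CU k interference gains
sigma2, eta = 1e-3, 0.5

# P stacks the user power columns w_{.,k} (first K columns) and the radar powers p^r (last column).
P = np.abs(rng.normal(0.5, 0.1, (N, K + 1)))

def ratio_terms(P):
    """Per-(n,k) numerator and denominator of the SINR-like ratios in eq. (18)."""
    w, pr = P[:, :K], P[:, K]
    num = h2 * w / sigma2
    tot = (h2 * w).sum(axis=1, keepdims=True)
    den = (s2 * pr[:, None] + eta * (tot - h2 * w)) / sigma2 + 1.0
    return num, den

def y_opt(P):
    """Closed-form auxiliary update of eq. (23)."""
    num, den = ratio_terms(P)
    return np.sqrt(num) / den

def surrogate(P, Y):
    """Quadratic-transform objective Q(P, Y) of eq. (22)."""
    num, den = ratio_terms(P)
    return np.log2(1.0 + 2.0 * Y * np.sqrt(num) - Y**2 * den).sum()

Y = y_opt(P)
num, den = ratio_terms(P)
print(surrogate(P, Y))                         # surrogate value at the optimal Y ...
print(np.log2(1.0 + num / den).sum())          # ... equals the true sum rate of eq. (18)
```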
## V Conclusions In this paper, we have investigated the power allocation problem in the spectrum coexistence of radar and communication systems, where we jointly allocate the communication transmission power and radar transmission power to maximize the sum rate of CUs under the constraint of radar sensing performance. Through proper reformulation, the problem containing binary variables is transformed into an equivalent optimization problem with only continuous-valued variables, and then the computationally tedious alternating optimization is replaced by an FP optimization in vector form. Simulation results exhibit the effectiveness of the algorithm and show the trade-off between communication rate and radar SINR. Especially, the interesting result that the sum rate does not increase with the total power beyond certain thresholds can be useful for the design of energy-efficient RCC systems. ## References * [1] A. Hassanien, M. G. Amin, E. Aboutanios, and B. Himed, “Dual-function radar communication systems: A solution to the spectrum congestion problem,” _IEEE Signal Process. Mag_ , vol. 36, no. 5, pp. 115–126, 2019. * [2] K.-W. Huang, M. Bică, U. Mitra, and V. Koivunen, “Radar waveform design in spectrum sharing environment: Coexistence and cognition,” in _Proc. 2015 IEEE Radar Conference (RadarCon)_ , Arlington, VA, USA, 2015, pp. 1698–1703. * [3] Y. Cui, F. Liu, X. Jing, and J. Mu, “Integrating sensing and communications for ubiquitous IoT: Applications, trends, and challenges,” _IEEE Netw._ , vol. 35, no. 5, pp. 158–167, 2021. * [4] F. Liu, C. Masouros, A. Li, H. Sun, and L. Hanzo, “MU-MIMO communications with MIMO radar: From co-existence to joint transmission,” _Trans. Wireless Commun._ , vol. 17, no. 4, pp. 2755–2770, 2018. * [5] C. Ding, J.-B. Wang, H. Zhang, M. Lin, and G. Y. Li, “Joint MIMO precoding and computation resource allocation for dual-function radar and communication systems with mobile edge computing,” _J. Sel. Areas Commun._ , vol. 40, no. 7, pp. 2085–2102, 2022. * [6] J. Lee, Y. Cheng, D. Niyato, Y. L. Guan, and D. González G., “Intelligent resource allocation in joint radar-communication with graph neural networks,” _Trans. Veh. Technol._ , vol. 71, no. 10, pp. 11 120–11 135, 2022. * [7] J. Chen, X. Wang, and Y.-C. Liang, “Impact of channel aging on dual-function radar-communication systems: Performance analysis and resource allocation,” _IEEE Trans. Commun._ , pp. 1–1, 2023. * [8] A. Aubry, A. De Maio, Y. Huang, M. Piezzo, and A. Farina, “A new radar waveform design algorithm with improved feasibility for spectral coexistence,” _IEEE Trans. Aerosp. Electron. Syst._ , vol. 51, no. 2, pp. 1029–1038, 2015. * [9] L. G. de Oliveira, B. Nuss, M. B. Alabd, A. Diewald, M. Pauli, and T. Zwick, “Joint radar-communication systems: Modulation schemes and system design,” _IEEE Trans. Microw. Theory Techn._ , vol. 70, no. 3, pp. 1521–1551, 2021\. * [10] F. Liu, C. Masouros, A. Li, T. Ratnarajah, and J. Zhou, “MIMO radar and cellular coexistence: A power-efficient approach enabled by interference exploitation,” _IEEE Trans. Signal Process._ , vol. 66, no. 14, pp. 3681–3695, 2018. * [11] N. Nartasilpa, A. Salim, D. Tuninetti, and N. Devroye, “Communications system performance and design in the presence of radar interference,” _IEEE Trans. Commun._ , vol. 66, no. 9, pp. 4170–4185, 2018. * [12] F. Wang, H. Li, and M. A. Govoni, “Power allocation and co-design of multicarrier communication and radar systems for spectral coexistence,” _IEEE Trans. Signal Process._ , vol. 67, no. 
14, pp. 3818–3831, 2019. * [13] B. Li and A. P. Petropulu, “Joint transmit designs for coexistence of MIMO wireless communications and sparse sensing radars in clutter,” _IEEE Trans. Aerosp. Electron. Syst._ , vol. 53, no. 6, pp. 2846–2864, 2017. * [14] L. Zheng, M. Lops, X. Wang, and E. Grossi, “Joint design of overlaid communication systems and pulsed radars,” _IEEE Trans. Signal Process._ , vol. 66, no. 1, pp. 139–154, 2017. * [15] F. Wang and H. Li, “Joint power allocation for radar and communication co-existence,” _IEEE Signal Process. Lett._ , vol. 26, no. 11, pp. 1608–1612, 2019. * [16] Y. Xiong, F. Liu, Y. Cui, W. Yuan, T. X. Han, and G. Caire, “On the fundamental tradeoff of integrated sensing and communications under Gaussian channels,” _IEEE Trans. Inf. Theory_ , Early Access 2023\. * [17] S. Sen, G. Tang, and A. Nehorai, “Multiobjective optimization of OFDM radar waveform for target detection,” _IEEE Trans. Signal Process._ , vol. 59, no. 2, pp. 639–652, 2011. * [18] Y. L. Sit, B. Nuss, and T. Zwick, “On mutual interference cancellation in a MIMO OFDM multiuser radar-communication network,” _Trans. Veh. Technol._ , vol. 67, no. 4, pp. 3339–3348, 2018. * [19] N. Papandreou and T. Antonakopoulos, “Bit and power allocation in constrained multicarrier systems: The single-user case,” _EURASIP J. Adv. Signal Process._ , vol. 2008, pp. 1–14, 2007. * [20] K. Shen and W. Yu, “Fractional programming for communication systems—part I: Power control and beamforming,” _IEEE Trans. Signal Process._ , vol. 66, no. 10, pp. 2616–2630, 2018. * [21] F. Wang and H. Li, “Power allocation for coexisting multicarrier radar and communication systems in cluttered environments,” _IEEE Trans. Signal Process._ , vol. 69, pp. 1603–1613, 2021. * [22] Y. d. J. Bultitude and T. Rautiainen, “IST-4-027756 WINNER II D1. 1.2 V1. 2 WINNER II Channel Models,” _EBITG, TUI, UOULU, CU/CRC, NOKIA, Tech. Rep_ , 2007.
# Uniform (d+1)-bundle over the Grassmannian G(d,n) Rong Du and Yuhang Zhou School of Mathematical Sciences Shanghai Key Laboratory of PMMP, East China Normal University, Rm. 312, Math. Bldg, No. 500, Dongchuan Road, Shanghai, 200241, P. R. China<EMAIL_ADDRESS>School of Mathematical Sciences Shanghai Key Laboratory of PMMP, East China Normal University, No. 500, Dongchuan Road, Shanghai, 200241, P. R. China, <EMAIL_ADDRESS>Both authors are sponsored by Innovation Action Plan (Basic research projects) of Science and Technology Commission of Shanghai Municipality (Grant No. 21JC1401900), Natural Science Foundation of Chongqing Municipality, China (general program, Grant No. CSTB2023NSCQ-MSX0334) and Science and Technology Commission of Shanghai Municipality (Grant No. 22DZ2229014). ###### Abstract This paper is dedicated to the classification of uniform vector bundles of rank $d+1$ over the Grassmannian $G(d,n)$ ($d\leq n-d$) over an algebraically closed field in characteristic $0$. Specifically, we show that all uniform vector bundles with rank $d+1$ over $G(d,n)$ are homogeneous. (Dedicate to the innocent civilians who sacrificed in the war) ## 1 Introduction Algebraic vector bundles over a projective variety $X$ over an algebraically closed field $k$ are basic research objects in algebraic geometry. If $X$ is $\mathbb{P}^{1}$, then any vector bundle over $X$ splits as a direct sum of line bundles by Grothendieck’s well known result. However, if $X$ is a projective space of dimension greater than one, then the structures of vector bundles over $X$ are not so easy to be determined. Since any projective space is covered by lines, it is a natural way to consider the restriction of vector bundles to lines in it. If the splitting type of a vector bundle $E$ keeps same when it restricts to any line in the projective space $\mathbb{P}^{n}$, then $E$ is called a uniform vector bundle on $\mathbb{P}^{n}$. The notion of a uniform vector bundle appears first in Schwarzenberger’ paper ([16]). Over the field in characteristic zero, much work has been done on the classification of uniform vector bundles over projective spaces. In 1972, Van de Ven ([17]) proved that for $n>2$, uniform 2-bundles over $\mathbb{P}^{n}$ split and uniform 2-bundles over $\mathbb{P}^{2}$ are precisely the bundles $\mathcal{O}_{\mathbb{P}^{2}}(a)\bigoplus\mathcal{O}_{\mathbb{P}^{2}}(b)$, $T_{\mathbb{P}^{2}}(a)$ and $\Omega^{1}_{\mathbb{P}^{2}}(b)$, where $a,b\in\mathbb{Z}$. In 1976, Sato ([15]) proved that for $2<r<n$, uniform $r$-bundles over $\mathbb{P}^{n}$ split. In 1978, Elencwajg ([5]) extended the investigations of Van de Ven to show that uniform vector bundles of rank 3 over $\mathbb{P}^{2}$, up to dual, are of the forms $\mathcal{O}_{\mathbb{P}^{2}}(a)\bigoplus\mathcal{O}_{\mathbb{P}^{2}}(b)\bigoplus\mathcal{O}_{\mathbb{P}^{2}}(c),~{}T_{\mathbb{P}^{2}}(a)\bigoplus\mathcal{O}_{\mathbb{P}^{2}}(b)\quad\text{and}\quad S^{2}T_{\mathbb{P}^{2}}(a),$ where $a$, $b$, $c\in\mathbb{Z}$. Previously, Sato ([15]) had shown that for $n$ odd, uniform $n$-bundles over $\mathbb{P}^{n}$ are of the forms $\oplus_{i=1}^{n}\mathcal{O}_{\mathbb{P}^{n}}(a_{i}),~{}T_{\mathbb{P}^{n}}(a)\quad\text{and}\quad\Omega^{1}_{\mathbb{P}^{n}}(b),$ where $a_{i},a,b\in\mathbb{Z}$. So the results of Elencwajg and Sato yield a complete classification of uniform 3-bundles over $\mathbb{P}^{n}$. In particular, all uniform 3-bundles over $\mathbb{P}^{n}$ are homogeneous. 
Later, Elencwajg, Hirschowitz and Schneider ([6]) showed that Sato’s result is also true for $n$ even. Around 1982, Ellia ([7]) proved that for $n+1=4,5,6$, uniform $(n+1)$-bundles over $\mathbb{P}^{n}$ are of the form $\oplus_{i=1}^{n+1}\mathcal{O}_{\mathbb{P}^{n}}(a_{i}),~{}T_{\mathbb{P}^{n}}(a)\bigoplus\mathcal{O}_{\mathbb{P}^{n}}(b)\quad\text{and}\quad\Omega^{1}_{\mathbb{P}^{n}}(c)\bigoplus\mathcal{O}_{\mathbb{P}^{n}}(d),$ where $a_{i},a,b,c,d\in\mathbb{Z}$. Later, Ballico ([2]) showed that Ellia’s result is still true for any $n$. So uniform $(n+1)$-bundles over $\mathbb{P}^{n}$ are completely classified; in particular, they are all homogeneous. One can see from Ellia’s and Ballico’s papers that the classification problem becomes harder when the rank of a vector bundle is greater than $n$. Uniform bundles over an algebraically closed field in characteristic $0$ are widely studied not only on projective spaces, but also on special Fano manifolds of Picard number one ([1] [10] [8] [4] [3] [11] [14]). In [12], Muñoz, Occhetta and Solá Conde proposed the following problem.

###### Problem 1.1.

Classify low rank uniform principal $G$-bundles ($G$ a semisimple algebraic group) on rational homogeneous spaces.

Grassmannians are the simplest rational homogeneous spaces other than projective spaces. In 1985, Guyot ([8]) proved that for $r<d$, uniform $r$-bundles over $G(d,n)$ ($d\leq n-d$) split, and that for $r=d$ they are one of the following: $H_{d}{(a)},H_{d}^{*}{(b)}~{}\text{and}~{}\overset{k}{\underset{i=1}{\oplus}}\mathcal{O}_{G(d,n)}{(c_{i})},$ where $a,b,c_{i}\in\mathbb{Z}$. For $r>d$, the classification problem still remains open. In this paper, we classify uniform $(d+1)$-bundles over $G(d,n)$ ($d\leq n-d$) over an algebraically closed field of characteristic $0$ and deduce the following main theorem.

###### Theorem 1.2.

Let $E$ be a uniform $(d+1)$-bundle over the Grassmannian $G(d,n)$ over an algebraically closed field in characteristic zero, where $2\leq d\leq n-d$. Then $E$ is isomorphic to one of the following: $H_{d}{(a)}\bigoplus\mathcal{O}_{G(d,n)}{(b)},H_{d}^{*}{(c)}\bigoplus\mathcal{O}_{G(d,n)}{(d)},\overset{k}{\underset{i=1}{\oplus}}\mathcal{O}^{\oplus r_{i}}_{G(d,n)}{(e_{i})},Q_{n-d}(s),Q_{n-d}^{*}(t)~{}\text{and}~{}S^{2}H_{2}(f)$ where $a,b,c,d,e_{i},s,t,f\in\mathbb{Z}$ and $H_{d}$ is the universal subbundle over $G(d,n)$.

###### Corollary 1.3.

All uniform vector bundles of rank $d+1$ over $G(d,n)$ over an algebraically closed field in characteristic $0$ are homogeneous.

## 2 Preliminaries

### 2.1 Grassmannian and Flag variety

Let $V=k^{n}$, where $k$ is an algebraically closed field. Denote by $G(d,n)$ ($d\leq n-d$) the Grassmannian of $d$-dimensional linear subspaces of $V$. Let $\mathcal{V}:=G(d,n)\times V$ be the trivial vector bundle of rank $n$ on $G(d,n)$ whose fiber at every point is the vector space $V$.
We write $H_{d}$ for the rank $d$ subbundle of $\mathcal{V}$ whose fiber at a point $[\Lambda]\in G(d,n)$ is the subspace $\Lambda$ itself; that is, $(H_{d})_{[\Lambda]}=\Lambda\subseteq V=\mathcal{V}_{[\Lambda]}.$ $H_{d}$ is called the _universal subbundle_ on $G(d,n)$; the rank $n-d$ quotient bundle $Q_{n-d}=\mathcal{V}/H_{d}$ is called the _universal quotient bundle_, i.e., they fit into the exact sequence $0\rightarrow H_{d}\rightarrow\mathcal{V}\rightarrow Q_{n-d}\rightarrow 0.$ (1)

The flag variety $F(d_{1},d_{2},\cdots,d_{s},n)$ is a manifold parameterizing increasing chains of $k$-linear subspaces of $V$: $F(d_{1},d_{2},\cdots,d_{s},n)=\\{(A_{1},\cdots,A_{s})\in G(d_{1},n)\times\cdots\times G(d_{s},n)|A_{1}\subset A_{2}\subset\cdots\subset A_{s},\text{dim}A_{i}=d_{i},i=1,\dots,s\\}$. When $s=n$ and $d_{i}=i$, we call the flag variety $F(1,2,\cdots,n-1,n)$ the complete flag variety. There are universal subbundles $H_{d_{i}}(1\leq i\leq s)$ of rank $d_{i}$ over $F(d_{1},d_{2},\cdots,d_{s},n)$ whose fiber at the point $(A_{1},\cdots,A_{s})$ is $A_{d_{i}}$. In fact, there are natural projections from $F(d_{1},d_{2},\cdots,d_{s},n)$ to $G(d_{i},n)$. The pullback of the universal subbundle $H_{d_{i}}$ under the projection map is the universal subbundle over $F(d_{1},d_{2},\cdots,d_{s},n)$, which is still denoted by $H_{d_{i}}$ when there is no risk of confusion. In particular, we have universal subbundles $H_{i},1\leq i\leq n$ over $F(1,2,\cdots,n-1,n)$, and we denote $X_{i}:=c_{1}(H_{i}/H_{i-1})$. There are also universal quotient bundles $Q_{n-d_{i}}(1\leq i\leq s)$ of rank $n-d_{i}$ over $F(d_{1},d_{2},\cdots,d_{s},n)$ whose fiber at the point $(A_{1},\cdots,A_{s})$ is $V/A_{d_{i}}$. The Chow ring of a flag variety can be found in [8] Theorem 3.2.

###### Theorem 2.1 ([8] Theorem 3.2).

The Chow ring $A(F)$ of the flag variety $F=F(d_{1},d_{2},\cdots,d_{k},n)$ is $\mathbb{Z}[X_{1},\cdots,X_{d_{1}};X_{d_{1}+1},\cdots,X_{d_{2}};\cdots;X_{d_{k}+1},\cdots,X_{n}]/I$, where $\mathbb{Z}[X_{1},\cdots,X_{d_{1}};X_{d_{1}+1},\cdots,X_{d_{2}};\cdots;X_{d_{k}+1},\cdots,X_{n}]$ is the ring of polynomials that are symmetric about $X_{d_{i}+1},\cdots,X_{d_{i+1}}$, where $0\leq i\leq k-1$, if we assume $d_{0}=0$, and $I$ is the ideal generated by $\underset{a_{1}+a_{2}+\cdots+a_{d_{k}}=j}{\sum}X_{1}^{a_{1}}X_{2}^{a_{2}}\cdots X_{d_{k}}^{a_{d_{k}}},$ where $a_{t}\geq 0$ and $(n-d_{k})<j\leq n$.

### 2.2 Standard diagram

We have the standard commutative diagram as follows: (6) where all the morphisms in the diagram are projections: $pr_{1}$, $pr_{2}$ and $p$ are the projections from $F(d-1,d,d+1,n)$ to $F(d-1,d,n)$, $F(d,d+1,n)$ and $G(d,n)$ respectively, $q$ and $r$ are the projections from $F(d-1,d,n)$ and $F(d,d+1,n)$ to $G(d,n)$, and $p=q\circ pr_{1}=r\circ pr_{2}$. The mapping $q~{}(resp.~{}r)$ identifies $F(d-1,d,n)~{}(resp.~{}F(d,d+1,n))$ with the projective bundle $\mathbb{P}(H_{d})~{}(resp.~{}\mathbb{P}(Q_{n-d}^{*}))$ of $G(d,n)$. Let $\mathscr{H}_{H_{d}^{*}}~{}(resp.~{}\mathscr{H}_{Q_{n-d}})$ be the tautological line bundle on $F(d-1,d,d+1,n)$ associated to $F(d-1,d,n)~{}(resp.~{}F(d,d+1,n))$, i.e. $\mathscr{H}_{H_{d}^{*}}={pr_{1}}^{*}\mathcal{O}_{F(d-1,d,n)}(-1)~{}(resp.~{}\mathscr{H}_{Q_{n-d}}={pr_{2}}^{*}\mathcal{O}_{F(d,d+1,n)}(-1)).$ So ${pr_{1}}_{*}\mathscr{H}_{H_{d}^{*}}=\mathcal{O}_{F(d-1,d,n)}(-1)~{}(resp.~{}{pr_{2}}_{*}\mathscr{H}_{Q_{n-d}}=\mathcal{O}_{F(d,d+1,n)}(-1)).$ Moreover, we also have the standard diagram (11) in which $s$ denotes the projection from $F(d-1,d,d+1,n)$ to $F(d-1,d+1,n)$. The map $s$ identifies $F(d-1,d,d+1,n)$ with the projective bundle $\mathbb{P}(H_{d+1}/H_{d-1})$ over $F(d-1,d+1,n)$. Denote by $\mathcal{O}_{s}(1)$ the relative Hopf bundle associated to $H_{d+1}/H_{d-1}$. In fact, $\mathcal{O}_{s}(1)=\mathscr{H}_{H_{d}^{*}}^{*}$.
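For concreteness, the generators of the ideal $I$ in Theorem 2.1 are the sums of all monomials of a fixed degree $j$ in the first $d_{k}$ variables, for $n-d_{k}<j\leq n$. The following small SymPy sketch (an illustration added here, not part of the original paper) lists these generators for the Grassmannian $G(2,4)$, viewed as the flag variety $F(2,4)$, where $d_{k}=2$ and $n=4$.

```python
from itertools import product

import sympy as sp


def ideal_generator(xs, j):
    """Sum of all monomials X_1^{a_1} ... X_m^{a_m} with a_1 + ... + a_m = j."""
    return sp.expand(sum(sp.Mul(*(x**a for x, a in zip(xs, exps)))
                         for exps in product(range(j + 1), repeat=len(xs))
                         if sum(exps) == j))


X1, X2 = sp.symbols('X1 X2')
# For F(2,4): d_k = 2 and n = 4, so I is generated in degrees j = 3 and j = 4.
for j in (3, 4):
    print(f"degree {j} generator:", ideal_generator([X1, X2], j))
```

For instance, the degree-$3$ generator is $X_{1}^{3}+X_{1}^{2}X_{2}+X_{1}X_{2}^{2}+X_{2}^{3}$.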
### 2.3 Relative HN-filtration

Let $E$ be a vector bundle of rank $r$ on $G(d,n)$ and $L$ be a line in $G(d,n)$. Suppose that $E|_{L}=\overset{k}{\underset{i=1}{\oplus}}\mathcal{O}^{\oplus r_{i}}_{L}{(u_{i})}$, where $u_{1}>u_{2}>\cdots>u_{k}$. If $k,r_{1},r_{2},\cdots,r_{k},u_{1},u_{2},\cdots,u_{k}$ are independent of $L$, then $E$ is called uniform of splitting type $(k,r_{1},r_{2},\cdots,r_{k};u_{1},u_{2},\cdots,u_{k})$. For any uniform vector bundle $E$ of type $(k,r_{1},r_{2},\cdots,r_{k};u_{1},u_{2},\cdots,u_{k})$ over $G(d,n)$, there exists a relative HN-filtration [6] $HN^{i}$ of $p^{*}E$ over $F(d-1,d,d+1,n)$: $0\subset HN^{1}(E)\subset HN^{2}(E)\subset\cdots\subset HN^{k}(E)=p^{*}E,$ where $HN^{i}(E)=Im[s^{*}s_{*}(p^{*}E\otimes\mathcal{O}_{s}(-u_{i}))\otimes\mathcal{O}_{s}(u_{i})\longrightarrow p^{*}E]$ and $\mathcal{O}_{s}(u)=\mathcal{O}_{s}(1)^{\otimes u}$. Moreover, there are exact sequences $0\rightarrow HN^{i}(E)\rightarrow HN^{i+1}(E)\rightarrow s^{*}(E_{i})\otimes\mathcal{O}_{s}(u_{i})\rightarrow 0$, where $E_{i}$ is a vector bundle of rank $r_{i}$ over $F(d-1,d+1,n)$.

###### Definition 2.1.

The Chern polynomial of a vector bundle $E$ of rank $r$ over a variety $X$ is $c_{E}(T)=T^{r}-c_{1}(E)T^{r-1}+c_{2}(E)T^{r-2}-\cdots+(-1)^{r}c_{r}(E),$ where $c_{i}(E)$ is the $i$-th Chern class of $E$.

###### Lemma 2.1 ([8]).

The Picard group of $F(d-1,d,d+1,n)$ is generated by $p^{*}{\mathcal{O}_{G}(1)},\mathscr{H}_{H_{d}^{*}}$ and $\mathscr{H}_{Q_{n-d}}$, and their Chern polynomials are $T+(X_{1}+X_{2}+\cdots+X_{d}),T+X_{d}$ and $T-X_{d+1}$ respectively.

The following lemma, which can be found in [6], will be used in our proofs.

###### Lemma 2.2 ([6] Proposition 3.5).

Let $X$ be a projective manifold and $K$ be a vector bundle over $X$. Suppose that $p$ is a morphism from $F=\mathbb{P}(K)$ to $X$ and $\mathscr{H}_{K}$ is the relative Hopf bundle over $F$. Then a vector bundle $E$ over $X$ has a subbundle isomorphic to $K$ if and only if $p^{*}E$ has a subbundle isomorphic to $\mathscr{H}_{K}$.

## 3 (d+1)-uniform bundle over $G(d,n)$ when $d<n-d-1$

From now on, we suppose that everything is over an algebraically closed field in characteristic $0$. First, we consider the Chern polynomial of $p^{*}E$ and its relative HN-filtration. By Lemma 2.1, $c_{p^{*}E}(T)$ and $c_{s^{*}(E_{i})}(T)$ are both homogeneous polynomials. Moreover, $c_{p^{*}E}(T)$ is symmetric about $X_{1},X_{2},\cdots X_{d}$ and $c_{s^{*}(E_{i})}(T)$ is symmetric about $X_{1},X_{2},\cdots X_{d-1}$ and $X_{d},X_{d+1}$ respectively by Theorem 2.1. With the help of the relative HN-filtration, we deduce that $c_{p^{*}E}(T)=\overset{k}{\underset{i=1}{\prod}}c_{s^{*}(E_{i})\otimes\mathcal{O}_{s}(u_{j})}(T)$ in the Chow ring $A(F(d-1,d,d+1,n))$, i.e., $c_{p^{*}E}(T)=\overset{k}{\underset{i=1}{\prod}}c_{s^{*}(E_{i})\otimes\mathcal{O}_{s}(u_{j})}(T)~{}~{}~{}(\text{mod}~{}I).$ (12)

###### Lemma 3.1 ([8] Proposition 4.1).

If $E(T;X_{1},X_{2},\cdots X_{d})=\overset{k}{\underset{i=1}{\prod}}S_{i}(T+u_{i}X_{d};X_{1},X_{2},\cdots X_{d-1};X_{d},X_{d+1}),$ where $E(T;X_{1},X_{2},\cdots X_{d})$ is a homogeneous polynomial symmetric about $X_{1},X_{2},\cdots X_{d}$ and $S_{i}(T;X_{1},X_{2},\cdots X_{d-1};X_{d},X_{d+1})$ is a homogeneous polynomial symmetric about $X_{1},X_{2},\cdots X_{d-1}$ and $X_{d},X_{d+1}$ respectively, then every irreducible factor of $E(T;X_{1},X_{2},\cdots X_{d})$ has the form $T+\sum_{i=1}^{d}\lambda_{i}X_{i}$, where each $\lambda_{i}$ is a constant.
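With the sign convention of Definition 2.1, a bundle whose Chern roots are $x_{1},\cdots,x_{r}$ has Chern polynomial $\prod_{i}(T-x_{i})$, and by the Whitney formula $c_{E}(T)$ is multiplicative along any filtration with locally free quotients; this is exactly the multiplicativity along the relative HN-filtration used in equation (12) above. The following SymPy check (an added illustration, not from the original paper) verifies both statements in a rank-$3$ example.

```python
from itertools import combinations

import sympy as sp

T, x1, x2, x3 = sp.symbols('T x1 x2 x3')
roots = [x1, x2, x3]

# elementary symmetric polynomials e_0, ..., e_3 of the Chern roots
e = [sum(sp.Mul(*c) for c in combinations(roots, i)) for i in range(4)]
# Definition 2.1:  c_E(T) = T^3 - c_1 T^2 + c_2 T - c_3  with  c_i = e_i(roots)
chern_poly = sum((-1)**i * e[i] * T**(3 - i) for i in range(4))

# c_E(T) is the product of (T - root) over the Chern roots ...
assert sp.expand(chern_poly - (T - x1) * (T - x2) * (T - x3)) == 0
# ... hence it factors as c_sub(T) * c_quotient(T) for any splitting of the roots
assert sp.expand(chern_poly - (T - x1) * sp.expand((T - x2) * (T - x3))) == 0
print("c_E(T) = prod_i (T - x_i) and is multiplicative along filtrations")
```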
### 3.1 Chern Polynomial of $p^{*}E$ When the uniform vector bundle $E$ is of rank $d+1$ under the assumption $d<n-d-1$, the Chern polynomial $c_{p^{*}E}(T)$ is of degree $d+1$ and degree of non-zero elements in $I$ are of degrees great than $n-d>d+1$ by Theorem 2.1. So the equation (12) is exactly $c_{p^{*}E}(T)=\overset{k}{\underset{i=1}{\prod}}c_{s^{*}(E_{i})\otimes\mathcal{O}_{s}(u_{j})}(T).$ By Lemma 3.1, $c_{p^{*}E}(T)$ has an irreducible polynomial of form: $T+\lambda_{1}(\underbrace{X_{i_{1}}+\cdots+X_{j_{1}}}_{k_{1}})+\lambda_{2}(\underbrace{X_{i_{2}}+\cdots+X_{j_{2}}}_{k_{2}})+\cdots+\lambda_{l}(\underbrace{X_{i_{l}}+\cdots+X_{j_{l}}}_{k_{l}}),$ where $k_{1}+k_{2}+\cdots+k_{l}=d$ and $\lambda_{1},\dots,\lambda_{l}$ are all distinct. Since $c_{p^{*}E}(T)$ is symmetric about $X_{1},X_{2},\cdots X_{d}$, its degree is at least $\dfrac{d!}{{k_{1}}!{k_{2}}!\cdots{k_{l}}!}$. ###### Proposition 3.1. Suppose that $E(T;X_{1},X_{2},\cdots X_{d})=\overset{k}{\underset{i=1}{\prod}}S_{i}(T+u_{i}X_{d};X_{1},X_{2},\cdots X_{d-1};X_{d},X_{d+1}),$ where $E(T;X_{1},X_{2},\cdots X_{d})$ is a homogeneous polynomial of degree $d+1$ and symmetric about $X_{1},X_{2},\cdots X_{d}$ , and $S_{i}(T;X_{1},X_{2},\cdots X_{d-1};X_{d},X_{d+1})$ is a homogeneous polynomial symmetric about $X_{1},X_{2},\cdots X_{d-1}$ and $X_{d},X_{d+1}$ respectively. If the degree of $E(T;X_{1},X_{2},\cdots X_{d})$ is $d+1$, then $E(T;X_{1},X_{2},\cdots X_{d})=\overset{k}{\underset{i=1}{\prod}}(T+\lambda_{i}(X_{1}+X_{2}+\cdots X_{d}))^{r_{i}}$ or $E(T;X_{1},X_{2},\cdots X_{d})=\overset{d}{\underset{i=1}{\prod}}(T+\lambda_{1}X_{i}+\lambda_{2}(X_{1}+X_{2}+\cdots\\\ +\overset{\wedge}{X_{i}}+\cdots+X_{d}))(T+\lambda^{\prime}_{1}(X_{1}+X_{2}+\cdots+X_{d})).$ ###### Proof. By Lemma 3.1, suppose that an irreducible polynomial of $E(T;X_{1},X_{2},\cdots X_{d})$ is of form $T+\lambda_{1}(\underbrace{X_{i_{1}}+\cdots+X_{j_{1}}}_{k_{1}})+\lambda_{2}(\underbrace{X_{i_{2}}+\cdots+X_{j_{2}}}_{k_{2}})+\cdots+\lambda_{l}(\underbrace{X_{i_{l}}+\cdots+X_{j_{l}}}_{k_{l}}),$ where $k_{1}+k_{2}+\cdots+k_{l}=d$ and $\lambda_{1},\dots,\lambda_{l}$ are all distinct. $\displaystyle\dfrac{d!}{{k_{1}}!{k_{2}}!\cdots{k_{l}}!}$ $\displaystyle=$ $\displaystyle\dfrac{(k_{1}+k_{2}-1)!}{k_{1}!k_{2}!}\cdot\dfrac{(k_{1}+k_{2}+k_{3}-2)!}{(k_{1}+k_{2}-1)!k_{3}!}\cdot\cdots\cdot\dfrac{(k_{1}+\cdots+k_{l}-(l-1))!}{(k_{1}+\cdots+k_{l-1}-(l-2))!k_{l}!}\cdot\dfrac{d!}{(d-(l-1))!}$ $\displaystyle\geq$ $\displaystyle\dfrac{d!}{(d-(l-1))!}.$ If $l\geq 3$, then $d(d-1)\leq\dfrac{d!}{(d-(l-1))!}\leq\dfrac{d!}{{k_{1}}!{k_{2}}!\cdots{k_{l}}!}\leq d+1$ which is a contradiction since $d\geq l\geq 3$. Thus, $l\leq 2$. Let’s consider the case $l=2$ first. Without loss of generality, we may assume $k_{1}\leq k_{2}$. If $k_{1}\geq 2$, then $\dfrac{3}{2}d\leq\dfrac{d!}{k_{1}!k_{2}!}\leq(d+1)$. So $d\leq 2$ contradicts to $d\geq 1+k_{1}\geq 3$. So $k_{1}=1$ and $\overset{d}{\underset{i=1}{\prod}}(T+\lambda_{1}X_{i}+\lambda_{2}(X_{1}+X_{2}+\cdots+\overset{\wedge}{X_{i}}+\cdots+X_{d}))$ divides $E(T;X_{1},X_{2},\cdots X_{d})$ since $E(T;X_{1},X_{2},\cdots X_{d})$ is symmetric about $X_{1},X_{2},\cdots X_{d}$. Comparing the degree of these two polynomials, we get the second identity in the statement. If $l=1$, then $E(T;X_{1},X_{2},\cdots X_{d})$ has a factor $T+\lambda^{\prime}_{1}(X_{1}+X_{2}+\cdots+X_{d})$. 
Assume that $E(T;X_{1},X_{2},\cdots X_{d})=(T+\lambda^{\prime}_{1}(X_{1}+X_{2}+\cdots+X_{d}))E^{\prime}(T;X_{1},X_{2},\cdots X_{d}).$ Obviously, $E^{\prime}(T;X_{1},X_{2},\cdots X_{d})$ satisfies the condition of Lemma 3.1. So $E(T;X_{1},X_{2},\cdots X_{d})$ is of the form $\overset{d}{\underset{i=1}{\prod}}(T+\lambda_{1}X_{i}+\lambda_{2}(X_{1}+X_{2}+\cdots+\overset{\wedge}{X_{i}}+\cdots+X_{d}))(T+\lambda^{\prime}_{1}(X_{1}+X_{2}+\cdots+X_{d}))$ or $\overset{k}{\underset{i=1}{\prod}}(T+\lambda_{i}(X_{1}+X_{2}+\cdots X_{d}))^{r_{i}}$ by induction. ∎ In [8] (Proposition 2.5), Guyot proved that the uniform vector bundle $E$ with split type $(k;u_{1},\cdots,u_{k};r_{1},\cdots,r_{k})$ is an extension of two uniform vector bundles if $u_{i}-u_{i+1}\geq 2$ for some $i$. So we only need to classify the uniform bundle when $u_{i}-u_{i+1}=1$ for all $i$. Now we are going to classify the uniform vector bundle with different Chern polynomials and prove Theorem 1.2. ###### Proposition 3.2. Let $E$ be a uniform vector bundle of rank $d+1$ over $G(d,n)$. If $u_{i}-u_{i+1}=1$ for all $i$, then the Chern polynomial of $p^{*}E$ (taking a dualization or tensoring with suitable line bundles) are one of the following cases. 1\. $c_{p^{*}E}(T)=\overset{k}{\underset{i=1}{\prod}}(T+u_{i}(X_{1}+X_{2}+\cdots X_{d}))^{r_{i}}$; 2\. $c_{p^{*}E}(T)=\overset{d}{\underset{i=1}{\prod}}(T+X_{i})(T-(X_{1}+X_{2}+\cdots+X_{d})$; 3\. $c_{p^{*}E}(T)=\overset{d}{\underset{i=1}{\prod}}(T+X_{i})T$; 4\. $c_{p^{*}E}(T)=\overset{d}{\underset{i=1}{\prod}}(T+X_{i})(T+(X_{1}+X_{2}+\cdots+X_{d}))$; 5\. $c_{p^{*}E}(T)=\overset{d}{\underset{i=1}{\prod}}(T+X_{i})(T+2(X_{1}+X_{2}+\cdots+X_{d}))$; 6\. $c_{p^{*}E}(T)=\overset{d}{\underset{i=1}{\prod}}(T+2X_{i})(T+(X_{1}+X_{2}+\cdots+X_{d}))$. ###### Proof. With the result of Proposition 3.1, $c_{p^{*}E}(T)$ is of the form $\overset{k}{\underset{i=1}{\prod}}(T+u_{i}(X_{1}+X_{2}+\cdots X_{d}))^{r_{i}}$ or $\overset{d}{\underset{i=1}{\prod}}(T+\lambda_{1}X_{i}+\lambda_{2}(X_{1}+X_{2}+\cdots+\overset{\wedge}{X_{i}}+\cdots+X_{d}))(T+\lambda_{3}(X_{1}+X_{2}+\cdots+X_{d})$, where $\lambda_{1},\lambda_{2},\lambda_{3}$ are one of $u_{1},u_{2},u_{3}$. After taking dualization of $E$ or tensoring $E$ with $\mathcal{O}_{G(d,n)}(-\lambda_{2})$, the Chern polynomial of $p^{*}E$ must be one in the statement. ∎ ### 3.2 classification of $(d+1)$-uniform bundle over $G(d,n)$ when $d<n-d-1$ We will consider the Chern polynomials in the Proposition 3.2 one by one. ###### Lemma 3.2. Taking a dualization or tensoring with suitable line bundles if necessary, the uniform vector bundle corresponding to case 1, 2, 3, 5 in Proposition 3.2 are $\overset{k}{\underset{i=1}{\oplus}}\mathcal{O}_{G(d,n)}{(a_{i})},$ or $E\cong H_{d}^{*}\bigoplus\mathcal{O}_{G(d,n)}{(j)}$ for some integers $a_{i}$ and $j$. ###### Proof. For case 1, by Proposition 5.2 in [8], $E\cong\overset{k}{\underset{i=1}{\oplus}}\mathcal{O}_{G(d,n)}{(u_{i})}$. For case 2 and 3, by Proposition 3.1, $S_{1}(T;X_{1},X_{2},\cdots X_{d-1};X_{d},X_{d+1})$ is a homogeneous polynomial symmetric about $X_{1},X_{2},\cdots X_{d-1}$ and $X_{d},X_{d+1}$ respectively. So $c_{HN^{1}}(T)=T+X_{d}=c_{\mathscr{H}_{H_{d}^{*}}}(T)$. Thus $HN^{1}$ is a line bundle and $HN^{1}\cong\mathscr{H}_{H_{d}^{*}}$. Thus $p^{*}E$ has $\mathscr{H}_{H_{d}^{*}}$ as a subbundle. By Lemma 2.2, $E$ has $H_{d}^{*}$ as subbundle. 
So we have following exact sequence for some quotient bundle $\mathcal{O}_{G(d,n)}(i)$: $0\longrightarrow H_{d}^{*}\longrightarrow E\longrightarrow\mathcal{O}_{G(d,n)}(i)\longrightarrow 0,$ where $i=0$ or $-1$. Thus $E\cong H_{d}^{*}\bigoplus\mathcal{O}_{G(d,n)}{(i)}$ since $Ext^{1}(\mathcal{O}_{G(d,n)}(i),H_{d}^{*})=H^{1}(G(d,n),H_{d}^{*}\otimes\mathcal{O}_{G(d,n)}(-i))=0.$ For case 5, $c_{p^{*}E}(T)=\overset{d}{\underset{i=1}{\prod}}(T+X_{i})(T+2(X_{1}+X_{2}+\cdots+X_{d}))$. Comparing the Chern polynomial of $HN^{1}$ and $p^{*}\mathcal{O}_{G(d,n)}(2)$, we know that they are isomorphic to each other. So the following exact sequence holds for some quotient bundle $F$. $0\longrightarrow p^{*}\mathcal{O}_{G(d,n)}(2)\longrightarrow p^{*}E\longrightarrow F\longrightarrow 0$. Applying $p_{*}$ to the short exact sequence, we have the exact sequence $0\longrightarrow\mathcal{O}_{G(d,n)}(2)\longrightarrow E\longrightarrow p_{*}F\longrightarrow 0$, since $R^{1}p_{*}\mathcal{O}_{G(d,n)}(2)=0$. Restricting the exact sequence to a line in $G(d,n)$, we know that $p_{*}F$ must be a uniform vector bundle of rank $d$. So, by comparing the splitting type and Chern polynomial of $p_{*}F$, it can be seen that $p_{*}F$ is isomorphic to $H_{d}^{*}$ by Proposition 5.4 in [8] or Theorem 1.1 in [4]. Thus $E\cong H_{d}^{*}\bigoplus\mathcal{O}_{G(d,n)}(2)$ or direct sum of line bundles because $Ext^{1}((p_{*}F)^{*},\mathcal{O}_{G(d,n)}(2))=0$. ∎ The following lemma solves for case 4 in Proposition 3.2. ###### Lemma 3.3. When $c_{p^{*}E}(T)=\overset{d}{\underset{i=1}{\prod}}(T+X_{i})(T+(X_{1}+X_{2}+\cdots+X_{d})),$ the vector bundle $E\cong H_{d}^{*}\bigoplus\mathcal{O}_{G(d,n)}(1)$. ###### Proof. The HN-filtration gives the exact sequence $0\longrightarrow HN^{1}\longrightarrow p^{*}E\longrightarrow F\longrightarrow 0$ (13) for some quotient bundle $F$. Then we have $c_{HN^{1}}(T)=(T+X_{d})(T+(X_{1}+X_{2}+\cdots+X_{d}))$ and $\displaystyle-c_{1}(HN^{1})$ $\displaystyle=$ $\displaystyle X_{1}+\cdots+X_{d-1}+2X_{d}$ (14) $\displaystyle=$ $\displaystyle c_{1}(H_{1})+c_{1}(H_{2}/H_{1})+\cdots+c_{1}(H_{d-1}/H_{d-2})+2c_{1}(H_{d}/H_{d-1})$ $\displaystyle=$ $\displaystyle-c_{1}(H_{d-1})+2c_{1}(H_{d}).$ Restricting the HN-filtration to the fiber $pr_{1}^{-1}(y)$ at a point $y\in F(d-1,d,n)$, we have $0\longrightarrow HN^{1}|_{pr_{1}^{-1}(y)}\longrightarrow p^{*}E|_{pr_{1}^{-1}(y)}\longrightarrow F|_{pr_{1}^{-1}(y)}\longrightarrow 0,$ and $c_{1}(HN^{1}|_{pr_{1}^{-1}(y)})=0$ by equation (14). Notice that $pr_{1}^{-1}(y)$ is a subvariety of $p^{-1}(q(y))$, so $p^{*}E|_{pr_{1}^{-1}(y)}$ is a trivial bundle. Thus $HN^{1}|_{pr_{1}^{-1}(y)}$ is a trivial bundle and $pr_{1*}HN^{1}$ is a $2$-bundle over $F(d-1,d,n)$. Applying $pr_{1*}$ to the exact sequence (13), we have $0\longrightarrow pr_{1*}HN^{1}\longrightarrow q^{*}E\longrightarrow pr_{1*}F\longrightarrow 0.$ Since the canonical morphism from $pr^{*}_{1}(pr_{1*}HN^{1})$ to $HN^{1}$ restrict to any fiber of $pr_{1}$ is an isomorphism as two trivial bundles of rank $2$, we can get $pr^{*}_{1}(pr_{1*}HN^{1})\cong HN^{1}$. So $c_{1}(HN^{1})=pr^{*}_{1}c_{1}(pr_{1*}HN^{1})$. By equation (14), $c_{1}(HN^{1})=c_{1}(H_{d-1})-2c_{1}(H_{d})=pr_{1}^{*}(c_{1}(\overset{\sim}{H_{d-1}})-2c_{1}(\overset{\sim}{H_{d}})),$ where $\overset{\sim}{H_{d-1}}$ and $\overset{\sim}{H_{d}}$ are universal subbundles over $F(d-1,d,n)$. 
Thus $c_{1}(pr_{1*}HN^{1})=c_{1}(\overset{\sim}{H_{d-1}})-2c_{1}(\overset{\sim}{H_{d}}).$ So, when restrict $c_{1}(pr_{1*}HN^{1})$ to the fiber of $q$ at $x\in G(d,n)$, we have $c_{1}(pr_{1*}HN^{1})|_{q^{-1}(x)}=\mathcal{O}_{q^{-1}(x)}(-1).$ Thus $pr_{1*}HN^{1}|_{q^{-1}(x)}=\mathcal{O}_{q^{-1}(x)}\oplus\mathcal{O}_{q^{-1}(x)}(-1)$ since $pr_{1*}HN^{1}|_{q^{-1}(x)}$ is a subbundle of $q^{*}E|_{q^{-1}(x)}$ which is a trivial bundle. Therefore, $q_{*}(pr_{1*}HN^{1})$ is a line bundle over $G(d,n)$, i.e. $q_{*}(pr_{1*}HN^{1})=\mathcal{O}_{G(d,n)}(i)$ for some $i$. So we have exact sequence $0\longrightarrow\mathcal{O}_{G(d,n)}(i)\longrightarrow E\longrightarrow p_{*}F\longrightarrow 0.$ Then $c_{p^{*}E}(T)=c_{p^{*}(p_{*}F)}(T)(T+i(X_{1}+X_{2}+\cdots+X_{d}))=\overset{d}{\underset{i=1}{\prod}}(T+X_{i})(T+(X_{1}+X_{2}+\cdots+X_{d})).$ Clearly, $i=1$, so we deduce that $0\longrightarrow\mathcal{O}_{G(d,n)}(1)\longrightarrow E\longrightarrow p_{*}F\longrightarrow 0.$ Obviously, $p_{*}F$ is a uniform vector bundle of rank $d$ over $G(d,n)$ with split type $(1,0,\cdots,0)$. By Proposition 5.4 in [8] or Theorem 1.1 in [4], $p_{*}F$ is isomorphic to $H_{d}^{*}$. Thus $E\cong H_{d}^{*}\bigoplus\mathcal{O}_{G(d,n)}(1)$ since $Ext^{1}(H_{d}^{*},\mathcal{O}_{G(d,n)}(1))=0$. ∎ The following lemma solves for case 6 in Proposition 3.2. ###### Lemma 3.4. There does not exist uniform vector bundle of rank $d+1$ whose Chern polynomial is $c_{p^{*}E}(T)=\overset{d}{\underset{i=1}{\prod}}(T+2X_{i})(T+(X_{1}+X_{2}+\cdots+X_{d}))$ when $d\geq 3$. If $d=2$ and $c_{p^{*}E}(T)=\overset{2}{\underset{i=1}{\prod}}(T+2X_{i})(T+(X_{1}+X_{2}))$, then $E\cong S^{2}H_{2}$. ###### Proof. In this case, we have $c_{HN^{1}}(T)=T+2X_{d}=c_{\mathscr{H}_{H_{d}^{*}}^{\otimes 2}}(T)$ and $c_{HN^{2}/HN^{1}}(T)=T+X_{1}+X_{2}+\cdots+X_{d}=c_{p^{*}\mathcal{O}_{G(d,n)}(1)}(T).$ Since $HN^{1}$, $\mathscr{H}_{H_{d}^{*}}^{\otimes 2}$, $HN^{2}/HN^{1}$ and $p^{*}\mathcal{O}_{G(d,n)}(1)$ are all vector bundle of rank $1$, we get that $HN^{1}\cong\mathscr{H}_{H_{d}^{*}}^{\otimes 2}$ and $HN^{2}/HN^{1}\cong p^{*}\mathcal{O}_{G(d,n)}(1)$. The HN-filtration gives $0\longrightarrow\mathscr{H}_{H_{d}^{*}}^{\otimes 2}\longrightarrow HN^{2}\longrightarrow p^{*}\mathcal{O}_{G(d,n)}(1)\longrightarrow 0$ (15) and $0\longrightarrow HN^{2}\longrightarrow p^{*}E\longrightarrow F\longrightarrow 0$ (16) for some quotient bundle $F$. When $d\geq 3$, viewing $F(d-1,d,n)$ as a projective bundle over $G(d,n)$, we get $R^{i}q_{*}(\mathcal{O}_{F(d-1,d,n)}(-2))=0$ for $i>0$ (see [9] Chapter III, Exercise 8.4 (a)). Since $\mathscr{H}_{H_{d}^{*}}^{\otimes 2}\otimes p^{*}\mathcal{O}_{G(d,n)}(-1)$ restricts to a fiber of $pr_{1}$ is trivial for any $d\geq 2$, we get $R^{i}pr_{1*}(\mathscr{H}_{H_{d}^{*}}^{\otimes 2}\otimes p^{*}\mathcal{O}_{G(d,n)}(-1))=0$ (17) for $i>0$. So $\displaystyle H^{1}(F(d-1,d,d+1,n),\mathscr{H}_{H_{d}^{*}}^{\otimes 2}\otimes p^{*}\mathcal{O}_{G(d,n)}(-1))$ $\displaystyle\cong$ $\displaystyle H^{1}(F(d-1,d,n),pr_{1*}(\mathscr{H}_{H_{d}^{*}}^{\otimes 2}\otimes p^{*}\mathcal{O}_{G(d,n)}(-1)))$ $\displaystyle=$ $\displaystyle H^{1}(F(d-1,d,n),\mathcal{O}_{F(d-1,d,n)}(-2)\otimes q^{*}\mathcal{O}_{G(d,n)}(-1))$ $\displaystyle=$ $\displaystyle H^{1}(G(d,n),q_{*}\mathcal{O}_{F(d-1,d,n)}(-2)\otimes\mathcal{O}_{G(d,n)}(-1))$ $\displaystyle=$ $\displaystyle H^{1}(G(d,n),0)$ $\displaystyle=$ $\displaystyle 0.$ Then $Ext^{1}(p^{*}\mathcal{O}_{G(d,n)}(1),\mathscr{H}_{H_{d}^{*}}^{\otimes 2})=0$. 
So, from (15), $HN^{2}\simeq\mathscr{H}_{H_{d}^{*}}^{\otimes 2}\oplus p^{*}\mathcal{O}_{G(d,n)}(1).$ Applying $p_{*}$ to (16) and noticing that $p_{*}(\mathscr{H}_{H_{d}^{*}}^{\otimes 2}))=q_{*}(\mathcal{O}_{F(d-1,d,n)}(-2))=0$ and $R^{1}p_{*}(\mathscr{H}_{H_{d}^{*}}^{\otimes 2}))=0$, we have $0\longrightarrow\mathcal{O}_{G(d,n)}(1)\longrightarrow E\longrightarrow p_{*}F\longrightarrow 0.$ Moreover, over $F(d-1,d,d+1,n)$, we have the commutative diagram $\textstyle{0\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\textstyle{p^{*}\mathcal{O}_{G(d,n)}(1)\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\textstyle{p^{*}E\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\scriptstyle{id}$$\textstyle{p^{*}p_{*}F\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\textstyle{0}$$\textstyle{0\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\textstyle{HN^{2}\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\textstyle{p^{*}E\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\textstyle{F\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\textstyle{0.}$ The snake lemma gives the exact sequence $0\longrightarrow\mathscr{H}_{H_{d}^{*}}^{\otimes 2}\longrightarrow p^{*}p_{*}F\longrightarrow F\longrightarrow 0$. Restricting to the fiber of $s$ at some point $l$ in $F(d-1,d+1,n)$, which is isomorphic to $\mathbb{P}^{1}$, by Proposition 2.3 in [8], we have $T_{F(d-1,d,d+1,n)/G(d,n)}|_{s^{-1}(l)}=\mathcal{O}_{s^{-1}(l)}(-1)^{\oplus m}.$ Moreover, $\mathscr{H}_{H_{d}^{*}}|_{s^{-1}(l)}=\mathcal{O}_{s^{-1}(l)}(1)$ and $F|_{s^{-1}(l)}\cong\mathcal{O}_{s^{-1}(l)}^{\oplus d-1}$ since $F=p^{*}E/HN^{2}$. So $Hom(T_{F(d-1,d,d+1,n)/G(d,n)},\mathscr{H}om(\mathscr{H}_{H_{d}^{*}}^{\otimes 2},F))|_{s^{-1}(l)}=0$ $\Rightarrow Hom(T_{F(d-1,d,d+1,n)/G(d,n)},\mathscr{H}om(\mathscr{H}_{H_{d}^{*}}^{\otimes 2},F))=0.$ Thus, by Descente-Lemma (Lemma 2.1.2 in [13]), there is a line subbundle $L$ of $p_{*}F$ such that $p^{*}L$ isomorphic to $\mathscr{H}_{H_{d}^{*}}^{\otimes 2}$, which is impossible because $p_{*}p^{*}L\cong L$ while $p_{*}\mathscr{H}_{H_{d}^{*}}^{\otimes 2}=0$. When $d=2$, from the HN-filtration we have following two exact sequences $0\longrightarrow\mathscr{H}_{H_{2}^{*}}^{\otimes 2}\longrightarrow HN^{2}\longrightarrow p^{*}\mathcal{O}_{G(2,n)}(1)\longrightarrow 0$ (18) and $0\longrightarrow HN^{2}\longrightarrow p^{*}E\longrightarrow F\longrightarrow 0.$ (19) Now $F$ is a line bundle, so comparing the Chern polynomial we can get $F=\mathscr{H}_{H_{2}^{*}}^{\otimes-2}\otimes p^{*}\mathcal{O}_{G(2,n)}(2).$ Next, we want to consider the extension of (18) by calculating $H^{1}(F(1,2,3,n),\mathscr{H}_{H_{2}^{*}}^{\otimes 2}\otimes p^{*}\mathcal{O}_{G(2,n)}(-1))$. 
By Leray spectral sequence, we have exact sequence $H^{1}(G(2,n),p_{*}(\mathscr{H}_{H_{2}^{*}}^{\otimes 2}\otimes p^{*}\mathcal{O}_{G(2,n)}(-1)))\rightarrow H^{1}(F(1,2,3,n),\mathscr{H}_{H_{2}^{*}}^{\otimes 2}\otimes p^{*}\mathcal{O}_{G(2,n)}(-1))\\\ \rightarrow H^{0}(G(2,n),R^{1}p_{*}(\mathscr{H}_{H_{2}^{*}}^{\otimes 2}\otimes p^{*}\mathcal{O}_{G(2,n)}(-1)))\rightarrow H^{2}(G(2,n),p_{*}(\mathscr{H}_{H_{2}^{*}}^{\otimes 2}\otimes p^{*}\mathcal{O}_{G(2,n)}(-1))).$ Since $p_{*}\mathscr{H}_{H_{2}^{*}}^{\otimes 2}=0$ and $R^{1}p_{*}\mathscr{H}_{H_{2}^{*}}^{\otimes 2}=R^{1}q_{*}(\mathcal{O}_{\mathbb{P}(H_{2})}(-2))=q_{*}\mathcal{O}_{\mathbb{P}(H_{2})}\otimes\wedge^{2}H_{2}^{*}=\mathcal{O}_{G(2,n)}(1)$ (see [9] Chapter III, Exercise 8.4 (c)), we have $\displaystyle H^{1}(F(1,2,3,n),\mathscr{H}_{H_{2}^{*}}^{\otimes 2}\otimes p^{*}\mathcal{O}_{G(2,n)}(-1))$ $\displaystyle=$ $\displaystyle H^{0}(G(2,n),R^{1}p_{*}(\mathscr{H}_{H_{2}^{*}}^{\otimes 2}\otimes p^{*}\mathcal{O}_{G(2,n)}(-1)))$ $\displaystyle=$ $\displaystyle H^{0}(G(2,n),R^{1}p_{*}\mathscr{H}_{H_{2}^{*}}^{\otimes 2}\otimes\mathcal{O}_{G(2,n)}(-1))$ $\displaystyle=$ $\displaystyle H^{0}(G(2,n),\mathcal{O}_{G(2,n)}).$ Thus, $\text{dim}Ext^{1}(p^{*}\mathcal{O}_{G(2,n)}(-1)),\mathscr{H}_{H_{2}^{*}}^{\otimes 2})=h^{0}(G(d,n),\mathcal{O}_{G(2,n)})=1.$ From the same argument above, $HN^{2}$ is not the direct sum of $p^{*}\mathcal{O}_{G(2,n)}(1))$ and $\mathscr{H}_{H_{2}^{*}}^{\otimes 2}$. So the extension (18) is nontrivial. Applying $p_{*}$ to (19), we have $0\longrightarrow p_{*}HN^{2}\longrightarrow E\longrightarrow S^{2}H_{2}(2)\longrightarrow R^{1}p_{*}HN^{2}\longrightarrow 0.$ (20) Applying $pr_{1*}$ to (18) and setting $M=pr_{1*}HN^{2}$, we have $0\longrightarrow\mathcal{O}_{\mathbb{P}(H_{2})}(-2)\longrightarrow M\longrightarrow q^{*}\mathcal{O}_{G(2,n)}(1)\longrightarrow 0.$ (21) If the above exact sequence (21) splits, then $pr^{*}_{1}pr_{1*}HN^{2}=pr^{*}_{1}M\simeq\mathscr{H}_{H_{d}^{*}}^{\otimes 2}\oplus p^{*}\mathcal{O}_{G(d,n)}(1).$ Since the canonical morphism from $pr^{*}_{1}(pr_{1*}HN^{2})$ to $HN^{2}$ restrict to any fiber of $pr_{1}$ is an isomorphism as two trivial bundles of rank $2$, we get $pr^{*}_{1}(pr_{1*}HN^{2})\cong HN^{2}$. So $HN^{2}\simeq\mathscr{H}_{H_{d}^{*}}^{\otimes 2}\oplus p^{*}\mathcal{O}_{G(d,n)}(1),$ which is a contradiction. So (21) is a nontrivial extension. Next, we are going to prove $q_{*}M=p_{*}HN^{2}=0$. When $d=2$, $q^{-1}(x)$ is a line for some point $x\in G(2,n)$. 
We have standard exact sequence $0\longrightarrow\mathcal{I}_{q^{-1}(x)}\longrightarrow\mathcal{O}_{\mathbb{P}(H_{2})}\stackrel{{\scriptstyle h_{x}}}{{\longrightarrow}}\mathcal{O}_{q^{-1}(x)}\longrightarrow 0.$ Tensoring with $\mathcal{O}_{\mathbb{P}(H_{2})}(-2)\otimes q^{*}\mathcal{O}_{G(d,n)}(-1)$, we get the short exact sequence $0\longrightarrow\mathcal{I}_{q^{-1}(x)}\otimes\mathcal{O}_{\mathbb{P}(H_{2})}(-2)\otimes q^{*}\mathcal{O}_{G(2,n)}(-1)\longrightarrow\\\ \mathcal{O}_{\mathbb{P}(H_{2})}(-2)\otimes q^{*}\mathcal{O}_{G(2,n)}(-1)\stackrel{{\scriptstyle h_{x}}}{{\longrightarrow}}\mathcal{O}_{q^{-1}(x)}(-2)\longrightarrow 0,$ which induces the long exact sequence $0=H^{0}(q^{-1}(x),\mathcal{O}_{q^{-1}(x)}(-2))\longrightarrow H^{1}(\mathbb{P}(H_{2}),\mathcal{I}_{q^{-1}(x)}\otimes\mathcal{O}_{\mathbb{P}(H_{2})}(-2)\otimes q^{*}\mathcal{O}_{G(2,n)}(-1))\longrightarrow\\\ H^{1}(\mathbb{P}(H_{2}),\mathcal{O}_{\mathbb{P}(H_{2})}(-2)\otimes q^{*}\mathcal{O}_{G(2,n)}(-1))\stackrel{{\scriptstyle f_{x}}}{{\longrightarrow}}H^{1}(q^{-1}(x),\mathcal{O}_{q^{-1}(x)}(-2)).$ Over $F(1,2,n)=\mathbb{P}(H_{2})$, we have the communitative diagram $\textstyle{0\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\textstyle{\mathcal{O}_{\mathbb{P}(H_{2})}(-2)\otimes q^{*}\mathcal{O}_{G(2,n)}(-1)\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\textstyle{M\otimes q^{*}\mathcal{O}_{G(2,n)}(-1)\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\scriptstyle{id}$$\textstyle{O_{\mathbb{P}(H_{2})}\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\textstyle{0}$$\textstyle{0\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\textstyle{\mathcal{O}_{q^{-1}(x)}(-2)\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\textstyle{M|_{q^{-1}(x)}\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\textstyle{\mathcal{O}_{q^{-1}(x)}\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\textstyle{0,}$ which induces the commutative diagram $\textstyle{H^{0}(\mathbb{P}(H_{2}),\mathcal{O}_{\mathbb{P}(H_{2})})\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\scriptstyle{\delta_{M}}$$\scriptstyle{h^{\prime}_{x}}$$\textstyle{H^{1}(\mathbb{P}(H_{2}),\mathcal{O}_{\mathbb{P}(H_{2})}(-2)\otimes q^{*}\mathcal{O}_{G(2,n)}(-1))\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\scriptstyle{f_{x}}$$\textstyle{H^{0}(q^{-1}(x),\mathcal{O}_{q^{-1}(x)})\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\scriptstyle{\delta_{M|{q^{-1}(x)}}}$$\textstyle{H^{1}(q^{-1}(x),\mathcal{O}_{q^{-1}(x)}(-2)).}$ Then $f_{x}\circ{\delta_{M}(1)}=\delta_{M|{q^{-1}(x)}}\circ h^{\prime}_{x}(1)$ and $h^{\prime}_{x}(1)=1,~{}\text{and}~{}\delta_{M}(1)=t\neq 0$ since (21) is the nontrivial extension. If (21) restricting to the fiber $q^{-1}(x)$ is a trivial extension, then $\delta_{M|{q^{-1}(x)}}(1)=f_{x}(t)=0$. So $h^{1}(\mathbb{P}(H_{2}),\mathcal{I}_{q^{-1}(x)}\otimes\mathcal{O}_{\mathbb{P}(H_{2})}(-2)\otimes q^{*}\mathcal{O}_{G(2,n)}(-1))=h^{1}(\mathbb{P}(H_{2}),\mathcal{O}_{\mathbb{P}(H_{2})}(-2)\otimes q^{*}\mathcal{O}_{G(2,n)}(-1)).$ For any other point $y\in G(2,n)$, since the Grassmannian is rationally connected, we have $\mathcal{I}_{q^{-1}(x)}\cong\mathcal{I}_{q^{-1}(y)}$. 
So $h^{1}(\mathbb{P}(H_{2}),\mathcal{I}_{q^{-1}(y)}\otimes\mathcal{O}_{\mathbb{P}(H_{2})}(-2)\otimes q^{*}\mathcal{O}_{G(2,n)}(-1))=h^{1}(\mathbb{P}(H_{2}),\mathcal{O}_{\mathbb{P}(H_{2})}(-2)\otimes q^{*}\mathcal{O}_{G(2,n)}(-1)),$ for all $y\in G(2,n),$ i.e. $f_{y}=0$. It follows that $M|_{q^{-1}(x)}\cong\mathcal{O}_{\mathbb{P}^{1}}(-2)\oplus\mathcal{O}_{\mathbb{P}^{1}},~{}\forall x\in G(2,n).$ Thus $q_{*}M$ is a subbundle of of $E$ of rank $1$. We get the following exact sequence for some quotient bundle $K$ $0\rightarrow q_{*}M\longrightarrow E\longrightarrow K\longrightarrow 0.$ Comparing the Chern polynomial of $p^{*}E$ and $p^{*}q_{*}M$, we have $p^{*}q_{*}M\cong p^{*}\mathcal{O}_{G(2,n)}(1)$ and the following commutative diagram over $F(1,2,3,n)$ holds by checking on any $p$-fibers: $\textstyle{0\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\textstyle{p^{*}\mathcal{O}_{G(d,n)}(1)\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\textstyle{p^{*}E\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\scriptstyle{id}$$\textstyle{p^{*}K\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\textstyle{0}$$\textstyle{0\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\textstyle{HN^{2}\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\textstyle{p^{*}E\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\textstyle{F\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\textstyle{0.}$ Assume that the sequence $0\longrightarrow p^{*}\mathcal{O}_{G(d,n)}(1)\longrightarrow HN^{2}\longrightarrow D\longrightarrow 0$ is exact for some quotient bundle $D$. Comparing the Chern polynomial of $D$ and $\mathscr{H}_{H_{d}^{*}}^{\otimes 2}$, since $c_{D}(T)=T+2X_{2}=c_{\mathscr{H}_{H_{d}^{*}}^{\otimes 2}}(T)$, we have $D\cong\mathscr{H}_{H_{d}^{*}}^{\otimes 2}$. Then the snake lemma gives the exact sequence $0\longrightarrow\mathscr{H}_{H_{d}^{*}}^{\otimes 2}\longrightarrow p^{*}K\longrightarrow F\longrightarrow 0.$ With the similar argument above, there is a rank $1$ vector subbundle $L$ of $K$ such that $p^{*}L$ isomorphic to $\mathscr{H}_{H_{d}^{*}}^{\otimes 2}$, which is impossible because $p_{*}p^{*}L\cong L$ while $p_{*}\mathscr{H}_{H_{d}^{*}}^{\otimes 2}=0$. Therefore, $f_{x}\neq 0$, i.e $\delta_{M|{q^{-1}(x)}}\neq 0$, which means that $M|_{q^{-1}(x)}\cong\mathcal{O}_{\mathbb{P}^{1}}(-1)\oplus\mathcal{O}_{\mathbb{P}^{1}}(-1)$ for all $q$ fibers. Since $p=q\circ pr_{1*}$ and $M=pr_{1*}HN^{2}$ has split type $(-1,-1)$ at each fiber of $q$ , we have $p_{*}HN^{2}=R^{1}p_{*}HN^{2}=0$. Thus $E\cong S^{2}H_{2}(2)$ from (20). ∎ ###### Proposition 3.3. The uniform vector bundles correspond Proposition 3.2 are one of the following: $H_{d}{(a)}\bigoplus\mathcal{O}_{G(d,n)}{(b)},~{}H_{d}^{*}{(c)}\bigoplus\mathcal{O}_{G(d,n)}{(d)},~{}\overset{k}{\underset{i=1}{\oplus}}O^{\oplus r_{i}}_{G(d,n)}{(e_{i})}~{}\text{and}~{}S^{2}H_{2}(f)$ where $a,b,c,d,e_{i},f\in\mathbb{Z}$ and $H_{d}$ is the tautological subbundle on $G(d,n)$. ## 4 Uniform $(d+1)$-bundle over $G(d,n)$ when $d=n-d-1$ and $d=n-d$ In characteristic $0$, the vector bundle $E$ with split form $(k;u_{1},\cdots,u_{k};r_{1},\cdots,r_{k})$ ($u_{1}>u_{2}>\cdots>u_{k}$) is an extension of two uniform vector bundles if $u_{i-1}-u_{i}\geq 2$. So we may assume $u_{i}$s’ are consecutive in this section. 
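Before turning to the boundary cases, it is worth recording a quick symbolic check of the bundle $S^{2}H_{2}$ that appears in Lemma 3.4 and Proposition 3.3: if a rank-$2$ bundle has Chern roots $y_{1},y_{2}$, then its symmetric square has Chern roots $2y_{1}$, $y_{1}+y_{2}$ and $2y_{2}$, which, up to the sign convention for the roots, is exactly the shape $(T+2X_{1})(T+2X_{2})(T+(X_{1}+X_{2}))$ of case 6 in Proposition 3.2. The SymPy sketch below is an added illustration, not part of the original paper; it also records how the Chern classes of $S^{2}E$ are expressed in terms of those of a rank-$2$ bundle $E$.

```python
import sympy as sp

y1, y2, T = sp.symbols('y1 y2 T')
c1, c2 = y1 + y2, y1 * y2                  # Chern classes of a rank-2 bundle E
sym2_roots = [2 * y1, y1 + y2, 2 * y2]     # Chern roots of S^2 E (splitting principle)

# the factorization matching case 6 of Proposition 3.2 for d = 2
factored = sp.expand(sp.Mul(*[(T + r) for r in sym2_roots]))
print(factored)

# Chern classes of S^2 E in terms of c_1, c_2 of E:
assert sp.expand(sum(sym2_roots) - 3 * c1) == 0                 # c_1(S^2 E) = 3 c_1
e2 = sum(sym2_roots[i] * sym2_roots[j]
         for i in range(3) for j in range(i + 1, 3))
assert sp.expand(e2 - (2 * c1**2 + 4 * c2)) == 0                # c_2(S^2 E) = 2 c_1^2 + 4 c_2
assert sp.expand(sp.Mul(*sym2_roots) - 4 * c1 * c2) == 0        # c_3(S^2 E) = 4 c_1 c_2
```

Restricting to a line $L$, where $H_{2}|_{L}=\mathcal{O}_{L}\oplus\mathcal{O}_{L}(-1)$, the same decomposition gives $S^{2}H_{2}|_{L}=\mathcal{O}_{L}\oplus\mathcal{O}_{L}(-1)\oplus\mathcal{O}_{L}(-2)$, so the splitting type indeed has three consecutive values $u_{i}$, as in the situation of Lemma 3.4.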
### 4.1 Uniform $(d+1)$-bundle over $G(d,n)$ when $d=n-d-1$

When the uniform vector bundle $E$ is of rank $d+1$, under the assumption $d+1=n-d$, by Theorem 2.1 the Chern polynomial of $p^{*}E$ is $c_{p^{*}E}(T)=\overset{k}{\underset{i=1}{\prod}}c_{s^{*}(E_{i})\otimes\mathcal{O}_{s}(u_{i})}(T)+a\sum_{n-d}(X_{1},\cdots,X_{d+1}),$ where $\sum_{n-d}(X_{1},\cdots,X_{d+1})=\underset{a_{1}+a_{2}+\cdots+a_{d+1}=n-d}{\sum}X_{1}^{a_{1}}X_{2}^{a_{2}}\cdots X_{d+1}^{a_{d+1}}$. With the help of [6], only the following cases may happen:

* (A) $a=0$;
* (B) $k=1$;
* (C) $k=2$.

When $a=0$, the classification of uniform $(d+1)$-bundles follows from the same arguments as in the previous section. For cases (B) and (C), the uniform bundle $E$ is a direct sum of line bundles, $Q_{n-d}(s)$ or $Q_{n-d}^{*}(t)$, where $s,t\in\mathbb{Z}$ (see [8] Proposition 5.4).

### 4.2 Chern polynomial of $p^{*}E$ when $d=n-d$

So we only need to consider $d=n-d$. Now, we suppose that the Chern polynomial of $p^{*}E$ is $c_{p^{*}E}(T)=\overset{k}{\underset{i=1}{\prod}}c_{s^{*}(E_{i})\otimes\mathcal{O}_{s}(u_{i})}(T)+(aT+b_{1}X_{1}+\cdots\\\ +b_{d-1}X_{d-1}+cX_{d}+eX_{d+1})\sum_{n-d}(X_{1},\cdots,X_{d+1})+f\sum_{n-d+1}(X_{1},\cdots,X_{d+1}).$ Since $c_{p^{*}E}(T)$, $c_{s^{*}(E_{i})\otimes\mathcal{O}_{s}(u_{i})}(T)$ and $\sum_{m}(X_{1},\cdots,X_{d+1})~{}(m=n-d,n-d+1)$ are symmetric about $X_{1},\cdots,X_{d-1}$, we see that $b_{1}=b_{2}=\cdots=b_{d-1}$. So we rewrite the Chern polynomial of $p^{*}E$ as $c_{p^{*}E}(T)=\overset{k}{\underset{i=1}{\prod}}c_{s^{*}(E_{i})\otimes\mathcal{O}_{s}(u_{i})}(T)+(aT+b(X_{1}+\cdots\\\ +X_{d-1})+cX_{d}+eX_{d+1})\sum_{n-d}(X_{1},\cdots,X_{d+1})+f\sum_{n-d+1}(X_{1},\cdots,X_{d+1}).$ Now we study the behavior of the general polynomial $(\mathscr{E}):~{}E(T;X_{1},X_{2},\cdots X_{d})=\overset{k}{\underset{i=1}{\prod}}S_{i}(T+u_{i}X_{d};X_{1},X_{2},\cdots X_{d-1};X_{d},X_{d+1})+\\\ (aT+b(X_{1}+\cdots+X_{d-1})+cX_{d}+eX_{d+1})\sum_{n-d}(X_{1},\cdots,X_{d+1})+f\sum_{n-d+1}(X_{1},\cdots,X_{d+1}),$ where $E(T;X_{1},X_{2},\cdots X_{d})$ is a homogeneous polynomial symmetric about $X_{1},X_{2},\cdots X_{d}$ and $S_{i}(T;X_{1},X_{2},\cdots X_{d-1};X_{d},X_{d+1})$ is a homogeneous polynomial of degree $r_{i}$ and symmetric about $X_{1},X_{2},\cdots X_{d-1}$ and $X_{d},X_{d+1}$ respectively. Replacing $T$ by $T+u_{j}(X_{1}+\cdots+X_{d})$, we have $(\mathscr{E}(j)):~{}E^{j}(T;X_{1},X_{2},\cdots X_{d})=\overset{k}{\underset{i=1}{\prod}}S_{i}^{j}(T+(u_{i}-u_{j})X_{d};X_{1},X_{2},\cdots X_{d-1};X_{d},X_{d+1})+\\\ (aT+b^{j}(X_{1}+\cdots+X_{d-1})+c^{j}X_{d}+eX_{d+1})\sum_{n-d}(X_{1},\cdots,X_{d+1})+f\sum_{n-d+1}(X_{1},\cdots,X_{d+1}).$

Case $k=1$: In this case $r_{1}=d+1$. Without loss of generality, we can assume $u_{1}=0$. We use the result in [4] (Proposition 3.1) to get that $E$ is a trivial bundle.

Case $k\geq 2$:

###### Proposition 4.1.

$e+f=0$.

###### Proof.

Taking $X_{1}=X_{2}=\cdots=X_{d-1}=0$ in $\mathscr{E}(j)$ and setting $E^{j}(T;X_{d})=T\overset{\sim}{E^{j}}(T;X_{d})+h^{i}X_{d+1}^{n-d+1},$ we get $T\overset{\sim}{E^{j}}(T;X_{d})=\overset{k}{\underset{i=1}{\prod}}S_{i}^{j}(T+(u_{i}-u_{j})X_{d},X_{d},X_{d+1})\\\ +(aT+c^{j}X_{d}+(e+f)X_{d+1})\sum_{n-d}(X_{d},X_{d+1})+f^{i}X_{d+1}^{n-d+1}.$ By the arguments after Proposition 1 in [2] ($c$ is $e+f$ here, $n$ is $d$ here, and $x_{i}$ is $f^{i}$ here), when $k\geq 4$ or $d$ is odd, there must exist an index $i$ such that $f^{i}=0$. By Lemma 1 in [2], we have $e+f=0$. So we only need to consider $k\leq 3$ and $d$ is even.
If $e+f\neq 0$, by the arguments after Proposition 1 in [2], we get $k\geq 3$. So $k=3$ and $d$ is even. When $k=3$, by the arguments after Proposition 1 in [2], we have $r_{1}=r_{3}$. Since $d$ is even, if $r_{1}$ is odd, by [[7] Lemma V.3.1], $e+f=0$ which is a contradiction. If $r_{1}\geq 4$, by [[7] V.6.3.1], $e+f=0$, a contradiction. When $r_{1}=2$, by [[7] V.6.4.2 case(2)], $e+f=0$, also a contradiction. ∎ Comparing the degree of $X_{d+1}$ in both sides of $\mathscr{E}$, there must exist an index $j_{0}$ such that the coefficient of $X_{d+1}^{r_{j_{0}}}$ in $S_{j_{0}}(T+u_{j_{0}}X_{d};X_{1},X_{2},\cdots X_{d-1};X_{d},X_{d+1})$ is zero, i.e, the coefficient of $X_{d+1}^{r_{j_{0}}}$ in $S_{j_{0}}^{j_{0}}(T;X_{1},X_{2},\cdots X_{d-1};X_{d},X_{d+1})$ is zero. ###### Proposition 4.2. If $a=0$ in the equation $\mathscr{E}(j_{0})$, then $b^{j_{0}}=c^{j_{0}}=0$. ###### Proof. Compare the coefficient of $X_{d+1}^{d}$ in both sides of $\mathscr{E}(j_{0})$. Since the coefficient of $X_{d+1}^{r_{j_{0}}}$ in $S_{j_{0}}^{j_{0}}(T;X_{1},X_{2},\cdots X_{d-1};X_{d},X_{d+1})$ is zero, if one of $b^{j_{0}}$ and $c^{j_{0}}$ is nonzero, then the coefficient of $X_{d+1}^{r_{i}}$ in $S_{i}^{j_{0}}(T+(u_{i}-u_{j_{0}})X_{d};X_{1},X_{2},\cdots X_{d-1};X_{d},X_{d+1})$ is nonzero constant when $i\neq j_{0}$. So, when $i\neq j_{0}$, $S_{i}^{j_{0}}(T+(u_{i}-u_{j_{0}})X_{d},X_{1},\cdots,X_{d+1})$ is of form $a_{i}X_{d+1}^{r_{i}}+\text{lower degree terms}$, where $a_{i}\neq 0$ and $S_{j_{0}}^{j_{0}}(T,X_{1},\cdots,X_{d+1})$ is the form of $F_{j_{0}}^{1}(T,X_{1},\cdots,X_{d})X_{d+1}^{r_{j_{0}}-1}+\text{lower degree terms}$. Taking $X_{1}=\cdots=X_{d}=0$ in $\mathscr{E}(j_{0})$, we have $T^{d+1}=S_{j_{0}}^{j_{0}}(T,X_{d+1})\overset{k}{\underset{i\neq j_{0}}{\prod}}(a_{i}X_{d+1}^{r_{i}}+\text{lower degree terms})$, which is impossible. So $b^{j_{0}}=c^{j_{0}}=0$. ∎ ###### Proposition 4.3. If $a\neq 0$ in the equation $\mathscr{E}(j_{0}):E^{j_{0}}(T;X_{1},X_{2},\cdots X_{d})=\overset{k}{\underset{i=1}{\prod}}S_{i}^{j_{0}}(T+(u_{i}-u_{j})X_{d};X_{1},X_{2},\cdots X_{d-1};X_{d},X_{d+1})+\\\ a(T+m_{1}(X_{1}+\cdots+X_{d-1})+m_{2}X_{d})\sum_{n-d}(X_{1},\cdots,X_{d+1})+f\sum_{n-d+1}(X_{1},\cdots,X_{d})$ where $m_{1}=\frac{b^{j_{0}}}{a},m_{2}=\frac{c^{j_{0}}}{a}$, then $T+m_{1}(X_{1}+\cdots+X_{d-1})+m_{2}X_{d}|S_{j_{0}}^{j_{0}}(T;X_{1},X_{2},\cdots X_{d-1};X_{d},X_{d+1})$ and $T+m_{1}(X_{1}+\cdots+X_{d-1})+m_{2}X_{d}|E^{j_{0}}(T;X_{1},X_{2},\cdots X_{d})-f\sum_{n-d+1}(X_{1},\cdots,X_{d}).$ Moreover, if deg $S_{j_{0}}^{j_{0}}=1$, then $m_{2}=0$; if deg $S_{j_{0}}^{j_{0}}>1$, then $m_{1}=m_{2}=0$. ###### Proof. Suppose that $a\neq 0$. By the similar argument in Proposition 4.2, $S_{i}^{j_{0}}(T+(u_{i}-u_{j_{0}})X_{d},X_{1},\cdots,X_{d+1})$ is of the form $a_{i}X_{d+1}^{r_{i}}+\text{lower degree terms}$, where $a_{i}\neq 0$, when $i\neq j_{0}$, and $S_{j_{0}}^{j_{0}}(T,X_{1},\cdots,X_{d+1})$ is of the form $F_{j_{0}}^{1}(T,X_{1},\cdots,X_{d})X_{d+1}^{r_{j_{0}}-1}+\text{lower degree terms}$. Taking $T=-m_{1}(X_{1}+\cdots+X_{d-1})-m_{2}X_{d}$ in $\mathscr{E}(j_{0})$, we get that $E^{j_{0}}(-m_{1}(X_{1}+\cdots+X_{d-1})-m_{2}X_{d};X_{1},X_{2},\cdots X_{d})-f\sum_{n-d+1}(X_{1},\cdots,X_{d})=\\\ \overset{k}{\underset{i=1}{\prod}}S_{i}^{j_{0}}(-m_{1}(X_{1}+\cdots+X_{d-1})-m_{2}X_{d}+(u_{i}-u_{j_{0}})X_{d};X_{1},X_{2},\cdots X_{d-1};X_{d},X_{d+1}).$ Since left side of the above equation is independent of $X_{d+1}$ and $a_{i}\neq 0$, $S_{j_{0}}^{j_{0}}(-m_{1}(X_{1}+\cdots+X_{d-1})-m_{2}X_{d};X_{1},X_{2},\cdots X_{d-1};X_{d},X_{d+1})=0$. 
So $T+m_{1}(X_{1}+\cdots+X_{d-1})+m_{2}X_{d}|S_{j_{0}}^{j_{0}}(T;X_{1},X_{2},\cdots X_{d-1};X_{d},X_{d+1})$ and $T+m_{1}(X_{1}+\cdots+X_{d-1})+m_{2}X_{d}|E^{j_{0}}(T;X_{1},X_{2},\cdots X_{d})-f\sum_{n-d+1}(X_{1},\cdots,X_{d}).$ Assume that $E^{j_{0}}(T;X_{1},\cdots,X_{d})=(T+m_{1}(X_{1}+\cdots+X_{d-1})+m_{2}X_{d}){\overset{\sim}{E}}^{j_{0}}(T,X_{1},\cdots,X_{d})$ and $S_{j_{0}}^{j_{0}}(T,X_{1}\cdots,X_{d-1};X_{d},X_{d+1})\\\ =(T+m_{1}(X_{1}+\cdots+X_{d-1})+m_{2}X_{d})\overset{\sim}{S}_{j_{0}}^{j_{0}}(T,X_{1},\cdots,X_{d-1};X_{d},X_{d+1}).$ When deg $S_{j_{0}}^{j_{0}}=1$, since $S_{j_{0}}^{j_{0}}(T;X_{1},X_{2},\cdots X_{d-1};X_{d},X_{d+1})$ is symmetric about $X_{1},\cdots,X_{d-1}$ and $X_{d},X_{d+1}$ respectively, we get $m_{2}=0$. If deg $S_{j_{0}}^{j_{0}}>1$ and $m_{1}\neq m_{2}$, then $E(T,X_{1},\cdots,X_{d})-f\sum_{n-d+1}(X_{1},\cdots,X_{d})=\\\ \overset{d}{\underset{i=1}{\prod}}(T+m_{1}(X_{1}+\cdots+\overset{\wedge}{X_{i}}+\cdots+X_{d}+m_{2}X_{i})(T+n(X_{1}+\cdots+X_{d}))$ since $E^{j_{0}}(T;X_{1},X_{2},\cdots X_{d})-f\sum_{n-d+1}(X_{1},\cdots,X_{d})$ is symmetric about $X_{1},\cdots,X_{d}$. The following equation is deduced from $\mathscr{E}(j_{0})$. $\mathscr{\overline{E}}(j_{0}):\overset{d-1}{\underset{i=1}{\prod}}(T+m_{1}(X_{1}+\cdots+\overset{\wedge}{X_{i}}+\cdots+X_{d})+m_{2}X_{i})(T+n(X_{1}+\cdots+X_{d}))=\\\ \overset{\sim}{S}_{j_{0}}^{j_{0}}(T,X_{1},\cdots,X_{d+1})\overset{k}{\underset{i\neq j_{0}}{\prod}}S_{i}^{j_{0}}(T+(u_{i}-u_{j_{0}})X_{d};X_{1},X_{2},\cdots X_{d-1};X_{d},X_{d+1})+a\sum_{n-d}(X_{1},\cdots,X_{d+1}).$ Taking $T=-n(X_{1}+\cdots+X_{d})$ in $\mathscr{\overline{E}}(j_{0})$, we have $0=\overset{\sim}{S}_{j_{0}}^{j_{0}}(-n(X_{1}+\cdots+X_{d}),X_{1},\cdots,X_{d+1})\overset{k}{\underset{i\neq j_{0}}{\prod}}S_{i}^{j_{0}}(-n(X_{1}+\cdots+X_{d})+\\\ (u_{i}-u_{j_{0}})X_{d};X_{1},X_{2},\cdots X_{d-1};X_{d},X_{d+1})+a\sum_{n-d}(X_{1},\cdots,X_{d+1})$ which contradicts to the fact that $\sum_{n-d}(X_{1},\cdots,X_{d+1})$ is irreducible when $d\geqslant 2$. So $m_{1}=m_{2}$. If $m_{1}\neq 0$, set $E^{j_{0}}(T;X_{1},\cdots,X_{d})=(T+m_{1}(X_{1}+\cdots+X_{d-1}+X_{d}))\overset{\sim}{E}^{j_{0}}(T,X_{1},\cdots,X_{d})$ and $S_{j_{0}}^{j_{0}}(T,X_{1},\cdots,X_{d-1};X_{d},X_{d+1})=\\\ (T+m_{1}(X_{1}+\cdots+X_{d-1}+X_{d}))\overset{\sim}{S}_{j_{0}}^{j_{0}}(T,X_{1},\cdots,X_{d-1};X_{d},X_{d+1}).$ From $\mathscr{E}(j_{0})$, we have $\overset{\sim}{E}^{j_{0}}(T,X_{1},\cdots,X_{d})-a\sum_{d}(X_{1},\cdots,X_{d+1})=\\\ \overset{\sim}{S}_{j_{0}}^{j_{0}}(T,X_{1},\cdots,X_{d+1})\overset{k}{\underset{i\neq j_{0}}{\prod}}S_{i}^{j_{0}}(T+(u_{i}-u_{j_{0}})X_{d};X_{1},X_{2},\cdots X_{d-1};X_{d},X_{d+1}).$ Clearly, $\overset{\sim}{E}^{j_{0}}(T,X_{1},\cdots,X_{d})$ is symmetric about $X_{1},\cdots,X_{d}$ while ${S}_{j_{0}}^{j_{0}}(T,X_{1},\cdots,X_{d-1};X_{d},X_{d+1})$ is symmetric about $X_{1},\cdots,X_{d-1}$ and $X_{d},X_{d+1}$ respectively, so $T+m_{1}(X_{1}+\cdots+X_{d-1}+X_{d+1})|\overset{\sim}{S}_{j_{0}}^{j_{0}}(T,X_{1},\cdots,X_{d-1};X_{d},X_{d+1})$. Then $T+m_{1}(X_{1}+\cdots+X_{d-1}+X_{d+1})|E^{j_{0}}(T;X_{1},\cdots,X_{d})-a\sum_{d}(X_{1},\cdots,X_{d+1})$. Since the right side of above equation is symmetric about $X_{1},\cdots,X_{d}$, $E^{j_{0}}(T;X_{1},\cdots,X_{d})-a\sum_{d}(X_{1},\cdots,X_{d+1})=\overset{d}{\underset{i=1}{\prod}}(T+m_{1}(X_{1}+\cdots+\overset{\wedge}{X_{i}}+\cdots+X_{d+1}).$ Comparing the coefficient of $T^{d-1}$, we know that $(m_{1}(d-1))(X_{1}+\cdots X_{d})+dm_{1}X_{d+1}$ is independent of $X_{d+1}$. So $m_{1}=0$, which contradicts to the assumption. 
Therefore, $m_{1}=m_{2}=0$. ∎ ###### Proposition 4.4. When $a\neq 0$ and deg $S_{j_{0}}=1$, $E^{j_{0}}(T;X_{1},\cdots,X_{d})$ is one of the following cases. _(i)._ $k=2$ (without loss of generality, we can assume $j_{0}=1$.): $S_{1}(T,X_{1},\cdots,X_{d+1})=T+u_{2}(X_{1}+\cdots+X_{d-1}),$ $S_{2}(T,X_{1},\cdots,X_{d+1})=\overset{d-1}{\underset{i=1}{\prod}}(T+u_{2}(X_{1}+\cdots+\overset{\wedge}{X_{i}}+\cdots\\\ +X_{d-1})+u_{1}X_{i})(T+u_{2}(X_{1}+\cdots+X_{d-1}))+a\sum_{n-d}(X_{1},\cdots,X_{d+1}),$ $E(T,X_{1},\cdots,X_{d})=\overset{d}{\underset{i=1}{\prod}}(T+u_{2}(X_{1}+\cdots+\overset{\wedge}{X_{i}}+\cdots\\\ +X_{d})+u_{1}X_{i})(T+u_{2}(X_{1}+\cdots+X_{d}))+f\sum_{n-d+1}(X_{1},\cdots,X_{d})$ and $\mathscr{E}(1):E^{1}(T,X_{1},\cdots,X_{d})=\overset{d}{\underset{i=1}{\prod}}(T+(u_{2}-u_{1})(X_{1}+\cdots\\\ +\overset{\wedge}{X_{i}}+\cdots+X_{d}))(T+(u_{2}-u_{1})(X_{1}+\cdots+X_{d}))+f\sum_{n-d+1}(X_{1},\cdots,X_{d}).$ _(ii)._ $k=2$ (without loss of generality, we can assume $j_{0}=1$) : $S_{1}(T,X_{1},\cdots,X_{d+1})=T+u_{1}(X_{1}+\cdots+X_{d-1}),$ $S_{2}(T,X_{1},\cdots,X_{d+1})=(T+u_{2}(X_{1}+\cdots+X_{d-1}))^{d}+a\sum_{n-d}(X_{1},\cdots,X_{d+1}),$ $E(T,X_{1},\cdots,X_{d})=(T+u_{1}(X_{1}+\cdots+X_{d})((T+u_{2}(X_{1}+\cdots\\\ +X_{d}))^{d}))+f\sum_{n-d+1}(X_{1},\cdots,X_{d})$ and $\mathscr{E}(1):E^{1}(T,X_{1},\cdots,X_{d})=T(T+(u_{2}-u_{1})(X_{1}+\cdots+X_{d}))^{d}+f\sum_{n-d+1}(X_{1},\cdots,X_{d}).$ _(iii)._ $k=3$: $E(T,X_{1},\cdots,X_{d})=(T+u_{j_{0}}(X_{1}+\cdots+X_{d}))[(T+u_{q}(X_{1}+\cdots\\\ +X_{d})-\beta X_{d+1})\sum_{d-1}(T+u_{q}(X_{1}+\cdots+X_{d}),\beta X_{1},\cdots,\\\ \beta X_{d+1})+(\beta)^{n-d}\sum_{n-d}(X_{1},\cdots,X_{d+1})]+f\sum_{n-d+1}(X_{1},\cdots,X_{d}),$ $S_{j_{0}}(T,X_{1},\cdots,X_{d+1})=T+u_{j_{0}}(X_{1}+\cdots+X_{d-1}),$ $S_{p}(T,X_{1},\cdots,X_{d+1})=T+u_{q}(X_{1}+\cdots+X_{d-1})+(u_{q}-u_{p})(X_{d}+X_{d+1})$ and $S_{q}(T,X_{1},\cdots,X_{d+1})=\sum_{d-1}(T+u_{q}(X_{1}+\cdots+X_{d-1}),(u_{p}-u_{q})X_{1},\cdots,(u_{p}-u_{q})X_{d+1}),$ where $p$ and $q$ are two indices different from $j_{0}$, $r_{p}=1$, $r_{q}=d-1$ and $\beta=u_{p}-u_{q}$. The $\mathscr{E}(j_{0})$ is : $E^{j_{0}}(T,X_{1},\cdots,X_{d})=T[(T+(u_{q}-u_{j_{0}})(X_{1}+\cdots\\\ +X_{d})-\beta X_{d+1})\sum_{d-1}(T+(u_{q}-u_{j_{0}})(X_{1}+\cdots+X_{d}),\beta X_{1},\cdots,\\\ \beta X_{d+1})+(\beta)^{n-d}\sum_{n-d}(X_{1},\cdots,X_{d+1})]+f\sum_{n-d+1}(X_{1},\cdots,X_{d}).$ ###### Proof. When deg $S_{j_{0}}=1$, by Proposition 4.3, $S_{j_{0}}^{j_{0}}=T+m(X_{1}+\cdots+X_{d-1})$, where $m=\frac{b^{j_{0}}}{a}$. Assume that $E^{j_{0}}(T,X_{1},\cdots,X_{d})-f\sum_{n-d+1}(X_{1},\cdots,X_{d})=(T+m(X_{1}+\cdots+X_{d-1}))E^{{}^{\prime}j_{0}}(T,X_{1},\cdots,X_{d}).$ Then the equation $\mathscr{E}(j_{0})$ is of the form $E^{j_{0}}(T;X_{1},X_{2},\cdots X_{d})=(T+m(X_{1}+\cdots+X_{d-1}))\overset{k}{\underset{i\neq j_{0}}{\prod}}S_{i}^{j_{0}}(T+\\\ (u_{i}-u_{j_{0}})X_{d};X_{1},X_{2},\cdots,X_{d-1};X_{d},X_{d+1})+a[T+m(X_{1}+\cdots+\\\ X_{d-1})]\sum_{n-d}(X_{1},\cdots,X_{d+1})+f\sum_{n-d+1}(X_{1},\cdots,X_{d}).$ So $E^{{}^{\prime}j_{0}}(T,X_{1},\cdots,X_{d})=\overset{k}{\underset{i\neq j_{0}}{\prod}}S_{i}^{j_{0}}(T+(u_{i}-u_{j_{0}})X_{d};X_{1},X_{2},\cdots,\\\ X_{d-1};X_{d},X_{d+1})+a\sum_{n-d}(X_{1},\cdots,X_{d+1}).$ Taking $X_{1}=\cdots=X_{d-1}=0$, in [7][iii@-2], the author proved that only the following cases may happen. 1. A). $a=0$; 2. B). $k-1=1$; 3. C). $k-1=2$. Case A). $a=0$: By Proposition 4.2, we get $b^{j_{0}}=c^{j_{0}}=0$. So, we reduce the question to Section 3. Case B). 
$k=2$: Without loss of generality, we can assuming $j_{0}=1$. If $m\neq 0$, then $E^{1}(T,X_{1},\cdots,X_{d})-f\sum_{n-d+1}(X_{1},\cdots,X_{d})$ is symmetric about $X_{1},\cdots,X_{d}$ and divisible by $T+m(X_{1}+\cdots+X_{d-1})$. So $E^{1}(T,X_{1},\cdots,X_{d})-f\sum_{n-d+1}(X_{1},\cdots,X_{d})=\\\ \overset{d}{\underset{i=1}{\prod}}[T+m(X_{1}+\cdots+\overset{\wedge}{X_{i}}+\cdots+X_{d})]F(T,X_{1},\cdots,X_{d}).$ Since $E^{1}(T,X_{1},\cdots,X_{d})$ is of degree $d+1$, $F(T,X_{1},\cdots,X_{d})$ is of degree $1$ and symmetric about $X_{1},\cdots,X_{d}$. So $F(T,X_{1},\cdots,X_{d+1})=T+g(X_{1}+\cdots+X_{d}).$ The following equation is deduced from $\mathscr{E}(1)$. $\overset{d-1}{\underset{i=1}{\prod}}(T+m(X_{1}+\cdots+\overset{\wedge}{X_{i}}+\cdots+X_{d})(T+g(X_{1}+\cdots+X_{d}))=\\\ S_{2}^{1}(T+(u_{2}-u_{1})X_{d},X_{1},\cdots,X_{d+1})+a\sum_{n-d}(X_{1},\cdots,X_{d+1}).$ Taking $X_{1}=\cdots=X_{d-1}=0$ and $T=T^{\prime}+(u_{1}-u_{2})X_{d}$ in above equation, we have $(T^{\prime}+(m+u_{1}-u_{2})X_{d})^{d-1}(T^{\prime}+(g+u_{1}-u_{2})X_{d})=\\\ S_{2}(T^{\prime},0,\cdots,0,X_{d},X_{d+1})+a\sum_{d}(X_{d},X_{d+1}).$ Since the right side of the above equation is symmetric about $X_{d},X_{d+1}$, we can get $m+u_{1}-u_{1}=g+u_{1}-u_{2}=0,~{}\emph{i.e.}~{}m=g=u_{2}-u_{1}.$ Thus, the equation $\mathscr{E}(1)$ is $E^{1}(T,X_{1},\cdots,X_{d})=\overset{d}{\underset{i=1}{\prod}}(T+(u_{2}-u_{1})(X_{1}+\cdots\\\ +\overset{\wedge}{X_{i}}+\cdots+X_{d})(T+(u_{2}-u_{1})(X_{1}+\cdots+X_{d}))+f\sum_{n-d+1}(X_{1},\cdots,X_{d}).$ If $m=0$, then $\mathscr{E}(1)$ becomes $E^{1}(T;X_{1},X_{2},\cdots X_{d})=TS_{2}^{1}(T+(u_{2}-u_{1})X_{d};X_{1},X_{2},\cdots,\\\ X_{d-1};X_{d},X_{d+1})+aT\sum_{n-d}(X_{1},\cdots,X_{d+1})+f\sum_{n-d+1}(X_{1},\cdots,X_{d}).$ Suppose that $E^{1}(T;X_{1},X_{2},\cdots X_{d})-f\sum_{n-d+1}(X_{1},\cdots,X_{d})=TE^{{}^{\prime}1}(T,X_{1},\cdots,X_{d})$. Then $E^{{}^{\prime}1}(T,X_{1},\cdots,X_{d})=S_{2}^{1}(T+(u_{2}-u_{1})X_{d};X_{1},X_{2},\cdots,\\\ X_{d-1};X_{d},X_{d+1})+a\sum_{n-d}(X_{1},\cdots,X_{d+1}).$ Clearly, $E^{{}^{\prime}1}(T,X_{1},\cdots,X_{d})$ is symmetric about $X_{1},\cdots,X_{d}$. So, by Proposition 4.2 in [8], we have $E^{1^{\prime}}(T;X_{1},X_{2},\cdots X_{d})=(T+(u_{2}-u_{1})(X_{1}+\cdots+X_{d}))^{d}$. The equation $\mathscr{E}(1)$ is $E^{1}(T,X_{1},\cdots,X_{d})=T(T+(u_{2}-u_{1})(X_{1}+\cdots+X_{d}))^{d}+f\sum_{n-d+1}(X_{1},\cdots,X_{d})$. Case C): $k=3$. If $m\neq 0$, by argument above, we have $\overset{d-1}{\underset{i=1}{\prod}}(T+m(X_{1}+\cdots+\overset{\wedge}{X_{i}}+\cdots+X_{d})(T+g(X_{1}+\cdots+X_{d}))=\\\ \overset{3}{\underset{i\neq j_{0}}{\prod}}S_{i}^{j_{0}}(T+(u_{i}-u_{j_{0}})X_{d};X_{1},X_{2},\cdots,X_{d-1};X_{d},X_{d+1})+a\sum_{n-d}(X_{1},\cdots,X_{d+1}).$ Taking $T=-g(X_{1}+\cdots+X_{d})$ in above equation, we get $0=\overset{3}{\underset{i\neq j_{0}}{\prod}}S_{i}^{{}^{\prime}j_{0}}(X_{1},X_{2},\cdots,X_{d+1})+a\sum_{n-d}(X_{1},\cdots,X_{d+1})$. Since $\sum_{n-d}(X_{1},\cdots,X_{d+1})$ is irreducible when $d+1\geq 3$, we have $a=0$. We reduce the case to Section 3. If $m=0$, $E^{{}^{\prime}j_{0}}$ is symmetric about $X_{1},\cdots,X_{d}$, by Lemma 4.2.3 in [8], we have $E^{{}^{\prime}j_{0}}(T,X_{1},\cdots,X_{d})=(T^{*}-\beta X_{d+1})\sum_{d-1}(T^{*},\beta X_{1},\cdots,\\\ \beta X_{d+1})+(\beta)^{n-d}\sum_{n-d}(X_{1},\cdots,X_{d+1}),$ where $p$ and $q$ are two indices different from $j_{0}$, $r_{p}=1$, $r_{q}=d-1$ and $\beta=u_{p}-u_{q}$. 
Now, $E^{j_{0}}(T^{*},X_{1},\cdots,X_{d})=T\sum_{d}(T^{*},\beta X_{1},\cdots,\beta X_{d})+f\sum_{n-d+1}(X_{1},\cdots,X_{d})$, where $T^{*}=T+(u_{q}-u_{j_{0}})(X_{1}+\cdots+X_{d})$, $\beta=u_{p}-u_{q}$ and $p$ and $q$ are two indices different from $j_{0}$, $r_{p}=1$ and $r_{q}=d-1$. Thus we get $\mathscr{E}(1)$ when $k=3$. ∎ ###### Proposition 4.5. When $a\neq 0$ and deg $S_{j_{0}}>1$, then $E^{j_{0}}(T,X_{1},\cdots,X_{d})$ is one of the following cases. _(i)._ $k=2$ (without loss of generality, we can assume $j_{0}=1$): $E(T,X_{1},\cdots,X_{d})=(T+u_{1}(X_{1}+\cdots+X_{d}))[(T+\\\ u_{1}(X_{1}+\cdots+X_{d})+\beta X_{d+1})\sum_{d-1}(T+u_{q}(X_{1}+\cdots+X_{d}),-\beta X_{1},\cdots,-\beta X_{d+1})+\\\ (-\beta)^{n-d}\sum_{n-d}(X_{1},\cdots,X_{d+1})]+f\sum_{n-d+1}(X_{1},\cdots,X_{d}),$ $S_{1}(T,X_{1},\cdots,X_{d+1})=[T+u_{1}(X_{1}+\cdots+X_{d-1})][T+u_{2}(X_{1}+\cdots\\\ +X_{d-1})+\beta(X_{d}+X_{d+1})],$ $S_{2}(T,X_{1},\cdots,X_{d+1})=\sum_{d-1}(T+u_{2}(X_{1}+\cdots+X_{d-1}),-\beta X_{1},\cdots,-\beta X_{d+1})$ and $\mathscr{E}(1):E^{1}(T,X_{1},\cdots,X_{d})=T[(T+\beta(X_{1}+\cdots\\\ +X_{d+1}))\sum_{d-1}(T+\beta(X_{1}+\cdots+X_{d}),-\beta X_{1},\cdots,-\beta X_{d+1})+\\\ (-\beta)^{n-d}\sum_{n-d}(X_{1},\cdots,X_{d+1})]+f\sum_{n-d+1}(X_{1},\cdots,X_{d}),$ where $\beta=u_{2}-u_{1}$. _(ii)._ $k=2$ (without loss of generality, we can assume $j_{0}=1$): $E(T,X_{1},\cdots,X_{d})=(T+u_{1}(X_{1}+\cdots+X_{d}))[(T+u_{1}(X_{1}+\cdots\\\ +X_{d})+\beta X_{d+1})\sum_{d-1}(T+u_{1}(X_{1}+\cdots+X_{d}),-\beta X_{1},\cdots,-\beta X_{d+1})+\\\ (-\beta)^{n-d}\sum_{n-d}(X_{1},\cdots,X_{d+1})]+f\sum_{n-d+1}(X_{1},\cdots,X_{d}),$ $S_{1}(T,X_{1},\cdots,X_{d+1})=\sum_{d-1}(T+u_{1}(X_{1}+\cdots\\\ +X_{d-1}),-\beta X_{1},\cdots,-\beta X_{d+1})[T+u_{1}(X_{1}+\cdots+X_{d-1})],$ $S_{2}(T,X_{1},\cdots,X_{d+1})=T+u_{1}(X_{1}+\cdots+X_{d-1})+\beta(X_{d}+X_{d+1})$ and $\mathscr{E}(1):E^{1}(T,X_{1},\cdots,X_{d})=T[(T+\beta X_{d+1})\sum_{d-1}(T,-\beta X_{1},\cdots,\\\ -\beta X_{d+1})+(-\beta)^{n-d}\sum_{n-d}(X_{1},\cdots,X_{d+1})]+f\sum_{n-d+1}(X_{1},\cdots,X_{d}),$ where $\beta=u_{1}-u_{2}$. ###### Proof. Since deg$S_{j_{0}}>1$, by Proposition 4.3, $E^{j_{0}}(T;X_{1},\cdots,X_{d})=T\overset{\sim}{E}^{j_{0}}(T,X_{1},\cdots,X_{d})$ and $S_{j_{0}}^{j_{0}}(T,X_{1}\cdots,X_{d-1};X_{d},X_{d+1})=T\overset{\sim}{S}_{j_{0}}^{j_{0}}(T,X_{1},\cdots,X_{d-1};X_{d},X_{d+1}).$ Clearly, $\overset{\sim}{E}^{j_{0}}(T,X_{1},\cdots,X_{d})$ is symmetric about $X_{1},\cdots,X_{d}$, and $\overset{\sim}{S}_{j_{0}}^{j_{0}}(T,X_{1},\cdots,X_{d-1};X_{d},X_{d+1})$ is symmetric about $X_{1},\cdots,X_{d-1}$ and $X_{d},X_{d+1}$ respectively, so we have the following equation. $\overset{\sim}{E}^{j_{0}}(T,X_{1},\cdots,X_{d})=\overset{\sim}{S}_{j_{0}}^{j_{0}}(T,X_{1},\cdots,X_{d-1};X_{d},X_{d+1})\overset{k}{\underset{i\neq j_{0}}{\prod}}S_{i}^{j_{0}}(T+\\\ (u_{i}-u_{j_{0}})X_{d};X_{1},X_{2},\cdots X_{d-1};X_{d},X_{d+1})+a\sum_{n-d}(X_{1},\cdots,X_{d+1}).$ In [8], the author proved the following results in Lemma 4.2.3. $k=2$, and deg $\overset{\sim}{S}_{j_{0}}^{j_{0}}=1$ or $d-1$. We set $j_{0}=1$. When deg $\overset{\sim}{S}_{j_{0}}^{j_{0}}=1$, $\overset{\sim}{S}_{1}^{1}(T,X_{1},\cdots,X_{d+1})=T+\beta(X_{1}+\cdots+X_{d+1})$ and $S_{2}^{1}(T^{*}+(u_{2}-u_{1})X_{d},X_{1},\cdots,X_{d+1})=\sum_{n-d-1}(T^{*},-\beta X_{1},\cdots,-\beta X_{d+1}),$ where $T^{*}=T+\beta(X_{1}+\cdots+X_{d})$ and $\beta=u_{2}-u_{1}$. 
When deg $\overset{\sim}{S}_{j_{0}}^{j_{0}}=d-1$, $\overset{\sim}{S}_{1}^{1}(T,X_{1},\cdots,X_{d+1})=\sum_{d-1}(T,(u_{2}-u_{1})X_{1},\cdots,(u_{2}-u_{1})X_{d+1})$ and $S_{2}^{1}(T+(u_{2}-u_{1})X_{d},X_{1},\cdots,X_{d+1})=T+(u_{1}-u_{2})X_{d+1}.$ Then, we deduce our conclusion from above results. ∎ ### 4.3 Uniform $(d+1)$-bundle over $G(d,n)$ when $d=n-d$ ###### Proposition 4.6. Taking a dualization or tensoring with line bundle if necessary, the uniform vector bundles corresponding to Proposition 4.4 are as follows: $H_{d}^{*}(u_{2})\oplus\mathcal{O}_{G(d,n)}(u_{2}),\mathcal{O}_{G(d,n)}(u_{1})\oplus\overset{d}{\underset{i=1}{\oplus}}\mathcal{O}_{G(d,n)}(u_{2}),\mathcal{O}_{G(d,n)}(u_{1})\oplus Q_{n-d}(u_{2}-1)$ or $S^{2}Q_{2}(u_{2})$. ###### Proof. Consider the standard diagram $\textstyle{F(d-1,d,n)\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\scriptstyle{q}$$\textstyle{F(d-1,d,d+1,n)\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\scriptstyle{pr_{2}}$$\scriptstyle{p}$$\scriptstyle{pr_{1}}$$\textstyle{F(d,d+1,n)\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\scriptstyle{r}$$\textstyle{G(d,n).}$ For case (i) in Proposition 4.4, after replacing $T$ with $T+u_{2}(X_{1}+\cdots X_{d})$, we have $c_{p^{*}E(-u_{2})}(T)=(T+X_{d})(\overset{d-1}{T\underset{i=1}{\prod}}(T+X_{i})+a\sum_{n-d}(X_{1},\cdots,X_{d+1}))$. The HN-filtration gives the exact sequence $0\longrightarrow HN^{1}\longrightarrow p^{*}E(-u_{2})\longrightarrow F\longrightarrow 0.$ For line bundles $HN^{1}$ and $\mathscr{H}_{H_{d}^{*}}$, since $c_{HN^{1}}(T)=T+X_{d}=c_{\mathscr{H}_{H_{d}^{*}}}(T)$, we get that $HN^{1}\cong\mathscr{H}_{H_{d}^{*}}$. Thus $p^{*}E(-u_{2})$ has $\mathscr{H}_{H_{d}^{*}}$ as a subbundle. By Lemma 2.2, $E(-u_{2})$ has $H_{d}^{*}$ as a subbundle. Because of the splitting type, we have exact sequence $0\longrightarrow H_{d}^{*}\longrightarrow E(-u_{2})\longrightarrow\mathcal{O}_{G(d,n)}\longrightarrow 0$. Therefore, $E(-u_{2})\cong H_{d}^{*}\oplus\mathcal{O}_{G(d,n)}$. For case (ii) in Proposition 4.4, after replacing $T$ with $T+u_{1}(X_{1}+\cdots X_{d})$, we have $c_{p^{*}E(-u_{1})}(T)=T((T+(u_{2}-u_{1})(X_{1}+\cdots+X_{d-1}))^{d}+a\sum_{n-d}(X_{1},\cdots,X_{d+1}))$. The HN-filtration gives the exact sequence $0\longrightarrow HN^{1}\longrightarrow p^{*}E(-u_{1})\longrightarrow F\longrightarrow 0.$ For line bundles $HN^{1}$ and $p^{*}{\mathcal{O}_{G(d,n)}}$, since $c_{HN^{1}}(T)=T=c_{p^{*}{\mathcal{O}_{G(d,n)}}}(T)$, we have $HN^{1}\cong p^{*}{\mathcal{O}_{G(d,n)}}$. Thus $p^{*}E$ has $p^{*}\mathcal{O}_{G(d,n)}$ as a subbundle. Applying $p_{*}$ to the above sequence, we have $0\longrightarrow\mathcal{O}_{G(d,n)}\longrightarrow E(-u_{1})\longrightarrow p_{*}F\longrightarrow 0$. Comparing the splitting type, we know that $p_{*}F$ has the splitting type $(-1,\cdots,-1)$. By Proposition 3.1 in [4], $p_{*}F\cong\overset{d}{\underset{i=1}{\oplus}}\mathcal{O}_{G(d,n)}(-1)$. Therefore $E\cong\mathcal{O}_{G(d,n)}(u_{1})\oplus\overset{d}{\underset{i=1}{\oplus}}\mathcal{O}_{G}(u_{2})$. For case (iii) in Proposition 4.4, taking a dualization if necessary, we may assume that $j_{0}=1$ or $j_{0}=2$. When $j_{0}=1$, after replacing $T$ with $T+u_{1}(X_{1}+\cdots X_{d})$, the HN-filtration gives the exact sequence $0\longrightarrow HN^{1}\longrightarrow p^{*}E(-u_{1})\longrightarrow F\longrightarrow 0$. 
For line bundles $HN^{1}$ and $p^{*}{\mathcal{O}_{G(d,n)}}$, since $c_{HN^{1}}(T)=T=c_{p^{*}{\mathcal{O}_{G(d,n)}}}(T)$ , we have $HN^{1}\cong p^{*}{\mathcal{O}_{G(d,n)}}$ and the exact sequence $0\longrightarrow p^{*}\mathcal{O}_{G(d,n)}\longrightarrow p^{*}E(-u_{1})\longrightarrow F\longrightarrow 0$. Applying $p_{*}$ to the above sequence, we get $0\longrightarrow\mathcal{O}_{G(d,n)}\longrightarrow E(-u_{1})\longrightarrow p_{*}F\longrightarrow 0$. Because of the splitting type of $E$, we know that $p_{*}F$ is a uniform vector bundle. By Theorem 1 in [8], we have $p_{*}F\cong H_{d}(-1)$ or $p_{*}F\cong H_{d}^{*}(-2)$. Therefore $E\cong\mathcal{O}_{G(d,n)}(u_{1})\oplus H_{d}(u_{1}-1)$ or $E\cong\mathcal{O}_{G(d,n)}(u_{1})\oplus H_{d}^{*}(u_{1}-2)$. When $j_{0}=2$, taking a dualization if necessary, we may assume that $r_{1}=1$. After replacing $T$ with $T+u_{3}(X_{1}+\cdots+X_{d})$, the HN- filtration gives the exact sequences $0\longrightarrow HN^{1}\longrightarrow HN^{2}\longrightarrow I\longrightarrow 0$ (22) and $0\longrightarrow HN^{2}\longrightarrow p^{*}E\longrightarrow F\longrightarrow 0.$ (23) We know that $c_{HN^{1}}(T)=T-2X_{d+1}=c_{\mathscr{H}_{Q_{n-d}}^{\otimes 2}}(T)$ and $c_{HN^{2}/HN^{1}}(T)=T+X_{1}+X_{2}+\cdots+X_{d}=c_{p^{*}\mathcal{O}_{G(d,n)}(1)}(T).$ Since $HN^{1}$, $\mathscr{H}_{Q_{n-d}}^{\otimes 2}$, $HN^{2}/HN^{1}$ and $p^{*}\mathcal{O}_{G(d,n)}(1)$ are line bundles, we get $HN^{1}\cong\mathscr{H}_{Q_{n-d}}^{\otimes 2}$ and $HN^{2}/HN^{1}\cong p^{*}\mathcal{O}_{G(d,n)}(1)$. Then the exact sequences (22) becomes $0\longrightarrow\mathscr{H}_{Q_{n-d}}^{\otimes 2}\longrightarrow HN^{2}\longrightarrow p^{*}\mathcal{O}_{G(d,n)}(1)\longrightarrow 0.$ When $d=n-d>2$ and $i>0$, we have $R^{i}pr_{2*}(\mathscr{H}_{Q_{n-d}}^{\otimes 2}\otimes p^{*}\mathcal{O}_{G(d,n)}(-1))=0$ and $R^{i}r_{*}(\mathcal{O}_{F(d,d+1,n)}(-2))=0$ since $F(d,d+1,n)$ can be viewed as a projective bundle over $G(d,n)$. Combining with projection formula, we have $\displaystyle H^{1}(F(d-1,d,d+1,n),\mathscr{H}_{Q_{n-d}}^{\otimes 2}\otimes p^{*}\mathcal{O}_{G(d,n)}(-1))$ $\displaystyle=$ $\displaystyle H^{1}(F(d,d+1,n),pr_{2*}(\mathscr{H}_{Q_{n-d}}^{\otimes 2}\otimes p^{*}\mathcal{O}_{G(d,n)}(-1)))$ $\displaystyle=$ $\displaystyle H^{1}(F(d,d+1,n),\mathcal{O}_{F(d,d+1,n)}(-2)\otimes r^{*}\mathcal{O}_{G(d,n)}(-1))$ $\displaystyle=$ $\displaystyle H^{1}(G(d,n),r_{*}\mathcal{O}_{F(d,d+1,n)}(-2)\otimes\mathcal{O}_{G(d,n)}(-1))$ $\displaystyle=$ $\displaystyle H^{1}(G(d,n),0)$ $\displaystyle=$ $\displaystyle 0.$ Since $H^{1}(F(d-1,d,d+1,n),\mathscr{H}_{Q_{n-d}}^{\otimes 2}\otimes p^{*}\mathcal{O}_{G(d,n)}(-1))=Ext^{1}(p^{*}\mathcal{O}_{G(d,n)}(1),\mathscr{H}_{Q_{n-d}}^{\otimes 2})=0,$ we have $HN^{2}\simeq\mathscr{H}_{Q_{n-d}}^{\otimes 2}\oplus p^{*}\mathcal{O}_{G(d,n)}(1).$ Applying $p_{*}$ to (23) we have the exact sequence $0\longrightarrow\mathcal{O}_{G(d,n)}(1)\longrightarrow E\longrightarrow p_{*}F\longrightarrow 0$. 
Moreover, over $F(d-1,d,d+1,n)$, we have the commutative diagram $\textstyle{0\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\textstyle{p^{*}\mathcal{O}_{G(d,n)}(1)\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\textstyle{p^{*}E\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\scriptstyle{id}$$\textstyle{p^{*}p_{*}F\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\textstyle{0}$$\textstyle{0\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\textstyle{HN^{2}\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\textstyle{p^{*}E\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\textstyle{F\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\textstyle{0.}$ The snake lemma gives the following exact sequence: $0\longrightarrow\mathscr{H}_{Q_{n-d}}^{\otimes 2}\longrightarrow p^{*}p_{*}F\longrightarrow F\longrightarrow 0$. Restricting to the fiber of $s$ at some point $l$ in $F(d-1,d+1,n)$, we have $Hom(T_{F(d-1,d,d+1,n)/G(d,n)},\mathscr{H}om(\mathscr{H}_{Q_{n-d}}^{\otimes 2},F))|_{s^{-1}(l)}=0$ $\Rightarrow Hom(T_{F(d-1,d,d+1,n)/G(d,n)},\mathscr{H}om(\mathscr{H}_{H_{d}^{*}}^{\otimes 2},F))=0.$ Thus, by Descente-Lemma (Lemma 2.1.2 in [13]), there is a line subbundle $L$ of $p_{*}F$ such that $p^{*}L$ is isomorphic to $\mathscr{H}_{Q_{n-d}}^{\otimes 2}$, which is impossible because $p_{*}p^{*}L\cong L$ while $p_{*}\mathscr{H}_{Q_{n-d}}^{\otimes 2}=0$. Thus, $n-d=2$. If $n-d=2$, then $d=2$ and $n=4$. The HN-filtration gives two exact sequences $0\longrightarrow\mathscr{H}_{Q_{2}}^{\otimes 2}\longrightarrow HN^{2}\longrightarrow p^{*}\mathcal{O}_{G(2,4)}(1)\longrightarrow 0$ (24) and $0\longrightarrow HN^{2}\longrightarrow p^{*}E\longrightarrow\mathscr{H}_{Q_{2}}^{*\otimes 2}\otimes p^{*}\mathcal{O}_{G(2,4)}(2)\longrightarrow 0.$ (25) Next, we want to consider the extension of the exact sequence (24) by calculating $H^{1}(F(1,2,3,4),\mathscr{H}_{Q_{2}}^{\otimes 2}\otimes p^{*}\mathcal{O}_{G(2,4)}(-1)).$ By the Leray spectral sequence, we have the exact sequence $H^{1}(G(2,4),p_{*}(\mathscr{H}_{Q_{2}}^{\otimes 2}\otimes p^{*}\mathcal{O}_{G(2,4)}(-1)))\rightarrow H^{1}(F(1,2,3,4),\mathscr{H}_{Q_{2}}^{\otimes 2}\otimes p^{*}\mathcal{O}_{G(2,4)}(-1))\rightarrow\\\ H^{0}(G(2,4),R^{1}p_{*}(\mathscr{H}_{Q_{2}}^{\otimes 2}\otimes p^{*}\mathcal{O}_{G(2,4)}(-1)))\rightarrow H^{2}(G(2,4),p_{*}(\mathscr{H}_{Q_{2}}^{\otimes 2}\otimes p^{*}\mathcal{O}_{G(2,4)}(-1))).$ For $p_{*}\mathscr{H}_{Q_{2}}^{\otimes 2}=0$, we have $\displaystyle H^{1}(F(1,2,3,4),\mathscr{H}_{Q_{2}}^{\otimes 2}\otimes p^{*}\mathcal{O}_{G(2,4)}(-1))$ $\displaystyle=$ $\displaystyle H^{0}(G(2,4),R^{1}p_{*}(\mathscr{H}_{Q_{2}}^{\otimes 2}\otimes p^{*}\mathcal{O}_{G(2,4)}(-1)))$ $\displaystyle=$ $\displaystyle H^{0}(G(2,4),\mathcal{O}_{G(2,4)})$ $\displaystyle=$ $\displaystyle k.$ So $HN^{2}$ is the direct sum of line bundles or the nontrivial extension. From the argument above, $HN^{2}$ is not the trivial extension. Applying $p_{*}$ to (25), we have $0\longrightarrow p_{*}HN^{2}\longrightarrow E\longrightarrow S^{2}Q_{2}(2)\longrightarrow R^{1}p_{*}HN^{2}\longrightarrow 0.$ (26) Applying $pr_{2*}$ to (24) and setting $M=pr_{2*}HN^{2}$, we have $0\longrightarrow\mathcal{O}_{F(2,3,4)}(-2)\longrightarrow M\longrightarrow r^{*}\mathcal{O}_{G(2,4)}(1)\longrightarrow 0.$ (27) By the similar argument in Lemma 3.4, $M$ is a nontrivial extension. 
We are going to prove $r_{*}M=p_{*}HN^{2}=0$. Now, $r^{-1}(x)$ is a line for a point $x\in G(2,4)$. The exact sequence $0\longrightarrow\mathcal{I}_{r^{-1}(x)}\longrightarrow\mathcal{O}_{F(2,3,4)}\stackrel{{\scriptstyle h_{x}}}{{\longrightarrow}}\mathcal{O}_{r^{-1}(x)}\longrightarrow 0$ holds. Tensoring with $\mathcal{O}_{F(2,3,4)}(-2)\otimes r^{*}\mathcal{O}_{G(2,4)}(-1)$, we get the exact sequence $0\longrightarrow\mathcal{I}_{r^{-1}(x)}\otimes\mathcal{O}_{F(2,3,4)}(-2)\otimes r^{*}\mathcal{O}_{G(2,4)}(-1)\longrightarrow\mathcal{O}_{F(2,3,4)}(-2)\otimes r^{*}\mathcal{O}_{G(2,4)}(-1)\stackrel{{\scriptstyle h_{x}}}{{\longrightarrow}}\mathcal{O}_{r^{-1}(x)}(-2)\longrightarrow 0$, which induces the long exact sequence $0=H^{0}(r^{-1}(x),\mathcal{O}_{r^{-1}(x)}(-2))\longrightarrow H^{1}(F(2,3,4),\mathcal{I}_{r^{-1}(x)}\otimes\mathcal{O}_{F(2,3,4)}(-2))\longrightarrow\\\ H^{1}(F(2,3,4),\mathcal{O}_{F(2,3,4)}(-2)\otimes r^{*}\mathcal{O}_{G(2,4)}(-1))\stackrel{{\scriptstyle f_{x}}}{{\longrightarrow}}H^{1}(r^{-1}(x),\mathcal{O}_{r^{-1}(x)}(-2)).$ Over $F(2,3,4)$, we have the commutative diagram $\textstyle{0\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\textstyle{\mathcal{O}_{F(2,3,4)}(-2)\otimes r^{*}\mathcal{O}_{G(2,4)}(-1)\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\textstyle{M\otimes r^{*}\mathcal{O}_{G(2,4)}(-1)\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\scriptstyle{id}$$\textstyle{\mathcal{O}_{F(2,3,4)}\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\textstyle{0}$$\textstyle{0\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\textstyle{\mathcal{O}_{r^{-1}(x)}(-2)\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\textstyle{M|_{r^{-1}(x)}\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\textstyle{\mathcal{O}_{r^{-1}(x)}\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\textstyle{0,}$ which induces the commutative diagram $\textstyle{H^{0}(F(2,3,4),\mathcal{O}_{F(2,3,4)})\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\scriptstyle{\delta_{M}}$$\scriptstyle{h^{\prime}}$$\textstyle{H^{1}(F(2,3,4),\mathcal{O}_{F(2,3,4)}(-2)\otimes q^{*}\mathcal{O}_{G(2,4)}(-1))\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\scriptstyle{f_{x}}$$\textstyle{H^{0}(r^{-1}(x),\mathcal{O}_{r^{-1}(x)})\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\scriptstyle{\delta_{M|{r^{-1}(x)}}}$$\textstyle{H^{1}(r^{-1}(x),\mathcal{O}_{r^{-1}(x)}(-2)).}$ Then $f_{x}\circ{\delta_{M}(1)}=\delta_{M|{r^{-1}(x)}}\circ h^{\prime}(1)$ and $h^{\prime}(1)=1,\delta_{M}(1)=t\neq 0$ since (27) is the nontrivial extension. By the similar arguments in Lemma 3.4, we can get that $\delta_{M|{r^{-1}(x)}}\neq 0$. Thus $M|_{r^{-1}(x)}\cong\mathcal{O}_{\mathbb{P}^{1}}(-1)\oplus\mathcal{O}_{\mathbb{P}^{1}}(-1)$ for all $r$-fibers. Since $p=r\circ pr_{2*}$ and $pr_{2*}HN^{2}$ is of splitting type $(-1,-1)$ at each fiber of $r$, we have $p_{*}HN^{2}=R^{1}p_{*}HN^{2}=0$. Therefore $E\cong S^{2}Q_{2}(2)$ from (26). ∎ ###### Proposition 4.7. The uniform vector bundle $E$ corresponding Proposition 4.5 is isomorphic to $Q_{n-d}(u_{2})\oplus\mathcal{O}_{G(d,n)}(u_{1})$ or $Q_{n-d}^{*}(u_{1})\oplus\mathcal{O}_{G(d,n)}(u_{1})$. ###### Proof. For case (i), by the similar argument in Lemma 3.3, $E\cong Q_{n-d}(u_{2})\oplus\mathcal{O}_{G(d,n)}(u_{1})$. 
For case (ii), after replacing $T$ with $T+u_{1}(X_{1}+\cdots+X_{d})$, the HN- filtration gives the exact sequence $0\longrightarrow HN^{1}\longrightarrow p^{*}E(-u_{1})\longrightarrow F\longrightarrow 0$. Since $c_{F}(T)=T+X_{d+1}=c_{\mathscr{H}_{Q_{n-d}}^{*}}(T)$, we have $F\cong\mathscr{H}_{Q_{n-d}}^{*}$. Thus $p^{*}E^{*}(u_{1})$ has $\mathscr{H}_{Q_{n-d}}$ as a subbundle. By Lemma 2.2, $E^{*}(u_{1})$ has $Q_{n-d}$ as a subbundle. So we have the exact sequence $0\longrightarrow Q_{n-d}\longrightarrow E^{*}(u_{1})\longrightarrow N\longrightarrow 0$ for some line bundle $N$. Comparing the splitting type, we know that $N\cong\mathcal{O}_{G(d,n)}$. So $E\cong Q_{n-d}^{*}(u_{1})\oplus\mathcal{O}_{G(d,n)}(u_{1})$. ∎ Data availability No data was used for the research described in the article. Declarations Conflict of interest: The author states that there is no conflict of interest. ## References * [1] E. Ballico. Uniform vector bundles on quadrics. Ann. Univ. Ferrara Sez. VII (N.S.), 27(1):135–146, 1982. * [2] E. Ballico. Uniform vector bundles of rank $(n+1)$ on $\mathbb{P}^{n}$. Tsukuba J. Math., 7(2):215–226, 1983. * [3] R. Du, X. Fang, and Y. Gao. Vector bundles on rational homogeneous spaces. Ann. Mat. Pur. Appl., 200(6):2797–2827, 2021. * [4] R. Du, X. Fang, and Y. Gao. Vector bundles on flag varieties. Math. Nach., 296(2):630–649, 2023. * [5] G. Elencwajg. Les fibrés uniformes de rang 3 sur $\mathbb{P}^{2}(\mathbb{C})$ sont homogénes. Math.Ann, 231:217–227, 1978. * [6] G. Elencwajg, A. Hirschowitz, and M. Schneider. Les fibrés uniformes de rang n sur $\mathbb{P}^{n}(\mathbb{C})$ sont ceux qu’on croit. Progr. Math., 7:37–63, 1980. * [7] P. Ellia. Sur les fibrés uniformes de rang $(n+1)$ sur $\mathbb{P}^{n}$. Mém. Soc. Math. France (N.S.), 7:1–60, 1982. * [8] M. Guyot. Caractérisation par l’uniformité des fibrés universels sur la grassmanienne. Math. Ann., 270(1):47–62, 1985. * [9] R. Hartshorne. Algebraic geometry. Springer-Verlag, New York-Heidelberg, 1977. xvi+496 pp. * [10] Y. Kachi and E. Sato. Segre’s reflexivity and an inductive characterization of hyperquadrics. Mem. Amer. Math. Soc., 160(763), 2002. * [11] R. Muñoz, G. Occhetta, and L. E. Solá Conde. Uniform vector bundles on Fano manifolds and applications. J. Reine Angew. Math. (Crelles Journal), 664:141–162, 2012. * [12] R. Muñoz, G. Occhetta, and L. E. Solá Conde. Splitting conjectures for uniform flag bundles. European Journal of Mathematics, 6:430–452, 2020. * [13] C. Okonek, M. Schneider, and H. Spindler. Vector bundles on complex projective spaces. Birkh$\ddot{a}$user/Springer Basel AG, Basel, 2011. viii+239 pp. * [14] X. Pan. Triviality and split of vector bundles on rationally connected varieties. Math. Res. Lett., 22(2):529–547, 2015. * [15] E. Sato. Uniform vector bundles on a projective space. J. Math. Soc. Japan, 28(1):123–132, 1976. * [16] R. L. E. Schwarzenberger. Vector bundles on the projective plane. Proc. London Math. Soc., 11(3):623–640, 1961. * [17] A. Van de Ven. On uniform vector bundles. Math. Ann., 195:245–248, 1972.
# Boosting galactic outflows with enhanced resolution Martin P. Rey, Harley B. Katz, Alex J. Cameron, Julien Devriendt, Adrianne Slyz Sub-department of Astrophysics, University of Oxford, DWB, Keble Road, Oxford OX1 3RH, UK E-mail<EMAIL_ADDRESS> (Submitted to MNRAS) ###### Abstract We study how better resolving the cooling length of galactic outflows affects their energetics. We perform radiative-hydrodynamical galaxy formation simulations ($18\,\mathrm{pc}$ spatial resolution in the interstellar medium; ISM) of an isolated dwarf galaxy ($M_{\star}=10^{8}\,\textup{M}_{\mathrm{\sun}}$) with the ramses-rtz code, accounting for non-equilibrium cooling and chemistry coupled to radiative transfer. We further implement a new adaptive mesh refinement (AMR) strategy to resolve the local gas cooling length, allowing us to gradually increase the resolution in the stellar-feedback-powered outflows, from $\geq 200\,\mathrm{pc}$ to $18\,\mathrm{pc}$. The propagation of outflows into the inner circumgalactic medium (CGM) is significantly modified by this additional resolution, but the ISM, star formation and feedback remain by and large the same. With increasing resolution in the diffuse gas, the cold ($T<{8}\times 10^{3}\,\mathrm{K}$) phase of the outflow gets incrementally larger, colder and faster-moving, while the hot phase ($T>{8}\times 10^{4}\,\mathrm{K}$) is hotter and more energetic. This leads to greater than five-fold increases in the time-averaged mass, energy and metal outflow rates away from the galaxy ($r=5\,\mathrm{kpc}$) and a $\approx$50 per cent increase in the number of sightlines with $N_{\text{O~{}{vi}}}\geq 10^{13}\,\mathrm{cm}^{-2}$. Such a highly significant boost to the energetics of outflows without new feedback mechanisms or channels strongly motivates future studies quantifying the efficiency with which better-resolved multiphase outflows regulate galactic star formation in a cosmological context. ###### keywords: galaxies: evolution – methods: numerical – hydrodynamics ## 1 Introduction Multiphase galactic outflows are ubiquitous across the galaxy population, with their range of molecular, cold, ionized and hot phases detected across the electromagnetic spectrum (see e.g. Veilleux et al. 2020; Laha et al. 2021 for reviews). By removing mass from the dense ISM and heating the gas surrounding galaxies in the CGM, outflows provide a fundamental mechanism to regulate the star formation of galaxies across cosmic time. But a detailed understanding of how galactic outflows launch, how they propagate through the ISM and how they interact with the CGM remains elusive, and is a key challenge for modern galaxy formation theories (see Somerville & Davé 2015; Naab & Ostriker 2017 for reviews). Low-mass dwarf galaxies provide valuable clues to address these questions, as their shallower gravitational potential wells make them acutely sensitive to their internal stellar processes. In particular, supernovae (SNe) explosions provide an important engine to inject energy and momentum into the ISM and launch powerful galactic winds capable of escaping such low-mass systems (e.g. Chevalier & Clegg 1985; Dekel & Silk 1986; Murray et al. 2005). Despite this well-established picture, however, large uncertainties remain in the quantitative efficiency with which these SN-driven outflows regulate star formation in dwarf galaxies. Already within the ISM, additional stellar processes and feedback channels (e.g.
winds, radiation; see Naab & Ostriker 2017 for a review) can modify the properties of the gas in which SNe explode, leading to qualitative and quantitative changes in the venting behaviour of low-mass galaxies (e.g. Emerick et al. 2020; Agertz et al. 2020; Smith et al. 2021; Fichtner et al. 2022). Moreover, for a given feedback model, the spatial distribution of the energy injection and the clustering in space and time of SNe plays a major role in the ability to build a powerful outflow (e.g. Girichidis et al. 2016; Kim et al. 2017; Fielding et al. 2018; Ohlin et al. 2019), making outflow properties strongly sensitive to the underlying model for where young stars form and spatially distribute (e.g. Andersson et al. 2020; Andersson et al. 2022; Steinwandel et al. 2022b). Further, the amount of feedback energy and momentum available from stellar evolution at the low metallicities of dwarf galaxies remains debated, making the total feedback budget itself an important uncertainty in such systems (e.g. Gutcke et al. 2021; Prgomet et al. 2022). All these factors contribute to the large spectrum of outflow energetics reported in the literature, with the predicted efficiency with which outflows carry mass and energy out of dwarf galaxies spreading over orders of magnitudes at a given stellar mass (see e.g. Muratov et al. 2015; Hu 2019; Emerick et al. 2020; Smith et al. 2021; Pandya et al. 2021; Andersson et al. 2022; Steinwandel et al. 2022a for recent studies). Observational constraints are thus instrumental to help pinpoint the energetics of galactic outflows in low-mass galaxies, but the multiphase nature of such outflows makes this a challenging task. The different cold, warm and hot gas phases are probed through a diversity of observational techniques, each with their own wavelength window and challenges (see Collins & Read 2022 for a review). Moreover, there is no consensus as to the respective importance of each phase in regulating star formation. The cold-to- warm phase ($\approx 10^{4}\,\mathrm{K}$) is expected to dominate the mass budget of the outflow (e.g. Kim & Ostriker 2018; Kim et al. 2020; Fielding & Bryan 2022), but studies targeting this temperature range, i.e. through H$\alpha$ or ultraviolet (UV) absorption lines, have both reported above-unity mass-loading factors (Chisholm et al. 2017; McQuinn et al. 2019; Schroetter et al. 2019) and much lower values (Marasco et al. 2023). Alternatively, the hot outflow phase ($\geq 10^{5}\,\mathrm{K}$) is expected to dominate the energy budget, which could be key to pressurize the surrounding CGM and prevent further star formation by delaying gas inflows onto the galaxy (e.g. Hu 2019; Li & Bryan 2020; Carr et al. 2022). However, this phase is challenging to observe, emitting X-rays that are only accessible for limited samples of low- mass dwarfs (e.g. Heckman et al. 1995; Summers et al. 2003, 2004; Ott et al. 2005; McQuinn et al. 2018). This makes obtaining a holistic overview of outflow thermodynamics difficult observationally, and robust predictions of the multiphase structure of outflows paramount theoretically. Figure 1: Visualization of the impact of better resolving the diffuse gas cooling length (right) compared to a traditional ISM-focussed quasi-Lagrangian strategy (left). 
Panels show edge-on maps of the gas cooling length (top) and its ratio with the spatial resolution of the simulation (bottom), density- weighted along the line of sight at a time of comparable outflow rates ($t=540\,\mathrm{Myr}$, $\approx 0.05\,\textup{M}_{\mathrm{\sun}}\,\text{yr}^{-1}$). When resolution is focussed on the ISM (left), the dense outflowing material (top, orange to white, $l_{\text{cool}}\approx 10-100\,\mathrm{pc}$) mixes rapidly into a diffuse hotter halo (blue, $l_{\text{cool}}\geq 100\,\mathrm{pc}$). As the resolution degrades rapidly in these low densities (e.g. Figure 2), cooling processes are marginally or significantly under-resolved (bottom left, blue). Forcing refinement on $l_{\text{cool}}$ (right, here down to $18\,\mathrm{pc}$) resolves the thermodynamics of the outflowing gas much better, leaving only thin interfaces of under-resolved cooling gas outside the disc plane (bottom right). This improved treatment significantly enhances outflow energetics (Section 3) and modifies their ionic structure (Section 4). Accurately capturing the launching and cooling of outflows within the dense ISM requires sub-pc resolution (top, dark red) posing a distinct, complementary challenge for galaxy formation simulations. Beyond the launching mechanics of the wind and its initial structure within the ISM, a key aspect to achieving accurate predictions is to robustly capture the propagation of the multiphase outflow as it escapes the galaxy. A spectrum of processes can indeed efficiently transfer mass and energy between gas phases during the propagation, in particular shocks (e.g. Klein et al. 1994), hydrodynamical instabilities and turbulence (e.g. Gronke & Oh 2018, 2020; Fielding et al. 2020; Kanjilal et al. 2021; Tan et al. 2021), cooling and heating of the different gas phases (e.g. Field 1965; Begelman & McKee 1990; Sharma et al. 2010, 2012; McCourt et al. 2012; Voit et al. 2015), the interaction with a surrounding hot or already multiphase CGM medium (e.g. Armillotta et al. 2016; Brüggen & Scannapieco 2016; Gronke et al. 2022), or a combination of the above (see Faucher-Giguere & Oh 2023 for a review). The relative efficiency of each of these processes remains debated as they depend on the local gas and radiation conditions, but they introduce characteristic length scales that should be resolved to accurately capture how the original structure of outflows is reprocessed as they expand away from the galaxy. A key common feature for these processes, however, is that their characteristic length-scales are invariably small ($\ll 10\,\mathrm{pc}$), posing a huge numerical challenge to simulations that wish to model entire galaxies ($\geq\mathrm{kpc}$). This issue is further compounded by the Lagrangian behaviour of most galaxy simulations, focussing the computational effort and resolution where gas is dense (i.e. in the ISM) but quickly degrading it in the diffuse gas to speed up the computation. Density and temperature contrasts in the launched outflow close to the galaxy are then numerically smoothed out as it expands away and the resolution worsens. This leads to artificial over-mixing of the gas phases and the suppression of cooling instabilities which we illustrate in Figure 1 (see also the discussion in Hummels et al. 2019). Figure 1 shows the gas cooling length density-weighted along the line of sight in the edge-on isolated dwarf galaxy used in this study. 
The cooling length is $\leq 100\,\mathrm{pc}$ in the outflowing gas escaping the galaxy (top, blue to white to red), but is marginally resolved or under-resolved in a traditional galaxy formation simulation relying on a quasi-Lagrangian resolution strategy (bottom left, blue). The multiphase structure of simulated galactic outflows is thus likely to be under-resolved and numerically suppressed, reminiscent of the numerical limitations in modelling the larger-scale diffuse CGM (e.g. Peeples et al. 2019; Hummels et al. 2019; Suresh et al. 2019; van de Voort et al. 2019). These challenges have motivated subgrid implementations and new hydrodynamical methods to model multiphase galactic winds (e.g. Huang et al. 2020, 2022; Smith et al. 2023; Weinberger & Hernquist 2023), but a detailed understanding of how the energetics and observables of outflows depend on resolution remains lacking. Furthermore, outflows are highly dynamic in nature, propagating at high velocities ($\gtrapprox 100\,\text{km}\,\text{s}^{-1}$) into highly stratified media where the density and strength of the radiation field steadily decrease away from the galaxy. These quickly-evolving physical conditions and their low densities make the cooling, chemical and ionic composition of outflows particularly prone to non-equilibrium effects during their propagation, as ionic recombination timescales can significantly differ from the gas cooling time in low-density environments (e.g. Oppenheimer & Schaye 2013, see also Sarkar et al. 2022 for further radiative transfer effects in outflows). To gain a better understanding of the importance of these physical processes, we create a new suite of galaxy-formation simulations aimed at providing a more robust modelling of galactic outflows. We perform isolated radiation-hydrodynamical simulations of a dwarf galaxy with the AMR code ramses-rtz (Katz 2022). This allows us to (i) obtain an accurate account of non-equilibrium effects by solving the non-equilibrium chemistry of $\geq$60 ionic species coupled on-the-fly with the spatially- and time-varying radiation field, while (ii) following the development of galaxy-scale outflows self-consistently powered by stellar feedback within the disc. We further complement this setup by implementing a new AMR refinement strategy in ramses-rtz that explicitly aims to resolve the local gas cooling length (see also Simons et al. 2020 for a similar approach in the CGM), enabling us to better resolve instabilities and mass transfer between gas phases during the outflow propagation (Figure 1, right column). In this paper, we focus on understanding how gradually improving the resolution in galactic winds impacts their energetics, and will dedicate companion papers to quantifying how this improved treatment affects their emission line properties (see also Cameron et al. 2022 for a similar approach with ramses-rtz). We describe the simulation suite and our numerical methods in Section 2. We then show in Section 3.1 that improving outflow resolution enhances the prevalence of the cold and hot gas phases, both of which exhibit systematically increasing energetics (Section 3.2). At fixed feedback budget, this results in a greater than five-fold increase in mass, energy and metal loading factors away from the galaxy (Section 3.3) and a doubling of high-ionization (e.g. O vi) covering fractions (Section 4). We discuss our results and new approach in Section 5 and conclude in Section 6.
## 2 Numerical setup We present a suite of radiative hydrodynamical simulations of isolated galaxies, all performed with the same physical modelling but gradually increasing off-the-plane spatial resolution to resolve thermal instabilities in the diffuse outflowing gas. We summarize our numerical galaxy formation model in Section 2.1, in particular how we account for non-equilibrium, ion- by-ion cooling coupled to on-the-fly radiative transfer (see Katz 2022 for a more extensive description of the same setup). Section 2.2 presents the implementation of the new refinement strategy targeting the gas cooling length and Section 2.3 the specific galaxy and simulations of this work. Figure 2: Edge-on (top) and face-on (bottom) AMR spatial resolution across a thin slice in the image plane, for the four galaxies with increasingly resolved $l_{\text{cool}}$ (left to right) at a time of comparable outflow rates ($t=540\,\mathrm{Myr}$, $\approx 0.05\,\textup{M}_{\mathrm{\sun}}\,\text{yr}^{-1}$). The traditional refinement strategy (left) has a strongly layered vertical structure (top left) which under-resolves cooling instabilities (Figure 1), transitioning quickly from the ISM resolution ($18\,\mathrm{pc}$, deep blue) to $\geq 100\,\mathrm{pc}$ (white). By contrast, resolution with our new scheme targeting the cooling length naturally tracks the interfaces of expanding superbubbles off the disc plane, following their irregular spatial structure and preventing them from numerically over-mixing. The structure of the dense ISM (bottom, deep blue) only gets visually modified when the additional refinement in the diffuse gas reaches a maximal resolution equal to that in the ISM (right-most). Nonetheless, star-formation conditions between all runs remain consistent with the fiducial setup, even for our most resolved cases (Appendix A). ### 2.1 Chemistry, cooling and galaxy formation physics We perform radiative hydrodynamical numerical simulations using the adaptive- mesh refinement code ramses-rtz (Katz 2022), built upon the ramses and ramses- rt software (Teyssier 2002; Rosdahl & Teyssier 2015). We solve the gas dynamics using a HLLC Riemann solver (Toro et al. 1994), assuming an ideal gas equation of state with adiabatic index of 5/3, while collisionless dynamics of stars and dark matter particles are computed by means of an adaptive particle- mesh method (Guillet & Teyssier 2011). A key novelty in our simulations is the treatment of ion non-equilibrium thermochemistry, coupled on-the-fly to the radiative transfer. We solve the dynamics of the radiation field at every timestep using the M1 method (Rosdahl et al. 2013; Rosdahl & Teyssier 2015), discretizing its spectrum in eight energy bins from the infrared to the UV (Kimm et al. 2017). A uniform UV background also permeates the simulation box (Haardt & Madau 2012), with gas above $n_{\mathrm{H}}\geq 10^{2}\,\mathrm{cm}^{-3}$ self-shielding exponentially (Aubert & Teyssier 2010; Rosdahl & Blaizot 2012). We then track the non-equilibrium evolution of 11 atomic species and 64 individual ions (H$~{}\textsc{i}-\textsc{ii}$, He$~{}\textsc{i}-\textsc{iii}$, C$~{}\textsc{i}-\textsc{vii}$, N$~{}\textsc{i}-\textsc{viii}$, O$~{}\textsc{i}-\textsc{ix}$, Ne$~{}\textsc{i}-\textsc{vii}$, Mg$~{}\textsc{i}-\textsc{vii}$, Si$~{}\textsc{i}-\textsc{vii}$, S$~{}\textsc{i}-\textsc{vii}$, Fe$~{}\textsc{i}-\textsc{vii}$ and Ca i) and the formation and evolution of molecular hydrogen (Katz et al. 2017). 
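To make the non-equilibrium treatment concrete, the following is a minimal, hydrogen-only sketch of how an ionization fraction can be advanced out of equilibrium under photo-ionization and case-B recombination. It is purely illustrative: the function names and the approximate power-law recombination fit are assumptions of this sketch, and the actual ramses-rtz network couples many more species and processes to the local, time-varying radiation field.

```python
import numpy as np

def alpha_B(T):
    """Approximate case-B H recombination coefficient [cm^3 s^-1] (power-law fit around 10^4 K)."""
    return 2.59e-13 * (T / 1.0e4) ** -0.7

def advance_x_HII(x_HII, n_H, T, Gamma_phot, dt, n_sub=100):
    """Sub-cycled explicit update of the H ionization fraction:
    dx/dt = Gamma_phot * (1 - x) - alpha_B(T) * n_e * x, with n_e = x * n_H.
    Units: n_H [cm^-3], T [K], Gamma_phot [s^-1], dt [s]."""
    dt_sub = dt / n_sub
    for _ in range(n_sub):
        n_e = x_HII * n_H
        dxdt = Gamma_phot * (1.0 - x_HII) - alpha_B(T) * n_e * x_HII
        x_HII = np.clip(x_HII + dxdt * dt_sub, 0.0, 1.0)
    return x_HII
```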
Number densities of individual ions and species are computed at every simulation timestep by solving a network of coupled equations accounting for recombination, collisional ionization, charge exchange and photo-ionization extracted from the local, time-varying radiation field (Katz 2022). Having access to individual number densities of ions and the radiation field further allows us to self-consistently compute the out-of-equilibrium cooling rate for the gas. At low temperatures ($T\leq 10^{4}$ K), we explicitly compute the level populations and the fine-structure emission of O i, O iii, C i, C ii, Si i, Si ii, S i, Fe i, Fe ii, Ne ii which dominate the cooling rate at the low metallicities we consider in this work (e.g. Glover & Jappsen 2007). We use tabulated non-equilibrium ion-by-ion cooling tables for our UV background at high temperatures ($T\geq 10^{4}$ K; Oppenheimer & Schaye 2013) and further account for non-equilibrium atomic cooling processes of H and He (Rosdahl et al. 2013) and molecular cooling of $\mathrm{H}_{\mathrm{2}}$ (Katz et al. 2017). This thermodynamical setup is complemented by an extensive galaxy formation model, including star formation and stellar feedback processes. Stars form in a molecular-cloud-like environment identified using a thermo-turbulent star- formation criterion (Federrath & Klessen 2012; Kimm et al. 2017) and with a Kroupa (2001) initial mass function. Stellar particles then re-inject energy, mass, momentum, metals and radiation locally, following the mechanical feedback prescription of Kimm et al. (2015) for SNe explosions and the model of Agertz et al. (2013) for stellar winds. Radiation is injected in gas cells surrounding stellar particles assuming a single stellar population of the age and metallicity of the particle and a BPASSv2.2.1 spectral energy distribution (Stanway et al. 2016). ### 2.2 Refinement strategy on the gas cooling length All our adaptive-mesh simulations use a traditional quasi-Lagrangian refinement strategy targeting the dense ISM. Specifically, gas cells are split in 8 if their dark matter or baryonic mass reaches 8 times its initial value, or if the local thermal Jeans length is resolved by less than four cells. Refinement is allowed up to the maximum resolution of the simulation of $18$ pc. Figure 2 illustrates the structure of this refinement strategy (left-most column), showing an edge-on (top) and face-on (bottom) map of the spatial resolution at a time of comparable outflow rates between the simulations ($t=540\,\mathrm{Myr}$, $\approx 0.05\,\textup{M}_{\mathrm{\sun}}\,\text{yr}^{-1}$). With the quasi-Lagrangian approach, resolution is near its allowed maximum within the disc ($18\,\mathrm{pc}$, deep blue) but drops rapidly in the vertical direction ($\geq 100\,\mathrm{pc}$, white) as the density decreases. However, the cooling length of gas escaping the galaxy can be much smaller and is either marginally or under-resolved by the quickly degrading resolution (Figure 1). To remedy to this issue, we store the net cooling rate $\Lambda_{\text{net}}$ obtained at each simulation timestep by the non- equilibrium solver and compute $l_{\text{cool}}=\sqrt{\frac{P_{\text{th}}}{\rho}}\times\frac{1}{\Lambda_{\text{net}}}\,$ (1) for each gas cell, where $P_{\text{th}}$ and $\rho$ are the thermal gas pressure and density used to define the isothermal sound speed. $\Lambda$ and $l_{\text{cool}}$ can be positive or negative if gas is cooling or heating. 
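As a minimal sketch, the per-cell evaluation of Eq. (1) and the cell-flagging check described in the following paragraph could look as follows, assuming consistent units so that $c_s/\Lambda_{\text{net}}$ is a length and $\Lambda_{\text{net}}\neq 0$; the array and function names are illustrative, not the ramses-rtz implementation.

```python
import numpy as np

def cooling_length(P_th, rho, Lambda_net):
    """Eq. (1): isothermal sound speed divided by the net cooling rate.
    Inherits the sign of Lambda_net (positive = cooling, negative = heating)."""
    return np.sqrt(P_th / rho) / Lambda_net

def needs_cooling_refinement(P_th, rho, Lambda_net, dx, n_cells=8):
    """Flag cells whose (positive) cooling length is unresolved,
    i.e. l_cool / (n_cells * dx) < 1, for splitting into 8 children."""
    l_cool = cooling_length(P_th, rho, Lambda_net)
    return (l_cool > 0.0) & (l_cool < n_cells * dx)
```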
We then implement an additional refinement criterion, splitting gas cells in 8 where $l_{\text{cool}}$ is positive (i.e. only cooling gas) and unresolved by at least $l_{\text{cool}}/(N_{\text{cells}}\,\Delta x)<1$, where $\Delta x$ is the spatial resolution at a given level of the mesh hierarchy. We allow $N_{\text{cells}}$ to take different values at different levels along the mesh hierarchy. The right columns of Figure 2 show the resulting spatial resolutions with this new strategy, resolving $l_{\text{cool}}$ with $N_{\text{cells}}=8$ down to 72, 36 and 18 pc (second, third and fourth columns respectively). The structure of the galactic disc (bottom) and its dense ISM (deep blue) where star-formation occurs is visually similar in all cases, until we reach a cooling length refinement matching the maximal ISM resolution (right-most column). Even in this case however, we show in Appendix A that our new refinement scheme adds resolution in the diffuse gas around the disc, but does not impact the ISM properties, the location of star formation and the feedback conditions. Rather, targeting $l_{\text{cool}}$ introduces large amount of additional off- the-disc-plane resolution that tracks discrete plumes and shells of outflowing gas as they expand and mix with the inner CGM (top panels of Figure 2). The filamentary structure extending off the galaxy is visibly more extended from left to right, but remains contained by avoiding unnecessary resolution in the hot halo where densities are lower and the cooling lengths larger (white). Since the ISM is already at the maximum resolution due to the quasi-Lagrangian scheme, the additional refinement criterion on the cooling length allows us to allocate additional resources to the diffuse gas in a controlled and gradual way. This provides us with the ability to cleanly and conveniently decouple the resolution between dense and diffuse gas, and to isolate their relative impact on outflow properties in Section 3. Furthermore, we also verified that the resolution structure significantly varies with time and naturally follows to the star formation and outflow activity of the galaxy. This provides us with an efficient refinement strategy that focusses on the time-evolving physical quantity of interest rather than a pre-defined, arbitrary volume, and its adaptive nature also enables us to tailor the computational load to the application at hand. One can either model isolated outflows as resolved as the ISM (e.g. our most-resolved case in this study) or use more gradual strategies in the diffuse gas to limit the computational expense (e.g. our intermediate setups). We discuss the computational costs and pros and cons of different foreseen simulation strategies in Section 5.1. ### 2.3 Simulation suite All simulations in this work (Table 1) follow the evolution of the same, isolated dwarf galaxy, extensively described as G8 in Rosdahl et al. (2015); Kimm et al. (2018); Katz (2022). Briefly, we set up $2\times 10^{5}$ particles to sample a dark matter halo of $10^{10}\,\textup{M}_{\mathrm{\sun}}$, an initial disc and bulge (masses of ${3}\times 10^{8}\,\textup{M}_{\mathrm{\sun}}$ and ${3}\times 10^{7}\,\textup{M}_{\mathrm{\sun}}$, respectively) leading to a maximum circular velocity of $\approx 30\,\text{km}\,\text{s}^{-1}$. Disc gas is initialized with an exponential density profile and a weak metallicity gradient peaking at $0.1\,\textup{Z}_{\mathrm{\sun}}$, and is surrounded by a homogeneous ($\rho=10^{-6}\,\mathrm{cm}^{-3}$) hot, metal-free halo. 
We initialize all ions in their ground state in the disc and in their most ionized state in the hot halo. We perform four main simulations of this galaxy: the fiducial setup with the quasi-Lagrangian strategy only, plus three re-simulations progressively increasing the resolution target of the cooling length to 72, 36 and 18 pc, the last matching the ISM resolution. We follow the evolution of each galaxy for $750\,\mathrm{Myr}$ (i.e. $\sim 5$ disc crossing times) and save snapshots at least every 3 Myr. Figure 3 shows the evolution of the stellar mass formed over the course of each simulation (top), their star-formation rates averaged over 10 Myr (middle) and mass outflow rates close to the disc of the galaxy (bottom, measured in a 0.5 kpc-thick slab placed at $|z|=1\,\mathrm{kpc}$). Figure 3: Stellar mass growth as a function of time (top), star formation rates averaged over 10 Myr (middle) and mass outflow rates at $|z|=1\,\mathrm{kpc}$ (bottom) for our four galaxies with increasingly resolved outflows. Despite the change in numerical setup, all galaxies form a similar amount of stars, ensuring that the total feedback budget is approximately the same across all simulations. Better resolving the cooling length also yields comparable average star formation rates and mass outflow rates, leading to similar mass loading factors close to the disc. But the improved treatment significantly affects the further propagation of the outflow into the CGM and its resulting multiphase structure (Figures 4 and 5) and energetics (Figures 6, 7 and 8). We discard the first 200 Myr of each simulation to allow the artificial transient phase induced by our initial conditions to fully dissipate. Our initial conditions drive an initial starburst which recedes after $\approx 100\,\mathrm{Myr}$, and it takes an additional $\approx 100\,\mathrm{Myr}$ for the galactic disc and outflow to settle after a large enhancement in mass outflow rate (grey shaded region in Figure 3). We thus start our analysis at $200\,\mathrm{Myr}$, ensuring at least 130 snapshots of analysis when deriving time-averaged properties (Section 3). We verified that our conclusions are unchanged if starting at $250\,\mathrm{Myr}$ or $300\,\mathrm{Myr}$. Table 1: Summary of the simulations presented in this work (first column). We re-simulate the same initial conditions with a maximum resolution in the ISM of 18 pc, refining the outflows to an incrementally higher spatial resolution (second column). Two of our setups are re-simulated three times with different random seeds (first and third lines) to assess stochasticity in our results (Appendix B). We also provide a controlled, time-limited test of the computational cost of each setup (third column) discussed in Section 5.1.

Simulation | Maximum cooling length target (pc) | Average timestep cost (CPU s / simulated Myr)
---|---|---
ISM 18pc (x3) | N/A | $1256^{+220}_{-247}$
\+ Outflow 72pc | 72 | $1282^{+183}_{-242}$
\+ Outflow 36pc (x3) | 36 | $1643^{+233}_{-267}$
\+ Outflow 18pc | 18 | $4109^{+483}_{-473}$

All simulations form a similar amount of stellar mass, ending up within 15 per cent of each other. This is key for comparing their outflows, as it guarantees that the overall energy, momentum and metal budget from stellar feedback is nearly unchanged between different resolution runs. It also provides further evidence that the ISM is not strongly impacted by the additional resolution in the diffuse gas (Appendix A).
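For reference, the mass outflow rates of Figure 3 (bottom) are measured through a 0.5 kpc-thick slab at $|z|=1\,\mathrm{kpc}$. A minimal sketch of one common estimator of such a rate, $\dot{M}_{\text{out}}=\sum_i m_i |v_{z,i}|/\Delta z$ over outflowing cells in the slab, is shown below; the array names and unit choices are assumptions of this sketch, not necessarily the exact measurement used here.

```python
import numpy as np

KMS_TO_KPC_PER_MYR = 1.023e-3  # 1 km/s expressed in kpc/Myr

def mass_outflow_rate(m, z, v_z, z_slab=1.0, dz=0.5):
    """Instantaneous outflow rate [Msun/Myr] through slabs centred at |z| = z_slab [kpc],
    keeping only outflowing cells (sgn(v_z) = sgn(z)); m in Msun, v_z in km/s, dz in kpc."""
    in_slab = np.abs(np.abs(z) - z_slab) < 0.5 * dz
    outflowing = np.sign(v_z) == np.sign(z)
    sel = in_slab & outflowing
    return np.sum(m[sel] * np.abs(v_z[sel])) * KMS_TO_KPC_PER_MYR / dz
```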
However, the ability to drive galactic outflows also depends on the coupling between feedback and the surrounding gas, which in turn depends on the clustering of SNe and the density of the medium in which they explode. This is subject to stochasticity due to the probabilistic nature of our modelling of star formation (see Genel et al. 2019; Keller et al. 2019 for further discussion). To quantify the magnitude of this scatter, we perform three re- simulations of our reference galaxy and three with the cooling length resolved down to 36 pc, solely varying the random seed for star formation in each of them. We show in Appendix A that star formation and feedback conditions within the ISM are comparable between both stochastic resimulations of a given setup and different resolutions of the cooling length. We further show in Appendix B that run-to-run stochasticity leads to different but statistically compatible outflow rates close to the disc, and that the induced scatter remains subdominant compared to differences in outflow properties and rates resulting from an increase in resolution when measured further away from the galaxy (Section 3). This therefore confirms the causal association of differences in outflows with a more accurate treatment of their thermodynamics. ## 3 Energetics of better-resolved outflows Figure 4: Time-averaged, mass-weighted distributions of density (top) and temperature (bottom) of the outflowing gas ($0.5\leq z\leq 5\,\mathrm{kpc}$; $sgn(v_{z})=sgn(z)$) across our suite of simulations. A better resolved cooling length (purple to green) leads to longer tails towards both denser and more diffuse gas, and both colder and hotter gas. This points towards an increasingly multiphase structure of galactic outflows with increasing resolution (Figure 5). Figure 5: Time-averaged, mass-weighted temperature-density (top) and pressure- density (bottom) distributions of the outflowing gas when increasingly resolving the cooling length (left to right). In all cases, most gas mass is warm and in photo-ionization equilibrium with the UV background ($\approx 10^{4}\,\mathrm{K}$). However, an increasing resolution yields a better- sampled diffuse phase with an increased scatter in temperatures and pressures at a given density, and allows denser and colder gas at high densities. This increasingly multiphase structure in turn affects the kinematics (Figure 6) and energetics of the outflows (Figure 7). ### 3.1 Increasingly multiphase outflows We now turn to establishing how a better-resolved cooling length affects the properties of galactic outflows. To this end, we extract a cylinder of gas above and below the disc plane ($R\leq 5.0\,\mathrm{kpc}$ and $0.5\leq|z|\leq 5\,\mathrm{kpc}$111We verified that using $|z|>1.0\,\mathrm{kpc}$, $|z|\leq 4\,\mathrm{kpc}$, $|z|\leq 7\,\mathrm{kpc}$ or $R\leq 4.0\,\mathrm{kpc}$ does not modify our results.) and select outflowing gas with $sgn(v_{z})=sgn(z)$. At each simulation snapshot, we then compute the mass-weighted temperature, density and pressure distributions, and their two-dimensional combinations. We show the normalized stacks over their respective $\geq 500\,\mathrm{Myr}$ of evolution in Figure 4 and Figure 5. Despite the change in resolution, all runs have similar total masses of outflowing gas (varying by less than 20 per cent, top of each panel in Figure 5), ensuring that changes in the PDFs are largely unaffected by changes in overall normalizations. 
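A minimal sketch of this outflow selection and of the mass-weighted temperature distributions stacked in Figure 4 is given below (positions in kpc, outflowing gas defined by $sgn(v_z)=sgn(z)$); the array names and bin choices are illustrative assumptions.

```python
import numpy as np

def outflow_mask(x, y, z, v_z, R_max=5.0, z_min=0.5, z_max=5.0):
    """Outflowing gas inside the analysis cylinder (kpc units)."""
    R = np.hypot(x, y)
    return ((R <= R_max) & (np.abs(z) >= z_min) & (np.abs(z) <= z_max)
            & (np.sign(v_z) == np.sign(z)))

def mass_weighted_temperature_pdf(T, m, sel, bins=np.logspace(2.0, 7.0, 101)):
    """Normalized mass-weighted temperature distribution of the selected gas."""
    hist, edges = np.histogram(T[sel], bins=bins, weights=m[sel])
    return hist / hist.sum(), edges
```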
Starting from the one-dimensional distributions for our fiducial case (Figure 4, purple), we see that the majority of the outflowing gas mass is at low densities ($\sim 10^{-3}-10^{-1}\,\mathrm{cm}^{-3}$) and warm temperatures ($\sim 10^{4}\,\mathrm{K}$), as expected from diffuse gas in photo-ionization equilibrium with the surrounding UV background. A subdominant fraction of the gas is dense enough to self-shield efficiently against the UV background ($\rho\geq 10^{-1}\,\mathrm{cm}^{-3}$ in our model) and cool below $10^{4}\,\mathrm{K}$. SN-heated gas is also visible as a long tail towards high temperatures ($\geq 10^{5}\,\mathrm{K}$), while the enhancement of gas with $T\approx{8}\times 10^{4}\,\mathrm{K}$ maps to a local minimum of the cooling function at the low metallicities considered in this work ($\leq 0.3\,\textup{Z}_{\mathrm{\sun}}$; e.g. Bialy & Sternberg 2019; Katz 2022; Kim et al. 2023). Improving the numerical treatment of cooling instabilities (purple to green) does not modify this broad picture but introduces systematic trends for the tails of these distributions. The high-density and low-temperature tails incrementally reach higher densities and lower temperatures, confirming the expectation that condensation and cooling in the diffuse gas are numerically suppressed in our fiducial setup (Figure 1). However, the improving resolution also generates a longer tail towards higher temperatures and lower densities, highlighting that the hot, diffuse phase is becoming increasingly significant. Since the total gas mass is roughly conserved between simulations, this is mainly modifying the relative fractions between gas phases, and thus the multiphase structure of our galactic outflows. To better visualize these changes, Figure 5 shows the temperature-density (top) and pressure-density (bottom) two-dimensional phase diagrams of the same gas as in Figure 4, recovering the broad features identified in the one- dimensional PDFs (top, annotated). With increasing resolution (towards the right), the low-density gas ($\rho\leq 10^{-2}\,\mathrm{cm}^{-3}$) is better sampled, generating a larger and smoother scatter in pressure and temperatures for a given density. This low- density gas also has a visibly cooler tail (top right), which we check arises from better-resolved ionized cavities following SNe explosions. These cavities cool from adiabatic expansion in a regime of inefficient recombination, preventing rapid heating from the UV background (see also McQuinn 2016 for a similar argument driving the temperature-density relation of the IGM). The dense gas ($\rho\geq 10^{-1}\,\mathrm{cm}^{-3}$) cools more efficiently below $10^{4}\,\mathrm{K}$ with increasing resolution (top right), generating a more extended scatter in temperatures and pressures and a more pronounced ‘knee’ in an otherwise tightly linear pressure track (bottom right). Such inflection point in the pressure-density diagram and extended scatter in gas pressures at a given density are key indicators of multiphase structure, best theorized in the ISM (e.g. McKee & Ostriker 1977). Our results corroborate the physical picture in which the cooling length of diffuse outflowing gas from galaxies is under-resolved with traditional simulations (Figure 1 and Section 1), stemming the development of multiphase galactic outflows. 
Improving resolution in the outflowing gas significantly helps alleviate this issue, yielding a more prominent cold and dense phase, but also a more prominent hotter and diffuse phase as we avoid over-mixing and over-cooling SN-driven expanding superbubbles. Both of these components are particularly important for galactic-scale feedback, as the cold-to-warm phase is expected to be significantly mass loaded, while the hot phase can deposit energy in the surrounding diffuse CGM. We quantify the change in energetics of each of these gas phases in the next Section to gain better insights into these aspects. Before turning to this, we briefly highlight that the one- or two-dimensional PDFs do not show signs of numerical convergence with increasing resolution, and that the ordering of simulations with resolution is not linear (e.g. gas reaching colder temperatures despite a factor-of-two worse resolution; blue versus green in Figure 4). This hints that, despite improving the situation, we are yet to fully resolve the multiphase structure of the outflowing gas and that evolution at a given resolution possesses stochasticity due to the specific star formation history at hand. We quantify this stochasticity in Appendix B, showing that it is a sub-dominant effect compared to the broad trends we identify when increasing resolution, and discuss remaining numerical uncertainties and convergence in Section 5.2. Figure 6: Time-averaged, mass-weighted vertical velocity distribution of the outflowing gas with increasingly resolved cooling length. The whole gas distribution (solid lines) shifts towards larger velocities as we better resolve its multiphase structure (purple to green). The cold phase (dashed) becomes more prominent and faster moving with increasing resolution, augmenting its fraction reaching the escape velocity of the system (grey band). The hot phase (dotted) also exhibits a tail of ever-faster gas that contributes to increasing the outflow energetics, better visualized when weighting gas by energy (Figure 7). Figure 7: Time-averaged temperature- velocity distributions with increasing resolution (left to right), weighted by mass (top) and specific energy (bottom). With increasing resolution, more mass is travelling at velocities greater than the escape velocity of the system (grey in top panels). Better-resolved outflows are also systematically more energetic (bottom), extending towards hotter temperatures and faster moving velocities (top-right corner of each panel) along a track of near-constant Mach number (grey dashed). They also show increasingly long tails of hot temperatures at each given velocity, indicating more energetic thermalization (towards the top-left corner of each panel). These increases in outflow energetics translate into large boosts in outflow loading factors (Figure 8). ### 3.2 Kinematics of increasingly multiphase outflows Figure 6 shows the mass-weighted PDF of vertical velocities, $v_{z}$, for the same outflowing gas as in Section 3.1 split between cold ($\leq{8}\times 10^{3}\,\mathrm{K}$) and hot ($\geq{8}\times 10^{4}\,\mathrm{K}$) gas (dashed and dotted, respectively). Focussing first on the total gas (solid lines), all simulations are dominated by slow-moving ($v_{z}\approx 10\,\text{km}\,\text{s}^{-1}$) gas that is unlikely to escape the gravitational potential of the galaxy ($v_{z}\geq v_{\text{circ}}$ shown as grey band). 
However, as we better resolve the cooling length, the distribution shifts towards larger velocities (median going from 4.6 to 5.8 to 6.7 to 7.3 $\text{km}\,\text{s}^{-1}$ from purple to green, respectively), showing that the overall kinetic energy of the outflowing gas is systematically increasing. Splitting the gas into different temperature bins, we recover an increasing amount of cold gas at higher resolutions as discussed in Section 3.1 (dashed lines have higher normalizations), but this cold gas is also faster moving (medians going from $v_{z}=4.6$ to 6.6 to 8.5 to 7.3 $\text{km}\,\text{s}^{-1}$ with increasing resolution), which will translate into an increased mass-loading (Section 3.3). The enhanced amount of hot gas (dotted) is harder to interpret from the mass-weighted PDF, as it tracks the low-density, volume-filling phase which represents a small fraction of the total outflowing mass. To better visualize how outflow kinematics correlate with the gas thermodynamics, we show in Figure 7 the two-dimensional distribution of temperature and vertical velocities, weighted by gas mass and gas energy, $E_{\text{gas}}=m_{\text{gas}}\left(\tfrac{1}{2}v^{2}+\frac{P}{\rho(\gamma-1)}\right)$, in the top and bottom panels, respectively. Again, different panels show time stacks spanning the entire galaxy evolution as one increases the resolution of the cooling length (left to right). Focussing first on the mass-weighted diagrams (top panels), we recover that most of the outflow mass is warm ($\approx 10^{4}\,\mathrm{K}$) and travelling at relatively low velocities ($v_{z}\leq 10\,\text{km}\,\text{s}^{-1}$). The increased fraction of cold gas reaching the circular velocity noted in Figure 6 is visible as a faint cloud extending under the warm gas sequence that becomes more prominent with increasing resolution (e.g. third top panel, to the right of the dotted line). The hotter phase is better sampled with increasing resolution but does not significantly evolve when weighting by mass, consistent with Figure 6. Weighting by energy (bottom panels), however, puts more emphasis on this hotter, energy-carrying outflow phase. Focussing first on the fiducial case (left), this hotter phase materializes as a track extending towards high temperatures and velocities, broadly following an iso-Mach slope (dashed lines) arising from the superposition of the many SN-driven superbubbles expanding out of the disc plane. This gas is strongly supersonic (see also Kim et al. 2020; Andersson et al. 2022; Steinwandel et al. 2022a), and dominated by its kinetic energy (solid line shows the equality between gas kinetic and internal energy). This likely reflects the initial (kinetic) injection mechanism of our SN feedback scheme (Kimm et al. 2015), which then thermalizes through shock-heating (towards the top-left corner of each panel). The main trend emerging from the energy-weighted diagrams of Figure 7 is that better resolving the cooling length both extends the hot, energetic phase towards faster velocities and higher temperatures (respective upper-right corners of each panel), and increases the scatter towards higher temperatures at a given vertical velocity. The hot outflowing phase is thus intrinsically more energetic, but also more efficiently thermalized to higher temperatures at a given velocity, enhancing its ability to pressurize the surrounding medium.
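As a minimal illustration of the energy weight defined above (a sketch with assumed variable names and consistent units, not the analysis code used for Figure 7), the per-cell weight can be computed as:

```python
import numpy as np

GAMMA = 5.0 / 3.0  # adiabatic index, assumed for a monatomic ideal gas

def gas_energy(mass, v, P, rho):
    """E_gas = m (v^2/2 + P/[rho (gamma - 1)]): kinetic plus thermal energy per cell."""
    return mass * (0.5 * v**2 + P / (rho * (GAMMA - 1.0)))
```

Passing this array instead of the cell mass as the histogram weights switches between the mass-weighted (top) and energy-weighted (bottom) panels.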
Our results highlight two key points as the multiphase structure of our galactic outflows is better resolved and the kinematics of their gas phases is more accurately captured: (i) the cold-to-warm phase dominating the mass budget is systematically colder and faster moving, reaching velocities ever closer to the circular velocity; and (ii) the hot phase dominating the energy budget is more energetic and efficient at thermalizing energy at a given velocity. These increased energetics are likely to significantly impact outflow loading factors, to which we now turn. ### 3.3 Consequences on outflow loading factors Figure 8: Mass, energy and metal loading factors (top, middle and bottom rows, respectively) over our galaxies’ evolution (violins show respective time distributions) measured at $|z|=1\,\mathrm{kpc}$ (left) and $r=5\,\mathrm{kpc}$ (right). Better resolving the cooling length (from the low resolution fiducial case in purple to the highest resolution case in green) marginally modifies outflow conditions close to their launching region in the disc (left column), with median loading factors (diamonds) increasing with resolution but staying well within the 16-84 confidence interval (black line) of the fiducial case (purple). By contrast, all loading factors when reaching the inner CGM (right column) show a sharp increase as one better resolves the multiphase structure of the outflow. To quantify outflow loading factors, we compute for each simulation snapshot the mass, energy and metal loading factors defined as $\begin{split}\eta_{M}&=\frac{\dot{M}_{\text{out}}}{\mathrm{SFR}_{\text{10 Myr}}},\\\ \eta_{E}&=\frac{\dot{E}_{\text{out}}}{\mathrm{SFR}_{\text{10 Myr}}\,p_{\text{SN}}},\ \text{and}\\\ \eta_{Z}&=\frac{\dot{Z}_{\text{out}}}{\ \mathrm{SFR}_{\text{10 Myr}}\,Z_{\text{disc}}},\end{split}$ (2) where $p_{\text{SN}}={4.89}\times 10^{5}\,\text{km}^{2}\,\text{s}^{-2}$ is the average specific energy injected by SNe for a Kroupa (2002) mass function (Kim & Ostriker 2018) and $Z_{\text{disc}}$ is the average mass-weighted metal fraction within the disc. We define the outflow rates first through a thin 1-kpc slab above and below the disc (see Section 2.3) to quantify differences in the launching of the wind close to the disc. We then repeat the same measure through a 2-kpc thick radial shell centred at 5 kpc to quantify differences in the propagation up to the inner CGM interface ($\approx 0.2\,r_{200}$). Choosing these spatial locations is arbitrary and will affect quantitative determinations of the outflow rates (e.g. Muratov et al. 2015). Our choices guarantee a self-consistent comparison between simulations and are sufficient to establish relative trends, which we verified are unchanged if measuring outflow rates at $|z|=0.5\,\mathrm{kpc}$ or $r=10\,\mathrm{kpc}$. In future work, rather than relying on these definitions, we will use mock- observations of our simulations to extract synthetic spectra and derive galaxy outflow rates using observational diagnoses. Figure 8 shows the mass, energy and metal outflow rates (top, middle and bottom panels respectively) at 1 and 5 kpc (left and right panels respectively). Violin plots showcase distributions across the evolution of each simulation, highlighting their medians (diamonds) and 16-84 confidence intervals (black lines). We remove negative outflow rates (i.e. 
inflows) as they bias low the median and confidence interval estimates, but note that inflows only occur for the fiducial case at 5 kpc (27 per cent of our time samples) and never in the runs where resolution is enhanced. Outflows close to the disc (left panels) are significantly mass and metal loaded ($\eta_{M},\eta_{Z}\geq 1$) but not energy loaded ($\eta_{E}\ll 1$). Enhancing off-the-disc-plane resolution leads to a mild but systematic evolution, with a 50 per cent enhancement in the median mass loading factor between the fiducial and the most resolved case, a factor of three growth in the energy loading factor, and near-constant median metal loading factor. These enhancements are consistent with the trends of faster-moving, cold-to- warm gas and more energetic hot phases established in Section 3.2. However, their significance remains difficult to interpret, as stochasticity due to varying star formation activity at different times (black lines show the 16-84 confidence intervals) as well as the overall noise in the evolution of a numerical setup at a given resolution (Appendix B) can generate shifts of a similar magnitude. In contrast, differences further away from the galaxy (right panels) are far more significant, with a seven-fold increase in mass loading (median $\eta_{M}=0.65,2.35,3.9,4.5$ from purple to green), and a five-fold increase in energy (median $\eta_{E}=0.13,0.38,0.52,0.51$) and metal loading factors (median $\eta_{Z}=1.3,2.7,4.9,6.1$). Importantly, these shifts with enhanced resolution are close to exceeding the 16-84 confidence interval span of the fiducial case and are robust against run-to-run stochasticity (Appendix B). Furthermore, these results are conservative, as we remove inflows which only impact the fiducial case at 5 kpc and further extend its distribution towards negative loading factors. Our study shows that better resolving the diffuse outflowing gas provides a significant boost to their energetics and, in turn, to their ability to regulate galactic star formation. When escaping into the CGM, our increasingly-resolved outflows go from being marginally ($\eta\leq 1$) to significantly ($\eta\geq 1$) mass, energy and metal loaded. This ability to maintain the high mass and metal loadings acquired in the ISM over super- galactic scales could have strong consequences for galactic outflows’ ability to efficiently transport mass outwards and pollute the CGM and IGM with metals. Similarly, the growth towards well energy-loaded outflows at 5 kpc would naturally help pressurize the inner CGM and prevent further gas accretion onto the galaxy. The full reach of these results remains to be determined in a cosmological context, however. Our outflows currently propagate in a low-density, static hot halo surrounding our idealized dwarf galaxy, rather than the layered, dynamic and inflowing CGM of a cosmological galaxy. However, their significance strongly motivates future studies quantifying how the gas cycle of galaxies is affected when outflows and inflows are better resolved. ## 4 Ionic structure of better-resolved outflows A key novel aspect of our simulations is to track the non-equilibrium abundances of ions, providing us with the opportunity to readily study the ionic structure of our outflows and eventually link to emission and absorption observables. 
We now provide preliminary insights into the observational consequences of our findings, and will study specific observational diagnoses to recover gas properties from outflow observations in dedicated companion papers (see Section 6 for further discussion). Figure 9 shows the time-averaged covering fractions of five ions with increasing ionization potentials. To derive those, we generate side-on images of the galaxy at each time output (8 kpc wide and deep, spatial resolution of 20 pc), masking the galactic disc ($|z|<0.5\,\mathrm{kpc}$), and compute histograms of line-of-sight column densities for each ion. We then aggregate the histograms over time, renormalize them to obtain the stacked probability distribution functions, and take the cumulative sums shown in Figure 9. Starting from low-ionization ions (left panel), we observe a systematic, although mild, trend with resolution, with both $f({N_{\text{H{i}}}\geq 10^{15}\,\mathrm{cm}^{-2}})$ and $f({N_{\text{Mg~{}{ii}}}\geq 10^{11}\,\mathrm{cm}^{-2}})$ increasing between our two extreme resolutions by 5 and 8 per cent, respectively. This is primarily driven by an increase at high column densities (e.g. $f({N_{\text{H{i}}}\geq 10^{18}\,\mathrm{cm}^{-2}})$ increases by 15 per cent), which is best explained by the growth of a colder and denser gas phase as the out-of-the-plane cooling length is better resolved (Figure 5). The relatively limited evolution in the covering fractions of these low-ionization ions with resolution is encouraging, although the lack of convergence highlights that we are not yet resolving the characteristic scales regulating the dense, cold phase (see e.g. van de Voort et al. 2019; Gronke & Oh 2020 and Section 5.2 for further discussion). For ions tracking warmer, more diffuse and ionized gas (e.g. C iv, middle panel), we find a reduction in high-column density sightlines and a growth at low column densities with increasing resolution. We attribute the latter to the larger scatter of temperatures in the diffuse phase (Figure 5) yielding a more extended scatter in diffuse C iv sightlines. In contrast, higher-density sightlines hosting C iv at low resolution have likely recombined and cooled to lower-ionization ions when the cooling length is better resolved, explaining the reduction.
Figure 9: Time-averaged covering fractions of selected ions with increasing ionization potentials (left to right panels, respectively) when looking at simulated galaxies edge-on in runs with increasing off-the-plane resolution (from purple for the fiducial case to green for the most resolved: see text for detail). A better resolved cold phase leads to an increase in H i and Mg ii, particularly at low column densities ($\leq 10^{17}\,\mathrm{cm}^{-2}$ and $\leq 10^{12}\,\mathrm{cm}^{-2}$). A better resolved hot phase strongly enhances the covering fractions of high-ionization ions, notably in C iv and O vi, two key ions to probe CGM gas in absorption.
A much larger effect is visible when inspecting the covering fractions of higher-ionization ions (right, O vi, O vii), with $f({N_{\text{O~{}{vi}}}\geq 10^{13}\,\mathrm{cm}^{-2}})$ going from 0.148 to 0.160 to 0.169 to 0.246 with increasing resolution (0.306 to 0.321 to 0.327 to 0.437 for O vii). These 66 and 43 per cent increases reflect the increasing energetics of our better-resolved outflows, with the systematically hotter gas promoting higher-ionization ions. The trend with resolution is however non-linear and difficult to interpret quantitatively.
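The covering-fraction measurement described above amounts to counting, over all stacked time outputs, the fraction of unmasked sightlines whose column density exceeds a given threshold. The following sketch is one possible implementation, not the code used for Figure 9; the array names and NaN masking convention are assumptions.

```python
import numpy as np

def covering_fractions(column_density_maps, thresholds):
    """Fraction of sightlines with N_ion >= threshold, stacked over time.

    `column_density_maps`: list of 2D arrays (one per time output) of
    line-of-sight column densities [cm^-2] for a single ion, with the
    masked disc region (|z| < 0.5 kpc) set to NaN.
    """
    sightlines = np.concatenate(
        [m[np.isfinite(m)].ravel() for m in column_density_maps]
    )
    return np.array([(sightlines >= N).mean() for N in thresholds])

# Hypothetical usage, e.g. f(N_OVI >= 10^13 cm^-2):
# f_ovi = covering_fractions(ovi_maps, thresholds=[1e13, 1e14, 1e15])
```

This is equivalent to stacking the per-snapshot histograms, renormalizing, and taking one minus the cumulative sum, as described in the text.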
Lower-resolution runs (violet and blue) show mild evolution until a sudden increase in the covering fraction occurs with our most-resolved run. We checked that this is in part due to stochasticity in the given realization of the star formation and outflow history. For example, our two other realizations of the outflow at 36 pc (Appendix B) show 74 and 24 per cent increases in $f({N_{\text{O~{}{vi}}}\geq 10^{13}\,\mathrm{cm}^{-2}})$ compared to our fiducial simulation, while our other two random realizations of the fiducial setup vary by as much as 27 per cent. Despite these quantitative uncertainties, we find that runs with improved treatment of cooling instabilities in the diffuse gas always have elevated covering fractions of O vi and O vii. Our results show that improving the treatment of cooling instabilities in the diffuse gas provides a significant avenue to increasing the covering fraction of high-ionization ions. Probing the thermodynamics of the gas surrounding dwarf galaxies using ionic absorption lines is already possible (e.g. Bordoloi et al. 2014; Johnson et al. 2017; Zheng et al. 2019, 2020, 2023; Qu & Bregman 2022), although we stress that our results should be compared with great care (if at all) to such data. Our idealized simulations lack a self-consistent large-scale CGM surrounding the dwarf, strongly limiting comparisons at large impact parameters and against statistical samples of dwarf galaxies. We verified that our results are qualitatively unchanged if including slightly higher impact parameters (using 10 kpc-wide images rather than 8), but refrain from detailed comparisons with data. Nonetheless, our results stress the importance of improving out-of-the-plane resolution to properly capture the ionic structure of outflows and constrain galaxy formation models by interpreting their observables in emission and absorption. These results further motivate applications of our setup and refinement strategy in a cosmological context to provide more robust comparisons against observational measurements at larger impact parameters, well into the CGM.
## 5 Discussion
### 5.1 The numerical cost of refining on the cooling length
To establish that better resolving the multiphase structure of galactic outflows boosts their energetics, we rely on adding resolution in the diffuse thermally-unstable gas. This carries an associated numerical cost that potentially limits the practical scope of our approach and the applicability of our refinement strategy. We discuss in this section the cost of refining on the cooling length and strategies we envision to mitigate it. Simulations in our main suite were performed with a different number of cores across different machines. Their total costs thus mix the increased computational load with different intrinsic hardware and parallel efficiencies, making a direct comparison difficult. To provide a cleaner test, we start from our most refined setup at the time shown in Figures 1 and 2 ($t=540\,\mathrm{Myr}$; i.e. neither particularly quiescent nor outflowing) and restart four runs for 6 wall-time hours on 768 cores of the same machine, gradually degrading the cooling length refinement target to 36 pc, 72 pc, and none. Table 1 (right column) reports the median timestep cost (in CPU wall-clock time per simulated Myr) for each of these tests, which we stress cannot be extrapolated to estimate the total simulation cost as the timestep strongly varies depending on the outflowing conditions.
As expected, the median cost of a timestep is rising as we increasingly resolve the cooling length, driven by the accordingly increasing number of grid cells (Figure 2). However, this cost only increases by 2 and 28 per cent for our intermediate setups that resolve the cooling length to 72 and 36 pc respectively. It then sharply grows to a 3-fold increase in the most refined run that matches the cooling length with the ISM resolution at 18 pc. This non-linear increase reflects the fact that the computational effort is dominated by the densest, most refined cells that have more restrictive Courant–Friedrichs–Lewy conditions and higher convergence costs for the chemistry and radiative transfer solvers. In other words, intermediate setups resolving the diffuse gas twice or four times worse than the ISM are not free, but their computational costs can be kept under control as adding many coarser cells remains a sub-dominant cost compared to computations in the fine ISM gas cells. Given the already large gains in outflow loading factors we observe at 72 and 36 pc (Figure 8), we foresee these intermediate setups as the most attractive and tractable options for cosmological setups. In fact, cosmological simulations of galaxies with such an AMR strategy refining on the cooling length have already been achieved (Simons et al. 2020; Lochhaas et al. 2021, 2022), although at a comparatively much lower spatial resolution in both the ISM and the cooling length ($\approx 200\,\mathrm{pc}$) than here. This lower resolution arises from targeting larger galaxies than the dwarf considered here, and from fundamental differences between the isolated setup of this work and a live cosmological environment. The presence of dense filaments and inflows, of merging galaxies with their stripped tails of gas, and of an already multiphase and turbulent CGM surrounding the central galaxy all add gas highly susceptible to cooling instabilities. This would likely trigger large amounts of additional refinement and sharply increase the computational and memory costs of the simulation. Although field-testing is required to pinpoint the achievable cooling-length resolution in a given cosmological setup with ramses-rtz, we remain confident of its overall tractability due to the overwhelmingly dominant cost of the ISM gas. Furthermore, reducing the computational expense is possible by relaxing our requirement that the cooling length is resolved by at least $N_{\text{cells}}=8$, although further exploration is needed to understand the convergence properties of our simulations with $N_{\text{cells}}$. Another mitigation strategy could also be to release additional resolution levels gradually over cosmic time for the cooling length as is often done for the ISM (e.g. Snaith et al. 2018). Or more simply to restart a simulation for a limited time period around a specific event such as a starburst to focus all the effort on one maximally-resolved galactic outflow in a self-consistent environment. ### 5.2 Numerical uncertainties in resolving the multiphase structure of galactic outflows Our results show that incrementally increasing resolution in the diffuse, outflowing gas boosts both the mass and energy loading factors of outflows as they escape the galaxy. In Section 3, we tie this to a better-resolved multiphase structure, with the cold and hot phases each becoming more prominent and more energetic. 
In this section, we discuss the physical and numerical mechanisms responsible for this effect, and the remaining uncertainties associated with modelling the hydrodynamics of galactic outflows. Multiple physical processes are responsible for setting the multiphase structure of gas on small scales, ranging from shocks and hydrodynamical instabilities to cooling and heating processes (see Section 1 and Faucher-Giguere & Oh 2023 for a review). The primary targets of this study are unresolved cooling instabilities in the diffuse gas (Figure 1) leading to an artificial suppression of multiphase structures (e.g. discussion in Hummels et al. 2019). We tackle the issue using the refinement strategy described in Section 2.2, but a direct consequence of increasing resolution in the diffuse gas is to also help mitigate other numerical effects. In particular, numerical diffusion of hydrodynamical quantities due to the advection of a fluid in motion with respect to a fixed grid (e.g. Robertson et al. 2010; Teyssier 2015) is particularly important in the context of fast outflowing gas ($\geq 100\,\text{km}\,\text{s}^{-1}$). Advection errors are demonstrably reduced with increasing resolution, and our most resolved simulations would thus suffer less from advection-driven numerical mixing between gas phases. Similarly, the heating, lifetime and propagation of shocks are also better captured with increasing resolution (e.g. Skillman et al. 2008; Vazza et al. 2011) and typically under-resolved in the diffuse gas (e.g. Bennett & Sijacki 2020). A better treatment of shocks then comes as a by-product of refining on the cooling length (e.g. the thinner shells around expanding superbubbles in Figure 1) and would improve their ability to thermalize gas at higher temperatures, particularly in the lowest-density regions which are better sampled (e.g. the increased scatter in temperature at a given velocity in Figure 7). Furthermore, resolving the transition between subsonic and supersonic gas in the outflow is key to accurately capturing the acceleration of the hot phase (e.g. Chevalier & Clegg 1985; Fielding & Bryan 2022). This can be under-resolved with traditional schemes (Smith et al. 2023), and is likely improved in our more resolved runs. Lastly, hydrodynamical turbulence is heavily suppressed by insufficient resolution, and sets the characteristic growth timescales of the cold phase (Gronke & Oh 2018; Tan et al. 2021; Gronke et al. 2022). The traditional quasi-Lagrangian strategy erases small-scale information in the diffuse gas and likely under-estimates the amount of gas turbulence (see also Bennett & Sijacki 2020). The added resolution visibly improves the level of turbulence in the outflowing gas (e.g. in the movie accompanying this paper), despite our refinement strategy still suppressing turbulence due to refinement/derefinement noise (Teyssier 2015, see also Martin-Alvarez et al. 2022 for other strategies to mitigate this issue in a galaxy formation context). Pinpointing the respective importance of each of these coupled numerical effects remains difficult in our current setup. Disentangling them could be possible by implementing complementary refinement strategies that target one effect at a time (e.g. shock refinement; Bennett & Sijacki 2020) and are used in turn. But we stress that the numerical gains associated with targeting the cooling length of the diffuse gas are multifold, and extend well beyond better-captured cooling instabilities.
Despite these gains, however, our simulations do not exhibit signs of numerical convergence in gas temperature, density or velocity PDFs (Figures 5 and 6), highlighting that we are still under-resolving relevant physical processes. This should be expected, as resolving the characteristic length scales of each individual microscopic process is required to obtain numerically converged results and can require sub-pc resolution in the diffuse gas (e.g. Koyama & Inutsuka 2004; Kim & Kim 2013). However, the interaction of many processes, each with its own preferred regime and characteristic scales, somewhat blurs this picture, and quantities integrated across the plasma might converge without ever resolving microscopic scales (Tan et al. 2021). In fact, despite the clear growth of outflow loading factors at 5 kpc with increasing resolution (Figure 8), this growth slows as the resolution keeps improving. This hints that convergence in such integrated quantities could be achieved, in particular in a statistical sense, given the stochasticity associated with star formation activity. More quantitative statements, however, require a wider exploration of numerical resolutions and models. To summarize, the large boost in outflow loading factors induced by better resolving the structure of the wind (Figure 8) strongly motivates extending our analysis. In particular, future studies quantifying the respective roles of the physical and numerical effects discussed above will be key to gaining an understanding of the convergence and robustness of our modelling of SN-driven winds, and of their predicted efficiency at regulating galactic star formation. We stress that such studies would need to be complemented by parallel explorations of the uncertain launching physics of the wind within the ISM. This aspect remains unexplored in this work, but different star formation models, feedback channels and stellar evolution processes can also generate order-of-magnitude variations in dwarf galaxy outflow loading factors (e.g. Agertz et al. 2020; Emerick et al. 2020; Smith et al. 2021; Andersson et al. 2022; Steinwandel et al. 2022b for recent studies; Naab & Ostriker 2017 for a review).
## 6 Conclusion
We perform radiative-hydrodynamical numerical simulations of an isolated dwarf galaxy, improving the thermodynamical modelling of the outflowing gas escaping the galaxy. Each simulation drives self-consistent galactic outflows from SNe and photo-ionization feedback within the ISM resolved at $18\,\mathrm{pc}$, and accounts for the non-equilibrium chemistry and cooling of gas coupled to the radiative transfer using the AMR ramses-rtz code (Katz 2022). To remedy the under-resolved cooling instabilities in the diffuse outflow when using a traditional quasi-Lagrangian refinement strategy (Figure 1), we implement a new refinement strategy in ramses-rtz targeting the local gas cooling length computed on-the-fly from the non-equilibrium, line-by-line cooling rate. We then perform additional re-simulations of our dwarf galaxy incrementally resolving the cooling length down to 72, 36 and 18 pc. The additional refinement criterion on the cooling length improves the resolution in the diffuse out-of-the-galaxy-plane gas (Figure 2), without significantly modifying the average star formation rate or amount of stellar mass formed (Figure 3), or the star formation and feedback conditions in the dense ISM gas (Appendix A).
This provides us with a controlled study, where the launching mechanics of the outflows within the galaxy are comparable but their propagation as they escape the galaxy is better resolved. With increasing resolution, outflows become increasingly multiphase, exhibiting both a systematically higher fraction of colder and denser gas, and a more diffuse phase featuring an increased scatter in temperature (Figures 4 and 5). Furthermore, the improved numerical treatment systematically increases the vertical velocity of the gas (Figure 6), augmenting the fraction of mass exceeding the circular velocity of the galaxy. Simultaneously, the energy-carrying hot phase ($\geq 10^{5}\,\mathrm{K}$) becomes more energetic as resolution improves, exhibiting hotter temperatures, faster velocities, and thermalizing the gas at higher temperatures for a given velocity (Figure 7). Over $\approx 500\,\mathrm{Myr}$ of evolution, the combination of these factors leads to a seven-fold increase in the average mass loading factor, and five-fold increases in the average energy and metal loading factors in the inner CGM ($r=5\,\mathrm{kpc}$, Figure 8). These boosts occur without modifications to the internal feedback budget or modelling and are robust to stochastic realizations of the star formation history (Appendix B). Rather, they stem from better resolving the hydrodynamical and thermodynamical processes as the outflow propagates into the CGM (Section 5.2). The significance of this boost in outflow energetics without new feedback mechanisms strongly motivates future studies extending the analysis to a cosmological context. Quantifying how such outflows interact with a self-consistent, already-multiphase cosmological CGM, rather than the isolated hot halo of this study, will be key to understanding their improved efficiency at regulating galactic star formation over cosmic time. Furthermore, a key novelty of our simulations is to track the non-equilibrium abundances of $\geq 60$ ions, allowing us to robustly capture the ionic structure of the diffuse gas (Figure 9). Covering fractions of low-ionization ions (e.g. H i) exhibit a limited increase, primarily driven by a more prominent dense and cold phase (e.g. $N_{\text{H{i}}}\geq 10^{18}\,\mathrm{cm}^{-2}$ increasing by 15 per cent). Higher-ionization ions (O vi, O vii) show larger enhancements following the hotter temperatures of our outflows with increased resolution, with the covering fractions of both $N_{\text{O~{}{vi}}}\geq 10^{13}\,\mathrm{cm}^{-2}$ and $N_{\text{O~{}{vii}}}\geq 10^{13}\,\mathrm{cm}^{-2}$ increasing by $\approx 50$ per cent between our extreme resolutions. Our results demonstrate the importance of additional resolution in the diffuse gas to capture the ionic structure of outflows, and interpret their observable signatures in emission and absorption. Dedicated companion papers building upon the improved treatment of outflows presented here will quantify the robustness of inferences of outflow properties from observational diagnoses (see also the movie accompanying this paper). In particular, we will post-process our simulations with resonant-line radiative transfer using the rascas code (Michel-Dansac et al. 2020) to obtain mock Mg ii, Si ii and Fe ii spectra of our outflows. This will allow us to assess the accuracy with which such observations (e.g. Rupke et al. 2005; Martin & Bouché 2009; Rubin et al. 2011; Martin et al. 2013) can recover outflow properties and loading factors (H. Katz et al. in prep).
We also plan to use our simulations to assess potential biases in recovering spatially-resolved outflow metallicities from emission line maps (Cameron et al. 2021, A. Cameron et al. in prep). Finally, thermally-unstable and multiphase gas are common features across galaxy formation and astrophysics. The approach presented in this study provides a highly modular AMR strategy to better resolve cooling instabilities, with a computational cost that can be adapted to galaxy formation applications while taking into account the resources at hand (see Section 5.1 for a discussion). We foresee a wide range of potential applications where the combination of this strategy with the non-equilibrium, ion-by-ion cooling of ramses-rtz could provide significant modelling improvements. Namely, to help resolve the multiphase structure of the diffuse, low-density CGM surrounding galaxies (e.g. Hummels et al. 2019; Peeples et al. 2019; Suresh et al. 2019; van de Voort et al. 2019; Lochhaas et al. 2021, 2022), to better capture the potential shattering of cosmological filaments and sheets (Mandelker et al. 2020; Mandelker et al. 2021), or to provide an efficient way to improve the numerical treatment of cosmological ram-pressure stripping (e.g. Simons et al. 2020) and better understand the cooling, star-forming tails of jellyfish galaxies (e.g. Tonnesen & Bryan 2012; Tonnesen 2019; Lee et al. 2022).
## Acknowledgements
MR would like to thank Oscar Agertz and Eric P. Andersson for insightful discussions during the construction of this work and comments on an earlier version of this manuscript. MR and HK are supported by the Beecroft Fellowship funded by Adrian Beecroft. This work was performed in part using the DiRAC Data Intensive service at Leicester and DiRAC@Durham facilities, operated by the University of Leicester and Institute for Computational Cosmology IT Services, which form part of the STFC DiRAC HPC Facility (www.dirac.ac.uk). This equipment is funded by BIS National E-Infrastructure capital grants ST/K000373/1, ST/P002293/1, ST/R002371/1 and ST/S002502/1, STFC DiRAC Operations grant ST/K0003259/1, and Durham University and STFC operations grant ST/R000832/1. DiRAC is part of the National E-Infrastructure. For the purpose of Open Access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission. We thank the developers of Pynbody (Pontzen et al. 2013), Tangos (Pontzen & Tremmel 2018), NumPy (van der Walt et al. 2011), SciPy (Virtanen et al. 2020), Jupyter (Ragan-Kelley et al. 2014) and Matplotlib (Hunter 2007) for providing open-source software used in this work. The Astrophysics Data System (ADS) and arXiv preprint repository were used extensively in this work.
## Author contributions
The main roles of the authors were, using the CRediT (Contribution Roles Taxonomy) system (https://authorservices.wiley.com/author-resources/Journal-Authors/open-access/credit.html): MR: Conceptualization; Data curation; Formal analysis; Investigation; Project Administration; Writing – original draft. HK: Conceptualization; Methodology; Software; Writing – review and editing. AC: Conceptualization; Writing – review and editing. JD: Resources; Writing – review and editing. AS: Resources; Writing – review and editing.
## Data availability
The data underlying this article will be shared upon reasonable request to the corresponding author.
## References
* Agertz et al. (2013) Agertz O., Kravtsov A. V., Leitner S. N., Gnedin N.
Y., 2013, ApJ, 770, 25 * Agertz et al. (2020) Agertz O., et al., 2020, MNRAS, 491, 1656 * Andersson et al. (2020) Andersson E. P., Agertz O., Renaud F., 2020, MNRAS, 494, 3328 * Andersson et al. (2022) Andersson E. P., Agertz O., Renaud F., Teyssier R., 2022, INFERNO: Galactic Winds in Dwarf Galaxies with Star-by-Star Simulations Including Runaway Stars (arXiv:2209.06218) * Armillotta et al. (2016) Armillotta L., Fraternali F., Marinacci F., 2016, MNRAS, 462, 4157 * Aubert & Teyssier (2010) Aubert D., Teyssier R., 2010, ApJ, 724, 244 * Begelman & McKee (1990) Begelman M. C., McKee C. F., 1990, ApJ, 358, 375 * Bennett & Sijacki (2020) Bennett J. S., Sijacki D., 2020, MNRAS, 499, 597 * Bialy & Sternberg (2019) Bialy S., Sternberg A., 2019, ApJ, 881, 160 * Bordoloi et al. (2014) Bordoloi R., et al., 2014, ApJ, 796, 136 * Brüggen & Scannapieco (2016) Brüggen M., Scannapieco E., 2016, ApJ, 822, 31 * Cameron et al. (2021) Cameron A. J., et al., 2021, ApJ, 918, L16 * Cameron et al. (2022) Cameron A. J., Katz H., Rey M. P., 2022, A Novel Approach to Correcting $T_e$-Based Mass-Metallicity Relations (arXiv:2210.14234) * Carr et al. (2022) Carr C., Bryan G. L., Fielding D. B., Pandya V., Somerville R. S., 2022, Regulation of Star Formation by a Hot Circumgalactic Medium (arXiv:2211.05115) * Chevalier & Clegg (1985) Chevalier R. A., Clegg A. W., 1985, Nature, 317, 44 * Chisholm et al. (2017) Chisholm J., Tremonti C. A., Leitherer C., Chen Y., 2017, MNRAS, 469, 4831 * Collins & Read (2022) Collins M. L. M., Read J. I., 2022, Nat Ast * Dekel & Silk (1986) Dekel A., Silk J., 1986, ApJ, 303, 39 * Emerick et al. (2020) Emerick A., Bryan G. L., Mac Low M.-M., 2020, preprint (arXiv:2007.03702) * Faucher-Giguere & Oh (2023) Faucher-Giguere C.-A., Oh S. P., 2023, Key Physical Processes in the Circumgalactic Medium (arXiv:2301.10253) * Federrath & Klessen (2012) Federrath C., Klessen R. S., 2012, ApJ, 761, 156 * Fichtner et al. (2022) Fichtner Y. A., Grassitelli L., Romano-Diaz E., Porciani C., 2022, MNRAS, 512, 4573 * Field (1965) Field G. B., 1965, ApJ, 142, 531 * Fielding & Bryan (2022) Fielding D. B., Bryan G. L., 2022, ApJ, 924, 82 * Fielding et al. (2018) Fielding D., Quataert E., Martizzi D., 2018, MNRAS, 481, 3325 * Fielding et al. (2020) Fielding D. B., Ostriker E. C., Bryan G. L., Jermyn A. S., 2020, ApJ, 894, L24 * Genel et al. (2019) Genel S., et al., 2019, ApJ, 871, 21 * Girichidis et al. (2016) Girichidis P., et al., 2016, MNRAS, 456, 3432 * Glover & Jappsen (2007) Glover S. C. O., Jappsen A. K., 2007, ApJ, 666, 1 * Gronke & Oh (2018) Gronke M., Oh S. P., 2018, MNRAS, 480, L111 * Gronke & Oh (2020) Gronke M., Oh S. P., 2020, MNRAS, 492, 1970 * Gronke et al. (2022) Gronke M., Oh S. P., Ji S., Norman C., 2022, MNRAS, 511, 859 * Guillet & Teyssier (2011) Guillet T., Teyssier R., 2011, J. Comput. Phys., 230, 4756 * Gutcke et al. (2021) Gutcke T. A., Pakmor R., Naab T., Springel V., 2021, MNRAS, 501, 5597 * Haardt & Madau (2012) Haardt F., Madau P., 2012, ApJ, 746, 125 * Heckman et al. (1995) Heckman T. M., Dahlem M., Lehnert M. D., Fabbiano G., Gilmore D., Waller W. H., 1995, ApJ, 448, 98 * Hu (2019) Hu C.-Y., 2019, MNRAS, 483, 3363 * Huang et al. (2020) Huang S., Katz N., Scannapieco E., Cottle J., Davé R., Weinberg D. H., Peeples M. S., Brüggen M., 2020, MNRAS, 497, 2586 * Huang et al. (2022) Huang S., Katz N., Cottle J., Scannapieco E., Davé R., Weinberg D. H., 2022, MNRAS, 509, 6091 * Hummels et al. (2019) Hummels C. B., et al., 2019, ApJ, 882, 156 * Hunter (2007) Hunter J. 
D., 2007, CiSE, 9, 90 * Johnson et al. (2017) Johnson S. D., Chen H.-W., Mulchaey J. S., Schaye J., Straka L. A., 2017, ApJ, 850, L10 * Kanjilal et al. (2021) Kanjilal V., Dutta A., Sharma P., 2021, MNRAS, 501, 1143 * Katz (2022) Katz H., 2022, MNRAS, 512, 348 * Katz et al. (2017) Katz H., Kimm T., Sijacki D., Haehnelt M. G., 2017, MNRAS, 468, 4831 * Keller et al. (2019) Keller B. W., Wadsley J. W., Wang L., Kruijssen J. M. D., 2019, MNRAS, 482, 2244 * Kim & Kim (2013) Kim J.-G., Kim W.-T., 2013, ApJ, 779, 48 * Kim & Ostriker (2018) Kim C.-G., Ostriker E. C., 2018, ApJ, 853, 173 * Kim et al. (2017) Kim C.-G., Ostriker E. C., Raileanu R., 2017, ApJ, 834, 25 * Kim et al. (2020) Kim C.-G., et al., 2020, ApJ, 903, L34 * Kim et al. (2023) Kim J.-G., Gong M., Kim C.-G., Ostriker E. C., 2023, ApJS, 264, 10 * Kimm et al. (2015) Kimm T., Cen R., Devriendt J., Dubois Y., Slyz A., 2015, MNRAS, 451, 2900 * Kimm et al. (2017) Kimm T., Katz H., Haehnelt M., Rosdahl J., Devriendt J., Slyz A., 2017, MNRAS, 466, 4826 * Kimm et al. (2018) Kimm T., Haehnelt M., Blaizot J., Katz H., Michel-Dansac L., Garel T., Rosdahl J., Teyssier R., 2018, MNRAS, 475, 4617 * Klein et al. (1994) Klein R. I., McKee C. F., Colella P., 1994, ApJ, 420, 213 * Koyama & Inutsuka (2004) Koyama H., Inutsuka S.-i., 2004, ApJ, 602, L25 * Kroupa (2001) Kroupa P., 2001, MNRAS, 322, 231 * Kroupa (2002) Kroupa P., 2002, Science, 295, 82 * Laha et al. (2021) Laha S., Reynolds C. S., Reeves J., Kriss G., Guainazzi M., Smith R., Veilleux S., Proga D., 2021, Nat Ast, 5, 13 * Lee et al. (2022) Lee J., Kimm T., Blaizot J., Katz H., Lee W., Sheen Y.-K., Devriendt J., Slyz A., 2022, ApJ, 928, 144 * Li & Bryan (2020) Li M., Bryan G. L., 2020, ApJL, 890, L30 * Lochhaas et al. (2021) Lochhaas C., Tumlinson J., O’Shea B. W., Peeples M. S., Smith B. D., Werk J. K., Augustin R., Simons R. C., 2021, ApJ, 922, 121 * Lochhaas et al. (2022) Lochhaas C., et al., 2022, arXiv e-prints, p. arXiv:2206.09925 * Mandelker et al. (2020) Mandelker N., Nagai D., Aung H., Dekel A., Birnboim Y., van den Bosch F. C., 2020, MNRAS, 494, 2641 * Mandelker et al. (2021) Mandelker N., van den Bosch F. C., Springel V., van de Voort F., Burchett J. N., Butsky I. S., Nagai D., Oh S. P., 2021, ApJ, 923, 115 * Marasco et al. (2023) Marasco A., et al., 2023, A&A, 670 * Martin & Bouché (2009) Martin C. L., Bouché N., 2009, ApJ, 703, 1394 * Martin-Alvarez et al. (2022) Martin-Alvarez S., Devriendt J., Slyz A., Sijacki D., Richardson M. L. A., Katz H., 2022, MNRAS, 513, 3326 * Martin et al. (2013) Martin C. L., Shapley A. E., Coil A. L., Kornei K. A., Murray N., Pancoast A., 2013, ApJ, 770, 41 * McCourt et al. (2012) McCourt M., Sharma P., Quataert E., Parrish I. J., 2012, MNRAS, 419, 3319 * McKee & Ostriker (1977) McKee C. F., Ostriker J. P., 1977, ApJ, 218, 148 * McQuinn (2016) McQuinn M., 2016, ARAA, 54, 313 * McQuinn et al. (2018) McQuinn K. B. W., Skillman E. D., Heilman T. N., Mitchell N. P., Kelley T., 2018, MNRAS, 477, 3164 * McQuinn et al. (2019) McQuinn K. B. W., van Zee L., Skillman E. D., 2019, ApJ, 886, 74 * Michel-Dansac et al. (2020) Michel-Dansac L., Blaizot J., Garel T., Verhamme A., Kimm T., Trebitsch M., 2020, A&A, 635, A154 * Muratov et al. (2015) Muratov A. L., Kereš D., Faucher-Giguère C.-A., Hopkins P. F., Quataert E., Murray N., 2015, MNRAS, 454, 2691 * Murray et al. (2005) Murray N., Quataert E., Thompson T. A., 2005, ApJ, 618, 569 * Naab & Ostriker (2017) Naab T., Ostriker J. P., 2017, ARAA, 55, 59 * Ohlin et al. 
(2019) Ohlin L., Renaud F., Agertz O., 2019, MNRAS, 485, 3887 * Oppenheimer & Schaye (2013) Oppenheimer B. D., Schaye J., 2013, MNRAS, 434, 1043 * Ott et al. (2005) Ott J., Walter F., Brinks E., 2005, MNRAS, 358, 1453 * Pandya et al. (2021) Pandya V., et al., 2021, MNRAS, 508, 2979 * Peeples et al. (2019) Peeples M. S., et al., 2019, ApJ, 873, 129 * Pontzen & Tremmel (2018) Pontzen A., Tremmel M., 2018, ApJS, 237, 23 * Pontzen et al. (2013) Pontzen A., Roškar R., Stinson G., Woods R., 2013, Astrophysics Source Code Library, p. ascl:1305.002 * Prgomet et al. (2022) Prgomet M., Rey M. P., Andersson E. P., Segovia Otero A., Agertz O., Renaud F., Pontzen A., Read J. I., 2022, MNRAS, 513, 2326 * Qu & Bregman (2022) Qu Z., Bregman J. N., 2022, ApJ, 927, 228 * Ragan-Kelley et al. (2014) Ragan-Kelley M., Perez F., Granger B., Kluyver T., Ivanov P., Frederic J., Bussonnier M., 2014, American Geophysical Union, 2014, H44D * Robertson et al. (2010) Robertson B. E., Kravtsov A. V., Gnedin N. Y., Abel T., Rudd D. H., 2010, MNRAS, 401, 2463 * Rosdahl & Blaizot (2012) Rosdahl J., Blaizot J., 2012, MNRAS, 423, 344 * Rosdahl & Teyssier (2015) Rosdahl J., Teyssier R., 2015, MNRAS, 449, 4380 * Rosdahl et al. (2013) Rosdahl J., Blaizot J., Aubert D., Stranex T., Teyssier R., 2013, MNRAS, 436, 2188 * Rosdahl et al. (2015) Rosdahl J., Schaye J., Teyssier R., Agertz O., 2015, MNRAS, 451, 34 * Rubin et al. (2011) Rubin K. H. R., Prochaska J. X., Ménard B., Murray N., Kasen D., Koo D. C., Phillips A. C., 2011, ApJ, 728, 55 * Rupke et al. (2005) Rupke D. S., Veilleux S., Sanders D. B., 2005, ApJS, 160, 87 * Sarkar et al. (2022) Sarkar K. C., Sternberg A., Gnat O., 2022, ApJ, 940, 44 * Schroetter et al. (2019) Schroetter I., et al., 2019, MNRAS, 490, 4368 * Sharma et al. (2010) Sharma P., Parrish I. J., Quataert E., 2010, ApJ, 720, 652 * Sharma et al. (2012) Sharma P., McCourt M., Quataert E., Parrish I. J., 2012, MNRAS, 420, 3174 * Simons et al. (2020) Simons R. C., et al., 2020, ApJ, 905, 167 * Skillman et al. (2008) Skillman S. W., O’Shea B. W., Hallman E. J., Burns J. O., Norman M. L., 2008, ApJ, 689, 1063 * Smith et al. (2021) Smith M. C., Bryan G. L., Somerville R. S., Hu C.-Y., Teyssier R., Burkhart B., Hernquist L., 2021, MNRAS, 506, 3882 * Smith et al. (2023) Smith M. C., et al., 2023, Arkenstone I: A Novel Method for Robustly Capturing High Specific Energy Outflows In Cosmological Simulations (arXiv:2301.07116) * Snaith et al. (2018) Snaith O. N., Park C., Kim J., Rosdahl J., 2018, MNRAS, 477, 983 * Somerville & Davé (2015) Somerville R. S., Davé R., 2015, ARAA, 53, 51 * Stanway et al. (2016) Stanway E. R., Eldridge J. J., Becker G. D., 2016, MNRAS, 456, 485 * Steinwandel et al. (2022a) Steinwandel U. P., Kim C.-G., Bryan G. L., Ostriker E. C., Somerville R. S., Fielding D. B., 2022a, The Structure and Composition of Multiphase Galactic Winds in a Large Magellanic Cloud Mass Simulated Galaxy * Steinwandel et al. (2022b) Steinwandel U. P., Bryan G. L., Somerville R. S., Hayward C. C., Burkhart B., 2022b, On the Impact of Runaway Stars on Dwarf Galaxies with Resolved Interstellar Medium (arXiv:2205.09774) * Summers et al. (2003) Summers L. K., Stevens I. R., Strickland D. K., Heckman T. M., 2003, MNRAS, 342, 690 * Summers et al. (2004) Summers L. K., Stevens I. R., Strickland D. K., Heckman T. M., 2004, MNRAS, 351, 1 * Suresh et al. (2019) Suresh J., Nelson D., Genel S., Rubin K., Hernquist L., 2019, MNRAS, 483, 4040 * Tan et al. (2021) Tan B., Oh S. 
P., Gronke M., 2021, MNRAS, 502, 3179 * Teyssier (2002) Teyssier R., 2002, A&A, 385, 337 * Teyssier (2015) Teyssier R., 2015, ARAA, 53, 325 * Tonnesen (2019) Tonnesen S., 2019, ApJ, 874, 161 * Tonnesen & Bryan (2012) Tonnesen S., Bryan G. L., 2012, MNRAS, 422, 1609 * Toro et al. (1994) Toro E. F., Spruce M., Speares W., 1994, Shock Waves, 4, 25 * Vazza et al. (2011) Vazza F., Dolag K., Ryu D., Brunetti G., Gheller C., Kang H., Pfrommer C., 2011, MNRAS, 418, 960 * Veilleux et al. (2020) Veilleux S., Maiolino R., Bolatto A. D., Aalto S., 2020, ARAA, 28, 2 * Virtanen et al. (2020) Virtanen P., et al., 2020, Nat Methods, 17, 261 * Voit et al. (2015) Voit G. M., Donahue M., Bryan G. L., McDonald M., 2015, Nature, 519, 203 * Weinberger & Hernquist (2023) Weinberger R., Hernquist L., 2023, MNRAS, 519, 3011 * Zheng et al. (2019) Zheng Y., et al., 2019, MNRAS, 490, 467 * Zheng et al. (2020) Zheng Y., Emerick A., Putman M. E., Werk J. K., Kirby E. N., Peek J. E. G., 2020, ApJ, 2020, 133 * Zheng et al. (2023) Zheng Y., et al., 2023, A Comprehensive Investigation of Metals in the Circumgalactic Medium of Nearby Dwarf Galaxies (arXiv:2301.12233), doi:10.17909/ve0k-ps78 * van de Voort et al. (2019) van de Voort F., Springel V., Mandelker N., van den Bosch F. C., Pakmor R., 2019, MNRAS, 482, L85 * van der Walt et al. (2011) van der Walt S., Colbert S. C., Varoquaux G., 2011, CiSE, 13, 22 ## Appendix A Star formation statistics We check in this Appendix that the star formation properties of our simulations are unaffected by the additional refinement scheme on the cooling length, ensuring that the stellar distribution and its associated feedback are consistent between runs. Figure 3 shows that the total amount of stars formed over the course of the simulation is similar to within 15 per cent between runs at different resolutions. We also checked that the stochastic re-simulations at the same resolution presented in Appendix B verify this condition as well. This guarantees that the total, integrated feedback budget over the evolution of our galaxies is largely unchanged, but not that its instantaneous coupling and efficiency is unaffected. A formal quantification of this would require storing the properties of each individual SN to obtain their three-dimensional clustering and explosion conditions (e.g. Smith et al. 2021), but this information is unfortunately not recorded by our main simulations. Instead, we save the gas density and galactic spherical radius at which stellar particles are born, providing a useful proxy for the distribution of young stars that are the main feedback contributors. We show their distributions in Figure 10. First comparing runs with an increasingly resolved cooling length (top panels), we observe significant differences between their gas density and radii distributions of stellar particles at birth. We compute the two- dimensional Kolmogorov-Smirnov (KS) distance from the reference distribution (‘ISM 18 pc’) and find (0.06, 0.10, 0.08) for the density distributions and (0.09, 0.10, 0.09) for the galactic radii. Such statistics would be enough to reject that the two samples are drawn from the same underlying distribution at high confidence (p-values $<10^{-7}$). Figure 10: Distribution of gas densities (left) and galactic radii (right) for star particles at their time of birth. 
Differences between runs with increasingly resolved cooling lengths (top row) are comparable with run-to-run scatter at a fixed setup (middle and bottom rows), confirming that star formation proceeds similarly in all simulations.
This rejection, however, likely arises from the large sample size ($\approx 10,000$) affecting the KS test, rather than from intrinsic differences. In fact, the level of variations between distributions with different cooling lengths is comparable to or smaller than the run-to-run scatter at a fixed numerical setup (middle and bottom rows). In these cases, we respectively find KS distances of (0.07, 0.09) and (0.04, 0.06) between distributions of gas densities, and (0.06, 0.04) and (0.06, 0.10) between distributions of radii. We also verified that the Wasserstein distances between distributions, which are less sensitive to extreme values, are comparable between stochastic re-simulations and different resolution runs.
Figure 11: Distribution of gas densities at which SNe explode in the four re-simulation tests described in Section 5.1. Differences in distributions are minimal, confirming that SN feedback proceeds similarly at all resolutions of the cooling length.
We thus conclude that introducing additional refinement on the cooling length does not impact the star formation within our galaxies beyond natural stochasticity, with star formation proceeding at a consistent range of densities and radii between all runs. This could have been expected since we minimally modify the AMR structure of the dense disc where star formation occurs. Even when additional resolution is visible in the galactic plane close to the ISM resolution (Figure 2, right-most), such gas is diffuse and thermally-unstable rather than dense and star-forming, and is being better resolved due to its small cooling length. Our star formation criterion based on convergent flows ensures that star formation proceeds in a similar range of densities and radii in this case (green) as in the other runs. To provide an additional verification that feedback conditions in the ISM are comparable between runs, we store the densities at which SNe explode for our four limited-time a posteriori re-simulations described in Section 5.1. Figure 11 shows their distribution across the four resolutions for the cooling length. Differences in these distributions are minimal, giving strong confidence that the star formation, feedback and outflow launching conditions within the ISM are comparable at all resolutions of the cooling length. Rather than differences in initial energetics, the results in Section 3 stem from better resolving the propagation of the outflow into the CGM.
## Appendix B Sensitivity of our results to stochasticity
Figure 12: Distribution of star formation rates (top row), mass and energy loading factors at $|z|=1\,\mathrm{kpc}$ (middle rows), and mass and energy loading factors at $r=5\,\mathrm{kpc}$ (bottom rows) for our resolution study (left column), three stochastic re-runs of our fiducial setup (middle column) and three stochastic re-runs refining the cooling length down to 36 pc (right column). All simulations show consistent star formation rates and loading factors close to the disc, confirming that star formation activity and the launching of the winds close to the disc occur similarly in all cases.
Larger stochastic shifts are visible further away from the galaxy (bottom rows, middle and right columns), but better resolving the cooling length (right column) always sharply increases mass and energy loading factors compared to the fiducial case (middle column). In fact, stochasticity probed in our fiducial setup is incapable of reproducing the high loading factors obtained at higher resolutions (left and right columns), making it a sub-dominant effect compared to the physical gain of better resolving the multiphase structure of outflows. Although all simulations in the main suite form a similar amount of stars overall (Figure 3), their instantaneous star formation rate and their ability to drive powerful outflows at a given time also depends on more stochastic parameters, such as the instantaneous ISM conditions and clustering of SNe. In this Appendix, we perform additional simulations to understand the level of variance from different star formation realizations, and confirm that results in Section 3 are robust to this stochasticity. To this end, we first re-simulate the base fiducial setup (i.e. only with the quasi-Lagrangian refinement scheme) three times, changing only the random seed of star formation to obtain a different evolution at fixed physical setup. We then repeat this exercise with refinement on the cooling length down to 36 pc. Figure 12 then compares their distribution of star formation rates (top panels), mass and energy outflow rates at 1 kpc (middle rows), and mass and energy outflow rates at 5 kpc (bottom rows) as defined in Section 3.3. As in Section 3.3, we remove inflows from the presented statistics which only occur for the fiducial setups, making trends when comparing to higher resolution runs more conservative. Focussing first on each suite of three re-simulations (middle and right columns), we observe visible shifts in the median star formation, mass and energy outflow rates that confirm the presence of stochasticity at fixed numerical setup. Nonetheless, star formation rates have well overlapping 16-84 confidence intervals (black lines) both within each individual suite (top middle or top right), across the two stochastic re-simulation suites (top middle and right), and across the resolution study (top left). This further supports that star formation is regulated similarly across all simulations (see also Appendix A). Mass and energy outflow rates close to the disc ($|z|=1\,\mathrm{kpc}$; second and third row) are also very stable against stochasticity, with both re- simulations suites (middle panels, middle and right columns) displaying strongly overlapping 16-84 confidence intervals that also overlap with the resolution study (middle, left). Away from the disc ($r=5\,\mathrm{kpc}$, bottom rows), mass and energy outflow rates show larger differences between stochastic re-simulations (yellow and red), whether in the fiducial case or when refining on the cooling length (middle and right columns, respectively). We also note that higher average star formation rates (e.g. top, yellow) does not necessarily translate to more vigorous outflows on average (bottom, yellow), reflecting the complexity of sustaining galactic winds. Despite these larger shifts, the stochasticity probed in our fiducial setup (middle column) is incapable of reproducing the sharp increase in outflow loading factors at 5 kpc observed when increasing numerical resolution out-of-the-plane (left and right columns). 
Even our most vigorous fiducial case (middle column, red) has mass and energy loading factors a factor of $\approx 3$ short of the quietest run at 36 pc (right column, yellow) and is even further away from other resolved runs (left). We thus conclude that stochasticity at a fixed numerical setup impacts quantitative determinations of outflow loading factors, but that in our case, stochastic differences are sub-dominant to the physical gain of better resolving the multiphase structure of the outflow. In particular, even fairly limited improvements (e.g. violet in left column) bring a significant gain in loading factors that already outweighs stochasticity without drastically impacting the computational cost (Section 5.1).
# Multi-Armed Bandits in Brain-Computer Interfaces Frida Heskebeck 1,∗, Carolina Bergeling 2 and Bo Bernhardsson 1 Frida Heskebeck Department of Automatic Control, Lund University, Lund, Sweden <EMAIL_ADDRESS>Carolina Bergeling Department of Mathematics and Natural Sciences, Blekinge Tekniska Högskola, Karlskrona, Sweden Bo Bernhardsson Department of Automatic Control, Lund University, Lund, Sweden ###### Abstract The multi-armed bandit (MAB) problem models a decision-maker that optimizes its actions based on current and acquired new knowledge to maximize its reward. This type of online decision is prominent in many procedures of Brain- Computer Interfaces (BCIs) and MAB has previously been used to investigate, e.g., what mental commands to use to optimize BCI performance. However, MAB optimization in the context of BCI is still relatively unexplored, even though it has the potential to improve BCI performance during both calibration and real-time implementation. Therefore, this review aims to further introduce MABs to the BCI community. The review includes a background on MAB problems and standard solution methods, and interpretations related to BCI systems. Moreover, it includes state-of-the-art concepts of MAB in BCI and suggestions for future research. ## Keywords: Multi-Armed Bandits, Brain-Computer Interfaces, Reinforcement Learning, Calibration, Real-Time Optimization Paper under review, year 2022. ## 1 Introduction The multi-armed bandit (MAB) problem, introduced by Robbins in 1952 [Robbins, 1952], models an agent (decision-maker) that wishes to optimize its actions with the goal of maximizing the expected reward from these actions. The agent must decide between multiple competing actions based on only partial knowledge of their expected rewards and only gains new knowledge after an action is taken. In other words, the agent has to _explore_ the action-space before it has enough knowledge to start to _exploit_ the learned best action. This procedure is known as the exploration vs. exploitation trade-off, which is nowadays often recognized from reinforcement learning. The MAB problem is, in fact, one of the simplest forms of reinforcement learning [Sutton and Barto, 2018] and has been applied to many different fields of research, such as healthcare, finance, and recommender systems [Bouneffouf and Rish, 2019]. The name ‘multi-armed bandit’ comes from a casino setting, where the agent, at each round, chooses one out of a given number of slot machines to play, with the objective to maximize the total payoff over time [Sutton and Barto, 2018]. The aim of this paper is to review the MAB framework in the context of Brain- Computer Interfaces (BCIs). The exploration vs. exploitation tradeoff exists naturally within the procedures of BCI systems, such as deciding which data or paradigm to utilize for a particular task. It is especially so in the online setting, where properties of different choices might only be partially known but become better understood as more data is gathered. It is assumed that the reader is familiar with the BCI-field and we refer to, e.g., [Nam et al., 2018] or [Nicolas-Alonso and Gomez-Gil, 2012] for any of the BCI-related nomenclature used in the paper. In Section 2, MABs are introduced as well as the algorithms often used to solve them. Section 3 highlights existing examples of MABs in the context of BCIs, while Section 4 provides suggestions for future research. 
Finally, some MAB programming packages are listed in Section 5 and the paper is concluded in Section 6. ## 2 Multi-Armed Bandits theory - a crash course ### 2.1 The MAB problem formulation The MAB problem is described as follows: at each time instant $t$, an agent chooses an action $a_{t}$ out of $K$ possible actions and receives a reward $r_{a_{t}}$. In a BCI setting, MABs could be used to optimize calibration data collection for motor imagery (MI) experiments as in [Fruitet et al., 2012]. Then, $t$ corresponds to the next time for data collection, $K$ to the available MI classes, $a_{t}$ to the class for the next data collection, and $r_{a_{t}}$ to the increase in classification accuracy when retraining the classifier with the newly gathered data. The reward for each action is not known beforehand. Moreover, the rewards are not deterministic, or fixed, but governed by some probability distribution. This means that the agent needs to perform an action $a$, often multiple times, in order to gain enough knowledge to accurately estimate or predict the reward $r_{a}$ [Sutton and Barto, 2018]. The aim in a MAB problem is to design a strategy, or policy, for the agent on how to choose the actions such that the total reward after $T$ time steps, i.e., $\sum_{t=1}^{T}r_{a_{t}}$, is maximized. The policy is based on the agent’s gathered knowledge from previous actions. The time horizon $T$, also called the agent’s budget, is always finite in practice. However, when theoretically analyzing MAB problems, an infinite time-horizon, $T\rightarrow\infty$, is often assumed [Burtini et al., 2015]. In the original MAB problem the rewards are stationary with a binary distribution; 1 or 0, win or lose, with a probability $\theta_{a}$ of a win [Robbins, 1952]. A beta distribution (see, e.g., [Faisal et al., 2020]) is often used to describe the distribution of $\theta_{a}$ (different actions have different beta distributions) [Scott, 2010]. An estimate of the probability to win with an action, $\hat{\theta}_{a}$, can for instance be computed as $\frac{\alpha_{a}}{\alpha_{a}+\beta_{a}}$, where $\alpha_{a}$ and $\beta_{a}$ are the number of wins and losses for that action, respectively. The certainty of the estimate increases with the number of samples. Another common assumption on the rewards’ distribution is Gaussianity; see [Faisal et al., 2020] for a definition. The reward can then take any value, not only 0 or 1. Each action has an unknown true mean $\mu$ for the reward and a standard deviation $\sigma$. Upon receiving a reward, the agent can update the estimated values $\hat{\mu}$ and $\hat{\sigma}$ [Sutton and Barto, 2018]. The MAB problem can be varied in multiple ways. For instance, the probability distributions of the rewards $r_{a_{t}}$ can be considered to be stationary or changing over time. The set of possible actions $K$ can be fixed or varying. The reward distributions could change depending on contextual information, and the policy of the agent need not be restricted to one action at a time. In Table 1 we illustrate some of the most common variants of MAB problems: the so-called original MAB problem, restless and switching bandits, mortal and sleeping bandits, contextual bandits as well as dueling bandits. Table 1: Overview of MAB variants and their characteristics compared to the original MAB problem.
Multi-Armed Bandit variant | Characteristic
---|---
Original MAB problem | Static reward and fixed set of actions
Restless and Switching bandits | Non-static reward
Mortal and Sleeping bandits | Set of available actions changes
Contextual bandits | Rewards change based on state of surrounding environment
Dueling bandits | Agent chooses two actions at each time step
### 2.2 Algorithms for solving MAB problems The aim for all algorithms, also called policies, is to balance the exploration vs. exploitation of the actions [Sutton and Barto, 2018]. Here, we present the most common algorithms, in the context of the original MAB problem formulation. There are two commonly used criteria when evaluating algorithms: 1. i) The _cumulative reward_, also called the gain, $G_{\phi}(T)$, is the total reward over the time horizon $T$, averaged over multiple trials, i.e., Equation 1, where $r_{a_{t}}$ is the reward at each time step using the policy $\phi$ [Mahajan and Teneketzis, 2008]. $G_{\phi}(T)=\mathbb{E}\left[\sum_{t=1}^{T}r_{a_{t}}\right]$ (1) 2. ii) The _regret_, $R_{\phi}(T)$, is the difference between the total reward for the best action and the agent’s total received reward over the time horizon $T$. In Equation 2, $r^{*}$ is the best achievable reward, i.e., the expected reward for the best action, and $r_{a_{t}}$ is the agent’s received reward at each time step using the policy $\phi$. The theoretical (upper) bounds on the regret, meaning the worst-case expected regret after $n$ plays, are often compared for different policies. If the regret bound is logarithmic, the policy eventually identifies the optimal action and plays sub-optimal actions only rarely. Analysis of the lower bounds on the regret shows the best case any policy can achieve when searching for the optimal action [Burtini et al., 2015]. $R_{\phi}(T)=Tr^{*}-\mathbb{E}\left[\sum_{t=1}^{T}r_{a_{t}}\right]$ (2) #### 2.2.1 Random policy In the random policy, the agent takes a random action at each time instant. This policy is often used as a baseline when comparing policies – a policy should not be worse than the random policy. #### 2.2.2 $\epsilon$-greedy policy The agent gets an initial estimate of each action’s reward by performing each action once. In a greedy policy, the agent always chooses the action with the highest estimated reward. This method only exploits and never explores after the initial phase. If the agent’s initial reward estimates are off, the policy can get stuck always choosing a non-optimal action, giving linear regret growth. In the $\epsilon$-greedy policy, on the other hand, the agent chooses the best action but with probability $\epsilon$ picks a random action [Sutton and Barto, 2018]. The occasional random action forces the agent to explore all actions, which helps the agent to better estimate the actions’ rewards so that it can exploit the best action. Using the $\epsilon$-greedy policy, the agent can eventually guess quite well which action is best, but will still take a random action with probability $\epsilon$. In that case, the random action will force the agent to act non-optimally, which is unwanted. Gradually decreasing $\epsilon$ over time reduces the probability of such non-optimal actions. Theoretically, such $\epsilon$-decreasing policies can be constructed which guarantee logarithmic bounds on regret [Burtini et al., 2015], which is a significant improvement over linear growth. Another variant of the $\epsilon$-greedy policy is the $\epsilon$-first policy.
It requires the time horizon $T$ (how many actions the agent will be allowed to take in total) to be known beforehand. The agent takes a random action for the first $\epsilon T$ time steps and picks the action with the highest estimated reward for the remaining $(1-\epsilon)T$ steps. This policy has been shown to be superior to the $\epsilon$-greedy policy when the time horizon is known and the rewards are stationary [Burtini et al., 2015]. #### 2.2.3 Upper Confidence Bound (UCB) In the Upper Confidence Bound (UCB) algorithm, the agent looks at the estimated reward plus an extra margin based on the uncertainty of the reward’s estimate. The extra margin is calculated from the total number of actions that have been taken and the number of times that particular action has been taken. The algorithm for the next action $a_{t}$ is mathematically described by Equation 3, where $\hat{r}_{a}$ is the estimated reward for that action, $t$ is the current time step, $n_{a}$ is the number of times the action has been taken and $c>0$ is a parameter [Sutton and Barto, 2018]. $a_{t}=\operatorname*{arg\,max}_{a}\left[\hat{r}_{a}+c\sqrt{\frac{\ln{t}}{n_{a}}}\right]$ (3) The UCB algorithm makes no assumption on the distribution of the rewards, and its regret is logarithmically bounded, as proven in [Auer et al., 2002]. There are many variants of the UCB algorithm that cope with non-stationary rewards or contextual information, such as LinUCB, Adapt-Eve, DiscountedUCB, and SlidingWindowUCB [Burtini et al., 2015]. #### 2.2.4 Thompson sampling Thompson sampling, also called probability matching, is an algorithm for MABs with binary rewards. The idea is to match the probability of choosing an action to its probability of being the best action. Mathematically this is quite involved, but in practice it means that the agent samples an estimated reward, $\hat{\theta}_{a}$, from each action’s beta distribution and chooses the action with the highest such $\hat{\theta}_{a}$. The theoretical regret bound is logarithmic [Burtini et al., 2015]. ## 3 Current use of Multi-Armed Bandits in Brain-Computer Interfaces To our knowledge, there is limited use of multi-armed bandits in BCI systems today. We have found two main applications, described in the following subsections. ### 3.1 One button BCI – Improving Calibration [Fruitet et al., 2012] have a BCI system with one button that the user can press via Motor Imagery (MI) movements, e.g., imagining moving the right hand [Pfurtscheller and Neuper, 2010]. Different motor imagery tasks are optimal for different users and might also differ between sessions. Fruitet et al. aim to improve the calibration of such systems by focusing data collection on MI tasks with high performance rather than collecting data for all MI tasks, as in uniform calibration. In their MAB problem formulation, the set of actions $K$ corresponds to the available MI tasks, the time-horizon $T$ to the total number of data samples to collect, the action $a_{t}$ to the MI task of the following data sample to collect, and the reward $r_{a_{t}}$ to the classification rate of the corresponding MI task. The goal for MAB problems is to maximize the total reward, while the goal for Fruitet et al. is to maximize the classification rate of the optimal MI task. Despite the slight goal difference, the exploration vs. exploitation trade-off is the same, and Fruitet et al. have based their algorithm on the UCB algorithm. They report higher classification rates with their algorithm than with the uniform calibration approach; a schematic sketch of such a UCB-driven calibration loop is shown below.
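The following self-contained Python sketch mimics a UCB-driven calibration loop of this kind, using the selection rule from Equation 3. It is not the implementation of [Fruitet et al., 2012]: the number of MI tasks, their mean accuracy gains, and the exploration constant are made-up placeholder values, and the reward is simulated rather than obtained by retraining a classifier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the true (unknown) reward process: the gain in classification
# accuracy when one more calibration trial of MI task `a` is collected.
# The per-task means below are made-up numbers, purely for illustration.
TRUE_MEAN_GAIN = np.array([0.02, 0.05, 0.01, 0.03])

def simulate_accuracy_gain(a):
    return TRUE_MEAN_GAIN[a] + 0.01 * rng.standard_normal()

def ucb_select(est_reward, counts, t, c=0.05):
    """UCB rule from Equation 3: estimated reward plus an exploration margin."""
    margin = c * np.sqrt(np.log(t) / np.maximum(counts, 1))
    scores = np.where(counts == 0, np.inf, est_reward + margin)  # try untried tasks first
    return int(np.argmax(scores))

n_tasks = len(TRUE_MEAN_GAIN)   # e.g. right hand, left hand, feet, tongue
est_reward = np.zeros(n_tasks)  # running mean reward per MI task
counts = np.zeros(n_tasks)      # how many calibration trials each task has received

for t in range(1, 201):         # calibration budget T = 200 trials
    a = ucb_select(est_reward, counts, t)
    r = simulate_accuracy_gain(a)
    counts[a] += 1
    est_reward[a] += (r - est_reward[a]) / counts[a]  # incremental mean update

print("trials per MI task:", counts)
```

With these made-up numbers, most of the collection budget ends up on the task with the highest simulated gain, which is the behaviour that makes the approach attractive for calibration.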
In a follow-up paper, [Fruitet et al., 2013], they try their algorithm in an online setting and show it to be more efficient than the uniform calibration approach, confirming the findings of the first paper. ### 3.2 Multi-Armed Bandits in P300 spellers – Real-time Implementations In a P300 speller, the letters are arranged in a grid, and a P300 signal is elicited when the row/column with the target letter is highlighted [Rezeika et al., 2018, Riggins and Scott, 2020]. In the paper [Ma et al., 2021], Thompson sampling is used to shorten the time for finding the target letter by reducing the number of non-target row/column highlights. In their MAB problem formulation, the set of actions $K$ corresponds to the available stimuli groups of letters to highlight, the action $a_{t}$ to the next group, and the reward $r_{a_{t}}$ (0 or 1) to whether the selected group contained the target letter or not. The win probability of each action is modeled with a beta distribution, where $\hat{\theta}_{a}$ represents the estimated probability that the action’s corresponding stimuli group contains the target letter. Their algorithm selects and evaluates multiple actions in each iteration, in contrast to classical MAB algorithms that select one action at each step. They use a pre-defined stopping criterion rather than a fixed time-horizon $T$. They conclude that the use of MABs improves the performance of the BCI system. There are multiple variants of MABs in P300 spellers, e.g., [Koçanaoğulları et al., 2018] and [Guo and Huang, 2021]. The MAB problem formulation in [Koçanaoğulları et al., 2018] is similar to [Ma et al., 2021] (above), but Koçanaoğulları et al. additionally include language models as a priori information for the MAB algorithm. In [Guo and Huang, 2021], the agent uses a variant of the UCB algorithm which interprets EEG signals as contextual information when choosing actions. Only two actions with a binary reward $r_{a_{t}}$ are available at each time step (the set $K$ consists of two actions), representing whether the EEG signal had a P300 component or not. ## 4 Discussion of Future use of Multi-Armed Bandits in Brain-Computer Interfaces There are many possible uses for multi-armed bandits in BCI systems. Here, we present some directions for future research. ### 4.1 Attention selection For users of hearing aids, it would be preferable, especially in a noisy environment, if the hearing aids amplified only the attended sound source. Deciphering a particular sound source from a combination of sources is often referred to as the cocktail party problem. For the case of BCIs, there is research on extracting the target source based on EEG measurements of the user’s brain activity, facilitating so-called attention steering in hearing aids [Alickovic et al., 2019]. In a MAB formulation, each sound source corresponds to one action for the hearing aids. The reward for each action could be measured from the EEG data as error potentials (ErrP) [Abiri et al., 2019], or from the overall mental state [Krol et al., 2018]. The problem can be formulated in a few different ways based on different assumptions: 1. i) Within a limited time, the surrounding sound sources are the same, and the user keeps the same interest in them. Hence, the reward for each action is stationary, analogous to the original MAB formulation. 2. ii) The user can change their preferred sound source at any time, which can be modeled with non-stationary rewards, such as a switching bandit formulation.
One can assume, as in [Hartland et al., 2007], that it is only the best action that has a change in the reward, which means that the user can only lose interest in the target source, rather than hearing something else that gains their interest. 3. iii) Another approach would be to assume that sound sources can appear and disappear more or less randomly, which could be viewed as a mortal bandit problem as in [Chakrabarti et al., 2008]. ### 4.2 Data for transfer learning A problem for BCI systems is the long calibration time due to the need for diverse data. Using data from previous sessions or persons and using transfer learning to adapt the old data to the current session is one solution [Lotte, 2015]. To find relevant data, one can, among other approaches, use tensor decomposition [Jeng et al., 2021], Riemannian geometry [Khazem et al., 2021], or a generic machine learning model [Jin et al., 2020]. In [Gutiérrez et al., 2017], the classic MAB problem is used to find clusters of data in a big data set, which increases the classification accuracy. The set of actions $K$ corresponds to the clusters, and the reward $r_{a_{t}}$ mirrors the classification accuracy when using training data from the selected cluster. The “one button BCI” described in Section 3.1 aims at finding clusters of good data in an online calibration setting. Here, the clusters are instead used to find suitable data for transfer learning. The MAB problem formulation is similar in both cases despite the difference in application. ### 4.3 Optimal calibration data Another solution to the calibration-time problem [Lotte, 2015], besides transfer learning, is to collect calibration data cleverly. Instead of collecting from all classes, as in uniform calibration, data could be collected from the class that would improve the classification accuracy the most. Finding the optimal class could be formulated as a MAB problem where the set of actions $K$ represents the available classes, and the reward $r_{a_{t}}$ the gain in classification accuracy. Non-stationary rewards are a challenge in this setup since they will change with the current classification performance. Compared to the “one button BCI” described in Section 3.1, the aim here is to have a “multi-button BCI system” using all classes for control, while the “one button BCI” system aims to find a single optimal class and solely use that one for control [Fruitet et al., 2012, Fruitet et al., 2013]. ### 4.4 Best stopping time Another interesting aspect of the calibration phase raised in [Fruitet et al., 2012] is to find the best stopping time. There are two variants of this formulation: 1. i) The BCI system gets a limited time horizon $T$ of data samples to collect during the calibration and should reach a classification accuracy as high as possible by wisely collecting the data. 2. ii) Assuming instead that there is a level of ‘good enough’ classification accuracy, the multi-armed bandit formulation can be used to minimize regret or decrease the time $T$ for reaching the required accuracy level. It is not evident that the multi-armed bandit formulation is the best way to solve either of these two problems, but it is worth considering. ## 5 Getting started with multi-armed bandits For most popular programming languages one can find examples of MABs [Github, 2022]. Among the ready-to-use packages are: “SMPyBandits” for Python [Besson, 2018], “Bandits” for Julia [Celles et al., 2020], and “Contextual” for R [van Emden and Kruijswijk, 2020].
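As an even smaller starting point, independent of the packages above, the following self-contained snippet runs Thompson sampling (Section 2.2.4) on a toy binary-reward bandit. The win probabilities are made-up values used only for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up true win probabilities for K = 3 actions (unknown to the agent).
theta_true = np.array([0.45, 0.55, 0.60])
K = len(theta_true)

# Beta(1, 1) priors: alpha tracks wins and beta tracks losses (plus one each).
alpha = np.ones(K)
beta = np.ones(K)

T = 1000
for t in range(T):
    # Sample a plausible win probability per action and play the apparent best.
    theta_hat = rng.beta(alpha, beta)
    a = int(np.argmax(theta_hat))
    reward = rng.random() < theta_true[a]  # 1 with probability theta_true[a]
    alpha[a] += reward
    beta[a] += 1 - reward

print("posterior mean per action:", alpha / (alpha + beta))
print("plays per action:", alpha + beta - 2)
```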
None of these packages are aimed at MABs in BCIs. Hence, we provide a brief Python example on BCI data that can act as a starting point for other researchers: gitlab.control.lth.se/FridaH/mab_for_bci-public. ## 6 Conclusion Multi-armed bandits (MABs) have been used successfully in many fields, yet few applications for Brain-Computer Interfaces (BCIs) exist. Firstly, this review introduces MABs to the BCI field. Common algorithms to solve the classic MAB problem with stationary rewards include the $\epsilon$-greedy policy, the UCB algorithm, and Thompson sampling, all with the aim to balance the trade-off between exploration and exploitation of available actions. Secondly, the review highlights current research that interprets and solves BCI problems as MAB problems, prominently occurring in calibration optimization and real-time implementations of BCI systems. Finally, some suggestions are provided on further research directions in the intersection of MABs and BCIs. ## Conflict of Interest Statement The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be interpreted as a potential conflict of interest. ## Author Contributions All authors have contributed to the conceptualization of the manuscript, manuscript revision, read, and approved the submitted version. FH wrote the first draft of the manuscript. ## Funding This work was partially supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation. All authors are also members of the ELLIIT Strategic Research Area. ## References * [Abiri et al., 2019] Abiri, R., Borhani, S., Sellers, E. W., Jiang, Y., and Zhao, X. (2019). A comprehensive review of EEG-based brain-computer interface paradigms. Journal of Neural Engineering, 16(1):011001. * [Alickovic et al., 2019] Alickovic, E., Lunner, T., Gustafsson, F., and Ljung, L. (2019). A Tutorial on Auditory Attention Identification Methods. Frontiers in Neuroscience, 13. * [Auer et al., 2002] Auer, P., Cesa-Bianchi, N., and Fischer, P. (2002). Finite-time Analysis of the Multiarmed Bandit Problem. Machine Learning, 47(2):235–256. * [Besson, 2018] Besson, L. (2018). SMPyBandits: An open-source research framework for single and multi-players multi-arms bandits (MAB) algorithms in python. Online at: GitHub.com/SMPyBandits/SMPyBandits. * [Bouneffouf and Rish, 2019] Bouneffouf, D. and Rish, I. (2019). A Survey on Practical Applications of Multi-Armed and Contextual Bandits. arXiv:1904.10040 [cs, stat]. * [Burtini et al., 2015] Burtini, G., Loeppky, J., and Lawrence, R. (2015). A Survey of Online Experiment Design with the Stochastic Multi-Armed Bandit. arXiv:1510.00757 [cs, stat]. * [Celles et al., 2020] Celles, S., Squire, K., and Aridor, G. (2020). Bandits. https://github.com/rawls238/Bandits.jl. * [Chakrabarti et al., 2008] Chakrabarti, D., Kumar, R., Radlinski, F., and Upfal, E. (2008). Mortal multi-armed bandits. In Proceedings of the 21st International Conference on Neural Information Processing Systems, NIPS’08, pages 273–280, Red Hook, NY, USA. Curran Associates Inc. * [Faisal et al., 2020] Faisal, A. A., Deisenroth, M. P., and Ong, C. S. (2020). Probability and Distributions. In Mathematics for Machine Learning, pages 172–224. Cambridge University Press. * [Fruitet et al., 2012] Fruitet, J., Carpentier, A., Munos, R., and Clerc, M. (2012). Bandit Algorithms boost Brain Computer Interfaces for motor-task selection of a brain-controlled button. 
In Bartlett, P., Pereira, F. C. N., Burges, C. J. C., Bottou, L., and Weinberger, K. Q., editors, Advances in Neural Information Processing Systems, volume 25, pages 458–466, Lake Tahoe, Nevada, United States. Neural Information Processing Systems (NIPS) Foundation. * [Fruitet et al., 2013] Fruitet, J., Carpentier, A., Munos, R., and Clerc, M. (2013). Automatic motor task selection via a bandit algorithm for a brain-controlled button. Journal of Neural Engineering, 10(1):016012. * [Github, 2022] Github (2022). Multi-armed-bandit. https://github.com/topics/multi-armed-bandit. * [Guo and Huang, 2021] Guo, J. and Huang, Z. (2021). A calibration-free P300 BCI system using an on-line updating classifier based on reinforcement learning. In 2021 14th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI), pages 1–5. * [Gutiérrez et al., 2017] Gutiérrez, B., Peter, L., Klein, T., and Wachinger, C. (2017). A Multi-armed Bandit to Smartly Select a Training Set from Big Medical Data. In Descoteaux, M., Maier-Hein, L., Franz, A., Jannin, P., Collins, D. L., and Duchesne, S., editors, Medical Image Computing and Computer Assisted Intervention - MICCAI 2017, Lecture Notes in Computer Science, pages 38–45, Cham. Springer International Publishing. * [Hartland et al., 2007] Hartland, C., Baskiotis, N., Gelly, S., Sebag, M., and Teytaud, O. (2007). Change Point Detection and Meta-Bandits for Online Learning in Dynamic Environments. In CAp 2007 : 9è Conférence Francophone Sur l’apprentissage Automatique, page 237. * [Jeng et al., 2021] Jeng, P.-Y., Wei, C.-S., Jung, T.-P., and Wang, L.-C. (2021). Low-Dimensional Subject Representation-Based Transfer Learning in EEG Decoding. IEEE Journal of Biomedical and Health Informatics, 25(6):1915–1925. * [Jin et al., 2020] Jin, J., Li, S., Daly, I., Miao, Y., Liu, C., Wang, X., and Cichocki, A. (2020). The Study of Generic Model Set for Reducing Calibration Time in P300-Based Brain–Computer Interface. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 28(1):3–12. * [Khazem et al., 2021] Khazem, S., Chevallier, S., Barthélemy, Q., Haroun, K., and Noûs, C. (2021). Minimizing Subject-dependent Calibration for BCI with Riemannian Transfer Learning. In 2021 10th International IEEE/EMBS Conference on Neural Engineering (NER), pages 523–526. * [Koçanaoğulları et al., 2018] Koçanaoğulları, A., Marghi, Y. M., Akçakaya, M., and Erdoğmuş, D. (2018). Optimal Query Selection Using Multi-Armed Bandits. IEEE Signal Processing Letters, 25(12):1870–1874. * [Krol et al., 2018] Krol, L. R., Andreesen, L. M., and Zander, T. O. (2018). Passive Brain-Computer Interfaces: A Perspective on Increased Interactivity. In Nam, C. S., Nijholt, A., and Lotte, F., editors, Brain–Computer Interfaces Handbook: Technological and Theoretical Advances, pages 70–86. CRC Press, New York. * [Lotte, 2015] Lotte, F. (2015). Signal Processing Approaches to Minimize or Suppress Calibration Time in Oscillatory Activity-Based Brain–Computer Interfaces. Proceedings of the IEEE, 103(6):871–890. * [Ma et al., 2021] Ma, T., Huggins, J. E., and Kang, J. (2021). Adaptive Sequence-Based Stimulus Selection in an ERP-Based Brain-Computer Interface by Thompson Sampling in a Multi-Armed Bandit Problem. In 2021 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), pages 3648–3655. * [Mahajan and Teneketzis, 2008] Mahajan, A. and Teneketzis, D. (2008). Multi-Armed Bandit Problems. In Hero, A. O., Castañón, D. 
A., Cochran, D., and Kastella, K., editors, Foundations and Applications of Sensor Management, pages 121–151. Springer US, Boston, MA. * [Nam et al., 2018] Nam, C. S., Choi, I., Wadeson, A., and Whang, M. (2018). Brain–Computer Interface: An Emerging Interaction Technology. In Nam, C. S., Nijholt, A., and Lotte, F., editors, Brain–Computer Interfaces Handbook: Technological and Theoretical Advances, pages 12–52. CRC Press, New York. * [Nicolas-Alonso and Gomez-Gil, 2012] Nicolas-Alonso, L. F. and Gomez-Gil, J. (2012). Brain Computer Interfaces, a Review. Sensors (Basel, Switzerland), 12(2):1211–1279. * [Pfurtscheller and Neuper, 2010] Pfurtscheller, G. and Neuper, C. (2010). Dynamics of Sensorimotor Oscillations in a Motor Task. In Graimann, B., Pfurtscheller, G., and Allison, B., editors, Brain-Computer Interfaces: Revolutionizing Human-Computer Interaction, The Frontiers Collection, pages 47–64. Springer, Berlin, Heidelberg. * [Rezeika et al., 2018] Rezeika, A., Benda, M., Stawicki, P., Gembler, F., Saboor, A., and Volosyak, I. (2018). Brain–Computer Interface Spellers: A Review. Brain Sciences, 8(4):57. * [Riggins and Scott, 2020] Riggins, T. and Scott, L. S. (2020). P300 development from infancy to adolescence. Psychophysiology, 57(7):e13346. * [Robbins, 1952] Robbins, H. (1952). Some aspects of the sequential design of experiments. Bulletin of the American Mathematical Society, 58(5):527–535. * [Scott, 2010] Scott, S. L. (2010). A modern Bayesian look at the multi-armed bandit. Applied Stochastic Models in Business and Industry, 26(6):639–658. * [Sutton and Barto, 2018] Sutton, R. S. and Barto, A. G. (2018). Multi-armed Bandits. In Reinforcement Learning, Second Edition: An Introduction, pages 46–64. MIT Press. * [van Emden and Kruijswijk, 2020] van Emden, R. and Kruijswijk, J. (2020). Contextual: Multi-Armed Bandits in R. https://github.com/Nth-iteration-labs/contextual.
# Putting NeRF on a Diet: Semantically Consistent Few-Shot View Synthesis Ajay Jain UC Berkeley <EMAIL_ADDRESS>Matthew Tancik UC Berkeley <EMAIL_ADDRESS>Pieter Abbeel UC Berkeley <EMAIL_ADDRESS> ###### Abstract We present DietNeRF, a 3D neural scene representation estimated from a few images. Neural Radiance Fields (NeRF) learn a continuous volumetric representation of a scene through multi-view consistency, and can be rendered from novel viewpoints by ray casting. While NeRF has an impressive ability to reconstruct geometry and fine details given many images, up to 100 for challenging 360∘ scenes, it often finds a degenerate solution to its image reconstruction objective when only a few input views are available. To improve few-shot quality, we propose DietNeRF. We introduce an auxiliary semantic consistency loss that encourages realistic renderings at novel poses. DietNeRF is trained on individual scenes to (1) correctly render given input views from the same pose, and (2) match high-level semantic attributes across different, random poses. Our semantic loss allows us to supervise DietNeRF from arbitrary poses. We extract these semantics using a pre-trained visual encoder such as CLIP, a Vision Transformer trained on hundreds of millions of diverse single- view, 2D photographs mined from the web with natural language supervision. In experiments, DietNeRF improves the perceptual quality of few-shot view synthesis when learned from scratch, can render novel views with as few as one observed image when pre-trained on a multi-view dataset, and produces plausible completions of completely unobserved regions. Our project website is available at https://www.ajayj.com/dietnerf. ## 1 Introduction Figure 1: Neural Radiance Fields are trained to represent a scene by supervising renderings from the same pose as ground-truth observations ( MSE loss). However, when only a few views are available, the problem is underconstrained. NeRF often finds degenerate solutions unless heavily regularized. Based on the principle that “a bulldozer is a bulldozer from any perspective”, our proposed DietNeRF supervises the radiance field from arbitrary poses ( DietNeRF cameras). This is possible because we compute a semantic consistency loss in a feature space capturing high-level scene attributes, not in pixel space. We extract semantic representations of renderings using the CLIP Vision Transformer [33], then maximize similarity with representations of ground-truth views. In effect, we use prior knowledge about scene semantics learned by single-view 2D image encoders to constrain a 3D representation. Figure 2: Few-shot view synthesis is a challenging problem for Neural Radiance Fields. (A) When we have 100 observations of an object from uniformly sampled poses, NeRF estimates a detailed and accurate representation that allows for high-quality view synthesis purely from multi- view consistency. (B) However, with only 8 views, the same NeRF overfits by placing the object in the near-field of the training cameras, leading to misplaced objects at poses near training cameras and degeneracies at novel poses. (C) We find that NeRF can converge when regularized, simplified, tuned and manually reinitialized, but no longer captures fine details. (D) Finally, without prior knowledge about similar objects, single-scene view synthesis cannot plausibly complete unobserved regions, such as the left side of an object seen from the right. 
In this work, we find that these failures occur because NeRF is only supervised from the sparse training poses. In the novel view synthesis problem, we seek to rerender a scene from arbitrary viewpoint given a set of sparsely sampled viewpoints. View synthesis is a challenging problem that requires some degree of 3D reconstruction in addition to high-frequency texture synthesis. Recently, great progress has been made on high-quality view synthesis when many observations are available. A popular approach is to use Neural Radiance Fields (NeRF) [30] to estimate a continuous neural scene representation from image observations. During training on a particular scene, the representation is rendered from observed viewpoints using volumetric ray casting to compute a reconstruction loss. At test time, NeRF can be rendered from novel viewpoints by the same procedure. While conceptually very simple, NeRF can learn high-frequency view-dependent scene appearances and accurate geometries that allow for high-quality rendering. Still, NeRF is estimated per-scene, and cannot benefit from prior knowledge acquired from other images and objects. Because of the lack of prior knowledge, NeRF requires a large number of input views to reconstruct a given scene at high-quality. Given 8 views, Figure 2B shows that novel views rendered with the full NeRF model contain many artifacts because the optimization finds a degenerate solution that is only accurate at observed poses. We find that the core issue is that prior 3D reconstruction systems based on rendering losses are only supervised at known poses, so they overfit when few poses are observed. Regularizing NeRF by simplifying the architecture avoids the worst artifacts, but comes at the cost of fine-grained detail. Further, prior knowledge is needed when the scene reconstruction problem is underdetermined. 3D reconstruction systems struggle when regions of an object are never observed. This is particularly problematic when rendering an object at significantly different poses. When rendering a scene with an extreme baseline change, unobserved regions during training become visible. A view synthesis system should generate plausible missing details to fill in the gaps. Even a regularized NeRF learns poor extrapolations to unseen regions due to its lack of prior knowledge (Figure 2D). Recent work trained NeRF on multi-view datasets of similar scenes [52, 44, 38, 43, 49] to bias reconstructions of novel scenes. Unfortunately, these models often produce blurry images due to uncertainty, or are restricted to a single object category such as ShapeNet classes as it is challenging to capture large, diverse, multi-view data. In this work, we exploit the consistency principle that “a bulldozer is a bulldozer from any perspective”: objects share high-level semantic properties between their views. Image recognition models learn to extract many such high- level semantic features including object identity. We transfer prior knowledge from pre-trained image encoders learned on highly diverse 2D single-view image data to the view synthesis problem. In the single-view setting, such encoders are frequently trained on millions of realistic images like ImageNet [7]. CLIP is a recent multi-modal encoder that is trained to match images with captions in a massive web scrape containing 400M images [33]. Due to the diversity of its data, CLIP showed promising zero- and few-shot transfer performance to image recognition tasks. 
We find that CLIP and ImageNet models also contain prior knowledge useful for novel view synthesis. We propose DietNeRF, a neural scene representation based on NeRF that can be estimated from only a few photos, and can generate views with unobserved regions. In addition to minimizing NeRF’s mean squared error losses at known poses in pixel-space, DietNeRF penalizes a semantic consistency loss. This loss matches the final activations of CLIP’s Vision Transformer [9] between ground-truth images and rendered images at different poses, allowing us to supervise the radiance field from arbitrary poses. In experiments, we show that DietNeRF learns realistic reconstructions of objects with as few as 8 views without simplifying the underlying volumetric representation, and can even produce reasonable reconstructions of completely occluded regions. To generate novel views with as few as 1 observation, we fine-tune pixelNeRF [52], a generalizable scene representation, and improve perceptual quality. ## 2 Background on Neural Radiance Fields A plenoptic function, or light field, is a five-dimensional function that describes the light radiating from every point in every direction in a volume such as a bounded scene. While explicitly storing or estimating the plenoptic function at high resolution is impractical due to the dimensionality of the input, Neural Radiance Fields [30] parameterize the function with a continuous neural network such as a multi-layer perceptron (MLP). A Neural Radiance Field (NeRF) model is a five-dimensional function $f_{\theta}(\mathbf{x},\mathbf{d})=(\mathbf{c},\sigma)$ of spatial position $\mathbf{x}=(x,y,z)$ and viewing direction $(\theta,\phi)$, expressed as a 3D unit vector $\mathbf{d}$. NeRF predicts the RGB color $\mathbf{c}$ and differential volume density $\sigma$ from these inputs. To encourage view-consistency, the volume density only depends on $\mathbf{x}$, while the color also depends on viewing direction $\mathbf{d}$ to capture viewpoint-dependent effects like specular reflections. Images are rendered from a virtual camera at any position by integrating color along rays cast from the observer according to volume rendering [22]: $\mathbf{C}(\mathbf{r})=\int_{t_{n}}^{t_{f}}T(t)\sigma(\mathbf{r}(t))\mathbf{c}(\mathbf{r}(t),\mathbf{d})dt$ (1) where the ray originating at the camera origin $\mathbf{o}$ follows path $\mathbf{r}(t)=\mathbf{o}+t\mathbf{d}$, and the transmittance ${T(t)=\exp\left(-\int_{t_{n}}^{t}\sigma(\mathbf{r}(s))ds\right)}$ weights the radiance by the probability that the ray travels from the image plane at $t_{n}$ to $t$ unobstructed. To approximate the integral, NeRF employs a hierarchical sampling algorithm to select function evaluation points near object surfaces along each ray. NeRF separately estimates two MLPs, a coarse network and a fine network, and uses the coarse network to guide sampling along the ray for more accurately estimating (1). The networks are trained from scratch on each scene given tens to hundreds of photos from various perspectives. Given observed multi-view training images $\\{I_{i}\\}$ of a scene, NeRF uses COLMAP SfM [37] to estimate camera extrinsics (rotations and origins) $\\{\mathbf{p}_{i}\\}$, creating a posed dataset $\mathcal{D}=\\{(I_{i},\mathbf{p}_{i})\\}$. ## 3 NeRF Struggles at Few-Shot View Synthesis View synthesis is a challenging problem when a scene is only sparsely observed. Systems like NeRF that train on individual scenes especially struggle without prior knowledge acquired from similar scenes.
We find that NeRF fails at few-shot novel view synthesis in several settings. NeRF overfits to training views. Conceptually, NeRF is trained by mimicking the image-formation process at observed poses. The radiance field can be estimated by repeatedly sampling a training image and pose $(I,\mathbf{p}_{i})$, rendering an image $\hat{I}_{\mathbf{p}_{i}}$ from the same pose by volume integration (1), then minimizing the mean-squared error (MSE) between the images, which should align pixel-wise: $\mathcal{L}_{\text{full}}(I,\hat{I}_{\mathbf{p}_{i}})=\frac{1}{HW}\|I-\hat{I}_{\mathbf{p}_{i}}\|_{2}^{2}$ (2) In practice, NeRF samples a smaller batch of rays across all training images to avoid the computational expense of rendering full images during training. Given subsampled rays $\mathcal{R}$ cast from the training cameras, NeRF minimizes: $\mathcal{L}_{\text{MSE}}(\mathcal{R})=\frac{1}{|\mathcal{R}|}\sum_{\mathbf{r}\in\mathcal{R}}\|\mathbf{C}(\mathbf{r})-\hat{\mathbf{C}}(\mathbf{r})\|_{2}^{2}$ (3) With many training views, $\mathcal{L}_{\text{MSE}}$ provides training signal to $f_{\theta}$ densely in the volume and does not overfit to individual training views. Instead, the MLP recovers accurate textures and occupancy that allow interpolations to new views (Figure 2A). Radiance fields with sinusoidal positional embeddings are quite effective at learning high-frequency functions [43], which helps the MLP represent fine details. Unfortunately, this high-frequency representational capacity allows NeRF to overfit to each input view when only a few are available. $\mathcal{L}_{\text{MSE}}$ can be minimized by packing the reconstruction $\hat{I}_{\mathbf{p}}$ of training view $(I,\mathbf{p})$ close to the camera. Fundamentally, the plenoptic function representation suffers from a near-field ambiguity [53] where distant cameras each observe significant regions of space that no other camera observes. In this case, the optimal scene representation is underdetermined. Degenerate solutions can also exploit the view-dependence of the radiance field. Figure 2B shows novel views from the same NeRF trained on 8 views. While a rendered view from a pose near a training image has reasonable textures, it is skewed incorrectly and has cloudy artifacts from incorrect geometry. As the geometry is not estimated correctly, a distant view contains almost none of the correct information. High-opacity regions block the camera. Without supervision from any nearby camera, opacity is sensitive to random initialization. Regularization fixes geometry, but hurts fine detail. High-frequency artifacts such as spurious opacity and rapidly varying colors can be avoided in some cases by regularizing NeRF. We simplify the NeRF architecture by removing hierarchical sampling and learning only a single MLP, and by reducing the maximum frequency of the positional embedding in the input layer. This biases NeRF toward lower-frequency solutions, such as placing content in the center of the scene, farther from the training cameras. We can also address some few-shot optimization challenges by lowering the learning rate to improve initial convergence, and by manually restarting training if renderings are degenerate. Figure 2C shows that these regularizers successfully allow NeRF to recover plausible object geometry. However, high-frequency, fine details are lost compared to 2A.
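For concreteness, the sketch below shows how the rendering integral (1) is typically discretized into a quadrature along each ray and plugged into the ray-batch loss (3). It is a simplified stand-in rather than the released NeRF or DietNeRF code: there is no hierarchical sampling or positional encoding, and `toy_field` is a made-up placeholder for the MLP $f_{\theta}$.

```python
import numpy as np

def render_ray(field, o, d, t_near=2.0, t_far=6.0, n_samples=64):
    """Discretize the volume-rendering integral (1) along one ray r(t) = o + t d."""
    t = np.linspace(t_near, t_far, n_samples)
    pts = o + t[:, None] * d                    # (n_samples, 3) query points
    rgb, sigma = field(pts, d)                  # colors and densities at the samples
    delta = np.append(np.diff(t), 1e10)         # spacing between adjacent samples
    alpha = 1.0 - np.exp(-sigma * delta)        # opacity contributed by each segment
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # transmittance T
    weights = trans * alpha
    return (weights[:, None] * rgb).sum(axis=0)  # estimated pixel color C(r)

def mse_loss(field, rays_o, rays_d, target_rgb):
    """Ray-batch reconstruction loss, as in Eq. (3)."""
    pred = np.stack([render_ray(field, o, d) for o, d in zip(rays_o, rays_d)])
    return np.mean(np.sum((pred - target_rgb) ** 2, axis=-1))

def toy_field(pts, d):
    """Placeholder radiance field: a fuzzy gray sphere of unit radius at the origin."""
    r = np.linalg.norm(pts, axis=-1)
    sigma = 5.0 * (r < 1.0)
    rgb = np.full((pts.shape[0], 3), 0.5)
    return rgb, sigma

o = np.array([0.0, 0.0, -4.0])
d = np.array([0.0, 0.0, 1.0])
print(render_ray(toy_field, o, d))  # roughly [0.5, 0.5, 0.5]: the ray hits the sphere
```

In an actual implementation the same quadrature is written in a differentiable framework so that gradients of the loss flow back into the field's parameters.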
No prior knowledge, no generalization to unseen views As NeRF is estimated from scratch per-scene, it has no prior knowledge about natural objects such as common symmetries and object parts. In Figure 2D, we show that NeRF trained with 14 views of the right half of a Lego vehicle generalizes poorly to its left side. We regularized NeRF to remove high-opacity regions that originally blocked the left side entirely. Even so, the essential challenge is that NeRF receives no supervisory signal from $\mathcal{L}_{\text{MSE}}$ to the unobserved regions, and instead relies on the inductive bias of the MLP for any inpainting. We would like to introduce prior knowledge that allows NeRF to exploit bilateral symmetry for plausible completions. ## 4 Semantically Consistent Radiance Fields Motivated by these challenges, we introduce the DietNeRF scene representation. DietNeRF uses prior knowledge from a pre-trained image encoder to guide the NeRF optimization process in the few-shot setting. ### 4.1 Semantic consistency loss DietNeRF supervises $f_{\theta}$ at arbitrary camera poses during training with a semantic loss. While pixel-wise comparison between ground-truth observed images and rendered images with $\mathcal{L}_{\text{MSE}}$ is only useful when the rendered image is aligned with the observed pose, humans are easily able to detect whether two images are views of the same object from semantic cues. We can in general compare a representation of images captured from different viewpoints: $\mathcal{L}_{\text{SC},\ell_{2}}(I,\hat{I})=\frac{\lambda}{2}\|\phi(I)-\phi(\hat{I})\|_{2}^{2}$ (4) If $\phi(x)=x$, Eq. (4) reduces to $\mathcal{L}_{\text{full}}$ up to a scaling factor. However, the identity mapping is view-dependent. We need a representation that is similar across views of the same object and captures important high-level semantic properties like object class. We evaluate the utility of two sources of supervision for representation learning. First, we experiment with the recent CLIP model pre-trained for multi-modal language and vision reasoning with contrastive learning [33]. We then evaluate visual classifiers pre-trained on labeled ImageNet images [9]. In both cases, we use similar Vision Transformer (ViT) architectures. A Vision Transformer is appealing because its performance scales very well to large amounts of 2D data. Training on a large variety of images allows the network to encounter multiple views of an object class over the course of training without explicit multi-view data capture. It also allows us to transfer the visual encoder to diverse objects of interest in graphics applications, unlike prior class-specific reconstruction work that relies on homogeneous datasets [3, 23]. ViT extracts features from non-overlapping image patches in its first layer, then aggregates increasingly abstract representations with Transformer blocks based on global self-attention [48] to produce a single, global embedding vector. ViT outperformed CNN encoders in our early experiments. In practice, CLIP produces normalized image embeddings. When $\phi(\cdot)$ is a unit vector, Eq. (4) simplifies to cosine similarity up to a constant and a scaling factor that can be absorbed into the loss weight $\lambda$: $\mathcal{L}_{\text{SC}}(I,\hat{I})=\lambda\phi(I)^{T}\phi(\hat{I})$ (5) We refer to $\mathcal{L}_{\text{SC}}$ (5) as a semantic consistency loss because it measures the similarity of high-level semantic features between observed and rendered views. 
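In code, Eq. (5) amounts to a cosine similarity between unit-norm embeddings of an observed view and a rendering from a different pose. The sketch below is an illustrative, unofficial implementation on top of the open-source CLIP package (github.com/openai/CLIP): the loss weight, the 224x224 input resolution, and the (approximate) normalization constants are assumptions, and the similarity is negated so that minimizing the loss maximizes Eq. (5).

```python
import torch
import clip  # pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)
model.eval()
for p in model.parameters():          # the encoder phi is frozen; only the NeRF is trained
    p.requires_grad_(False)

def clip_embed(images):
    """Map RGB images in [0, 1], shape (N, 3, 224, 224), to unit-norm CLIP features."""
    mean = torch.tensor([0.481, 0.458, 0.408], device=images.device).view(1, 3, 1, 1)
    std = torch.tensor([0.269, 0.261, 0.276], device=images.device).view(1, 3, 1, 1)
    x = ((images - mean) / std).to(next(model.parameters()).dtype)
    feats = model.encode_image(x)
    return feats / feats.norm(dim=-1, keepdim=True)

def semantic_consistency_loss(target_img, rendered_img, lam=0.1):
    """Negative cosine similarity between embeddings of two views of the same scene.

    target_img is an observed training view (its embedding can be pre-computed);
    rendered_img comes from an arbitrary pose and must stay differentiable so the
    gradient reaches the radiance field. Low-resolution renderings would be resized
    to 224x224 before calling this function.
    """
    with torch.no_grad():
        target_emb = clip_embed(target_img)
    rendered_emb = clip_embed(rendered_img)
    return -lam * (target_emb * rendered_emb).sum(dim=-1).mean()
```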
In principle, semantic consistency is a very general loss that can be applied to any 3D reconstruction system based on differentiable rendering. Data: Observed views $\mathcal{D}=\\{(I,\mathbf{p})\\}$, semantic embedding function $\phi(\cdot)$, pose distribution $\pi$, consistency interval $K$, weight $\lambda$, rendering size, batch size $|\mathcal{R}|$, lr $\eta_{it}$ Result: Trained Neural Radiance Field $f_{\theta}(\cdot,\cdot)$ Initialize NeRF $f_{\theta}(\cdot,\cdot)$; Pre-compute target embeddings $\\{\phi(I):I\in\mathcal{D}\\}$; for _it from 1 to num_iters_ do Sample ray batch $\mathcal{R}$, ground-truth colors $\mathbf{C}(\cdot)$; Render rays $\hat{\mathbf{C}}(\cdot)$ by (1); $\mathcal{L}\leftarrow\mathcal{L}_{\text{MSE}}(\mathcal{R},\mathbf{C},\hat{\mathbf{C}})$; if _$\text{it}~{}\%~{}K=0$_ then Sample target image, pose $(I,\mathbf{p})\sim\mathcal{D}$; Sample source pose $\hat{\mathbf{p}}\sim\pi$; Render image $\hat{I}$ from pose $\hat{\mathbf{p}}$; $\mathcal{L}\leftarrow\mathcal{L}+\mathcal{L}_{\text{SC}}(I,\hat{I})$; end if Update parameters: $\theta\leftarrow Adam(\theta,\eta_{it},\nabla_{\theta}\mathcal{L})$; end for Algorithm 1 Training DietNeRF on a single scene ### 4.2 Interpreting representations across views The pre-trained CLIP model that we use is trained on hundreds of millions of images with captions of varying detail. Image captions provide rich supervision for image representations. On one hand, short captions express semantically sparse learning signal as a flexible way to express labels [8]. For example, the caption “A photo of hotdogs” describes Fig. 2A. Language also provides semantically dense learning signal by describing object properties, relationships and appearances [8] such as the caption “Two hotdogs on a plate with ketchup and mustard”. To be predictive of such captions, an image representation must capture some high-level semantics that are stable across viewpoints. Concurrently, [12] found that CLIP representations capture visual attributes of images like art style and colors, as well as high-level semantic attributes including object tags and categories, facial expressions, typography, geography and brands. In Figure 3, we measure the pairwise cosine similarity between CLIP representations of views circling an object. We find that pairs of views have highly similar CLIP representations, even for diametrically opposing cameras. This suggests that large, diverse single-view datasets can induce useful representations for multi-view applications. ### 4.3 Pose sampling distribution We augment the NeRF training loop with $\mathcal{L}_{\text{SC}}$ minimization. Each iteration, we compute $\mathcal{L}_{\text{SC}}$ between a random training image sampled from the observation dataset $I\sim\mathcal{D}$ and rendered image $\hat{I}_{\mathbf{p}}$ from random pose $\mathbf{p}\sim\pi$. For bounded scenes like NeRF’s Realistic Synthetic scenes where we are interested in 360∘ view synthesis, we define the pose sampling distribution $\pi$ to be a uniform distribution over the upper hemisphere, with radius sampled uniformly in a bounded range. For unbounded forward-facing scenes or scenes where a pose sampling distribution is difficult to define, we interpolate between three randomly sampled known poses $\mathbf{p}_{1},\mathbf{p}_{2},\mathbf{p}_{3}\sim\mathcal{D}$ with pairwise interpolation weights $\alpha_{1},\alpha_{2}\sim\mathcal{U}(0,1)$. ### 4.4 Improving efficiency and quality Volume rendering is computationally intensive. 
Computing a pixel’s color evaluates NeRF’s MLP $f_{\theta}$ at many points along a ray. To improve the efficiency of DietNeRF during training, we render images for semantic consistency at low resolution, requiring only 15-20% of the rays as a full resolution training image. Rays are sampled on a strided grid across the full extent of the image plane, ensuring that objects are mostly visible in each rendering. We found that sampling poses from a continuous distribution was helpful to avoid aliasing artifacts when training at a low resolution. In experiments, we found that $\mathcal{L}_{\text{SC}}$ converges faster than $\mathcal{L}_{\text{MSE}}$ for many scenes. We hypothesize that the semantic consistency loss encourages DietNeRF to recover plausible scene geometry early in training, but is less helpful for reconstructing fine-grained details due to the relatively low dimensionality of the ViT representation $\phi(\cdot)$. We exploit the rapid convergence of $\mathcal{L}_{\text{SC}}$ by only minimizing $\mathcal{L}_{\text{SC}}$ every $k$ iterations. DietNeRF is robust to the choice of $k$, but a value between 10 and 16 worked well in our experiments. StyleGAN2 [24] used a similar strategy for efficiency, referring to periodic application of a loss as lazy regularization. As backpropagation through rendering is memory intensive with reverse-mode automatic differentiation, we render images for $\mathcal{L}_{\text{SC}}$ with mixed precision computation and evaluate $\phi(\cdot)$ at half-precision. We delete intermediate MLP activations during rendering and rematerialize them during the backward pass [6, 19]. All experiments use a single 16 GB NVIDIA V100 or 11 GB 2080 Ti GPU. Since $\mathcal{L}_{\text{SC}}$ converges before $\mathcal{L}_{\text{MSE}}$, we found it helpful to fine-tune DietNeRF with $\mathcal{L}_{\text{MSE}}$ alone for 20-70k iterations to refine details. Alg. 1 details our overall training process. Figure 3: CLIP’s Vision Transformer learns low-dimensional image representations through language supervision. We find that these representations transfer well to multi-view 3D settings. We sample pairs of ground-truth views of the same scene and of different scenes from NeRF’s Realistic Synthetic object dataset, then compute a histogram of representation cosine similarity. Even though camera poses vary dramatically (views are sampled from the upper hemisphere), views within a scene have similar representations (green). Across scenes, representations have low similarity (red) ## 5 Experiments In experiments, we evaluate the quality of novel views synthesized by DietNeRF and baselines for both synthetically rendered objects and real photos of multi-object scenes. (1) We evaluate training from scratch on a specific scene with 8 views §5.1. (2) We show that DietNeRF improves perceptual quality of view synthesis from only a single real photo §5.2. (3) We find that DietNeRF can reconstruct regions that are never observed §5.3, and finally (4) run ablations §6. Datasets The Realistic Synthetic benchmark of [29] includes detailed multi- view renderings of 8 realistic objects with view-dependent light transport effects. We also benchmark on the DTU multi-view stereo (MVS) dataset [20] used by pixelNeRF [52]. DTU is a challenging dataset that includes sparsely sampled real photos of physical objects. 
Low-level full reference metrics Past work evaluates novel view quality with respect to ground-truth from the same pose with Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) [41]. PSNR expresses mean-squared error in log space. However, SSIM often disagrees with human judgements of similarity [54]. Perceptual metrics Deep CNN activations mirror aspects of human perception. NeRF measures perceptual image quality using LPIPS [54], which computes MSE between normalized features from all layers of a pre-trained VGG encoder [39]. Generative models also measure sample quality with feature space distances. The Fréchet Inception Distance (FID) [15] computes the Fréchet distance between Gaussian estimates of penultimate Inception v3 [42] features for real and fake images. However, FID is a biased metric at low sample sizes. We adopt the conceptually similar Kernel Inception Distance (KID), which measures the MMD between Inception features and has an unbiased estimator [2, 31]. All metrics use a different architecture and data than our CLIP ViT encoder. ### 5.1 Realistic Synthetic scenes from scratch Table 1: Quality metrics for novel view synthesis on subsampled splits of the Realistic Synthetic dataset [30]. We randomly sample 8 views from the available 100 ground truth training views to evaluate how DietNeRF performs with limited observations. Method | PSNR $\uparrow$ | SSIM $\uparrow$ | LPIPS $\downarrow$ | FID $\downarrow$ | KID $\downarrow$ ---|---|---|---|---|--- NeRF | 14.934 | 0.687 | 0.318 | 228.1 | 0.076 NV | 17.859 | 0.741 | 0.245 | 239.5 | 0.117 Simplified NeRF | 20.092 | 0.822 | 0.179 | 189.2 | 0.047 DietNeRF (ours) | 23.147 | 0.866 | 0.109 | 74.9 | 0.005 DietNeRF, $\mathcal{L}_{\text{MSE}}$ ft | 23.591 | 0.874 | 0.097 | 72.0 | 0.004 NeRF, 100 views | 31.153 | 0.954 | 0.046 | 50.5 | 0.001 Figure 4: Novel views synthesized from eight observations of scenes in the Realistic Synthetic dataset. NeRF’s Realistic Synthetic dataset includes 8 detailed synthetic objects with 100 renderings from virtual cameras arranged randomly on a hemisphere pointed inward. To test few-shot performance, we randomly sample a training subset of 8 images from each scene. Table 1 shows results. The original NeRF model achieves much poorer quantitative quality with 8 images than with the full 100 image dataset. Neural Volumes [28] performs better as it tightly constrains the size of the scene’s bounding box and explicitly regularizes its scene representation using a penalty on spatial gradients of voxel opacity and a Beta prior on image opacity. This avoids the worst artifacts, but reconstructions are still low-quality. Simplifying NeRF and tuning it for each individual scene also regularizes the representation and helps convergence (+5.1 PSNR over the full NeRF). The best performance is achieved by regularizing with DietNeRF’s $\mathcal{L}_{\text{SC}}$ loss. Additionally, fine-tuning with $\mathcal{L}_{\text{MSE}}$ even further improves quality, for a total improvement of +8.5 PSNR, -0.2 LPIPS, and -156 FID over NeRF. This shows that semantic consistency is a valuable prior for high-quality few-shot view synthesis. Figure 4 visualizes results. ### 5.2 Single-view synthesis by fine-tuning Figure 5: Novel views synthesized from a single input image from the DTU object dataset. Even with 3 input views, NeRF [30] fails to learn accurate geometry or textures (reprinted from [52]). 
While pixelNeRF [52] has mostly consistent object geometry as the camera pose is varied, renderings are blurry and contain artifacts like inaccurate placement of density along the observed camera’s z-axis. In contrast, fine-tuning with DietNeRF (DietPixelNeRF) learns realistic textures visually consistent with the input image, though some geometric defects are present due to the ambiguous nature of the view synthesis problem. Table 2: Single-view novel view synthesis on the DTU dataset. NeRF and pixelNeRF PSNR, SSIM and LPIPS results are from [52]. Finetuning pixelNeRF with DietNeRF’s semantic consistency loss (DietPixelNeRF) improves perceptual quality measured by the deep perceptual LPIPS, FID and KID evaluation metrics, but can degrade PSNR and SSIM which are local pixel-aligned metrics due to geometric defects. Method | PSNR | SSIM | LPIPS | FID | KID ---|---|---|---|---|--- NeRF | 8.000 | 0.286 | 0.703 | — | — pixelNeRF | 15.550 | 0.537 | 0.535 | 266.1 | 0.166 pixelNeRF, $\mathcal{L}_{\text{MSE}}$ ft | 16.048 | 0.564 | 0.515 | 265.2 | 0.159 DietPixelNeRF | 14.242 | 0.481 | 0.487 | 190.7 | 0.066 Figure 6: Semantic consistency improves perceptual quality. Fine-tuning pixelNeRF with $\mathcal{L}_{\text{MSE}}$ slightly improves a rendering of the input view, but does not remove most perceptual flaws like blurriness in novel views. Fine-tuning with both $\mathcal{L}_{\text{MSE}}$ and $\mathcal{L}_{\text{SC}}$ (DietPixelNeRF, bottom) improves sharpness of all views. NeRF only uses observations during training, not inference, and uses no auxiliary data. Accurate 3D reconstruction from a single view is not possible purely from $\mathcal{L}_{\text{MSE}}$, so NeRF performs poorly in the single- view setting (Table 2). To perform single- or few-shot view synthesis, pixelNeRF [52] learns a ResNet-34 encoder and a feature-conditioned neural radiance field on a multi- view dataset of similar scenes. The encoder learns priors that generalize to new single-view scenes. Table 2 shows that pixelNeRF significantly outperforms NeRF given a single photo of a held-out scene. However, novel views are blurry and unrealistic (Figure 5). We propose to fine-tune pixelNeRF on a single scene using $\mathcal{L}_{\text{MSE}}$ alone or using both $\mathcal{L}_{\text{MSE}}$ and $\mathcal{L}_{\text{SC}}$. Fine-tuning per- scene with MSE improves local image quality metrics, but only slightly helps perceptual metrics. Figure 6 shows that pixel-space MSE fine-tuning from one view mostly only improves quality for that view. We refer to fine-tuning with both losses for a short period as DietPixelNeRF. Qualitatively, DietPixelNeRF has significantly sharper novel views (Fig. 5, 6). DietPixelNeRF outperforms baselines on perceptual LPIPS, FID, and KID metrics (Tab. 2). For the very challenging single-view setting, ground-truth novel views will contain content that is completely occluded in the input. Because of uncertainty, blurry renderings will outperform sharp but incorrect renderings on average error metrics like MSE and PSNR. Arguably, perceptual quality and sharpness are better metrics than pixel error for graphics applications like photo editing and virtual reality as plausibility is emphasized. Figure 7: Renderings of occluded regions during training. 14 images of the right half of the Realistic Synthetic lego scene are used to estimate radiance fields. NeRF either learns high-opacity occlusions blocking the left of the object, or fails to generalize properly to the unseen left side. 
In contrast, DietNeRF fills in details for a reconstruction that is mostly consistent with the observed half. ### 5.3 Reconstructing unobserved regions We evaluate whether DietNeRF produces plausible completions when the reconstruction problem is underdetermined. For training, we sample 14 nearby views of the right side of the Realistic Synthetic Lego scene (Fig. 7, right). Narrow baseline multi-view capture rigs are less costly than 360∘ captures, and support unbounded scenes. However, narrow-baseline observations suffer from occlusions: the left side of the Lego bulldozer is unobserved. NeRF fails to reconstruct this side of the scene, while our Simplified NeRF learns unrealistic deformations and incorrect colors (Fig. 7, left). Remarkably, DietNeRF learns quantitatively (Tab. 3) and qualitatively more accurate colors in the missing regions, suggesting the value of semantic image priors for sparse reconstruction problems. We exclude FID and KID since a single scene has too few samples for an accurate estimate. Table 3: Extrapolation metrics. Novel view synthesis with observations of only one side of the Realistic Synthetic Lego scene. Views | Method | PSNR $\uparrow$ | SSIM $\uparrow$ | LPIPS $\downarrow$ ---|---|---|---|--- 14 | NeRF | 19.662 | 0.799 | 0.202 14 | Simplified NeRF | 21.553 | 0.818 | 0.160 14 | DietNeRF (ours) | 20.753 | 0.810 | 0.157 14 | DietNeRF + $\mathcal{L}_{\text{MSE}}$ ft | 22.211 | 0.824 | 0.143 100 | NeRF [30] | 31.618 | 0.965 | 0.033 ## 6 Ablations Choosing an image encoder Table 4 shows quality metrics with different semantic encoder architectures and pre-training datasets. We evaluate on the Lego scene with 8 views. Large ViT models (ViT L) do not improve results over the base ViT B. Fixing the architecture, CLIP offers a +1.8 PSNR improvement over an ImageNet model, suggesting that data diversity and language supervision is helpful for 3D tasks. Still, both induce useful representations that transfer to view synthesis. Table 4: Ablating supervision and architectural parameters for the ViT image encoder $\phi(\cdot)$ used to compare image features. Metrics are measured on the Realistic Synthetic Lego scene. Semantic image encoder | PSNR $\uparrow$ | SSIM $\uparrow$ | LPIPS $\downarrow$ ---|---|---|--- ImageNet ViT L/16, 3842 | 21.501 | 0.809 | 0.167 ImageNet ViT L/32, 3842 | 20.498 | 0.801 | 0.174 ImageNet ViT B/32, 2242 | 22.059 | 0.836 | 0.131 CLIP ViT B/32, 2242 | 23.896 | 0.863 | 0.110 Varying $\mathcal{L}_{\text{MSE}}$ fine-tuning duration Fine-tuning DietNeRF with $\mathcal{L}_{\text{MSE}}$ can improve quality by better reconstructing fine-details. In Table 5, we vary the number of iterations of fine-tuning for the Realistic Synthetic scenes with 8 views. Fine-tuning for up to 50k iterations is helpful, but reduces performance with longer optimization. It is possible that the model starts overfitting to the 8 input views. Table 5: Varying the number of iterations that DietNeRF is fine-tuned with $\mathcal{L}_{\text{MSE}}$ on Realistic Synthetic scenes. All models are initially trained for 200k iterations with $\mathcal{L}_{\text{MSE}}$ and $\mathcal{L}_{\text{SC}}$. Further minimizing $\mathcal{L}_{\text{MSE}}$ is helpful, but the model can overfit. 
Method | PSNR $\uparrow$ | SSIM $\uparrow$ | LPIPS $\downarrow$ ---|---|---|--- DietNeRF, no fine-tuning | 23.147 | 0.866 | 0.109 DietNeRF, $\mathcal{L}_{\text{MSE}}$ ft 10k iters | 23.524 | 0.872 | 0.101 DietNeRF, $\mathcal{L}_{\text{MSE}}$ ft 50k iters | 23.591 | 0.874 | 0.097 DietNeRF, $\mathcal{L}_{\text{MSE}}$ ft 100k iters | 23.521 | 0.874 | 0.097 DietNeRF, $\mathcal{L}_{\text{MSE}}$ ft 200k iters | 23.443 | 0.872 | 0.098 ## 7 Related work Few-shot radiance fields Several works condition NeRF on latent codes describing scene geometry or appearance rather than estimating NeRF per scene [38, 44, 52]. An image encoder and radiance field decoder are learned on a multi-view dataset of similar objects or scenes ahead of time. At test time, on a new scene, novel viewpoints are rendered using the decoder conditioned on encodings of a few observed images. GRAF renders patches of the scene every iteration to supervise the network with a discriminator [38]. Concurrent to our work, IBRNet [49] also fine-tunes a latent-conditioned radiance field on a specific scene using NeRF’s reconstruction loss, but needed at least 50 views. Rather than generalizing between scenes through a shared encoder and decoder, [43, 11] meta-learn radiance field weights that can be adapted to a specific scene in a few gradient steps. Meta-learning improves performance in the few- view setting. Similarly, a signed distance field can be meta-learned for shape representation problems [40]. Much literature studies single-view reconstruction with other, explicit 3D representations. Notable recent examples include voxel [45], mesh [16] and point-cloud [50] approaches. Novel view synthesis, image-based rendering Neural Volumes [28] proposes a VAE [26, 34] encoder-decoder architecture to predict a volumetric representation of a scene from posed image observations. NV uses priors as auxiliary objectives like DietNeRF, but penalizes opacity based on geometric intuitions rather than RGB image semantics. TBNs [32] learn an autoencoder with a 3-dimensional latent that can be rotated to render new perspectives for a single-category. SRNs [41] fit a continuous representation to a scene and also generalize to novel single-category objects if trained on a large multi-view dataset. It can be extended to predict per-point semantic segmentation maps [27]. Local Light Field Fusion [29] estimates and blends multiple MPI representations for each scene. Free View Synthesis [35] uses geometric approaches to improve view synthesis in unbounded in-the-wild scenes. NeRF++ [53] also improves unbounded scenes using multiple NeRF models and changing NeRF’s parameterization. Semantic representation learning Representation learning with deep supervised and unsupervised approaches has a long history [1]. Without labels, generative models can learn useful representations for recognition [4], but self- supervised models like CPC [46, 14] tend to be more parameter efficient. Contrastive methods including CLIP learn visual representations by matching similar pairs of items, such as captions and images [33, 21], augmentated variants of an image [5], or video patches across frames [18]. ## 8 Conclusions Our results suggest that single-view 2D representations transfer effectively to challenging, underconstrained 3D reconstruction problems such as volumetric novel view synthesis. 
While pre-trained image encoder representations have certainly been transferred to 3D vision applications in the past by fine- tuning, the recent emergence of visual models trained on enormous 100M+ image datasets like CLIP have enabled surprisingly effective few-shot transfer. We exploited this transferrable prior knowledge to solve optimization issues as well as to cope with partial observability in the NeRF family of scene representations, offering notable improvements in perceptual quality. In the future, we believe “diet-friendly” few-shot transfer will play a greater role in a wide range of 3D applications. ## Acknowledgements This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under grant number DGE-1752814 and by Berkeley Deep Drive. We would like to thank Paras Jain, Aditi Jain, Alexei Efros, Angjoo Kanazawa, Aravind Srinivas, Deepak Pathak and Alex Yu for helpful feedback and discussions. ## References * [1] Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. IEEE transactions on pattern analysis and machine intelligence, 35(8):1798–1828, 2013. * [2] Mikołaj Bińkowski, Dougal J. Sutherland, Michael Arbel, and Arthur Gretton. Demystifying MMD GANs. In International Conference on Learning Representations, 2018. * [3] Thomas J. Cashman and Andrew W. Fitzgibbon. What shape are dolphins? building 3d morphable models from 2d images. IEEE Trans. Pattern Anal. Mach. Intell., 35(1):232–244, 2013. * [4] Mark Chen, Alec Radford, Rewon Child, Jeff Wu, Heewoo Jun, Prafulla Dhariwal, David Luan, and Ilya Sutskever. Generative pretraining from pixels. 2020\. * [5] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. arXiv preprint arXiv:2002.05709, 2020. * [6] Tianqi Chen, Bing Xu, Chiyuan Zhang, and Carlos Guestrin. Training deep nets with sublinear memory cost, 2016. * [7] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248–255. Ieee, 2009. * [8] Karan Desai and Justin Johnson. VirTex: Learning Visual Representations from Textual Annotations. In CVPR, 2021. * [9] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations, 2021. * [10] Patrick Esser, Robin Rombach, and Björn Ommer. Taming transformers for high-resolution image synthesis, 2020. * [11] Chen Gao, Yichang Shih, Wei-Sheng Lai, Chia-Kai Liang, and Jia-Bin Huang. Portrait neural radiance fields from a single image. arXiv preprint arXiv:2012.05903, 2020. * [12] Gabriel Goh, Nick Cammarata, Chelsea Voss, Shan Carter, Michael Petrov, Ludwig Schubert, Alec Radford, and Chris Olah. Multimodal neurons in artificial neural networks. Distill, 2021. https://distill.pub/2021/multimodal-neurons. * [13] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Z. Ghahramani, M. Welling, C. Cortes, N. Lawrence, and K. Q. Weinberger, editors, Advances in Neural Information Processing Systems, volume 27. 
Curran Associates, Inc., 2014. * [14] Olivier J Henaff, Aravind Srinivas, Jeffrey De Fauw, Ali Razavi, Carl Doersch, S. M. Ali Eslami, and Aaron van den Oord. Data-efficient image recognition with contrastive predictive coding, 2020\. * [15] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS’17, page 6629–6640, Red Hook, NY, USA, 2017. Curran Associates Inc. * [16] Ronghang Hu and Deepak Pathak. Worldsheet: Wrapping the world in a 3d sheet for view synthesis from a single image. arXiv preprint arXiv:2012.09854, 2020. * [17] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. CVPR, 2017. * [18] Allan Jabri, Andrew Owens, and Alexei A Efros. Space-time correspondence as a contrastive random walk. Advances in Neural Information Processing Systems, 2020. * [19] Paras Jain, Ajay Jain, Aniruddha Nrusimha, Amir Gholami, Pieter Abbeel, Joseph Gonzalez, Kurt Keutzer, and Ion Stoica. Checkmate: Breaking the memory wall with optimal tensor rematerialization. In I. Dhillon, D. Papailiopoulos, and V. Sze, editors, Proceedings of Machine Learning and Systems, volume 2, pages 497–511, 2020. * [20] R. Jensen, A. Dahl, G. Vogiatzis, E. Tola, and H. Aanæs. Large scale multi-view stereopsis evaluation. In 2014 IEEE Conference on Computer Vision and Pattern Recognition, pages 406–413, 2014. * [21] Chao Jia, Yinfei Yang, Ye Xia, Yi-Ting Chen, Zarana Parekh, Hieu Pham, Quoc V Le, Yunhsuan Sung, Zhen Li, and Tom Duerig. Scaling up visual and vision-language representation learning with noisy text supervision. arXiv preprint arXiv:2102.05918, 2021. * [22] James T. Kajiya and Brian P Von Herzen. Ray tracing volume densities. In Proceedings of the 11th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH ’84, page 165–174, New York, NY, USA, 1984. Association for Computing Machinery. * [23] Angjoo Kanazawa, Shubham Tulsiani, Alexei A. Efros, and Jitendra Malik. Learning category-specific mesh reconstruction from image collections. In ECCV, 2018. * [24] Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Analyzing and improving the image quality of StyleGAN. In Proc. CVPR, 2020. * [25] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Yoshua Bengio and Yann LeCun, editors, 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015. * [26] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013. * [27] Amit Kohli, Vincent Sitzmann, and Gordon Wetzstein. Semantic implicit neural scene representations with semi-supervised training, 2021. * [28] Stephen Lombardi, Tomas Simon, Jason Saragih, Gabriel Schwartz, Andreas Lehrmann, and Yaser Sheikh. Neural volumes: Learning dynamic renderable volumes from images. ACM Trans. Graph., 38(4):65:1–65:14, July 2019. * [29] Ben Mildenhall, Pratul P. Srinivasan, Rodrigo Ortiz-Cayon, Nima Khademi Kalantari, Ravi Ramamoorthi, Ren Ng, and Abhishek Kar. Local light field fusion: Practical view synthesis with prescriptive sampling guidelines. ACM Transactions on Graphics (TOG), 2019. * [30] Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. 
Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis, 2020. * [31] Anton Obukhov, Maximilian Seitzer, Po-Wei Wu, Semen Zhydenko, Jonathan Kyl, and Elvis Yu-Jing Lin. High-fidelity performance metrics for generative models in pytorch, 2020\. Version: 0.2.0, DOI: 10.5281/zenodo.3786540. * [32] Kyle Olszewski, Sergey Tulyakov, Oliver Woodford, Hao Li, and Linjie Luo. Transformable bottleneck networks. The IEEE International Conference on Computer Vision (ICCV), Nov 2019. * [33] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision, 2021. * [34] Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. Stochastic backpropagation and approximate inference in deep generative models. In Eric P. Xing and Tony Jebara, editors, Proceedings of the 31st International Conference on Machine Learning, volume 32 of Proceedings of Machine Learning Research, pages 1278–1286, Bejing, China, 22–24 Jun 2014. PMLR. * [35] Gernot Riegler and Vladlen Koltun. Free view synthesis. In Andrea Vedaldi, Horst Bischof, Thomas Brox, and Jan-Michael Frahm, editors, Computer Vision – ECCV 2020, pages 623–640, Cham, 2020. Springer International Publishing. * [36] Tamar Rott Shaham, Tali Dekel, and Tomer Michaeli. Singan: Learning a generative model from a single natural image. In Computer Vision (ICCV), IEEE International Conference on, 2019\. * [37] Johannes L. Schonberger and Jan-Michael Frahm. Structure-from-motion revisited. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2016. * [38] Katja Schwarz, Yiyi Liao, Michael Niemeyer, and Andreas Geiger. Graf: Generative radiance fields for 3d-aware image synthesis, 2020. * [39] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In International Conference on Learning Representations, 2015. * [40] Vincent Sitzmann, Eric R. Chan, Richard Tucker, Noah Snavely, and Gordon Wetzstein. Metasdf: Meta-learning signed distance functions, 2020. * [41] Vincent Sitzmann, Michael Zollhöfer, and Gordon Wetzstein. Scene representation networks: Continuous 3d-structure-aware neural scene representations. In Advances in Neural Information Processing Systems, 2019. * [42] Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. Rethinking the inception architecture for computer vision. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition,, 2016. * [43] Matthew Tancik, Ben Mildenhall, Terrance Wang, Divi Schmidt, Pratul P. Srinivasan, Jonathan T. Barron, and Ren Ng. Learned initializations for optimizing coordinate-based neural representations, 2020. * [44] Alex Trevithick and Bo Yang. Grf: Learning a general radiance field for 3d scene representation and rendering. In arXiv:2010.04595, 2020. * [45] Shubham Tulsiani, Tinghui Zhou, Alexei A. Efros, and Jitendra Malik. Multi-view supervision for single-view reconstruction via differentiable ray consistency. In Computer Vision and Pattern Regognition (CVPR), 2017. * [46] Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Representation learning with contrastive predictive coding, 2019. * [47] Stéfan van der Walt, Johannes L. Schönberger, Juan Nunez-Iglesias, François Boulogne, Joshua D. 
Warner, Neil Yager, Emmanuelle Gouillart, Tony Yu, and the scikit-image contributors. scikit-image: image processing in Python. PeerJ, 2:e453, 6 2014. * [48] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need, 2017. * [49] Qianqian Wang, Zhicheng Wang, Kyle Genova, Pratul Srinivasan, Howard Zhou, Jonathan T. Barron, Ricardo Martin-Brualla, Noah Snavely, and Thomas Funkhouser. Ibrnet: Learning multi-view image-based rendering. CVPR, 2021. * [50] Olivia Wiles, Georgia Gkioxari, Richard Szeliski, and Justin Johnson. SynSin: End-to-end view synthesis from a single image. In CVPR, 2020. * [51] Lin Yen-Chen. PyTorchNeRF: a PyTorch implementation of NeRF, 2020. * [52] Alex Yu, Vickie Ye, Matthew Tancik, and Angjoo Kanazawa. pixelnerf: Neural radiance fields from one or few images, 2020. * [53] Kai Zhang, Gernot Riegler, Noah Snavely, and Vladlen Koltun. Nerf++: Analyzing and improving neural radiance fields. arXiv:2010.07492, 2020. * [54] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In CVPR, 2018. * [55] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Computer Vision (ICCV), 2017 IEEE International Conference on, 2017. ## Appendix A Experimental details #### View selection For most few-view Realistic Synthetic experiments, we randomly subsample 8 of the available 100 training renders. Views are not manually selected. However, to compare the ability of NeRF and DietNeRF to extrapolate to unseen regions, we manually selected 14 of the 100 views mostly showing the right side of the Lego scene. For DTU experiments where we fine-tune pixelNeRF [52], we use the same source view as [52]. This viewpoint was manually selected and is shared across all 15 scenes. #### Simplified NeRF baseline The published version of NeRF [30] can be unstable to train with 8 views, often converging to a degenerate solution. We found that NeRF is sensitive to MLP parameter initialization, as well as hyperparameters that control the complexity of the learned scene representation. For a fair comparison, we tuned the Simplified NeRF baseline on each Realistic Synthetic scene by modifying hyperparameters until object geometry converged. Table 6 shows the resulting hyperparameter settings for initial learning rate prior to decay, whether the MLP $f_{\theta}$ is viewpoint dependent, number of samples per ray queried from the fine and coarse networks, and the maximum frequency sinusoidal encoding of spatial position $(x,y,z)$. The fine and coarse networks are used in [30] for hierarchical sampling. ✗ denotes that we do not use the fine network. Table 6: Simplified NeRF training details by scene in the Realistic Synthetic dataset. We tune the initial learning rate, view dependence, number of samples from fine and coarse networks for hierarchical sampling, and the maximum frequency of the $(x,y,z)$ spatial positional encoding. Scene | LR | View dep. | Fine | Coarse | Max freq. 
---|---|---|---|---|--- Full NeRF | $5\times 10^{-4}$ | ✓ | 128 | 64 | $2^{9}$ Lego | $5\times 10^{-5}$ | ✓ | ✗ | 128 | $2^{5}$ Chair | $5\times 10^{-5}$ | ✗ | ✗ | 128 | $2^{5}$ Drums | $5\times 10^{-5}$ | ✗ | ✗ | 128 | $2^{5}$ Ficus | $5\times 10^{-5}$ | ✗ | ✗ | 128 | $2^{5}$ Mic | $5\times 10^{-5}$ | ✗ | ✗ | 128 | $2^{5}$ Ship | $5\times 10^{-5}$ | ✗ | ✗ | 128 | $2^{5}$ Materials | $1\times 10^{-5}$ | ✗ | ✗ | 128 | $2^{5}$ Hotdog | $1\times 10^{-5}$ | ✗ | ✗ | 128 | $2^{3}$ #### Implementation Our implementation is based on a PyTorch port [51] of NeRF’s original Tensorflow code. We re-train and evaluate NeRF using this code. For memory efficiency, we use 400$\times$400 images of the scenes as in [51] rather than full-resolution 800$\times$800 images. NV is trained with full-resolution $800\times 800$ views. NV renderings are downsampled with a 2x2 box filter to $400\times 400$ to compute metrics. We train all NeRF, Simplified NeRF and DietNeRF models with the Adam optimizer [25] for 200k iterations. #### Metrics Our PSNR, SSIM, and LPIPS metrics use the same implementation as [52] based on the scikit-image Python package [47]. For the DTU dataset, [52] excluded some poses from the validation set as ground truth photographs had excessive shadows due to the physical capture setup. We use the same subset of validation views. For both Realistic Synthetic and DTU scenes, we also included FID and KID perceptual image quality metrics. While PSNR, SSIM and LPIPS are measured between pairs of pixel-aligned images, FID and KID are measured between two sets of image samples. These metrics compare the distribution of image features computed on one set of images to those computed on another set. As distributions are compared rather than individual images, a sufficiently large sample size is needed. For the Realistic Synthetic dataset, we compute the FID and KID between all 3200 ground-truth images (across train, validation and testing splits and across scenes), and 200 rendered test images at the same resolution (25 test views per scene). Aggregating across scenes allows us to have a larger sample size. Due to the setup of the Neural Volumes code, we use additional samples for rendered images for that baseline. For the DTU dataset, we compute FID and KID between 720 rendered images (48 per scene across 15 validation scenes, excluding the viewpoint of the source image provided to pixelNeRF) and 6076 ground-truth images (49 images including the source viewpoint across 124 training and validation scenes). FID and KID metrics are computed using the torch-fidelity Python package [31]. ## Appendix B Per-scene metrics Figure 8: CLIP ViT embeddings are more similar between views of the same scene than across different scenes. We show a 2D histogram for each pair of Realistic Synthetic scenes comparing ViT embedding similarity and the distance between views. The dashed line shows mean cosine similarity, and green histograms have mean similarity is greater than 0.6. On the diagonal, two views from the upper hemisphere of the same scene are sampled. Embeddings of different views of the same scene are generally highly similar. Nearby (distance 0) and diagonally opposing (distance 8) views are most similar. In comparison, when sampling views from different scenes (lower triangle), embeddings are dissimilar. Table 7: Quality metrics for each scene in the Realistic Synthetic dataset with 8 observed views. 
PSNR $\uparrow$ | Lego | Chair | Drums | Ficus | Mic | Ship | Materials | Hotdog ---|---|---|---|---|---|---|---|--- NeRF | 9.726 | 21.049 | 17.472 | 13.728 | 26.287 | 12.929 | 7.837 | 10.446 NV [28] | 17.652 | 20.515 | 16.271 | 19.448 | 18.323 | 14.457 | 16.846 | 19.361 Simplified NeRF | 16.735 | 21.870 | 15.021 | 21.091 | 24.206 | 17.092 | 20.659 | 24.060 DietNeRF (ours) | 23.897 | 24.633 | 20.034 | 20.744 | 26.321 | 23.043 | 21.254 | 25.250 DietNeRF, $\mathcal{L}_{\text{MSE}}$ ft (ours) | 24.311 | 25.595 | 20.029 | 20.940 | 26.794 | 22.536 | 21.621 | 26.626 NeRF, 100 views | 31.618 | 34.073 | 25.530 | 29.163 | 33.197 | 29.407 | 29.340 | 36.899 SSIM $\uparrow$ | Lego | Chair | Drums | Ficus | Mic | Ship | Materials | Hotdog ---|---|---|---|---|---|---|---|--- NeRF | 0.526 | 0.861 | 0.770 | 0.661 | 0.944 | 0.605 | 0.484 | 0.644 NV [28] | 0.707 | 0.795 | 0.675 | 0.815 | 0.816 | 0.602 | 0.721 | 0.796 Simplified NeRF | 0.775 | 0.859 | 0.727 | 0.872 | 0.930 | 0.694 | 0.823 | 0.894 DietNeRF (ours) | 0.863 | 0.898 | 0.843 | 0.872 | 0.944 | 0.758 | 0.843 | 0.904 DietNeRF, $\mathcal{L}_{\text{MSE}}$ ft (ours) | 0.875 | 0.912 | 0.845 | 0.874 | 0.950 | 0.757 | 0.851 | 0.924 NeRF, 100 views | 0.965 | 0.978 | 0.929 | 0.966 | 0.979 | 0.875 | 0.958 | 0.981 LPIPS $\downarrow$ | Lego | Chair | Drums | Ficus | Mic | Ship | Materials | Hotdog ---|---|---|---|---|---|---|---|--- NeRF | 0.467 | 0.163 | 0.231 | 0.354 | 0.067 | 0.375 | 0.467 | 0.422 NV [28] | 0.253 | 0.175 | 0.299 | 0.156 | 0.193 | 0.456 | 0.223 | 0.203 Simplified NeRF | 0.218 | 0.152 | 0.280 | 0.132 | 0.080 | 0.283 | 0.151 | 0.139 DietNeRF (ours) | 0.110 | 0.092 | 0.117 | 0.097 | 0.053 | 0.204 | 0.102 | 0.097 DietNeRF, $\mathcal{L}_{\text{MSE}}$ ft (ours) | 0.096 | 0.077 | 0.117 | 0.094 | 0.043 | 0.193 | 0.095 | 0.067 NeRF, 100 views | 0.033 | 0.025 | 0.064 | 0.035 | 0.023 | 0.125 | 0.037 | 0.025 #### Embedding similarity In Figure 8, we compare the cosine similarity of two views with the distance between their camera origins for each pair of scenes in the Realistic Synthetic dataset. When sampling both views from the same scene, views have high cosine similarity (diagonal). For 6 of the 8 scenes, there is some dependence on the relative poses of the camera views, though similarity is high across all camera distances. For views sampled from different scenes, similarity is low (cosine similarity around 0.5). #### Quality metrics Table 7 shows PSNR, SSIM and LPIPS metrics on a per-scene basis for the Realistic Synthetic dataset. FID and KID metrics are excluded as they need a larger sample size. We bold the best method on each scene, and underline the second-best method. Across all scenes in the few-shot setting, DietNeRF or DietNeRF fine-tuned for 50k iterations with $\mathcal{L}_{\text{MSE}}$ performs best or second-best. ## Appendix C Qualitative results and ground-truth In this section, we provide additional qualitative results. Figure 9 shows the ground-truth training views used for 8-shot Realistic Synthetic experiments. These views are sampled at random from the training set of [30]. Random sampling models challenges with real-world data capture such as uneven view sampling. It may be possible to improve results if views are carefully selected. In Figure 10, we provide additional renderings of Realistic Synthetic scenes from testing poses for baseline methods and DietNeRF. Neural Volumes generally converges to recover coarse object geometry, but has wispy artifacts and distortions. 
On the Ship scene, Neural Volumes only recovers very low-frequency detail. Simplified NeRF suffers from occluders that are not visible from the 8 training poses. DietNeRF has the highest quality reconstructions without these distortions or occluders, but does miss some high-frequency detail. An interesting artifact is the leakage of green coloration to the back of the chair. Finally, in Figure 11, we show renderings from pixelNeRF and DietPixelNeRF on all DTU dataset validation scenes not included in the main paper. Starting from the same checkpoint, pixelNeRF is fine-tuned using $\mathcal{L}_{\text{MSE}}$ for 20k iterations, whereas DietPixelNeRF is fine-tuned using $\mathcal{L}_{\text{MSE}}+\mathcal{L}_{\text{SC}}$ for 20k iterations. DietPixelNeRF has sharper renderings. On scenes with rectangular objects like bricks and boxes, DietPixelNeRF performs especially well. However, the method struggles to preserve accurate geometry in some cases. Note that the problem is under-determined as only a single view is observed per scene. Figure 9: Training views used for Realistic Synthetic scenes. These views are randomly sampled from the available 100 views. This is a challenging setting for view synthesis and 3D reconstruction applications as objects are not uniformly observed. Some views are mostly redundant, like the top two Lego views. Other regions are sparsely observed, such as a single side view of Hotdog. Figure 10: Additional renderings of Realistic Synthetic scenes. Figure 11: One-shot novel view synthesis: Additional renderings of DTU scenes generated from a single observed view (left). Ground truth views are shown for reference, but are not provided to the model. pixelNeRF and DietPixelNeRF are pre-trained on the same dataset of other scenes, then fine-tuned on the single input view for 20k iterations with $\mathcal{L}_{\text{MSE}}$ alone (pixelNeRF) or $\mathcal{L}_{\text{MSE}}+\mathcal{L}_{\text{SC}}$ (DietPixelNeRF). ## Appendix D Adversarial approaches While NeRF is only supervised from observed poses, conceptually, a GAN [13] uses a discriminator to compute a realism loss between real and generated images that need not align pixel-wise. Patch GAN discriminators were introduced for image translation problems [17, 55] and can be useful for high-resolution image generation [10]. SinGAN [36] trains multiscale patch discriminators on a single image, comparable to our single-scene few-view setting. In early experiments, we trained patch-wise discriminators per-scene to supervise $f_{\theta}$ from novel poses in addition to $\mathcal{L}_{\text{SC}}$. However, an auxiliary adversarial loss led to artifacts on Realistic Synthetic scenes, both in isolation and in combination with our semantic consistency loss.
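Although this auxiliary adversarial loss was ultimately not adopted, a PatchGAN-style discriminator of the kind referred to above could be sketched as follows in PyTorch; the layer widths and the 64x64 patch size are illustrative assumptions rather than the configuration used in these early experiments. Each spatial location of the output grid scores one receptive-field-sized patch of the rendering, which is why such a loss does not require pixel-wise alignment with any ground-truth view.

```python
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """PatchGAN-style discriminator: maps an RGB patch to a grid of
    real/fake logits, one per receptive-field-sized sub-patch."""
    def __init__(self, in_channels=3, base_width=64):
        super().__init__()
        def block(c_in, c_out, norm=True):
            layers = [nn.Conv2d(c_in, c_out, kernel_size=4, stride=2, padding=1)]
            if norm:
                layers.append(nn.InstanceNorm2d(c_out))
            layers.append(nn.LeakyReLU(0.2, inplace=True))
            return layers
        self.net = nn.Sequential(
            *block(in_channels, base_width, norm=False),
            *block(base_width, base_width * 2),
            *block(base_width * 2, base_width * 4),
            nn.Conv2d(base_width * 4, 1, kernel_size=4, stride=1, padding=1),
        )

    def forward(self, x):
        return self.net(x)

# Toy usage: score a batch of two 64x64 rendered patches.
disc = PatchDiscriminator()
logits = disc(torch.randn(2, 3, 64, 64))
print(logits.shape)  # torch.Size([2, 1, 7, 7])
```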
# Signatures of Bulk Neutrinos in the Early Universe David McKeen<EMAIL_ADDRESS>TRIUMF, 4004 Wesbrook Mall, Vancouver, BC V6T 2A3, Canada John N. Ng<EMAIL_ADDRESS>TRIUMF, 4004 Wesbrook Mall, Vancouver, BC V6T 2A3, Canada Michael Shamma<EMAIL_ADDRESS>TRIUMF, 4004 Wesbrook Mall, Vancouver, BC V6T 2A3, Canada ###### Abstract Neutrino masses and quantum gravity are strong reasons to extend the standard model of particle physics. A large extra dimension can be motivated by quantum gravity and can explain the small neutrino masses with new singlet states that propagate in the bulk. In such a case, a Kaluza-Klein tower of sterile neutrinos emerges. We revisit constraints on towers of sterile neutrinos that come from cosmological observables such as the effective number of noninteracting relativistic species and the dark matter density. These limits generically rule out micron-sized extra dimensions. We explore the weakening of these constraints to accommodate an extra dimension close to the micron size by assuming that the universe reheated after inflation to a low temperature. We discuss how such a possibility can be distinguished in the event of a positive signal in a cosmological observable. ## I Introduction The observation of neutrino flavor oscillations in astrophysical and terrestrial neutrino experiments is unambiguous evidence of nonzero neutrino masses [1, 2, 3]. In order to accommodate this observation, the standard model (SM) must be extended since one cannot construct a gauge-invariant dimension-4 mass term for the neutrino with SM fields alone. The most minimal model that gives all three neutrino masses extends the SM with three color and hypercharge singlet, right-handed (or “sterile”) fields $\nu_{R}$. Each singlet couples to the SM through the operator $y\bar{L}H\nu_{R}$ so that after electroweak symmetry breaking, each neutrino obtains a Dirac mass $m_{\nu}\sim yv$ where $v$ is the Higgs vacuum expectation value. To obtain eV-scale neutrino masses requires $y\sim 10^{-12}-10^{-11}$ which makes terrestrial and cosmological production of the sterile states exceedingly improbable. However, new interactions beyond this could lead to additional channels of production for the sterile neutrino and rich phenomenological consequences at neutrino experiments on (under) the ground and in the sky [4, 5, 6, 7, 8, 9, 10, 11, 12, 13]. At the same time there are theoretical reasons to think that the SM should be extended. First, the SM lacks a quantum mechanical description of gravitational interactions. Formulating a consistent quantum description of gravity has been the main objective of string theory [14, 15]. Much effort has been expended on appropriate low-energy compactifications of the extra dimensions in string theory that produce the four dimensional SM with gravity. In particular, attention has been recently focused on low energy effective field theories that are and are not consistent with quantum gravity, referred to as the “string theory landscape” and “swampland”, respectively (see e.g. [16, 17, 18, 19] for recent reviews). Motivated by these conjectures in string theory, a micron-sized dark dimension, or large extra dimension (LED), has been proposed to explain the small cosmological constant and includes a graviton whose tower of Kaluza-Klein (KK) modes may comprise (some of) the dark matter (DM) [20, 21, 22, 23, 24]. The dark dimension scenario can only accommodate SM singlet fields so additional contributions to the DM may include KK excitations of a bulk neutrino. 
In fact, the dark dimension also presents an elegant explanation of the smallness of neutrino masses [25, 26, 27, 28]. In LED models of neutrino mass, new SM gauge singlet right-handed neutrino fields propagate in the $4+n$ dimensional space called the “bulk”, where $n$ is the number of additional spatial dimensions. On the other hand, the fields which have SM charges are confined to propagate on the 4D “brane”. Yukawa couplings of the SM (active) neutrinos $\nu_{L}$, SM Higgs $H$, and the bulk neutrinos give rise to Dirac neutrino masses through the familiar Higgs mechanism at the weak scale. The neutrino mass is suppressed not by tiny couplings in this case, but by the volume of the extra dimensions. Large extra dimensions can be investigated in terrestrial experiments because they can induce short-baseline neutrino oscillations. Additionally, the extra dimensions can be explored via the perturbations they generate in solar, atmospheric, and long-baseline neutrino experiments. Specifically, the LEDs are probed through the effects of the Kaluza-Klein (KK) modes which describe the right-handed singlet neutrino fields on the brane. Given the micron size of the LED, the observable effect at oscillation experiments is quite similar to that of eV-scale sterile neutrinos. When connecting to these experiments, it is typically sufficient to consider the effects of the largest LED [29, 30, 31, 32, 33, 34, 35, 36]. In this situation, the model is described by two parameters: the (active) neutrino mass scale $m$ and the radius of the extra dimension $R$. In this work, we point out that the KK modes of a bulk neutrino are produced abundantly in the early universe and in turn may be observed or constrained by ongoing and upcoming cosmological surveys. In particular, these modes contribute to the DM density and to the relativistic energy density at the time of recombination. Whether a given KK mode contributes to the DM density or to relativistic energy is determined by its mass and its lifetime, factors which depend on its position in the KK tower and the size of the extra dimension. Crucially, the number of bulk neutrino modes which are produced depends on the temperature at which the universe reheats, with the production of heavier modes exponentially suppressed. If the universe reheats to very high temperatures, then higher states in the KK tower can be produced and drastically affect cosmological observables [37]. On the other hand, there has been model building effort focused on the possibility of constructing the observed universe at $T\sim\mathcal{O}(10~{}\text{MeV})$ [38], especially recently [39, 40, 41, 42, 43, 44, 45, 46, 47, 48], and low reheat temperatures can lessen constraints on sterile neutrinos [49]. In this work we not only recast cosmological constraints on models of extra dimensional neutrinos in the case of high reheating temperatures but also provide a fresh analysis of the cosmological implications of this model if the universe was only ever as hot as a few MeV. The paper is organized as follows. In Sec. II we review the necessary features for neutrino mass generation in models of LEDs. Then, in Sec. III, we review how sterile neutrinos are probed in terrestrial experiments. Next, in Sec. IV we review how BSM physics affects $\Lambda$CDM cosmology and study in detail the effects of LED models of neutrino masses on cosmological observables. We conclude this work in Sec. V with a summary of our main results.
## II Neutrinos in Large Extra Dimensions Our starting point is an extra dimensional model in which all SM fields are localized on a single 4D brane while SM singlet fields are free to propagate in the extra dimensions. In particular, we allow sterile neutrinos to be bulk fields. The ingredients in our model are SM lepton doublets $L_{\alpha}=(\nu_{\alpha L},\ell_{\alpha L})^{T}$, with $\alpha=e,\mu,\tau$ labeling the lepton flavour, and bulk fermions $N_{\alpha}$. (To give the three active neutrinos masses we have added three sterile neutrino fields.) We are interested in the situation where one of the extra dimensions, denoted by $y$, is compactified onto a circle of radius $R$ which is much larger than the sizes of the other extra dimensions [31, 32, 33, 34, 35]. The bulk fermions satisfy the boundary condition $N_{\alpha}(x,y)=N_{\alpha}(x,y+2\pi R)$. The relevant terms in the 5D action, assuming the interactions of the SM fields take place on the SM brane at $y=0$, are $\displaystyle S$ $\displaystyle=\int d^{4}xdy\bar{N}_{\alpha}i\Gamma^{A}D_{A}N_{\alpha}+\int d^{4}x\bar{L}_{\alpha}i\gamma^{\mu}\partial_{\mu}L_{\alpha}$ (1) $\displaystyle\quad\quad\quad\quad\quad\quad-\frac{\lambda_{\alpha\beta}}{\sqrt{M_{\ast}}}\int d^{4}xdy\bar{L}_{\alpha}(x)N_{\beta}(x,y)H(x)\delta(y)+\text{h.c.}$ where $\Gamma^{A}=(\gamma^{\mu},i\gamma^{5})$, $D_{A}$ is the 5D partial derivative operator, and $H$ is the SM Higgs field. The Yukawa couplings $\lambda_{\alpha\beta}$ are dimensionless and we introduced $M_{\ast}$, which is the scale at which this extra dimensional description breaks down. Although there are a number of interpretations of $M_{*}$, we simply assume $M_{*}\gg M_{\text{EW}}$ where $M_{\text{EW}}$ is the electroweak energy scale. We perform a Kaluza-Klein decomposition of the extra dimensional fields and set $H=(v/\sqrt{2},0)^{T}$ where $v=246~{}\text{GeV}$ is the Higgs vacuum expectation value. Doing so generates kinetic terms for the active neutrinos $\nu_{\alpha L}$ and infinite towers of bulk neutrinos $n_{\alpha L,R}^{\pm k}$ with $k=1,2,\dots$ labelling the state. In addition, the Yukawa interaction generates interaction terms between the bulk and active neutrinos, $\displaystyle S$ $\displaystyle\supset-\int d^{4}x\bigg{\\{}\sum_{k=1}^{\infty}\bigg{(}m_{k}\bar{n}_{\alpha L}^{k}n_{\alpha R}^{k}+m_{-k}\bar{n}_{\alpha L}^{-k}n_{\alpha R}^{-k}\bigg{)}$ (2) $\displaystyle\quad\quad\quad\quad\quad\quad\quad\quad+m^{D}_{\alpha\beta}\bar{\nu}_{\alpha L}\bigg{[}n_{\beta R}^{0}+\sum_{k=1}^{\infty}\big{(}n_{\beta R}^{k}+n_{\beta R}^{-k}\big{)}\bigg{]}+\text{h.c.}\bigg{\\}}$ where $m_{k}=-m_{-k}=k/R$. The first term above represents masses for Dirac fermions made up of bulk neutrinos while the second involves Dirac masses that mix the active neutrinos $\nu_{\alpha L}$ with the bulk neutrinos, $m_{\alpha\beta}^{D}=\frac{\lambda_{\alpha\beta}v}{\sqrt{4\pi M_{*}R}}.$ (3) We can also relate the scale $M_{*}$ to the gravitational scale through the relation [25, 26, 27] $\bigg{(}\frac{M_{\text{Pl}}}{M_{*}}\bigg{)}^{2}=M_{*}(2\pi R).$ (4) One can check which values of $\lambda,~{}M_{*}$ reproduce neutrino masses roughly on the order of current bounds. For example, with $\lambda=0.1$ and $M_{*}\sim 10^{8}~{}\text{GeV}$, the neutrino mass is $m^{D}\sim 0.1~{}\text{eV}$. Hence small Dirac neutrino masses are natural in the LED scenario. The interactions in Eq. (2) give rise to a mass matrix which can be diagonalized by a unitary transformation.
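As a quick numerical cross-check of Eqs. (3) and (4) and of the estimate just quoted ($\lambda=0.1$ and $M_{*}\sim 10^{8}~\text{GeV}$ giving $m^{D}\sim 0.1~\text{eV}$), the following short Python sketch evaluates both relations; the only external input is the conversion $\hbar c\simeq 0.197~\text{GeV}\cdot\text{fm}$, and the printed radius is simply whatever Eq. (4) implies for these illustrative inputs.

```python
import math

M_PL = 1.2e19                 # Planck mass [GeV]
V_HIGGS = 246.0               # Higgs vacuum expectation value [GeV]
GEVINV_TO_MICRON = 1.973e-10  # 1 GeV^-1 in microns (from hbar*c = 0.1973 GeV fm)

def radius_from_mstar(m_star):
    """Eq. (4): (M_Pl/M_*)^2 = M_*(2 pi R)  =>  R = M_Pl^2 / (2 pi M_*^3), in GeV^-1."""
    return M_PL**2 / (2.0 * math.pi * m_star**3)

def dirac_mass(lam, m_star):
    """Eq. (3): m^D = lambda v / sqrt(4 pi M_* R), in GeV."""
    R = radius_from_mstar(m_star)
    return lam * V_HIGGS / math.sqrt(4.0 * math.pi * m_star * R)

lam, m_star = 0.1, 1e8        # illustrative values quoted in the text
R = radius_from_mstar(m_star)
print(f"R   ~ {R * GEVINV_TO_MICRON:.1e} micron")       # Eq. (4) for these inputs
print(f"m^D ~ {dirac_mass(lam, m_star) * 1e9:.2f} eV")  # ~0.14 eV, i.e. ~0.1 eV as stated
```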
After diagonalization, the three active neutrino flavor eigenstates can be decomposed into mass eigenstates, $\nu_{i}^{(j)}$, as $\nu_{\alpha L}=\sum_{i=1}^{3}U_{\alpha i}\sum_{j=0}^{\infty}V_{ij}\nu_{i}^{(j)},$ (5) where $U_{\alpha i}$ is the $3\times 3$ PMNS matrix describing the mixing of the active neutrino flavours. The three KK towers correspond to the three light neutrino masses observed in oscillation data. The $\nu_{i}^{(k)}$ have $k=0$ modes that are mostly active while $k>0$ modes correspond to bulk neutrino modes with small active admixture. The masses of the neutrinos, $m_{i}^{(k)}$, are determined by the eigenvalue equation $m_{i}^{(k)}R-(m_{i}^{D}R)^{2}\pi\cot(\pi m_{i}^{(k)}R)=0$ (6) where $m_{i}^{D}$ are the eigenvalues of the Dirac neutrino mass matrix. We observe that the usual mass eigenstates are each accompanied by an infinite tower of bulk, or sterile, neutrinos. The admixture of these heavy neutrinos with the active neutrinos is controlled by the components of the matrix $V_{ij}$, which are determined by [26, 27] $V_{ij}^{2}=\frac{2}{1+\pi^{2}(m_{i}^{D}R)^{2}+(m_{i}^{(j)}/m_{i}^{D})^{2}}$ (7) This transcendental equation can be solved analytically when $m_{i}^{D}R\ll 1$. In this limit, $\displaystyle m_{i}$ $\displaystyle\equiv m_{i}^{(0)}=m_{i}^{D}\bigg{[}1-\frac{\pi^{2}}{6}(m_{i}^{D}R)^{2}+\dots\bigg{]},$ (8) $\displaystyle M_{ik}$ $\displaystyle\equiv m_{i}^{(k)}=\frac{k}{R}\bigg{[}1+\left(\frac{m_{i}^{D}R}{k}\right)^{2}+\dots\bigg{]},~{}{\rm for}~{}k>0,$ (9) $\displaystyle V_{i0}$ $\displaystyle=1-\frac{\pi^{2}}{6}(m_{i}^{D}R)^{2}+\dots,$ (10) $\displaystyle V_{ik}$ $\displaystyle=\frac{\sqrt{2}m_{i}^{D}R}{k}\bigg{[}1-\frac{3}{2}\frac{(m_{i}^{D}R)^{2}}{k^{2}}+\dots\bigg{]},~{}{\rm for}~{}k>0.$ (11) For the $k>0$ modes, the masses increase and the mixings with active flavours $V_{ik}$ decrease for increasing $k$. We note also that the mixings between the sterile neutrinos are parametrically smaller than the active-sterile mixings by an additional factor of $m_{i}^{D}R$. In the parameter space we are interested in, $m_{i}^{D}R\ll 1$ so that $m_{i}\simeq m_{i}^{D}$ and $M_{ik}\simeq k/R$. Thus, the three KK modes with the same value of $k>0$ are nearly degenerate. In what follows, we will refer to their common mass without the zero mode label, $M_{ik}\equiv M_{k}$ for $i=1,2,3$. Consequences of the interactions between the sterile and active neutrinos given in Eq. (5) include the ability for SM neutrinos to oscillate into $k\neq 0$ sterile neutrinos and, as we will see, the production of $k>0$ modes in the early universe. ## III Probes of Sterile Neutrinos The main focus of this work will be on the effects of the production of KK modes in the early universe. However, it will be important to compare the reach of cosmological experiments to terrestrial experiments in their ability to observe or constrain an extra dimensional scenario. In this section we review some of the terrestrial experiments which can observe and constrain sterile neutrinos, with a particular focus on how the excitations of bulk neutrinos can be observed. ### III.1 Terrestrial Probes of Sterile Neutrinos It behooves us to begin our discussion with the connection to searches for sterile neutrinos at oscillation experiments. As is well known, because neutrinos have mass, their mass eigenstates do not correspond to the states that couple to the charged leptons, i.e. lepton flavours, through weak interactions.
This leads to neutrino flavour oscillations while they propagate between production and detection via charged current weak interaction. In the dark dimension scenario, the charged leptons couple dominantly to the zero modes of the KK neutrino towers but have nonzero couplings to each of the $k>0$ modes. The probability to produce a neutrino of flavour $\alpha$ and with energy $E$ and measure it as flavour $\beta$ after traveling a distance $L$ can be written as $P(\nu_{\alpha}\rightarrow\nu_{\beta})=\bigg{|}\sum_{i=1}^{3}\sum_{k=0}^{\infty}U_{\alpha i}^{*}U_{\beta i}V_{ik}^{2}\exp\bigg{(}-i\frac{(m_{i}^{k})^{2}L}{2E}\bigg{)}\bigg{|}^{2}.$ (12) Since we are working in the part of parameter space with $R\ll 1/m_{i}$, oscillations amongst active neutrinos are not grossly changed from the SM scenario; solar, atmospheric, and reactor neutrino oscillation data can be fit by splittings among the mostly active zero mode neutrinos determined by $\Delta m_{21}^{2}=\Delta m^{2}_{\odot}\simeq 7.5\times 10^{-5}~{}{\rm eV}^{2}$, $\left|\Delta m_{31}^{2}\right|=\Delta m^{2}_{\rm atm}\simeq 2.5\times 10^{-3}~{}{\rm eV}^{2}$, and $U$ the usual (nearly) unitary $3\times 3$ PMNS matrix [50, 51] , $\displaystyle U=\begin{pmatrix}U_{e1}&U_{e2}&U_{e3}\\\ U_{\mu 1}&U_{\mu 2}&U_{\mu 3}\\\ U_{\tau 1}&U_{\tau 2}&U_{\tau 3}\end{pmatrix}\approx\begin{pmatrix}0.81&0.56&0.15\\\ -0.47&0.49&0.73\\\ 0.34&-0.66&0.67\end{pmatrix}.$ (13) The numerical values here are the central values reported in [52] for a normal ordered mass hierarchy and ignoring the so far unmeasured CP phase. An inverted hierarchy gives very slightly different values. The presence of more than three neutrinos in the expression in Eq. (12) can alter the pattern of neutrino oscillations from the usual SM picture. Because the coupling to mode $k$ decreases with increasing $k$, we can focus on the mixing with the $k=1$ modes. Moreover, the near degeneracy among the $k=1$ states simplifies the analysis. In particular, for $R\sim{\cal O}(\mu{\rm m})$, experiments sensitive to $\rm eV$-scale sterile neutrinos can impact this scenario. In the $m_{i}^{D}R\ll 1$ limit, active-sterile mass splittings are $\Delta m^{2}=M_{1}^{2}-m_{i}^{2}\simeq M_{1}^{2}=1/R^{2}$ for $i=1,2,3$ and the probability that an electron neutrino remains an electron neutrino can be written $\displaystyle P_{e\rightarrow e}=1-\sin^{2}2\theta_{ee}\sin^{2}\bigg{(}\frac{M_{1}^{2}L}{4E}\bigg{)},$ (14) with $\sin^{2}2\theta_{ee}\simeq\frac{8}{M_{1}^{2}}\sum_{i=1}^{3}m_{i}^{2}|U_{ei}|^{2}.$ (15) In the limit that the light, mostly active neutrinos are degenerate, the approximate unitarity of the PMNS matrix simplifies this mixing angle considerably, $\sin^{2}2\theta_{ee}\simeq 8m^{2}R^{2}$ with $\displaystyle m\equiv\left(\sum_{i=1}^{3}m_{i}^{2}\right)^{1/2}.$ (16) In Fig. 1, we show constraints and favoured regions in the parameter space of $R$ and $m$ from experiments sensitive to electron neutrino disappearance due to ${\cal O}({\rm eV})$ sterile neutrinos. We assume that the light neutrinos are degenerate $m_{1}=m_{2}=m_{3}$ and consider only active-sterile oscillations into the first KK mode. The lower bound $m>\sqrt{\Delta m^{2}_{\rm atm}}$ required to accommodate atmospheric neutrino oscillations [53, 54, 55] and the upper bound $m/\sqrt{3}<m_{\beta}=0.8~{}\rm eV$ from measurements of the tritium $\beta$-decay endpoint [56] are plotted. Active neutrino oscillations impose the lower bound $m^{2}\gtrsim\Delta m^{2}_{\rm atm}$ which we also indicate. 
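To make the oscillation phenomenology of Eqs. (14)-(16) concrete, here is a minimal numerical sketch of the electron-neutrino survival probability, keeping only oscillations into the nearly degenerate $k=1$ level; the energy, baseline, and parameter values below are purely illustrative assumptions.

```python
import numpy as np

HBARC_EV_UM = 0.1973  # hbar*c in eV*micron, used to convert R into a mass scale

def sin2_2theta_ee(m_eV, R_um):
    """Eqs. (15)-(16) in the degenerate limit: sin^2(2 theta_ee) ~ 8 m^2 R^2."""
    return 8.0 * (m_eV * R_um / HBARC_EV_UM) ** 2

def survival_probability(E_MeV, L_m, m_eV, R_um):
    """Eq. (14): nu_e survival probability for oscillations into the k=1 KK level
    with splitting Delta m^2 ~ M_1^2 = 1/R^2 (standard 1.267 phase convention)."""
    M1_eV = HBARC_EV_UM / R_um
    phase = 1.267 * M1_eV**2 * L_m / E_MeV
    return 1.0 - sin2_2theta_ee(m_eV, R_um) * np.sin(phase) ** 2

# Illustrative reactor-like example: E = 4 MeV antineutrinos at a 7 m baseline,
# with m = 0.1 eV and R = 0.2 micron.
print(survival_probability(E_MeV=4.0, L_m=7.0, m_eV=0.1, R_um=0.2))
```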
While some hints for sterile neutrinos in this mass range can be accommodated with $R\sim(0.1-1~{}\mu{\rm m})$, we will see in Sec. IV that cosmological constraints generally preclude this region unless the standard cosmological history is changed quite drastically. Figure 1: Constraints on $m$, defined in Eq. (16), and $R$ from terrestrial experiments KATRIN [57], STEREO [58], Daya Bay [59], NEOS [60], Double Chooz [61], and RENO [62]. We also show the favored regions for explaining the reactor anti-neutrino anomaly (RAA) [63] and BEST [64] results. The lower limit on $m$ from atmospheric neutrino oscillations [53] as well as the upper bound on $m$ from the limit on the effective electron neutrino mass in tritium beta decay [56] are also displayed (black, dot-dashed). Also shown is the boundary of the large active-sterile mass splitting regime, $m=R^{-1}$, above which higher order terms in Eqs. (8)-(11) cannot be neglected. Lastly, we mention in passing that active-active oscillations such as $\nu_{\mu}\to\nu_{e}$ are parametrically suppressed in this setup, as is typical of $3+1$ setups, which this scenario essentially is [65, 66, 67]. ## IV Cosmological Probes of Sterile Neutrinos Depending on their properties, such as whether they are relativistic or nonrelativistic and whether they come into chemical equilibrium with the SM bath, the spectrum of KK modes contributes to the radiation energy density or the DM energy density. Before we conduct a detailed study of these effects, we review how the contributions of additional degrees of freedom lead to changes in $\Lambda$-cold dark matter ($\Lambda$CDM) cosmology. In this section we will describe the constraints on extra dimensional models of neutrino masses arising in cosmology. These constraints can be quite strong because a large number of modes in the neutrino KK towers can be appreciably produced even if their mixing angles are small, i.e. even for small $R$ and large $k$ in Eq. (11). In Sec. IV.1, we estimate the number density of each sterile mode produced through “freeze in” via small couplings to weak interactions. For the parameter space we are interested in, most of these states are nonrelativistic after the epoch of primordial nucleosynthesis and can, depending on their lifetimes, contribute to the dark matter density or act as a decaying dark matter component, as we estimate in Sec. IV.2. We will see in Sec. IV.3 that, although they behave as matter, very strong constraints come from the contribution to the effective number of relativistic degrees of freedom when they decay and increase the number of light SM neutrinos. This is particularly important given near-future CMB observations that will tighten these constraints or discover the effects of the heavy KK towers of sterile neutrinos. ### IV.1 Sterile neutrino production in the early universe Despite their small couplings to SM states, the sterile neutrinos can be produced in appreciable quantities in the early universe. The production of the heavier, mostly sterile neutrinos comes through weak interactions controlled by their small active admixture. (Although we do not explore this possibility, allowing the sterile neutrinos to couple to other bosons can qualitatively change this picture; see, e.g., [68, 69].) In the part of parameter space that we are interested in, $m_{i}^{D}R\ll 1$.
We can therefore identify each of the three light neutrino masses with the Dirac mass parameters in the Lagrangian, $m_{i}=m_{i}^{D}$, and ignore the mass splittings between the three mostly sterile neutrinos corresponding to each $k>0$. Thus, for a fixed $k=1,2,\dots$, each of the three states $\nu_{i}^{(k)}$ can be identified with a single state labelled with mass $M_{k}=k/R$. If we make the assumption that the differences among the neutrino flavors in the early Universe are negligible (which is good to the ${\cal O}(1)$ level, sufficient for our purposes), then the production of each of these $k$ states in the early Universe is controlled by the square of the active-sterile mixing $\theta_{k}\simeq\sqrt{2}mR/k$ with $m$ defined in Eq. (16). The number density of $\nu_{k}$ evolves with time according to $\displaystyle\frac{dn_{k}}{dt}+3Hn_{k}$ $\displaystyle=\frac{1}{4}\sin^{2}2\theta_{k,\rm eff}\Gamma_{\nu}n_{\nu}$ (17) In this expression, $H$ is the Hubble expansion rate, $n_{\nu}$ is the number density of an active neutrino species, $\Gamma_{\nu}$ is the production rate of that neutrino, and $\theta_{k,{\rm eff}}$ is the effective active-sterile mixing angle in the presence of a nontrivial density of SM particles. Generically, cosmological limits preclude the sterile neutrinos from achieving chemical equilibrium with the rest of the SM plasma, so we focus on their production through “freeze in,” ignoring a depletion term on the right-hand side of Eq. (17). The effective active-sterile mixing angle can be described in terms of the density of the SM plasma or, equivalently, its temperature $T$ (we use $T$ to refer to the temperature of the active neutrinos; before neutrino decoupling at $T\simeq 3~{}{\rm MeV}$ this is the same temperature as the rest of the SM plasma, while afterwards $e^{+}e^{-}$ annihilations reheat the photons to $T_{\gamma}=(11/4)^{1/3}T$), $\displaystyle\sin 2\theta_{k,\rm eff}=\frac{\sin 2\theta_{k}}{1+(T/T_{V}^{k})^{6}}$ (18) where [70, 37, 71] $\displaystyle T_{V}^{k}$ $\displaystyle\sim 250~{}{\rm MeV}\left(\frac{M_{k}}{1~{}\rm keV}\right)^{1/3}\simeq 320~{}{\rm MeV}\times k^{1/3}\left(\frac{10^{-4}\mu\rm m}{R}\right)^{1/3}.$ (19) At temperatures below the weak scale, $T<T_{w}\sim 100~{}\rm GeV$, $\Gamma_{\nu}=AG_{F}^{2}T^{5}$. We ignore the flavor-dependence of the ${\cal O}(1)$ prefactor $A$ in this rate and fix it to $A=1$ since the ${\cal O}(1)$ error that doing so introduces is subleading. For $T>T_{w}$, $\Gamma_{\nu}\propto T$, although we will see that production above this temperature is negligible; equivalently, production of modes with mass $M_{k}\gtrsim T_{w}$ is very subleading. Rewriting Eq. (17) in terms of the yields $Y_{k,\nu}=n_{k,\nu}/s$, where $s$ is the entropy density of the Universe, and expressing the sterile neutrino production as a function of temperature gives $\displaystyle\frac{dY_{k}}{dT}$ $\displaystyle=-\frac{1}{4}\frac{\sin^{2}2\theta_{k}}{\left[1+(T/T_{V}^{k})^{6}\right]^{2}}\frac{\Gamma_{\nu}Y_{\nu}}{HT}.$ (20) In all of the parameter space that we are interested in, the production of the sterile states occurs in a radiation-dominated Universe, with $H=1.66\sqrt{g_{\star}(T)}T^{2}/M_{\rm Pl}$, where $g_{\star}(T)$ is the effective number of relativistic degrees of freedom and $M_{\rm Pl}=1.2\times 10^{19}~{}\rm GeV$ is the Planck mass.
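With these ingredients in hand, Eq. (20) can be integrated numerically. The sketch below does so for a single KK mode, assuming a constant $g_{\star}=10.75$, $A=1$, and no kinematic cutoff (cutoffs for heavy modes are discussed next), so it should only be expected to reproduce the closed-form estimate quoted below in Eq. (25) up to an ${\cal O}(1)$ factor.

```python
import numpy as np
from scipy.integrate import quad

G_F = 1.166e-5        # Fermi constant [GeV^-2]
M_PL = 1.2e19         # Planck mass [GeV]
GSTAR = 10.75         # effective relativistic d.o.f., held fixed for simplicity
HBARC_EV_UM = 0.1973  # hbar*c [eV micron]

def freeze_in_ratio(k, m_eV=0.1, R_um=1e-4):
    """Integrate Eq. (20) over temperature for mode k, returning n_k/n_nu
    (i.e. Y_k/Y_nu with Y_nu treated as constant)."""
    M_k_keV = k * (HBARC_EV_UM / R_um) * 1e-3     # M_k = k/R in keV
    T_V = 0.25 * M_k_keV ** (1.0 / 3.0)           # Eq. (19), in GeV
    theta_k = np.sqrt(2.0) * m_eV * R_um / (HBARC_EV_UM * k)
    s2_2th = np.sin(2.0 * theta_k) ** 2

    def dYdT_over_Ynu(T):
        # Integrand of Eq. (20): (1/4) sin^2(2 theta_k) / [1 + (T/T_V)^6]^2 * Gamma_nu/(H T),
        # with Gamma_nu = G_F^2 T^5 and H = 1.66 sqrt(g*) T^2 / M_Pl.
        suppression = 1.0 / (1.0 + (T / T_V) ** 6) ** 2
        return 0.25 * s2_2th * suppression * G_F**2 * T**2 * M_PL / (1.66 * np.sqrt(GSTAR))

    val, _ = quad(dYdT_over_Ynu, 0.0, 50.0 * T_V)
    return val

for k in (1, 10, 100):
    print(k, freeze_in_ratio(k))   # roughly ~1e-2/k for m = 0.1 eV, R = 1e-4 micron
```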
In what follows, we will investigate two scenarios: (i) a high-reheat scenario where the SM plasma is reheated after inflation to a temperature well above the weak scale, $T_{\rm rh}\gg T_{w}$, and (ii) a low-reheat scenario where the SM plasma is only reheated to a temperature $T_{\rm rh}\sim{\cal O}(\rm few~{}MeV)$, which we will fix to $T_{\rm rh}=5~{}\rm MeV$. This low-reheat scenario leads to relaxed bounds from the production of sterile neutrinos while still being high enough to successfully accommodate primordial nucleosynthesis. The production of sterile neutrino $k$ occurs dominantly at temperatures $T\sim{\rm min}(T_{V}^{k},T_{w})$, which we assume the SM plasma achieves in the high-reheat scenario. Equation (20) does not reflect the Boltzmann suppression of the production of modes with masses larger than this temperature. To mock up this effect, we cut off the production of modes with $M_{k}>{\rm min}(T_{V}^{k},T_{w})$ or, equivalently, those with $\displaystyle k>k_{\rm kin}\equiv 5.1\times 10^{7}\left(\frac{R}{10^{-4}~{}\mu{\rm m}}\right).$ (21) Note that for all $k\lesssim k_{\rm kin}$, $T_{V}^{k}\lesssim T_{w}$, which means that we can ignore production for temperatures above the weak scale. The kinematic cutoff is stricter in the low-reheat case. Modes with $M_{k}>T_{\rm rh}$, i.e. $\displaystyle k>k_{\rm kin}^{\rm low}\simeq 2.54\times 10^{3}\left(\frac{R}{10^{-4}~{}\mu{\rm m}}\right)\left(\frac{T_{\rm rh}}{5~{}{\rm MeV}}\right)$ (22) have exponentially suppressed production and we do not include them in our estimates. This weakens the limits drastically. So far, we have not discussed the lifetime of the sterile states that are produced. Since we can focus on the production of states with mass below the weak scale, decays of such states are mediated by the 4-Fermi weak interaction dressed by an active-sterile mixing angle, $\displaystyle\tau_{k}$ $\displaystyle\sim\frac{\tau_{\mu}}{\theta_{k}^{2}}\left(\frac{m_{\mu}}{M_{k}}\right)^{5}\simeq 1.9\times 10^{26}~{}{\rm s}\left(\frac{0.1~{}\rm eV}{m}\right)^{2}\left(\frac{R}{10^{-4}~{}\mu{\rm m}}\right)^{3}k^{-3}.$ (23) The heaviest state that is produced has a lifetime $\displaystyle\tau_{k_{\rm kin}}$ $\displaystyle\sim 1.4\times 10^{3}~{}{\rm s}\left(\frac{0.1~{}\rm eV}{m}\right)^{2}$ (24) and all other states have a longer lifetime (since $k<k_{\rm kin}$). Thus, for $m\lesssim 10~{}{\rm eV}$, all of the states decay after SM neutrino decoupling. We can integrate Eq. (20) in the high-reheat scenario to find $\displaystyle\frac{n_{k}}{n_{\nu}}$ $\displaystyle\sim 2.32\times 10^{-2}\left(\frac{m}{0.1~{}\rm eV}\right)^{2}\left(\frac{R}{10^{-4}~{}\mu\rm m}\right)\frac{1}{k}\left[\frac{10.75}{g_{\star}(T_{V}^{k})}\right]^{11/6}.$ (25) To arrive at this simple expression, we have ignored the change in the number of relativistic degrees of freedom during the relatively short temperature range where most production of mode $k$ occurs. We have also multiplied the result by a factor of $\left[10.75/g_{\star}(T_{V}^{k})\right]^{4/3}$ to account for entropy injection between production and the epoch of SM neutrino decoupling. Note that the dependence of Eq. (25) on the effective number of degrees of freedom agrees with [7]. In the low-reheat scenario, we will make use of the fact that for $R\lesssim 3\mu{\rm m}$, i.e. the parameter space that we are interested in, $T_{V}^{k}>T_{\rm rh}\sim 5~{}{\rm MeV}$. This means that during production the active-sterile mixing angle is close to its vacuum value. Integrating Eq.
(20) from $T_{\rm rh}$ in this case gives $\displaystyle\frac{n_{k}^{\rm low}}{n_{\nu}}$ $\displaystyle\sim 8.8\times 10^{-8}\left(\frac{m}{0.1~{}\rm eV}\right)^{2}\left(\frac{R}{10^{-4}~{}\mu{\rm m}}\right)^{2}\frac{1}{k^{2}}\left[\frac{10.75}{g_{\star}(T_{\rm rh})}\right]^{11/6}\left(\frac{T_{\rm rh}}{5~{}{\rm MeV}}\right)^{3}.$ (26) ### IV.2 Contribution to the matter density Figure 2: Upper bounds on $R$ as a function of $m$ defined in Eq. (16) in the high-reheat (left) and low-reheat (right) cases. We show bounds from the present limit $\Delta N_{\rm eff}<0.3$ (red, solid), not overclosing the universe (green, solid), as well as our estimate of the bound from a decaying dark matter component (magenta, solid). The region where astrophysical X-ray searches for the decay of sterile neutrino dark matter could be sensitive is within the region labeled “DM Decay” (black, solid). Also shown are terrestrial constraints from Fig. 1 and the preferred regions for the RAA anomaly [63] (teal, solid) and BEST [64] (pink, solid). We plot projected sensitivities of the HUNTER [72, 73], MAGNETO-$\nu$ [74], TRISTAN [75], and BeEST [76, 77] sterile neutrino search experiments (blue, dashed). See text for details. The presence of cold DM on cosmological scales can be distinguished from the presence of ordinary matter through measurements of the CMB angular power spectrum at small angular scales [78]. The best current measurements indicate DM contributes $\Omega_{\rm CDM}h^{2}\simeq 0.1$ to the universe’s overall energy density [78] where $\Omega$ represents the energy density in units of the critical energy density $\rho_{\rm cr}=10h^{2}~{}{\rm keV/cm^{3}}$ with $h$ related to the current Hubble expansion rate through $H=100h~{}{\rm km/s/Mpc}$. That said, DM need only be stable on cosmological time scales and it is possible that some fraction of what we observe as DM today has decayed since the formation of the CMB. If DM decays between the time of recombination and today, this alters the growth of structure, an effect that can be measured or constrained by angular anisotropies in the CMB [79, 80, 81]. The constraints on dark matter that decays to radiation are that no more than about 5% of the dark matter density could have decayed between recombination and the present day; this translates to $\Omega_{\rm ddm}h^{2}<0.005$ if the decaying component’s lifetime is short compared to $t_{U}=13.6\times 10^{9}~{}{\rm yr}$, the age of the Universe, or to the decaying dark matter’s lifetime being larger than about $20t_{U}$ if it makes up the entirety of the dark matter energy budget. Those KK-modes which have lifetimes $\tau_{k}\geq t_{U}$ contribute to the present-day DM density $\Omega_{\rm dm}h^{2}$ while those with $t_{U}\gtrsim\tau_{k}\geq t_{\rm CMB}$ contribute to a decaying DM density $\Omega_{\rm ddm}h^{2}$.444Sterile neutrinos are of course among the most well-studied DM candidates [82, 70]. If their masses are $\sim 10~{}\rm keV$ or smaller (corresponding to $R\gtrsim 10^{-3}$) they are a warm dark matter candidate and bounds can be placed from structure formation [83, 84]. Determining the precise bounds when there could be multiple DM subcomponents with different masses and mixing angles is extremely complicated and beyond the scope of the present work. See, e.g., Ref. [85] for a recent study on warm dark matter in a nonstandard sterile neutrino setup.
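For quick numerical orientation, the closed-form estimates above can be packaged as a small helper; the prefactors are copied directly from Eqs. (21)–(23) and (25)–(26), and the $g_{\star}$ factors are evaluated at a single assumed value. This sketch is illustrative only and is not the code used for the figures.

```python
def kk_mode_estimates(m_ev, R_um, k, T_rh_MeV=5.0, gstar=10.75):
    """Closed-form estimates for a single KK mode k, prefactors from the text."""
    g = 10.75 / gstar
    return {
        "k_kin":       5.1e7 * (R_um / 1e-4),                                  # Eq. (21)
        "k_kin_low":   2.54e3 * (R_um / 1e-4) * (T_rh_MeV / 5.0),              # Eq. (22)
        "tau_k_sec":   1.9e26 * (0.1 / m_ev)**2 * (R_um / 1e-4)**3 / k**3,     # Eq. (23)
        "n_over_nnu_high": 2.32e-2 * (m_ev / 0.1)**2 * (R_um / 1e-4) / k
                           * g**(11.0 / 6.0),                                  # Eq. (25)
        "n_over_nnu_low":  8.8e-8 * (m_ev / 0.1)**2 * (R_um / 1e-4)**2 / k**2
                           * g**(11.0 / 6.0) * (T_rh_MeV / 5.0)**3,            # Eq. (26)
    }

print(kk_mode_estimates(m_ev=0.1, R_um=1e-4, k=1))
```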
Once produced, the sterile modes have kinetic energies comparable to the SM neutrinos and free stream until the temperature is of order their mass.555The relatively small reheating of the active neutrinos by the annihilation of SM states after they become nonrelativistic does not change our results appreciably. Since we are interested in $R\lesssim\mu{\rm m}$, even the lightest of the sterile modes is nonrelativistic at the epoch of matter-radiation equality. The energy density in these neutrinos contributes to the matter density. In the high-reheat scenario, we use Eq. (25) to write $\displaystyle\rho_{k}\simeq M_{k}n_{k}$ $\displaystyle\sim 45.7~{}{\rm eV}\left(\frac{m}{0.1~{}\rm eV}\right)^{2}\left[\frac{10.75}{g_{\star}(T_{V}^{k})}\right]^{11/6}n_{\nu}.$ (27) Importantly, except for the mild dependence through the number of SM degrees of freedom at production, this expression is independent of the mass of the state $k/R$ so that each state in the KK tower that is produced contributes roughly equally. The contribution of these states to the present-day matter density in units of the critical density is $\displaystyle\Omega_{\rm dm}h^{2}$ $\displaystyle=\sum_{k=1}^{k_{\rm kin}}\frac{\rho_{k}}{\rho_{\rm cr}}e^{-t_{U}/\tau_{k}}\sim\sum_{k=1}^{k_{t_{U}}}0.5\left(\frac{m}{0.1~{}\rm eV}\right)^{2}\left[\frac{10.75}{g_{\star}(T_{V}^{k})}\right]^{11/6}.$ (28) In the last equality above we have approximated $\exp(-t_{U}/\tau_{k})=\theta(k_{t_{U}}-k)$ where $\displaystyle k_{t_{U}}\equiv 765\left(\frac{0.1~{}\rm eV}{m}\right)^{2/3}\left(\frac{R}{10^{-4}~{}\mu{\rm m}}\right)$ (29) is the value of $k$ such that $\tau_{k}=t_{U}$. In addition, the sterile states can behave as a decaying dark matter component if they decay between the time of CMB formation, $t_{\rm CMB}=3.8\times 10^{5}~{}{\rm yr}$, and the present day, $\displaystyle\Omega_{\rm ddm}h^{2}$ $\displaystyle=\sum_{k=1}^{k_{\rm kin}}\frac{\rho_{k}}{\rho_{\rm cr}}e^{-t_{\rm CMB}/\tau_{k}}\left[1-e^{-t_{U}/\tau_{k}}\right]\sim\sum_{k=k_{t_{U}}}^{k_{t_{\rm CMB}}}0.5\left(\frac{m}{0.1~{}\rm eV}\right)^{2}\left[\frac{10.75}{g_{\star}(T_{V}^{k})}\right]^{11/6}.$ (30) In the last step we have again made the same step function approximations of the exponentials and introduced $\displaystyle k_{t_{\rm CMB}}\equiv 2.5\times 10^{4}\left(\frac{0.1~{}\rm eV}{m}\right)^{2/3}\left(\frac{R}{10^{-4}~{}\mu{\rm m}}\right),$ (31) which satisfies $\tau_{k_{\rm CMB}}=t_{\rm CMB}$. In the low-reheat case, the energy density in mode $k$ is reduced (and $k$-dependent). From Eq. (26), $\displaystyle\rho_{k}^{\rm low}$ $\displaystyle\sim 1.75\times 10^{-4}~{}{\rm eV}\left(\frac{m}{0.1~{}\rm eV}\right)^{2}\left(\frac{R}{10^{-4}~{}\mu{\rm m}}\right)\frac{1}{k}\left[\frac{10.75}{g_{\star}(T_{\rm rh})}\right]^{11/6}\left(\frac{T_{\rm rh}}{5~{}{\rm MeV}}\right)^{3}n_{\nu}.$ (32) Note that, in contrast to the high reheat case in Eq. (27), this energy density depends on the mass of mode $k$, with heavier modes contributing less. To find the contribution to the (decaying) dark matter density, we sum the contributions up to $k_{\rm kin}^{\rm low}$ in Eq. (22) and weight each term by the appropriate factors to account for their lifetimes as done in the high- reheat case in Eqs. (28) and (30). In Fig. 2, we show the upper bounds on $R$ as a function of $m$ that come from not overclosing the universe, $\Omega_{\rm dm}h^{2}<0.12$ [78]. We also show an estimated upper bound from the decaying DM limit, $\Omega_{\rm ddm}h^{2}<5\%\times 0.12$ [79, 80, 81]. 
This applies in our case since the majority of the energy in the sterile neutrino decays is carried away by relativistic species. For values of $m$ larger than about $0.3~\text{eV}$, this decaying DM limit could be weakened by the slight degeneracy between the effects of nonzero light neutrino masses and decaying DM. The left panel shows the bounds in the high reheat temperature case while the right does so for $T_{\rm rh}=5~{\rm MeV}$. Proposed searches for sterile neutrinos, such as HUNTER [72, 73], MAGNETO-$\nu$ [74], TRISTAN [75], and BeEST [76, 77], can also probe some of this parameter space; we plot their projected sensitivities, interpreting each as a search for the lightest KK sterile neutrino mode. In the high reheat case, the bounds are far stronger than those from terrestrial experiments and exclude regions of parameter space that explain neutrino oscillation data anomalies. The bounds are weakened by several orders of magnitude in the low reheat case, opening up some of the neutrino anomaly parameter space; however, this region is in some conflict with the lower bound on the light neutrino masses. The upper bound on $R$ from the observed matter density in the low reheat case, for $m\lesssim 1~{\rm eV}$, is larger than $10^{-3}~\mu{\rm m}$. In this part of parameter space, the lightest KK modes are at the $\rm keV$ scale or below, and therefore some of them act as a warm dark matter component. Although beyond the scope of the current work, studying the impact of the mixed warm and cold dark matter components on structure formation could lead to slightly tighter bounds. Astrophysical searches for the decay products of the sterile states that survive until the present day can also be important. Sterile modes with masses of ${\cal O}(10~\rm keV)$, i.e. $R\sim 10^{-4}~\mu{\rm m}$, are subject to searches using x-ray observatories such as Chandra, NuSTAR, and DEBRA [86, 87, 88, 89, 90] due to the loop-level decay to a light neutrino and a monochromatic photon. Owing to its larger number density, the $k=1$ mode dominates this signal. Generally speaking, these searches limit $\Omega_{k=1}h^{2}/\tau_{k=1}\lesssim 10^{-26}~{\rm s}^{-1}.$ (33) On the right panel of Fig. 2, we show the region in the low-reheat case where Eq. (33) is not satisfied, as a rough guide to where indirect searches for decaying dark matter are sensitive. In the high-reheat case, the region that such searches can probe is already excluded by overclosing the universe or by the sterile neutrino states having too short a lifetime. We defer a more complete study, taking into account the multiple lines from other bulk neutrino modes, to future work. ### IV.3 Shift of $\Delta N_{\rm eff}$ Measurements of the CMB provide a very precise picture of the Universe around and after the time of matter-radiation equality, when the temperature of the SM plasma was around $0.1~\rm eV$. In particular, such measurements are very sensitive to the relativistic energy density that is not coupled to the SM plasma, such as neutrinos or other light, noninteracting species.
Conventionally, this is written in terms of $N_{\rm eff}$, which is the ratio of the noninteracting relativistic energy density to that of the electromagnetically interacting plasma, $\rho_{\gamma}$, $N_{\rm eff}=\frac{8}{7}\left(\frac{11}{4}\right)^{4/3}\frac{\sum_{\nu}\rho_{\nu}+\rho_{\rm new}}{\rho_{\gamma}}=N_{\rm eff}^{\rm SM}+\Delta N_{\rm eff},~{}\Delta N_{\rm eff}\equiv\frac{8}{7}\left(\frac{11}{4}\right)^{4/3}\frac{\rho_{\rm new}}{\rho_{\gamma}},$ (34) where $\rho_{\rm new}$ is the contribution to the noninteracting relativistic energy density beyond the SM expectation.666Primordial nucleosynthesis is of course also sensitive to the number of relativistic degrees of freedom when the temperature of the universe was $T\sim 100~{}\rm keV$, with the shift from the standard value typically also parameterized in terms of $\Delta N_{\rm eff}$. Modes with $M_{k}\lesssim 1~{}\text{MeV}$ contribute to this value of $\Delta N_{\rm eff}$. However, in the scenario we study, this shift is always extremely subleading when compared to the CMB value. The prefactor is chosen so that a single thermalized neutrino species contributes $N_{\rm eff}\simeq 1$. A larger $N_{\rm eff}$ corresponds to a larger Hubble expansion rate sourced by free-streaming radiation, which leads to a particular imprint on the CMB at small scales [91, 92, 93]. Observations from Planck presently limit $\Delta N_{\text{eff}}<0.3$ [78]. Upcoming cosmological surveys such as CMB Stage-IV will probe $\Delta N_{\text{eff}}$ to around $0.06$ [94, 95]. The heavy, mostly sterile neutrinos whose production we have computed above are all nonrelativistic during the CMB formation epoch. Therefore they do not directly contribute to $N_{\rm eff}$. However, they decay via weak interactions through their small active admixture into standard model states including light, active neutrinos. The active neutrinos that are produced in such decays after neutrino decoupling at $T\sim{\rm MeV}$ but before the formation of the CMB contribute to $\rho_{\rm new}$, i.e. to $\Delta N_{\rm eff}$. To compute $\Delta N_{\rm eff}$ in this scenario, we have to track the energy density in the EM plasma as well as the new contribution in light neutrinos. If sterile mode $k$ decays after going nonrelativistic and deposits energy $r_{k}M_{k}$ into SM neutrinos and $(1-r_{k})M_{k}$ into the EM plasma, the evolution of the relevant energy densities is given by $\displaystyle\frac{d\rho_{\rm new}}{dt}+4H\rho_{\rm new}$ $\displaystyle=\sum_{k}r_{k}\frac{M_{k}n_{k}}{\tau_{k}},$ (35) $\displaystyle\frac{d\rho_{\gamma}}{dt}+4H\rho_{\gamma}$ $\displaystyle=\sum_{k}(1-r_{k})\frac{M_{k}n_{k}}{\tau_{k}}.$ (36) The sterile neutrino number density $n_{k}$ is given by $\displaystyle n_{k}=\frac{n_{k}(t_{0})}{a^{3}}e^{-t/\tau_{k}},$ (37) where $t_{0}$ is an early time after production but before decays become important ($t_{0}\ll\tau_{k}$) and we define the scale factor such that $a(t_{0})=1$. Integrating Eqs.
(35) and (36) we can write the shift in $N_{\rm eff}$ as $\displaystyle\Delta N_{\rm eff}$ $\displaystyle\simeq\sum_{k}f_{k}\frac{M_{k}n_{k}(t_{0})}{\rho_{\nu}(t_{0})\tau_{k}}\int_{t_{0}}^{t_{\rm CMB}}dt\,a(t)e^{-t/\tau_{k}}$ (38) with $\displaystyle f_{k}\equiv r_{k}-\frac{21}{8}\left(\frac{4}{11}\right)^{4/3}(1-r_{k}).$ (39) A simple accounting of available final states, including subsequent decays into neutrinos, naively gives $r_{k}=f_{k}=1$ for $M_{k}<2m_{e}$ and $r_{k}\simeq 0.5$, $f_{k}\simeq 0.16$ for $M_{k}>2m_{e}$.777Note that some of the electromagnetically interacting objects created in the decay of the sterile modes even before decoupling may not fully thermalize with the rest of the plasma, in this case their energy density would contribute to $\rho_{\rm new}$. The error introduced by ignoring this is only ${\cal O}(1)$ and moreover the bounds we derive are conservative. Further bounds could also come from distortions of the spectrum of CMB photons from electromagnetic energy injected after the freeze-out of (double) Compton scattering. For the high- reheat case, in the part of parameter space where the current $\Delta N_{\rm eff}$ bound lies, all modes have $\tau_{k}\ll t_{\rm CMB}$ and the Universe remains radiation-dominated until the usual point of matter-radiation equality around $t_{\rm CMB}$ with $a(t)\propto\sqrt{t}$. In this case, using the freeze-in number density in Eq. (25), the shift can be approximated as $\displaystyle\Delta N_{\rm eff}$ $\displaystyle\simeq\sum_{k=1}^{k_{\rm kin}}0.103\,f_{k}\left(\frac{m}{0.1~{}\rm eV}\right)\left(\frac{R}{10^{-9}~{}\mu{\rm m}}\right)^{3/2}k^{-3/2}\left[\frac{106.75}{g_{\star}(T_{V}^{k})}\right]^{11/6}$ (40) $\displaystyle\simeq 0.03\left(\frac{f_{k}}{0.16}\right)\left(\frac{m}{0.1~{}\rm eV}\right)\left(\frac{R}{10^{-9}~{}\mu{\rm m}}\right)^{3/2}\left[\frac{106.75}{g_{\star}(T_{V}^{k})}\right]^{11/6},$ where we have ignored the variations in $f_{k}$ and $g_{\star}(T_{V}^{k})$ with $k$ in the last step. In the low-reheat case, most of the calculation goes through as above but with the appropriate value of the sterile mode production from Eq. (26). However, as $m$ varies, the dominant contribution to $N_{\rm eff}$ can come from modes with lifetimes smaller or larger than $t_{\rm CMB}$, which affects how the limits scale. For sterile modes with $\tau_{k}>t_{\rm CMB}$, the contribution of a single mode $k$ is $\displaystyle\Delta N_{{\rm eff},k}^{\rm low}$ $\displaystyle\simeq 10^{-17}\,f_{k}\left(\frac{m}{0.1~{}\rm eV}\right)^{4}\left(\frac{10^{-4}~{}\mu{\rm m}}{R}\right)^{2}k^{2}\left(\frac{T_{\rm rh}}{5~{}{\rm MeV}}\right)^{3}\left[\frac{10.75}{g_{\star}(T_{V}^{k})}\right]^{11/6},$ (41) while for those with $\tau_{k}<t_{\rm CMB}$, it is $\displaystyle\Delta N_{{\rm eff},k}^{\rm low}$ $\displaystyle\sim 8.4\times 10^{2}\,f_{k}\left(\frac{m}{0.1~{}\rm eV}\right)\left(\frac{R}{10^{-4}~{}\mu{\rm m}}\right)^{5/2}k^{-5/2}\left(\frac{T_{\rm rh}}{5~{}{\rm MeV}}\right)^{3}\left[\frac{10.75}{g_{\star}(T_{V}^{k})}\right]^{11/6}.$ (42) If $m\lesssim 6~{}{\rm eV}(5~{}{\rm MeV}/T_{\rm rh})^{3/2}$, all modes decay after the formation of the CMB and the sum only includes modes with $\Delta N_{{\rm eff},k}$ as in Eq. (41). For $m\gtrsim 6~{}{\rm eV}(5~{}{\rm MeV}/T_{\rm rh})^{3/2}$ there are contributions from modes with lifetimes longer and shorter than the time of the CMB epoch and the total includes sums over modes with $\Delta N_{{\rm eff},k}$ of the forms in Eqs. 
(41) and (42); those with $\tau_{k}\sim t_{\rm CMB}$ dominate the contribution. Summarizing, we have $\displaystyle\Delta N_{\rm eff}^{\rm low}$ $\displaystyle\simeq f_{k}\left(\frac{m}{0.1~{}\rm eV}\right)^{2}\left(\frac{R}{10^{-4}~{}\mu{\rm m}}\right)\left(\frac{T_{\rm rh}}{5~{}{\rm MeV}}\right)^{3}\left[\frac{10.75}{g_{\star}(T_{V}^{k})}\right]^{11/6}$ (43) $\displaystyle\times\begin{cases}5.3\times 10^{-8}\left(\frac{m}{0.1~{}\rm eV}\right)^{2}\left(\frac{T_{\rm rh}}{5~{}{\rm MeV}}\right)^{3},&m\lesssim 6~{}\text{eV}\left(\frac{5~{}{\rm MeV}}{T_{\rm rh}}\right)^{3/2},\\\ 1.9\times 10^{-4},&m\gtrsim 6~{}\text{eV}\left(\frac{5~{}{\rm MeV}}{T_{\rm rh}}\right)^{3/2}.\end{cases}$ Again, we have ignored the variation with $k$ of $f_{k}$ and $g_{\star}(T_{V}^{k})$ to arrive at these approximate expressions. We show the upper bounds on $R$ varying $m$ from $\Delta N_{\rm eff}<0.3$ in Fig. 2 for both the high reheat and $T_{\rm RH}=5~{}{\rm MeV}$ scenarios. Also shown is the area of parameter space that can be probed by a future measurement of $\Delta N_{\rm eff}$ as small as $0.05$ as hoped for with CMB-S4. In the high reheat scenario, this provides the strongest constraint, limiting $R\lesssim 10^{-9}~{}\mu{\rm m}$ while in the low reheat case, it is subleading to the limits from the (decaying) dark matter density. ## V Conclusions Although extremely successful, the SM has several deficiencies, notably the lack of an explanation of the origin of neutrino masses and the omission of a quantum theory of gravity. Extra dimensions beyond the four we know of are common features of quantum gravity theories and can provide a natural understanding of the smallness of neutrino masses. Such explanations require the addition of new states that propagate in the extra dimensions and couple weakly to the active neutrinos of the standard model, leading to a KK tower of sterile neutrinos. The mass scale of the sterile neutrinos is determined by the size of the compactified extra dimensions. From the point of view of string theory and swampland conjectures, the existence of a LED is well motivated and provides a natural explanation for the tiny masses of SM neutrinos. Models with an extra dimension of size around a micron accommodate a tower of sterile neutrinos with masses starting at the eV scale. Neutrinos of this mass have been hinted at in some terrestrial neutrino experiments. We have computed the production rate of these towers of sterile neutrinos in the early universe as functions of the size of the extra dimension. Although very weakly coupled, the towers of sterile neutrinos can be produced in large enough numbers to drastically affect cosmological observables, such as the (potentially decaying) dark matter density and the number of noninteracting degrees of freedom present at the CMB epoch. Currently, cosmological bounds place very strong limits on the size of the extra dimension in these models, at roughly the $10^{-9}~{}\mu{\rm m}$ level in the case of a standard cosmological history. In this case the leading effect is the shift in the number of noninteracting relativistic degrees of freedom, $N_{\rm eff}$. This rules out a micron-sized extra dimension as an explanation of neutrino experiment anomalies (which is also the case in most sterile neutrino explanations of such anomalies). We have explored the weakening of these limits in the case of a low reheat temperature at the MeV scale, i.e. if the SM plasma never achieved temperatures well above an MeV during its earliest stages. 
This is the lowest reheat temperature compatible with successful primordial nucleosynthesis. In contrast to the high reheat case, the leading bound comes from the sterile neutrinos’ contributions to the matter density, particularly as a component that decays after the formation of the CMB. The bounds are substantially weakened in the low reheat case, potentially opening up an explanation of some electron neutrino disappearance anomalies. However, such an explanation could be in tension with the overall light neutrino mass scale implied by active neutrino oscillation data and limits on a decaying dark matter component. Taken together, seeing deviations from future observations that point to either a nonzero $\Delta N_{\rm eff}$ or a decaying dark matter component could help to distinguish these two scenarios from one another and to shine light on the early stages of the universe’s existence. ###### Acknowledgements. We thank Carlos Henrique de Lima, David Morrissey, and Douglas Tuckler for helpful discussions and comments. The work of DM is supported by a Discovery Grant from the Natural Sciences and Engineering Research Council of Canada (NSERC). MS is supported by TRIUMF which receives federal funding via a contribution agreement with the National Research Council (NRC) of Canada. ## References * Pontecorvo [1957a] B. Pontecorvo, Sov. Phys. JETP 6, 429 (1957a). * Pontecorvo [1967] B. Pontecorvo, Zh. Eksp. Teor. Fiz. 53, 1717 (1967). * Abe _et al._ [2008] S. Abe _et al._ (KamLAND), Phys. Rev. Lett. 100, 221803 (2008), arXiv:0801.4589 [hep-ex] . * Abazajian and Heeck [2019] K. N. Abazajian and J. Heeck, Phys. Rev. D 100, 075027 (2019), arXiv:1908.03286 [hep-ph] . * Adshead _et al._ [2021] P. Adshead, Y. Cui, A. J. Long, and M. Shamma, Phys. Lett. B 823, 136736 (2021), arXiv:2009.07852 [hep-ph] . * Luo _et al._ [2020] X. Luo, W. Rodejohann, and X.-J. Xu, JCAP 06, 058 (2020), arXiv:2005.01629 [hep-ph] . * Luo _et al._ [2021] X. Luo, W. Rodejohann, and X.-J. Xu, JCAP 03, 082 (2021), arXiv:2011.13059 [hep-ph] . * Mahanta and Borah [2022] D. Mahanta and D. Borah, Eur. Phys. J. C 82, 495 (2022), arXiv:2101.02092 [hep-ph] . * Du and Yu [2021] Y. Du and J.-H. Yu, JHEP 05, 058 (2021), arXiv:2101.10475 [hep-ph] . * Heeck _et al._ [2023] J. Heeck, J. Heisig, and A. Thapa, Phys. Rev. D 108, 035014 (2023), arXiv:2304.09893 [hep-ph] . * King _et al._ [2023] S. F. King, D. Marfatia, and M. H. Rahat, (2023), arXiv:2306.05389 [hep-ph] . * Cox _et al._ [2023] P. Cox, T. Gherghetta, and A. Paul, (2023), arXiv:2310.08557 [hep-ph] . * Batell _et al._ [2016] B. Batell, M. Pospelov, and B. Shuve, JHEP 08, 052 (2016), arXiv:1604.06099 [hep-ph] . * Ibanez and Uranga [2012] L. E. Ibanez and A. M. Uranga, _String theory and particle physics: An introduction to string phenomenology_ (Cambridge University Press, 2012). * Cvetic _et al._ [2022] M. Cvetic, J. Halverson, G. Shiu, and W. Taylor, (2022), arXiv:2204.01742 [hep-th] . * Brennan _et al._ [2017] T. D. Brennan, F. Carta, and C. Vafa, PoS TASI2017, 015 (2017), arXiv:1711.00864 [hep-th] . * Palti [2019] E. Palti, Fortsch. Phys. 67, 1900037 (2019), arXiv:1903.06239 [hep-th] . * Agmon _et al._ [2022] N. B. Agmon, A. Bedroya, M. J. Kang, and C. Vafa, (2022), arXiv:2212.06187 [hep-th] . * Vafa [2024] C. Vafa, (2024), arXiv:2402.00981 [hep-ph] . * Montero _et al._ [2023] M. Montero, C. Vafa, and I. Valenzuela, JHEP 02, 022 (2023), arXiv:2205.12293 [hep-th] . * Schwarz [2024] J. H. Schwarz, (2024), arXiv:2403.12899 [hep-th] . * Obied _et al._ [2024] G. Obied, C. 
Dvorkin, E. Gonzalo, and C. Vafa, Phys. Rev. D 109, 063540 (2024), arXiv:2311.05318 [astro-ph.CO] . * Law-Smith _et al._ [2023] J. A. P. Law-Smith, G. Obied, A. Prabhu, and C. Vafa, (2023), arXiv:2307.11048 [hep-ph] . * Gonzalo _et al._ [2023] E. Gonzalo, M. Montero, G. Obied, and C. Vafa, JHEP 11, 109 (2023), arXiv:2209.09249 [hep-ph] . * Arkani-Hamed _et al._ [2001] N. Arkani-Hamed, S. Dimopoulos, G. R. Dvali, and J. March-Russell, Phys. Rev. D 65, 024032 (2001), arXiv:hep-ph/9811448 . * Dienes _et al._ [1999] K. R. Dienes, E. Dudas, and T. Gherghetta, Nucl. Phys. B 557, 25 (1999), arXiv:hep-ph/9811428 . * Mohapatra and Perez-Lorenzana [2001] R. N. Mohapatra and A. Perez-Lorenzana, Nucl. Phys. B 593, 451 (2001), arXiv:hep-ph/0006278 . * Anchordoqui _et al._ [2023] L. A. Anchordoqui, I. Antoniadis, and D. Lust, Phys. Rev. D 107, 083530 (2023), arXiv:2212.08527 [hep-ph] . * McLaughlin and Ng [2000] G. C. McLaughlin and J. N. Ng, Phys. Lett. B 493, 88 (2000), arXiv:hep-ph/0008209 . * McLaughlin and Ng [2001] G. C. McLaughlin and J. N. Ng, Phys. Rev. D 63, 053002 (2001), arXiv:nucl-th/0003023 . * Machado _et al._ [2011] P. A. N. Machado, H. Nunokawa, and R. Zukanovich Funchal, Phys. Rev. D 84, 013003 (2011), arXiv:1101.0003 [hep-ph] . * Machado _et al._ [2012] P. A. N. Machado, H. Nunokawa, F. A. P. dos Santos, and R. Z. Funchal, Phys. Rev. D 85, 073012 (2012), arXiv:1107.2400 [hep-ph] . * Basto-Gonzalez _et al._ [2013] V. S. Basto-Gonzalez, A. Esmaili, and O. L. G. Peres, Phys. Lett. B 718, 1020 (2013), arXiv:1205.6212 [hep-ph] . * Girardi and Meloni [2014] I. Girardi and D. Meloni, Phys. Rev. D 90, 073011 (2014), arXiv:1403.5507 [hep-ph] . * Stenico _et al._ [2018] G. V. Stenico, D. V. Forero, and O. L. G. Peres, JHEP 11, 155 (2018), arXiv:1808.05450 [hep-ph] . * Forero _et al._ [2022] D. V. Forero, C. Giunti, C. A. Ternes, and O. Tyagi, Phys. Rev. D 106, 035027 (2022), arXiv:2207.02790 [hep-ph] . * Abazajian _et al._ [2003] K. Abazajian, G. M. Fuller, and M. Patel, Phys. Rev. Lett. 90, 061301 (2003), arXiv:hep-ph/0011048 . * Dimopoulos and Hall [1987] S. Dimopoulos and L. J. Hall, Phys. Lett. B 196, 135 (1987). * Ghalsasi _et al._ [2015] A. Ghalsasi, D. McKeen, and A. E. Nelson, Phys. Rev. D 92, 076014 (2015), arXiv:1508.05392 [hep-ph] . * Aitken _et al._ [2017] K. Aitken, D. McKeen, T. Neder, and A. E. Nelson, Phys. Rev. D 96, 075009 (2017), arXiv:1708.01259 [hep-ph] . * Elor _et al._ [2019] G. Elor, M. Escudero, and A. Nelson, Phys. Rev. D 99, 035031 (2019), arXiv:1810.00880 [hep-ph] . * Nelson and Xiao [2019] A. E. Nelson and H. Xiao, Phys. Rev. D 100, 075002 (2019), arXiv:1901.08141 [hep-ph] . * Alonso-Álvarez _et al._ [2020] G. Alonso-Álvarez, G. Elor, A. E. Nelson, and H. Xiao, JHEP 03, 046 (2020), arXiv:1907.10612 [hep-ph] . * Elor and McGehee [2021] G. Elor and R. McGehee, Phys. Rev. D 103, 035005 (2021), arXiv:2011.06115 [hep-ph] . * Bhattiprolu _et al._ [2023] P. N. Bhattiprolu, G. Elor, R. McGehee, and A. Pierce, JHEP 01, 128 (2023), arXiv:2210.15653 [hep-ph] . * Berger and Elor [2024] J. Berger and G. Elor, Phys. Rev. Lett. 132, 081002 (2024), arXiv:2301.04165 [hep-ph] . * Silva-Malpartida _et al._ [2023] J. Silva-Malpartida, N. Bernal, J. Jones-Pérez, and R. A. Lineros, JCAP 09, 015 (2023), arXiv:2306.14943 [hep-ph] . * Bernal _et al._ [2024] N. Bernal, P. Konar, and S. Show, Phys. Rev. D 109, 035018 (2024), arXiv:2311.01587 [hep-ph] . * Abazajian and Escudero [2023] K. N. Abazajian and H. G. Escudero, Phys. Rev. D 108, 123036 (2023), arXiv:2309.11492 [hep-ph] . 
* Pontecorvo [1957b] B. Pontecorvo, Zh. Eksp. Teor. Fiz. 34, 247 (1957b). * Maki _et al._ [1962] Z. Maki, M. Nakagawa, and S. Sakata, Prog. Theor. Phys. 28, 870 (1962). * Esteban _et al._ [2020] I. Esteban, M. C. Gonzalez-Garcia, M. Maltoni, T. Schwetz, and A. Zhou, JHEP 09, 178 (2020), arXiv:2007.14792 [hep-ph] . * Fukuda _et al._ [1998] Y. Fukuda _et al._ (Super-Kamiokande), Phys. Rev. Lett. 81, 1562 (1998), arXiv:hep-ex/9807003 . * Ambrosio _et al._ [2001] M. Ambrosio _et al._ (MACRO), Phys. Lett. B 517, 59 (2001), arXiv:hep-ex/0106049 . * Allison _et al._ [2003] W. W. M. Allison _et al._ (Soudan 2), Phys. Rev. D 68, 113004 (2003), arXiv:hep-ex/0307069 . * Aker _et al._ [2022a] M. Aker _et al._ (KATRIN), Nature Phys. 18, 160 (2022a), arXiv:2105.08533 [hep-ex] . * Aker _et al._ [2022b] M. Aker _et al._ (KATRIN), Phys. Rev. D 105, 072004 (2022b), arXiv:2201.11593 [hep-ex] . * Almazán _et al._ [2020] H. Almazán _et al._ (STEREO), Phys. Rev. D 102, 052002 (2020), arXiv:1912.06582 [hep-ex] . * Adamson _et al._ [2020] P. Adamson _et al._ (MINOS+, Daya Bay), Phys. Rev. Lett. 125, 071801 (2020), arXiv:2002.00301 [hep-ex] . * Ko _et al._ [2017] Y. J. Ko _et al._ (NEOS), Phys. Rev. Lett. 118, 121802 (2017), arXiv:1610.05134 [hep-ex] . * Abrahão _et al._ [2021] T. Abrahão _et al._ (Double Chooz), Eur. Phys. J. C 81, 775 (2021), arXiv:2009.05515 [hep-ex] . * Atif _et al._ [2022] Z. Atif _et al._ (RENO, NEOS), Phys. Rev. D 105, L111101 (2022), arXiv:2011.00896 [hep-ex] . * Mention _et al._ [2011] G. Mention, M. Fechner, T. Lasserre, T. A. Mueller, D. Lhuillier, M. Cribier, and A. Letourneau, Phys. Rev. D 83, 073006 (2011), arXiv:1101.2755 [hep-ex] . * Barinov _et al._ [2022] V. V. Barinov _et al._ , Phys. Rev. C 105, 065502 (2022), arXiv:2201.07364 [nucl-ex] . * Bilenky _et al._ [1998] S. M. Bilenky, C. Giunti, and W. Grimus, Eur. Phys. J. C 1, 247 (1998), arXiv:hep-ph/9607372 . * Okada and Yasuda [1997] N. Okada and O. Yasuda, Int. J. Mod. Phys. A 12, 3669 (1997), arXiv:hep-ph/9606411 . * Kopp _et al._ [2013] J. Kopp, P. A. N. Machado, M. Maltoni, and T. Schwetz, JHEP 05, 050 (2013), arXiv:1303.3011 [hep-ph] . * De Gouvêa _et al._ [2020] A. De Gouvêa, M. Sen, W. Tangarife, and Y. Zhang, Phys. Rev. Lett. 124, 081802 (2020), arXiv:1910.04901 [hep-ph] . * Kelly _et al._ [2020] K. J. Kelly, M. Sen, W. Tangarife, and Y. Zhang, Phys. Rev. D 101, 115031 (2020), arXiv:2005.03681 [hep-ph] . * Dodelson and Widrow [1994] S. Dodelson and L. M. Widrow, Phys. Rev. Lett. 72, 17 (1994), arXiv:hep-ph/9303287 . * Abazajian [2006] K. Abazajian, Phys. Rev. D 73, 063506 (2006), arXiv:astro-ph/0511630 . * Smith [2019] P. F. Smith, New J. Phys. 21, 053022 (2019), arXiv:1607.06876 [physics.ins-det] . * Martoff _et al._ [2021] C. J. Martoff _et al._ , Quantum Sci. Technol. 6, 024008 (2021). * Kim [2023] G.-B. Kim (MAGNETO), (UCLA Dark Matter 2023). * Mertens _et al._ [2019] S. Mertens _et al._ (KATRIN), J. Phys. G 46, 065203 (2019), arXiv:1810.06711 [physics.ins-det] . * Friedrich _et al._ [2021] S. Friedrich _et al._ , Phys. Rev. Lett. 126, 021803 (2021), arXiv:2010.09603 [nucl-ex] . * Leach and Friedrich [2022] K. G. Leach and S. Friedrich (BeEST), J. Low Temp. Phys. 209, 796 (2022), arXiv:2112.02029 [nucl-ex] . * Aghanim _et al._ [2020] N. Aghanim _et al._ (Planck), Astron. Astrophys. 641, A6 (2020), [Erratum: Astron.Astrophys. 652, C4 (2021)], arXiv:1807.06209 [astro-ph.CO] . * Poulin _et al._ [2016] V. Poulin, P. D. Serpico, and J. Lesgourgues, JCAP 08, 036 (2016), arXiv:1606.02073 [astro-ph.CO] . 
* Nygaard _et al._ [2021] A. Nygaard, T. Tram, and S. Hannestad, JCAP 05, 017 (2021), arXiv:2011.01632 [astro-ph.CO] . * Simon _et al._ [2022] T. Simon, G. Franco Abellán, P. Du, V. Poulin, and Y. Tsai, Phys. Rev. D 106, 023516 (2022), arXiv:2203.07440 [astro-ph.CO] . * Gunn _et al._ [1978] J. E. Gunn, B. W. Lee, I. Lerche, D. N. Schramm, and G. Steigman, Astrophys. J. 223, 1015 (1978). * Bode _et al._ [2001] P. Bode, J. P. Ostriker, and N. Turok, Astrophys. J. 556, 93 (2001), arXiv:astro-ph/0010389 . * Schneider _et al._ [2012] A. Schneider, R. E. Smith, A. V. Maccio, and B. Moore, Mon. Not. Roy. Astron. Soc. 424, 684 (2012), arXiv:1112.0330 [astro-ph.CO] . * An _et al._ [2023] R. An, V. Gluscevic, E. O. Nadler, and Y. Zhang, Astrophys. J. Lett. 954, L18 (2023), arXiv:2301.08299 [astro-ph.CO] . * Horiuchi _et al._ [2014] S. Horiuchi, P. J. Humphrey, J. Onorbe, K. N. Abazajian, M. Kaplinghat, and S. Garrison-Kimmel, Phys. Rev. D 89, 025017 (2014), arXiv:1311.0282 [astro-ph.CO] . * Sicilian _et al._ [2020] D. Sicilian, N. Cappelluti, E. Bulbul, F. Civano, M. Moscetti, and C. S. Reynolds, Astrophys. J. 905, 146 (2020), arXiv:2008.02283 [astro-ph.HE] . * Roach _et al._ [2020] B. M. Roach, K. C. Y. Ng, K. Perez, J. F. Beacom, S. Horiuchi, R. Krivonos, and D. R. Wik, Phys. Rev. D 101, 103011 (2020), arXiv:1908.09037 [astro-ph.HE] . * Roach _et al._ [2023] B. M. Roach, S. Rossland, K. C. Y. Ng, K. Perez, J. F. Beacom, B. W. Grefenstette, S. Horiuchi, R. Krivonos, and D. R. Wik, Phys. Rev. D 107, 023009 (2023), arXiv:2207.04572 [astro-ph.HE] . * Boyarsky _et al._ [2006] A. Boyarsky, A. Neronov, O. Ruchayskiy, and M. Shaposhnikov, Mon. Not. Roy. Astron. Soc. 370, 213 (2006), arXiv:astro-ph/0512509 . * Hou _et al._ [2013] Z. Hou, R. Keisler, L. Knox, M. Millea, and C. Reichardt, Phys.Rev. D87, 083008 (2013), arXiv:1104.2333 [astro-ph.CO] . * Bashinsky and Seljak [2004] S. Bashinsky and U. Seljak, Phys. Rev. D 69, 083002 (2004), arXiv:astro-ph/0310198 . * Follin _et al._ [2015] B. Follin, L. Knox, M. Millea, and Z. Pan, Phys. Rev. Lett. 115, 091301 (2015), arXiv:1503.07863 [astro-ph.CO] . * Abazajian _et al._ [2016] K. N. Abazajian _et al._ (CMB-S4), (2016), arXiv:1610.02743 [astro-ph.CO] . * Abazajian _et al._ [2019] K. Abazajian _et al._ , (2019), arXiv:1907.04473 [astro-ph.IM] .
# Meta Learning on a Sequence of Imbalanced Domains with Difficulty Awareness Zhenyi Wang, Tiehang Duan, Le Fang, Qiuling Suo, Mingchen Gao ###### Abstract Recognizing new objects by learning from a few labeled examples in an evolving environment is crucial to obtain excellent generalization ability for real-world machine learning systems. A typical setting across current meta learning algorithms assumes a stationary task distribution during meta training. In this paper, we explore a more practical and challenging setting where task distribution changes over time with domain shift. Particularly, we consider realistic scenarios where the task distribution is highly imbalanced and domain labels are unavailable. We propose a kernel-based method for domain change detection and a difficulty-aware memory management mechanism that jointly considers the imbalanced domain size and domain importance to learn across domains continuously. Furthermore, we introduce an efficient adaptive task sampling method during meta training, which significantly reduces task gradient variance with theoretical guarantees. Finally, we propose a challenging benchmark with imbalanced domain sequences and varied domain difficulty. We have performed extensive evaluations on the proposed benchmark, demonstrating the effectiveness of our method. We make our code publicly available at https://github.com/joey-wang123/Imbalancemeta.git. ## 1 Introduction Learning from a few labeled examples to acquire skills for a new task is essential for achieving machine intelligence. Take object recognition in a personalized self-driving system as an example [10]. Learning each user’s personal driving preference model forms one task. The system is first deployed in a small city, Rochester. The company later extends its market to New York. The user base of New York is much larger than that of Rochester, causing domain imbalance. Also, after adapting to New York users, the learned user behavior from Rochester will be easily forgotten. Similar scenarios occur when learning to solve NLP tasks on a sequence of different languages [13] with imbalanced resources across languages. Figure 1: Illustration of meta learning for few shot object recognition on a sequence of imbalanced domains. The problems we focus on include domain change detection, memory management, and sampling memory tasks for joint training with streaming tasks. Meta learning is a promising approach for solving such few-shot learning problems. One common assumption of current models is that the task distribution is stationary during meta training. However, real world scenarios (such as the above self-driving system) are more complex and often involve learning across different domains (environments), with challenges such as: (1) task distributions change among different domains; (2) tasks from previous domains are usually unavailable when training on a new domain; (3) the number of tasks from each domain could be highly imbalanced; (4) domain difficulty could vary significantly across the domain sequence. An example is shown in Figure 1.
Directly applying current meta learning models to such scenarios is not suitable to tackle these challenges, e.g., the object recognition accuracy of meta learned neural networks generally deteriorates significantly on previous context after adapting to a new environment [24, 47, 64]. In this work, we cope with such challenges by considering a more realistic problem setting that (1) learning on a sequence of domains; (2) task stream contains significant domain size imbalance; (3) domain labels and boundaries remain unavailable during both training and testing; (4) domain difficulty is non-uniform across the domain sequence. We term such problem setup as Meta Learning on a Sequence of Imbalanced Domains with Varying Difficulty (MLSID). MLSID requires the meta learning model both adapting to a new domain and retaining the ability to recognize objects from previous domains. To tackle this challenging problem, we adopt replay-based approaches, i.e., a small number of tasks from previous domains are maintained in a memory buffer. Accordingly, there are two main problems that need to be solved: (1) how to determine which task should be stored into the memory buffer and which to be moved out. To address this problem, we propose an adaptive memory management mechanism based on the domain distribution and difficulty, so that the tasks in memory buffer could maximize the retained knowledge of previous domains; (2) how to determine which tasks to sample from memory during meta training. We propose an efficient adaptive task sampling approach to accelerate meta training and reduce gradient estimation variance according to our derived optimal task sampling distribution. Our intuition is that not all tasks are equally important for joint training at different iterations. It is thus desirable to dynamically determine which tasks to sample and to be jointly trained with current tasks to mitigate catastrophic forgetting at each training iteration. Our contributions are summarized as following: * • To our best knowledge, this is the first work of meta learning on a sequence of imbalanced domains. For convenient evaluation of different models, we propose a new challenging benchmark consisting of imbalanced domain sequences. * • We propose a novel mechanism, “Memory Management with Domain Distribution and Difficulty Awareness”, to maximize the retained knowledge of previous domains in the memory buffer. * • We propose an efficient adaptive task sampling method during meta training, which significantly reduces gradient estimation variance with theoretical guarantees, making the meta training process more stable and boosting the model performance. * • Our method is orthogonal to specific meta learning methods and can be integrated with them seamlessly. Extensive experiments with gradient-based and metric-based meta learning methods on the proposed benchmark demonstrate the effectiveness of our method. ## 2 Problem Setting A series of mini-batch training tasks ${\mathcal{T}}_{1},{\mathcal{T}}_{2},\dots,{\mathcal{T}}_{N}$ arrive sequentially, with possible domain shift occurring in the stream, i.e., the task stream can be segmented by continual latent domains, ${\mathcal{D}}_{1},{\mathcal{D}}_{2},\dots,{\mathcal{D}}_{L}$. $\mathcal{T}_{t}$ denotes the mini-batch of tasks arrived at time $t$. The domain identity associated with each task remains unavailable during both meta training and testing. Domain boundaries, i.e., indicating current domain has finished and the next domain is about to start, are unknown. 
This is a more practical and general setup. Each task $\mathcal{T}$ is divided into training and testing data $\\{\mathcal{T}^{train},\mathcal{T}^{test}\\}$. Suppose $\mathcal{T}_{t}^{train}$ consists of $K$ examples, $\\{(\bm{x}^{k},\bm{y}^{k})\\}_{k=1}^{K}$, where in object recognition, $\bm{x}^{k}$ is the image data and $\bm{y}^{k}$ is the corresponding object label. We assume the agent stays within each domain for some consecutive time. Also, we consider a simplified setting where the agent will not return back to previous domains and put the contrary case into future work. Our proposed learning system maintains a memory buffer $\mathcal{M}$ to store a small number of training tasks from previous domains for replay to avoid forgetting of previous knowledge. Old tasks are not revisited during training unless they are stored in the memory $\mathcal{M}$. The total number of tasks processed is much larger than memory capacity. At the end of meta training, we randomly sample a large number of unseen few-shot tasks from each latent domain, ${\mathcal{D}}_{1},{\mathcal{D}}_{2},\dots,{\mathcal{D}}_{L}$ for meta testing. The model performance is the average accuracy on all the sampled tasks. ## 3 Methodology ### 3.1 Conventional Reservoir Sampling and Its Limitations Reservoir sampling (RS) [58, 15] is a random sampling method for choosing $k$ samples from a data stream in a single pass without knowing the actual value of total number of items in advance. Straightforward adoption of RS here is to maintain a fixed memory and uniformly sample tasks from the task stream. Each task in the stream is assigned equal probability $\frac{n}{N}$ of being moved into the memory buffer, where $n$ is the memory capacity size and $N$ is the total number of tasks seen so far. However, it is not suitable for the practical scenarios previously described, with two major shortcomings: (1) the task distribution in memory can be skewed when the input task stream is highly imbalanced in our setting. This leads to under-representation of the minority domains; (2) the importance of each task varies as some domains are more difficult to learn than others. This factor is also not taken into account with RS. (a) reservoir sampling (b) proposed memory management Figure 2: An example of (a) reservoir sampling and (b) proposed memory management method jointly considering domain distribution and difficulty when meta learning on task stream from three latent domains. To address the above issues, we propose to first detect domain change in the input task stream to associate each task with a latent domain label. We then present a new mechanism, called Memory Management with Domain Distribution and Difficulty Awareness by utilizing the associated latent domain label with each task. For simple illustration, we construct an imbalanced input task stream from Miniimagenet, Omniglot and Aircraft as shown in Figure 2. Evidently, the resulting distribution of stored tasks with RS is highly imbalanced and dramatically influenced by the input task stream distribution. In contrast, our memory management mechanism balances the three domain proportions by jointly considering domain distribution and difficulty. 
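For reference, the reservoir sampling baseline discussed above amounts to the following update rule (a toy sketch, not our implementation); running it on an imbalanced stream reproduces the skewed buffer composition illustrated in Figure 2(a).

```python
import random

def reservoir_update(memory, task, N, capacity):
    """One step of standard reservoir sampling (Algorithm R): after N tasks have
    streamed by, each of them resides in the buffer with probability capacity/N."""
    if len(memory) < capacity:
        memory.append(task)
    else:
        j = random.randrange(N)      # uniform over the N tasks seen so far
        if j < capacity:
            memory[j] = task         # evict a uniformly chosen slot

# Toy imbalanced stream: the buffer composition mirrors the stream imbalance.
memory, capacity = [], 60
stream = ["Omniglot"] * 900 + ["Miniimagenet"] * 80 + ["Aircraft"] * 20
for N, task in enumerate(stream, start=1):
    reservoir_update(memory, task, N, capacity)
print({d: memory.count(d) for d in set(memory)})
```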
Model Summary: We first illustrate on our domain change detection component in Section 3.2, which is used for (1) managing and balancing tasks in the memory buffer by incorporating the task difficulty (defined in Section 3.4) to determine whether the new incoming task should be moved into the memory and which old task should be moved out of memory in Section 3.3; (2) adaptive sampling tasks from memory buffer during meta training by dynamically adjusting the sampling probability of each task in the memory according to the task gradient for mitigating catastrophic forgetting in Section 3.4. ### 3.2 Online Domain Change Detection Online domain change detection is a difficult problem due to: (1) few shot tasks are highly diverse within a single domain; (2) there are varying degree of variations at domain boundaries across the sequence. In our initial study, we found that it is inadequate to set a threshold on the change of mini-batch task loss value for detecting domain change. We thus construct a low dimensional projected space and perform online domain change detection on this space. #### Projected space Tasks $\mathcal{T}_{t}$ are mapped into a common space $\bm{o}_{t}=f_{{\bm{\theta}}_{t}}({\mathcal{T}_{t}^{train}})=\frac{1}{K}\sum_{k=1}^{K}f_{{\bm{\theta}}_{t}}(\\{\bm{x}^{k}\\})$ where $K$ is the number of training data and $f_{{\bm{\theta}}_{t}}$ is the CNN embedding network. The task embedding could be further refined by incorporating the image labels, e.g., concatenating the word embedding of the image categories with image embedding. We leave this direction as interesting future work. To reduce the variance across different few shot tasks and capture the general domain information, we compute the exponential moving average of task embedding $\bm{O}_{t}$ as $\bm{O}_{t}=\alpha\bm{o}_{t}+(1-\alpha)\bm{O}_{t-1}$, where the constant $\alpha$ is the weighting multiplier which encodes the relative importance between current task embedding and past moving average. A sliding window stores the past $m$ ($m$ is a small number) steps moving average, $\bm{O}_{t-1},\bm{O}_{t-2},\cdots,\bm{O}_{t-m}$, which are used to form the low dimensional projection vector $\operatorname{\mathbf{z}}_{t}$, where the $i$-th dimensional element of $\operatorname{\mathbf{z}}_{t}$ is the distance between $\bm{o}_{t}$ and $\bm{O}_{t-i}$, $d(\bm{o}_{t},\bm{O}_{t-i})$. The projected $m$ dimensional vector $\operatorname{\mathbf{z}}_{t}$ captures longer context similarity information spanning across multiple consecutive tasks. Online domain change detection At each time $t$, we utilize the above constructed projected space for online domain change detection. Assume we have two windows of projected embedding of previous tasks $\mathcal{U}^{B}=\\{\operatorname{\mathbf{z}}_{t-2B},\operatorname{\mathbf{z}}_{t-2B+1},\cdots,\operatorname{\mathbf{z}}_{t-B-1}\\}$ with distribution $Q$ and $\mathcal{V}^{B}=\\{\operatorname{\mathbf{z}}_{t-B},\operatorname{\mathbf{z}}_{t-B+1},\cdots,\operatorname{\mathbf{z}}_{t}\\}$ with distribution $R$, where $B$ is the window size. In other words, $\mathcal{V}^{B}$ represents the most recent window of projection space (test window) and $\mathcal{U}^{B}$ represents the projection space of previous window (reference window). $\mathcal{U}^{B}$ and $\mathcal{V}^{B}$ are non- overlapping windows. 
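Before turning to the detection statistic, the projected space just described can be summarized in a short sketch (illustrative only; the embedding network $f_{\bm{\theta}}$ is assumed to be queried externally, and Euclidean distance is used for $d(\cdot,\cdot)$, which is otherwise left unspecified).

```python
from collections import deque
import numpy as np

class ProjectedSpace:
    """Maintains the EMA of task embeddings and emits the m-dimensional
    projection z_t used for domain-change detection."""

    def __init__(self, m=5, alpha=0.3):
        self.m, self.alpha = m, alpha
        self.ema = None                      # O_{t-1}
        self.past = deque(maxlen=m)          # O_{t-1}, O_{t-2}, ..., O_{t-m}

    def step(self, embeddings):
        """embeddings: (K, d) array of f_theta(x^k) for the current task batch."""
        o_t = embeddings.mean(axis=0)        # task embedding o_t
        # z_t[i-1] = d(o_t, O_{t-i}): distances to the stored moving averages
        z_t = np.array([np.linalg.norm(o_t - O) for O in self.past])
        # update the exponential moving average O_t and store it for later steps
        self.ema = o_t if self.ema is None else self.alpha * o_t + (1 - self.alpha) * self.ema
        self.past.appendleft(self.ema)
        return z_t if len(z_t) == self.m else None   # None during warm-up
```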
For notation clarity and presentation convenience, we use another notation to denote the $\mathcal{U}^{B}=\\{\operatorname{\mathbf{u}}_{1},\operatorname{\mathbf{u}}_{2},\cdots,\operatorname{\mathbf{u}}_{B}\\}$ and $\mathcal{V}^{B}=\\{\operatorname{\mathbf{v}}_{1},\operatorname{\mathbf{v}}_{2},\cdots,\operatorname{\mathbf{v}}_{B}\\}$, i.e., $\operatorname{\mathbf{u}}_{i}=\operatorname{\mathbf{z}}_{t-2B+i-1}$ and $\operatorname{\mathbf{v}}_{i}=\operatorname{\mathbf{z}}_{t-B+i-1}$. Our general framework is to first measure the distance between the two distributions $Q$ and $R$, $d(Q,R)$; then, by setting a threshold $b$, the domain change is detected when $d(Q,R)>b$. Here, we use Maximum Mean Discrepancy (MMD) to measure the distribution distance. Following [38], the MMD distance between $Q$ and $R$ is defined as: $\text{MMD}[\mathcal{F},Q,R]:=\sup\limits_{f\in\mathcal{F}}\\{\mathbb{E}_{\operatorname{\mathbf{u}}\sim Q}[f(\operatorname{\mathbf{u}})]-\mathbb{E}_{\operatorname{\mathbf{v}}\sim R}[f(\operatorname{\mathbf{v}})]\\}$ (1) U-statistics [26] can be used for estimating ${\text{MMD}}^{2}$: $W^{B}_{t}=\text{MMD}^{2}[\mathcal{U}^{B},\mathcal{V}^{B}]=\frac{1}{B(B-1)}\sum_{i\neq j}^{B}h(\operatorname{\mathbf{u}}_{i},\operatorname{\mathbf{u}}_{j},\operatorname{\mathbf{v}}_{i},\operatorname{\mathbf{v}}_{j})$ (2) and $h(\cdot)$ is defined as: $h(\operatorname{\mathbf{u}}_{i},\operatorname{\mathbf{u}}_{j},\operatorname{\mathbf{v}}_{i},\operatorname{\mathbf{v}}_{j})=k(\operatorname{\mathbf{u}}_{i},\operatorname{\mathbf{u}}_{j})+k(\operatorname{\mathbf{v}}_{i},\operatorname{\mathbf{v}}_{j})-k(\operatorname{\mathbf{u}}_{i},\operatorname{\mathbf{v}}_{j})-k(\operatorname{\mathbf{u}}_{j},\operatorname{\mathbf{v}}_{i})$ (3) where $k(\cdot,\cdot)$ is RKHS kernel. In this paper, we assume RBF kernel $k(x,x^{\prime})=exp(-||x-x^{\prime}||^{2}/2\sigma^{2})$ is used. The detection statistics at time $t$ is $W^{B}_{t}$. If $Q$ and $R$ are close, $W^{B}_{t}$ is expected to be small, implying small probability of existence of domain change. If $Q$ and $R$ are significantly different distributions, $W^{B}_{t}$ is expected to be large, implying higher chance of domain shift. Thus, $W^{B}_{t}$ characterizes the chance of domain shift at time $t$. We then test on the condition of $W^{B}_{t}>b$ to determine whether domain change occurs, where $b$ is a threshold. Each task $\mathcal{T}_{t}$ is associated with a latent domain label $L_{t}$, $L_{0}=0$. If $W^{B}_{t}>b$, $L_{t}=L_{t-1}+1$, i.e., a new domain arrives (Note that the actual domain changes could happen a few steps ago, but for simplicity, we could assume domain changes occur at time $t$); otherwise, $L_{t}=L_{t-1}$, i.e., the current domain continues. We leave the more general case with domain revisiting as future work. How to set the threshold is a non-trivial task and is described in the following. Setting the threshold Clearly, setting the threshold $b$ involves a trade-off between two aspects: (1) the probability of $W^{B}_{t}>b$ when there is no domain change; (2) the probability of $W^{B}_{t}>b$ when there is domain change. As a result, if the domain similarity and difficulty vary significantly, simply setting a fixed threshold across the entire training process is highly insufficient. In other words, adaptive threshold of $b$ is necessary. Before we present the adaptive threshold method, we first show the theorem which characterizes the property of detection statistics $W^{B}_{t}$ in the following. Algorithm 1 Online Domain Change Detection (ODCD). 
0: stream of detection statistics $W^{B}_{t}$; constant $\rho$; desired quantile (significance level) $\delta$; Initialize $\mu_{0}=0$ and $\mu_{0}^{(2)}=0$ 1: Function ODCD ($W^{B}_{t}$, $\rho$, $\delta$) 2: $\operatorname{\mathbf{d}}=False$; // indicator of domain shift 3: $\mu_{t}=(1-\rho)\mu_{t-1}+\rho(W^{B}_{t})^{2}$ 4: $\mu_{t}^{(2)}=(1-\rho)\mu_{t-1}^{(2)}+\rho(W^{B}_{t})^{4}$ 5: $\sigma_{t}=\sqrt{\mu_{t}^{(2)}-\mu_{t}^{2}}$ 6: if $W^{B}_{t}>\mu_{t}+\delta\sigma_{t}$ then 7: $\operatorname{\mathbf{d}}=True$; //there is domain shift at time $t$ 8: end if 9: return $\operatorname{\mathbf{d}}$ 10: EndFunction ###### Theorem 1 Assume $\operatorname{\mathbf{z}}_{i}$ are drawn i.i.d. from $Q$. Suppose that $\mathbb{E}_{Q}||k(\operatorname{\mathbf{z}},\cdot)||^{4}<\infty$. Set $\mu\overset{def}{=}\mathbb{E}_{Q}k(\operatorname{\mathbf{z}},\cdot)$ and $K(\operatorname{\mathbf{z}},\operatorname{\mathbf{z}}^{\prime})\overset{def}{=}\langle k(\operatorname{\mathbf{z}},\cdot)-\mu,k(\operatorname{\mathbf{z}}^{\prime},\cdot)-\mu\rangle$. Suppose the eigenvalue $\xi_{l}$ and eigenvectors $\phi_{l}^{2}$ of $K$ satisfy $\xi_{l}\geq 0$ and $\mathbb{E}_{Q}\phi_{l}^{2}<\infty$ such that $K(\operatorname{\mathbf{z}},\operatorname{\mathbf{z}}^{\prime})=\sum_{l\geq 1}\xi_{l}\phi_{l}(\operatorname{\mathbf{z}})\phi_{l}(\operatorname{\mathbf{z}}^{\prime})$ and $\langle\phi_{l},\phi_{l^{\prime}}\rangle=\textbf{1}_{l=l^{\prime}}$. Then, $W^{B}_{t}\overset{d}{\to}\beta\sum_{l\geq 1}\xi_{l}Z_{l}^{2}$ (4) Where $\overset{d}{\to}$ means converge in distribution and $(Z_{l})_{l\geq 1}$ is a collection of infinite independent standard normal random variables and $\beta$ is a constant. The theorem and proof follow from [51, 31]. We can observe that $W^{B}_{t}$ asymptotically follows a distribution formed by a weighted linear combination of independent normal distribution. By Lindeberg’s central limit theorem [56], it is reasonable to assume $W^{B}_{t}$ is approximately Gaussian distribution. The problem is thus reduced to estimate its mean $\mu_{t}$ and $\sigma_{t}$. The adaptive threshold $b$, following from [31], can be estimated by online approximation, $b=\mu_{t}+\delta\sigma_{t}$, where $\delta$ is a constant and set to be the desired quantile of the normal distribution. This adaptive method for online domain change detection is shown in Algorithm 1. ### 3.3 Memory Management with Domain Distribution and Difficulty Awareness In this section, we design the memory management mechanism for determining which task to be stored in the memory and which task to be moved out. The mechanism, named Memory Management with Domain Distribution and Difficulty Awareness (M2D3), jointly considers the difficulty and distribution of few shot tasks in our setting. M2D3 first estimates the probability of the current task $\mathcal{T}_{t}$ to be moved into the memory. The model will then determine the task to be moved out in the event that a new task move-in happens. To improve efficiency, we utilize the obtained latent domain information associated with each task (as described in previous section) to first estimate this move-out probability at cluster-level before sampling single task, as in Figure 3. Figure 3: Illustration on the memory management process. Each colored circle represents one cluster in the buffer and each dot denotes one task. Here we define the notations involved in the following method description. 
Each task $\mathcal{T}_{t}$ in the memory is associated with a latent domain label $L_{t}$ and all the tasks with the same latent domain label form one cluster. $\mathcal{M}_{i}$ denotes the cluster formed by all the tasks with latent domain label $i$ in memory $\mathcal{M}$, $n_{i}=|\mathcal{M}_{i}|$ denotes the number of tasks in $\mathcal{M}_{i}$ and $n=|\mathcal{M}|$ denotes the total number of tasks in memory, and $\mathcal{I}_{i}$ denotes the importance score of cluster $\mathcal{M}_{i}$. Probability of new task moving into memory When the new task $\mathcal{T}_{t}$ arrives, the chance of $\mathcal{T}_{t}$ being stored in memory is estimated, with the basic principle being the more incremental knowledge is brought by $\mathcal{T}_{t}$, the higher the probability of $\mathcal{T}_{t}$ being stored. This depends on the difficulty and prevalence of current latent domain. We propose an approach on top of this principle to estimate this probability. The score function of $\mathcal{T}_{t}$ is defined as: $S_{new}=(1-\frac{n_{L_{t}}}{n})\mathcal{I}_{t}^{T}$ (5) Where $\mathcal{I}_{t}^{T}$ represents the importance for $\mathcal{T}_{t}$ , which is defined as the task-specific gradient norm in Section 3.4. $n_{L_{t}}$ denotes the number of tasks of current latent domain cluster in memory buffer. $\frac{n_{L_{t}}}{n}$ denotes the prevalence of current latent domain in memory. $\mathcal{I}_{i}$ represents the importance for cluster $\mathcal{M}_{i}$, which is defined as the cluster-specific gradient norm $G_{i}$ in Section 3.4 (The computation is shared and corresponding terms are computed only once.). The importance of in-memory tasks is defined as $M_{s}=\frac{1}{L_{t}-1}\sum_{i=1}^{L_{t}-1}\frac{n_{i}}{n}\mathcal{I}_{i}$. The score function of in-memory tasks is defined as: $S_{mem}=\frac{n_{L_{t}}}{n}M_{s}$ (6) The probability of moving $\mathcal{T}_{t}$ into the memory is: $P_{in}=\frac{e^{S_{new}}}{e^{S_{new}}+e^{S_{mem}}}$ (7) This task selection mechanism maximizes the incremental knowledge of each task added into memory. Algorithm 2 Memory Management with Domain Distribution and Difficulty Awareness (M2D3). 0: mini-batch training tasks ${\mathcal{T}}_{t}$; memory tasks $\mathcal{M}$; domain label $L_{t-1}$ 1: Function M2D3 $(\mathcal{M},\mathcal{T}_{t})$ 2: calculate the probability $\mathcal{P}_{in}$ to move $\mathcal{T}_{t}$ into memory as Eq. 7. calculate detection statistics of $W^{B}_{t}$ 3: $\operatorname{\mathbf{d}}=$ ODCD($W^{B}_{t}$, $\rho$, $\delta$); detect domain change by Alg. 1. 4: if $\operatorname{\mathbf{d}}$ then 5: $L_{t}=L_{t-1}+1$ 6: $\mathcal{M}_{L_{t}}=\\{\\}$ 7: end if 8: if memory $\mathcal{M}$ is not full then 9: $\mathcal{M}_{L_{t}}\leftarrow\mathcal{M}_{L_{t}}\cup\mathcal{T}_{t}$ 10: else 11: if ${\mathcal{T}}_{t}$ is moved into memory by Eq. 7 then 12: calculate the move-out probability for each cluster $\mathcal{P}_{t}^{i}$ and sample cluster $j$ according to Eq. 8 and 9. 13: sample task from $\mathcal{M}_{j}$ to move out of memory. 14: move $\mathcal{T}_{t}$ into memory $\mathcal{M}_{L_{t}}\leftarrow\mathcal{M}_{L_{t}}\cup{\mathcal{T}}_{t}$ 15: end if 16: end if 17: return updated memory buffer $\mathcal{M}$ 18: EndFunction Probability of existing tasks moving out of memory To improve the efficiency of removing the tasks currently in memory, we perform a hierarchical sampling approach. We perform sampling first at cluster level before focusing on individual tasks, as shown in Figure 3. 
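For illustration, the move-in probability of Eqs. (5)–(7) can be computed as follows (a minimal sketch with precomputed inputs, not our implementation; here the average $M_{s}$ is taken over the previous-domain clusters present in memory).

```python
import math

def move_in_probability(cluster_sizes, L_t, task_importance, cluster_importance):
    """P_in of Eq. (7); inputs are assumed precomputed: cluster sizes n_i in memory,
    the current latent label L_t, and gradient-norm importance scores (Sec. 3.4)."""
    n = sum(cluster_sizes.values())
    frac_cur = cluster_sizes.get(L_t, 0) / max(n, 1)
    S_new = (1.0 - frac_cur) * task_importance                         # Eq. (5)
    prev = [i for i in cluster_sizes if i != L_t]
    M_s = (sum(cluster_sizes[i] / n * cluster_importance[i] for i in prev) / len(prev)
           if prev else 0.0)
    S_mem = frac_cur * M_s                                             # Eq. (6)
    return math.exp(S_new) / (math.exp(S_new) + math.exp(S_mem))       # Eq. (7)

# Example: a difficult task from a rare domain gets a high move-in probability.
sizes = {1: 40, 2: 8, 3: 2}
print(move_in_probability(sizes, L_t=3, task_importance=2.0,
                          cluster_importance={1: 0.5, 2: 1.0, 3: 1.5}))
```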
The estimated probability is correlated with both the size of each cluster in memory and its importance. The factor for each cluster $\mathcal{M}_{i}$ is defined as:

$\mathcal{A}_{i}\propto-(1-\frac{n_{i}}{n})\mathcal{I}_{i}$ (8)

The move-out probability for each cluster $\mathcal{M}_{i}$ at time $t$ is then defined as

$\mathcal{P}_{t}^{i}=\frac{e^{\mathcal{A}_{i}}}{\sum_{i=1}^{L_{t}-1}e^{\mathcal{A}_{i}}}$ (9)

The complete mechanism is summarized in Algorithm 2.

### 3.4 Adaptive Memory Task Sampling for Training

During meta training, a mini-batch of tasks is sampled from the memory and jointly trained with the current tasks to mitigate catastrophic forgetting. Sampling tasks uniformly from memory incurs high variance and results in unstable training [32, 9]. Our intuition for a non-uniform task sampling mechanism is that the tasks are not equally important for retaining the knowledge from previous domains: tasks that carry more information are more beneficial for remembering previous domains and should be sampled more frequently. To achieve this goal, we propose an efficient adaptive task sampling scheme in memory that accelerates training and reduces gradient estimation variance. As shown in Figure 4, the sampling probabilities of Miniimagenet and Aircraft are increased by the scheme, suggesting that these domains are more important than Omniglot for retaining knowledge.

Figure 4: A simple example of uniform task sampling and our adaptive memory task sampling method for sampling tasks from the memory buffer during meta training.

Let the task-specific loss function be $\mathcal{L}_{{\bm{\theta}}}(\mathcal{T}_{i})=P(\mathcal{T}^{test}_{i}|{\bm{\theta}},\mathcal{T}^{train}_{i})$. The optimization objective at time $t$ is to minimize the loss on both the new tasks and the memory tasks, $\mathcal{H}({\bm{\theta}})=\mathcal{L}_{{\bm{\theta}}}(\mathcal{T}_{t})+\sum\limits_{\mathcal{T}_{i}\in\mathcal{M}}\mathcal{L}_{{\bm{\theta}}}(\mathcal{T}_{i})$. At time $t$, our proposed adaptive sampling mechanism assigns each task $\mathcal{T}_{i}\in\mathcal{M}$ a probability $q_{i}^{t}$ such that $\sum_{i=1}^{n}q_{i}^{t}=1$; we then sample $\mathcal{T}_{i_{t}}$ from the distribution $\operatorname{\mathbf{q}}_{t}=(q_{1}^{t},q_{2}^{t},\cdots,q_{n}^{t})$. We temporarily omit the subscript (superscript) $t$ in the following theorem for notational clarity.

###### Theorem 2

Let $\operatorname{\mathbf{p}}(\mathcal{T})$ be the distribution of the tasks in memory $\mathcal{M}$. Then,

$\mathbb{E}_{\operatorname{\mathbf{p}}(\mathcal{T})}\nabla_{{\bm{\theta}}}\mathcal{L}_{{\bm{\theta}}}(\mathcal{T})=\mathbb{E}_{\operatorname{\mathbf{q}}(\mathcal{T})}[\frac{\operatorname{\mathbf{p}}(\mathcal{T})}{\operatorname{\mathbf{q}}(\mathcal{T})}\nabla_{{\bm{\theta}}}\mathcal{L}_{{\bm{\theta}}}(\mathcal{T})]=\Omega$ (10)

Let $\mathbb{V}_{\operatorname{\mathbf{q}}}[\Omega]$ denote the covariance of the above estimator associated with $\operatorname{\mathbf{q}}$.
Then, the trace of $\mathbb{V}_{\operatorname{\mathbf{q}}}[\Omega]$ is minimized by the following optimal $\operatorname{\mathbf{q}}^{*}$ $\operatorname{\mathbf{q}}^{*}(\mathcal{T})=\frac{\operatorname{\mathbf{p}}(\mathcal{T})||\nabla_{{\bm{\theta}}}\mathcal{L}_{{\bm{\theta}}}(\mathcal{T})||_{2}}{\int\operatorname{\mathbf{p}}(\mathcal{T})||\nabla_{{\bm{\theta}}}\mathcal{L}_{{\bm{\theta}}}(\mathcal{T})||_{2}}.$ (11) In particular, if no prior information is available on task distribution, uniform sampling of tasks from memory is adopted and $\operatorname{\mathbf{p}}(\mathcal{T})=\frac{1}{n}$, $\operatorname{\mathbf{q}}^{*}(\mathcal{T}_{i})=\frac{||\nabla_{{\bm{\theta}}}\mathcal{L}_{{\bm{\theta}}}(\mathcal{T}_{i})||_{2}}{\sum_{j=1}^{n}||\nabla_{{\bm{\theta}}}\mathcal{L}_{{\bm{\theta}}}(\mathcal{T}_{j})||_{2}}.$ Thus, $w(\mathcal{T}_{i})=\frac{\operatorname{\mathbf{p}}(\mathcal{T}_{i})}{\operatorname{\mathbf{q}}(\mathcal{T}_{i})}=\frac{1}{n\operatorname{\mathbf{q}}(\mathcal{T}_{i})}$ Proof is provided in Appendix C. The parameters are updated as: ${\bm{\theta}}_{t+1}={\bm{\theta}}_{t}-\eta w_{i}^{t}\nabla_{{\bm{\theta}}_{t}}\mathcal{L}_{{\bm{\theta}}}(\mathcal{T}_{i_{t}})$ (12) Where $\eta$ is the learning rate, $w_{i}^{t}=\frac{1}{nq_{i}^{t}}$. Similar to standard SGD analysis [25], we define the convergence speed of meta training as the shrinkage of distance to optimal parameters ${\bm{\theta}}^{*}$ between two consecutive iterations $C=-\mathbb{E}_{\operatorname{\mathbf{q}}_{t}}[||{\bm{\theta}}_{t+1}-{\bm{\theta}}^{*}||_{2}^{2}-||{\bm{\theta}}_{t}-{\bm{\theta}}^{*}||_{2}^{2}]$. Following [30, 1], it can be expressed as: $C=2\eta({\bm{\theta}}_{t}-{\bm{\theta}}^{*})\Omega-\eta^{2}\Omega^{T}\Omega-\eta^{2}\text{Tr}(\mathbb{V}_{\operatorname{\mathbf{q}}_{t}}[\Omega])$ (13) Theorem 2 illustrates the optimal task sampling distribution for reducing the gradient variance is proportional to the per-task gradient norm. Minimizing the gradient variance (last term of RHS in Eq.13) as in Theorem 2 also speeds up the convergence (maximize $C$) as a byproduct. However, it is computationally prohibitive to compute this distribution. We therefore propose efficient approximation to it. By Section 3.2, each memory task is associated with a latent cluster label. Utilizing this property, we can first sample $R$ (small) tasks from each cluster, then calculate the gradient norm for each cluster as $G_{i}$. By doing so, the computational efficiency of the optimal task sampling distribution will be significantly improved. The sampling probability for each cluster is calculated as: $\mathcal{Z}_{t}^{i}=\frac{n_{i}G_{i}}{\sum_{j=1}^{j=L_{t}}n_{j}G_{j}}$ (14) The sampling scheme is to first sample cluster indexes from memory according to Eq. 14, then randomly sample tasks from the specified clusters. We name this task sampling scheme as adaPtive mEmory Task Sampling (PETS). Eq. 14 illustrates that the original sampling distribution of each cluster (measured by the frequency of each cluster in the memory buffer) is weighted by the corresponding importance of each cluster measured by the gradient norm $G_{i}$. In practice, the computational efficiency can be further improved by computing the sampling distribution every $s$ steps with the same distribution during each time interval. PETS is summarized in Algorithm 3. Algorithm 3 Adaptive Memory Task Sampling (PETS). 
0: A sequence of mini-batch training tasks ${\mathcal{T}}_{1},{\mathcal{T}}_{2},\dots,{\mathcal{T}}_{N}$; memory buffer $\mathcal{M}$; model parameters ${\bm{\theta}}$;
1: for $t=1$ to $N$ do
2: for each cluster $\mathcal{M}_{j}$ in $\mathcal{M}$ do
3: sample a mini-batch of tasks from cluster $\mathcal{M}_{j}$ and calculate the gradient norm $G_{j}$ for $\mathcal{M}_{j}$.
4: end for
5: calculate the cluster sampling distribution $\mathcal{Z}_{t}$ as in Eq. 14.
6: sample tasks $\mathcal{B}$ from $\mathcal{M}$ according to the distribution $\mathcal{Z}_{t}$ as in Eq. 14.
7: update ${\bm{\theta}}$ by meta training on $\mathcal{T}_{t}\cup\mathcal{B}$
8: update the memory tasks $\mathcal{M}=\text{M2D3}(\mathcal{M},\mathcal{T}_{t})$
9: end for

## 4 Related Work

#### Meta Learning:

Meta learning [50] focuses on rapidly adapting to unseen tasks by learning on a large number of similar tasks. Representative works include [57, 52, 20, 21, 23, 49, 7, 42, 6, 41, 37, 61, 66, 46, 53, 65], etc. All of these methods work in the simplified setting where task distributions are stationary during meta training. In contrast, we focus on the more challenging setting where task distributions are non-stationary and imbalanced. Online meta learning [22] stores all previous tasks in an online setting to avoid forgetting, with a small number of tasks. [28] uses Dirichlet process mixtures (DPM) to model the latent task structure and expands the network. By contrast, ours focuses on mitigating catastrophic forgetting with a single model when meta learning on imbalanced domain sequences with only limited access to previous domains. Multi-domain meta learning [54, 55, 59] assumes that tasks from all domains are available during meta training. We focus on the case where each domain of an imbalanced domain sequence arrives sequentially.

#### Continual Learning:

Continual learning (CL) aims to maintain previous knowledge when learning on sequentially arriving data with distribution shift. Many works focus on mitigating catastrophic forgetting during the learning process. Representative works include [39, 14, 48, 63, 34, 43, 19, 2, 11, 4], etc. Continual few-shot learning (CFSL) [8] focuses on remembering previously learned few-shot tasks in a single domain. To the best of our knowledge, the replay-based approach to the imbalanced streaming setting of continual learning has only been considered in [5, 17, 33]. Different from these works, which learn on a small number of tasks and aim to generalize to previous tasks, our work focuses on the setting where the model learns on a large number of tasks with domain shift and imbalance, and aims to generalize to unseen tasks from previous domains without catastrophic forgetting, rather than to remember a specific task.

#### Incremental and Continual Few-shot Learning:

Incremental few-shot learning [24, 47, 64] aims to learn new categories while retaining knowledge of old categories within a single domain, and assumes unlimited access to the base categories. This paper, by contrast, requires good generalization to unseen categories in previous domains while access to previous domains is limited. Continual-MAML [12] aims for online fast adaptation to new tasks while accumulating knowledge on old tasks, and assumes previous tasks can be revisited without limit. MOCA [27] works in the online learning setting and uses experience from previous data to improve sequential prediction.
In contrast, ours focuses on generalizing to previous domain when learning on a large number of tasks with sequential domain shift and limited access to previous domains. ## 5 Experiments Our method is orthogonal to specific meta learning models and can be integrated into them seamlessly. For illustration, we evaluate our method on representative meta learning models including (1) gradient-based meta learning ANIL [44], which is a simplified model of MAML [21]; (2) metric-based meta learning Prototypical Network (PNet) [52]. Extension to other meta learning models is straightforward. Baselines: (1) sequential training, which learns the latent domains sequentially without any external mechanism and demonstrates the model forgetting behavior; (2) reservoir sampling (RS) [58]; (3) joint offline training, which learns all the domains jointly in a multi-domain meta-learning setting; (4) independent training, which trains each domain independently. Among them, joint offline training and independent training serve as the performance upper bound. In addition, since continual learning (CL) methods only apply to a small number of tasks, directly applying CL methods to our setting with large number of tasks (more than 40K) is infeasible. Instead, we combine several representative CL methods with meta learning base model. We modify and adapt GSS [5], MIR [3], AGEM [14] and MER [48] to our setting and combine them with meta learning base models to serve as strong baselines. We denote these baselines as PNet-GSS, ANIL-GSS, etc. Proposed benchmark To simulate realistic imbalanced domain sequences, we construct a new benchmark and collect 6 domains with varying degree of similarity and difficulty, including Quickdraw [29], AIRCRAFT [40], CUB [62], Miniimagenet [57], Omniglot [35], Necessities from Logo-2K+ [60]. We resize all images into the same size of $84\times 84$. All the methods are compared for 5-way 1-shot and 5-way 5-shot learning. All the datasets are publicly available with more details provided in Appendix A. We calculate the average accuracy on unseen testing tasks from all the domains for evaluation purpose. Implementation details For ANIL-based [44] baselines, following [7], we use a four-layer CNN with 48 filters and one fully-connected layer as the meta learner. For PNet-based [52] baselines, we use a five-layer CNN with 64 filters of kernel size being 3 for meta learning. Following [52], we do not use any fully connected layers for PNet-based models. Similar architecture is commonly used in existing meta learning literature. We do not use any pre- trained network feature extractors which may contain prior knowledge on many pre-trained image classes, as this violates our problem setting that future domain knowledge is completely unknown. We perform experiments on different domain orderings, with the default ordering being Quickdraw, MiniImagenet, Omniglot, CUB, Aircraft and Necessities. To simulate imbalanced domains in streaming setting, each domain on this sequence is trained on 5000, 2000, 6000, 2000, 2000, 24000 steps respectively. In this setup, reservoir sampling will underrepresent most domains. All experiments are averaged over three independent runs. More implementation details are given in Appendix B. ### 5.1 Comparison to Baselines We compare our methods to the baselines. The memory maintains 300 batches (2) tasks. Results are shown in Table 1 and 2. 
We can observe that our method significantly outperforms the baselines, by a large margin of $5.21\%$ for 5-shot learning and $4.95\%$ for 1-shot learning with the PNet-based model. For the ANIL-based baselines, our method outperforms the baselines by $4.60\%$ for 5-shot learning and $2.19\%$ for 1-shot learning. This shows the effectiveness of our method.

### 5.2 Effect of Memory Capacity

We explore the effect of memory capacity on the performance of the baselines and our method. Tables 3 and 4 show the results with memory capacities (in batches) of 200, 300 and 500, respectively. Our method significantly outperforms all the baselines at every capacity.

Table 1: Comparisons with PNet-based baselines (ACC)
Algorithm | 5-Way 1-Shot | 5-Way 5-Shot
PNet-Sequential | $31.82\pm 0.56$ | $48.21\pm 0.50$
PNet-RS | $34.68\pm 1.96$ | $53.69\pm 0.76$
PNet-GSS | $36.15\pm 1.59$ | $55.16\pm 0.72$
PNet-AGEM | $34.07\pm 1.71$ | $52.61\pm 0.68$
PNet-MIR | $34.53\pm 1.45$ | $53.91\pm 0.56$
PNet-MER | $35.82\pm 1.69$ | $54.28\pm 0.61$
PNet-Ours | $\mathbf{41.10\pm 0.42}$ | $\mathbf{60.37\pm 0.32}$
Joint-training | $52.96\pm 0.45$ | $68.56\pm 0.37$
Independent-training | $58.25\pm 0.36$ | $72.23\pm 0.29$

Table 2: Comparisons with ANIL-based baselines (ACC)
Algorithm | 5-Way 1-Shot | 5-Way 5-Shot
ANIL-Sequential | $30.68\pm 0.67$ | $41.39\pm 0.37$
ANIL-RS | $32.11\pm 0.90$ | $48.72\pm 0.79$
ANIL-GSS | $31.78\pm 1.08$ | $48.93\pm 0.83$
ANIL-AGEM | $32.23\pm 1.21$ | $48.56\pm 0.91$
ANIL-MIR | $31.85\pm 0.97$ | $48.34\pm 0.72$
ANIL-MER | $32.72\pm 1.06$ | $49.05\pm 0.96$
ANIL-Ours | $\mathbf{34.91\pm 0.73}$ | $\mathbf{53.65\pm 0.56}$
Joint-training | $52.37\pm 0.72$ | $66.21\pm 0.61$
Independent-training | $56.52\pm 0.57$ | $69.67\pm 0.53$

Table 3: Effect of memory size for PNet-based baselines (ACC)
Algorithm | 5-Way 1-Shot | 5-Way 5-Shot
PNet-RS ($n=200$) | $34.12\pm 1.12$ | $53.29\pm 0.42$
PNet-Ours ($n=200$) | $40.11\pm 0.73$ | $59.86\pm 0.27$
PNet-RS ($n=300$) | $34.68\pm 1.96$ | $53.69\pm 0.76$
PNet-Ours ($n=300$) | $41.10\pm 0.42$ | $60.37\pm 0.32$
PNet-RS ($n=500$) | $35.67\pm 0.82$ | $55.95\pm 0.79$
PNet-Ours ($n=500$) | $41.82\pm 0.90$ | $61.05\pm 0.60$

Table 4: Effect of memory size for ANIL-based baselines (ACC)
Algorithm | 5-Way 1-Shot | 5-Way 5-Shot
ANIL-RS ($n=200$) | $31.03\pm 0.97$ | $45.96\pm 0.81$
ANIL-Ours ($n=200$) | $32.83\pm 0.71$ | $48.21\pm 0.61$
ANIL-RS ($n=300$) | $32.11\pm 0.90$ | $48.72\pm 0.79$
ANIL-Ours ($n=300$) | $34.91\pm 0.73$ | $53.65\pm 0.56$
ANIL-RS ($n=500$) | $39.35\pm 0.76$ | $53.86\pm 0.68$
ANIL-Ours ($n=500$) | $42.79\pm 0.67$ | $59.23\pm 0.49$

### 5.3 Effect of Domain Ordering

We also compare two other orderings: Necessities, CUB, Omniglot, Aircraft, MiniImagenet, Quickdraw; and Omniglot, Aircraft, Necessities, CUB, Quickdraw, MiniImagenet. The results are shown in Appendix D. In all cases, our method substantially outperforms the baselines.

### 5.4 Effect of Different Ratios of Domains

To explore how different domain ratios affect model performance, we ran another set of experiments with 4K, 4K, 3K, 4K, 4K, 22K training steps on each domain, respectively. The results are shown in Table 8 in the Appendix.

### 5.5 Effect of Domain Revisiting

To investigate the effect of domain revisiting on the baselines and our method, we perform an experiment on a domain sequence in which Quickdraw is revisited. The details and results are shown in Table 7 in the Appendix. We currently assume that there is no domain revisiting; properly handling domain revisiting is left as interesting future work.
### 5.6 Ablation Study Effect of memory management mechanism To verify the effectiveness of M2D3 proposed in section 3.3, Table 9 in Appendix shows the experiments with simple reservoir sampling without M2D3 (PNet-RS) and with M2D3 (PNet-Ours (without PETS)) respectively. Our method with M2D3 significantly outperforms baseline by $4.1\%$ and $4.2\%$ respectively. The memory proportion for each latent domain is shown in Figure 5 in Appendix. For RS baseline, the memory proportion for each domain is highly imbalanced. On the contrary, our memory management mechanism enables the memory proportion for each domain is relatively balanced, demonstrating the effectiveness of our method. Effect of PETS To verify the effectiveness of PETS proposed in section 3.4, we compare the gradient variance with uniform sampling and our adaptive task sampling method, the gradient variance during training is shown in Figure 6 in Appendix. We can see that our adaptive task sampling achieves much less gradient variance especially when training for longer iterations. Table 9 in Appendix shows that with PETS, the performance is improved by more than $2.2\%$ and $2.4\%$ for 1-shot and 5-shot learning respectively. ## 6 Conclusion This paper addresses the forgetting problem when meta learning on non- stationary and imbalanced task distributions. To address this problem, we propose a new memory management mechanism to balance the proportion of each domain in the memory buffer. Also, we introduce an efficient adaptive memory task sampling method to reduce the task gradient variance. Experiments demonstrate the effectiveness of our proposed methods. For future work, it would be interesting to meta learn the proportional of each domain automatically. Acknowledgements This research was supported in part by NSF through grants IIS-1910492. ## References * [1] Guillaume Alain, Alex Lamb, Chinnadhurai Sankar, Aaron Courville, and Yoshua Bengio. Variance reduction in sgd by distributed importance sampling. https://arxiv.org/abs/1511.06481, 2016. * [2] Rahaf Aljundi, Francesca Babiloni, Mohamed Elhoseiny, Marcus Rohrbach, and Tinne Tuytelaars. Memory aware synapses: Learning what (not) to forget. The 2018 European Conference on Computer Vision (ECCV), 2018. * [3] Rahaf Aljundi, Lucas Caccia, Eugene Belilovsky, Massimo Caccia, Min Lin, Laurent Charlin, and Tinne Tuytelaars. Online continual learning with maximally interfered retrieval. Advances in Neural Information Processing Systems, 2019. * [4] Rahaf Aljundi, Klaas Kelchtermans, and Tinne Tuytelaars. Task-free continual learning. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019. * [5] Rahaf Aljundi, Min Lin, Baptiste Goujaud, and Yoshua Bengio. Gradient based sample selection for online continual learning. In Advances in Neural Information Processing Systems 30, 2019. * [6] Marcin Andrychowicz, Misha Denil, Sergio Gomez, Matthew W. Hoffman, David Pfau, Tom Schaul, Brendan Shillingford, and Nando de Freitas. Learning to learn by gradient descent by gradient descent. Advances in Neural Information Processing Systems, 2016. * [7] Antreas Antoniou, Harrison Edwards, and Amos Storkey. How to train your maml. International Conference on Learning Representations, 2019. * [8] Antreas Antoniou, Massimiliano Patacchiola, Mateusz Ochal, and Amos Storkey. Defining benchmarks for continual few-shot learning. https://arxiv.org/abs/2004.11967, 2020. 
* [9] Sébastien Arnold, Pierre-Antoine Manzagol, Reza Babanezhad Harikandeh, Ioannis Mitliagkas, and Nicolas Le Roux. Reducing the variance in online optimization by transporting past gradients. Advances in Neural Information Processing Systems, 2019. * [10] I. Bae, J. Moon, J. Jhung, H. Suk, T. Kim, H. Park, J. Cha, J. Kim, D. Kim, and Shiho Kim. Self-driving like a human driver instead of a robocar: Personalized comfortable driving experience for autonomous vehicles. NeurIPS 2019 Workshop: Machine Learning for Autonomous Driving, 2020\. * [11] E. Belouadah and A. Popescu. Il2m: Class incremental learning with dual memory. In 2019 IEEE/CVF International Conference on Computer Vision (ICCV), pages 583–592, 2019. * [12] Massimo Caccia, P. Rodríguez, O. Ostapenko, Fabrice Normandin, Min Lin, L. Caccia, Issam H. Laradji, I. Rish, Alexande Lacoste, D. Vázquez, and Laurent Charlin. Online fast adaptation and knowledge accumulation: a new approach to continual learning. advances in neural information processing systems, 2020. * [13] Giuseppe Castellucci, Simone Filice, Danilo Croce, and Roberto Basili. Learning to solve NLP tasks in an incremental number of languages. Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics, 2021. * [14] Arslan Chaudhry, Marc’Aurelio Ranzato, Marcus Rohrbach, and Mohamed Elhoseiny. Efficient lifelong learning with a-gem. Proceedings of the International Conference on Learning Representations, 2019. * [15] Arslan Chaudhry, Marcus Rohrbach, Mohamed Elhoseiny, Thalaiyasingam Ajanthan, Puneet K. Dokania, Philip H. S. Torr, and Marc’Aurelio Ranzato. Continual learning with tiny episodic memories. https://arxiv.org/abs/1902.10486, 2019. * [16] Wei-Yu Chen, Yen-Cheng Liu, Zsolt Kira, Yu-Chiang Wang, and Jia-Bin Huang. A closer look at few-shot classification. In International Conference on Learning Representations, 2019. * [17] Aristotelis Chrysakis and Marie-Francine Moens. Online continual learning from imbalanced data. In Proceedings of the 37th International Conference on Machine Learning, 2020. * [18] Tristan Deleu, Tobias Würfl, Mandana Samiei, Joseph Paul Cohen, and Yoshua Bengio. Torchmeta: A Meta-Learning library for PyTorch. 2019\. Available at: https://github.com/tristandeleu/pytorch-meta. * [19] Sayna Ebrahimi, Franziska Meier, Roberto Calandra, Trevor Darrell, and Marcus Rohrbach. Adversarial continual learning. The 2020 European Conference on Computer Vision (ECCV), 2020. * [20] H. Edwards and A. Storkey. Towards a neural statistician. International Conference on Learning Representations, 2017. * [21] Chelsea Finn, Pieter Abbeel, and Sergey Levine. Model-agnostic meta-learning for fast adaptation of deep networks. International Conference on Machine Learning, 2017. * [22] Chelsea Finn, Aravind Rajeswaran, Sham Kakade, and Sergey Levine. Online meta-learning. In Proceedings of International Conference on Machine Learning, 2019\. * [23] Chelsea Finn, Kelvin Xu, and Sergey Levine. Probabilistic model-agnostic meta-learning. Advances in Neural Information Processing Systems, 2018. * [24] Spyros Gidaris and Nikos Komodakis. Dynamic few-shot visual learning without forgetting. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018. * [25] Robert Mansel Gower, Nicolas Loizou, Xun Qian, Alibek Sailanbayev, Egor Shulgin, and Peter Richtárik. SGD: General analysis and improved rates. In Proceedings of the 36th International Conference on Machine Learning, 2019. * [26] Arthur Gretton, Karsten M. 
Borgwardt, Malte J. Rasch, Bernhard Schölkopf, and Alexander Smola. A kernel two-sample test. Journal of Machine Learning Research, 13(25):723–773, 2012. * [27] James Harrison, Apoorva Sharma, Chelsea Finn, and Marco Pavone. Continuous meta-learning without tasks. Advances in Neural Information Processing Systems, 2020. * [28] Ghassen Jerfel, Erin Grant, Thomas L. Griffiths, and Katherine Heller. Reconciling meta-learning and continual learning with online mixtures of tasks. Advances in Neural Information Processing Systems, 2019. * [29] Jonas Jongejan, Henry Rowley, Takashi Kawashima, Jongmin Kim, and Nick Fox-Gieg. The quick, draw! – a.i. experiment. 2016\. * [30] Angelos Katharopoulos and Francois Fleuret. Not all samples are created equal: Deep learning with importance sampling. Proceedings of the 35th International Conference on Machine Learning. * [31] N. Keriven, D. Garreau, and I. Poli. Newma: A new method for scalable model-free online change-point detection. IEEE Transactions on Signal Processing, 68:3515–3528, 2020. * [32] Mikhail Khodak, Maria-Florina Balcan, and Ameet Talwalkar. Provable guarantees for gradient-based meta-learning. Proceedings of the International Conference on Machine Learning, 2019. * [33] Chris Dongjoo Kim, Jinseo Jeong, and Gunhee Kim. Imbalanced continual learning with partitioning reservoir sampling. In European Conference on Computer Vision (ECCV), 2020. * [34] James Kirkpatrick, Razvan Pascanu, Neil Rabinowitz, Joel Veness, Guillaume Desjardins, Andrei A. Rusu, Kieran Milan, John Quan, Tiago Ramalho, Agnieszka Grabska-Barwinska, Demis Hassabis, Claudia Clopath, Dharshan Kumaran, and Raia Hadsell. Overcoming catastrophic forgetting in neural networks. Proceedings of the national academy of sciences, 2017. * [35] Brenden Lake, Ruslan Salakhutdinov, Jason Gross, and Joshua Tenenbaum. One shot learning of simple visual concepts. Conference of the Cognitive Science Society, 2011. * [36] Hae Beom Lee, Hayeon Lee, Donghyun Na, Saehoon Kim, Minseop Park, Eunho Yang, and Sung Ju Hwang. Learning to balance: Bayesian meta-learning for imbalanced and out-of-distribution tasks. In International Conference on Learning Representations, 2020. * [37] Kwonjoon Lee, Subhransu Maji, Avinash Ravichandran, and Stefano Soatto. Meta-learning with differentiable convex optimization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019. * [38] Shuang Li, Yao Xie, Hanjun Dai, and Le Song. Scan $b$-statistic for kernel change-point detection. Advances in Neural Information Processing Systems, 2015. * [39] David Lopez-Paz and Marc’Aurelio Ranzato. Gradient episodic memory for continual learning. Advances in Neural Information Processing Systems, 2017. * [40] Subhransu Maji, Esa Rahtu, Juho Kannala, Matthew Blaschko, and Andrea Vedaldi. Fine-grained visual classification of aircraft. https://arxiv.org/abs/1306.5151, 2013. * [41] Nikhil Mishra, Mostafa Rohaninejad, Xi Chen, and Pieter Abbeel. A simple neural attentive meta-learner. International Conference on Learning Representations, 2018. * [42] Tsendsuren Munkhdalai and Hong Yu. Meta networks. Proceedings of the 34th International Conference on Machine Learning, 2017. * [43] Cuong V. Nguyen, Yingzhen Li, Thang D. Bui, and Richard E. Turner. Variational continual learning. Proceedings of the International Conference on Learning Representations, 2018. * [44] Aniruddh Raghu, Maithra Raghu, Samy Bengio, and Oriol Vinyals. Rapid learning or feature reuse? 
towards understanding the effectiveness of maml. International Conference on Learning Representations, 2020. * [45] S. Ravi and H. Larochelle. Optimization as a model for few-shot learning. In International Conference on Learning Representations, 2017. * [46] Avinash Ravichandran, Rahul Bhotika, and Stefano Soatto. Few-shot learning with embedded class models and shot-free meta training. 2019 IEEE/CVF International Conference on Computer Vision (ICCV), 2019. * [47] Mengye Ren, Renjie Liao, Ethan Fetaya, and Richard S. Zemel. Incremental few-shot learning with attention attractor networks. Advances in Neural Information Processing Systems, 2019. * [48] Matthew Riemer, Ignacio Cases, Robert Ajemian, Miao Liu, Irina Rish, Yuhai Tu, and Gerald Tesauro. Learning to learn without forgetting by maximizing transfer and minimizing interference. International Conference on Learning Representations, 2019. * [49] Adam Santoro, Sergey Bartunov, Matthew Botvinick, Daan Wierstra, and Timothy Lillicrap. Meta-learning with memory-augmented neural networks. Proceedings of the 34th International Conference on Machine Learning, 2016. * [50] J. Schmidhuber. A neural network that embeds its own meta-levels. IEEE International Conference on Neural Networks, 1993. * [51] Robert J. Serfling. Approximation theorems of mathematical statistics. Wiley series in probability and mathematical statistics : Probability and mathematical statistics. Wiley, 1980. * [52] Jake Snell, Kevin Swersky, and Richard S. Zemel. Prototypical networks for few-shot learning. Advances in Neural Information Processing Systems, 2017. * [53] Pavel Tokmakov, Yu-Xiong Wang, and Martial Hebert. Learning compositional representations for few-shot recognition. 2019 IEEE/CVF International Conference on Computer Vision (ICCV), 2019. * [54] Eleni Triantafillou, Tyler Zhu, Vincent Dumoulin, Pascal Lamblin, Utku Evci, Kelvin Xu, Ross Goroshin, Carles Gelada, Kevin Swersky, Pierre-Antoine Manzagol, and Hugo Larochelle. Meta-dataset: A dataset of datasets for learning to learn from few examples. Proceedings of the International Conference on Learning Representations, 2020. * [55] Hung-Yu Tseng, Hsin-Ying Lee, Jia-Bin Huang, and Ming-Hsuan Yang. Cross-domain few-shot classification via learned feature-wise transformation. Proceedings of the International Conference on Learning Representations, 2020. * [56] A. W. van der Vaart. Asymptotic Statistics. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, 1998. * [57] Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Koray Kavukcuoglu, and Daan Wierstra. Matching networks for one shot learning. Advances in Neural Information Processing Systems, 2016. * [58] Jeffrey S Vitter. Random sampling with a reservoir. ACM Transactions on Mathematical Software, 1985. * [59] Risto Vuorio, Shao-Hua Sun, Hexiang Hu, and Joseph J. Lim. Multimodal model-agnostic meta-learning via task-aware modulation. Proceedings of the Advances in Neural Information Processing Systems, 2019. * [60] Jing Wang, Weiqing Min, Sujuan Hou, Shengnan Ma, Yuanjie Zheng, Haishuai Wang, and Shuqiang Jiang. Logo-2k+: A large-scale logo dataset for scalable logo classification. AAAI Conference on Artificial Intelligence, 2019. * [61] Zhenyi Wang, Yang Zhao, Ping Yu, Ruiyi Zhang, and Changyou Chen. Bayesian meta sampling for fast uncertainty adaptation. International Conference on Learning Representations, 2020. * [62] P. Welinder, S. Branson, T. Mita, C. Wah, F. Schroff, S. Belongie, and P. Perona. 
Caltech-UCSD Birds 200. 2010\. * [63] Jaehong Yoon, Eunho Yang, Jeongtae Lee, and Sung Ju Hwang. Lifelong learning with dynamically expandable networks. International Conference on Learning Representations, 2018. * [64] Sung Whan Yoon, Do-Yeon Kim, Jun Seo, and Jaekyun Moon. Xtarnet: Learning to extract task-adaptive representation for incremental few-shot learning. International Conference on Machine Learning, 2020. * [65] Jian Zhang, Chenglong Zhao, Bingbing Ni, Minghao Xu, and Xiaokang Yang. Variational few-shot learning. Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), October 2019. * [66] Yufan Zhou, Zhenyi Wang, Jiayi Xian, Changyou Chen, and Jinhui Xu. Meta-learning with neural tangent kernels. International Conference on Learning Representations, 2021. Appendix ## Appendix A Dataset Details #### Miniimagenet [57] A subset dataset from ImageNet with 100 different classes, each class with 600 images. The meta train/validation/test splits are 64/16/20 classes respectively, following the same splits of [45]. #### Omniglot [35] An image dataset handwritten characters from 50 different alphabets, with each class of 20 examples, following the same setup and data split in [57]. #### CUB [62] A dataset consisting of 200 bird species. Following the same split of [16], the meta train/validation/test splits are of 100/50/50 classes respectively. #### AIRCRAFT [40] An image dataset for aircraft models consisting of 102 categories, with 100 images per class. Following the split in [59], the dataset is split into 70/15/15 classes for meta- training/validation/test. #### Quickdraw [29] An image dataset consisting of 50 million black-and-white drawings with 345 categories. Following [36], the dataset is split into 241/52/52 classes for meta-training/validation/test. #### Necessities Necessities Logo images from the large-scale publicly available dataset Logo-2K+ [60]. The dataset is randomly split into 100/41/41 classes for meta- training/validation/test. ## Appendix B Implementation Detail We use 750 evaluation tasks from each domain for meta testing. $m=5$ for constructing the projected space. $\delta=1.64$ (corresponds to confidence level of $95\%$) and window size $B=100$ for domain change detection. The meta batch size (number of training tasks at each iteration) is 2. Our implementation is based on the Torchmeta library [18]. We approximate each $||\nabla_{{\bm{\theta}}}\mathcal{L}_{{\bm{\theta}}}(\mathcal{T})||_{2}\sim||\nabla_{x_{i}}\mathcal{L}_{{\bm{\theta}}}(\mathcal{T})||_{2}=G_{i}$; where $x_{i}$ is the pre-activation of last layer output of the network as in [30]. 
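As an illustration of how this gradient-norm approximation feeds into the cluster-level sampling distribution of Eq. 14, the following Python sketch draws memory tasks cluster-first. The function names, the task representation, and the `grad_norm_fn` estimator are assumptions made for this sketch, not the authors' implementation.

```python
import numpy as np

def pets_sample(memory_clusters, grad_norm_fn, num_tasks, rng=None):
    """Cluster-first memory task sampling (a sketch of PETS / Eq. 14).

    memory_clusters: dict mapping a latent-domain label to the list of tasks
                     stored for that cluster (n_i is the length of each list).
    grad_norm_fn:    callable returning an estimate G_i of the cluster's
                     gradient norm, e.g. from R tasks of that cluster using
                     the last-layer pre-activation approximation above.
    num_tasks:       number of memory tasks to draw for the current update.
    """
    rng = rng or np.random.default_rng()
    labels = list(memory_clusters.keys())
    sizes = np.array([len(memory_clusters[lab]) for lab in labels], dtype=float)          # n_i
    norms = np.array([grad_norm_fn(memory_clusters[lab]) for lab in labels], dtype=float) # G_i
    probs = sizes * norms
    probs /= probs.sum()   # Z_t^i = n_i G_i / sum_j n_j G_j  (Eq. 14)
    batch = []
    for _ in range(num_tasks):
        c = labels[rng.choice(len(labels), p=probs)]                 # sample a cluster index
        batch.append(memory_clusters[c][rng.integers(len(memory_clusters[c]))])  # then a task
    return batch

# hypothetical usage: two clusters with a constant gradient-norm estimate
memory = {0: ["task_a", "task_b", "task_c"], 1: ["task_d"]}
print(pets_sample(memory, grad_norm_fn=lambda tasks: 1.0, num_tasks=2))
```

Sampling the cluster first keeps the number of gradient-norm evaluations proportional to the number of latent domains rather than the number of stored tasks, which is the efficiency gain described in Section 3.4.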
## Appendix C Theorem proof Proof Let $\mu=\mathbb{E}_{\operatorname{\mathbf{p}}(\mathcal{T})}\nabla_{{\bm{\theta}}}\mathcal{L}_{{\bm{\theta}}}(\mathcal{T})$ $Tr(\mathbb{V}_{\operatorname{\mathbf{q}}}[\Omega])=\mathbb{E}_{\operatorname{\mathbf{q}}(\mathcal{T})}[(\frac{\operatorname{\mathbf{p}}(\mathcal{T})}{\operatorname{\mathbf{q}}(\mathcal{T})}\nabla_{{\bm{\theta}}}\mathcal{L}_{{\bm{\theta}}}(\mathcal{T})-\mu)\\\ (\frac{\operatorname{\mathbf{p}}(\mathcal{T})}{\operatorname{\mathbf{q}}(\mathcal{T})}\nabla_{{\bm{\theta}}}\mathcal{L}_{{\bm{\theta}}}(\mathcal{T})-\mu)^{T}]=\mathbb{E}_{\operatorname{\mathbf{q}}(\mathcal{T})}[||\frac{\operatorname{\mathbf{p}}(\mathcal{T})}{\operatorname{\mathbf{q}}(\mathcal{T})}\nabla_{{\bm{\theta}}}\mathcal{L}_{{\bm{\theta}}}(\mathcal{T})||_{2}^{2}]-||\mu||_{2}^{2}$ (15) By Jensen’s inequality: $\mathbb{E}_{\operatorname{\mathbf{q}}(\mathcal{T})}[||\frac{\operatorname{\mathbf{p}}(\mathcal{T})}{\operatorname{\mathbf{q}}(\mathcal{T})}\nabla_{{\bm{\theta}}}\mathcal{L}_{{\bm{\theta}}}(\mathcal{T})||_{2}^{2}]\geq\mathbb{E}_{\operatorname{\mathbf{q}}(\mathcal{T})}[||\frac{\operatorname{\mathbf{p}}(\mathcal{T})}{\operatorname{\mathbf{q}}(\mathcal{T})}\nabla_{{\bm{\theta}}}\mathcal{L}_{{\bm{\theta}}}(\mathcal{T})||_{2}]^{2}\\\ =(\mathbb{E}_{\operatorname{\mathbf{p}}(\mathcal{T})}[||\nabla_{{\bm{\theta}}}\mathcal{L}_{{\bm{\theta}}}(\mathcal{T})||_{2}])^{2}$ (16) The equality holds at $\operatorname{\mathbf{q}}^{*}(\mathcal{T})=\frac{\operatorname{\mathbf{p}}(\mathcal{T})||\nabla_{{\bm{\theta}}}\mathcal{L}_{{\bm{\theta}}}(\mathcal{T})||_{2}}{\int\operatorname{\mathbf{p}}(\mathcal{T})||\nabla_{{\bm{\theta}}}\mathcal{L}_{{\bm{\theta}}}(\mathcal{T})||_{2}}.$ by plugging the above $\operatorname{\mathbf{q}}^{*}(\mathcal{T})$ into the covariance expression. ## Appendix D Additional Results ### D.1 New ordering Order 1: Omniglot, Aircraft, Necessities, CUB, Quickdraw, MiniImagenet To simulate imbalanced domains in streaming setting, each domain on this sequence is trained on 3000, 2000, 4000, 2000, 4000, 40000 steps respectively. Order 2: Necessities, CUB, Omniglot, Aircraft, MiniImagenet, Quickdraw To simulate imbalanced domains in streaming setting, each domain on this sequence is trained on 6000, 2000, 6000, 3000, 3000, 24000 steps respectively. Order 1. Table 5 shows the results. Order 2. Table 6 shows the results. 
Table 5: Effect of memory size with order 1 5-Way 1-Shot 5-Way 5-Shots Algorithm ACC ACC PNet-RS ($n=100$) $36.82\pm 1.32$ $54.83\pm 1.12$ PNet-Ours ($n=100$) $42.35\pm 0.91$ $59.85\pm 0.79$ PNet-RS ($n=200$) $37.30\pm 1.21$ $55.23\pm 0.87$ PNet-Ours ($n=200$) $43.61\pm 0.86$ $61.32\pm 0.68$ PNet-RS ($n=300$) $37.76\pm 1.19$ $55.79\pm 0.91$ PNet-Ours ($n=300$) $44.32\pm 0.83$ $61.67\pm 0.57$ PNet-RS ($n=500$) $38.82\pm 1.27$ $55.95\pm 0.98$ PNet-Ours ($n=500$) $44.81\pm 0.63$ $62.08\pm 0.61$ Table 6: Effect of memory size with order 2 5-Way 1-Shot 5-Way 5-Shots Algorithm ACC ACC PNet-RS ($n=100$) $43.08\pm 0.79$ $57.97\pm 0.87$ PNet-Ours ($n=100$) $46.67\pm 0.61$ $62.14\pm 0.50$ PNet-RS ($n=200$) $43.36\pm 0.72$ $58.23\pm 0.72$ PNet-Ours ($n=200$) $46.95\pm 0.52$ $62.83\pm 0.58$ PNet-RS ($n=300$) $44.16\pm 0.80$ $58.65\pm 0.79$ PNet-Ours ($n=300$) $47.64\pm 0.45$ $63.21\pm 0.46$ PNet-RS ($n=500$) $45.29\pm 0.82$ $59.36\pm 0.85$ PNet-Ours ($n=500$) $47.16\pm 0.49$ $63.02\pm 0.46$ (a) reservoir sampling (b) our memory management mechanism Figure 5: Results of different domain proportion in the memory of our memory management methods and reservoir sampling when meta learning on an imbalanced task stream from three latent domains. ### D.2 Effect of domain revisiting This section shows the results of effect of domain revisiting with domain ordering, Quickdraw, MiniImagenet, Omniglot, CUB, Quickdraw, Aircraft, Necessities. The domain Quickdraw is revisited. To simulate imbalanced domains in streaming setting, each domain on this sequence is trained on 5000, 2000, 6000, 2000, 3000, 2000, 24000 steps respectively. Table 7: Comparisons with PNet-based baselines with domain revisiting 5-Way 1-Shot 5-Way 5-Shots Algorithm ACC ACC PNet-Sequential $32.02\pm 0.50$ $49.60\pm 0.45$ PNet-RS $37.31\pm 1.56$ $56.29\pm 1.35$ PNet-Ours $40.25\pm 0.98$ $60.36\pm 0.83$ Joint-training $52.96\pm 0.45$ $68.56\pm 0.57$ Independent-training $58.25\pm 0.36$ $72.23\pm 0.29$ ### D.3 Effect of different ratios of domains Table 8: Comparisons with PNet-based baselines with different imbalanced ratio of each domain 5-Way 1-Shot 5-Way 5-Shots Algorithm ACC ACC PNet-Sequential $29.91\pm 0.71$ $46.97\pm 0.65$ PNet-RS $34.97\pm 1.52$ $54.79\pm 0.69$ PNet-GSS $35.65\pm 1.28$ $56.65\pm 0.81$ PNet-AGEM $34.53\pm 1.36$ $54.91\pm 0.73$ PNet-MIR $35.09\pm 1.29$ $54.56\pm 0.90$ PNet-MER $35.16\pm 1.32$ $55.71\pm 0.78$ PNet- Ours $40.57\pm 0.68$ $61.53\pm 0.58$ Joint-training $52.96\pm 0.45$ $68.56\pm 0.37$ Independent-training $58.25\pm 0.36$ $72.23\pm 0.29$ ### D.4 Ablation Study #### Effect of PETS Figure 6 shows the effect of sampling tasks with PETS. Figure 6: Gradient variance comparison between uniform sampling and PETS, each step (1000 iterations). #### Effect of memory management mechanism Table 9 shows the effect of the proposed memory management mechanism. Table 9: Effect of Adaptive memory task sampling 5-Way 1-Shot 5-Way 5-Shots Algorithm ACC ACC PNet-RS $34.68\pm 1.96$ $53.69\pm 0.76$ PNet-Ours (without PETS) $38.85\pm 0.79$ $57.95\pm 0.67$ PNet-Ours (with PETS) $41.10\pm 0.42$ $60.37\pm 0.32$
# Bertrand types of regular curves and Bertrand framed curves in the Euclidean 3-space

Nozomi Nakatsuyama and Masatomo Takahashi

###### Abstract

A Bertrand (respectively, Mannheim) curve is a space curve whose principal normal line is the same as the principal normal (respectively, bi-normal) line of another curve. By definition, the other curve is a parallel curve with respect to the direction of the principal normal vector. In this paper, we consider the other cases, that is, a space curve whose tangent (or principal normal, bi-normal) line is the same as the tangent (or principal normal, bi-normal) line of another curve, respectively. We say that a curve is a Bertrand type curve if such another curve exists. We clarify the existence conditions of Bertrand type curves in all cases. There are cases in which the Bertrand type curve does not exist. On the other hand, since the other curve may have singular points, we also consider curves with singular points. As smooth curves with singular points, it is useful to use framed curves in the Euclidean space. We then define and investigate Bertrand framed curves. We also clarify the existence conditions of Bertrand framed curves in all cases.

2020 Mathematics Subject Classification: 53A04, 58K05. Key Words and Phrases: Bertrand type, regular curve, Bertrand framed curve, framed curve.

## 1 Introduction

Bertrand and Mannheim curves are classical objects in differential geometry ([1, 2, 3, 4, 7, 15, 16, 17, 18, 21]). A Bertrand (respectively, Mannheim) curve is a space curve whose principal normal line is the same as the principal normal (respectively, bi-normal) line of another curve. By definition, the other curve is a parallel curve with respect to the direction of the principal normal vector. In order to define the principal normal vector, the non-degenerate condition is needed. In general, the parallel curve does not satisfy the non-degenerate condition. Even in the regular case, the existence conditions of the Bertrand and Mannheim curves seem to miss the non-degenerate condition in some books and papers. Bertrand curves have been applied in computer-aided geometric design (cf. [20]).

In [11], we investigated Bertrand and Mannheim curves of non-degenerate curves under the assumption that the torsion does not vanish. In this paper, we give existence conditions of Bertrand and Mannheim curves without this assumption in §2.

Since we have three lines, that is, the tangent, principal normal and bi-normal lines, it is natural to ask about the other cases. We consider the other cases, that is, a space curve whose tangent (or principal normal, bi-normal) line is the same as the tangent (or principal normal, bi-normal) line of another curve, respectively. We say that a curve is a Bertrand type curve if such another curve exists. In §3, we clarify the existence conditions of Bertrand type curves in all cases. As a consequence, there are cases in which the Bertrand type curve does not exist. Moreover, the planar involutes and planar evolutes appear as Bertrand type curves (Theorems 3.3 and 3.6). For more details on the properties of involutes and evolutes, see [6, 8, 13, 14, 19].

On the other hand, since the other curve may have singular points, we also consider curves with singular points. As smooth curves with singular points, it is useful to use framed curves in the Euclidean space (cf. [10]). In §2, we also review the Bertrand and Mannheim curves of framed curves (cf. [11]).
Then the same idea of Bertrand type curves is applied to the framed curves. In §4, we define and investigate Bertrand framed curves (Bertrand types of framed curves). We also clarify that the existence conditions of the Bertrand framed curves in all cases. As a consequence, the involutes and circular evolutes of framed curves (cf. [12]) appear as the Bertrand framed curves (Theorems 4.5, 4.10 and 4.18). Therefore, it is useful to find new framed curves by using Bertrand framed curves. We shall assume throughout the whole paper that all maps and manifolds are $C^{\infty}$ unless the contrary is explicitly stated. Acknowledgement. The second author was supported by JSPS KAKENHI Grant Number 20K03573. ## 2 Preliminaries We review the theories of Bertrand and Mannheim regular curves and framed curves. Let $\mathbb{R}^{3}$ be the $3$-dimensional Euclidean space equipped with the inner product $\mbox{\boldmath$a$}\cdot\mbox{\boldmath$b$}=a_{1}b_{1}+a_{2}b_{2}+a_{3}b_{3}$, where $\mbox{\boldmath$a$}=(a_{1},a_{2},a_{3})$ and $\mbox{\boldmath$b$}=(b_{1},b_{2},b_{3})\in\mathbb{R}^{3}$. The norm of $a$ is given by $|\mbox{\boldmath$a$}|=\sqrt{\mbox{\boldmath$a$}\cdot\mbox{\boldmath$a$}}$ and the vector product is given by $\mbox{\boldmath$a$}\times\mbox{\boldmath$b$}={\rm det}\left(\begin{array}[]{ccc}\mbox{\boldmath$e$}_{1}&\mbox{\boldmath$e$}_{2}&\mbox{\boldmath$e$}_{3}\\\ a_{1}&a_{2}&a_{3}\\\ b_{1}&b_{2}&b_{3}\end{array}\right)$ where $\\{\mbox{\boldmath$e$}_{1},\mbox{\boldmath$e$}_{2},\mbox{\boldmath$e$}_{3}\\}$ is the canonical basis of $\mathbb{R}^{3}$. Let $S^{2}$ be the unit sphere in $\mathbb{R}^{3}$, that is, $S^{2}=\\{\mbox{\boldmath$a$}\in\mathbb{R}^{3}||\mbox{\boldmath$a$}|=1\\}$. We denote the $3$-dimensional smooth manifold $\\{(\mbox{\boldmath$a$},\mbox{\boldmath$b$})\in S^{2}\times S^{2}|\mbox{\boldmath$a$}\cdot\mbox{\boldmath$b$}=0\\}$ by $\Delta$. ### 2.1 Regular curves Let $I$ be an interval of $\mathbb{R}$ and let $\gamma:I\to\mathbb{R}^{3}$ be a regular space curve, that is, $\dot{\gamma}(t)\not=0$ for all $t\in I$, where $\dot{\gamma}(t)=(d\gamma/dt)(t)$. We say that $\gamma$ is non- degenerate, or $\gamma$ satisfies the non-degenerate condition if $\dot{\gamma}(t)\times\ddot{\gamma}(t)\not=0$ for all $t\in I$. If we take the arc-length parameter $s$, that is, $|\gamma^{\prime}(s)|=1$ for all $s$, then the tangent vector, the principal normal vector and the bi- normal vector are given by $\mbox{\boldmath$t$}(s)=\gamma^{\prime}(s),\ \mbox{\boldmath$n$}(s)=\frac{\gamma^{\prime\prime}(s)}{|\gamma^{\prime\prime}(s)|},\ \mbox{\boldmath$b$}(s)=\mbox{\boldmath$t$}(s)\times\mbox{\boldmath$n$}(s),$ where $\gamma^{\prime}(s)=(d\gamma/ds)(s)$. 
Then $\\{\mbox{\boldmath$t$}(s),\mbox{\boldmath$n$}(s),\mbox{\boldmath$b$}(s)\\}$ is a moving frame of $\gamma(s)$ and we have the Frenet-Serret formula: $\left(\begin{array}[]{c}\mbox{\boldmath$t$}^{\prime}(s)\\\ \mbox{\boldmath$n$}^{\prime}(s)\\\ \mbox{\boldmath$b$}^{\prime}(s)\end{array}\right)=\left(\begin{array}[]{ccc}0&\kappa(s)&0\\\ -\kappa(s)&0&\tau(s)\\\ 0&-\tau(s)&0\end{array}\right)\left(\begin{array}[]{c}\mbox{\boldmath$t$}(s)\\\ \mbox{\boldmath$n$}(s)\\\ \mbox{\boldmath$b$}(s)\end{array}\right),$ where $\kappa(s)=|\gamma^{\prime\prime}(s)|,\ \tau(s)=\frac{{\rm det}(\gamma^{\prime}(s),\gamma^{\prime\prime}(s),\gamma^{\prime\prime\prime}(s))}{\kappa^{2}(s)}.$ If we take a general parameter $t$, then the tangent vector, the principal normal vector and the bi-normal vector are given by $\mbox{\boldmath$t$}(t)=\frac{\dot{\gamma}(t)}{|\dot{\gamma}(t)|},\ \mbox{\boldmath$n$}(t)=\mbox{\boldmath$b$}(t)\times\mbox{\boldmath$t$}(t),\ \mbox{\boldmath$b$}(t)=\frac{\dot{\gamma}(t)\times\ddot{\gamma}(t)}{|\dot{\gamma}(t)\times\ddot{\gamma}(t)|}.$ Then $\\{\mbox{\boldmath$t$}(t),\mbox{\boldmath$n$}(t),\mbox{\boldmath$b$}(t)\\}$ is a moving frame of $\gamma(t)$ and we have the Frenet-Serret formula: $\left(\begin{array}[]{c}\dot{\mbox{\boldmath$t$}}(t)\\\ \dot{\mbox{\boldmath$n$}}(t)\\\ \dot{\mbox{\boldmath$b$}}(t)\end{array}\right)=\left(\begin{array}[]{ccc}0&|\dot{\gamma}(t)|\kappa(t)&0\\\ -|\dot{\gamma}(t)|\kappa(t)&0&|\dot{\gamma}(t)|\tau(t)\\\ 0&-|\dot{\gamma}(t)|\tau(t)&0\end{array}\right)\left(\begin{array}[]{c}\mbox{\boldmath$t$}(t)\\\ \mbox{\boldmath$n$}(t)\\\ \mbox{\boldmath$b$}(t)\end{array}\right),$ where $\kappa(t)=\frac{|\dot{\gamma}(t)\times\ddot{\gamma}(t)|}{|\dot{\gamma}(t)|^{3}},\ \tau(t)=\frac{{\rm det}(\dot{\gamma}(t),\ddot{\gamma}(t),\dddot{\gamma}(t))}{|\dot{\gamma}(t)\times\ddot{\gamma}(t)|^{2}}.$ Note that in order to define $\mbox{\boldmath$t$}(t),\mbox{\boldmath$n$}(t),\mbox{\boldmath$b$}(t),\kappa(t)$ and $\tau(t)$, we assume that $\gamma$ is not only regular, but also non- degenerate. ### 2.2 Bertrand and Mannheim non-degenerate curves Let $\gamma$ and $\overline{\gamma}:I\to\mathbb{R}^{3}$ be different non- degenerate curves. ###### Definition 2.1 We say that $\gamma$ and $\overline{\gamma}$ are Bertrand mates if there exists a smooth function $\lambda:I\to\mathbb{R}$ such that $\overline{\gamma}(t)=\gamma(t)+\lambda(t)\mbox{\boldmath$n$}(t)$ and $\mbox{\boldmath$n$}(t)=\pm\overline{\mbox{\boldmath$n$}}(t)$ for all $t\in I$. We also say that $\gamma:I\to\mathbb{R}^{3}$ is a Bertrand curve if there exists another non-degenerate curve $\overline{\gamma}:I\to\mathbb{R}^{3}$ such that $\gamma$ and $\overline{\gamma}$ are Bertrand mates. In [11], we investigated Bertrand curves of non-degenerate curves under the assumption $\tau(s)\not=0$ for all $s\in I$. However, we give an existence condition without this assumption. By a parameter change, we may assume that $s$ is the arc-length parameter of $\gamma$. ###### Theorem 2.2 Let $\gamma:I\to\mathbb{R}^{3}$ be non-degenerate with the arc-length parameter. $(1)$ Suppose that there exists a point $s_{0}\in I$ such that $\tau(s_{0})\not=0$. Then $\gamma$ is a Bertrand curve if and only if there exists a non-zero constant $A$ and a constant $B$ such that $A\kappa(s)+B\tau(s)=1$ and $\tau(s)(B\kappa(s)-A\tau(s))\not=0$ for all $s\in I$. $(2)$ Suppose that $\tau(s)=0$ for all $s\in I$. Then $\gamma$ is always a Bertrand curve. Proof. $(1)$ Suppose that $\gamma$ is a Bertrand curve. 
By differentiating $\overline{\gamma}(s)=\gamma(s)+\lambda(s)\mbox{\boldmath$n$}(s)$, we have $|\dot{\overline{\gamma}}(s)|\overline{\mbox{\boldmath$t$}}(s)=(1-\lambda(s)\kappa(s))\mbox{\boldmath$t$}(s)+\lambda^{\prime}(s)\mbox{\boldmath$n$}(s)+\lambda(s)\tau(s)\mbox{\boldmath$b$}(s).$ Since $\mbox{\boldmath$n$}(s)=\pm\overline{\mbox{\boldmath$n$}}(s)$, we have $\lambda^{\prime}(s)=0$ for all $s\in I$. Therefore $\lambda$ is a constant. If $\lambda=0$, then $\overline{\gamma}(t)=\gamma(t)$ for all $t\in I$. Hence, $\lambda$ is a non-zero constant. We rewrite $\lambda$ as $A$. Note that $s$ is not the arc-length parameter of $\overline{\gamma}$. By differentiating $\overline{\gamma}(s)=\gamma(s)+A\mbox{\boldmath$n$}(s)$, we have $|\dot{\overline{\gamma}}(s)|\overline{\mbox{\boldmath$t$}}(s)=(1-A\kappa(s))\mbox{\boldmath$t$}(s)+A\tau(s)\mbox{\boldmath$b$}(s).$ If $\mbox{\boldmath$n$}(s)=\overline{\mbox{\boldmath$n$}}(s)$, there exists a smooth function $\theta:I\to\mathbb{R}$ such that $\left(\begin{array}[]{c}\overline{\mbox{\boldmath$b$}}(s)\\\ \overline{\mbox{\boldmath$t$}}(s)\end{array}\right)=\left(\begin{array}[]{cc}\cos\theta(s)&-\sin\theta(s)\\\ \sin\theta(s)&\cos\theta(s)\end{array}\right)\left(\begin{array}[]{c}{\mbox{\boldmath$b$}}(s)\\\ {\mbox{\boldmath$t$}}(s)\end{array}\right).$ Then $|\dot{\overline{\gamma}}(s)|\sin\theta(s)=A\tau(s)$ and $|\dot{\overline{\gamma}}(s)|\cos\theta(s)=1-A\kappa(s)$. It follows that $\displaystyle-A\tau(s)\cos\theta(s)+(1-A\kappa(s))\sin\theta(s)=0.$ (1) By differentiating $\overline{\mbox{\boldmath$t$}}(s)=\sin\theta(s)\mbox{\boldmath$b$}(s)+\cos\theta(s)\mbox{\boldmath$t$}(s)$, we have $|\dot{\overline{\gamma}}(s)|\overline{\kappa}(s)\overline{\mbox{\boldmath$n$}}(s)=\theta^{\prime}(s)\cos\theta(s)\mbox{\boldmath$b$}(s)-\theta^{\prime}(s)\sin\theta(s)\mbox{\boldmath$t$}(s)+(-\tau(s)\sin\theta(s)+\kappa(s)\cos\theta(s))\mbox{\boldmath$n$}(s).$ Since $\mbox{\boldmath$n$}(s)=\overline{\mbox{\boldmath$n$}}(s)$, $\theta^{\prime}(s)=0$ for all $s\in I$. Therefore $\theta$ is a constant. If $\tau(s)=0$ at some point $s\in I$, then $\sin\theta=0$ and hence $\tau(s)=0$ for all $s\in I$. It is a contradict of the condition $\tau(s_{0})\not=0$. It follows that $\tau(s)\not=0$ for all $s\in I$ and $\sin\theta\not=0$. By equation (1), we have $A\kappa(s)+A(\cos\theta/\sin\theta)\tau(s)=1$. Hence, if we put $B=A\cos\theta/\sin\theta$, then $A\kappa(s)+B\tau(s)=1$ for all $s\in I$. Moreover, $|\dot{\overline{\gamma}}(s)|\overline{\kappa}(s)=-\tau(s)\sin\theta+\kappa(s)\cos\theta=\frac{\sin\theta}{A}(-A\tau(s)+B\kappa(s)).$ Since $\overline{\kappa}(s)\not=0$, we have $B\kappa(s)-A\tau(s)\not=0$ for all $s\in I$. On the other hand, if $\mbox{\boldmath$n$}(s)=-\overline{\mbox{\boldmath$n$}}(s)$, there exists a smooth function $\theta:I\to\mathbb{R}$ such that $\left(\begin{array}[]{c}\overline{\mbox{\boldmath$b$}}(s)\\\ \overline{\mbox{\boldmath$t$}}(s)\end{array}\right)=\left(\begin{array}[]{cc}\cos\theta(s)&-\sin\theta(s)\\\ \sin\theta(s)&\cos\theta(s)\end{array}\right)\left(\begin{array}[]{c}{\mbox{\boldmath$t$}}(s)\\\ {\mbox{\boldmath$b$}}(s)\end{array}\right).$ Then $|\dot{\overline{\gamma}}(s)|\sin\theta(s)=1-A\kappa(s)$ and $|\dot{\overline{\gamma}}(s)|\cos\theta(s)=A\tau(s)$. 
It follows that $\displaystyle-A\tau(s)\sin\theta(s)+(1-A\kappa(s))\cos\theta(s)=0.$ (2) By differentiating $\overline{\mbox{\boldmath$t$}}(s)=\sin\theta(s)\mbox{\boldmath$t$}(s)+\cos\theta(s)\mbox{\boldmath$b$}(s)$, we have $|\dot{\overline{\gamma}}(s)|\overline{\kappa}(s)\overline{\mbox{\boldmath$n$}}(s)=\theta^{\prime}(s)\cos\theta(s)\mbox{\boldmath$t$}(s)-\theta^{\prime}(s)\sin\theta(s)\mbox{\boldmath$b$}(s)+(\kappa(s)\sin\theta(s)-\tau(s)\cos\theta(s))\mbox{\boldmath$n$}(s).$ Since $\mbox{\boldmath$n$}(s)=-\overline{\mbox{\boldmath$n$}}(s)$, $\theta^{\prime}(s)=0$ for all $s\in I$. Therefore $\theta$ is a constant. If $\tau(s)=0$ at some point $s\in I$, then $\cos\theta=0$ and hence $\tau(s)=0$ for all $s\in I$. It is a contradict of the condition $\tau(s_{0})\not=0$. It follows that $\tau(s)\not=0$ for all $s\in I$ and $\cos\theta\not=0$. By equation (2), we have $A\kappa(s)+A(\sin\theta/\cos\theta)\tau(s)=1$. Hence, if we put $B=A\sin\theta/\cos\theta$, then $A\kappa(s)+B\tau(s)=1$ for all $s\in I$. Moreover, $|\dot{\overline{\gamma}}(s)|\overline{\kappa}(s)=\kappa(s)\sin\theta+\tau(s)\cos\theta=\frac{\cos\theta}{A}(-A\tau(s)+B\kappa(s)).$ Since $\overline{\kappa}(s)\not=0$, we have $B\kappa(s)-A\tau(s)\not=0$ for all $s\in I$. Conversely, suppose there exists a non-zero constant $A$ and a constant $B$ such that $A\kappa(s)+B\tau(s)=1$ and $\tau(s)(B\kappa(s)-A\tau(s))\not=0$ for all $s\in I$. Set $\overline{\gamma}(s)=\gamma(s)+A\mbox{\boldmath$n$}(s)$. By a direct calculation, we have $\displaystyle\dot{\overline{\gamma}}(s)$ $\displaystyle=|\dot{\overline{\gamma}}(s)|\overline{\mbox{\boldmath$t$}}(s)=(1-A\kappa(s))\mbox{\boldmath$t$}(s)+A\tau(s)\mbox{\boldmath$b$}(s)=\tau(s)(B\mbox{\boldmath$t$}(s)+A\mbox{\boldmath$b$}(s)),$ $\displaystyle\ddot{\overline{\gamma}}(s)$ $\displaystyle=\tau^{\prime}(s)(B\mbox{\boldmath$t$}(s)+A\mbox{\boldmath$b$}(s))+\tau(s)(B\kappa(s)-A\tau(s))\mbox{\boldmath$n$}(s),$ $\displaystyle\dot{\overline{\gamma}}(s)\times\ddot{\overline{\gamma}}(s)$ $\displaystyle=\tau^{2}(s)B(B\kappa(s)-A\tau(s))\mbox{\boldmath$b$}(t)-\tau^{2}(s)A(B\kappa(s)-A\tau(s))\mbox{\boldmath$t$}(s)\not=0.$ It follows that $\overline{\gamma}$ is a non-degenerate curve. Since $|\dot{\overline{\gamma}}(s)|=\sqrt{A^{2}+B^{2}}|\tau(s)|$, we have $\overline{\mbox{\boldmath$t$}}(s)={\rm sgn}(\tau(s))(1/\sqrt{A^{2}+B^{2}})(B\mbox{\boldmath$t$}(s)+A\mbox{\boldmath$b$}(s)),$ where ${\rm sgn}(\tau(s))=1$ if $\tau(s)>0$ and ${\rm sgn}(\tau(s))=-1$ if $\tau(s)<0$. By differentiating $\overline{\mbox{\boldmath$t$}}(s)$, we have $|\dot{\overline{\gamma}}(s)|\overline{\kappa}(s)\overline{\mbox{\boldmath$n$}}(s)={\rm sgn}(\tau(s))(1/\sqrt{A^{2}+B^{2}})(B\kappa(s)-A\tau(s))\mbox{\boldmath$n$}(s)$. By the assumption, we have $\mbox{\boldmath$n$}(s)=\pm\overline{\mbox{\boldmath$n$}}(s)$ for all $s\in I$. It follows that $\gamma$ and $\overline{\gamma}$ are Bertrand mates. $(2)$ Since $\kappa(s)>0$ for all $s\in I$, we can take a non-zero constant $\lambda$ such that $1-\lambda\kappa(s)\not=0$ for all $s\in I$. In fact, if $\lambda$ is a negative constant, then the condition hold. Set $\overline{\gamma}(s)=\gamma(s)+\lambda\mbox{\boldmath$n$}(s)$. 
By a direct calculation, we have $\displaystyle\dot{\overline{\gamma}}(s)$ $\displaystyle=|\dot{\overline{\gamma}}(s)|\overline{\mbox{\boldmath$t$}}(s)=(1-\lambda\kappa(s))\mbox{\boldmath$t$}(s),$ $\displaystyle\ddot{\overline{\gamma}}(s)$ $\displaystyle=-\lambda\kappa^{\prime}(s)\mbox{\boldmath$t$}(s)+(1-\lambda\kappa(s))\kappa(s)\mbox{\boldmath$n$}(s),$ $\displaystyle\dot{\overline{\gamma}}(s)\times\ddot{\overline{\gamma}}(s)$ $\displaystyle=(1-\lambda\kappa(s))^{2}\kappa(s)\mbox{\boldmath$b$}(s)\not=0.$ It follows that $\overline{\gamma}$ is a non-degenerate curve. Since $|\dot{\overline{\gamma}}(s)|=|1-\lambda\kappa(s)|$ and $\overline{\mbox{\boldmath$t$}}(s)=\pm\mbox{\boldmath$t$}(s)$, we have $|\dot{\overline{\gamma}}(s)|\overline{\kappa}(s)\overline{\mbox{\boldmath$n$}}(s)=\pm\kappa(s)\mbox{\boldmath$n$}(s)$. By the assumption, we have $\mbox{\boldmath$n$}(s)=\pm\overline{\mbox{\boldmath$n$}}(s)$ for all $s\in I$. It follows that $\gamma$ and $\overline{\gamma}$ are Bertrand mates. $\Box$ ###### Proposition 2.3 Let $\gamma$ and $\overline{\gamma}:I\rightarrow\mathbb{R}^{3}$ be different non-degenerate curves. $(1)$ Suppose that there exists a point $s_{0}\in I$ such that $\tau(s_{0})\not=0$ and $\gamma$ and $\overline{\gamma}$ are Bertrand mates with $\overline{\gamma}(s)=\gamma(s)+A\mbox{\boldmath$n$}(s)$ and $A\kappa(s)+B\tau(s)=1$ for all $s\in I$, where $A$ is a non-zero constant and $B$ is a constant. Then the curvature $\overline{\kappa}$ and the torsion $\overline{\tau}$ of $\overline{\gamma}$ are given by $\overline{\kappa}(s)=\frac{|B\kappa(s)-A\tau(s)|}{(A^{2}+B^{2})|\tau(s)|},\ \overline{\tau}(s)=\frac{1}{(A^{2}+B^{2})\tau(s)}.$ $(2)$ Suppose that $\tau(s)=0$ and $\gamma$ and $\overline{\gamma}$ are Bertrand mates with $\overline{\gamma}(s)=\gamma(s)+A\mbox{\boldmath$n$}(s)$, where $A$ is non-zero constant and $1-A\kappa(s)\not=0$ for all $s\in I$. Then the curvature $\overline{\kappa}$ and the torsion $\overline{\tau}$ of $\overline{\gamma}$ are given by $\overline{\kappa}(s)=\frac{\kappa(s)}{|1-A\kappa(s)|},\ \overline{\tau}(s)=0.$ Proof. 
$(1)$ Since $\overline{\gamma}(s)=\gamma(s)+A\mbox{\boldmath$n$}(s)$, we have $\displaystyle\dot{\overline{\gamma}}(s)$ $\displaystyle=(1-A\kappa(s))\mbox{\boldmath$t$}(s)+A\tau(s)\mbox{\boldmath$b$}(s)=\tau(s)(B\mbox{\boldmath$t$}(s)+A\mbox{\boldmath$b$}(s)).$ Therefore, we have $\displaystyle\ddot{\overline{\gamma}}(s)$ $\displaystyle=\tau^{\prime}(s)(B\mbox{\boldmath$t$}(s)+A\mbox{\boldmath$b$}(s))+\tau(s)(B\kappa(s)-A\tau(s))\mbox{\boldmath$n$}(s),$ $\displaystyle\dddot{\overline{\gamma}}(s)$ $\displaystyle=\tau^{\prime\prime}(s)(B\mbox{\boldmath$t$}(s)+A\mbox{\boldmath$b$}(s))+2\tau^{\prime}(s)(B\kappa(s)-A\tau(s))\mbox{\boldmath$n$}(s)$ $\displaystyle\quad+\tau(s)(B\kappa^{\prime}(s)-A\tau^{\prime}(s))\mbox{\boldmath$n$}(s)+\tau(s)(B\kappa(s)-A\tau(s))(-\kappa(s)\mbox{\boldmath$t$}(s)+\tau(s)\mbox{\boldmath$b$}(s)).$ Since $\displaystyle|\dot{\overline{\gamma}}(s)|$ $\displaystyle=|\tau(s)|\sqrt{A^{2}+B^{2}},$ $\displaystyle\dot{\overline{\gamma}}(s)\times\ddot{\overline{\gamma}}(s)$ $\displaystyle=\tau^{2}(s)(B\kappa(s)-A\tau(s))(B\mbox{\boldmath$b$}(s)-A\mbox{\boldmath$t$}(s)),$ $\displaystyle|\dot{\overline{\gamma}}(s)\times\ddot{\overline{\gamma}}(s)|$ $\displaystyle=\tau^{2}(s)|B\kappa(s)-A\tau(s)|\sqrt{A^{2}+B^{2}},$ $\displaystyle{\rm det}(\dot{\overline{\gamma}}(s),\ddot{\overline{\gamma}}(s),\dddot{\overline{\gamma}}(s))$ $\displaystyle=\tau^{3}(s)(B\kappa(s)-A\tau(s))^{2},$ we have the curvature $\overline{\kappa}$ and the torsion $\overline{\tau}$ of $\overline{\gamma}$ as $\displaystyle\overline{\kappa}(s)$ $\displaystyle=\frac{|\dot{\overline{\gamma}}(s)\times\ddot{\overline{\gamma}}(s)|}{|\dot{\overline{\gamma}}(s)|^{3}}=\frac{|B\kappa(s)-A\tau(s)|}{(A^{2}+B^{2})|\tau(s)|},$ $\displaystyle\overline{\tau}(s)$ $\displaystyle=\frac{{\rm det}(\dot{\overline{\gamma}}(s),\ddot{\overline{\gamma}}(s),\dddot{\overline{\gamma}}(s))}{|\dot{\overline{\gamma}}(s)\times\ddot{\overline{\gamma}}(s)|^{2}}=\frac{1}{(A^{2}+B^{2})\tau(s)}.$ $(2)$ Since $\overline{\gamma}(s)=\gamma(s)+A\mbox{\boldmath$n$}(s)$, we have $\displaystyle\dot{\overline{\gamma}}(s)$ $\displaystyle=(1-A\kappa(s))\mbox{\boldmath$t$}(s),$ $\displaystyle\ddot{\overline{\gamma}}(s)$ $\displaystyle=-A\kappa^{\prime}(s)\mbox{\boldmath$t$}(s)+(1-A\kappa(s))\kappa(s)\mbox{\boldmath$n$}(s),$ $\displaystyle\dddot{\overline{\gamma}}(s)$ $\displaystyle=\left(-A\kappa^{\prime\prime}(s)-(1-A\kappa(s))\kappa^{2}(s)\right)\mbox{\boldmath$t$}(s)+\left(\kappa^{\prime}(s)-3A\kappa(s)\kappa^{\prime}(s)\right)\mbox{\boldmath$n$}(t).$ Since $\displaystyle|\dot{\overline{\gamma}}(s)|$ $\displaystyle=|1-A\kappa(s)|,$ $\displaystyle\dot{\overline{\gamma}}(s)\times\ddot{\overline{\gamma}}(s)$ $\displaystyle=\kappa(s)(1-A\kappa(s))^{2}\mbox{\boldmath$b$}(s),$ $\displaystyle|\dot{\overline{\gamma}}(s)\times\ddot{\overline{\gamma}}(s)|$ $\displaystyle=\kappa(s)(1-A\kappa(s))^{2},$ $\displaystyle{\rm det}(\dot{\overline{\gamma}}(s),\ddot{\overline{\gamma}}(s),\dddot{\overline{\gamma}}(s))$ $\displaystyle=0,$ we have the curvature $\overline{\kappa}$ and the torsion $\overline{\tau}$ of $\overline{\gamma}$ as $\displaystyle\overline{\kappa}(s)=\frac{|\dot{\overline{\gamma}}(s)\times\ddot{\overline{\gamma}}(s)|}{|\dot{\overline{\gamma}}(s)|^{3}}=\frac{\kappa(s)}{|1-A\kappa(s)|},\ \overline{\tau}(s)=\frac{{\rm det}(\dot{\overline{\gamma}}(s),\ddot{\overline{\gamma}}(s),\dddot{\overline{\gamma}}(s))}{|\dot{\overline{\gamma}}(s)\times\ddot{\overline{\gamma}}(s)|^{2}}=0.$ $\Box$ ###### Definition 2.4 We say that $\gamma$ and $\overline{\gamma}$ are 
Mannheim mates if there exists a smooth function $\lambda:I\to\mathbb{R}$ such that $\overline{\gamma}(t)=\gamma(t)+\lambda(t)\mbox{\boldmath$n$}(t)$ and $\mbox{\boldmath$n$}(t)=\pm\overline{\mbox{\boldmath$b$}}(t)$ for all $t\in I$. We also say that $\gamma:I\to\mathbb{R}^{3}$ is a Mannheim curve if there exists another non-degenerate curve $\overline{\gamma}:I\to\mathbb{R}^{3}$ such that $\gamma$ and $\overline{\gamma}$ are Mannheim mates. In [11], we also investigated Mannheim curves of non-degenerate curves under the assumption $\tau(s)\not=0$ for all $s\in I$. However, this assumption is not needed. We give the result and its proof in detail. ###### Theorem 2.5 Let $\gamma:I\to\mathbb{R}^{3}$ be a non-degenerate curve with the arc-length parameter. Then $\gamma$ is a Mannheim curve if and only if there exists a non-zero constant $A$ such that $A(\kappa^{2}(s)+\tau^{2}(s))=\kappa(s)$ and $\tau(s)(\kappa(s)\tau^{\prime}(s)-\kappa^{\prime}(s)\tau(s))\not=0$ for all $s\in I$. Proof. Suppose that $\gamma$ is a Mannheim curve. There exists a smooth function $\lambda:I\to\mathbb{R}$ such that $\overline{\gamma}(t)=\gamma(t)+\lambda(t)\mbox{\boldmath$n$}(t)$ and $\mbox{\boldmath$n$}(t)=\pm\overline{\mbox{\boldmath$b$}}(t)$ for all $t\in I$. By differentiating $\overline{\gamma}(s)=\gamma(s)+\lambda(s)\mbox{\boldmath$n$}(s)$, we have $|\dot{\overline{\gamma}}(s)|\overline{\mbox{\boldmath$t$}}(s)=(1-\lambda(s)\kappa(s))\mbox{\boldmath$t$}(s)+\lambda^{\prime}(s)\mbox{\boldmath$n$}(s)+\lambda(s)\tau(s)\mbox{\boldmath$b$}(s).$ Since $\mbox{\boldmath$n$}(s)=\pm\overline{\mbox{\boldmath$b$}}(s)$, we have $\lambda^{\prime}(s)=0$ for all $s\in I$. Therefore $\lambda$ is a constant. If $\lambda=0$, then $\overline{\gamma}(t)=\gamma(t)$ for all $t\in I$. Hence, $\lambda$ is a non-zero constant. We rewrite $\lambda$ as $A$. Note that $s$ is not the arc-length parameter of $\overline{\gamma}$. By differentiating $\overline{\gamma}(s)=\gamma(s)+A\mbox{\boldmath$n$}(s)$, we have $|\dot{\overline{\gamma}}(s)|\overline{\mbox{\boldmath$t$}}(s)=(1-A\kappa(s))\mbox{\boldmath$t$}(s)+A\tau(s)\mbox{\boldmath$b$}(s).$ If $\mbox{\boldmath$t$}(s)=\overline{\mbox{\boldmath$n$}}(s)$, there exists a smooth function $\theta:I\to\mathbb{R}$ such that $\left(\begin{array}[]{c}\overline{\mbox{\boldmath$t$}}(s)\\\ \overline{\mbox{\boldmath$n$}}(s)\end{array}\right)=\left(\begin{array}[]{cc}\cos\theta(s)&-\sin\theta(s)\\\ \sin\theta(s)&\cos\theta(s)\end{array}\right)\left(\begin{array}[]{c}{\mbox{\boldmath$b$}}(s)\\\ {\mbox{\boldmath$t$}}(s)\end{array}\right).$ Then $|\dot{\overline{\gamma}}(s)|\cos\theta(s)=A\tau(s)$ and $-|\dot{\overline{\gamma}}(s)|\sin\theta(s)=1-A\kappa(s)$. It follows that $\displaystyle A\tau(s)\sin\theta(s)+(1-A\kappa(s))\cos\theta(s)=0.$ (3) By differentiating $\overline{\mbox{\boldmath$t$}}(s)=\cos\theta(s)\mbox{\boldmath$b$}(s)-\sin\theta(s)\mbox{\boldmath$t$}(s)$, we have $|\dot{\overline{\gamma}}(s)|\overline{\kappa}(s)\overline{\mbox{\boldmath$n$}}(s)=-\theta^{\prime}(s)\sin\theta(s)\mbox{\boldmath$b$}(s)-\theta^{\prime}(s)\cos\theta(s)\mbox{\boldmath$t$}(s)-(\tau(s)\cos\theta(s)+\kappa(s)\sin\theta(s))\mbox{\boldmath$n$}(s).$ Since $\mbox{\boldmath$n$}(s)=\overline{\mbox{\boldmath$b$}}(s)$, $\displaystyle\tau(s)\cos\theta(s)+\kappa(s)\sin\theta(s)=0$ (4) for all $s\in I$. If $\tau(s)=0$ at a point $s\in I$, then $\cos\theta(s)=0$ and $\sin\theta(s)=0$ by (4), which is impossible. Hence $\tau(s)\not=0$ for all $s\in I$. It follows that $\cos\theta(s)\not=0$ and $\sin\theta(s)\not=0$.
Since $\overline{\mbox{\boldmath$n$}}(s)=\sin\theta(s)\mbox{\boldmath$b$}(s)+\cos\theta(s)\mbox{\boldmath$t$}(s)$, we have $|\dot{\overline{\gamma}}(s)|\overline{\kappa}(s)=-\theta^{\prime}(s)$. By equations (3) and (4), we have $\displaystyle A(\kappa^{2}(s)+\tau^{2}(s))=\kappa(s).$ By differentiating (4), we have $-\tau(s)\theta^{\prime}(s)\sin\theta(s)+\tau^{\prime}(s)\cos\theta(s)+\kappa(s)\theta^{\prime}(s)\cos\theta(s)+\kappa^{\prime}(s)\sin\theta(s)=0.$ Hence $\theta^{\prime}(s)=(-\kappa(s)\tau^{\prime}(s)+\kappa^{\prime}(s)\tau(s))/(\kappa^{2}(s)+\tau^{2}(s))$. Since $|\dot{\overline{\gamma}}(s)|\overline{\kappa}(s)>0$, we have $\kappa(s)\tau^{\prime}(s)-\kappa^{\prime}(s)\tau(s)>0$ for all $s\in I$. On the other hand, if $\mbox{\boldmath$n$}(s)=-\overline{\mbox{\boldmath$b$}}(s)$, there exists a smooth function $\theta:I\to\mathbb{R}$ such that $\left(\begin{array}[]{c}\overline{\mbox{\boldmath$t$}}(s)\\\ \overline{\mbox{\boldmath$n$}}(s)\end{array}\right)=\left(\begin{array}[]{cc}\cos\theta(s)&-\sin\theta(s)\\\ \sin\theta(s)&\cos\theta(s)\end{array}\right)\left(\begin{array}[]{c}{\mbox{\boldmath$t$}}(s)\\\ {\mbox{\boldmath$b$}}(s)\end{array}\right).$ Then $|\dot{\overline{\gamma}}(s)|\cos\theta(s)=1-A\kappa(s)$ and $-|\dot{\overline{\gamma}}(s)|\sin\theta(s)=A\tau(s)$. It follows that $\displaystyle A\tau(s)\sin\theta(s)+(1-A\kappa(s))\cos\theta(s)=0.$ (5) By differentiating $\overline{\mbox{\boldmath$t$}}(s)=\cos\theta(s)\mbox{\boldmath$t$}(s)-\sin\theta(s)\mbox{\boldmath$b$}(s)$, we have $|\dot{\overline{\gamma}}(s)|\overline{\kappa}(s)\overline{\mbox{\boldmath$n$}}(s)=-\theta^{\prime}(s)\sin\theta(s)\mbox{\boldmath$t$}(s)-\theta^{\prime}(s)\cos\theta(s)\mbox{\boldmath$b$}(s)-(\tau(s)\sin\theta(s)+\kappa(s)\cos\theta(s))\mbox{\boldmath$n$}(s).$ Since $\mbox{\boldmath$n$}(s)=-\overline{\mbox{\boldmath$b$}}(s)$, $\displaystyle\tau(s)\sin\theta(s)+\kappa(s)\cos\theta(s)=0$ (6) for all $s\in I$. If $\tau(s)=0$ at a point $s\in I$, then $\sin\theta(s)=0$ and $\cos\theta(s)=0$ by (6). Hence $\tau(s)\not=0$ for all $s\in I$. It follows that $\sin\theta(s)\not=0$ and $\cos\theta(s)\not=0$. Since $\overline{\mbox{\boldmath$n$}}(s)=\sin\theta(s)\mbox{\boldmath$t$}(s)+\cos\theta(s)\mbox{\boldmath$b$}(s)$, we have $|\dot{\overline{\gamma}}(s)|\overline{\kappa}(s)=-\theta^{\prime}(s)$. By equations (5) and (6), we have $\displaystyle A(\kappa^{2}(s)+\tau^{2}(s))=\kappa(s).$ By differentiating (6), we have $\tau(s)\theta^{\prime}(s)\cos\theta(s)+\tau^{\prime}(s)\sin\theta(s)-\kappa(s)\theta^{\prime}(s)\sin\theta(s)+\kappa^{\prime}(s)\cos\theta(s)=0.$ Hence $\theta^{\prime}(s)=(\kappa(s)\tau^{\prime}(s)-\kappa^{\prime}(s)\tau(s))/(\kappa^{2}(s)+\tau^{2}(s))$. Since $|\dot{\overline{\gamma}}(s)|\overline{\kappa}(s)>0$, we have $\kappa(s)\tau^{\prime}(s)-\kappa^{\prime}(s)\tau(s)<0$ for all $s\in I$. Conversely, suppose that there exists a non-zero constant $A$ such that $A(\kappa^{2}(s)+\tau^{2}(s))=\kappa(s)$ and $\tau(s)(\kappa(s)\tau^{\prime}(s)-\kappa^{\prime}(s)\tau(s))\not=0$ for all $s\in I$. Set $\overline{\gamma}(s)=\gamma(s)+A\mbox{\boldmath$n$}(s)$. 
By a direct calculation, we have $\displaystyle\dot{\overline{\gamma}}(s)$ $\displaystyle=|\dot{\overline{\gamma}}(s)|\overline{\mbox{\boldmath$t$}}(s)=(1-A\kappa(s))\mbox{\boldmath$t$}(s)+A\tau(s)\mbox{\boldmath$b$}(s)=A\frac{\tau(s)}{\kappa(s)}(\tau(s)\mbox{\boldmath$t$}(s)+\kappa(s)\mbox{\boldmath$b$}(s)),$ $\displaystyle\ddot{\overline{\gamma}}(s)$ $\displaystyle=A\left(\frac{\tau(s)}{\kappa(s)}\right)^{\prime}(\tau(s)\mbox{\boldmath$t$}(s)+\kappa(s)\mbox{\boldmath$b$}(s))+A\frac{\tau(s)}{\kappa(s)}(\tau^{\prime}(s)\mbox{\boldmath$t$}(s)+\kappa^{\prime}(s)\mbox{\boldmath$b$}(s)),$ $\displaystyle\dot{\overline{\gamma}}(s)\times\ddot{\overline{\gamma}}(s)$ $\displaystyle=A^{2}\frac{\tau^{2}(s)}{\kappa^{2}(s)}(\kappa(s)\tau^{\prime}(s)-\kappa^{\prime}(s)\tau(s))\mbox{\boldmath$n$}(s)\not=0.$ It follows that $\overline{\gamma}$ is a non-degenerate curve. Since $\ddot{\overline{\gamma}}(s)=({d}/{ds})(|\dot{\overline{\gamma}}(s)|)\overline{\mbox{\boldmath$t$}}(s)+|\dot{\overline{\gamma}}(s)|^{2}\overline{\kappa}(s)\overline{\mbox{\boldmath$n$}}(s)$, we have $|\dot{\overline{\gamma}}(s)|^{3}\overline{\kappa}(s)\overline{\mbox{\boldmath$b$}}(s)=A^{2}(\tau^{2}(s)/\kappa^{2}(s))(\kappa(s)\tau^{\prime}(s)-\kappa^{\prime}(s)\tau(s))\mbox{\boldmath$n$}(s)$. By the assumption, we have $\mbox{\boldmath$n$}(s)=\pm\overline{\mbox{\boldmath$b$}}(s)$. It follows that $\gamma$ and $\overline{\gamma}$ are Mannheim mates. $\Box$ ###### Proposition 2.6 Let $\gamma$ and $\overline{\gamma}:I\rightarrow\mathbb{R}^{3}$ be different non-degenerate curves. Under the same assumptions as in Theorem 2.5, suppose that $\gamma$ and $\overline{\gamma}$ are Mannheim mates with $\overline{\gamma}(s)=\gamma(s)+A\mbox{\boldmath$n$}(s)$ for all $s\in I$, where $A$ is a non-zero constant. Then the curvature $\overline{\kappa}$ and the torsion $\overline{\tau}$ of $\overline{\gamma}$ are given by $\overline{\kappa}(s)=\frac{\kappa(s)|\kappa(s)\tau^{\prime}(s)-\kappa^{\prime}(s)\tau(s)|}{|A\tau(s)|(\tau^{2}(s)+\kappa^{2}(s))^{\frac{3}{2}}},\ \overline{\tau}(s)=\frac{\kappa^{2}(s)+\tau^{2}(s)}{\tau(s)}.$ Proof.
Since $\overline{\gamma}(s)=\gamma(s)+A\mbox{\boldmath$n$}(s)$, we have $\displaystyle\dot{\overline{\gamma}}(s)$ $\displaystyle=(1-A\kappa(s))\mbox{\boldmath$t$}(s)+A\tau(s)\mbox{\boldmath$b$}(s)=A\frac{\tau(s)}{\kappa(s)}(\tau(s)\mbox{\boldmath$t$}(s)+\kappa(s)\mbox{\boldmath$b$}(s)).$ Therefore, we have $\displaystyle\ddot{\overline{\gamma}}(s)$ $\displaystyle=A\left(\frac{\tau(s)}{\kappa(s)}\right)^{\prime}(\tau(s)\mbox{\boldmath$t$}(s)+\kappa(s)\mbox{\boldmath$b$}(s))+A\frac{\tau(s)}{\kappa(s)}(\tau^{\prime}(s)\mbox{\boldmath$t$}(s)+\kappa^{\prime}(s)\mbox{\boldmath$b$}(s)),$ $\displaystyle\dddot{\overline{\gamma}}(s)$ $\displaystyle=A\left(\frac{\tau(s)}{\kappa(s)}\right)^{\prime\prime}(\tau(s)\mbox{\boldmath$t$}(s)+\kappa(s)\mbox{\boldmath$b$}(s))+2A\left(\frac{\tau(s)}{\kappa(s)}\right)^{\prime}(\tau^{\prime}(s)\mbox{\boldmath$t$}(s)+\kappa^{\prime}(s)\mbox{\boldmath$b$}(s)),$ $\displaystyle\quad+A\frac{\tau(s)}{\kappa(s)}(\tau^{\prime\prime}(s)\mbox{\boldmath$t$}(s)+\kappa(s)\tau^{\prime}(s)\mbox{\boldmath$n$}(s)+\kappa^{\prime\prime}(s)\mbox{\boldmath$b$}(s)-\kappa^{\prime}(s)\tau(s)\mbox{\boldmath$n$}(s)).$ Since $\displaystyle|\dot{\overline{\gamma}}(s)|$ $\displaystyle=\frac{|A\tau(s)|}{\kappa(s)}\sqrt{\tau^{2}(s)+\kappa^{2}(s)},$ $\displaystyle\dot{\overline{\gamma}}(s)\times\ddot{\overline{\gamma}}(s)$ $\displaystyle=A^{2}\left(\frac{\tau(s)}{\kappa(s)}\right)^{2}(\kappa(s)\tau^{\prime}(s)-\kappa^{\prime}(s)\tau(s))\mbox{\boldmath$n$}(s),$ $\displaystyle|\dot{\overline{\gamma}}(s)\times\ddot{\overline{\gamma}}(s)|$ $\displaystyle=A^{2}\left(\frac{\tau(s)}{\kappa(s)}\right)^{2}|\kappa(s)\tau^{\prime}(s)-\kappa^{\prime}(s)\tau(s)|,$ $\displaystyle{\rm det}(\dot{\overline{\gamma}}(s),\ddot{\overline{\gamma}}(s),\dddot{\overline{\gamma}}(s))$ $\displaystyle=A^{3}\left(\frac{\tau(s)}{\kappa(s)}\right)^{3}(\kappa(s)\tau^{\prime}(s)-\kappa^{\prime}(s)\tau(s))^{2},$ we have the curvature and the torsion as $\displaystyle\overline{\kappa}(s)$ $\displaystyle=\frac{|\dot{\overline{\gamma}}(s)\times\ddot{\overline{\gamma}}(s)|}{|\dot{\overline{\gamma}}(s)|^{3}}=\frac{\kappa(s)|\kappa(s)\tau^{\prime}(s)-\kappa^{\prime}(s)\tau(s)|}{|A\tau(s)|(\tau^{2}(s)+\kappa^{2}(s))^{\frac{3}{2}}},$ $\displaystyle\overline{\tau}(s)$ $\displaystyle=\frac{{\rm det}(\dot{\overline{\gamma}}(s),\ddot{\overline{\gamma}}(s),\dddot{\overline{\gamma}}(s))}{|\dot{\overline{\gamma}}(s)\times\ddot{\overline{\gamma}}(s)|^{2}}=\frac{\kappa^{2}(s)+\tau^{2}(s)}{\tau(s)}.$ $\Box$ ### 2.3 Framed curves A framed curve in the $3$-dimensional Euclidean space is a smooth space curve with a moving frame, in detail see [10]. ###### Definition 2.7 We say that $(\gamma,\nu_{1},\nu_{2}):I\rightarrow\mathbb{R}^{3}\times\Delta$ is a framed curve if $\dot{\gamma}(t)\cdot\nu_{1}(t)=0$ and $\dot{\gamma}(t)\cdot\nu_{2}(t)=0$ for all $t\in I$. We say that $\gamma:I\to\mathbb{R}^{3}$ is a framed base curve if there exists $(\nu_{1},\nu_{2}):I\to\Delta$ such that $(\gamma,\nu_{1},\nu_{2})$ is a framed curve. We denote $\mu(t)=\nu_{1}(t)\times\nu_{2}(t)$. 
Then $\\{\nu_{1}(t),\nu_{2}(t),\mbox{\boldmath$\mu$}(t)\\}$ is a moving frame along the framed base curve $\gamma(t)$ in $\mathbb{R}^{3}$ and we have the Frenet type formula, $\left(\begin{array}[]{c}\dot{\nu_{1}}(t)\\\ \dot{\nu_{2}}(t)\\\ \dot{\mu}(t)\end{array}\right)=\left(\begin{array}[]{ccc}0&\ell(t)&m(t)\\\ -\ell(t)&0&n(t)\\\ -m(t)&-n(t)&0\end{array}\right)\left(\begin{array}[]{c}\nu_{1}(t)\\\ \nu_{2}(t)\\\ \mu(t)\end{array}\right),\ \dot{\gamma}(t)=\alpha(t)\mu(t),$ where $\ell(t)=\dot{\nu_{1}}(t)\cdot\nu_{2}(t)$, $m(t)=\dot{\nu_{1}}(t)\cdot\mu(t),n(t)=\dot{\nu_{2}}(t)\cdot\mu(t)$ and $\alpha(t)=\dot{\gamma}(t)\cdot\mu(t)$. We call the mapping $(\ell,m,n,\alpha)$ the curvature of the framed curve $(\gamma,\nu_{1},\nu_{2})$. Note that $t_{0}$ is a singular point of $\gamma$ if and only if $\alpha(t_{0})=0$. ###### Definition 2.8 Let $(\gamma,\nu_{1},\nu_{2})$ and $(\widetilde{\gamma},\widetilde{\nu}_{1},\widetilde{\nu}_{2}):I\rightarrow\mathbb{R}^{3}\times\Delta$ be framed curves. We say that $(\gamma,\nu_{1},\nu_{2})$ and $(\widetilde{\gamma},\widetilde{\nu}_{1},\widetilde{\nu}_{2})$ are congruent as framed curves if there exist a constant rotation $A\in SO(3)$ and a translation $\mbox{\boldmath$a$}\in\mathbb{R}^{3}$ such that $\widetilde{\gamma}(t)=A(\gamma(t))+\mbox{\boldmath$a$}$, $\widetilde{\nu_{1}}(t)=A(\nu_{1}(t))$ and $\widetilde{\nu_{2}}(t)=A(\nu_{2}(t))$ for all $t\in I$. We gave the existence and uniqueness theorems for framed curves in terms of the curvatures in [10]; see also [9]. ###### Theorem 2.9 (Existence Theorem for framed curves) Let $(\ell,m,n,\alpha):I\rightarrow\mathbb{R}^{4}$ be a smooth mapping. Then, there exists a framed curve $(\gamma,\nu_{1},\nu_{2}):I\to\mathbb{R}^{3}\times\Delta$ whose curvature is given by $(\ell,m,n,\alpha)$. ###### Theorem 2.10 (Uniqueness Theorem for framed curves) Let $(\gamma,\nu_{1},\nu_{2})$ and $(\widetilde{\gamma},\widetilde{\nu}_{1},\widetilde{\nu}_{2}):I\to\mathbb{R}^{3}\times\Delta$ be framed curves with curvatures $(\ell,m,n,\alpha)$ and $(\widetilde{\ell},\widetilde{m},\widetilde{n},\widetilde{\alpha})$, respectively. Then $(\gamma,\nu_{1},\nu_{2})$ and $(\widetilde{\gamma},\widetilde{\nu}_{1},\widetilde{\nu}_{2})$ are congruent as framed curves if and only if the curvatures $(\ell,m,n,\alpha)$ and $(\widetilde{\ell},\widetilde{m},\widetilde{n},\widetilde{\alpha})$ coincide. Let $(\gamma,\nu_{1},\nu_{2}):I\to\mathbb{R}^{3}\times\Delta$ be a framed curve with the curvature $(\ell,m,n,\alpha)$. For the normal plane of $\gamma(t)$, spanned by $\nu_{1}(t)$ and $\nu_{2}(t)$, there is some ambiguity in the choice of the frame, similar to the case of the Bishop frame of a regular space curve (cf. [5]). We define $(\widetilde{\nu}_{1}(t),\widetilde{\nu}_{2}(t))\in\Delta_{2}$ by $\left(\begin{array}[]{c}\widetilde{\nu}_{1}(t)\\\ \widetilde{\nu}_{2}(t)\end{array}\right)=\left(\begin{array}[]{cc}\cos\theta(t)&-\sin\theta(t)\\\ \sin\theta(t)&\cos\theta(t)\end{array}\right)\left(\begin{array}[]{c}\nu_{1}(t)\\\ \nu_{2}(t)\end{array}\right),$ where $\theta(t)$ is a smooth function.
Then $(\gamma,\widetilde{\nu}_{1},\widetilde{\nu}_{2}):I\to\mathbb{R}^{3}\times\Delta$ is also a framed curve and $\widetilde{\mu}(t)=\mu(t).$ By a direct calculation, we have $\displaystyle\dot{\widetilde{\nu}}_{1}(t)$ $\displaystyle=(\ell(t)-\dot{\theta}(t))\sin\theta(t)\nu_{1}(t)+(\ell(t)-\dot{\theta}(t))\cos\theta(t)\nu_{2}(t)$ $\displaystyle\quad+(m(t)\cos\theta(t)-n(t)\sin\theta(t))\mu(t),$ $\displaystyle\dot{\widetilde{\nu}}_{2}(t)$ $\displaystyle=-(\ell(t)-\dot{\theta}(t))\cos\theta(t)\nu_{1}(t)+(\ell(t)-\dot{\theta}(t))\sin\theta(t)\nu_{2}(t)$ $\displaystyle\quad+(m(t)\sin\theta(t)+n(t)\cos\theta(t))\mu(t).$ If we take a smooth function $\theta:I\rightarrow\mathbb{R}$ which satisfies $\dot{\theta}(t)=\ell(t)$, then we call the frame $\\{\widetilde{\nu}_{1}(t),\widetilde{\nu}_{2}(t),\mu(t)\\}$ an adapted frame along $\gamma(t)$. It follows that the Frenet-Serret type formula is given by $\displaystyle\left(\begin{array}[]{c}\dot{\widetilde{\nu}}_{1}(t)\\\ \dot{\widetilde{\nu}}_{2}(t)\\\ \dot{\mu}(t)\end{array}\right)=\left(\begin{array}[]{ccc}0&0&\widetilde{m}(t)\\\ 0&0&\widetilde{n}(t)\\\ -\widetilde{m}(t)&-\widetilde{n}(t)&0\end{array}\right)\left(\begin{array}[]{c}\widetilde{\nu}_{1}(t)\\\ \widetilde{\nu}_{2}(t)\\\ \mu(t)\end{array}\right),$ (16) where $\widetilde{m}(t)$ and $\widetilde{n}(t)$ are given by $\displaystyle\left(\begin{array}[]{c}\widetilde{m}(t)\\\ \widetilde{n}(t)\end{array}\right)=\left(\begin{array}[]{cc}\cos\theta(t)&-\sin\theta(t)\\\ \sin\theta(t)&\cos\theta(t)\end{array}\right)\left(\begin{array}[]{c}m(t)\\\ n(t)\end{array}\right).$ By a direct calculation, we have the following (cf. [10]). ###### Proposition 2.11 Suppose that $(\gamma,\nu_{1},\nu_{2}):I\to\mathbb{R}^{3}\times\Delta$ is a framed curve with curvature $(\ell,m,n,\alpha)$. Then $(\gamma,\nu_{2},\nu_{1}):I\to\mathbb{R}^{3}\times\Delta$ is also a framed curve with curvature $(-\ell,-n,-m,-\alpha)$. ### 2.4 Bertrand and Mannheim curves of framed curves Let $(\gamma,\nu_{1},\nu_{2})$ and $(\overline{\gamma},\overline{\nu}_{1},\overline{\nu}_{2}):I\to\mathbb{R}^{3}\times\Delta$ be framed curves with the curvature $(\ell,m,n,\alpha)$ and $(\overline{\ell},\overline{m},\overline{n},\overline{\alpha})$, respectively. Suppose that $\gamma$ and $\overline{\gamma}$ are different curves. In [11], we define Bertrand and Mannheim curves of framed curves, and give a characterization of the Bertrand and Mannheim curves. ###### Definition 2.12 We say that framed curves $(\gamma,\nu_{1},\nu_{2})$ and $(\overline{\gamma},\overline{\nu}_{1},\overline{\nu}_{2})$ are Bertrand mates (or, $(\nu_{1},\overline{\nu}_{1})$-mates) if there exists a smooth function $\lambda:I\to\mathbb{R}$ such that $\overline{\gamma}(t)=\gamma(t)+\lambda(t)\nu_{1}(t)$ and $\nu_{1}(t)=\overline{\nu}_{1}(t)$ for all $t\in I$. We also say that $(\gamma,\nu_{1},\nu_{2}):I\to\mathbb{R}^{3}\times\Delta$ is a Bertrand curve if there exists a framed curve $(\overline{\gamma},\overline{\nu}_{1},\overline{\nu}_{2}):I\to\mathbb{R}^{3}\times\Delta$ such that $(\gamma,\nu_{1},\nu_{2})$ and $(\overline{\gamma},\overline{\nu}_{1},\overline{\nu}_{2})$ are Bertrand mates. ###### Theorem 2.13 Let $(\gamma,\nu_{1},\nu_{2}):I\to\mathbb{R}^{3}\times\Delta$ be a framed curve with the curvature $(\ell,m,n,\alpha)$. 
Then $(\gamma,\nu_{1},\nu_{2})$ is a Bertrand curve if and only if there exist a non-zero constant $\lambda$ and a smooth function $\theta:I\to\mathbb{R}$ such that $\lambda\ell(t)\cos\theta(t)-(\alpha(t)+\lambda m(t))\sin\theta(t)=0$ for all $t\in I$. ###### Proposition 2.14 Suppose that $(\gamma,\nu_{1},\nu_{2})$ and $(\overline{\gamma},\overline{\nu}_{1},\overline{\nu}_{2}):I\to\mathbb{R}^{3}\times\Delta$ are Bertrand mates, where $\overline{\gamma}(t)=\gamma(t)+\lambda\nu_{1}(t),\overline{\nu_{1}}(t)=\sin\theta(t)\nu_{2}(t)+\cos\theta(t)\mu(t),\overline{\nu_{2}}(t)=\nu_{1}(t)$ and $\theta:I\to\mathbb{R}$ is a smooth function. Then the curvature $(\overline{\ell},\overline{m},\overline{n},\overline{\alpha})$ of $(\overline{\gamma},\overline{\nu}_{1},\overline{\nu}_{2})$ is given by $\displaystyle\overline{\ell}(t)$ $\displaystyle=\ell(t)\cos\theta(t)-m(t)\sin\theta(t),$ $\displaystyle\overline{m}(t)$ $\displaystyle=\ell(t)\sin\theta(t)+m(t)\cos\theta(t),$ $\displaystyle\overline{n}(t)$ $\displaystyle=n(t)-\dot{\theta}(t),$ $\displaystyle\overline{\alpha}(t)$ $\displaystyle=\lambda\ell(t)\sin\theta(t)+(\alpha(t)+\lambda m(t))\cos\theta(t).$ ###### Definition 2.15 We say that framed curves $(\gamma,\nu_{1},\nu_{2})$ and $(\overline{\gamma},\overline{\nu}_{1},\overline{\nu}_{2})$ are Mannheim mates (or, $(\nu_{1},\overline{\nu}_{2})$-mates) if there exists a smooth function $\lambda:I\to\mathbb{R}$ such that $\overline{\gamma}(t)=\gamma(t)+\lambda(t)\nu_{1}(t)$ and $\nu_{1}(t)=\overline{\nu}_{2}(t)$ for all $t\in I$. We also say that $(\gamma,\nu_{1},\nu_{2}):I\to\mathbb{R}^{3}\times\Delta$ is a Mannheim curve if there exists a framed curve $(\overline{\gamma},\overline{\nu}_{1},\overline{\nu}_{2}):I\to\mathbb{R}^{3}\times\Delta$ such that $(\gamma,\nu_{1},\nu_{2})$ and $(\overline{\gamma},\overline{\nu}_{1},\overline{\nu}_{2})$ are Mannheim mates. ###### Theorem 2.16 Let $(\gamma,\nu_{1},\nu_{2}):I\to\mathbb{R}^{3}\times\Delta$ be a framed curve with the curvature $(\ell,m,n,\alpha)$. Then $(\gamma,\nu_{1},\nu_{2})$ is a Mannheim curve if and only if there exist a non-zero constant $\lambda$ and a smooth function $\theta:I\to\mathbb{R}$ such that $\lambda\ell(t)\sin\theta(t)+(\alpha(t)+\lambda m(t))\cos\theta(t)=0$ for all $t\in I$. As a difference between non-degenerate regular space curves and framed curves, we have a relation between Bertrand and Mannheim curves of framed curves. ###### Theorem 2.17 Let $(\gamma,\nu_{1},\nu_{2}):I\to\mathbb{R}^{3}\times\Delta$ be a framed curve with the curvature $(\ell,m,n,\alpha)$. Then $(\gamma,\nu_{1},\nu_{2})$ is a Bertrand curve if and only if $(\gamma,\nu_{1},\nu_{2})$ is a Mannheim curve. ###### Remark 2.18 By definition, if $(\gamma,\nu_{1},\nu_{2})$ and $(\overline{\gamma},\overline{\nu}_{1},\overline{\nu}_{2})$ are Bertrand mates, then $\overline{\gamma}=\gamma+\lambda\nu_{1}$ and $\nu_{1}=\overline{\nu}_{1}$. Since $(\overline{\gamma},\overline{\nu}_{2},\overline{\nu}_{1})$ is also a framed curve, $(\gamma,\nu_{1},\nu_{2})$ and $(\overline{\gamma},\overline{\nu}_{2},\overline{\nu}_{1})$ are Mannheim mates and vice versa. Hence, $(\gamma,\nu_{1},\nu_{2})$ is a Bertrand curve if and only if $(\gamma,\nu_{1},\nu_{2})$ is a Mannheim curve. ## 3 Bertrand types of non-degenerate curves Let $\gamma$ and $\overline{\gamma}:I\to\mathbb{R}^{3}$ be non-degenerate curves. We consider a space curve whose tangent (or, principal normal, binormal) line is the same as the tangent (or, principal normal, binormal) line of another curve, respectively.
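Before turning to these more general mates, the classical conditions recalled above can be checked on a concrete example. The following SymPy sketch is not part of [11]: the circular helix, the sample constants $a$, $b$, $A$ and the helper `frenet_data` are our own choices for illustration. It verifies that a circular helix (constant $\kappa$ and $\tau$) satisfies $A\kappa(s)+B\tau(s)=1$ for suitable constants, and that the mate $\overline{\gamma}=\gamma+A\mbox{\boldmath$n$}$ has the curvature and torsion predicted by Proposition 2.3 (1).

```python
# Sanity check (assumption: helix and constants chosen by us, not from [11]).
import sympy as sp

s = sp.symbols('s', real=True)
a, b, A = sp.Rational(2), sp.Rational(1), sp.Rational(1, 2)   # sample values
c = sp.sqrt(a**2 + b**2)

# unit-speed circular helix with kappa = a/c^2 and tau = b/c^2
gamma = sp.Matrix([a*sp.cos(s/c), a*sp.sin(s/c), b*s/c])

def frenet_data(curve, par):
    """Curvature, torsion and principal normal, via the formulas
    kappa = |g' x g''|/|g'|^3, tau = det(g', g'', g''')/|g' x g''|^2
    used in the proof of Proposition 2.3."""
    d1, d2, d3 = curve.diff(par), curve.diff(par, 2), curve.diff(par, 3)
    cross = d1.cross(d2)
    speed = sp.sqrt(d1.dot(d1))
    kappa = sp.sqrt(cross.dot(cross)) / speed**3
    tau = sp.Matrix.hstack(d1, d2, d3).det() / cross.dot(cross)
    n = cross.cross(d1) / (sp.sqrt(cross.dot(cross)) * speed)
    return sp.simplify(kappa), sp.simplify(tau), n.applyfunc(sp.simplify)

kappa, tau, n = frenet_data(gamma, s)      # constants a/c**2 and b/c**2
B = sp.simplify((1 - A*kappa) / tau)       # solves A*kappa + B*tau = 1

gamma_bar = gamma + A*n                    # candidate Bertrand mate
kappa_bar, tau_bar, _ = frenet_data(gamma_bar, s)

# expected values from Proposition 2.3 (1)
kappa_exp = sp.Abs(B*kappa - A*tau) / ((A**2 + B**2) * sp.Abs(tau))
tau_exp = 1 / ((A**2 + B**2) * tau)

# numerical comparison at a sample parameter value; both should be ~0
print(sp.N((kappa_bar - kappa_exp).subs(s, sp.Rational(7, 10))))
print(sp.N((tau_bar - tau_exp).subs(s, sp.Rational(7, 10))))
```

Both printed differences evaluate to (numerically) zero; the same script can be adapted to test the Mannheim condition $A(\kappa^{2}(s)+\tau^{2}(s))=\kappa(s)$ of Theorem 2.5 on other sample curves.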
###### Definition 3.1 We say that $\gamma$ and $\overline{\gamma}$ are $(\mbox{\boldmath$v$},\overline{\mbox{\boldmath$w$}})$-mates if there exists a smooth function $\lambda:I\to\mathbb{R}$ with $\lambda\not\equiv 0$ such that $\overline{\gamma}(t)=\gamma(t)+\lambda(t)\mbox{\boldmath$v$}(t)$ and $\mbox{\boldmath$v$}(t)=\pm\overline{\mbox{\boldmath$w$}}(t)$ for all $t\in I$, where $v$ and $w$ are $\mbox{\boldmath$t$},\mbox{\boldmath$n$}$ or $b$. We also say that $\gamma$ is a $(\mbox{\boldmath$v$},\overline{\mbox{\boldmath$w$}})$-Bertrand type curve (or, $(\mbox{\boldmath$v$},\overline{\mbox{\boldmath$w$}})$-Bertrand-Mannheim type curve) if there exists another non-degenerate regular curve $\overline{\gamma}$ such that $\gamma$ and $\overline{\gamma}$ are $(\mbox{\boldmath$v$},\overline{\mbox{\boldmath$w$}})$-mates. We clarify the notation $\lambda\not\equiv 0$. Throughout this paper, $\lambda\not\equiv 0$ means that $\\{t\in I|\lambda(t)\not=0\\}$ is a dense subset of $I$. Then $\lambda$ is not identically zero for any non-trivial subintervals of $I$. It follows that $\gamma$ and $\overline{\gamma}$ are different space curves for any non-trivial subintervals of $I$. Note that if $\lambda$ is constant, then $\lambda\not\equiv 0$ means that $\lambda$ is a non-zero constant. We give all characterizations of Bertrand type curves of non-degenerate regular curves. Let $\gamma:I\to\mathbb{R}^{3}$ be a non-degenerate curve with curvature and torsion $(\kappa,\tau)$. By a parameter change, we may assume that $s$ is the arc-length parameter of $\gamma$. Note that $s$ is not the arc-length parameter of $\overline{\gamma}$. ###### Proposition 3.2 $\gamma$ is not a $(\mbox{\boldmath$t$},\overline{\mbox{\boldmath$t$}})$-Bertrand type curve. Proof. Suppose that $\gamma$ is a $(\mbox{\boldmath$t$},\overline{\mbox{\boldmath$t$}})$-Bertrand type curve. By differentiating $\overline{\gamma}(s)=\gamma(s)+\lambda(s)\mbox{\boldmath$t$}(s)$, we have $|\dot{\overline{\gamma}}(s)|\overline{\mbox{\boldmath$t$}}(s)=(1+\lambda^{\prime}(s))\mbox{\boldmath$t$}(s)+\lambda(s)\kappa(s)\mbox{\boldmath$n$}(s).$ Since $\mbox{\boldmath$t$}(s)=\pm\overline{\mbox{\boldmath$t$}}(s)$, we have $\lambda(s)\kappa(s)=0$ for all $s\in I$. Furthermore, since $\kappa(s)>0$, $\lambda(s)=0$ for all $s\in I$ and hence $\overline{\gamma}(t)=\gamma(t)$ for all $t\in I$. It follows that $\gamma$ is not a $(\mbox{\boldmath$t$},\overline{\mbox{\boldmath$t$}})$-Bertrand type curve. $\Box$ ###### Theorem 3.3 $\gamma$ is a $(\mbox{\boldmath$t$},\overline{\mbox{\boldmath$n$}})$-Bertrand type curve if and only if $\tau(s)=0$ and there exists a constant $c\in\mathbb{R}$ such that $-s+c\not=0$ for all $s\in I$. Proof. Suppose that $\gamma$ is a $(\mbox{\boldmath$t$},\overline{\mbox{\boldmath$n$}})$-Bertrand type curve. By differentiating $\overline{\gamma}(s)=\gamma(s)+\lambda(s)\mbox{\boldmath$t$}(s)$, we have $|\dot{\overline{\gamma}}(s)|\overline{\mbox{\boldmath$t$}}(s)=(1+\lambda^{\prime}(s))\mbox{\boldmath$t$}(s)+\lambda(s)\kappa(s)\mbox{\boldmath$n$}(s).$ Since $\mbox{\boldmath$t$}(s)=\pm\overline{\mbox{\boldmath$n$}}(s)$, we have $1+\lambda^{\prime}(s)=0$ for all $s\in I$. Therefore there exists a constant $c\in\mathbb{R}$ such that $\lambda(s)=-s+c$. 
If $\mbox{\boldmath$t$}(s)=\overline{\mbox{\boldmath$n$}}(s)$, there exists a smooth function $\theta:I\to\mathbb{R}$ such that $\left(\begin{array}[]{c}\overline{\mbox{\boldmath$b$}}(s)\\\ \overline{\mbox{\boldmath$t$}}(s)\end{array}\right)=\left(\begin{array}[]{cc}\cos\theta(s)&-\sin\theta(s)\\\ \sin\theta(s)&\cos\theta(s)\end{array}\right)\left(\begin{array}[]{c}\mbox{\boldmath$n$}(s)\\\ \mbox{\boldmath$b$}(s)\end{array}\right).$ Then $|\dot{\overline{\gamma}}(s)|\cos\theta(s)=0$ and $|\dot{\overline{\gamma}}(s)|\sin\theta(s)=\lambda(s)\kappa(s)$. Since $|\dot{\overline{\gamma}}(s)|\neq 0$, we have $\cos\theta(s)=0$, $\sin\theta(s)=\pm 1$ and hence $\overline{\mbox{\boldmath$b$}}(s)=\mp\mbox{\boldmath$b$}(s)$ and $\overline{\mbox{\boldmath$t$}}(s)=\pm\mbox{\boldmath$n$}(s)$ for all $s\in I$. Furthermore, since $|\dot{\overline{\gamma}}(s)|>0$ and $\kappa(s)>0$, we have $\lambda(s)=-s+c\neq 0$ for all $s\in I$. By differentiating $\overline{\mbox{\boldmath$b$}}(s)=\mp\mbox{\boldmath$b$}(s)$, we have $-|\dot{\overline{\gamma}}(s)|\overline{\tau}(s)\overline{\mbox{\boldmath$n$}}(s)=\pm\tau(s)\mbox{\boldmath$n$}(s).$ Since $\mbox{\boldmath$t$}(s)=\overline{\mbox{\boldmath$n$}}(s)$, we have $\tau(s)=0$ and $\overline{\tau}(s)=0$ for all $s\in I$. By differentiating $\overline{\mbox{\boldmath$t$}}(s)=\mp\mbox{\boldmath$n$}(s)$, we have $|\dot{\overline{\gamma}}(s)|\overline{\kappa}(s)\overline{\mbox{\boldmath$n$}}(s)=\mp\kappa(s)\mbox{\boldmath$t$}(s).$ Since $\mbox{\boldmath$t$}(s)=\overline{\mbox{\boldmath$n$}}(s)$, we have $|\dot{\overline{\gamma}}(s)|\overline{\kappa}(s)=\mp\kappa(s)$ for all $s\in I$. Furthermore, since $|\dot{\overline{\gamma}}(s)|\overline{\kappa}(s)>0$, we have $\overline{\mbox{\boldmath$t$}}(s)=-\mbox{\boldmath$n$}(s)$, $\sin\theta(s)=-1$ and $\overline{\mbox{\boldmath$b$}}(s)=\mbox{\boldmath$b$}(s)$. Hence, we have $\displaystyle|\dot{\overline{\gamma}}(s)|=-\lambda(s)\kappa(s),\ \kappa(s)=-\frac{|\dot{\overline{\gamma}}(s)|}{\lambda(s)}=\frac{|\dot{\overline{\gamma}}(s)|}{s-c}.$ Since $\kappa(s)>0$, if $\mbox{\boldmath$t$}(s)=\overline{\mbox{\boldmath$n$}}(s)$, we have $s-c>0$ for all $s\in I$. On the other hand, if $\mbox{\boldmath$t$}(s)=-\overline{\mbox{\boldmath$n$}}(s)$, there exists a smooth function $\theta:I\to\mathbb{R}$ such that $\left(\begin{array}[]{c}\overline{\mbox{\boldmath$b$}}(s)\\\ \overline{\mbox{\boldmath$t$}}(s)\end{array}\right)=\left(\begin{array}[]{cc}\cos\theta(s)&-\sin\theta(s)\\\ \sin\theta(s)&\cos\theta(s)\end{array}\right)\left(\begin{array}[]{c}\mbox{\boldmath$b$}(s)\\\ \mbox{\boldmath$n$}(s)\end{array}\right).$ Then $|\dot{\overline{\gamma}}(s)|\sin\theta(s)=0$ and $|\dot{\overline{\gamma}}(s)|\cos\theta(s)=\lambda(s)\kappa(s)$. Since $|\dot{\overline{\gamma}}(s)|\neq 0$, we have $\sin\theta(s)=0$, $\cos\theta(s)=\pm 1$ and hence $\overline{\mbox{\boldmath$b$}}(s)=\pm\mbox{\boldmath$b$}(s)$ and $\overline{\mbox{\boldmath$t$}}(s)=\pm\mbox{\boldmath$n$}(s)$ for all $s\in I$. Furthermore, since $|\dot{\overline{\gamma}}(s)|>0$ and $\kappa(s)>0$, we have $\lambda(s)=-s+c\neq 0$ for all $s\in I$. By differentiating $\overline{\mbox{\boldmath$b$}}(s)=\pm\mbox{\boldmath$b$}(s)$, we have $-|\dot{\overline{\gamma}}(s)|\overline{\tau}(s)\overline{\mbox{\boldmath$n$}}(s)=\mp\tau(s)\mbox{\boldmath$n$}(s).$ Since $\mbox{\boldmath$t$}(s)=-\overline{\mbox{\boldmath$n$}}(s)$, we have $\tau(s)=0$ and $\overline{\tau}(s)=0$ for all $s\in I$. 
By differentiating $\overline{\mbox{\boldmath$t$}}(s)=\pm\mbox{\boldmath$n$}(s)$, we have $|\dot{\overline{\gamma}}(s)|\overline{\kappa}(s)\overline{\mbox{\boldmath$n$}}(s)=\mp\kappa(s)\mbox{\boldmath$t$}(s).$ Since $\mbox{\boldmath$t$}(s)=-\overline{\mbox{\boldmath$n$}}(s)$, we have $|\dot{\overline{\gamma}}(s)|\overline{\kappa}(s)=\pm\kappa(s)$ for all $s\in I$. Furthermore, since $|\dot{\overline{\gamma}}(s)|\overline{\kappa}(s)>0$, we have $\overline{\mbox{\boldmath$t$}}(s)=\mbox{\boldmath$n$}(s)$, $\cos\theta(s)=1$ and $\overline{\mbox{\boldmath$b$}}(s)=\mbox{\boldmath$b$}(s)$. Hence, we have $\displaystyle|\dot{\overline{\gamma}}(s)|=\lambda(s)\kappa(s),\ \kappa(s)=\frac{|\dot{\overline{\gamma}}(s)|}{\lambda(s)}=\frac{|\dot{\overline{\gamma}}(s)|}{-s+c}.$ Since $\kappa(s)>0$, if $\mbox{\boldmath$t$}(s)=-\overline{\mbox{\boldmath$n$}}(s)$, we have $-s+c>0$ for all $s\in I$. It follows that we have $-s+c\neq 0$ for all $s\in I$. Conversely, suppose that $\tau(s)=0$ and there exists a constant $c\in\mathbb{R}$ such that $-s+c\neq 0$ for all $s\in I$. Set $\overline{\gamma}(s)=\gamma(s)+(-s+c)\mbox{\boldmath$t$}(s)$. By a direct calculation, we have $\displaystyle\dot{\overline{\gamma}}(s)$ $\displaystyle=|\dot{\overline{\gamma}}(s)|\overline{\mbox{\boldmath$t$}}(s)=(-s+c)\kappa(s)\mbox{\boldmath$n$}(s),$ $\displaystyle\ddot{\overline{\gamma}}(s)$ $\displaystyle=\left((-s+c)\kappa(s)\right)^{\prime}\mbox{\boldmath$n$}(s)+(-s+c)\kappa^{2}(s)\mbox{\boldmath$t$}(s),$ $\displaystyle\dot{\overline{\gamma}}(s)$ $\displaystyle\times\ddot{\overline{\gamma}}(s)=(-s+c)^{2}\kappa^{3}(s)\mbox{\boldmath$b$}(s)\neq 0.$ It follows that $\overline{\gamma}$ is a non-degenerate curve. Since $|\dot{\overline{\gamma}}(s)|=\sqrt{(-s+c)^{2}\kappa^{2}(s)}$, we have $\displaystyle\overline{\mbox{\boldmath$t$}}(s)$ $\displaystyle=\frac{(-s+c)\kappa(s)}{\sqrt{(-s+c)^{2}\kappa^{2}(s)}}\mbox{\boldmath$n$}(s)=\frac{(-s+c)\kappa(s)}{|-s+c|\kappa(s)}\mbox{\boldmath$n$}(s)=\pm\mbox{\boldmath$n$}(s).$ By differentiating $\overline{\mbox{\boldmath$t$}}(s)=\pm\mbox{\boldmath$n$}(s)$, we have $|\dot{\overline{\gamma}}(s)|\overline{\kappa}(s)\overline{\mbox{\boldmath$n$}}(s)=\mp\kappa(s)\mbox{\boldmath$t$}(s)$. Hence, we have $\overline{\mbox{\boldmath$n$}}(s)=\pm\mbox{\boldmath$t$}(s)$. It follows that $\gamma$ is a $(\mbox{\boldmath$t$},\overline{\mbox{\boldmath$n$}})$-Bertrand type curve. $\Box$ ###### Remark 3.4 If $\gamma$ is a $(\mbox{\boldmath$t$},\overline{\mbox{\boldmath$n$}})$-Bertrand type curve, then $\overline{\gamma}$ is an involute of $\gamma$ (cf. [6, 13, 14]). Moreover, the curvature $\overline{\kappa}$ and the torsion $\overline{\tau}$ of $\overline{\gamma}$ are given by $\overline{\kappa}(s)=1/|-s+c|,\overline{\tau}(s)=\tau(s)=0$ for all $s\in I$. ###### Proposition 3.5 $\gamma$ is not a $(\mbox{\boldmath$t$},\overline{\mbox{\boldmath$b$}})$-Bertrand type curve. Proof. Suppose that $\gamma$ is a $(\mbox{\boldmath$t$},\overline{\mbox{\boldmath$b$}})$-Bertrand type curve. 
By differentiating $\overline{\gamma}(s)=\gamma(s)+\lambda(s)\mbox{\boldmath$t$}(s)$, we have $|\dot{\overline{\gamma}}(s)|\overline{\mbox{\boldmath$t$}}(s)=(1+\lambda^{\prime}(s))\mbox{\boldmath$t$}(s)+\lambda(s)\kappa(s)\mbox{\boldmath$n$}(s).$ If $\mbox{\boldmath$t$}(s)=\overline{\mbox{\boldmath$b$}}(s)$, there exists a smooth function $\theta:I\to\mathbb{R}$ such that $\left(\begin{array}[]{c}\overline{\mbox{\boldmath$t$}}(s)\\\ \overline{\mbox{\boldmath$n$}}(s)\end{array}\right)=\left(\begin{array}[]{cc}\cos\theta(s)&-\sin\theta(s)\\\ \sin\theta(s)&\cos\theta(s)\end{array}\right)\left(\begin{array}[]{c}\mbox{\boldmath$n$}(s)\\\ \mbox{\boldmath$b$}(s)\end{array}\right).$ Then $|\dot{\overline{\gamma}}(s)|\sin\theta(s)=0$ and $|\dot{\overline{\gamma}}(s)|\cos\theta(s)=\lambda(s)\kappa(s)$. Since $|\dot{\overline{\gamma}}(s)|\neq 0$, we have $\sin\theta(s)=0$, $\cos\theta(s)=\pm 1$ and hence $\overline{\mbox{\boldmath$t$}}(s)=\pm\mbox{\boldmath$n$}(s)$ for all $s\in I$. By differentiating $\overline{\mbox{\boldmath$t$}}(s)=\pm\mbox{\boldmath$n$}(s)$, we have $-|\dot{\overline{\gamma}}(s)|\overline{\kappa}(s)\overline{\mbox{\boldmath$n$}}(s)=\mp\kappa(s)\mbox{\boldmath$t$}(s)\pm\tau(s)\mbox{\boldmath$b$}(s).$ Since $\mbox{\boldmath$t$}(s)=\overline{\mbox{\boldmath$b$}}(s)$, we have $\kappa(s)=0$ for all $s\in I$. Therefore $\gamma$ is a degenerate curve, which contradicts the definition. On the other hand, if $\mbox{\boldmath$t$}(s)=-\overline{\mbox{\boldmath$b$}}(s)$, there exists a smooth function $\theta:I\to\mathbb{R}$ such that $\left(\begin{array}[]{c}\overline{\mbox{\boldmath$t$}}(s)\\\ \overline{\mbox{\boldmath$n$}}(s)\end{array}\right)=\left(\begin{array}[]{cc}\cos\theta(s)&-\sin\theta(s)\\\ \sin\theta(s)&\cos\theta(s)\end{array}\right)\left(\begin{array}[]{c}\mbox{\boldmath$b$}(s)\\\ \mbox{\boldmath$n$}(s)\end{array}\right).$ Then $|\dot{\overline{\gamma}}(s)|\cos\theta(s)=0$ and $|\dot{\overline{\gamma}}(s)|\sin\theta(s)=-\lambda(s)\kappa(s)$. Since $|\dot{\overline{\gamma}}(s)|\neq 0$, we have $\cos\theta(s)=0$, $\sin\theta(s)=\pm 1$ and hence $\overline{\mbox{\boldmath$t$}}(s)=\mp\mbox{\boldmath$n$}(s)$ for all $s\in I$. By differentiating $\overline{\mbox{\boldmath$t$}}(s)=\mp\mbox{\boldmath$n$}(s)$, we have $-|\dot{\overline{\gamma}}(s)|\overline{\kappa}(s)\overline{\mbox{\boldmath$n$}}(s)=\pm\kappa(s)\mbox{\boldmath$t$}(s)\mp\tau(s)\mbox{\boldmath$b$}(s).$ Since $\mbox{\boldmath$t$}(s)=-\overline{\mbox{\boldmath$b$}}(s)$, we have $\kappa(s)=0$ for all $s\in I$. Therefore $\gamma$ is a degenerate curve, which contradicts the definition. It follows that $\gamma$ is not a $(\mbox{\boldmath$t$},\overline{\mbox{\boldmath$b$}})$-Bertrand type curve. $\Box$ ###### Theorem 3.6 $\gamma$ is a $(\mbox{\boldmath$n$},\overline{\mbox{\boldmath$t$}})$-Bertrand type curve if and only if $\tau(s)=0$ and $\kappa^{\prime}(s)\neq 0$ for all $s\in I$. Proof. Suppose that $\gamma$ is a $(\mbox{\boldmath$n$},\overline{\mbox{\boldmath$t$}})$-Bertrand type curve. By differentiating $\overline{\gamma}(s)=\gamma(s)+\lambda(s)\mbox{\boldmath$n$}(s)$, we have $|\dot{\overline{\gamma}}(s)|\overline{\mbox{\boldmath$t$}}(s)=(1-\lambda(s)\kappa(s))\mbox{\boldmath$t$}(s)+\lambda^{\prime}(s)\mbox{\boldmath$n$}(s)+\lambda(s)\tau(s)\mbox{\boldmath$b$}(s).$ Since $\mbox{\boldmath$n$}(s)=\pm\overline{\mbox{\boldmath$t$}}(s)$, we have $|\dot{\overline{\gamma}}(s)|=\pm\lambda^{\prime}(s)$, $1-\lambda(s)\kappa(s)=0$ and $\lambda(s)\tau(s)=0$ for all $s\in I$.
Since $\kappa(s)>0$, we have $\lambda(s)=1/\kappa(s)>0$ and hence $\tau(s)=0$ for all $s\in I$. By differentiating $\lambda(s)=1/\kappa(s)$, we have $\lambda^{\prime}(s)=-{\kappa^{\prime}(s)}/{\kappa^{2}(s)}$. Since $|\dot{\overline{\gamma}}(s)|=\pm\lambda^{\prime}(s)$ and $|\dot{\overline{\gamma}}(s)|>0$, we have $|\dot{\overline{\gamma}}(s)|=|\kappa^{\prime}(s)|/\kappa^{2}(s)\neq 0$, and hence $\kappa^{\prime}(s)\neq 0$, for all $s\in I$. Conversely, suppose that $\tau(s)=0$ and $\kappa^{\prime}(s)\neq 0$ for all $s\in I$. Set $\overline{\gamma}(s)=\gamma(s)+\mbox{\boldmath$n$}(s)/\kappa(s)$. By a direct calculation, we have $\displaystyle\dot{\overline{\gamma}}(s)$ $\displaystyle=|\dot{\overline{\gamma}}(s)|\overline{\mbox{\boldmath$t$}}(s)=-\frac{\kappa^{\prime}(s)}{\kappa^{2}(s)}\mbox{\boldmath$n$}(s),\ \ddot{\overline{\gamma}}(s)=\left(-\frac{\kappa^{\prime}(s)}{\kappa^{2}(s)}\right)^{\prime}\mbox{\boldmath$n$}(s)+\frac{\kappa^{\prime}(s)}{\kappa(s)}\mbox{\boldmath$t$}(s),$ $\displaystyle\dot{\overline{\gamma}}(s)$ $\displaystyle\times\ddot{\overline{\gamma}}(s)=\frac{\kappa^{\prime}(s)^{2}}{\kappa^{3}(s)}\mbox{\boldmath$b$}(s)\neq 0.$ It follows that $\overline{\gamma}$ is a non-degenerate curve. Since $|\dot{\overline{\gamma}}(s)|=|\kappa^{\prime}(s)|/\kappa^{2}(s)$, we have $\frac{|\kappa^{\prime}(s)|}{\kappa^{2}(s)}\overline{\mbox{\boldmath$t$}}(s)=-\frac{\kappa^{\prime}(s)}{\kappa^{2}(s)}\mbox{\boldmath$n$}(s).$ Hence, we have $\overline{\mbox{\boldmath$t$}}(s)=\pm\mbox{\boldmath$n$}(s)$. It follows that $\gamma$ is a $(\mbox{\boldmath$n$},\overline{\mbox{\boldmath$t$}})$-Bertrand type curve. $\Box$ ###### Remark 3.7 If $\gamma$ is a $(\mbox{\boldmath$n$},\overline{\mbox{\boldmath$t$}})$-Bertrand type curve, then $\overline{\gamma}$ is an evolute of $\gamma$ (cf. [6, 13, 14]). Moreover, the curvature $\overline{\kappa}$ and the torsion $\overline{\tau}$ of $\overline{\gamma}$ are given by $\overline{\kappa}(s)=\kappa^{3}(s)/|\kappa^{\prime}(s)|,\overline{\tau}(s)=\tau(s)=0$ for all $s\in I$. ###### Proposition 3.8 $\gamma$ is not a $(\mbox{\boldmath$b$},\overline{\mbox{\boldmath$t$}})$-Bertrand type curve. Proof. Suppose that $\gamma$ is a $(\mbox{\boldmath$b$},\overline{\mbox{\boldmath$t$}})$-Bertrand type curve. By differentiating $\overline{\gamma}(s)=\gamma(s)+\lambda(s)\mbox{\boldmath$b$}(s)$, we have $|\dot{\overline{\gamma}}(s)|\overline{\mbox{\boldmath$t$}}(s)=\mbox{\boldmath$t$}(s)+\lambda^{\prime}(s)\mbox{\boldmath$b$}(s)-\lambda(s)\tau(s)\mbox{\boldmath$n$}(s).$ Since $\mbox{\boldmath$b$}(s)=\pm\overline{\mbox{\boldmath$t$}}(s)$, we have $|\dot{\overline{\gamma}}(s)|=\pm\lambda^{\prime}(s)$, $\mbox{\boldmath$t$}(s)-\lambda(s)\tau(s)\mbox{\boldmath$n$}(s)=0$ and hence $\mbox{\boldmath$t$}(s)=\lambda(s)\tau(s)\mbox{\boldmath$n$}(s)$ for all $s\in I$. This contradicts the orthonormality of the moving frame $\\{\mbox{\boldmath$t$}(s),\mbox{\boldmath$n$}(s),\mbox{\boldmath$b$}(s)\\}$ of $\gamma$. It follows that $\gamma$ is not a $(\mbox{\boldmath$b$},\overline{\mbox{\boldmath$t$}})$-Bertrand type curve. $\Box$ In [18], the $(\mbox{\boldmath$b$},\overline{\mbox{\boldmath$n$}})$-Bertrand type curve is investigated as the Mannheim partner. However, the non-degeneracy condition seems to have been overlooked there. ###### Theorem 3.9 $\gamma$ is a $(\mbox{\boldmath$b$},\overline{\mbox{\boldmath$n$}})$-Bertrand type curve if and only if $\tau(s)\neq 0$ and there exists a non-zero constant $A$ such that $A\tau^{\prime}(s)=\kappa(s)(A^{2}\tau^{2}(s)+1)$ for all $s\in I$. Proof. Suppose that $\gamma$ is a $(\mbox{\boldmath$b$},\overline{\mbox{\boldmath$n$}})$-Bertrand type curve.
By differentiating $\overline{\gamma}(s)=\gamma(s)+\lambda(s)\mbox{\boldmath$b$}(s)$, we have $|\dot{\overline{\gamma}}(s)|\overline{\mbox{\boldmath$t$}}(s)=\mbox{\boldmath$t$}(s)+\lambda^{\prime}(s)\mbox{\boldmath$b$}(s)-\lambda(s)\tau(s)\mbox{\boldmath$n$}(s).$ Since $\mbox{\boldmath$b$}(s)=\pm\overline{\mbox{\boldmath$n$}}(s)$, we have $\lambda^{\prime}(s)=0$ for all $s\in I$. Therefore $\lambda$ is a constant. If $\lambda=0$, then $\overline{\gamma}(s)=\gamma(s)$ for all $s\in I$. Hence, $\lambda$ is a non-zero constant. We rewrite $\lambda$ as $A$. By differentiating $\overline{\gamma}(s)=\gamma(s)+A\mbox{\boldmath$b$}(s)$, we have $|\dot{\overline{\gamma}}(s)|\overline{\mbox{\boldmath$t$}}(s)=\mbox{\boldmath$t$}(s)-A\tau(s)\mbox{\boldmath$n$}(s).$ If $\mbox{\boldmath$b$}(s)=\overline{\mbox{\boldmath$n$}}(s)$, there exists a smooth function $\theta:I\to\mathbb{R}$ such that $\left(\begin{array}[]{c}\overline{\mbox{\boldmath$b$}}(s)\\\ \overline{\mbox{\boldmath$t$}}(s)\end{array}\right)=\left(\begin{array}[]{cc}\cos\theta(s)&-\sin\theta(s)\\\ \sin\theta(s)&\cos\theta(s)\end{array}\right)\left(\begin{array}[]{c}\mbox{\boldmath$t$}(s)\\\ \mbox{\boldmath$n$}(s)\end{array}\right).$ Then $|\dot{\overline{\gamma}}(s)|\sin\theta(s)=1$ and $|\dot{\overline{\gamma}}(s)|\cos\theta(s)=-A\tau(s)$. It follows that $\sin\theta(s)>0$ and $\displaystyle\cos\theta(s)+A\tau(s)\sin\theta(s)=0$ (18) for all $s\in I$. By differentiating $\overline{\mbox{\boldmath$t$}}(s)=\sin\theta(s)\mbox{\boldmath$t$}(s)+\cos\theta(s)\mbox{\boldmath$n$}(s)$, we have $|\dot{\overline{\gamma}}(s)|\overline{\kappa}(s)\overline{\mbox{\boldmath$n$}}(s)=(\kappa(s)-\theta^{\prime}(s))\overline{\mbox{\boldmath$b$}}(s)+\tau(s)\cos\theta(s)\mbox{\boldmath$b$}(s).$ Since $\mbox{\boldmath$b$}(s)=\overline{\mbox{\boldmath$n$}}(s)$, we have $\kappa(s)-\theta^{\prime}(s)=0$ and $|\dot{\overline{\gamma}}(s)|\overline{\kappa}(s)=\tau(s)\cos\theta(s)$ for all $s\in I$. Since $|\dot{\overline{\gamma}}(s)|\overline{\kappa}(s)>0$, we have $\tau(s)\cos\theta(s)>0$ for all $s\in I$. If $\tau(s)=0$ at a point $s\in I$, then $|\dot{\overline{\gamma}}(s)|\overline{\kappa}(s)=0$. This contradicts $|\dot{\overline{\gamma}}(s)|\overline{\kappa}(s)>0$. Hence, $\tau(s)\neq 0$ for all $s\in I$. By differentiating (18), we have $\theta^{\prime}(s)(A\tau(s)\cos\theta(s)-\sin\theta(s))+A\tau^{\prime}(s)\sin\theta(s)=0.$ By (18), we have $\kappa(s)(A^{2}\tau^{2}(s)+1)=A\tau^{\prime}(s)$ for all $s\in I$. On the other hand, if $\mbox{\boldmath$b$}(s)=-\overline{\mbox{\boldmath$n$}}(s)$, there exists a smooth function $\theta:I\to\mathbb{R}$ such that $\left(\begin{array}[]{c}\overline{\mbox{\boldmath$b$}}(s)\\\ \overline{\mbox{\boldmath$t$}}(s)\end{array}\right)=\left(\begin{array}[]{cc}\cos\theta(s)&-\sin\theta(s)\\\ \sin\theta(s)&\cos\theta(s)\end{array}\right)\left(\begin{array}[]{c}\mbox{\boldmath$n$}(s)\\\ \mbox{\boldmath$t$}(s)\end{array}\right).$ Then $|\dot{\overline{\gamma}}(s)|\cos\theta(s)=1$ and $|\dot{\overline{\gamma}}(s)|\sin\theta(s)=-A\tau(s)$. It follows that $\cos\theta(s)>0$ and $\displaystyle\sin\theta(s)+A\tau(s)\cos\theta(s)=0$ (19) for all $s\in I$.
By differentiating $\overline{\mbox{\boldmath$t$}}(s)=\sin\theta(s)\mbox{\boldmath$n$}(s)+\cos\theta(s)\mbox{\boldmath$t$}(s)$, we have $|\dot{\overline{\gamma}}(s)|\overline{\kappa}(s)\overline{\mbox{\boldmath$n$}}(s)=(\kappa(s)+\theta^{\prime}(s))\overline{\mbox{\boldmath$b$}}(s)+\tau(s)\sin\theta(s)\mbox{\boldmath$b$}(s).$ Since $\mbox{\boldmath$b$}(s)=-\overline{\mbox{\boldmath$n$}}(s)$, we have $\kappa(s)+\theta^{\prime}(s)=0$ and $|\dot{\overline{\gamma}}(s)|\overline{\kappa}(s)=-\tau(s)\sin\theta(s)$ for all $s\in I$. Since $|\dot{\overline{\gamma}}(s)|\overline{\kappa}(s)>0$, we have $\tau(s)\sin\theta(s)<0$ for all $s\in I$. If $\tau(s)=0$ at a point $s\in I$, then $|\dot{\overline{\gamma}}(s)|\overline{\kappa}(s)=0$. This contradicts $|\dot{\overline{\gamma}}(s)|\overline{\kappa}(s)>0$. Hence, $\tau(s)\neq 0$ for all $s\in I$. By differentiating (19), we have $\theta^{\prime}(s)(\cos\theta(s)+A\tau(s)\sin\theta(s))+A\tau^{\prime}(s)\cos\theta(s)=0.$ By (19), we have $\kappa(s)(A^{2}\tau^{2}(s)+1)=A\tau^{\prime}(s)$ for all $s\in I$. Conversely, suppose that $\tau(s)\neq 0$ and there exists a non-zero constant $A$ such that $A\tau^{\prime}(s)=\kappa(s)(A^{2}\tau^{2}(s)+1)$ for all $s\in I$. Set $\overline{\gamma}(s)=\gamma(s)+A\mbox{\boldmath$b$}(s)$. By a direct calculation, we have $\displaystyle\dot{\overline{\gamma}}(s)$ $\displaystyle=\mbox{\boldmath$t$}(s)-A\tau(s)\mbox{\boldmath$n$}(s),$ $\displaystyle\ddot{\overline{\gamma}}(s)$ $\displaystyle=A\kappa(s)\tau(s)\mbox{\boldmath$t$}(s)+(\kappa(s)-A\tau^{\prime}(s))\mbox{\boldmath$n$}(s)-A\tau^{2}(s)\mbox{\boldmath$b$}(s),$ $\displaystyle\dot{\overline{\gamma}}(s)\times\ddot{\overline{\gamma}}(s)$ $\displaystyle=A^{2}\tau^{3}(s)\mbox{\boldmath$t$}(s)+A\tau^{2}(s)\mbox{\boldmath$n$}(s)+(\kappa(s)-A\tau^{\prime}(s)+A^{2}\kappa(s)\tau^{2}(s))\mbox{\boldmath$b$}(s)$ $\displaystyle=A\tau^{2}(s)(A\tau(s)\mbox{\boldmath$t$}(s)+\mbox{\boldmath$n$}(s))\neq 0.$ It follows that $\overline{\gamma}$ is a non-degenerate curve. Furthermore, we have $\displaystyle(\dot{\overline{\gamma}}(s)\times\ddot{\overline{\gamma}}(s))\times\dot{\overline{\gamma}}(s)=-A\tau^{2}(s)(A^{2}\tau^{2}(s)+1)\mbox{\boldmath$b$}(s).$ Since $\overline{\mbox{\boldmath$n$}}(s)=\overline{\mbox{\boldmath$b$}}(s)\times\overline{\mbox{\boldmath$t$}}(s)$, we have $\overline{\mbox{\boldmath$n$}}(s)=\frac{\dot{\overline{\gamma}}(s)\times\ddot{\overline{\gamma}}(s)}{|\dot{\overline{\gamma}}(s)\times\ddot{\overline{\gamma}}(s)|}\times\frac{\dot{\overline{\gamma}}(s)}{|\dot{\overline{\gamma}}(s)|}=-\frac{A\tau^{2}(s)(A^{2}\tau^{2}(s)+1)}{|A|\tau^{2}(s)(A^{2}\tau^{2}(s)+1)}\mbox{\boldmath$b$}(s)=\pm\mbox{\boldmath$b$}(s).$ It follows that $\gamma$ is a $(\mbox{\boldmath$b$},\overline{\mbox{\boldmath$n$}})$-Bertrand type curve. $\Box$ ###### Proposition 3.10 Let $\gamma$ and $\overline{\gamma}:I\rightarrow\mathbb{R}^{3}$ be different non-degenerate curves. Under the same assumptions as in Theorem 3.9, suppose that $\gamma$ and $\overline{\gamma}$ are $(\mbox{\boldmath$b$},\overline{\mbox{\boldmath$n$}})$-Bertrand mates with $\overline{\gamma}(s)=\gamma(s)+A\mbox{\boldmath$b$}(s)$, $\tau(s)\neq 0$ and $A\tau^{\prime}(s)=\kappa(s)(A^{2}\tau^{2}(s)+1)$ for all $s\in I$, where $A$ is a non-zero constant. Then the curvature $\overline{\kappa}$ and the torsion $\overline{\tau}$ of $\overline{\gamma}$ are given by $\overline{\kappa}(s)=\frac{|A|\tau^{2}(s)}{1+A^{2}\tau^{2}(s)},\ \overline{\tau}(s)=\frac{\tau(s)}{1+A^{2}\tau^{2}(s)}.$ Proof.
Since $\overline{\gamma}(s)=\gamma(s)+A\mbox{\boldmath$b$}(s)$, we have $\dot{\overline{\gamma}}(s)=\mbox{\boldmath$t$}(s)-A\tau(s)\mbox{\boldmath$n$}(s).$ Therefore, we have $\displaystyle\ddot{\overline{\gamma}}(s)$ $\displaystyle=A\tau(s)(\kappa(s)\mbox{\boldmath$t$}(s)-A\tau(s)\kappa(s)\mbox{\boldmath$n$}(s)-\tau(s)\mbox{\boldmath$b$}(s)),$ $\displaystyle\dddot{\overline{\gamma}}(s)$ $\displaystyle=A\left((\kappa(s)\tau(s))^{\prime}+A\kappa^{2}(s)\tau^{2}(s)\right)\mbox{\boldmath$t$}(s)$ $\displaystyle\quad+A\tau(s)\left(-2\kappa(s)\tau^{\prime}(s)+\kappa^{2}(s)-A\kappa^{\prime}(s)\tau(s)+\tau^{2}(s)\right)\mbox{\boldmath$n$}(s)$ $\displaystyle\quad-A\tau(s)\left(2\tau^{\prime}(s)+A\kappa(s)\tau^{2}(s)\right)\mbox{\boldmath$b$}(s).$ Since $\displaystyle|\dot{\overline{\gamma}}(s)|$ $\displaystyle=\sqrt{1+A^{2}\tau^{2}(s)},$ $\displaystyle\dot{\overline{\gamma}}(s)\times\ddot{\overline{\gamma}}(s)$ $\displaystyle=A\tau^{2}(s)\left(\mbox{\boldmath$n$}(s)+A\tau(s)\mbox{\boldmath$t$}(s)\right),$ $\displaystyle|\dot{\overline{\gamma}}(s)\times\ddot{\overline{\gamma}}(s)|$ $\displaystyle=|A|\tau^{2}(s)\sqrt{1+A^{2}\tau^{2}(s)},$ $\displaystyle{\rm det}(\dot{\overline{\gamma}}(s),\ddot{\overline{\gamma}}(s),\dddot{\overline{\gamma}}(s))$ $\displaystyle=A^{2}\tau^{5}(s),$ we have the curvature and the torsion as $\displaystyle\overline{\kappa}(s)=\frac{|\dot{\overline{\gamma}}(s)\times\ddot{\overline{\gamma}}(s)|}{|\dot{\overline{\gamma}}(s)|^{3}}=\frac{|A|\tau^{2}(s)}{1+A^{2}\tau^{2}(s)},\ \overline{\tau}(s)=\frac{{\rm det}(\dot{\overline{\gamma}}(s),\ddot{\overline{\gamma}}(s),\dddot{\overline{\gamma}}(s))}{|\dot{\overline{\gamma}}(s)\times\ddot{\overline{\gamma}}(s)|^{2}}=\frac{\tau(s)}{1+A^{2}\tau^{2}(s)}.$ $\Box$ ###### Theorem 3.11 $\gamma$ is a $(\mbox{\boldmath$b$},\overline{\mbox{\boldmath$b$}})$-Bertrand type curve if and only if $\tau(s)=0$ for all $s\in I$. Proof. Suppose that $\gamma$ is a $(\mbox{\boldmath$b$},\overline{\mbox{\boldmath$b$}})$-Bertrand type curve. By differentiating $\overline{\gamma}(s)=\gamma(s)+\lambda(s)\mbox{\boldmath$b$}(s)$, we have $|\dot{\overline{\gamma}}(s)|\overline{\mbox{\boldmath$t$}}(s)=\mbox{\boldmath$t$}(s)+\lambda^{\prime}(s)\mbox{\boldmath$b$}(s)-\lambda(s)\tau(s)\mbox{\boldmath$n$}(s).$ Since $\mbox{\boldmath$b$}(s)=\pm\overline{\mbox{\boldmath$b$}}(s)$, we have $\lambda^{\prime}(s)=0$ for all $s\in I$. Therefore $\lambda$ is a constant. If $\lambda(s)=0$, then $\overline{\gamma}(s)=\gamma(s)$ for all $s\in I$. Hence, $\lambda$ is a non-zero constant. We rewrite $\lambda$ as $A$. By differentiating $\overline{\gamma}(s)=\gamma(s)+A\mbox{\boldmath$b$}(s)$, we have $|\dot{\overline{\gamma}}(s)|\overline{\mbox{\boldmath$t$}}(s)=\mbox{\boldmath$t$}(s)-A\tau(s)\mbox{\boldmath$n$}(s).$ If $\mbox{\boldmath$b$}(s)=\overline{\mbox{\boldmath$b$}}(s)$, there exists a smooth function $\theta:I\to\mathbb{R}$ such that $\left(\begin{array}[]{c}\overline{\mbox{\boldmath$t$}}(s)\\\ \overline{\mbox{\boldmath$n$}}(s)\end{array}\right)=\left(\begin{array}[]{cc}\cos\theta(s)&-\sin\theta(s)\\\ \sin\theta(s)&\cos\theta(s)\end{array}\right)\left(\begin{array}[]{c}\mbox{\boldmath$t$}(s)\\\ \mbox{\boldmath$n$}(s)\end{array}\right).$ Then $|\dot{\overline{\gamma}}(s)|\cos\theta(s)=1$ and $|\dot{\overline{\gamma}}(s)|\sin\theta(s)=A\tau(s)$. It follows that $\cos\theta(s)>0$ for all $s\in I$. 
By differentiating $\overline{\mbox{\boldmath$t$}}(s)=\cos\theta(s)\mbox{\boldmath$t$}(s)-\sin\theta(s)\mbox{\boldmath$n$}(s)$, we have $|\dot{\overline{\gamma}}(s)|\overline{\kappa}(s)\overline{\mbox{\boldmath$n$}}(s)=(\kappa(s)-\theta^{\prime}(s))\overline{\mbox{\boldmath$n$}}(s)-\tau(s)\sin\theta(s)\mbox{\boldmath$b$}(s).$ Since $\mbox{\boldmath$b$}(s)=\overline{\mbox{\boldmath$b$}}(s)$, we have $\tau(s)\sin\theta(s)=0$. If $\tau(s)\neq 0$ at a point $s\in I$, then $\sin\theta(s)=0$. This contradicts $|\dot{\overline{\gamma}}(s)|\sin\theta(s)=A\tau(s)$. It follows that we have $\tau(s)=0$ for all $s\in I$. On the other hand, if $\mbox{\boldmath$b$}(s)=-\overline{\mbox{\boldmath$b$}}(s)$, there exists a smooth function $\theta:I\to\mathbb{R}$ such that $\left(\begin{array}[]{c}\overline{\mbox{\boldmath$t$}}(s)\\\ \overline{\mbox{\boldmath$n$}}(s)\end{array}\right)=\left(\begin{array}[]{cc}\cos\theta(s)&-\sin\theta(s)\\\ \sin\theta(s)&\cos\theta(s)\end{array}\right)\left(\begin{array}[]{c}\mbox{\boldmath$n$}(s)\\\ \mbox{\boldmath$t$}(s)\end{array}\right).$ Then $|\dot{\overline{\gamma}}(s)|\sin\theta(s)=-1$ and $|\dot{\overline{\gamma}}(s)|\cos\theta(s)=-A\tau(s)$. It follows that $\sin\theta(s)<0$ for all $s\in I$. By differentiating $\overline{\mbox{\boldmath$t$}}(s)=\cos\theta(s)\mbox{\boldmath$n$}(s)-\sin\theta(s)\mbox{\boldmath$t$}(s)$, we have $|\dot{\overline{\gamma}}(s)|\overline{\kappa}(s)\overline{\mbox{\boldmath$n$}}(s)=-(\kappa(s)+\theta^{\prime}(s))\overline{\mbox{\boldmath$n$}}(s)+\tau(s)\cos\theta(s)\mbox{\boldmath$b$}(s).$ Since $\mbox{\boldmath$b$}(s)=-\overline{\mbox{\boldmath$b$}}(s)$, we have $\tau(s)\cos\theta(s)=0$. If $\tau(s)\neq 0$ at a point $s\in I$, then $\cos\theta(s)=0$. This contradicts $|\dot{\overline{\gamma}}(s)|\cos\theta(s)=-A\tau(s)$. It follows that $\tau(s)=0$ for all $s\in I$ and $\cos\theta(s)=0$ and hence $\sin\theta(s)=-1$. Hence, we have $\overline{\mbox{\boldmath$t$}}(s)=\mbox{\boldmath$t$}(s)$ and $\overline{\mbox{\boldmath$n$}}(s)=-\mbox{\boldmath$n$}(s)$. By differentiating $\overline{\mbox{\boldmath$t$}}(s)=\mbox{\boldmath$t$}(s)$, we have $|\dot{\overline{\gamma}}(s)|\overline{\kappa}(s)\overline{\mbox{\boldmath$n$}}(s)=\kappa(s)\mbox{\boldmath$n$}(s).$ Since $|\dot{\overline{\gamma}}(s)|>0$, $\overline{\kappa}(s)>0$ and $\kappa(s)>0$, we have $\overline{\mbox{\boldmath$n$}}(s)=\mbox{\boldmath$n$}(s)$. This contradicts $\overline{\mbox{\boldmath$n$}}(s)=-\mbox{\boldmath$n$}(s)$. Hence, this case does not occur. Conversely, suppose that $\tau(s)=0$ for all $s\in I$. Set $\overline{\gamma}(s)=\gamma(s)+A\mbox{\boldmath$b$}(s)$, where $A$ is a non-zero constant. By a direct calculation, we have $\displaystyle\dot{\overline{\gamma}}(s)=\mbox{\boldmath$t$}(s),\ \ddot{\overline{\gamma}}(s)=\kappa(s)\mbox{\boldmath$n$}(s),\ \dot{\overline{\gamma}}(s)\times\ddot{\overline{\gamma}}(s)=\kappa(s)\mbox{\boldmath$b$}(s)\neq 0.$ It follows that $\overline{\gamma}$ is a non-degenerate curve. Since $\overline{\mbox{\boldmath$b$}}(s)=\dot{\overline{\gamma}}(s)\times\ddot{\overline{\gamma}}(s)/|\dot{\overline{\gamma}}(s)\times\ddot{\overline{\gamma}}(s)|$, we have $\overline{\mbox{\boldmath$b$}}(s)=\frac{\dot{\overline{\gamma}}(s)\times\ddot{\overline{\gamma}}(s)}{|\dot{\overline{\gamma}}(s)\times\ddot{\overline{\gamma}}(s)|}=\frac{\kappa(s)}{\kappa(s)}\mbox{\boldmath$b$}(s)=\mbox{\boldmath$b$}(s).$ It follows that $\gamma$ is a $(\mbox{\boldmath$b$},\overline{\mbox{\boldmath$b$}})$-Bertrand type curve.
$\Box$ ###### Remark 3.12 If $\gamma$ is a $(\mbox{\boldmath$b$},\overline{\mbox{\boldmath$b$}})$-Bertrand type curve, then $\overline{\gamma}$ is a parallel translation of $\gamma$. Moreover, the curvature $\overline{\kappa}$ and the torsion $\overline{\tau}$ of $\overline{\gamma}$ are given by $\overline{\kappa}(s)=\kappa(s),\overline{\tau}(s)=\tau(s)=0$ for all $s\in I$. ## 4 Bertrand framed curves Let $(\gamma,\nu_{1},\nu_{2})$ and $(\overline{\gamma},\overline{\nu}_{1},\overline{\nu}_{2}):I\to\mathbb{R}^{3}\times\Delta$ be framed curves. ###### Definition 4.1 We say that $(\gamma,\nu_{1},\nu_{2})$ and $(\overline{\gamma},\overline{\nu}_{1},\overline{\nu}_{2})$ are $(\mbox{\boldmath$v$},\overline{\mbox{\boldmath$w$}})$-mates if there exists a smooth function $\lambda:I\to\mathbb{R}$ with $\lambda\not\equiv 0$ such that $\overline{\gamma}(t)=\gamma(t)+\lambda(t)\mbox{\boldmath$v$}(t)$ and $\mbox{\boldmath$v$}(t)=\overline{\mbox{\boldmath$w$}}(t)$ for all $t\in I$, where $v$ and $w$ are $\nu_{1},\nu_{2}$ or $\mu$. We also say that $(\gamma,\nu_{1},\nu_{2})$ is a $(\mbox{\boldmath$v$},\overline{\mbox{\boldmath$w$}})$-Bertrand framed curve (or, $(\mbox{\boldmath$v$},\overline{\mbox{\boldmath$w$}})$-Bertrand-Mannheim framed curve) if there exists another framed curve $(\overline{\gamma},\overline{\nu}_{1},\overline{\nu}_{2})$ such that $(\gamma,\nu_{1},\nu_{2})$ and $(\overline{\gamma},\overline{\nu}_{1},\overline{\nu}_{2})$ are $(\mbox{\boldmath$v$},\overline{\mbox{\boldmath$w$}})$-mates. The slight difference in the sign of $\overline{\mbox{\boldmath$w$}}$ between Definitions 3.1 and 4.1 comes from the flexibility of framed curves. That is, if $(\gamma,\nu_{1},\nu_{2})$ is a framed curve, then $(\gamma,-\nu_{1},\nu_{2})$ and $(\gamma,\nu_{1},-\nu_{2})$ are also framed curves. Therefore, we may consider $\overline{\mbox{\boldmath$w$}}$ up to sign. Let $(\gamma,\nu_{1},\nu_{2}):I\to\mathbb{R}^{3}\times\Delta$ be a framed curve with curvature $(\ell,m,n,\alpha)$. We give all characterizations of Bertrand framed curves. ###### Theorem 4.2 $(\gamma,\nu_{1},\nu_{2}):I\to\mathbb{R}^{3}\times\Delta$ is a $(\mu,\overline{\mu})$-Bertrand framed curve if and only if $m(t)=n(t)=0$ for all $t\in I$. Proof. Suppose that $(\gamma,\nu_{1},\nu_{2}):I\to\mathbb{R}^{3}\times\Delta$ is a $(\mu,\overline{\mu})$-Bertrand framed curve. Then there exist a framed curve $(\overline{\gamma},\overline{\nu}_{1},\overline{\nu}_{2})$ and a smooth function $\lambda:I\to\mathbb{R}$ with $\lambda\not\equiv 0$ such that $\overline{\gamma}(t)=\gamma(t)+\lambda(t)\mu(t)$ and $\mu(t)=\overline{\mu}(t)$ for all $t\in I$. By differentiating, we have $\overline{\alpha}(t)\overline{\mu}(t)=(\alpha(t)+\dot{\lambda}(t))\mu(t)-\lambda(t)m(t)\nu_{1}(t)-\lambda(t)n(t)\nu_{2}(t)$ for all $t\in I$. Since $\mu(t)=\overline{\mu}(t)$, we have $\overline{\alpha}(t)=\alpha(t)+\dot{\lambda}(t),\lambda(t)m(t)=0$ and $\lambda(t)n(t)=0$ for all $t\in I$. Since $\lambda\not\equiv 0$, by continuity we have $m(t)=n(t)=0$ for all $t\in I$. Conversely, suppose that $(\gamma,\nu_{1},\nu_{2})$ is a framed curve with $m(t)=n(t)=0$ for all $t\in I$. Then $\mu$ is a constant vector $\mbox{\boldmath$v$}$. Consider a smooth map $(\overline{\gamma},\nu_{1},\nu_{2}):I\to\mathbb{R}^{3}\times\Delta$ with $\overline{\gamma}(t)=\gamma(t)+\lambda(t)\mbox{\boldmath$v$}$.
Since $\dot{\overline{\gamma}}(t)=(\alpha(t)+\dot{\lambda}(t))\mbox{\boldmath$v$}$, we have $\dot{\overline{\gamma}}(t)\cdot\nu_{1}(t)=\dot{\overline{\gamma}}(t)\cdot\nu_{2}(t)=0$ for all $t\in I$. It follows that $(\overline{\gamma},\nu_{1},\nu_{2})$ is a framed curve and $\overline{\mu}(t)=\mu(t)=\mbox{\boldmath$v$}$. Therefore, $(\gamma,\nu_{1},\nu_{2}):I\to\mathbb{R}^{3}\times\Delta$ is a $(\mu,\overline{\mu})$-Bertrand framed curve. $\Box$ ###### Remark 4.3 If $(\gamma,\nu_{1},\nu_{2}):I\to\mathbb{R}^{3}\times\Delta$ is a $(\mu,\overline{\mu})$-Bertrand framed curve, then $\mu$ is a constant vector $\mbox{\boldmath$v$}$ and $\gamma(t)=(\int\alpha(t)dt)\mbox{\boldmath$v$}+\mbox{\boldmath$c$}$, where $\mbox{\boldmath$c$}\in\mathbb{R}^{3}$ is a constant vector. Therefore, $\gamma$ is a part of a line. ###### Proposition 4.4 Suppose that $(\gamma,\nu_{1},\nu_{2})$ and $(\overline{\gamma},\overline{\nu}_{1},\overline{\nu}_{2}):I\to\mathbb{R}^{3}\times\Delta$ are $(\mu,\overline{\mu})$-mates, where $\overline{\gamma}(t)=\gamma(t)+\lambda(t)\mu(t),\overline{\nu}_{1}(t)=\cos\theta(t){\nu_{1}}(t)-\sin\theta(t){\nu_{2}}(t),\overline{\nu}_{2}(t)=\sin\theta(t){\nu_{1}}(t)+\cos\theta(t){\nu_{2}}(t)$ and $\theta:I\rightarrow\mathbb{R}$ is a smooth function. Then the curvature $(\overline{\ell},\overline{m},\overline{n},\overline{\alpha})$ of $(\overline{\gamma},\overline{\nu}_{1},\overline{\nu}_{2})$ is given by $\overline{\ell}(t)=\ell(t)-\dot{\theta}(t),\overline{m}(t)=0,\overline{n}(t)=0,\overline{\alpha}(t)=\alpha(t)+\dot{\lambda}(t).$ Proof. By differentiating $\overline{\nu}_{1}(t)=\cos\theta(t){\nu_{1}}(t)-\sin\theta(t){\nu_{2}}(t)$, we have $\overline{\ell}(t)\overline{\nu}_{2}(t)+\overline{m}(t)\overline{\mu}(t)=(\ell(t)-\dot{\theta}(t))\overline{\nu}_{2}(t).$ Then we have $\overline{\ell}(t)=\ell(t)-\dot{\theta}(t)$. Since $\mu(t)=\overline{\mu}(t)$, we have $\overline{m}(t)=0$. Moreover, by differentiating $\overline{\nu}_{2}(t)=\sin\theta(t){\nu_{1}}(t)+\cos\theta(t){\nu_{2}}(t)$, we have $-\overline{\ell}(t)\overline{\nu}_{1}(t)+\overline{n}(t)\overline{\mu}(t)=(\dot{\theta}(t)-\ell(t))\overline{\nu}_{1}(t).$ Since $\mu(t)=\overline{\mu}(t)$, we have $\overline{n}(t)=0$. By $\overline{\alpha}(t)\overline{\mu}(t)=(\alpha(t)+\dot{\lambda}(t))\mu(t)$ and $\mu(t)=\overline{\mu}(t)$, we have $\overline{\alpha}(t)=\alpha(t)+\dot{\lambda}(t)$. $\Box$ ###### Theorem 4.5 $(\gamma,\nu_{1},\nu_{2}):I\to\mathbb{R}^{3}\times\Delta$ is a $(\mu,\overline{\nu}_{1})$-Bertrand framed curve if and only if there exists a smooth function $\theta:I\to\mathbb{R}$ such that $m(t)\cos\theta(t)-n(t)\sin\theta(t)=0$ for all $t\in I$. Proof. Suppose that $(\gamma,\nu_{1},\nu_{2}):I\to\mathbb{R}^{3}\times\Delta$ is a $(\mu,\overline{\nu}_{1})$-Bertrand framed curve. Then there exists a framed curve $(\overline{\gamma},\overline{\nu}_{1},\overline{\nu}_{2})$ and a smooth function $\lambda:I\to\mathbb{R}$ with $\lambda\not\equiv 0$ such that $\overline{\gamma}(t)=\gamma(t)+\lambda(t)\mu(t)$ and $\mu(t)=\overline{\nu}_{1}(t)$ for all $t\in I$. By differentiating, we have $\overline{\alpha}(t)\overline{\mu}(t)=(\alpha(t)+\dot{\lambda}(t))\mu(t)-\lambda(t)m(t)\nu_{1}(t)-\lambda(t)n(t)\nu_{2}(t)=0$ for all $t\in I$. Since $\mu(t)=\overline{\nu}_{1}(t)$, we have $\alpha(t)+\dot{\lambda}(t)=0$ for all $t\in I$. 
Moreover, there exists a smooth function $\theta:I\to\mathbb{R}$ such that $\begin{pmatrix}\overline{\nu}_{2}(t)\\\ \overline{\mu}(t)\end{pmatrix}=\begin{pmatrix}\cos\theta(t)&-\sin\theta(t)\\\ \sin\theta(t)&\cos\theta(t)\end{pmatrix}\begin{pmatrix}{\nu}_{1}(t)\\\ {\nu}_{2}(t)\end{pmatrix}.$ Then we have $\overline{\alpha}(t)\sin\theta(t)=-\lambda(t)m(t)$ and $\overline{\alpha}(t)\cos\theta(t)=-\lambda(t)n(t)$. It follows that $\lambda(t)(m(t)\cos\theta(t)-n(t)\sin\theta(t))=0$ for all $t\in I$. Since $\lambda\not\equiv 0$ and by continuity, we have $m(t)\cos\theta(t)-n(t)\sin\theta(t)=0$ for all $t\in I$. Conversely, suppose that there exists a smooth function $\theta:I\to\mathbb{R}$ such that $m(t)\cos\theta(t)-n(t)\sin\theta(t)=0$ for all $t\in I$. Let $(\overline{\gamma},\overline{\nu}_{1},\overline{\nu}_{2}):I\to\mathbb{R}^{3}\times\Delta$ be $\overline{\gamma}(t)=\gamma(t)-\left(\int\alpha(t)dt\right)\mu(t),\ \overline{\nu}_{1}(t)=\mu(t),\ \overline{\nu}_{2}(t)=\cos\theta(t)\nu_{1}(t)-\sin\theta(t)\nu_{2}(t).$ Since $\dot{\overline{\gamma}}(t)=(\int\alpha(t)dt)(m(t)\nu_{1}(t)+n(t)\nu_{2}(t))$, we have $\dot{\overline{\gamma}}(t)\cdot\overline{\nu}_{1}(t)=\dot{\overline{\gamma}}(t)\cdot\overline{\nu}_{2}(t)=0$ for all $t\in I$. It follows that $(\overline{\gamma},\overline{\nu}_{1},\overline{\nu}_{2})$ is a framed curve. Therefore, $(\gamma,\nu_{1},\nu_{2})$ is a $(\mu,\overline{\nu}_{1})$-Bertrand framed curve. $\Box$ ###### Remark 4.6 If $(\gamma,\nu_{1},\nu_{2})$ is a $(\mu,\overline{\nu}_{1})$-Bertrand framed curve, then $\overline{\gamma}$ is an involute of the framed curve $(\gamma,\nu_{1},\nu_{2})$ (cf. [12]). ###### Proposition 4.7 Suppose that $(\gamma,\nu_{1},\nu_{2})$ and $(\overline{\gamma},\overline{\nu}_{1},\overline{\nu}_{2}):I\to\mathbb{R}^{3}\times\Delta$ are $(\mu,\overline{\nu}_{1})$-mates, where $\overline{\gamma}(t)=\gamma(t)+\lambda(t)\mu(t),\overline{\nu}_{2}(t)=\cos\theta(t){\nu_{1}}(t)-\sin\theta(t){\nu_{2}}(t)$ and $\overline{\mu}(t)=\sin\theta(t){\nu_{1}}(t)+\cos\theta(t){\nu_{2}}(t)$ and $\theta:I\to\mathbb{R}$ is a smooth function. Then the curvature $(\overline{\ell},\overline{m},\overline{n},\overline{\alpha})$ of $(\overline{\gamma},\overline{\nu}_{1},\overline{\nu}_{2})$ is given by $\displaystyle\overline{\ell}(t)$ $\displaystyle=-m(t)\cos\theta(t)+n(t)\sin\theta(t)=0,$ $\displaystyle\overline{m}(t)$ $\displaystyle=-m(t)\sin\theta(t)-n(t)\cos\theta(t),$ $\displaystyle\overline{n}(t)$ $\displaystyle=\ell(t)-\dot{\theta}(t),$ $\displaystyle\overline{\alpha}(t)$ $\displaystyle=-\lambda(t)(m(t)\sin\theta(t)+n(t)\cos\theta(t)).$ Proof. By differentiating $\overline{\nu}_{2}(t)=\cos\theta(t){\nu_{1}}(t)-\sin\theta(t){\nu_{2}}(t)$, we have $\displaystyle-\overline{\ell}(t)\overline{\nu}_{1}(t)+\overline{n}(t)\overline{\mu}(t)$ $\displaystyle=(\ell(t)-\dot{\theta}(t))\overline{\mu}(t)+(m(t)\cos\theta(t)-n(t)\sin\theta(t))\mu(t).$ Then we have $\overline{n}(t)=\ell(t)-\dot{\theta}(t)$. Since $\mu(t)=\overline{\nu}_{1}(t)$, we have $\overline{\ell}(t)=-m(t)\cos\theta(t)+n(t)\sin\theta(t)=0$. Moreover, by differentiating $\overline{\mu}(t)=\sin\theta(t){\nu_{1}}(t)+\cos\theta(t){\nu_{2}}(t)$, we have $\displaystyle-\overline{m}(t)\overline{\nu}_{1}(t)-\overline{n}(t)\overline{\nu}_{2}(t)$ $\displaystyle=(\dot{\theta}(t)-\ell(t))\overline{\nu}_{2}(t)+(m(t)\sin\theta(t)+n(t)\cos\theta(t))\mu(t).$ Since $\mu(t)=\overline{\nu}_{1}(t)$, we have $\overline{m}(t)=-m(t)\sin\theta(t)-n(t)\cos\theta(t)$. 
By $\overline{\alpha}(t)\sin\theta(t)=-\lambda(t)m(t)$ and $\overline{\alpha}(t)\cos\theta(t)=-\lambda(t)n(t)$, we have $\overline{\alpha}(t)=-\lambda(t)(m(t)\sin\theta(t)+n(t)\cos\theta(t))$. $\Box$ ###### Theorem 4.8 $(\gamma,\nu_{1},\nu_{2}):I\to\mathbb{R}^{3}\times\Delta$ is a $(\mu,\overline{\nu}_{2})$-Bertrand framed curve if and only if $(\gamma,\nu_{1},\nu_{2}):I\to\mathbb{R}^{3}\times\Delta$ is a $(\mu,\overline{\nu}_{1})$-Bertrand framed curve. Proof. If $(\gamma,\nu_{1},\nu_{2}):I\to\mathbb{R}^{3}\times\Delta$ is a $(\mu,\overline{\nu}_{2})$-Bertrand framed curve, then there exists a framed curve $(\overline{\gamma},\overline{\nu}_{1},\overline{\nu}_{2})$ and a smooth function $\lambda:I\to\mathbb{R}$ with $\lambda\not\equiv 0$ such that $\overline{\gamma}(t)=\gamma(t)+\lambda(t)\mu(t)$ and $\mu(t)=\overline{\nu}_{2}(t)$ for all $t\in I$. Since $(\overline{\gamma},\overline{\nu}_{2},\overline{\nu}_{1})$ is also a framed curve, $(\gamma,\nu_{1},\nu_{2})$ is a $(\mu,\overline{\nu}_{1})$-Bertrand framed curve and vice versa (see Theorem 2.17 and Remark 2.18). $\Box$ ###### Remark 4.9 Let $(\gamma,\nu_{1},\nu_{2})$ and $(\overline{\gamma},\overline{\nu}_{1},\overline{\nu}_{2})$ be $(\mu,\overline{\nu}_{1})$-mates. The curvature of $(\overline{\gamma},\overline{\nu}_{1},\overline{\nu}_{2})$ is given by $(\overline{\ell},\overline{m},\overline{n},\overline{\alpha})$. Then $(\gamma,\nu_{1},\nu_{2})$ and $(\overline{\gamma},\overline{\nu}_{2},\overline{\nu}_{1})$ are $(\mu,\overline{\nu}_{2})$-mates and the curvature of $(\overline{\gamma},\overline{\nu}_{2},\overline{\nu}_{1})$ is given by $(0,-\overline{n},-\overline{m},-\overline{\alpha})$ by Propositions 2.11 and 4.7. ###### Theorem 4.10 $(\gamma,\nu_{1},\nu_{2}):I\to\mathbb{R}^{3}\times\Delta$ is a $(\nu_{1},\overline{\mu})$-Bertrand framed curve if and only if $\ell(t)=0$ and there exists a smooth function $\lambda:I\rightarrow\mathbb{R}$ such that $\alpha(t)+\lambda(t)m(t)=0$ for all $t\in I$. Proof. Suppose that $(\gamma,\nu_{1},\nu_{2}):I\to\mathbb{R}^{3}\times\Delta$ is a $(\nu_{1},\overline{\mu})$-Bertrand framed curve. Then there exists a framed curve $(\overline{\gamma},\overline{\nu}_{1},\overline{\nu}_{2})$ and a smooth function $\lambda:I\to\mathbb{R}$ with $\lambda\not\equiv 0$ such that $\overline{\gamma}(t)=\gamma(t)+\lambda(t)\nu_{1}(t)$ and $\nu_{1}(t)=\overline{\mu}(t)$ for all $t\in I$. By differentiating, we have $\overline{\alpha}(t)\overline{\mu}(t)=(\alpha(t)+\lambda(t)m(t))\mu(t)+\dot{\lambda}(t)\nu_{1}(t)+\lambda(t)\ell(t)\nu_{2}(t)=0$ for all $t\in I$. Since $\nu_{1}(t)=\overline{\mu}(t)$, we have $\alpha(t)+\lambda(t)m(t)=0$ and $\lambda(t)\ell(t)=0$ for all $t\in I$. Since $\lambda\not\equiv 0$, we have $\ell(t)=0$ for all $t\in I$. Conversely, suppose that $\ell(t)=0$ and there exists a smooth function $\lambda:I\rightarrow\mathbb{R}$ such that $\alpha(t)+\lambda(t)m(t)=0$ for all $t\in I$. Let $(\overline{\gamma},\overline{\nu}_{1},\overline{\nu}_{2}):I\to\mathbb{R}^{3}\times\Delta$ be $\overline{\gamma}(t)=\gamma(t)+\lambda(t)\nu_{1}(t),\ \overline{\nu}_{1}(t)=\nu_{2}(t),\ \overline{\nu}_{2}(t)=\mu(t).$ Since $\dot{\overline{\gamma}}(t)=\dot{\lambda}(t)\nu_{1}(t)$, we have $\dot{\overline{\gamma}}(t)\cdot\overline{\nu}_{1}(t)=\dot{\overline{\gamma}}(t)\cdot\overline{\nu}_{2}(t)=0$ for all $t\in I$. It follows that $(\overline{\gamma},\overline{\nu}_{1},\overline{\nu}_{2})$ is a framed curve. 
Moreover, since $\overline{\mu}(t)=\overline{\nu}_{1}(t)\times\overline{\nu}_{2}(t)$, we have $\overline{\mu}(t)={\nu}_{1}(t)$ for all $t\in I$. Therefore, $(\gamma,\nu_{1},\nu_{2})$ is a $(\nu_{1},\overline{\mu})$-Bertrand framed curve. $\Box$ ###### Remark 4.11 If $(\gamma,\nu_{1},\nu_{2})$ is a $(\nu_{1},\overline{\mu})$-Bertrand framed curve and $m(t)\not=0$ for all $t\in I$, then $\overline{\gamma}$ is a circular evolute with respect to $\nu_{1}$ of the framed curve $(\gamma,\nu_{1},\nu_{2})$ (cf. [12]). ###### Proposition 4.12 Suppose that $(\gamma,\nu_{1},\nu_{2})$ and $(\overline{\gamma},\overline{\nu}_{1},\overline{\nu}_{2}):I\to\mathbb{R}^{3}\times\Delta$ are $(\nu_{1},\overline{\mu})$-mates, where $\overline{\gamma}(t)=\gamma(t)+\lambda(t)\nu_{1}(t),\overline{\nu}_{1}(t)=\cos\theta(t){\nu_{2}}(t)-\sin\theta(t){\mu}(t),\overline{\nu}_{2}(t)=\sin\theta(t){\nu_{2}}(t)+\cos\theta(t){\mu}(t)$ and $\theta:I\to\mathbb{R}$ is a smooth function. Then the curvature $(\overline{\ell},\overline{m},\overline{n},\overline{\alpha})$ of $(\overline{\gamma},\overline{\nu}_{1},\overline{\nu}_{2})$ is given by $\displaystyle\overline{\ell}(t)=n(t)-\dot{\theta}(t),\ \overline{m}(t)=m(t)\sin\theta(t),\ \overline{n}(t)=-m(t)\cos\theta(t),\ \overline{\alpha}(t)=\dot{\lambda}(t).$ Proof. By differentiating $\overline{\nu}_{1}(t)=\cos\theta(t){\nu_{2}}(t)-\sin\theta(t){\mu}(t)$, we have $\displaystyle\overline{\ell}(t)\overline{\nu}_{2}(t)+\overline{m}(t)\overline{\mu}(t)$ $\displaystyle=(n(t)-\dot{\theta}(t))\overline{\nu}_{2}(t)+m(t)\sin\theta(t)\nu_{1}(t).$ Then we have $\overline{\ell}(t)=n(t)-\dot{\theta}(t)$. Since $\nu_{1}(t)=\overline{\mu}(t)$, we have $\overline{m}(t)=m(t)\sin\theta(t)$. Moreover, by differentiating $\overline{\nu}_{2}(t)=\sin\theta(t){\nu_{2}}(t)+\cos\theta(t){\mu}(t)$, we have $\displaystyle-\overline{\ell}(t)\overline{\nu}_{1}(t)+\overline{n}(t)\overline{\mu}(t)$ $\displaystyle=(\dot{\theta}(t)-n(t))\overline{\nu}_{1}(t)-m(t)\cos\theta(t)\nu_{1}(t).$ Since $\nu_{1}(t)=\overline{\mu}(t)$, we have $\overline{n}(t)=-m(t)\cos\theta(t)$. By $\overline{\alpha}(t)\overline{\mu}(t)=(\alpha(t)+\lambda(t)m(t))\mu(t)+\dot{\lambda}(t)\nu_{1}(t)+\lambda(t)\ell(t)\nu_{2}(t)=0$ and $\nu_{1}(t)=\overline{\mu}(t)$, we have $\overline{\alpha}(t)=\dot{\lambda}(t)$. $\Box$ ###### Theorem 4.13 $(\gamma,\nu_{1},\nu_{2}):I\to\mathbb{R}^{3}\times\Delta$ is a $(\nu_{2},\overline{\nu}_{1})$-Bertrand framed curve if and only if there exists a non-zero constant $\lambda$ and a smooth function $\theta:I\to\mathbb{R}$ such that $\displaystyle\lambda\ell(t)\sin\theta(t)+(\alpha(t)+\lambda n(t))\cos\theta(t)=0$ for all $t\in I$. Proof. Suppose that $(\gamma,\nu_{1},\nu_{2}):I\to\mathbb{R}^{3}\times\Delta$ is a $(\nu_{2},\overline{\nu}_{1})$-Bertrand framed curve. Then there exists a framed curve $(\overline{\gamma},\overline{\nu}_{1},\overline{\nu}_{2})$ and a smooth function $\lambda:I\to\mathbb{R}$ with $\lambda\not\equiv 0$ such that $\overline{\gamma}(t)=\gamma(t)+\lambda(t)\nu_{2}(t)$ and $\nu_{2}(t)=\overline{\nu}_{1}(t)$ for all $t\in I$. By differentiating, we have $\overline{\alpha}(t)\overline{\mu}(t)=(\alpha(t)+\lambda(t)n(t))\mu(t)-\lambda(t)\ell(t)\nu_{1}(t)+\dot{\lambda}(t)\nu_{2}(t)=0$ for all $t\in I$. Since $\nu_{2}(t)=\overline{\nu}_{1}(t)$, we have $\dot{\lambda}(t)=0$ for all $t\in I$. Therefore $\lambda$ is a constant. If $\lambda=0$, then $\overline{\gamma}(t)=\gamma(t)$ for all $t\in I$. Hence, $\lambda$ is a non-zero constant. 
Moreover, there exists a smooth function $\theta:I\to\mathbb{R}$ such that $\begin{pmatrix}\overline{\nu}_{2}(t)\\\ \overline{\mu}(t)\end{pmatrix}=\begin{pmatrix}\cos\theta(t)&-\sin\theta(t)\\\ \sin\theta(t)&\cos\theta(t)\end{pmatrix}\begin{pmatrix}{\nu}_{1}(t)\\\ {\nu}_{2}(t)\end{pmatrix}.$ Then we have $\overline{\alpha}(t)\sin\theta(t)=\alpha(t)+\lambda n(t)$ and $\overline{\alpha}(t)\cos\theta(t)=-\lambda\ell(t)$. It follows that $\lambda\ell(t)\sin\theta(t)+(\alpha(t)+\lambda n(t))\cos\theta(t)=0$ for all $t\in I$. Conversely, suppose that there exists a non-zero constant $\lambda$ and a smooth function $\theta:I\to\mathbb{R}$ such that $\lambda\ell(t)\sin\theta(t)+(\alpha(t)+\lambda n(t))\cos\theta(t)=0$ for all $t\in I$. Let $(\overline{\gamma},\overline{\nu}_{1},\overline{\nu}_{2}):I\to\mathbb{R}^{3}\times\Delta$ be $\overline{\gamma}(t)=\gamma(t)+\lambda\nu_{2}(t),\ \overline{\nu}_{1}(t)=\nu_{2}(t),\ \overline{\nu}_{2}(t)=\cos\theta(t)\mu(t)-\sin\theta(t)\nu_{1}(t).$ Since $\dot{\overline{\gamma}}(t)=(\alpha(t)+\lambda n(t))\mu(t)-\lambda\ell(t)\nu_{1}(t)$, we have $\dot{\overline{\gamma}}(t)\cdot\overline{\nu}_{1}(t)=\dot{\overline{\gamma}}(t)\cdot\overline{\nu}_{2}(t)=0$ for all $t\in I$. It follows that $(\overline{\gamma},\overline{\nu}_{1},\overline{\nu}_{2})$ is a framed curve. Therefore, $(\gamma,\nu_{1},\nu_{2})$ is a $(\nu_{2},\overline{\nu}_{1})$-Bertrand framed curve. $\Box$ ###### Proposition 4.14 Suppose that $(\gamma,\nu_{1},\nu_{2})$ and $(\overline{\gamma},\overline{\nu}_{1},\overline{\nu}_{2}):I\to\mathbb{R}^{3}\times\Delta$ are $(\nu_{2},\overline{\nu}_{1})$-mates, where $\overline{\gamma}(t)=\gamma(t)+\lambda\nu_{2}(t),\overline{\nu}_{2}(t)=\cos\theta(t){\mu}(t)-\sin\theta(t){\nu_{1}}(t),\overline{\mu}(t)=\sin\theta(t){\mu}(t)+\cos\theta(t){\nu_{1}}(t)$ and $\theta:I\to\mathbb{R}$ is a smooth function. Then the curvature $(\overline{\ell},\overline{m},\overline{n},\overline{\alpha})$ of $(\overline{\gamma},\overline{\nu}_{1},\overline{\nu}_{2})$ is given by $\displaystyle\overline{\ell}(t)$ $\displaystyle=\ell(t)\sin\theta(t)+n(t)\cos\theta(t),$ $\displaystyle\overline{m}(t)$ $\displaystyle=-\ell(t)\cos\theta(t)+n(t)\sin\theta(t),$ $\displaystyle\overline{n}(t)$ $\displaystyle=-\dot{\theta}(t)-m(t),$ $\displaystyle\overline{\alpha}(t)$ $\displaystyle=(\alpha(t)+\lambda n(t))\sin\theta(t)-\lambda\ell(t)\cos\theta(t).$ Proof. By differentiating $\overline{\nu}_{2}(t)=\cos\theta(t){\mu}(t)-\sin\theta(t){\nu_{1}}(t)$, we have $\displaystyle-\overline{\ell}(t)\overline{\nu}_{1}(t)+\overline{n}(t)\overline{\mu}(t)$ $\displaystyle=(-\dot{\theta}(t)-m(t))\overline{\mu}(t)+(-n(t)\cos\theta(t)-\ell(t)\sin\theta(t))\nu_{2}(t).$ Then we have $\overline{n}(t)=-\dot{\theta}(t)-m(t)$. Since $\nu_{2}(t)=\overline{\nu}_{1}(t)$, we have $\overline{\ell}(t)=\ell(t)\sin\theta(t)+n(t)\cos\theta(t)$. Moreover, by differentiating $\overline{\mu}(t)=\sin\theta(t){\mu}(t)+\cos\theta(t){\nu_{1}}(t)$, we have $\displaystyle-\overline{m}(t)\overline{\nu}_{1}(t)-\overline{n}(t)\overline{\nu}_{2}(t)$ $\displaystyle=(\dot{\theta}(t)+m(t))\overline{\nu}_{2}(t)+(-n(t)\sin\theta(t)+\ell(t)\cos\theta(t))\nu_{2}(t).$ Since $\nu_{2}(t)=\overline{\nu}_{1}(t)$, we have $\overline{m}(t)=-\ell(t)\cos\theta(t)+n(t)\sin\theta(t)$. By $\overline{\alpha}(t)\sin\theta(t)=\alpha(t)+\lambda n(t)$ and $\overline{\alpha}(t)\cos\theta(t)=-\lambda\ell(t)$, we have $\overline{\alpha}(t)=(\alpha(t)+\lambda n(t))\sin\theta(t)-\lambda\ell(t)\cos\theta(t)$. 
$\Box$ ###### Theorem 4.15 $(\gamma,\nu_{1},\nu_{2}):I\to\mathbb{R}^{3}\times\Delta$ is a $(\nu_{2},\overline{\nu}_{2})$-Bertrand framed curve if and only if $(\gamma,\nu_{1},\nu_{2}):I\to\mathbb{R}^{3}\times\Delta$ is a $(\nu_{2},\overline{\nu}_{1})$-Bertrand framed curve. Proof. If $(\gamma,\nu_{1},\nu_{2}):I\to\mathbb{R}^{3}\times\Delta$ is a $(\nu_{2},\overline{\nu}_{2})$-Bertrand framed curve, then there exists a framed curve $(\overline{\gamma},\overline{\nu}_{1},\overline{\nu}_{2})$ and a smooth function $\lambda:I\to\mathbb{R}$ with $\lambda\not\equiv 0$ such that $\overline{\gamma}(t)=\gamma(t)+\lambda(t)\nu_{2}(t)$ and $\nu_{2}(t)=\overline{\nu}_{2}(t)$ for all $t\in I$. Since $(\overline{\gamma},\overline{\nu}_{2},\overline{\nu}_{1})$ is also a framed curve, $(\gamma,\nu_{1},\nu_{2})$ is a $(\nu_{2},\overline{\nu}_{1})$-Bertrand framed curve and vice versa (see Theorem 2.17 and Remark 2.18). $\Box$ ###### Remark 4.16 Let $(\gamma,\nu_{1},\nu_{2})$ and $(\overline{\gamma},\overline{\nu}_{1},\overline{\nu}_{2})$ be $(\nu_{2},\overline{\nu}_{1})$-mates. The curvature of $(\overline{\gamma},\overline{\nu}_{1},\overline{\nu}_{2})$ is given by $(\overline{\ell},\overline{m},\overline{n},\overline{\alpha})$. Then $(\gamma,\nu_{1},\nu_{2})$ and $(\overline{\gamma},\overline{\nu}_{2},\overline{\nu}_{1})$ are $(\nu_{2},\overline{\nu}_{2})$-mates and the curvature of $(\overline{\gamma},\overline{\nu}_{2},\overline{\nu}_{1})$ is given by $(-\overline{\ell},-\overline{n},-\overline{m},-\overline{\alpha})$ by Propositions 2.11 and 4.14. ###### Remark 4.17 Let $(\gamma,\nu_{1},\nu_{2}):I\to\mathbb{R}^{3}\times\Delta$ be a framed curve with an adapted frame. Then the curvature of $(\gamma,\nu_{1},\nu_{2})$ is given by $(0,m,n,\alpha)$ by (16). By Theorems 4.13 and 4.15, $(\gamma,\nu_{1},\nu_{2})$ is automatically not only a $(\nu_{2},\overline{\nu}_{1})$-Bertrand framed curve, but also a $(\nu_{2},\overline{\nu}_{2})$-Bertrand framed curve. ###### Theorem 4.18 $(\gamma,\nu_{1},\nu_{2}):I\to\mathbb{R}^{3}\times\Delta$ is a $(\nu_{2},\overline{\mu})$-Bertrand framed curve if and only if $\ell(t)=0$ and there exists a smooth function $\lambda:I\rightarrow\mathbb{R}$ such that $\alpha(t)+\lambda(t)n(t)=0$ for all $t\in I$. Proof. Suppose that $(\gamma,\nu_{1},\nu_{2}):I\to\mathbb{R}^{3}\times\Delta$ is a $(\nu_{2},\overline{\mu})$-Bertrand framed curve. Then there exists a framed curve $(\overline{\gamma},\overline{\nu}_{1},\overline{\nu}_{2})$ and a smooth function $\lambda:I\to\mathbb{R}$ with $\lambda\not\equiv 0$ such that $\overline{\gamma}(t)=\gamma(t)+\lambda(t)\nu_{2}(t)$ and $\nu_{2}(t)=\overline{\mu}(t)$ for all $t\in I$. By differentiating, we have $\overline{\alpha}(t)\overline{\mu}(t)=(\alpha(t)+\lambda(t)n(t))\mu(t)-\lambda(t)\ell(t)\nu_{1}(t)+\dot{\lambda}(t)\nu_{2}(t)=0$ for all $t\in I$. Since $\nu_{2}(t)=\overline{\mu}(t)$, we have $\alpha(t)+\lambda(t)n(t)=0$ and $-\lambda(t)\ell(t)=0$ for all $t\in I$. Since $\lambda\not\equiv 0$ and by continuity, we have $\ell(t)=0$ for all $t\in I$. Conversely, suppose that $\ell(t)=0$ and there exists a smooth function $\lambda:I\rightarrow\mathbb{R}$ such that $\alpha(t)+\lambda(t)n(t)=0$ for all $t\in I$. 
Let $(\overline{\gamma},\overline{\nu}_{1},\overline{\nu}_{2}):I\to\mathbb{R}^{3}\times\Delta$ be $\overline{\gamma}(t)=\gamma(t)+\lambda(t)\nu_{2}(t),\ \overline{\nu}_{1}(t)=\mu(t),\ \overline{\nu}_{2}(t)=\nu_{1}(t).$ Since $\dot{\overline{\gamma}}(t)=\dot{\lambda}(t)\nu_{2}(t)$, we have $\dot{\overline{\gamma}}(t)\cdot\overline{\nu}_{1}(t)=\dot{\overline{\gamma}}(t)\cdot\overline{\nu}_{2}(t)=0$ for all $t\in I$. It follows that $(\overline{\gamma},\overline{\nu}_{1},\overline{\nu}_{2})$ is a framed curve. Moreover, since $\overline{\mu}(t)=\overline{\nu}_{1}(t)\times\overline{\nu}_{2}(t)$, we have $\overline{\mu}(t)={\nu}_{2}(t)$ for all $t\in I$. Therefore, $(\gamma,\nu_{1},\nu_{2})$ is a $(\nu_{2},\overline{\mu})$-Bertrand framed curve. $\Box$ ###### Remark 4.19 If $(\gamma,\nu_{1},\nu_{2})$ is a $(\nu_{2},\overline{\mu})$-Bertrand framed curve and $n(t)\not=0$ for all $t\in I$, then $\overline{\gamma}$ is a circular evolute with respect to $\nu_{2}$ of the framed curve $(\gamma,\nu_{1},\nu_{2})$ (cf. [12]). ###### Proposition 4.20 Suppose that $(\gamma,\nu_{1},\nu_{2})$ and $(\overline{\gamma},\overline{\nu}_{1},\overline{\nu}_{2}):I\to\mathbb{R}^{3}\times\Delta$ are $(\nu_{2},\overline{\mu})$-mates, where $\overline{\gamma}(t)=\gamma(t)+\lambda(t)\nu_{2}(t),\overline{\nu}_{1}(t)=\cos\theta(t){\mu}(t)-\sin\theta(t){\nu}_{1}(t),\overline{\nu}_{2}(t)=\sin\theta(t){\mu}(t)+\cos\theta(t){\nu}_{1}(t)$ and $\theta:I\to\mathbb{R}$ is a smooth function. Then the curvature $(\overline{\ell},\overline{m},\overline{n},\overline{\alpha})$ of $(\overline{\gamma},\overline{\nu}_{1},\overline{\nu}_{2})$ is given by $\displaystyle\overline{\ell}(t)=-\dot{\theta}(t)-m(t),\ \overline{m}(t)=-n(t)\cos\theta(t),\ \overline{n}(t)=-n(t)\sin\theta(t),\ \overline{\alpha}(t)=\dot{\lambda}(t).$ Proof. By differentiating $\overline{\nu}_{1}(t)=\cos\theta(t){\mu}(t)-\sin\theta(t){\nu}_{1}(t)$, we have $\displaystyle\overline{\ell}(t)\overline{\nu}_{2}(t)+\overline{m}(t)\overline{\mu}(t)$ $\displaystyle=(-\dot{\theta}(t)-m(t))\overline{\nu}_{2}(t)-n(t)\cos\theta(t)\nu_{2}(t).$ Then we have $\overline{\ell}(t)=-\dot{\theta}(t)-m(t)$. Since $\nu_{2}(t)=\overline{\mu}(t)$, we have $\overline{m}(t)=-n(t)\cos\theta(t)$. Moreover, by differentiating $\overline{\nu}_{2}(t)=\sin\theta(t){\mu}(t)+\cos\theta(t){\nu}_{1}(t)$, we have $\displaystyle-\overline{\ell}(t)\overline{\nu}_{1}(t)+\overline{n}(t)\overline{\mu}(t)$ $\displaystyle=(\dot{\theta}(t)+m(t))\overline{\nu}_{1}(t)-n(t)\sin\theta(t)\nu_{2}(t).$ Since $\nu_{2}(t)=\overline{\mu}(t)$, we have $\overline{n}(t)=-n(t)\sin\theta(t)$. By $\overline{\alpha}(t)\overline{\mu}(t)=(\alpha(t)+\lambda(t)n(t))\mu(t)-\lambda(t)\ell(t)\nu_{1}(t)+\dot{\lambda}(t)\nu_{2}(t)=0$ and $\nu_{2}(t)=\overline{\mu}(t)$, we have $\overline{\alpha}(t)=\dot{\lambda}(t)$. $\Box$ Finally, we give concrete examples of Bertrand framed curves. 
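As a quick numerical sanity check for the examples below, the curvature $(\ell,m,n,\alpha)$ of a sampled framed curve can be estimated by finite differences from the moving-frame equations $\dot{\nu}_{1}=\ell\nu_{2}+m\mu$, $\dot{\nu}_{2}=-\ell\nu_{1}+n\mu$, $\dot{\gamma}=\alpha\mu$, with $\mu=\nu_{1}\times\nu_{2}$. The following sketch (assuming only a standard NumPy environment; it is an illustration and not part of the proofs) computes these quantities, which makes it easy to verify, for instance, the condition $m(t)=n(t)=0$ of Theorem 4.2 or that $\ell\equiv 0$ for the frames arising in Example 4.22.

```python
import numpy as np

def framed_curvature(t, gamma, nu1, nu2):
    """Estimate the curvature (l, m, n, alpha) of a framed curve sampled at
    parameter values t, using  nu1' = l*nu2 + m*mu,  nu2' = -l*nu1 + n*mu,
    gamma' = alpha*mu,  with mu = nu1 x nu2.  All arrays have shape (N, 3)."""
    mu = np.cross(nu1, nu2)
    deriv = lambda f: np.gradient(f, t, axis=0)      # finite-difference d/dt
    dot = lambda a, b: np.einsum('ij,ij->i', a, b)   # row-wise inner product
    dn1, dn2, dg = deriv(nu1), deriv(nu2), deriv(gamma)
    return dot(dn1, nu2), dot(dn1, mu), dot(dn2, mu), dot(dg, mu)

# A horizontal great circle viewed as the framed curve (gamma, gamma, nu) of
# Example 4.22: its Legendre curvature is (m, n) = (-1, 0), so the framed
# curvature should be (l, m, n, alpha) = (0, -1, 0, -1); in particular l = 0.
t = np.linspace(0.0, 2.0 * np.pi, 4000)
gamma = np.stack([np.cos(t), np.sin(t), np.zeros_like(t)], axis=1)
nu = np.tile([0.0, 0.0, 1.0], (t.size, 1))
l, m, n, a = framed_curvature(t, gamma, gamma, nu)
print(np.abs(l).max(), m.mean(), np.abs(n).max(), a.mean())   # ~0, ~-1, ~0, ~-1
```

Since $\ell\equiv 0$ here, Remark 4.17 applies and this frame is automatically a $(\nu_{2},\overline{\nu}_{1})$- and $(\nu_{2},\overline{\nu}_{2})$-Bertrand framed curve.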
###### Example 4.21 Let $(\gamma,\nu_{1},\nu_{2}):[0,2\pi)\to\mathbb{R}^{3}\times\Delta$ be $\displaystyle\gamma(t)=\frac{1}{\sqrt{1+p^{2}}}$ $\displaystyle\Bigl{(}-\frac{p}{2}\left(-\frac{1}{q+\sqrt{1+p^{2}}}\cos(q+\sqrt{1+p^{2}})t-\frac{1}{q-\sqrt{1+p^{2}}}\cos(q-\sqrt{1+p^{2}})t\right),$ $\displaystyle\quad\frac{p}{2}\left(\frac{1}{q+\sqrt{1+p^{2}}}\sin(q+\sqrt{1+p^{2}})t-\frac{1}{q-\sqrt{1+p^{2}}}\sin(q-\sqrt{1+p^{2}})t\right),$ $\displaystyle\quad-\frac{1}{q}\cos qt\Bigr{)},$ $\displaystyle\nu_{1}(t)=\frac{p}{\sqrt{1+p^{2}}}$ $\displaystyle\Bigl{(}-\frac{p}{2}\left(\frac{1}{1+\sqrt{1+p^{2}}}\sin(1+\sqrt{1+p^{2}})t+\frac{1}{1-\sqrt{1+p^{2}}}\sin(1-\sqrt{1+p^{2}})t\right),$ $\displaystyle\quad\frac{p}{2}\left(\frac{1}{1+\sqrt{1+p^{2}}}\cos(1+\sqrt{1+p^{2}})t-\frac{1}{1-\sqrt{1+p^{2}}}\cos(1-\sqrt{1+p^{2}})t\right),$ $\displaystyle\quad\sin t\Bigr{)},$ $\displaystyle\nu_{2}(t)=\frac{p}{\sqrt{1+p^{2}}}$ $\displaystyle\Bigl{(}-\frac{p}{2}\left(-\frac{1}{1+\sqrt{1+p^{2}}}\cos(1+\sqrt{1+p^{2}})t-\frac{1}{1-\sqrt{1+p^{2}}}\cos(1-\sqrt{1+p^{2}})t\right),$ $\displaystyle\quad\frac{p}{2}\left(\frac{1}{1+\sqrt{1+p^{2}}}\sin(1+\sqrt{1+p^{2}})t-\frac{1}{1-\sqrt{1+p^{2}}}\sin(1-\sqrt{1+p^{2}})t\right),$ $\displaystyle\quad-\cos t\Bigr{)},$ where $p,q\in\mathbb{R}\setminus\\{0\\}$ with $q\neq\pm\sqrt{1+p^{2}}$. By a direct calculation, $\displaystyle\mu(t)$ $\displaystyle=\nu_{1}(t)\times\nu_{2}(t)=\frac{1}{\sqrt{1+p^{2}}}\Bigl{(}-p\cos\sqrt{1+p^{2}}t,-p\sin\sqrt{1+p^{2}}t,1\Bigr{)}$ and $(\gamma,\nu_{1},\nu_{2})$ is a framed curve with the curvature $(\ell(t),m(t),n(t),\alpha(t))=(0,p\cos t,p\sin t,\sin qt).$ If we take $\theta(t)=n\pi$ where $n\in\mathbb{Z}$, then the condition of Theorem 2.13 is satisfied. Hence, $(\gamma,\nu_{1},\nu_{2})$ is a $(\nu_{1},\overline{\nu}_{1})$-Bertrand framed curve. Similarly, if we take $\theta(t)=\pi/2+n\pi$, the conditions of Theorems 2.16 and 4.13 are satisfied. Hence, $(\gamma,\nu_{1},\nu_{2})$ is a $(\nu_{1},\overline{\nu}_{2})$, $(\nu_{2},\overline{\nu}_{1})$ and $(\nu_{2},\overline{\nu}_{2})$-Bertrand framed curve. Moreover, if we take $\theta(t)=\pi/2+n\pi-t$, the conditions of Theorems 4.5 and 4.8 are satisfied. Hence, $(\gamma,\nu_{1},\nu_{2})$ is a $(\mu,\overline{\nu}_{1})$ and $(\mu,\overline{\nu}_{2})$-Bertrand framed curve. If we take $p\not=0,\pm\sqrt{3}$, $q=2$ and $\lambda(t)=-(2/p)\sin t$ (respectively, $\lambda(t)=-(2/p)\cos t$), then the condition of Theorem 4.10 (respectively, Theorem 4.18) is satisfied. Hence, $(\gamma,\nu_{1},\nu_{2})$ is a $(\nu_{1},\overline{\mu})$ (respectively, $(\nu_{2},\overline{\mu})$)-Bertrand framed curve. ###### Example 4.22 (Spherical Legendre curves [22]) Let $(\gamma,\nu):I\to\Delta$ be a spherical Legendre curve with curvature $(m,n)$. If we denote $\mu=\gamma\times\nu$, then $\left(\begin{array}[]{c}\dot{\gamma}(t)\\\ \dot{\nu}(t)\\\ \dot{\mu}(t)\end{array}\right)=\left(\begin{array}[]{ccc}0&0&m(t)\\\ 0&0&n(t)\\\ -m(t)&-n(t)&0\end{array}\right)\left(\begin{array}[]{c}\gamma(t)\\\ \nu(t)\\\ \mu(t)\end{array}\right).$ It follows that $(\gamma,\gamma,\nu)$ is a framed curve with curvature $(0,m,n,m)$. Since $\ell=0$, $(\gamma,\nu_{1},\nu_{2})=(\gamma,\gamma,\nu)$ is a $(\nu_{1},\overline{\nu}_{1})$ (respectively, $(\nu_{1},\overline{\nu}_{2})$, $(\nu_{2},\overline{\nu}_{1})$, $(\nu_{2},\overline{\nu}_{2})$)-Bertrand framed curve. Moreover, if we take $\lambda=-1$, then the condition of Theorem 4.10 is satisfied. Hence, $(\gamma,\nu_{1},\nu_{2})$ is a $(\nu_{1},\overline{\mu})$-Bertrand framed curve. 
It also follows that $(\gamma,\nu,\gamma)$ is a framed curve with curvature $(0,-n,-m,-m)$. Since $\ell=0$, $(\gamma,\nu_{1},\nu_{2})=(\gamma,\nu,\gamma)$ is a $(\nu_{1},\overline{\nu}_{1})$ (respectively, $(\nu_{1},\overline{\nu}_{2})$, $(\nu_{2},\overline{\nu}_{1})$, $(\nu_{2},\overline{\nu}_{2})$)-Bertrand framed curve. Moreover, if we take $\lambda=-1$, then the condition of Theorem 4.18 is satisfied. Hence, $(\gamma,\nu_{1},\nu_{2})$ is a $(\nu_{2},\overline{\mu})$-Bertrand framed curve. ## References * [1] Y. Aminov, Differential geometry and the topology of curves. Translated from the Russian by V. Gorkavy. Gordon and Breach Science Publishers, Amsterdam, 2000. * [2] Y. Banchoff, S. Lovett, Differential geometry of curves and surfaces. A K Peters, Ltd., Natick, MA, 2010. * [3] M. Berger, B. Gostiaux, Differential geometry: manifolds, curves, and surfaces. Translated from the French by Silvio Levy. Graduate Texts in Mathematics, 115. Springer-Verlag, New York, 1988. * [4] J. Bertrand, Mémoire sur la théorie des courbes à double courbure. J. de methématiques pures et appliquées. 15 1850, 332–350. * [5] R. L. Bishop, There is more than one way to frame a curve. American Mathematical Monthly. 82 1975, 246–251. * [6] J. W. Bruce, P. J. Giblin, Curves and singularities. A geometrical introduction to singularity theory. Second edition. Cambridge University Press, Cambridge, 1992. * [7] M. P. do Carmo, Differential geometry of curves and surfaces. Translated from the Portuguese. Prentice-Hall, Inc., Englewood Cliffs, N.J., 1976. * [8] T. Fukunaga, M. Takahashi, Evolutes and involutes of frontals in the Euclidean plane. Demonstr. Math. 48 (2015), 147–166. * [9] T. Fukunaga, M. Takahashi, Existence conditions of framed curves for smooth curves. Journal of Geometry. 108, 2017, 763–774. doi: 10.1007/s00022-017-0371-5 * [10] S. Honda, M. Takahashi, Framed curves in the Euclidean space. Advances in Geometry. 16, 2017, 265–276. doi: 10.1515/advgeom-2015-0035 * [11] S. Honda, M. Takahashi, Bertrand and Mannheim curves of framed curves in the 3-dimensional Euclidean space. Turkish J. Math. 44, 2020, 883–899. * [12] S. Honda, M. Takahashi, Circular evolutes and involutes of framed curves in the 3-dimensional Euclidean space. Preprint, 2024. * [13] C. G. Gibson, Elementary geometry of differentiable curves. An undergraduate introduction. Cambridge University Press, Cambridge, 2001. * [14] A. Gray, E. Abbena and S. Salamon, Modern differential geometry of curves and surfaces with Mathematica. Third edition. Studies in Advanced Mathematics. Chapman and Hall/CRC, Boca Raton, FL, 2006. * [15] J. Huang, L. Chen, S. Izumiya, D. Pei, Geometry of special curves and surfaces in 3-space form. Journal of Geometry and Physics. 136, 2019, 31–38. * [16] S. Izumiya, N. Takeuchi, Generic properties of helices and Bertrand curves. Journal of Geometry. 74, 2002, 97–109. * [17] W. Kühnel, Differential geometry. Curves-surfaces-manifolds. Translated from the 1999 German original by Bruce Hunt. Student Mathematical Library, 16. American Mathematical Society, Providence, RI, 2002. * [18] H. Liu, F. Wang, Mannheim partner curves in 3-space. Journal of Geometry. 88, 2008, 120–126. * [19] N. Nakatsuyama, M. Takahashi, On vertices of frontals in the Euclidean plane. Preprint, 2024. * [20] S. G. Papaioannou, D. Kiritsis, An application of Bertrand curves and surfaces to CADCAM. Computer-Aided Design 17, 1985, 348–352. * [21] D. J. Struik, Lectures on classical differential geometry. Reprint of the second edition. 
Dover Publications, Inc., New York, 1988. * [22] M. Takahashi, Legendre curves in the unit spherical bundle over the unit sphere and evolutes. Contemp. Math. 675, 2016, 337–355. Nozomi Nakatsuyama, Muroran Institute of Technology, Muroran 050-8585, Japan. E-mail address: <EMAIL_ADDRESS> Masatomo Takahashi, Muroran Institute of Technology, Muroran 050-8585, Japan. E-mail address: <EMAIL_ADDRESS>
# Sparse2Dense: Learning to Densify 3D Features for 3D Object Detection Tianyu Wang1,2,3, Xiaowei Hu3, , Zhengzhe Liu1, Chi-Wing Fu1,2 1 The Chinese University of Hong Kong 2 The Shun Hing Institute of Advanced Engineering 3 Shanghai AI Laboratory <EMAIL_ADDRESS><EMAIL_ADDRESS>Corresponding author<EMAIL_ADDRESS> ###### Abstract LiDAR-produced point clouds are the major source for most state-of-the-art 3D object detectors. Yet, small, distant, and incomplete objects with sparse or few points are often hard to detect. We present Sparse2Dense, a new framework to efficiently boost 3D detection performance by learning to densify point clouds in latent space. Specifically, we first train a dense point 3D detector (DDet) with a dense point cloud as input and design a sparse point 3D detector (SDet) with a regular point cloud as input. Importantly, we formulate the lightweight plug-in S2D module and the point cloud reconstruction module in SDet to densify 3D features and train SDet to produce 3D features, following the dense 3D features in DDet. So, in inference, SDet can simulate dense 3D features from regular (sparse) point cloud inputs without requiring dense inputs. We evaluate our method on the large-scale Waymo Open Dataset and the Waymo Domain Adaptation Dataset, showing its high performance and efficiency over the state of the arts. ## 1 Introduction Figure 1: Our approach is able to produce dense and good-quality 3D features (d) from regular (raw) point clouds, enabling better detection of small, distant, and incomplete objects (b) vs. [1] (a,c). Red boxes are detection results and green boxes are the ground truths. 3D object detection is an important task for supporting autonomous vehicles to sense their surroundings. Previous works [1, 2, 3, 4] design various neural network structures to improve the detection performance. Yet, it remains extremely challenging to detect small, distant, and incomplete objects, due to the point sparsity, object occlusion, and inaccurate laser reflection. These issues hinder further improvements in the precision and robustness of 3D object detection. To improve the detection performance, some works attempt to leverage additional information, e.g., images [5, 6, 7, 8, 9], image segmentations [10, 11], and multi-frame information [12, 13]. By fusing the information with the input point cloud in physical or latent space, the 3D detector can obtain enhanced features for small and distant objects to improve the 3D detection performance. However, the above works require additional data pre-processing and fusion in inference, thereby unavoidably increasing the computational burden and slowing down the overall detection efficiency in practice. More recently, [14] associates incomplete perceptual features of objects with more complete features of the corresponding class-wise conceptual models via an incomplete-aware re-weighting map and a weighted MSE loss. However, this network still struggles to deal with sparse regions with limited points, due to the difficulty of generating good-quality features in these regions. Recently, [15] proposes to generate semantic points on the predicted object regions and then train modern detectors, leveraging both the generated and original points. However, as the generated points in sparse regions could be incomplete, the generation quality in these regions is still far from satisfactory. Also, it takes a long time to generate the semantic points in large scenes. 
In this work, we present a new approach to address the point sparsity issue in 3D object detection. Specifically, we design the Sparse2Dense framework with two detectors: (i) the Dense point 3D Detector (DDet), which is pre-trained with dense point clouds for 3D detection, and (ii) the Sparse point 3D Detector (SDet), which is trained with regular (raw) point clouds as input. Very importantly, when we train SDet, we use DDet to teach SDet to simulate densified 3D features, so that it can learn to produce good-quality 3D features from regular point clouds to improve the detection performance. Unlike previous approaches, we design two effective modules to further help densify the sparse features from regular point clouds in latent space. Also, unlike previous multi-modal approaches [9, 10], which require extra information, like image segmentation and images, in both training and inference, our approach needs dense point data only in training but not in inference. Our framework is trained in two stages. First, we prepare dense point clouds by fusing multi-frame point clouds for training DDet. Then, we transfer the densified 3D features derived from DDet for embedding into the SDet features when training SDet, such that it can learn to generate densified 3D features, even from regular point clouds. To facilitate SDet to simulate dense 3D features, we design the lightweight and effective sparse-to-dense (S2D) module to densify sparse 3D features in latent space. Also, to further enhance the feature learning, we formulate the point cloud reconstruction (PCR) module to learn to reconstruct a voxel-level point cloud as an auxiliary task. Like DDet, we need this PCR module only in training but not in inference. Furthermore, our framework is generic and compatible with various 3D detectors for boosting their performance. We adopt our framework to work with three different recent works [16, 17, 1] on the large-scale Waymo [18] Open and Waymo Domain Adaptation Datasets, showing that the detection performance of all three methods is improved by our approach. Particularly, the experimental results show that our approach outperforms the state-of-the-art 3D detectors on both datasets, demonstrating the effectiveness and versatility of our approach. Below, we summarize the major contributions of this work. * (i) We design a new approach to address the point sparsity issue in 3D detection and formulate the Sparse2Dense framework to transfer dense point knowledge from the Dense point 3D Detector (DDet) to the Sparse point 3D Detector (SDet). * (ii) We design the lightweight plug-in S2D module to learn dense 3D features in the latent space and the point cloud reconstruction (PCR) module to regularize the feature learning. * (iii) We evaluate our approach on the large-scale benchmark datasets, Waymo [18] Open and Waymo Domain Adaptation, demonstrating its superior performance over the state of the arts. ## 2 Related Work #### 3D Object Detection. Recent years have witnessed the rapid progress of 3D object detection; existing works [19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31] have achieved remarkable results on this task. SECOND [16] employs sparse convolution [32, 33] and PointPillars [17] introduces a pillar representation to achieve a good trade-off between speed and performance. Recently, CenterPoint [1] proposes a center-based anchor-free method to localize the object and VoTr [30] joins self-attention with sparse convolution to build a transformer-based 3D backbone. 
Besides, PV-RCNN [4] fuses deep features by RoI-grid pooling from both point and voxel features, and LiDAR R-CNN [26] presents a PointNet-based second-stage refinement to address size ambiguity. In this work, we adopt our approach to three recent methods as representatives, i.e., [16, 17, 1], showing that our approach can effectively enhance the performance of all three methods, demonstrating the compatibility of our framework. #### Sparse/Dense Domain Transformation for 3D Point Cloud. Raw point clouds are typically sparse and incomplete in practice, thereby limiting the performance of many downstream tasks. To address this issue, several approaches, including point cloud upsampling [34, 35, 36] and completion [37, 38, 39, 40], have been proposed to densify point cloud and complete the objects to improve the 3D segmentation [41, 42] and detection [43, 14, 15, 31, 44] performance. Here, we review some works on sparse/dense domain transformation on 3D object detection. [45] first propose a multi-frame to single-frame distillation framework, which only uses five adjacent frames to generate the dense features as the guidance, thus limiting the performance for distant objects. [43, 14] present a self-contained method to first extract complete cars at similar viewpoints and high-density regions across the dataset, then merge these augmented cars at the associated ground-truth locations for conceptual feature extraction. Later, [15] introduces semantic point generation to address the missing point issue. Recently, [44] presents a two-stage framework: first predict 3D proposals then complete the points by another module, which employs an attention-based GNN to refine the detection results with completed objects. The above works [43, 14, 15, 44] conduct various operations in the point cloud explicitly, $e.g.$, object extraction and matching [43, 14], and point generation [15, 44]; however, the explicit point cloud operations lead to two issues. First, it is challenging to conduct the above operations in distant and occluded regions, due to the high sparsity of points, thus severely limiting their performance in these regions. Second, it typically takes a very long time to conduct these operations explicitly, especially for large scenes. Beyond the prior works, we present an efficient sparse-to-dense approach to learn to densify 3D features in the latent space, instead of explicitly generating points in the point cloud. Notably, our approach needs dense point clouds only in training. In inference, it takes only a regular (sparse) point cloud as input for 3D detection. Quantitative experiments demonstrate that our approach outperforms existing ones in terms of both accuracy and efficiency. ## 3 Methodology ### 3.1 Overall Framework Design Figure 2 gives an overview of our Sparse2Dense framework, which consists of the Dense point 3D Detector (DDet) on top and the Sparse point 3D Detector (SDet) on bottom. Overall, DDet takes a dense point cloud as input, whereas SDet takes a regular (sparse) point cloud as input. Our idea is to transfer dense point knowledge from DDet to SDet and encourage SDet to learn to generate dense 3D features, even with sparse point clouds as inputs, to boost its 3D detection performance. The workflow of our framework design is summarized as follows: * (i) Dense object generation: we prepare dense point clouds for training DDet using raw multi-frame point cloud sequences. 
Particularly, we design a dense object generation procedure by building voxel grids and filling the voxels with object points (Section 3.2). Then, we replace the object regions in the sparse point cloud $P^{S}$ with the corresponding dense object points to obtain the dense point cloud $P^{D}$. * (ii) From dense detector to sparse detector: The training has two stages. First, as Figure 2 (a) shows, we train DDet with dense point cloud $P^{D}$ with a region proposal network (RPN) to extract region proposals and multiple heads to perform object classification and regression, following the VoxelNet-based methods [1, 16] or the pillar-based method [17]. Second, as Figure 2 (b) shows, we initialize SDet with the weights of the pre-trained DDet, and train SDet with a regular sparse point cloud $P^{S}$ as input. Meanwhile, we freeze the weights of DDet and adopt an MSE loss to reduce the feature difference ($F_{a}^{D}$ & $F_{a}^{S}$) between DDet and SDet. * (iii) Dense feature generation by S2D module: the MSE loss alone is not sufficient to supervise SDet to effectively simulate dense 3D features like DDet. To complement the MSE loss, we further design the S2D module to learn dense 3D features of objects in latent space. In detail, we feed the dense object point cloud $P^{D}_{O}$, which is the object region of the dense point cloud $P^{D}$, to DDet and then extract the dense object features $F_{b}^{D}$. After that, we further encourage the feature $F_{b}^{S}$ enhanced by S2D to match the dense object feature $F_{b}^{D}$. * (iv) Feature enhancement by point cloud reconstruction (PCR) module: we further adopt the PCR module to help the S2D module produce better dense 3D features; as an auxiliary task, PCR takes the feature from the S2D module and predicts the voxel mask and point offset for reconstructing the voxel-level dense object point cloud $P^{D}_{O}$. Figure 2: The overall framework of our proposed Sparse2Dense. Our framework contains two training stages: (a) in the first stage, we train the Dense point 3D Detector (DDet) by taking the dense point cloud as the input (dark arrows); and (b) in the second stage, we train the Sparse point 3D Detector (SDet) by using the dense features from DDet as the supervision signals (gray and pink arrows). In testing, we only need SDet for 3D object detection on the raw point cloud input (pink arrows), without the DDet and the point cloud reconstruction module. ### 3.2 Dense Object Generation To prepare dense point clouds to train the DDet network, we design an offline pre-processing pipeline to process raw point cloud sequences, each with around 198 frames (see Figure 3 (a)): * (i) First, for each annotated object, we fuse the points inside its bounding box from multiple frames and then filter out the outlier points by using the radius outlier removal algorithm from Open3D [46], as shown in Figure 3 (b). * (ii) To keep the LiDAR-scanned-line patterns and reduce the point number, we voxelize the fused object points and obtain a voxel grid (see Figure 3 (c)) of granularity $(0.1m,0.1m,0.15m)$, which is the same voxel size as used for point cloud voxelization. Specifically, we sort the frames in descending order of the object point number, as shown in Figure 3 (a). Then, we fill the voxel grid with object points starting from the beginning of the sorted frames until more than $95\%$ of the voxels have been filled (see Figure 3 (d)). 
Note that we stop filling a voxel once the number of points in it reaches five or the voxel has already been filled by previous frames; this keeps enough points per voxel for training. * (iii) For the vehicle category, whose shape is often symmetric, we flip and copy the denser side of each object about its axial plane to further improve its density (see Figure 3 (e)). Figure 3: Dense object generation pipeline. Note that we use the annotated 3D bounding box to extract points from multiple frames and then fuse the points together with the help of a voxel grid. Figure 4: Left: the architecture of the S2D module. Right: ConvNeXt block [47]. ### 3.3 Dense Feature Distillation with the S2D Module To transfer dense point knowledge from DDet to SDet, a straightforward solution is to pair up associated features in DDet and SDet and minimize the distance between each pair of features with an MSE loss. However, the MSE loss alone struggles to achieve satisfactory feature transfer, as the inputs of DDet and SDet differ substantially from each other, especially for objects far from the LiDAR sensor. Also, the backbone structure [16, 1] built upon VoxelNet consists of SPConv, which processes only non-empty voxels. Hence, it cannot generate features on empty voxels. To better densify the 3D features, we formulate the S2D module that takes SDet’s backbone feature as input and learns to output denser features for 3D object detection. Figure 4 shows its architecture, in which we first project the sparse 3D feature to obtain the BEV feature $F^{S}_{c}$, so that we can employ efficient 2D convolution operations in the BEV space. Then, we down-sample the feature maps to $1/4$ size of the input sparse features by using convolution layers with stride 2. Inspired by the efficient convolution block design in [47], we embed three ConvNeXt [47] residual blocks to aggregate the object information. Each block contains a $7\times 7$ depth-wise convolution, followed by a layer normalization, a $1\times 1$ conv with a GELU, and a $1\times 1$ conv. As shown on the right of Figure 4, the first $1\times 1$ conv increases the number of feature channels from 256 to 1024 and the second $1\times 1$ conv reduces the channel number back to 256. Next, we upsample the features via a 2D transposed conv and concatenate the result with the previous features. After that, we feed the concatenated feature to a $3\times 3$ conv layer and upsample the features to obtain the final densified feature $F^{S}_{b}$. Note that each conv, except convs in the ConvNeXt blocks, is followed by a batch normalization and a GELU non-linear operation. With the densified feature $F^{S}_{b}$, we fuse $F^{S}_{b}$ and $F^{S}_{c}$ by feeding each of them into a $1\times 1$ conv layer and adding the results together as the final output feature $F^{S}_{a}$, as shown in Figure 2. To train the S2D module, we consider two kinds of supervision. The first one is on the high-level features in the DDet network, where $F^{D}_{a}$ and $F^{D}_{b}$ are the features obtained on the dense point cloud with and without background information, respectively. We minimize the feature difference between $F^{S}_{a}$ and $F^{D}_{a}$ as well as between $F^{D}_{b}$ and $F^{S}_{b}$. The second supervision comes from the point reconstruction module to be presented in the next subsection.
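To make the above description concrete, below is a simplified PyTorch-style sketch of the S2D module. The channel width of 256, the three ConvNeXt-style residual blocks, the 1/4-scale bottleneck, and the two-branch fusion follow the text, but the exact strides, paddings, and normalization placement are our own reading of Figure 4 and may differ from the released implementation.

```python
import torch
import torch.nn as nn

class ConvNeXtBlock(nn.Module):
    """7x7 depth-wise conv -> norm -> 1x1 conv (dim -> 4*dim) -> GELU -> 1x1 conv (4*dim -> dim), residual."""
    def __init__(self, dim=256):
        super().__init__()
        self.dwconv = nn.Conv2d(dim, dim, 7, padding=3, groups=dim)
        self.norm = nn.GroupNorm(1, dim)      # stand-in for the layer normalization
        self.pw1, self.act, self.pw2 = nn.Conv2d(dim, 4 * dim, 1), nn.GELU(), nn.Conv2d(4 * dim, dim, 1)

    def forward(self, x):
        return x + self.pw2(self.act(self.pw1(self.norm(self.dwconv(x)))))

def conv_bn_gelu(cin, cout, k=3, s=1):
    return nn.Sequential(nn.Conv2d(cin, cout, k, stride=s, padding=k // 2),
                         nn.BatchNorm2d(cout), nn.GELU())

class S2D(nn.Module):
    """Densify the sparse BEV feature F_c^S and fuse it back into F_a^S (sketch)."""
    def __init__(self, dim=256):
        super().__init__()
        self.down1 = conv_bn_gelu(dim, dim, 3, 2)                       # 1/2 scale
        self.down2 = conv_bn_gelu(dim, dim, 3, 2)                       # 1/4 scale
        self.blocks = nn.Sequential(*[ConvNeXtBlock(dim) for _ in range(3)])
        self.up1 = nn.Sequential(nn.ConvTranspose2d(dim, dim, 2, stride=2),
                                 nn.BatchNorm2d(dim), nn.GELU())        # back to 1/2 scale
        self.refine = conv_bn_gelu(2 * dim, dim, 3, 1)
        self.up2 = nn.Sequential(nn.ConvTranspose2d(dim, dim, 2, stride=2),
                                 nn.BatchNorm2d(dim), nn.GELU())        # back to full BEV size
        self.fuse_b, self.fuse_c = nn.Conv2d(dim, dim, 1), nn.Conv2d(dim, dim, 1)

    def forward(self, f_c):                        # f_c: [B, 256, H, W] sparse BEV feature (H, W divisible by 4)
        x1 = self.down1(f_c)
        x2 = self.blocks(self.down2(x1))
        u = self.refine(torch.cat([self.up1(x2), x1], dim=1))
        f_b = self.up2(u)                          # densified feature F_b^S
        f_a = self.fuse_b(f_b) + self.fuse_c(f_c)  # fused output feature F_a^S
        return f_a, f_b
```

In this reading, $F^{S}_{b}$ is the densified feature supervised by $F^{D}_{b}$, and $F^{S}_{a}$ is the fused feature passed on to the detection heads.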
### 3.4 Point Cloud Reconstruction Module

To encourage the S2D module to produce good-quality dense 3D features, we further design the point cloud reconstruction (PCR) module with an auxiliary task. In short, the PCR module reconstructs a voxel-level dense object point cloud $P^{D}_{O}$ from feature $F_{b}^{S}$, $i.e.$, the output of the S2D module. Yet, it is extremely challenging to directly reconstruct large-scale dense object points [15]. Thus, we propose a voxel-level reconstruction scheme to predict only the average of the input points in each non-empty voxel. Specifically, we decouple this task into two sub-tasks: first predict a soft voxel-occupancy mask indicating the probability that the voxel is non-empty; further predict the point offset $P_{offset}$ for each non-empty voxel, $i.e.$, an offset from the voxel center $V_{c}$ to the averaged input points of this voxel. Figure 5: The architecture of the point cloud reconstruction (PCR) module. Figure 5 shows the architecture of the PCR module. The densified feature $F_{b}^{S}$ is first projected back to 3D view and fed to two 3D convolution layers and one transposed 3D convolution to upscale to $1/4$ scale of the original input. Then, we predict the voxel-occupancy mask $V_{mask}$ by using one $1\times 1$ 3D convolution with a sigmoid function and predict the point offset $P_{offset}$ by using one $1\times 1$ 3D convolution. Next, we repeat the same steps to further predict voxel mask $V_{mask}$ and point offset $P_{offset}$ at $1/2$ scale. Then, the reconstructed point $P_{c}$ is predicted as $P_{c}\ =\ (P_{offset}+V_{c})\ \times\ V_{mask}\ ,$ (1) and $P_{c}$ is optimized to reconstruct the voxelized dense object point cloud $P^{D}_{O}$. ### 3.5 Training Loss Our Sparse2Dense framework is trained in two stages. First, we train DDet with the following loss: $\mathcal{L}_{\text{DDet}}\ =\ \mathcal{L}_{\text{reg}}+\mathcal{L}_{\text{hm/cls}}\ ,$ (2) where we adopt an $L_{1}$ loss as the regression loss $\mathcal{L}_{\text{reg}}$ following [16, 1, 17]; the heatmap loss $\mathcal{L}_{\text{hm}}$ is a variant of the focal loss [48] for the center-based methods [1, 17]; and the classification loss $\mathcal{L}_{\text{cls}}$ is the focal loss for the anchor-based method [16]. Here, $\mathcal{L}_{\text{hm/cls}}$ means we use $\mathcal{L}_{\text{hm}}$ or $\mathcal{L}_{\text{cls}}$, depending on which method we adopt our framework to. Second, we train SDet with the following overall loss function: $\mathcal{L}_{\text{SDet}}\ =\ \mathcal{L}_{\text{reg}}+\mathcal{L}_{\text{hm/cls}}+\mathcal{L}_{\text{S2D}}+\mathcal{L}_{\text{mask}}\ +\mathcal{L}_{\text{offset}}+\mathcal{L}_{\text{hm\\_dis}},$ (3) where $\mathcal{L}_{\text{S2D}}$ is for optimizing S2D; $\mathcal{L}_{\text{mask}}$ and $\mathcal{L}_{\text{offset}}$ are for training PCR; and $\mathcal{L}_{\text{hm\\_dis}}$ is the distillation loss like $\mathcal{L}_{\text{hm}}$, but its input is the predicted heat maps of SDet and DDet. 
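As an illustration of how the feature-mimicry term is typically implemented, the following is a small sketch of a masked MSE of the form of Eq. (4) below. It assumes that "empty" positions are those where the frozen teacher feature is exactly zero; the weights $\beta=10$ and $\gamma=20$ follow the values given in the text, while the tensor layout is our assumption.

```python
import torch

def masked_feature_mse(f_s, f_d, beta=10.0, gamma=20.0):
    """One term of L_S2D: MSE between a student feature f_s and the frozen teacher
    feature f_d, averaged separately over non-zero and zero teacher entries and
    re-weighted by beta / gamma (cf. Eq. (4)).  f_s, f_d: [B, C, H, W]."""
    nonzero = (f_d != 0).float()                     # teacher-occupied entries
    zero = 1.0 - nonzero
    sq = (f_s - f_d) ** 2
    loss_nonzero = (sq * nonzero).sum() / nonzero.sum().clamp(min=1.0)
    loss_zero = (sq * zero).sum() / zero.sum().clamp(min=1.0)
    return beta * loss_nonzero + gamma * loss_zero

# L_S2D applies this to both feature pairs, (F_a^S, F_a^D) and (F_b^S, F_b^D):
# loss_s2d = masked_feature_mse(f_a_s, f_a_d) + masked_feature_mse(f_b_s, f_b_d)
```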
In detail, $\mathcal{L}_{\text{S2D}}$ helps to optimize SDet to learn to densify 3D features based on the associated features in DDet and it is an MSE Loss with the masks to indicate the empty and non-empty elements in the feature maps: $\displaystyle\mathcal{L}_{\text{S2D}}$ $\displaystyle=\beta\frac{1}{|N|}\sum^{N}_{i}({F_{a}^{S}}_{i}-{F_{a}^{D}}_{i})^{2}+\gamma\frac{1}{|\widetilde{N}|}\sum^{\widetilde{N}}_{i}({F_{a}^{S}}_{i}-{F_{a}^{D}}_{i})^{2}$ (4) $\displaystyle+\beta\frac{1}{|M|}\sum^{M}_{i}({F_{b}^{S}}_{i}-{F_{b}^{D}}_{i})^{2}+\gamma\frac{1}{|\widetilde{M}|}\sum^{\widetilde{M}}_{i}({F_{b}^{S}}_{i}-{F_{b}^{D}}_{i})^{2},$ where $N$ and $\widetilde{N}$ are numbers of non-zero and zero values, respectively, on $F_{a}^{D}$, while $M$ and $\widetilde{M}$ are numbers of non-zero and zero values, respectively, on $F_{b}^{D}$. Also, we empirically set $\beta$=10 and $\gamma$=20 to balance the loss weight on non-empty and empty features. $\mathcal{L}_{\text{mask}}$ and $\mathcal{L}_{\text{offset}}$ are for training PCR: $\displaystyle\mathcal{L}_{\text{mask}}=\sum_{j}\bigg{(}-\frac{N_{b}}{N_{f}}y_{j}\log(p_{j})-(1-y_{j})\log(1-p_{j})\bigg{)}\ ,$ (5) $\text{and}\ \ \mathcal{L}_{\text{offset}}\ =\ \frac{1}{|N_{f}|}\sum_{i}^{N_{f}}|({P_{offset}}_{i}+{V_{c}}_{i})-{P_{gt}}_{i}|\ ,$ (6) where $N_{b}$ and $N_{f}$ are numbers of background and foreground voxels, respectively; $p_{j}$ and $y_{j}$ are the prediction and ground-truth values of the voxel mask; and $j$ indexes the voxels in $V_{mask}$. Note also that Eq. (3) includes $\mathcal{L}_{\text{hm\\_dis}}$, only when adopting our method to the center-based methods [1, 17]. ## 4 Experiments Table 1: Comparisons on the Waymo Open Dataset on 202 validation sequences with existing works. $\dagger$ means re-produced by [28, 4]. $\star$ means re- produced by [14]. $\ddagger$ means re-produced by us. Note that our re- produced models and most of other re-produced models were trained on 20% data of Waymo Open Dataset following the strategy of [28, 4]. See more details of implementation in Sec. 4.2 and comparison in Sec. 4.3. Note that [30, 29] were trained only on the vehicle category, so they can largely focus on learning features for vehicles, [15] was trained on the vehicle and pedestrian categories, and others were trained on all three categories. 
Methods | Vehicle-L1 | Pedestrian-L1 | Cyclist-L1 | Vehicle-L2 | Pedestrian-L2 | Cyclist-L2 ---|---|---|---|---|---|--- mAP | mAPH | MAP | mAPH | mAP | mAPH | mAP | mAPH | MAP | mAPH | mAP | mAPH Part-A2-Net† [28] | 71.82 | 71.29 | 63.15 | 54.96 | 65.23 | 63.92 | 64.33 | 63.82 | 54.24 | 47.11 | 62.61 | 61.35 VoTr-SSD [30] | 68.99 | 68.39 | - | - | - | - | 60.22 | 59.69 | - | - | - | - VoTr-TSD [30] | 74.95 | 74.95 | - | - | - | - | 65.91 | 65.29 | - | - | - | - Pyramid-PV [29] | 76.30 | 75.68 | - | - | - | - | 67.23 | 66.68 | - | - | - | - Densified: | | | | | | | | | | | | PV-RCNN† [4] | 74.06 | 73.38 | 62.66 | 52.68 | 63.32 | 60.72 | 64.99 | 64.38 | 53.80 | 45.14 | 60.72 | 59.18 PV-RCNN + SPG [15] | 75.27 | - | 66.93 | - | - | - | 65.98 | - | 57.68 | - | - | - SECOND⋆ [16] | 67.40 | 66.80 | 57.40 | 47.80 | 53.50 | 52.30 | 58.90 | 58.30 | 49.40 | 41.10 | 51.80 | 50.60 SECOND+AGO-Net⋆ [14] | 69.20 | 68.70 | 59.30 | 48.70 | 55.30 | 54.20 | 60.60 | 60.10 | 51.80 | 42.40 | 53.50 | 52.50 SECOND‡ [16] | 67.49 | 66.06 | 55.59 | 44.66 | 57.32 | 54.54 | 59.42 | 57.92 | 47.99 | 38.50 | 55.19 | 52.51 SECOND (ours) | 71.94 | 70.47 | 58.78 | 48.29 | 59.24 | 56.76 | 63.49 | 62.17 | 51.12 | 41.92 | 57.03 | 54.64 CenterPoint-Pillar‡ [17, 1] | 72.36 | 71.73 | 69.16 | 59.16 | 62.11 | 60.42 | 64.12 | 63.54 | 61.14 | 52.13 | 59.76 | 58.14 CenterPoint-Pillar(ours) | 76.10 | 75.53 | 74.29 | 65.20 | 67.81 | 66.22 | 68.11 | 67.58 | 66.41 | 58.06 | 65.28 | 63.74 CenterPoint‡ [1] | 73.70 | 72.96 | 74.73 | 69.07 | 68.85 | 67.73 | 65.52 | 65.01 | 66.30 | 61.09 | 66.32 | 65.24 CenterPoint(ours) | 76.09 | 75.52 | 78.22 | 72.50 | 71.95 | 70.83 | 68.21 | 67.68 | 70.07 | 64.72 | 69.31 | 68.23 ### 4.1 Datasets and Evaluation Metrics We employ the Waymo Open Dataset and the Waymo Domain Adaptation Dataset [18], which are under the Waymo Dataset License Agreement, to evaluate our framework. Waymo Open Dataset is the largest and most informative 3D object detection dataset, which includes $360^{\circ}$ LiDAR point cloud and annotated 3D bounding boxes. The training set contains 798 sequences with around 158K LiDAR frames and the validation set includes 202 sequences with around 40k LiDAR frames. The dataset is captured across California and Arizona. The labeled object categories include vehicle, pedestrian, and cyclist. All the objects in sequences are named with a unique ID that can be used to generate the dense object point cloud in our method. Also, we perform unsupervised domain adaptation on the Waymo Domain Adaptation dataset without re-training our model to show the generalization capability of our method. The Waymo Domain Adaptation dataset contains 20 sequences with 3933 frames for evaluation. This dataset is captured in Kirkland and most frames are captured on rainy day [15], which means the point cloud is sparser and more incomplete than the point cloud in Waymo Open Dataset. The labeled object categories include vehicle and pedestrian in the Waymo Domain Adaptation dataset. Following prior works [1, 4], we adopt Average Precision weighted by Heading (APH) and Average Precision (AP) as evaluation metrics. ### 4.2 Implementation Details In the first training stage, following [1], we train DDet from scratch using Adam with a learning rate of 0.003 and a one-cycle learning rate policy with a dividing factor of 0.1 and a percentage of the cycle of 0.3. We set the detect range as $[-75.2m,75.2m]$ for the $X,Y$ axes and set $[-2m,4m]$ for the $Z$ axis, and the size of each voxel grid as $(0.1m,0.1m,0.15m)$. 
We apply global rotation around the Z-axis, random flipping, global scaling, and global translating as the data augmentation. We train the DDet on four Nvidia RTX 3090 GPUs with a batch size of four per GPU for 30 epochs. In the second training stage, we adopt DDet’s weights to initialize SDet, then optimize SDet by adopting the same hyperparameters as the first stage with DDet frozen. Following [4, 14, 30], we adopt a $20\%$ subset of the Waymo Open Dataset to train our models. Note that we adopt the second stage for CenterPoint-based baselines [1, 17] by following [1]. ### 4.3 Comparison with State-of-the-art Methods on the Waymo Open Dataset We compare our methods with multiple state-of-the-art 3D object detectors [28, 4, 30, 29, 15, 16, 17, 1] on three categories, i.e., pedestrian, vehicle, and cyclist, and on two difficulty levels, i.e., level 1 (L1) and level 2 (L2). Note that our method is a plug-and-play module and can be easily adopted to work with various deep-learning-based 3D object detectors. We obtain the results of the existing works by copying from their papers and GitHub [28, 4, 14, 30, 29, 15] or by re-producing their methods using the public code with recommended parameters [1, 16]. We cannot compare with [44], as the authors did not report the performance of their method on the Waymo Open Dataset and did not release code. Among the existing works, [15, 14] are the state-of-the-art methods that perform densification operations explicitly on point clouds and can be adopted to work with other methods. Table 1 reports the comparison results, where our method clearly improves all three baseline methods (SECOND, PointPillar-center, and CenterPoint) for all categories on all evaluation metrics. Notably, our approach achieves a larger performance gain than [14] when working with [16], and outperforms [15] when working with [17, 1], even though [15] is built upon a stronger model [4]. Also, level 2 (L2) contains more challenging samples than level 1 (L1), since the point clouds in L2 are much sparser. Despite that, our method consistently shows greater improvement on L2, demonstrating its effectiveness in dealing with sparse point clouds. Furthermore, Table 2 shows the performance gain for three different distance ranges on the Waymo validation set. Our method also achieves significant improvements over all three baseline methods for all distance ranges, including the long range, showing again that our method can effectively learn to densify 3D features in challenging cases. See more detailed comparison results and the performance of our model trained on the full Waymo Open Dataset in Appendix A. We further provide the visual comparisons in Figure 6, where we can see that: (i) our method successfully detects more objects that contain only a few points and compensates for their sparse features; (ii) our method generates more accurate 3D bounding boxes, which are consistent with the ground-truth boxes; see the orange boxes in the first row; (iii) our method generates denser and more robust features in sparse or distant regions. Please see more visual comparisons in Appendix B. 
(a) SECOND [16] (b) CenterPoint-Pillar [17, 1] (c) CenterPoint-Voxel [1] (d) SECOND (Ours) (e) CenterPoint-Pillar (Ours) (f) CenterPoint-Voxel (Ours) Figure 6: Visual comparison of 3D object detection results and 3D features produced by (a,b,c) baselines [16, 17, 1] and (d,e,f) our methods (our approach + corresponding baseline), where our approach successfully densifies the object features and help the baseline methods produce more accurate detection results than the three baselines. Note that red boxes show the detection results and green boxes show the ground truths. Orange boxes highlight the improvement brought by our approach. ### 4.4 Quantitative Comparison on the Waymo Domain Adaptation Dataset We further evaluate our methods on the Waymo Domain Adaptation Dataset and compare it with the state-of-the-art methods. Table 3 shows that our method is able to consistently improve the performance of all three methods [16, 17, 1] for all categories and all difficulty levels. Note also that the most recent state-of-the-art method SPG [15] is trained only on the vehicle and pedestrian categories, yet our method can achieve better performance even when trained on three categories. Table 2: Performance gain over baseline approaches on Waymo validation set (level-2) in different ranges. Evaluated on three categories. Methods | All Range | Range [0, 30) | Range [30, 50) | Range [50, +inf) ---|---|---|---|--- | mAP | mAPH | MAP | mAPH | mAP | mAPH | mAP | mAPH SECOND [16] | 54.20 | 49.65 | 70.58 | 66.17 | 52.33 | 46.95 | 32.64 | 28.25 SECOND (ours) | 57.21 +3.01 | 52.91 +3.26 | 72.94 +2.36 | 68.91 +2.74 | 55.30 +2.97 | 50.28 +3.33 | 36.32 +3.68 | 31.96 +3.71 CenterPoint-Pillar [17, 1] | 61.67 | 57.94 | 74.18 | 70.55 | 61.63 | 57.88 | 42.91 | 38.83 CenterPoint-Pillar(ours) | 66.60 +4.93 | 63.13 +5.19 | 77.90 +3.72 | 74.60 +4.05 | 66.72 +5.09 | 63.32 +5.44 | 49.42 +6.51 | 45.41 +6.58 CenterPoint-Voxel [1] | 66.04 | 63.78 | 80.80 | 78.86 | 64.24 | 61.66 | 45.37 | 42.35 CenterPoint-Voxel(Ours) | 69.19 +3.15 | 66.88 +3.10 | 82.72 +1.92 | 80.77 +1.91 | 67.60 +3.36 | 64.96 +3.30 | 49.45 +4.08 | 46.28 +3.93 Table 3: Comparisons on the Waymo Domain Adaptation Dataset on 20 validation sequences with existing works. † means re-produced by [15], $\ddagger$ means re-produced by us. Still, our re-produced models and most of other re-produced models were trained on 20% data of Waymo Open Dataset following the strategy of [28]; see more details in Sec. 4.2. [15] was trained on two categories (vehicle and pedestrian) while ours were trained on all the three categories. Methods | Vehicle-L1 | Pedestrian-L1 | Vehicle-L2 | Pedestrian-L2 ---|---|---|---|--- mAP | mAPH | mAP | mAPH | mAP | mAPH | mAP | mAPH SPG† [15] | 58.31 | - | 30.82 | - | 48.70 | - | 22.05 | - SECOND‡ [16] | 51.56 | 49.55 | 13.96 | 12.14 | 42.90 | 41.22 | 9.83 | 8.54 SECOND(ours) | 55.49 | 53.96 | 17.45 | 15.25 | 46.25 | 44.95 | 12.23 | 10.68 CenterPoint-Pillar‡ [17, 1] | 54.15 | 53.26 | 12.50 | 10.36 | 45.33 | 44.57 | 8.80 | 7.29 CenterPoint-Pillar(ours) | 59.18 | 58.52 | 18.95 | 16.26 | 50.12 | 49.55 | 13.31 | 11.42 CenterPoint-Voxel‡ [1] | 57.54 | 56.99 | 30.21 | 28.30 | 48.36 | 47.88 | 21.16 | 19.82 CenterPoint-Voxel(ours) | 60.54 | 59.87 | 37.15 | 35.21 | 51.01 | 50.43 | 26.03 | 24.66 ### 4.5 Ablation Study Table 4: Ablation studies on the Waymo Open Dataset validation set. 
Methods | Vehicle-L2 | Pedestrian-L2 | Cyclist-L2 ---|---|---|--- mAP | mAPH | mAP | mAPH | mAP | mAPH Baseline | 63.03 | 62.53 | 63.72 | 58.03 | 65.03 | 63.90 \+ Distillation | 63.84 | 63.32 | 67.04 | 61.21 | 67.59 | 66.44 \+ S2D | 65.75 | 65.22 | 67.62 | 61.65 | 68.50 | 67.34 \+ PCR | 66.12 | 65.58 | 67.47 | 61.59 | 68.69 | 67.54 \- Distillation | 65.61 | 65.08 | 64.75 | 58.80 | 65.79 | 64.62

Table 5: Latency analysis on our S2D module. We evaluate each model with a batch size of 1. The latency is averaged over the Waymo validation set. As a reference, we include SPG [15] (evaluated on KITTI). Our method needs only $\sim$10 ms vs. 16.9 ms by SPG. Detectors | CenterPoint-Pillar [17, 1] | CenterPoint-Pillar+S2D | CenterPoint-Voxel [1] | CenterPoint-Voxel+S2D ---|---|---|---|--- Inference time (ms) | 42.7 | 53.1 (+10.4) | 53.0 | 62.8 (+9.8) Detectors | PV-RCNN [4] | PV-RCNN+SPG [15] Inference time (ms) | 140.0 | 156.9 (+16.9)

We conduct experiments to evaluate the key components in our Sparse2Dense framework. Here, we adopt our approach to work with CenterPoint (one-stage version) [1]. Then, we conduct the feature distillation (“+ Distillation”) to distill the 3D features from $F_{a}^{D}$ in DDet to $F_{a}^{S}$ in SDet, and adopt heat map distillation $\mathcal{L}_{\text{hm\\_dis}}$ between the heat maps of SDet and DDet, as discussed in Sec. 3.5. Next, we further add the S2D module (“+ S2D”) in SDet to enable the feature distillation between $F_{b}^{D}$ in DDet and $F_{b}^{S}$ in SDet. In addition, we construct our full pipeline by further adding the point cloud reconstruction module (“+ PCR”). Also, we conduct an additional experiment (“- Distillation”) by ablating the feature loss $\mathcal{L}_{\text{S2D}}$ and the heat map distillation loss $\mathcal{L}_{\text{hm\\_dis}}$ from our full pipeline. The results are shown in Table 4. First, feature distillation (“+ Distillation”) helps to moderately improve the performance on all three categories. By adopting our S2D module (“+ S2D”), we can largely improve the quality of the densified features, boosting the performance of 3D object detection by around 2% on the vehicle and 1% on the cyclist, compared with “+ Distillation”. Next, our point cloud reconstruction module (“+ PCR”) further enhances the performance on the vehicle and cyclist categories on both metrics by providing additional supervision to regularize the feature learning. Finally, even without distillation (“- Distillation”), our approach can still improve the baseline performance by more than 2.5% on the vehicle, demonstrating the effectiveness of our S2D and PCR modules.

### 4.6 Latency Analysis for the S2D Module

In our framework, both DDet and PCR are used only in the training stages. In inference, we only add the lightweight S2D module into the basic 3D object detection framework. To evaluate the efficiency of the S2D module, we employ the Waymo Open Dataset validation set and report the average processing time with and without the S2D module in inference. Table 5 reports the results, showing that S2D only brings around 10 ms of extra latency to the detectors, thus demonstrating our approach’s high efficiency. Also, we show the latency of SPG [15] reported in their paper, i.e., 16.9 ms, which is evaluated on the KITTI dataset with a much smaller number of points and objects than the dataset we employed, i.e., Waymo. The latency analysis results demonstrate the superior efficiency of our S2D in comparison to the state-of-the-art approach [15].
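For reference, the timing protocol above (per-frame forward passes at batch size 1, averaged over the validation set) could be approximated by a simple loop such as the hypothetical sketch below. The models and frames are stand-ins, not the benchmarking code used for the paper; on GPU, the explicit synchronization is what makes per-frame timings meaningful.

```python
import time
import torch
from torch import nn

def mean_latency_ms(model: nn.Module, frames, device="cpu", warmup=10):
    """Average per-frame forward latency in milliseconds at batch size 1."""
    model = model.to(device).eval()
    times = []
    with torch.no_grad():
        for i, frame in enumerate(frames):
            frame = frame.to(device)
            if device == "cuda":
                torch.cuda.synchronize()
            start = time.perf_counter()
            model(frame)
            if device == "cuda":
                torch.cuda.synchronize()      # wait for GPU work before stopping the clock
            if i >= warmup:                   # discard warm-up iterations
                times.append(time.perf_counter() - start)
    return 1000.0 * sum(times) / max(len(times), 1)

# Stand-ins: `detector` and `detector_with_s2d` would be a baseline and the same baseline with
# the S2D module inserted; `val_frames` would iterate over Waymo validation frames.
detector = nn.Linear(64, 16)
detector_with_s2d = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 16))
val_frames = [torch.randn(1, 64) for _ in range(100)]

base_ms = mean_latency_ms(detector, val_frames)
s2d_ms = mean_latency_ms(detector_with_s2d, val_frames)
print(f"baseline: {base_ms:.2f} ms, +S2D: {s2d_ms:.2f} ms (overhead {s2d_ms - base_ms:.2f} ms)")
```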
## 5 Discussion and Conclusion This paper presents the novel Sparse2Dense framework that learns to densify 3D features to boost 3D object detection performance. Our key idea is to learn to transfer dense point knowledge from the trained dense point 3D detector (DDet) to the sparse point 3D detector (SDet), such that SDet can learn to densify 3D features in the latent space. With the trained SDet, we only need the core component of SDet to detect 3D objects in regular point clouds, so we can enhance the detection accuracy without degrading the speed. Further, to enhance the transfer of dense point knowledge, we design the S2D module and the point cloud reconstruction module in SDet to enhance the sparse features. Last, we adopt our framework to various 3D detectors, showing that their performance can all be improved consistently on multiple benchmark datasets. In the future, we will apply our framework to more point cloud applications that require dense features, such as 3D segmentation and object tracking, to boost their performance while maintaining high computational efficiency. Limitations. First, objects far from the LiDAR sensor in the point cloud sequence contain only a few points. It is still difficult to generate dense features for these objects with our DDet. Second, the training time of our framework is longer than traditional training, as we need multi-stage training to pre-train DDet and then transfer knowledge from the pre-trained DDet to SDet. Third, our models trained on Waymo Open Dataset need inputs containing specific point features like intensity and elongation, which limits our models evaluating on different datasets, like KITTI. We will explore removing the specific point features to make the model more general in future works. Societal Impacts. Our proposed framework can provide better 3D object detection performance for autonomous vehicles. However, like most existing 3D detectors, it may produce errors in some edge cases, due to the limited data, so further research is still needed to improve its robustness. Acknowledgements. Thank all the co-authors, reviewers, and ACs for their remarkable efforts. This work was supported by the project #MMT-p2-21 of the Shun Hing Institute of Advanced Engineering, The Chinese University of Hong Kong, and the Shanghai Committee of Science and Technology (Grant No.21DZ1100100). ## References * Yin et al. [2021a] Tianwei Yin, Xingyi Zhou, and Philipp Krahenbuhl. Center-based 3D object detection and tracking. In _CVPR_ , 2021a. * Sheng et al. [2021] Hualian Sheng, Sijia Cai, Yuan Liu, Bing Deng, Jianqiang Huang, Xian-Sheng Hua, and Min-Jian Zhao. Improving 3D object detection with channel-wise transformer. In _ICCV_ , 2021. * Shi and Rajkumar [2020] Weijing Shi and Raj Rajkumar. Point-GNN: Graph neural network for 3D object detection in a point cloud. In _CVPR_ , 2020. * Shi et al. [2020] Shaoshuai Shi, Chaoxu Guo, Li Jiang, Zhe Wang, Jianping Shi, Xiaogang Wang, and Hongsheng Li. PV-RCNN: Point-voxel feature set abstraction for 3D object detection. In _CVPR_ , 2020. * Chen et al. [2017] Xiaozhi Chen, Huimin Ma, Ji Wan, Bo Li, and Tian Xia. Multi-view 3D object detection network for autonomous driving. In _CVPR_ , 2017. * Qi et al. [2018] Charles R. Qi, Wei Liu, Chenxia Wu, Hao Su, and Leonidas J. Guibas. Frustum PointNets for 3D object detection from RGB-D data. In _CVPR_ , 2018. * Yoo et al. [2020] Jin Hyeok Yoo, Yecheol Kim, Jisong Kim, and Jun Won Choi. 
3D-CVF: Generating joint camera and lidar features using cross-view spatial feature fusion for 3D object detection. In _ECCV_ , 2020. * Ma et al. [2021] Xinzhu Ma, Yinmin Zhang, Dan Xu, Dongzhan Zhou, Shuai Yi, Haojie Li, and Wanli Ouyang. Delving into localization errors for monocular 3D object detection. In _CVPR_ , 2021. * Yin et al. [2021b] Tianwei Yin, Xingyi Zhou, and Philipp Krähenbühl. Multimodal virtual point 3d detection. In _NIPS_ , 2021b. * Vora et al. [2020] Sourabh Vora, Alex H. Lang, Bassam Helou, and Oscar Beijbom. PointPainting: Sequential fusion for 3D object detection. In _CVPR_ , 2020. * Wang et al. [2021] Chunwei Wang, Chao Ma, Ming Zhu, and Xiaokang Yang. PointAugmenting: Cross-modal augmentation for 3D object detection. In _CVPR_ , 2021. * Yang et al. [2021] Zetong Yang, Yin Zhou, Zhifeng Chen, and Jiquan Ngiam. 3D-MAN: 3D multi-frame attention network for object detection. In _CVPR_ , 2021. * Qi et al. [2021] Charles R. Qi, Yin Zhou, Mahyar Najibi, Pei Sun, Khoa Vo, Boyang Deng, and Dragomir Anguelov. Offboard 3D object detection from point cloud sequences. In _CVPR_ , 2021. * Du et al. [2021] Liang Du, Xiaoqing Ye, Xiao Tan, Edward Johns, Bo Chen, Errui Ding, Xiangyang Sue, and Jianfeng Feng. AGO-Net: Association-guided 3d point cloud object detection network. _IEEE TPAMI_ , 2021. * Xu et al. [2021] Qiangeng Xu, Yin Zhou, Weiyue Wang, Charles R. Qi, and Dragomir Anguelov. SPG: Unsupervised domain adaptation for 3D object detection via semantic point generation. In _ICCV_ , 2021. * Yan et al. [2018] Yan Yan, Yuxing Mao, and Bo Li. SECOND: Sparsely embedded convolutional detection. _Sensors_ , 2018. * Lang et al. [2019] Alex H. Lang, Sourabh Vora, Holger Caesar, Lubing Zhou, Jiong Yang, and Oscar Beijbom. PointPillars: Fast encoders for object detection from point clouds. In _CVPR_ , 2019. * Sun et al. [2020] Pei Sun, Henrik Kretzschmar, Xerxes Dotiwalla, Aurelien Chouard, Vijaysai Patnaik, Paul Tsui, James Guo, Yin Zhou, Yuning Chai, Benjamin Caine, et al. Scalability in perception for autonomous driving: Waymo open dataset. In _CVPR_ , 2020. * Engelcke et al. [2017] Martin Engelcke, Dushyant Rao, Dominic Zeng Wang, Chi Hay Tong, and Ingmar Posner. Vote3Deep: Fast object detection in 3D point clouds using efficient convolutional neural networks. In _ICRA_ , 2017. * Zhou and Tuzel [2018] Yin Zhou and Oncel Tuzel. VoxelNet: End-to-end learning for point cloud based 3D object detection. In _CVPR_ , 2018. * Shi et al. [2019] Shaoshuai Shi, Xiaogang Wang, and Hongsheng Li. PointRCNN: 3D object proposal generation and detection from point cloud. In _CVPR_ , 2019. * Yang et al. [2019] Zetong Yang, Yanan Sun, Shu Liu, Xiaoyong Shen, and Jiaya Jia. STD: Sparse-to-dense 3D object detector for point cloud. In _ICCV_ , 2019. * Yang et al. [2020] Zetong Yang, Yanan Sun, Shu Liu, and Jiaya Jia. 3DSSD: Point-based 3D single stage object detector. In _CVPR_ , 2020. * Deng et al. [2020] Jiajun Deng, Shaoshuai Shi, Peiwei Li, Wengang Zhou, Yanyong Zhang, and Houqiang Li. Voxel R-CNN: Towards high performance voxel-based 3D object detection. _arXiv:2012.15712_ , 2020. * Zheng et al. [2021] Wu Zheng, Weiliang Tang, Li Jiang, and Chi-Wing Fu. SE-SSD: Self-ensembling single-stage object detector from point cloud. In _CVPR_ , 2021. * Li et al. [2021a] Zhichao Li, Feng Wang, and Naiyan Wang. LiDAR R-CNN: An efficient and universal 3D object detector. In _CVPR_ , 2021a. * Kuang et al. [2020] Hongwu Kuang, Bei Wang, Jianping An, Ming Zhang, and Zehan Zhang. 
Voxel-FPN: Multi-scale voxel feature aggregation for 3d object detection from lidar point clouds. _Sensors_ , 20(3), 2020. * Shi et al. [2021] Shaoshuai Shi, Zhe Wang, Jianping Shi, Xiaogang Wang, and Hongsheng Li. From points to parts: 3D object detection from point cloud with part-aware and part-aggregation network. _IEEE TPAMI_ , 43(8):2647–2664, 2021. * Mao et al. [2021a] Jiageng Mao, Minzhe Niu, Haoyue Bai, Xiaodan Liang, Hang Xu, and Chunjing Xu. Pyramid R-CNN: Towards better performance and adaptability for 3D object detection. In _ICCV_ , 2021a. * Mao et al. [2021b] Jiageng Mao, Yujing Xue, Minzhe Niu, Haoyue Bai, Jiashi Feng, Xiaodan Liang, Hang Xu, and Chunjing Xu. Voxel transformer for 3D object detection. In _ICCV_ , 2021b. * Xu et al. [2022] Qiangeng Xu, Yiqi Zhong, and Ulrich Neumann. Behind the curtain: Learning occluded shapes for 3d object detection. In _AAAI_ , 2022. * Graham et al. [2018] Benjamin Graham, Martin Engelcke, and Laurens van der Maaten. 3D semantic segmentation with submanifold sparse convolutional networks. In _CVPR_ , 2018. * Graham and van der Maaten [2017] Benjamin Graham and Laurens van der Maaten. Submanifold sparse convolutional networks. _arXiv preprint arXiv:1706.01307_ , 2017. * Yu et al. [2018] Lequan Yu, Xianzhi Li, Chi-Wing Fu, Daniel Cohen-Or, and Pheng-Ann Heng. PU-Net: Point cloud upsampling network. In _CVPR_ , 2018. * Li et al. [2019] Ruihui Li, Xianzhi Li, Chi-Wing Fu, Daniel Cohen-Or, and Pheng-Ann Heng. PU-GAN: a point cloud upsampling adversarial network. In _CVPR_ , 2019. * Li et al. [2021b] Ruihui Li, Xianzhi Li, Pheng-Ann Heng, and Chi-Wing Fu. Point cloud upsampling via disentangled refinement. In _CVPR_ , 2021b. * Pan et al. [2021] Liang Pan, Xinyi Chen, Zhongang Cai, Junzhe Zhang, Haiyu Zhao, Shuai Yi, and Ziwei Liu. Variational relational point completion network. In _CVPR_ , 2021. * Xiang et al. [2021] Peng Xiang, Xin Wen, Yu-Shen Liu, Yan-Pei Cao, Pengfei Wan, Wen Zheng, and Zhizhong Han. SnowflakeNet: Point cloud completion by snowflake point deconvolution with skip-transformer. In _CVPR_ , 2021. * Xie et al. [2021] Chulin Xie, Chuxin Wang, Bo Zhang, Hao Yang, Dong Chen, and Fang Wen. Style-based point generator with adversarial rendering for point cloud completion. In _CVPR_ , 2021. * Yu et al. [2021] Xumin Yu, Yongming Rao, Ziyi Wang, Zuyan Liu, Jiwen Lu, and Jie Zhou. PoinTr: Diverse point cloud completion with geometry-aware transformers. In _CVPR_ , 2021. * Yan et al. [2021] Xu Yan, Jiantao Gao, Jie Li, Ruimao Zhang, Zhen Li, Rui Huang, and Shuguang Cui. Sparse single sweep lidar point cloud segmentation via learning contextual shape priors from scene completion. In _AAAI_ , 2021. * Yi et al. [2021] Li Yi, Boqing Gong, and Thomas Funkhouser. Complete & label: A domain adaptation approach to semantic segmentation of LiDAR point clouds. In _CVPR_ , 2021. * Du et al. [2020] Liang Du, Xiaoqing Ye, Xiao Tan, Jianfeng Feng, Zhenbo Xu, Errui Ding, and Shilei Wen. Associate-3Ddet: Perceptual-to-conceptual association for 3D point cloud object detection. In _CVPR_ , 2020. * Zhang et al. [2021] Yanan Zhang, Di Huang, and Yunhong Wang. PC-RGNN: Point cloud completion and graph neural network for 3D object detection. In _AAAI_ , 2021. * Wang et al. [2020] Yue Wang, Alireza Fathi, Jiajun Wu, Thomas Funkhouser, and Justin Solomon. Multi-frame to single-frame: Knowledge distillation for 3d object detection. In _ECCVW_ , 2020. * Zhou et al. [2018] Qian-Yi Zhou, Jaesik Park, and Vladlen Koltun. 
Open3D: A modern library for 3D data processing. _arXiv:1801.09847_ , 2018. * Liu et al. [2022] Zhuang Liu, Hanzi Mao, Chao-Yuan Wu, Christoph Feichtenhofer, Trevor Darrell, and Saining Xie. A convnet for the 2020s. In _CVPR_ , 2022. * Lin et al. [2017] Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. Focal loss for dense object detection. In _ICCV_ , 2017. ## Appendix A Additional Experiments ### A.1 Detailed Performance Gain on the Waymo Validation Set (Level 2) In this subsection, we present detailed performance comparisons over three baselines [16, 17, 1] on the Waymo validation set (level-2) for different distance ranges and three categories. We show the percentage improvement brought forth by our approach on various cases (baselines and distance ranges), demonstrating that our approach helps various methods to improve their performance. Table A1: Performance gain over three baselines on Waymo validation (level-2) for different distance ranges. Evaluated on vehicle. Our approach largely improves the performance on distant objects. Methods | All Range | Range [0, 30) | Range [30, 50) | Range [50, +inf) ---|---|---|---|--- | mAP | mAPH | MAP | mAPH | mAP | mAPH | mAP | mAPH SECOND [16] | 59.42 | 57.92 | 86.39 | 84.92 | 58.98 | 56.92 | 30.27 | 28.81 SECOND (ours) | 63.49 +4.07 | 62.17 +4.25 | 88.72 +2.33 | 87.50 +2.58 | 63.39 +4.41 | 61.72 +4.80 | 35.63 +5.36 | 34.27 +5.46 CenterPoint-Pillar [17, 1] | 64.12 | 63.54 | 88.35 | 87.77 | 63.88 | 63.29 | 36.29 | 35.62 CenterPoint-Pillar(ours) | 68.11 +3.99 | 67.58 +4.04 | 90.01 +1.74 | 89.51 +1.55 | 68.28 +4.15 | 67.72 +4.2 | 41.92 +6.27 | 41.26 +6.30 CenterPoint-Voxel [1] | 65.52 | 65.01 | 89.47 | 88.97 | 66.02 | 65.47 | 37.46 | 36.89 CenterPoint-Voxel(Ours) | 68.21 +2.69 | 67.68 +2.67 | 90.23 +0.76 | 89.76 +0.79 | 68.70 +2.68 | 68.11 +2.64 | 41.81 +4.35 | 41.13 +4.24 Table A2: Performance gain over three baselines on Waymo validation (level-2) for different distance ranges. Evaluated on pedestrian. Our approach largely improves the performance on distant objects. Methods | All Range | Range [0, 30) | Range [30, 50) | Range [50, +inf) ---|---|---|---|--- | mAP | mAPH | MAP | mAPH | mAP | mAPH | mAP | mAPH SECOND [16] | 47.99 | 38.50 | 56.58 | 47.09 | 48.00 | 37.26 | 32.53 | 24.03 SECOND (ours) | 51.12 +3.13 | 41.92 +3.42 | 59.32 +3.36 | 50.45 +2.77 | 51.17 +3.42 | 40.68 +3.19 | 36.39 +4.30 | 27.46 +3.99 CenterPoint-Pillar [17, 1] | 61.14 | 52.13 | 64.92 | 56.29 | 65.13 | 55.77 | 49.29 | 39.83 CenterPoint-Pillar(ours) | 66.41 +5.27 | 58.06 +5.93 | 70.36 +5.44 | 62.54 +6.25 | 69.25 +4.12 | 67.72 +5.02 | 41.92 +6.13 | 41.26 +6.17 CenterPoint-Voxel [1] | 66.30 | 61.09 | 74.77 | 70.48 | 66.68 | 60.69 | 50.64 | 43.49 CenterPoint-Voxel(Ours) | 70.07 +3.77 | 64.72 +3.63 | 77.98 +3.21 | 73.62 +3.14 | 70.26 +3.58 | 64.16 +3.47 | 55.31 +4.67 | 47.80 +4.31 Table A3: Performance gain over three baselines on Waymo validation (level-2) for different distance ranges. Evaluated on cyclist. Our approach largely improves the performance on distant objects. 
Methods | All Range | Range [0, 30) | Range [30, 50) | Range [50, +inf) ---|---|---|---|--- | mAP | mAPH | MAP | mAPH | mAP | mAPH | mAP | mAPH SECOND [16] | 55.19 | 52.51 | 68.77 | 66.50 | 50.00 | 46.69 | 35.11 | 31.90 SECOND (ours) | 57.03 +1.84 | 54.64 +2.13 | 70.78 +2.01 | 68.79 +2.29 | 51.36 +1.36 | 48.43 +1.74 | 36.93 +1.82 | 34.13 +2.23 CenterPoint-Pillar [17, 1] | 59.76 | 58.14 | 69.26 | 67.58 | 55.87 | 54.59 | 43.16 | 41.03 CenterPoint-Pillar(ours) | 65.28 +5.52 | 63.74 +5.60 | 73.34 +4.08 | 71.74 +4.16 | 62.62 +6.75 | 61.46 +6.87 | 50.93 +7.77 | 48.96 +7.93 CenterPoint-Voxel [1] | 66.32 | 65.24 | 78.16 | 77.13 | 60.02 | 58.82 | 48.01 | 46.66 CenterPoint-Voxel(Ours) | 69.34 +2.99 | 68.23 +2.99 | 79.93 +1.77 | 78.92 +1.79 | 63.85 +3.83 | 62.62 +3.80 | 51.22 +3.21 | 49.90 +3.24 ### A.2 Performance Comparison When Models Trained on Full Waymo _Train_ set (Level 2) We follow [1] to train our model with full _train_ set of Waymo Open Dataset and test the trained model on Waymo _val_ set and _test_ set. Table A4 shows that our method surpasses the baseline method. Note that we only train our model in a short training schedule (1x means training 12 epochs for the first stage) due to the limited computational resources, but we still achieve better results. Table A4: Performance Comparison on the Waymo _val_ set and _test_ Set. (Level 2) Methods | Split | Schedule | Vehicle-mAP | Pedestrian-mAP | Cyclist-mAP ---|---|---|---|---|--- CenterPoint-Voxel | Val | 3x | 67.9 | 65.6 | 68.6 CenterPoint-Voxel(Ours) | Val | 1x | 68.4 | 71.2 | 71.3 CenterPoint-Voxel | Test | 3x | 71.9 | 67.0 | 68.2 CenterPoint-Voxel(Ours) | Test | 1x | 72.6 | 72.1 | 70.3 ## Appendix B Additional Comparison Results with Feature Visualization (a) Detection results and sparse 3D features of baseline [16] (b) Detection results and densified 3D features of our method (c) Detection results and sparse 3D features of baseline [17] (d) Detection results and densified 3D features of our method (e) Detection results and sparse 3D features of baseline [1] (f) Detection results and densified 3D features of our method Figure A1: Visual comparison of 3D object detection results and 3D features produced by (a,c,e) baselines [16, 17, 1] and (b,d,f) our methods (our approach + corresponding baseline), where our approach successfully densifies the object features and helps the baseline methods produce more accurate detection results than three baselines. Note that red boxes show the detection results and green boxes show the ground truths. Orange boxes highlight the improvement brought by our approach. (a) Detection results and sparse 3D features of baseline [16] (b) Detection results and densified 3D features of our method (c) Detection results and sparse 3D features of baseline [17] (d) Detection results and densified 3D features of our method (e) Detection results and sparse 3D features of baseline [1] (f) Detection results and densified 3D features of our method Figure A2: Visual comparison of 3D object detection results and 3D features produced by (a,c,e) baselines [16, 17, 1] and (b,d,f) our methods (our approach + corresponding baseline), where our approach successfully densifies the object features and helps the baseline methods produce more accurate detection results than three baselines. Note that red boxes show the detection results and green boxes show the ground truths. Orange boxes highlight the improvement brought by our approach. 
(a) Detection results and sparse 3D features of baseline [16] (b) Detection results and densified 3D features of our method (c) Detection results and sparse 3D features of baseline [17] (d) Detection results and densified 3D features of our method (e) Detection results and sparse 3D features of baseline [1] (f) Detection results and densified 3D features of our method Figure A3: Visual comparison of 3D object detection results and 3D features produced by (a,c,e) baselines [16, 17, 1] and (b,d,f) our methods (our approach + corresponding baseline), where our approach successfully densifies the object features and helps the baseline methods produce more accurate detection results than three baselines. Note that red boxes show the detection results and green boxes show the ground truths. Orange boxes highlight the improvement brought by our approach.
* • The map $q_{10}$ is an isomorphism as there is an isomorphism of complexes ${}_{\mathcal{P}_{\\{t_{1}\\},[0,s_{0}]}}E_{1,\mathcal{P}_{\\{s_{1}\\},\\{s_{0}\\}}}^{\bullet,k_{1}+\ell_{1}-s_{0}^{\prime}}\rightarrow_{\mathcal{P}_{\\{t_{1}\\},\\{s_{0}\\}}}E_{1,\mathcal{P}_{\\{s_{1}\\},\\{s_{0}\\}}}^{\bullet,k_{1}+\ell_{1}-s_{0}^{\prime}}.$ This isomorphism uses the following fact: for each $I_{2}\cup(I_{1}\setminus I_{0})\subseteq I^{\prime}\subseteq I_{1}$ and $I_{4}\cup(I_{1}\setminus I_{0})\subseteq I\subseteq I_{1}$ satisfying $\\#I^{\prime}\cap(I_{0}\setminus I_{2})\leq s_{0}$, $\\#I\cap(I_{2}\setminus I_{4})=s_{1}$ and $\\#I\cap(I_{0}\setminus I_{2})=s_{0}$, we have $I_{2}\cup I\subseteq I^{\prime}$ if and only if $I_{2}\cup I=I^{\prime}$ and $\\#I^{\prime}\cap(I_{0}\setminus I_{2})=s_{0}$. It follows from Lemma 4.19 that ${}_{I_{2},I_{1}}E_{1,\mathcal{P}_{\\{s_{1}\\},[s_{0},t_{0}]}}^{\bullet,\bullet}$ is supported in degree $(-\ell_{1},\bullet)$ and thus (4.26) $M^{k_{1}}_{I_{2},I_{1},\mathcal{P}_{\\{s_{1}\\},[s_{0},t_{0}]}}\cong~{}_{I_{2},I_{1}}E_{2,\mathcal{P}_{\\{s_{1}\\},[s_{0},t_{0}]}}^{-\ell_{1},k_{1}+\ell_{1}}\cong H^{k_{1}+\ell_{1}-\\#I_{1}}_{S_{1}}$ for each $k\in\mathbb{Z}$. Similarly, we clearly have ${}_{\mathcal{P}_{\\{t_{1}\\},[0,s_{0}]}}E_{1,\mathcal{P}_{\\{s_{1}\\},[s_{0},t_{0}]}}^{\bullet,\bullet}\xrightarrow{\sim}_{\mathcal{P}_{\\{t_{1}\\},[0,s_{0}]}}E_{1,\mathcal{P}_{\\{s_{1}\\},\\{s_{0}\\}}}^{\bullet,\bullet}$ is supported in degree $(-\ell_{1}+s_{0}^{\prime},\bullet)$, which implies that (4.27) $M^{k_{1}}_{\mathcal{P}_{\\{t_{1}\\},[0,s_{0}]},\mathcal{P}_{\\{s_{1}\\},[s_{0},t_{0}]}}\cong~{}_{\mathcal{P}_{\\{t_{1}\\},[0,s_{0}]}}E_{2,\mathcal{P}_{\\{s_{1}\\},[s_{0},t_{0}]}}^{-\ell_{1}+s_{0}^{\prime},k_{1}+\ell_{1}-s_{0}^{\prime}}\xrightarrow{\sim}~{}_{\mathcal{P}_{\\{t_{1}\\},[0,s_{0}]}}E_{2,\mathcal{P}_{\\{s_{1}\\},\\{s_{0}\\}}}^{-\ell_{1}+s_{0}^{\prime},k_{1}+\ell_{1}-s_{0}^{\prime}}\cong H^{k_{1}+\ell_{1}-\\#I_{1}}_{S_{2}}.$ ###### Lemma 4.26. The truncation $\mathbf{C}_{\mathcal{P}_{\\{t_{1}\\},[0,s_{0}]}}(\lambda)\rightarrow\mathbf{C}_{I_{2},I_{1}}(\lambda)$ induces a canonical map (4.28) $_{I_{2},I_{1}}E_{2,\mathcal{P}_{\\{s_{1}\\},[s_{0},t_{0}]}}^{-\ell_{1},k_{1}+\ell_{1}}\rightarrow_{\mathcal{P}_{\\{t_{1}\\},[0,s_{0}]}}E_{2,\mathcal{P}_{\\{s_{1}\\},[s_{0},t_{0}]}}^{-\ell_{1}+s_{0}^{\prime},k_{1}+\ell_{1}-s_{0}^{\prime}}$ whose composition with the middle isomorphism of (4.27) gives $q_{7}$. Under the canonical isomorphisms (4.26) and (4.27), (4.28) is given by $\bigoplus_{I\in S_{1},I^{\prime}\in S_{2}}\mathrm{Res}_{n,I,I^{\prime}}^{k_{1}+\ell_{1}-\\#I_{1}}:H^{k_{1}+\ell_{1}-\\#I_{1}}_{S_{1}}\rightarrow H^{k_{1}+\ell_{1}-\\#I_{1}}_{S_{2}}.$ ###### Proof. The truncation $\mathbf{C}_{\mathcal{P}_{\\{t_{1}\\},[0,s_{0}]}}(\lambda)\rightarrow\mathbf{C}_{I_{2},I_{1}}(\lambda)$ induces a canonical map $M^{k_{1}}_{I_{2},I_{1},\mathcal{P}_{\\{s_{1}\\},[s_{0},t_{0}]}}\rightarrow M^{k_{1}}_{\mathcal{P}_{\\{t_{1}\\},[0,s_{0}]},\mathcal{P}_{\\{s_{1}\\},[s_{0},t_{0}]}}$ which together with (4.26) and (4.27) gives the canonical map (4.28). Let $I_{4}\cup(I_{1}\setminus I_{2})\subseteq I\subseteq I_{1}$ be a set satisfying $\\#I\cap(I_{2}\setminus I_{4})=s_{1}$ and $I_{4}\cup(I_{1}\setminus I_{0})\subseteq I^{\prime}\subseteq I_{1}\cap I$ be a set satisfying $I^{\prime}\cap(I_{2}\setminus I_{4})=I\cap(I_{2}\setminus I_{4})$ and $\\#I^{\prime}\cap(I_{0}\setminus I_{2})=s_{0}$. 
We set $S_{I}\stackrel{{\scriptstyle\textrm{\tiny{def}}}}{{=}}\\{I^{\prime}\in S_{2}\mid I^{\prime}\subseteq I\\}$. Then there exists a tuple $\mathcal{P}$ (of the kind introduced at the beginning of Section 4.2) such that $\mathbf{C}_{\mathcal{P}}(\lambda)\rightarrow\mathbf{C}_{\mathcal{P}_{\\{s_{1}\\},[s_{0},t_{0}]}}(\lambda)$ is a minimal truncation with the degree $-\ell_{1}$ term of $\mathbf{C}_{\mathcal{P}}(\lambda)$ being $i_{n,I}^{\rm{an}}(\lambda)[-\ell_{1}]$, and moreover we also have a canonical truncation map $\mathbf{C}_{\mathcal{P}}(\lambda)\rightarrow\mathbf{C}_{I^{\prime},I}(\lambda)$. The diagram $\mathbf{C}_{\mathcal{P}_{\\{s_{1}\\},[s_{0},t_{0}]}}(\lambda)\leftarrow\mathbf{C}_{\mathcal{P}}(\lambda)\rightarrow\mathbf{C}_{I^{\prime},I}(\lambda)$ induces a commutative diagram

(4.29) $\begin{array}{ccc}
M^{k_{1}}_{I_{2},I_{1},\mathcal{P}_{\\{s_{1}\\},[s_{0},t_{0}]}}&\longrightarrow&M^{k_{1}}_{\mathcal{P}_{\\{t_{1}\\},[0,s_{0}]},\mathcal{P}_{\\{s_{1}\\},[s_{0},t_{0}]}}\\
\big\uparrow& &\big\uparrow\\
M^{k_{1}}_{I_{2},I_{1},\mathcal{P}}&\longrightarrow&M^{k_{1}}_{\mathcal{P}_{\\{t_{1}\\},[0,s_{0}]},\mathcal{P}}\\
\big\downarrow& &\big\downarrow\\
M^{k_{1}}_{I_{2},I_{1},I^{\prime},I}&\longrightarrow&M^{k_{1}}_{\mathcal{P}_{\\{t_{1}\\},[0,s_{0}]},I^{\prime},I}
\end{array}$

Then we observe that ${}_{I_{2},I_{1}}E_{1,\mathcal{P}_{\\{s_{1}\\},[s_{0},t_{0}]}}^{\bullet,\bullet}\leftarrow~{}_{I_{2},I_{1}}E_{1,\mathcal{P}}^{\bullet,\bullet}\xrightarrow{\sim}_{I_{2},I_{1}}E_{1,I^{\prime},I}^{\bullet,\bullet}$ are supported in degree $(-\ell_{1},\bullet)$, which implies that

(4.30) $_{I_{2},I_{1}}E_{2,\mathcal{P}_{\\{s_{1}\\},[s_{0},t_{0}]}}^{-\ell_{1},k_{1}+\ell_{1}}\leftarrow~{}_{I_{2},I_{1}}E_{2,\mathcal{P}}^{-\ell_{1},k_{1}+\ell_{1}}\xrightarrow{\sim}_{I_{2},I_{1}}E_{2,I^{\prime},I}^{-\ell_{1},k_{1}+\ell_{1}}\cong H^{k_{1}+\ell_{1}-\\#I_{1}}_{I}.$

We also observe that ${}_{\mathcal{P}_{\\{t_{1}\\},[0,s_{0}]}}E_{1,\mathcal{P}_{\\{s_{1}\\},[s_{0},t_{0}]}}^{\bullet,\bullet}\leftarrow~{}_{\mathcal{P}_{\\{t_{1}\\},[0,s_{0}]}}E_{1,\mathcal{P}}^{\bullet,\bullet}\rightarrow~{}_{\mathcal{P}_{\\{t_{1}\\},[0,s_{0}]}}E_{1,I^{\prime},I}^{\bullet,\bullet}$ are supported in degree $(-\ell_{1}+s_{0}^{\prime},\bullet)$, which implies that

(4.31) $_{\mathcal{P}_{\\{t_{1}\\},[0,s_{0}]}}E_{2,\mathcal{P}_{\\{s_{1}\\},[s_{0},t_{0}]}}^{-\ell_{1}+s_{0}^{\prime},k_{1}+\ell_{1}-s_{0}^{\prime}}\leftarrow~{}_{\mathcal{P}_{\\{t_{1}\\},[0,s_{0}]}}E_{2,\mathcal{P}}^{-\ell_{1}+s_{0}^{\prime},k_{1}+\ell_{1}-s_{0}^{\prime}}\rightarrow~{}_{\mathcal{P}_{\\{t_{1}\\},[0,s_{0}]}}E_{2,I^{\prime},I}^{-\ell_{1}+s_{0}^{\prime},k_{1}+\ell_{1}-s_{0}^{\prime}}\cong H^{k_{1}+\ell_{1}-\\#I_{1}}_{I^{\prime}}.$

We combine (4.29) with (4.30) as well as (4.31), and obtain the following diagram (4.32)
$\begin{array}{ccccccc}
H^{k_{1}+\ell_{1}-\\#I_{1}}_{S_{1}}&\longleftarrow&{}_{I_{2},I_{1}}E_{2,\mathcal{P}_{\\{s_{1}\\},[s_{0},t_{0}]}}^{-\ell_{1},k_{1}+\ell_{1}}&\longrightarrow&{}_{\mathcal{P}_{\\{t_{1}\\},[0,s_{0}]}}E_{2,\mathcal{P}_{\\{s_{1}\\},[s_{0},t_{0}]}}^{-\ell_{1}+s_{0}^{\prime},k_{1}+\ell_{1}-s_{0}^{\prime}}&\longrightarrow&H^{k_{1}+\ell_{1}-\\#I_{1}}_{S_{2}}\\
& &\big\uparrow& &\big\uparrow& &\\
H^{k_{1}+\ell_{1}-\\#I_{1}}_{I}&\longleftarrow&{}_{I_{2},I_{1}}E_{2,\mathcal{P}}^{-\ell_{1},k_{1}+\ell_{1}}&\longrightarrow&{}_{\mathcal{P}_{\\{t_{1}\\},[0,s_{0}]}}E_{2,\mathcal{P}}^{-\ell_{1}+s_{0}^{\prime},k_{1}+\ell_{1}-s_{0}^{\prime}}&\longrightarrow&H^{k_{1}+\ell_{1}-\\#I_{1}}_{S_{I}}\\
& &\big\downarrow& &\big\downarrow& &\\
H^{k_{1}+\ell_{1}-\\#I_{1}}_{I}&\longleftarrow&{}_{I_{2},I_{1}}E_{2,I^{\prime},I}^{-\ell_{1},k_{1}+\ell_{1}}&\longrightarrow&{}_{\mathcal{P}_{\\{t_{1}\\},[0,s_{0}]}}E_{2,I^{\prime},I}^{-\ell_{1}+s_{0}^{\prime},k_{1}+\ell_{1}-s_{0}^{\prime}}&\longrightarrow&H^{k_{1}+\ell_{1}-\\#I_{1}}_{I^{\prime}}
\end{array}$

with all horizontal maps towards group cohomologies being isomorphisms. The isomorphism of first pages ${}_{\mathcal{P}_{\\{t_{1}\\},[0,s_{0}]}}E_{1,I^{\prime},I}^{\bullet,\bullet}\xrightarrow{\sim}_{I_{2},I_{2}\cup I^{\prime}}E_{1,I^{\prime},I}^{\bullet,\bullet}$ (which are both supported in degree $(-\ell_{1}+s_{0}^{\prime},\bullet)$) induces an isomorphism (4.33) $_{\mathcal{P}_{\\{t_{1}\\},[0,s_{0}]}}E_{2,I^{\prime},I}^{-\ell_{1}+s_{0}^{\prime},k_{1}+\ell_{1}-s_{0}^{\prime}}\xrightarrow{\sim}_{I_{2},I_{2}\cup I^{\prime}}E_{2,I^{\prime},I}^{-\ell_{1}+s_{0}^{\prime},k_{1}+\ell_{1}-s_{0}^{\prime}}\cong H^{k_{1}+\ell_{1}-\\#I_{1}}_{I^{\prime}}.$ Combining (4.32) with (4.33), it suffices to show that the canonical map $H^{k_{1}+\ell_{1}-\\#I_{1}}_{I}\cong~{}_{I_{2},I_{1}}E_{2,I^{\prime},I}^{-\ell_{1},k_{1}+\ell_{1}}\rightarrow_{I_{2},I_{2}\cup I^{\prime}}E_{2,I^{\prime},I}^{-\ell_{1}+s_{0}^{\prime},k_{1}+\ell_{1}-s_{0}^{\prime}}\cong H^{k_{1}+\ell_{1}-\\#I_{1}}_{I^{\prime}}$ is given by $\mathrm{Res}_{n,I,I^{\prime}}^{k_{1}+\ell_{1}-\\#I_{1}}$.
Now we choose a sequence of subsets $I^{\prime}=I[s_{0}]\subsetneq I[s_{0}+1]\subsetneq\cdots\subsetneq I[t_{0}]=I$ which necessarily satisfies $\\#I[s]\cap(I_{2}\setminus I_{4})=s$ for each $s\in[s_{0},t_{0}]$, and induces another sequence $I_{2}\cup I^{\prime}=I_{2}\cup I[s_{0}]\subsetneq I_{2}\cup I[s_{0}+1]\subsetneq\cdots\subsetneq I_{2}\cup I[t_{0}]=I_{2}\cup I=I_{1}.$ Using induction on $s\in[s_{0},t_{0}]$ with $s^{\prime}\stackrel{{\scriptstyle\textrm{\tiny{def}}}}{{=}}t_{0}-s$ and the fact that $\mathrm{Res}_{n,I,I^{\prime}}^{k_{1}+\ell_{1}-\\#I_{1}}=\mathrm{Res}_{n,I[s_{0}+1],I[s_{0}]}^{k_{1}+\ell_{1}-\\#I_{1}}\circ\cdots\circ\mathrm{Res}_{n,I[t_{0}],I[t_{0}-1]}^{k_{1}+\ell_{1}-\\#I_{1}},$ it suffices to show that the composition of $H^{k_{1}+\ell_{1}-\\#I_{1}}_{I[s]}\cong_{I_{2},I_{2}\cup I[s]}E_{2,I^{\prime},I}^{-\ell_{1}+s^{\prime},k_{1}+\ell_{1}-s^{\prime}}\rightarrow_{I_{2},I_{2}\cup I[s-1]}E_{2,I^{\prime},I}^{-\ell_{1}+s^{\prime}+1,k_{1}+\ell_{1}-s^{\prime}-1}\cong H^{k_{1}+\ell_{1}-\\#I_{1}}_{I[s-1]}$ is given by $\mathrm{Res}_{n,I[s],I[s-1]}^{k_{1}+\ell_{1}-\\#I_{1}}$ for each $s\in[s_{0}+1,t_{0}]$. We finish the proof by Lemma 4.18 (with $\theta=\theta^{\prime}$ there) and the following commutative diagram

$\begin{array}{ccc}
{}_{I_{2},I_{2}\cup I[s]}E_{2,I^{\prime},I}^{-\ell_{1}+s^{\prime},k_{1}+\ell_{1}-s^{\prime}}&\longrightarrow&{}_{I_{2},I_{2}\cup I[s-1]}E_{2,I^{\prime},I}^{-\ell_{1}+s^{\prime}+1,k_{1}+\ell_{1}-s^{\prime}-1}\\
\big\downarrow& &\big\downarrow\\
{}_{I_{2}\cup I[s-1],I_{2}\cup I[s]}E_{2,I[s-1],I[s]}^{-\ell_{1}+s^{\prime},k_{1}+\ell_{1}-s^{\prime}}&\longrightarrow&{}_{I_{2}\cup I[s-1],I_{2}\cup I[s-1]}E_{2,I[s-1],I[s]}^{-\ell_{1}+s^{\prime}+1,k_{1}+\ell_{1}-s^{\prime}-1}
\end{array}$

with vertical maps being isomorphisms. ∎

Let $v_{0}\subseteq\mathcal{B}_{n,I_{2}\cup(I_{1}\setminus I_{0})}$ and $v_{1}\subseteq\mathcal{B}_{n,I_{4}\cup(I_{1}\setminus I_{2})}$ be subsets. Let $\Omega_{0}$ (resp. $\Omega_{1}$) be a set of tuples $\Theta^{\prime}=(v_{0},I^{\prime},\underline{k}^{\prime},\underline{\lambda}^{\prime})$ with bidegree $(-\ell_{0},k_{0}+\ell_{0})$ (resp. $\Theta^{\prime\prime}=(v_{1},I^{\prime\prime},\underline{k}^{\prime\prime},\underline{\lambda}^{\prime\prime})$ with bidegree $(-\ell_{1},k_{1}+\ell_{1})$) that satisfy $I_{2}\cup(I_{1}\setminus I_{0})\subseteq I^{\prime}\subseteq I_{1}$ (resp. that satisfy $I_{4}\cup(I_{1}\setminus I_{2})\subseteq I^{\prime\prime}\subseteq I_{1}$). In particular, we observe that $\Theta^{\prime}\in\Omega_{0}$ (resp. $\Theta^{\prime\prime}\in\Omega_{1}$) forces $I^{\prime}\in S_{0}$ (resp. forces $I^{\prime\prime}\in S_{1}$). As usual, we can define $x_{\Omega_{0}}=\sum_{\Theta\in\Omega_{0}}\varepsilon(\Theta)x_{\Theta}\in H^{k_{0}+\ell_{0}-\\#I_{1}}_{S_{0}}\cong_{I_{0},I_{1}}E_{1,I_{2},I_{1}}^{-\ell_{0},k_{0}+\ell_{0}}$ and $x_{\Omega_{1}}=\sum_{\Theta\in\Omega_{1}}\varepsilon(\Theta)x_{\Theta}\in H^{k_{1}+\ell_{1}-\\#I_{1}}_{S_{1}}\cong_{I_{2},I_{1}}E_{1,I_{4},I_{1}}^{-\ell_{1},k_{1}+\ell_{1}}$ with $\varepsilon(\Theta)$ defined in Definition 2.13.
We define $x_{\Omega_{0}}\cup x_{\Omega_{1}}\in H^{k_{0}+k_{1}+\ell_{2}-\\#I_{1}}_{S_{2}}\hookrightarrow_{I_{0},I_{1}}E_{1,I_{4},I_{1}}^{-\ell_{2},k_{0}+k_{1}+\ell_{2}}$ as the image of $(x_{\Omega_{0}},x_{\Omega_{1}})$ under the composition $H^{k_{0}+\ell_{0}-\\#I_{1}}_{S_{0}}\otimes H^{k_{1}+\ell_{1}-\\#I_{1}}_{S_{1}}\rightarrow H^{k_{0}+\ell_{0}-\\#I_{1}}_{S_{2}}\otimes H^{k_{1}+\ell_{1}-\\#I_{1}}_{S_{2}}\xrightarrow{\cup}H^{k_{0}+k_{1}+\ell_{2}-\\#I_{1}}_{S_{2}}.$ The following is the main outcome of diagram (4.23). ###### Lemma 4.27. Assume that ${}_{I_{0},I_{1}}d_{1,I_{2},I_{1}}^{-\ell_{0},k_{0}+\ell_{0}}(x_{0})=0$, ${}_{I_{2},I_{1}}d_{1,I_{4},I_{1}}^{-\ell_{1},k_{1}+\ell_{1}}(x_{\Omega_{1}})=0$ and ${}_{I_{0},I_{1}}d_{1,I_{4},I_{1}}^{-\ell_{2},k_{0}+k_{1}+\ell_{2}}(x_{\Omega_{0}}\cup x_{\Omega_{1}})=0$. If we abuse $x_{\Omega_{0}}$, $x_{\Omega_{1}}$ and $x_{\Omega_{0}}\cup x_{\Omega_{1}}$ for their images in the second page of the corresponding spectral sequences, then $x_{\Omega_{0}}\cup x_{\Omega_{1}}$ is the image of $(x_{\Omega_{0}},x_{\Omega_{1}})$ under (4.34) $_{I_{0},I_{1}}E_{2,I_{2},I_{1}}^{-\ell_{0},k_{0}+\ell_{0}}\otimes_{I_{2},I_{1}}E_{2,I_{4},I_{1}}^{-\ell_{1},k_{1}+\ell_{1}}\xrightarrow{\cup}_{I_{0},I_{1}}E_{2,I_{4},I_{1}}^{-\ell_{2},k_{0}+k_{1}+\ell_{2}}.$ ###### Proof. It suffices to observe that $x_{\Omega_{0}}$ (resp. $x_{\Omega_{1}}$, resp. $x_{\Omega_{0}}\cup x_{\Omega_{1}}$) can be naturally understood as elements of each term of the first (resp. second, resp. third) column of diagram (4.23). ∎ We assume from now on the following condition ###### Condition 4.28. Exactly one of the following holds * • $k_{0}=2\\#I_{0}-2\\#I_{2}$ and $k_{1}=2\\#I_{2}-2\\#I_{4}$; * • $k_{0}=2\\#I_{0}-2\\#I_{2}+1$ and $k_{1}=2\\#I_{2}-2\\#I_{4}$; * • $k_{0}=2\\#I_{0}-2\\#I_{2}$ and $k_{1}=2\\#I_{2}-2\\#I_{4}+1$. Note that we may always assume that $M^{k_{0}}_{I_{0},I_{1},I_{2},I_{1}}\neq 0$ (resp. $M^{k_{1}}_{I_{2},I_{1},I_{4},I_{1}}\neq 0$) so that the map (4.35) $M^{k_{0}}_{I_{0},I_{1},I_{2},I_{1}}\otimes M^{k_{1}}_{I_{2},I_{3},I_{4},I_{1}}\xrightarrow{\cup}M^{k_{0}+k_{1}}_{I_{0},I_{1},I_{4},I_{1}}$ is interesting, which together with first part of Theorem 4.22 implies that $k_{0}\geq 2\\#I_{0}-2\\#I_{2}$ (resp. $k_{1}\geq 2\\#I_{2}-2\\#I_{4}$). In other words, Condition 4.28 is equivalent to saying that $k_{0}\leq 2\\#I_{0}-2\\#I_{2}+1$, $k_{1}\leq 2\\#I_{2}-2\\#I_{4}+1$ and $k_{0}+k_{1}\leq 2\\#I_{0}-2\\#I_{4}+1$. Given $I_{4}\subseteq I_{2}\subseteq I_{0}\subseteq I_{1}$ as above, we write $I_{2}^{\prime}\stackrel{{\scriptstyle\textrm{\tiny{def}}}}{{=}}I_{4}\cup(I_{0}\setminus I_{2})$. The following result makes crucial use of the construction in Section 2.6. ###### Proposition 4.29. Assume that Condition 4.28 holds and $\max\\{i^{\prime}\mid i^{\prime}\in I_{0}\setminus I_{2}\\}<\min\\{i^{\prime}\mid i^{\prime}\in I_{2}\setminus I_{4}\\}$. 1. (i) For each $\Omega_{0}\in\Psi_{I_{2}\cup(I_{1}\setminus I_{0}),I_{1}}^{-\ell_{0},\ell_{0}+k_{0}-\\#I_{1}}$ and $\Omega_{1}\in\Psi_{I_{4}\cup(I_{1}\setminus I_{2}),I_{1}}^{-\ell_{1},\ell_{1}+k_{1}-\\#I_{1}}$, the image of $(x_{\Omega_{0}},x_{\Omega_{1}})$ under (4.34) is $x_{\Omega_{2}}$ for a $\Omega_{2}\in\Psi_{I_{4}\cup(I_{1}\setminus I_{0}),I_{1}}^{-\ell_{2},\ell_{2}+k_{0}+k_{1}-\\#I_{1}}$ uniquely determined by the pair $(\Omega_{0},\Omega_{1})$. 
Moreover, the map (4.36) $\Psi_{I_{2}\cup(I_{1}\setminus I_{0}),I_{1}}^{-\ell_{0},\ell_{0}+k_{0}-\\#I_{1}}\times\Psi_{I_{4}\cup(I_{1}\setminus I_{2}),I_{1}}^{-\ell_{1},\ell_{1}+k_{1}-\\#I_{1}}\rightarrow\Psi_{I_{4}\cup(I_{1}\setminus I_{0}),I_{1}}^{-\ell_{2},\ell_{2}+k_{0}+k_{1}-\\#I_{1}}:~{}(\Omega_{0},\Omega_{1})\mapsto\Omega_{2}$ is injective. 2. (ii) We have canonical isomorphisms ${}_{I_{0},I_{1}}E_{2,I_{2},I_{1}}^{-\ell_{0},k_{0}+\ell_{0}}\cong_{I_{2}^{\prime},I_{1}}E_{2,I_{4},I_{1}}^{-\ell_{0},k_{0}+\ell_{0}}$ and ${}_{I_{2},I_{1}}E_{2,I_{4},I_{1}}^{-\ell_{1},k_{1}+\ell_{1}}\cong_{I_{0},I_{1}}E_{2,I_{2}^{\prime},I_{1}}^{-\ell_{1},k_{1}+\ell_{1}}$. Using the notation in item (i) above, the map ${}_{I_{0},I_{1}}E_{2,I_{2}^{\prime},I_{1}}^{-\ell_{1},k_{1}+\ell_{1}}\otimes_{I_{2}^{\prime},I_{1}}E_{2,I_{4},I_{1}}^{-\ell_{0},k_{0}+\ell_{0}}\xrightarrow{\cup}_{I_{0},I_{1}}E_{2,I_{4},I_{1}}^{-\ell_{2},k_{0}+k_{1}+\ell_{2}}$ sends $(x_{\Omega_{1}},x_{\Omega_{0}})$ to $(-1)^{k_{0}k_{1}}x_{\Omega_{2}}$. 3. (iii) The map (4.34) is injective for each pair $(\ell_{0},\ell_{1})$ and thus (4.35) is injective and compatible with the canonical filtration on both the source and the target. If furthermore there exists $i\in\Delta_{n}$ such that $\max\\{i^{\prime}\mid i^{\prime}\in I_{0}\setminus I_{2}\\}<i<\min\\{i^{\prime}\mid i^{\prime}\in I_{2}\setminus I_{4}\\}$, then (4.34) is an isomorphism for each pair $(\ell_{0},\ell_{1})$ and thus (4.35) is an isomorphism. ###### Proof. It is harmless to assume $I_{4}\subsetneq I_{2}\subsetneq I_{0}$ throughout the proof. The main idea of the proof of item (i) can be divided into the following steps. * • Construct a set of $\Omega_{0}^{\prime}$ of tuples with bidegree $(-\ell_{0},\ell_{0}+k_{0}-\\#I_{1})$ and prove that $x_{\Omega_{0}^{\prime}}$ induce the same element as $x_{\Omega_{0}}$ in ${}_{I_{0},I_{1}}E_{2,I_{2},I_{1}}^{-\ell_{0},k_{0}+\ell_{0}}$. * • Check that $\Omega_{0}^{\prime}$ and $\Omega_{1}$ satisfies the assumption of Lemma 4.27, so that $x_{\Omega_{0}^{\prime}}\cup x_{\Omega_{1}}$ is defined and has the form $x_{\Omega_{2}^{\prime}}$ with $\Omega_{2}^{\prime}$ a set of tuples with bidegree $(-\ell_{2},\ell_{2}+k_{0}+k_{1}-\\#I_{1})$ which induces the same element as $x_{\Omega_{2}}$ in ${}_{I_{0},I_{1}}E_{2,I_{4},I_{1}}^{-\ell_{2},k_{0}+k_{1}+\ell_{2}}$ for some $\Omega_{2}\in\Psi_{I_{4}\cup(I_{1}\setminus I_{0}),I_{1}}^{-\ell_{2},\ell_{2}+k_{0}+k_{1}-\\#I_{1}}$. * • Check that the map $(\Omega_{0},\Omega_{1})\mapsto\Omega_{2}$ is an injection. We write $\Theta_{0}=(v,I,\underline{k},\underline{\Lambda})$ (resp. $\Theta_{1}=(v^{\prime},I^{\prime},\underline{k}^{\prime},\underline{\Lambda}^{\prime})$) for the maximal element in $\Omega_{0}$ (resp. $\Omega_{1}$). Note that $\max\\{i^{\prime}\mid i^{\prime}\in I_{0}\setminus I_{2}\\}<i<\min\\{i^{\prime}\mid i^{\prime}\in I_{2}\setminus I_{4}\\}$ implies that $I_{v}\cup I_{v^{\prime}}=\Delta_{n}$ and thus $v\cup v^{\prime}\subseteq\mathcal{B}_{n,\emptyset}$ with $I_{4}\cup(I_{1}\setminus I_{0})\subseteq I\cap I^{\prime}\subseteq I_{v\cup v^{\prime}}$. We set $d_{0}\stackrel{{\scriptstyle\textrm{\tiny{def}}}}{{=}}r_{I}$ and $s_{0}\stackrel{{\scriptstyle\textrm{\tiny{def}}}}{{=}}r_{I_{v}\cap I_{1}}$. 
According to the definition of $\Psi_{I_{2}\cup(I_{1}\setminus I_{0}),I_{1}}^{-\ell_{0},\ell_{0}+k_{0}-\\#I_{1}}$ (based on Lemma 2.16 and Lemma 2.25) we have exactly two possibilities * • We have $\Lambda_{r_{v,I_{1},I}^{s_{0}-1}+1}=\emptyset$ and $s_{0}$ satisfies Condition 2.29, in which case we set $\Omega_{0}^{\prime}\stackrel{{\scriptstyle\textrm{\tiny{def}}}}{{=}}\Omega_{0}^{s_{0},d_{0}}$ which is the $(s_{0},d_{0})$-twist of $\Omega_{0}$ as defined before Condition 2.30. We also set $\Theta_{0}^{\prime}\stackrel{{\scriptstyle\textrm{\tiny{def}}}}{{=}}\Theta_{0}^{s_{0},d_{0}}$. Note that $x_{\Omega_{0}^{\prime}}$ and $x_{\Omega_{0}}$ induces the same element in ${}_{I_{0},I_{1}}E_{2,I_{2},I_{1}}^{-\ell_{0},k_{0}+\ell_{0}}$ by Proposition 2.32. * • We have $I^{d}\cap(I_{2}\cup(I_{1}\setminus I_{0}))=\emptyset$ and $\Lambda_{d}=\\{(2n_{d}-1,\iota_{d})\\}$ for some $\iota_{d}\in S$, for each $r_{v,I_{1},I}^{s_{0}-1}+1\leq d\leq d_{0}$. This is impossible as we can deduce from $\max\\{i^{\prime}\mid i^{\prime}\in I_{0}\setminus I_{2}\\}<i<\min\\{i^{\prime}\mid i^{\prime}\in I_{2}\setminus I_{4}\\}$ that $I_{2}\setminus I_{4}\subseteq I^{d_{0}}$. Our assumption $\max\\{i^{\prime}\mid i^{\prime}\in I_{0}\setminus I_{2}\\}<i<\min\\{i^{\prime}\mid i^{\prime}\in I_{2}\setminus I_{4}\\}$ implies that $I_{0}\setminus I_{2}\subseteq(I^{\prime})^{1}$, which together with Lemma 2.16 and Lemma 2.25 forces $\Lambda^{\prime}_{1}=\emptyset$. We claim that $x_{\Omega_{0}^{\prime}}\cup x_{\Omega_{1}}=x_{\Omega_{2}^{\prime}}$ with $\Omega_{2}^{\prime}=\Omega_{2}^{s_{0},d_{0}}$ the $(s_{0},d_{0})$-twist of some $\Omega_{2}\in\Psi_{I_{4}\cup(I_{1}\setminus I_{0}),I_{1}}^{-\ell_{2},\ell_{2}+k_{0}+k_{1}-\\#I_{1}}$. This follows from the following two observations. * • We have $x_{\Theta_{0}^{\prime}}\cup x_{\Theta_{1}}=x_{\Theta_{2}^{\prime}}$ for some maximally $(s_{0},d_{0})$-twisted $(I_{4}\cup(I_{1}\setminus I_{0}),I_{1})$-atomic tuple $\Theta_{2}^{\prime}$. The tuple $\Theta_{2}^{\prime}=(v\cup v^{\prime},I\cap I^{\prime},\underline{k}^{\prime\prime},\underline{\Lambda}^{\prime\prime})$ is characterized by $\Lambda^{\prime\prime}_{d}=\Lambda_{d-1}$ for each $2\leq d\leq d_{0}$, $\Lambda^{\prime\prime}_{d_{0}}=\emptyset$ and $\Lambda^{\prime\prime}_{d}=\Lambda^{\prime}_{d-d_{0}+1}$ for each $d_{0}+1\leq d\leq r_{I\cap I^{\prime}}$. There clearly exists a maximally $(I_{4}\cup(I_{1}\setminus I_{0}),I_{1})$-atomic tuple $\Theta_{2}$ such that $\Theta_{2}^{\prime}=\Theta_{2}^{s_{0},d_{0}}$. We define $\Omega_{2}^{\prime}$ as the $(s_{0},d_{0})$-twisted equivalence class of $\Theta_{2}^{\prime}$ and $\Omega_{2}\in\Psi_{I_{4}\cup(I_{1}\setminus I_{0}),I_{1}}^{-\ell_{2},\ell_{2}+k_{0}+k_{1}-\\#I_{1}}$ as the equivalence class of $\Theta_{2}$. * • Similar construction as above actually produces a bijection $\Omega_{0}^{\prime}\times\Omega_{1}\rightarrow\Omega_{2}^{\prime}:~{}(\Theta,\Theta^{\prime})\mapsto\Theta^{\prime\prime}$ with $\varepsilon(\Theta^{\prime\prime})=\varepsilon_{\sharp}\varepsilon(\Theta)\varepsilon(\Theta^{\prime})$ for some $\varepsilon_{\sharp}$ depending only on $I_{4}\subseteq I_{2}\subseteq I_{0}\subseteq I_{1}$ and the pair $(\ell_{0},\ell_{1})$. This implies that $x_{\Omega_{0}^{\prime}}\cup x_{\Omega_{1}}=\varepsilon_{\sharp}x_{\Omega_{2}^{\prime}}.$ Now we check that $\Omega_{0}$ and $\Omega_{1}$ can be recovered from $\Omega_{2}$, $I_{4}\subseteq I_{2}\subseteq I_{0}\subseteq I_{1}$ and the pair $(\ell_{0},\ell_{1})$. 
This is because we can recover $d_{0}$ (if exists, namely if $\Omega_{2}$ actually arises from some $(\Omega_{0},\Omega_{1})$), $s_{0}$, $\Omega_{2}^{s_{0},d_{0}}$ and then $\Omega_{0}^{s_{0},d_{0}}$ and $\Omega_{1}$ in order. Item (ii) follows from item (i) by a comparison with the symmetric construction for $I_{4}\subseteq I_{2}^{\prime}\subseteq I_{0}\subseteq I_{1}$ which satisfies the condition $\min\\{i^{\prime}\mid i^{\prime}\in I_{0}\setminus I_{2}^{\prime}\\}>\max\\{i^{\prime}\mid i^{\prime}\in I_{2}^{\prime}\setminus I_{4}\\}$. The first part of item (iii) follows from item (i) and Theorem 4.22, as we obtain an injective map $x_{\Omega_{0}}\otimes x_{\Omega_{1}}\mapsto x_{\Omega_{2}}$ from a basis of the source to a basis of the target. For the second part of item (iii), it suffices to check that the map (4.36) is bijective if there exists $i\in\Delta_{n}$ such that $\max\\{i^{\prime}\mid i^{\prime}\in I_{0}\setminus I_{2}\\}<i<\min\\{i^{\prime}\mid i^{\prime}\in I_{2}\setminus I_{4}\\}$. In fact, if $\Theta_{2}=(v^{\prime\prime},I^{\prime\prime},\underline{k}^{\prime\prime},\underline{\Lambda}^{\prime\prime})\in\Omega_{2}$ is the maximal element, then as $i\in\Delta_{n}\setminus(I_{0}\setminus I_{4})$, either $i\notin I_{1}$ or there exists a unique $1\leq d_{0}\leq r_{I^{\prime\prime}}$ such that $i\in(I^{\prime\prime})^{d_{0}}\cap(I_{4}\cup(I_{1}\setminus I_{0}))$. We give explicit construction of maximal element $\Theta_{0}\in\Omega_{0}$ (resp. $\Theta_{1}\in\Omega_{1}$) in both cases. * • Assume that $i\notin I_{1}$, then there exists a unique $1\leq d_{1}\leq r_{I^{\prime\prime}}$ such that $i=\sum_{d^{\prime}=1}^{d_{1}}n^{\prime\prime}_{d^{\prime}}$. Then $\Theta_{0}$ and $\Theta_{1}$ can be uniquely characterized by $I=I^{\prime\prime}\cup(I_{2}\setminus I_{4})$, $I^{\prime}=I^{\prime\prime}\cup(I_{0}\setminus I_{2})$, $i\in I_{v}$, $\Lambda_{d}=\Lambda^{\prime\prime}_{d}$ for each $1\leq d\leq d_{1}$, $\Lambda^{\prime}_{1}=\emptyset$ and $\Lambda^{\prime}_{d}=\Lambda_{d+d_{1}-1}$ for each $2\leq d\leq r_{I^{\prime}}$. * • Assume that there exists a unique $1\leq d_{0}\leq r_{I^{\prime\prime}}$ such that $i\in(I^{\prime\prime})^{d_{0}}\cap(I_{4}\cup(I_{1}\setminus I_{0}))$. There exists a unique $1\leq s_{0}\leq r_{I_{v^{\prime\prime}}\cap I_{1}}$ such that $r_{v^{\prime\prime},I_{1},I^{\prime\prime}}^{s_{0}-1}+1\leq d_{0}\leq r_{v^{\prime\prime},I_{1},I^{\prime\prime}}^{s_{0}}$. Note that $i\in(I^{\prime\prime})^{d_{0}}\cap(I_{4}\cup(I_{1}\setminus I_{0}))$ (together with Lemma 2.16 and Lemma 2.25) forces $\Lambda_{r_{v^{\prime\prime},I_{1},I^{\prime\prime}}^{s_{0}-1}+1}=\emptyset$. Then $\Theta_{0}$ and $\Theta_{1}$ can be uniquely characterized by $I=I^{\prime\prime}\cup(I_{2}\setminus I_{4})$, $I^{\prime}=I^{\prime\prime}\cup(I_{0}\setminus I_{2})$, $\Lambda_{d}=\Lambda^{\prime\prime}_{d}$ for each $1\leq d\leq d_{0}$, $\Lambda^{\prime}_{1}=\emptyset$ and $\Lambda^{\prime}_{d}=\Lambda_{d+d_{0}-1}$ for each $2\leq d\leq r_{I^{\prime}}$. ∎ ###### Definition 4.30. Let $I,I^{\prime}\subseteq\Delta_{n}$ be two subsets. We say that _$I$ and $I^{\prime}$ do not connect_ if $|i-i^{\prime}|\geq 2$ for any $i\in I$ and $i^{\prime}\in I^{\prime}$. If we understand $I,I^{\prime}$ as two sets of positive simple roots, then $I$ and $I^{\prime}$ do not connect if and only if $\alpha+\alpha^{\prime}$ is not a root for any $\alpha\in I$ and $\alpha^{\prime}\in I^{\prime}$. This is intuitive from Dynkin diagram. ###### Theorem 4.31. Assume that Condition 4.28 holds. 1. 
(i) For each $\Omega_{0}\in\Psi_{I_{2}\cup(I_{1}\setminus I_{0}),I_{1}}^{-\ell_{0},\ell_{0}+k_{0}-\\#I_{1}}$ and $\Omega_{1}\in\Psi_{I_{4}\cup(I_{1}\setminus I_{2}),I_{1}}^{-\ell_{1},\ell_{1}+k_{1}-\\#I_{1}}$, the image of $(x_{\Omega_{0}},x_{\Omega_{1}})$ under (4.34) is $\varepsilon x_{\Omega_{2}}$ for a $\Omega_{2}\in\Psi_{I_{4}\cup(I_{1}\setminus I_{0}),I_{1}}^{-\ell_{2},\ell_{2}+k_{0}+k_{1}-\\#I_{1}}$ uniquely determined by the pair $(\Omega_{0},\Omega_{1})$ and a sign $\varepsilon\in\\{1,-1\\}$ uniquely determined by $I_{4}\subseteq I_{2}\subseteq I_{0}\subseteq I_{1}$ and the pair $(\ell_{0},\ell_{1})$. Moreover, the map (4.37) $\Psi_{I_{2}\cup(I_{1}\setminus I_{0}),I_{1}}^{-\ell_{0},\ell_{0}+k_{0}-\\#I_{1}}\times\Psi_{I_{4}\cup(I_{1}\setminus I_{2}),I_{1}}^{-\ell_{1},\ell_{1}+k_{1}-\\#I_{1}}\rightarrow\Psi_{I_{4}\cup(I_{1}\setminus I_{0}),I_{1}}^{-\ell_{2},\ell_{2}+k_{0}+k_{1}-\\#I_{1}}:~{}(\Omega_{0},\Omega_{1})\mapsto\Omega_{2}$ is injective. 2. (ii) We have canonical isomorphisms ${}_{I_{0},I_{1}}E_{2,I_{2},I_{1}}^{-\ell_{0},k_{0}+\ell_{0}}\cong_{I_{2}^{\prime},I_{1}}E_{2,I_{4},I_{1}}^{-\ell_{0},k_{0}+\ell_{0}}$ and ${}_{I_{2},I_{1}}E_{2,I_{4},I_{1}}^{-\ell_{1},k_{1}+\ell_{1}}\cong_{I_{0},I_{1}}E_{2,I_{2}^{\prime},I_{1}}^{-\ell_{1},k_{1}+\ell_{1}}$. Using the notation in item (i) above, the map ${}_{I_{0},I_{1}}E_{2,I_{2}^{\prime},I_{1}}^{-\ell_{1},k_{1}+\ell_{1}}\otimes_{I_{2}^{\prime},I_{1}}E_{2,I_{4},I_{1}}^{-\ell_{0},k_{0}+\ell_{0}}\xrightarrow{\cup}_{I_{0},I_{1}}E_{2,I_{4},I_{1}}^{-\ell_{2},k_{0}+k_{1}+\ell_{2}}$ sends $(x_{\Omega_{1}},x_{\Omega_{0}})$ to $(-1)^{k_{0}k_{1}}\varepsilon x_{\Omega_{2}}$. 3. (iii) The map (4.34) is injective for each pair $(\ell_{0},\ell_{1})$ and thus (4.35) is injective and compatible with the canonical filtration on both the source and the target. If furthermore $I_{0}\setminus I_{2}$ and $I_{2}\setminus I_{4}$ do not connect, then (4.34) is an isomorphism for each pair $(\ell_{0},\ell_{1})$ and thus (4.35) is an isomorphism. ###### Proof. Given two subsets $I,I^{\prime}\subseteq\Delta_{n}$, we use the shortened notation $I<I^{\prime}$ for $\max\\{i^{\prime}\mid i^{\prime}\in I\\}<\min\\{i^{\prime}\mid i^{\prime}\in I^{\prime}\\}$. Note that if $I,I^{\prime}\subseteq\Delta_{n}$ are two non-empty subintervals satisfying $I\cap I^{\prime}=\emptyset$, then we have either $I<I^{\prime}$ or $I^{\prime}<I$. It is harmless to assume that $I_{4}\subsetneq I_{2}\subsetneq I_{0}$ otherwise the claims are easy. We write $I_{0}\setminus I_{2}=\bigsqcup_{t^{\prime}=1}^{t_{0}}I_{0,t^{\prime}}\text{ and }I_{2}\setminus I_{4}=\bigsqcup_{t^{\prime}=1}^{t_{2}}I_{2,t^{\prime}}$ as disjoint union of non-empty maximal subintervals satisfying $I_{0,1}<\cdots<I_{0,t_{0}}$ and $I_{2,1}<\cdots<I_{2,t_{2}}$. As $(I_{0}\setminus I_{2})\cap(I_{2}\setminus I_{4})=\emptyset$, we have $I_{0,t^{\prime}}\cap I_{2,t^{\prime\prime}}=\emptyset$ for each $1\leq t^{\prime}\leq t_{0}$ and $1\leq t^{\prime\prime}\leq t_{2}$. We define the defect of the triple $I_{4}\subseteq I_{2}\subseteq I_{0}$ as $\delta_{I_{0},I_{2},I_{4}}=\\#\\{(t^{\prime},t^{\prime\prime})\mid 1\leq t^{\prime}\leq t_{0},~{}1\leq t^{\prime\prime}\leq t_{2},~{}I_{0,t^{\prime}}>I_{2,t^{\prime\prime}}\\}.$ We prove item (i), item (ii) and item (iii) using Proposition 4.29 and an induction on the defect $\delta_{I_{0},I_{2},I_{4}}$. If $\delta_{I_{0},I_{2},I_{4}}=0$, then we have $I_{0}\setminus I_{2}<I_{2}\setminus I_{4}$ and the result follows entirely from Proposition 4.29. 
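To illustrate the defect just introduced (the sets below are chosen purely as an example, for $n$ large enough that they lie in $\Delta_{n}$, and play no role in the argument), take $I_{1}\supseteq I_{0}=\\{1,2,4,6\\}$, $I_{2}=\\{4\\}$ and $I_{4}=\emptyset$, so that $I_{0}\setminus I_{2}=\\{1,2\\}\sqcup\\{6\\}=I_{0,1}\sqcup I_{0,2}$ and $I_{2}\setminus I_{4}=\\{4\\}=I_{2,1}$. Then $I_{0,1}<I_{2,1}<I_{0,2}$, so the only pair with $I_{0,t^{\prime}}>I_{2,t^{\prime\prime}}$ is $(t^{\prime},t^{\prime\prime})=(2,1)$ and $\delta_{I_{0},I_{2},I_{4}}=1$; in particular $I_{0,t_{0}}>I_{2,1}$, as in the inductive step below.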
Now we assume that $\delta_{I_{0},I_{2},I_{4}}\geq 1$ and that all three items hold for any triple $I_{7}\subseteq I_{6}\subseteq I_{5}\subseteq I_{1}$ satisfying $\delta_{I_{5},I_{6},I_{7}}<\delta_{I_{0},I_{2},I_{4}}$. Note that $\delta_{I_{0},I_{2},I_{4}}\geq 1$ is equivalent to $I_{0,t_{0}}>I_{2,1}$, and we define $I_{2}^{\prime\prime}\stackrel{{\scriptstyle\textrm{\tiny{def}}}}{{=}}(I_{2}\setminus I_{2,1})\sqcup I_{0,t_{0}}$ and observe that $\delta_{I_{0},I_{2}^{\prime\prime},I_{4}}<\delta_{I_{0},I_{2},I_{4}}$. We also write $1\leq t_{0}^{\prime}\leq t_{0}$ (resp. $1\leq t_{2}^{\prime}\leq t_{2}$) for the minimal (resp. maximal) integer such that $I_{2,1}<I_{0,t_{0}^{\prime}}$ (resp. such that $I_{2,t_{2}^{\prime}}<I_{0,t_{0}}$) and then write $I_{2}^{+}\stackrel{{\scriptstyle\textrm{\tiny{def}}}}{{=}}I_{2}\sqcup I_{0,t_{0}}$, $I_{2}^{++}\stackrel{{\scriptstyle\textrm{\tiny{def}}}}{{=}}I_{2}\sqcup\bigsqcup_{t^{\prime}=t_{0}^{\prime}}^{t_{0}}I_{0,t^{\prime}}$, $I_{2}^{\sharp}\stackrel{{\scriptstyle\textrm{\tiny{def}}}}{{=}}I_{2}^{++}\setminus I_{2,1}$, $I_{2}^{-}\stackrel{{\scriptstyle\textrm{\tiny{def}}}}{{=}}I_{2}\setminus I_{2,1}$, $I_{2}^{--}\stackrel{{\scriptstyle\textrm{\tiny{def}}}}{{=}}I_{2}\setminus\bigsqcup_{t^{\prime\prime}=1}^{t_{2}^{\prime}}I_{2,t^{\prime\prime}}$ and $I_{2}^{\flat}\stackrel{{\scriptstyle\textrm{\tiny{def}}}}{{=}}I_{2}^{--}\sqcup I_{0,t_{0}}$. Then we have the following constructions using item (iii) of Proposition 4.29. * • As $I_{0,t_{0}}$ is a maximal subinterval of $I_{0}\setminus I_{2}$ satisfying $I_{0}\setminus I_{2}^{++}<I_{2}^{++}\setminus I_{2}^{+}<I_{2}^{+}\setminus I_{2}=I_{0,t_{0}}$, we obtain an isomorphism $\mathbf{E}_{I_{0},I_{2}}\cong\mathbf{E}_{I_{0},I_{2}^{++}}\otimes\mathbf{E}_{I_{2}^{++},I_{2}^{+}}\otimes\mathbf{E}_{I_{2}^{+},I_{2}}$ which is compatible with the canonical filtrations on both the source and the target. Consequently, $\Omega_{0}$ determines a triple $(\Omega_{0}^{++},\Omega_{0}^{+},\Omega_{0}^{-})$ where $\Omega_{0}^{++}$ is an equivalence class of $(I_{2}^{++}\cup(I_{1}\setminus I_{0}),I_{1})$-atomic tuples, $\Omega_{0}^{+}$ is an equivalence class of $(I_{2}^{+}\cup(I_{1}\setminus I_{2}^{++}),I_{1})$-atomic tuples and $\Omega_{0}^{-}$ is an equivalence class of $(I_{2}\cup(I_{1}\setminus I_{2}^{+}),I_{1})$-atomic tuples. * • As $I_{2,1}$ is a maximal subinterval of $I_{2}\setminus I_{4}$ satisfying $I_{2,1}=I_{2}\setminus I_{2}^{-}<I_{2}^{-}\setminus I_{2}^{--}<I_{2}^{--}\setminus I_{4}$, we obtain an isomorphism $\mathbf{E}_{I_{2},I_{4}}\cong\mathbf{E}_{I_{2},I_{2}^{-}}\otimes\mathbf{E}_{I_{2}^{-},I_{2}^{--}}\otimes\mathbf{E}_{I_{2}^{--},I_{4}}$ which is compatible with the canonical filtrations on both the source and the target. Consequently, $\Omega_{1}$ determines a triple $(\Omega_{1}^{+},\Omega_{1}^{-},\Omega_{1}^{--})$ where $\Omega_{1}^{+}$ is an equivalence class of $(I_{2}^{-}\cup(I_{1}\setminus I_{2}),I_{1})$-atomic tuples, $\Omega_{1}^{-}$ is an equivalence class of $(I_{2}^{--}\cup(I_{1}\setminus I_{2}^{-}),I_{1})$-atomic tuples and $\Omega_{1}^{--}$ is an equivalence class of $(I_{4}\cup(I_{1}\setminus I_{2}^{--}),I_{1})$-atomic tuples. Now we have the following observations from item (ii). * • We have a canonical isomorphism $\mathbf{E}_{I_{2}^{+},I_{2}}\otimes\mathbf{E}_{I_{2},I_{2}^{-}}\cong\mathbf{E}_{I_{2}^{+},I_{2}^{\prime\prime}}\otimes\mathbf{E}_{I_{2}^{\prime\prime},I_{2}^{-}}$ which exchanges $\Omega_{0}^{-}$ and $\Omega_{1}^{+}$.
* • We have a canonical isomorphism $\mathbf{E}_{I_{2}^{++},I_{2}^{+}}\otimes\mathbf{E}_{I_{2}^{+},I_{2}^{\prime\prime}}\cong\mathbf{E}_{I_{2}^{++},I_{2}^{\sharp}}\otimes\mathbf{E}_{I_{2}^{\sharp},I_{2}^{\prime\prime}}$ which exchanges $\Omega_{0}^{+}$ and $\Omega_{1}^{+}$, and $\mathbf{E}_{I_{2}^{\prime\prime},I_{2}^{-}}\otimes\mathbf{E}_{I_{2}^{-},I_{2}^{--}}\cong\mathbf{E}_{I_{2}^{\prime\prime},I_{2}^{\flat}}\otimes\mathbf{E}_{I_{2}^{\flat},I_{2}^{--}}$ which exchanges $\Omega_{0}^{-}$ and $\Omega_{1}^{-}$. Then we deduce again from item (iii) of Proposition 4.29 a canonical isomorphism $\mathbf{E}_{I_{0},I_{2}^{++}}\otimes\mathbf{E}_{I_{2}^{++},I_{2}^{\sharp}}\otimes\mathbf{E}_{I_{2}^{\sharp},I_{2}^{\prime\prime}}\cong\mathbf{E}_{I_{0},I_{2}^{\prime\prime}}$ which determines an equivalence class $\Omega_{0}^{\prime\prime}$ of $(I_{2}^{\prime\prime}\cup(I_{1}\setminus I_{0}),I_{1})$-atomic tuples from $(\Omega_{0}^{++},\Omega_{1}^{+},\Omega_{0}^{+})$, and similarly a canonical isomorphism $\mathbf{E}_{I_{2}^{\prime\prime},I_{2}^{\flat}}\otimes\mathbf{E}_{I_{2}^{\flat},I_{2}^{--}}\otimes\mathbf{E}_{I_{2}^{--},I_{4}}\cong\mathbf{E}_{I_{2}^{\prime\prime},I_{4}}$ which determines an equivalence class $\Omega_{1}^{\prime\prime}$ of $(I_{4}\cup(I_{1}\setminus I_{2}^{\prime\prime}),I_{1})$-atomic tuples from $(\Omega_{1}^{-},\Omega_{0}^{-},\Omega_{1}^{--})$. As $\delta_{I_{0},I_{2}^{\prime\prime},I_{4}}<\delta_{I_{0},I_{2},I_{4}}$, the map $\mathbf{E}_{I_{0},I_{2}^{\prime\prime}}\otimes\mathbf{E}_{I_{2}^{\prime\prime},I_{4}}\rightarrow\mathbf{E}_{I_{0},I_{4}}$ together with our inductive assumption determines an equivalence class $\Omega_{2}$ of $(I_{4}\cup(I_{1}\setminus I_{0}),I_{1})$-atomic tuples. To summarize, we define $\Omega_{2}$ via the following composition $\mathbf{E}_{I_{0},I_{2}}\otimes\mathbf{E}_{I_{2},I_{4}}\cong\mathbf{E}_{I_{0},I_{2}^{++}}\otimes\mathbf{E}_{I_{2}^{++},I_{2}^{+}}\otimes\mathbf{E}_{I_{2}^{+},I_{2}}\otimes\mathbf{E}_{I_{2},I_{2}^{-}}\otimes\mathbf{E}_{I_{2}^{-},I_{2}^{--}}\otimes\mathbf{E}_{I_{2}^{--},I_{4}}\\\ \cong\mathbf{E}_{I_{0},I_{2}^{++}}\otimes\mathbf{E}_{I_{2}^{++},I_{2}^{+}}\otimes\mathbf{E}_{I_{2}^{+},I_{2}^{\prime\prime}}\otimes\mathbf{E}_{I_{2}^{\prime\prime},I_{2}^{-}}\otimes\mathbf{E}_{I_{2}^{-},I_{2}^{--}}\otimes\mathbf{E}_{I_{2}^{--},I_{4}}\\\ \cong\mathbf{E}_{I_{0},I_{2}^{++}}\otimes\mathbf{E}_{I_{2}^{++},I_{2}^{\sharp}}\otimes\mathbf{E}_{I_{2}^{\sharp},I_{2}^{\prime\prime}}\otimes\mathbf{E}_{I_{2}^{\prime\prime},I_{2}^{\flat}}\otimes\mathbf{E}_{I_{2}^{\flat},I_{2}^{--}}\otimes\mathbf{E}_{I_{2}^{--},I_{4}}\\\ \cong\mathbf{E}_{I_{0},I_{2}^{\prime\prime}}\otimes\mathbf{E}_{I_{2}^{\prime\prime},I_{4}}\rightarrow\mathbf{E}_{I_{0},I_{4}}.$ Note that this composition differs from the cup product map $\mathbf{E}_{I_{0},I_{2}}\otimes\mathbf{E}_{I_{2},I_{4}}\xrightarrow{\cup}\mathbf{E}_{I_{0},I_{4}}$ by a sign depending only on $I_{4}\subseteq I_{2}\subseteq I_{0}\subseteq I_{1}$ and $(\ell_{0},\ell_{1})$, by applying item (ii) and item (iii) to the composition. Note that all the subsets of $\Delta_{n}$ involved in the construction of $\Omega_{2}$ depend only on $I_{4}\subseteq I_{2}\subseteq I_{0}\subseteq I_{1}$. Moreover, given $I_{4}\subseteq I_{2}\subseteq I_{0}\subseteq I_{1}$ and $(\ell_{0},\ell_{1})$, we can read off $\Omega_{0}^{++}$, $\Omega_{1}^{+}$, $\Omega_{0}^{+}$, $\Omega_{1}^{-}$, $\Omega_{0}^{-}$ and $\Omega_{1}^{--}$ from $\Omega_{2}$, and thus $\Omega_{0}$ and $\Omega_{1}$ as well. This implies that (4.37) is injective.
Item (ii) of this theorem follows from constructing another composition symmetric to one above and apply our inductive assumption to the isomorphism $\mathbf{E}_{I_{0},I_{2}^{\prime\prime}}\otimes\mathbf{E}_{I_{2}^{\prime\prime},I_{4}}\cong\mathbf{E}_{I_{0},I_{2}^{\prime\prime\prime}}\otimes\mathbf{E}_{I_{2}^{\prime\prime\prime},I_{4}}$ where $I_{2}^{\prime\prime\prime}\stackrel{{\scriptstyle\textrm{\tiny{def}}}}{{=}}I_{4}\cup(I_{0}\setminus I_{2}^{\prime\prime})$. The first part of item (iii) follows from the injectivity of (4.37) as it induces an injection from a basis of ${}_{I_{0},I_{1}}E_{2,I_{2},I_{1}}^{-\ell_{0},k_{0}+\ell_{0}}\otimes_{I_{2},I_{1}}E_{2,I_{4},I_{1}}^{-\ell_{1},k_{1}+\ell_{1}}$ to a basis of ${}_{I_{0},I_{1}}E_{2,I_{4},I_{1}}^{-\ell_{2},k_{0}+k_{1}+\ell_{2}}$. The second part of item (iii) follows from our inductive assumption and the observation that $I_{0}\setminus I_{2}$ and $I_{2}\setminus I_{4}$ do not connect if and only if $I_{0}\setminus I_{2}^{\prime\prime}$ and $I_{2}^{\prime\prime}\setminus I_{4}$ do not connect if and only if $I_{0,t^{\prime}}$ and $I_{2,t^{\prime\prime}}$ do not connect for each $1\leq t^{\prime}\leq t_{0}$ and $1\leq t^{\prime\prime}\leq t_{2}$. The proof is thus completed. ∎ ## 5\. Breuil-Schraen $\mathscr{L}$-invariant ### 5.1. Definition of Breuil-Schraen $\mathscr{L}$-invariant In this section, we define _Breuil-Schraen $\mathscr{L}$-invariant_ in Definition 5.6 and study its moduli space in Theorem 5.9. Then we formulate Conjecture 5.11 that relates Breuil-Schraen $\mathscr{L}$-invariants with Breuil-Ding’s approach of higher $\mathscr{L}$-invariants. We recall from Corollary 4.23 the definition of $\mathbf{E}_{I_{0},I_{2}}$ and $\mathbf{E}_{I_{0},I_{2}}^{\prime}$ for each pair of subsets $I_{2}\subseteq I_{0}\subseteq\Delta_{n}$. We consider a triple $I_{4}\subseteq I_{2}\subseteq I_{0}\subseteq\Delta_{n}$ and set $I_{2}^{\prime}\stackrel{{\scriptstyle\textrm{\tiny{def}}}}{{=}}I_{4}\cup(I_{0}\setminus I_{2})$ as usual. By taking $I_{1}=\Delta_{n}$ in Theorem 4.31, we obtain the following result. ###### Corollary 5.1. 1. (i) For each $I_{4}\subseteq I_{2}\subseteq I_{0}\subseteq\Delta_{n}$, the following three maps are injective * • $\mathbf{E}_{I_{0},I_{2}}\otimes\mathbf{E}_{I_{2},I_{4}}\xrightarrow{\cup}\mathbf{E}_{I_{0},I_{4}}$; * • $\mathbf{E}_{I_{0},I_{2}}\otimes\mathbf{E}_{I_{2},I_{4}}^{\prime}\xrightarrow{\cup}\mathbf{E}_{I_{0},I_{4}}^{\prime}$; * • $\mathbf{E}_{I_{0},I_{2}}^{\prime}\otimes\mathbf{E}_{I_{2},I_{4}}\xrightarrow{\cup}\mathbf{E}_{I_{0},I_{4}}^{\prime}$. 2. (ii) We have canonical isomorphisms $\mathbf{E}_{I_{0},I_{2}}^{\ast}\cong\mathbf{E}_{I_{2}^{\prime},I_{4}}^{\ast}$ and $\mathbf{E}_{I_{2},I_{4}}^{\ast}\cong\mathbf{E}_{I_{0},I_{2}^{\prime}}^{\ast}$ for each $\ast\in\\{~{},^{\prime}\\}$. If we abuse the notation $x$ for an element of $\mathbf{E}_{I_{0},I_{2}}\cong\mathbf{E}_{I_{2}^{\prime},I_{4}}$ and $y$ for an element of $\mathbf{E}_{I_{2},I_{4}}\cong\mathbf{E}_{I_{0},I_{2}^{\prime}}$, then $\mathbf{E}_{I_{0},I_{2}^{\prime}}\otimes\mathbf{E}_{I_{2}^{\prime},I_{4}}\xrightarrow{\cup}\mathbf{E}_{I_{0},I_{4}}$ sends $(y,x)$ to $(-1)^{(\\#I_{0}-\\#I_{2})(\\#I_{2}-\\#I_{4})}x\cup y$. Similar facts hold for other two kinds of maps in item (i). 3. (iii) If $I_{0}\setminus I_{2}$ and $I_{2}\setminus I_{4}$ do not connect (see Definition 4.30), then the three maps in item (i) are isomorphisms. 
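To illustrate the dichotomy between items (i) and (iii) (a small example of ours): inside $\Delta_{4}=\\{1,2,3\\}$, for $I_{4}=\emptyset\subseteq I_{2}=\\{3\\}\subseteq I_{0}=\\{1,3\\}$ the sets $I_{0}\setminus I_{2}=\\{1\\}$ and $I_{2}\setminus I_{4}=\\{3\\}$ do not connect, so $\mathbf{E}_{\\{1,3\\},\\{3\\}}\otimes\mathbf{E}_{\\{3\\},\emptyset}\xrightarrow{\cup}\mathbf{E}_{\\{1,3\\},\emptyset}$ is an isomorphism. By contrast, inside $\Delta_{3}=\\{1,2\\}$, for $I_{4}=\emptyset\subseteq I_{2}=\\{2\\}\subseteq I_{0}=\Delta_{3}$ the sets $\\{1\\}$ and $\\{2\\}$ connect, and $\mathbf{E}_{\Delta_{3},\\{2\\}}\otimes\mathbf{E}_{\\{2\\},\emptyset}\xrightarrow{\cup}\mathbf{E}_{\Delta_{3},\emptyset}$ is injective but not surjective: each factor of the source is isomorphic to $\mathrm{Hom}_{\rm{cont}}(K^{\times},E)$ (Lemma 5.2 below), of dimension $1+[K:\mathbb{Q}_{p}]$, while the target has dimension $[K:\mathbb{Q}_{p}]+(1+[K:\mathbb{Q}_{p}])^{2}$ by Lemma 5.4 below.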
Thanks to item (i), we can identify $\mathbf{E}_{I_{0},I_{2}}\otimes\mathbf{E}_{I_{2},I_{4}}$ with a subspace of $\mathbf{E}_{I_{0},I_{4}}$ for any $I_{4}\subseteq I_{2}\subseteq I_{0}\subseteq\Delta_{n}$ from now on. ###### Lemma 5.2. For each $I_{2}\subseteq I_{0}\subseteq\Delta_{n}$ satisfying $\\#I_{0}\setminus I_{2}=1$, we have a canonical isomorphism $\mathbf{E}_{I_{0},I_{2}}\cong\mathrm{Hom}_{\rm{cont}}(K^{\times},E),$ which is compatible with $\mathbf{E}_{I_{0},I_{2}}\cong\mathbf{E}_{I_{0}\setminus I_{2},\emptyset}$. ###### Proof. We write $I_{0}\setminus I_{2}=\\{i\\}$ for some $1\leq i\leq n-1$. Note from Corollary 4.23 that $\mathbf{E}_{I_{0},I_{2}}$ admits a canonical filtration $0=\mathrm{Fil}^{-n+3}(\mathbf{E}_{I_{0},I_{2}})\subseteq\mathrm{Fil}^{-n+2}(\mathbf{E}_{I_{0},I_{2}})\subseteq\mathrm{Fil}^{-n+1}(\mathbf{E}_{I_{0},I_{2}})=\mathbf{E}_{I_{0},I_{2}}.$ We observe that $\Psi_{\Delta_{n}\setminus\\{i\\},\Delta_{n}}^{-n+1,2}=0$ and $\Psi_{\Delta_{n}\setminus\\{i\\},\Delta_{n}}^{-n+2,1}=\\{(v,\Delta_{n}\setminus\\{i\\},\underline{k},\underline{\Lambda})\mid v\in\mathcal{B}_{n,\Delta_{n}\setminus\\{i\\}}\\}$ with $\Lambda_{1}=\Lambda_{2}=\emptyset$, based on Lemma 2.16. In other words, we have canonical isomorphisms $\mathbf{E}_{I_{0},I_{2}}\cong_{I_{0},\Delta_{n}}E_{2,I_{2},\Delta_{n}}^{-n+2,n}\cong E_{2,\Delta_{n}\setminus\\{i\\},\Delta_{n}}^{-n+2,1}\cong\mathrm{Hom}_{\rm{cont}}(\overline{Z}_{n,\Delta_{n}\setminus\\{i\\}},E)\cong\mathrm{Hom}_{\rm{cont}}(K^{\times},E)$ where $\overline{Z}_{n,\Delta_{n}\setminus\\{i\\}}\cong K^{\times}$ is the center of $\overline{L}_{n,\Delta_{n}\setminus\\{i\\}}$. The compatibility with $\mathbf{E}_{I_{0},I_{2}}\cong\mathbf{E}_{I_{0}\setminus I_{2},\emptyset}$ is obvious from the argument above. ∎ For each $I_{2}\subseteq I_{0}\subseteq\Delta_{n}$, we set $\mathbf{E}_{I_{0},I_{2}}^{<}\stackrel{{\scriptstyle\textrm{\tiny{def}}}}{{=}}\sum_{I_{2}\subsetneq I\subsetneq I_{0}}\mathbf{E}_{I_{0},I}\otimes\mathbf{E}_{I,I_{2}}\subseteq\mathbf{E}_{I_{0},I_{2}}$. ###### Lemma 5.3. Let $I_{2}\subseteq I_{0}\subseteq\Delta_{n}$ be subsets with $\\#I_{0}\setminus I_{2}\geq 2$. We have $\operatorname{dim}_{E}\mathbf{E}_{I_{0},I_{2}}/\mathbf{E}_{I_{0},I_{2}}^{<}\in\\{0,[K:\mathbb{Q}_{p}]\\}$, and it is non-zero if and only if $I_{0}\setminus I_{2}$ is an interval. ###### Proof. If $I_{0}\setminus I_{2}$ is not an interval, then we can choose $I_{2}\subsetneq I\subsetneq I_{0}$ such that $I_{0}\setminus I$ is a maximal interval of $I_{0}\setminus I_{2}$, which together with item (iii) implies that $\mathbf{E}_{I_{0},I_{2}}\cong\mathbf{E}_{I_{0},I}\otimes\mathbf{E}_{I,I_{2}}\subseteq\mathbf{E}_{I_{0},I_{2}}^{<}$ and thus $\mathbf{E}_{I_{0},I_{2}}^{<}=\mathbf{E}_{I_{0},I_{2}}$. It remains to treat the case when $I_{0}\setminus I_{2}$ is an interval of the form $\\{i,i+1,\dots,j\\}$. Note from Corollary 4.23 that $\mathbf{E}_{I_{0},I_{2}}$ admits a canonical filtration $0=\mathrm{Fil}^{-n+2+\\#I_{0}-\\#I_{2}}(\mathbf{E}_{I_{0},I_{2}})\subseteq\mathrm{Fil}^{-n+1+\\#I_{0}-\\#I_{2}}(\mathbf{E}_{I_{0},I_{2}})\subseteq\cdots\subseteq\mathrm{Fil}^{-n+1}(\mathbf{E}_{I_{0},I_{2}})=\mathbf{E}_{I_{0},I_{2}}.$ We finish the proof by the following two claims. 1. (i) We have $\operatorname{dim}_{E}\mathbf{E}_{I_{0},I_{2}}/\mathrm{Fil}^{-n+3}(\mathbf{E}_{I_{0},I_{2}})=\\#S=[K:\mathbb{Q}_{p}]$. 
According to Corollary 4.23, it suffices to observe that $\Psi_{I_{2}\cup(\Delta_{n}\setminus I_{0}),\Delta_{n}}^{-n+1,2\\#I_{0}-\\#I_{2}}=\emptyset$ and $\Psi_{I_{2}\cup(\Delta_{n}\setminus I_{0}),\Delta_{n}}^{-n+2,2\\#I_{0}-2\\#I_{2}-1}$ consists of those equivalent classes whose maximal elements $\Theta=(v,I,\underline{k},\underline{\Lambda})$ satisfies $v=\emptyset$, $I=\Delta_{n}\setminus\\{i\\}$, $\Lambda_{1}=\emptyset$ and $\Lambda_{2}=\\{(2\\#I_{0}\setminus I_{2}-1,\iota)\\}$ for some $\iota\in S$. In particular, there exists a natural bijection between $\Psi_{I_{2}\cup(\Delta_{n}\setminus I_{0}),\Delta_{n}}^{-n+2,2\\#I_{0}-2\\#I_{2}-1}$ and $S$. 2. (ii) We have $\mathbf{E}_{I_{0},I_{2}}^{<}=\mathrm{Fil}^{-n+3}(\mathbf{E}_{I_{0},I_{2}})$. For each $\\#I_{0}-\\#I_{2}\leq\ell\leq n-3$, let $\Omega^{\prime}\in\Psi_{I_{2}\cup(\Delta_{n}\setminus I_{0}),\Delta_{n}}^{-\ell,\ell+2\\#I_{0}-2\\#I_{2}-n+1}$ be an equivalence class and $\Theta^{\prime}=(v^{\prime},I^{\prime},\underline{k}^{\prime},\underline{\Lambda}^{\prime})$ be the maximal element inside. We always have $\Lambda_{1}=\emptyset$ thanks to Lemma 2.16. As $r_{I^{\prime}}=n-\ell\geq 3$, we have the following two possibilities. * • We have $v=\emptyset$ and there exists $2\leq d\leq r_{I^{\prime}}-1$ such that $\Lambda^{\prime}_{d}\neq\emptyset\neq\Lambda^{\prime}_{d+1}$. We write $i^{\prime}\stackrel{{\scriptstyle\textrm{\tiny{def}}}}{{=}}\sum_{d^{\prime}=1}^{d}n_{d^{\prime}}$ and set $I\stackrel{{\scriptstyle\textrm{\tiny{def}}}}{{=}}\\{i^{\prime},i^{\prime}+1,\dots,j\\}\cup I_{2}$. * • We have $I_{v}\subsetneq\Delta_{n}$ and thus there exists $2\leq d=r_{v^{\prime},\Delta_{n},I^{\prime}}^{1}+1\leq r_{I^{\prime}}$ such that $\Lambda^{\prime}_{d}=\emptyset$. We write $i^{\prime}\stackrel{{\scriptstyle\textrm{\tiny{def}}}}{{=}}\sum_{d^{\prime}=1}^{d-1}n_{d^{\prime}}$ and set $I\stackrel{{\scriptstyle\textrm{\tiny{def}}}}{{=}}\\{i^{\prime},i^{\prime}+1,\dots,j\\}\cup I_{2}$. In both possibilities above, we define $\Theta^{\prime\prime}=(\emptyset,I^{\prime\prime},\underline{k}^{\prime\prime},\underline{\Lambda}^{\prime\prime})$ by $I^{\prime\prime}=I\cup(\Delta_{n}\setminus I_{0})$ and $\Lambda^{\prime\prime}_{d^{\prime}}\stackrel{{\scriptstyle\textrm{\tiny{def}}}}{{=}}\Lambda^{\prime}_{d^{\prime}}$ for each $1\leq d^{\prime}\leq r_{I^{\prime\prime}}=d$. We also define $\Theta^{\prime\prime\prime}=(\emptyset,I^{\prime\prime\prime},\underline{k}^{\prime\prime\prime},\underline{\Lambda}^{\prime\prime\prime})$ by $I^{\prime\prime\prime}=I_{2}\cup(\Delta_{n}\setminus I)$, $\Lambda^{\prime\prime\prime}_{1}\stackrel{{\scriptstyle\textrm{\tiny{def}}}}{{=}}\emptyset$ and $\Lambda^{\prime\prime\prime}_{d^{\prime}}\stackrel{{\scriptstyle\textrm{\tiny{def}}}}{{=}}\Lambda^{\prime}_{d^{\prime}+d-1}$ for each $2\leq d^{\prime}\leq r_{I^{\prime\prime\prime}}=r_{I^{\prime}}-d+1$. We write $\Omega^{\prime\prime}$ (resp. $\Omega^{\prime\prime\prime}$) for the equivalence class of $\Theta^{\prime\prime}$ (resp. of $\Theta^{\prime\prime\prime}$) and claim that $x_{\Omega^{\prime\prime}}\cup x_{\Omega^{\prime\prime\prime}}=x_{\Omega^{\prime}}\in\mathrm{Fil}^{-\ell}(\mathbf{E}_{I_{0},I_{2}})/\mathrm{Fil}^{-\ell+1}(\mathbf{E}_{I_{0},I_{2}})$ from the proof of item (i) of Theorem 4.31. 
In other words, we have $x_{\Omega^{\prime}}\in\left(\mathbf{E}_{I_{0},I}\otimes\mathbf{E}_{I,I_{2}}+\mathrm{Fil}^{-\ell+1}(\mathbf{E}_{I_{0},I_{2}})\right)/\mathrm{Fil}^{-\ell+1}(\mathbf{E}_{I_{0},I_{2}})$. Letting $\Omega^{\prime}$ run through $\Psi_{I_{2}\cup(\Delta_{n}\setminus I_{0}),\Delta_{n}}^{-\ell,\ell+2\\#I_{0}-2\\#I_{2}-n+1}$, we have thus shown that $\mathbf{E}_{I_{0},I_{2}}^{<}\cap\mathrm{Fil}^{-\ell}(\mathbf{E}_{I_{0},I_{2}})+\mathrm{Fil}^{-\ell+1}(\mathbf{E}_{I_{0},I_{2}})=\mathrm{Fil}^{-\ell}(\mathbf{E}_{I_{0},I_{2}})$ for each $\\#I_{0}-\\#I_{2}\leq\ell\leq n-3$, which is clearly sufficient to conclude. ∎ For each positive root $\alpha=(i,j)\in\Phi^{+}$ (with $1\leq i<j\leq n$), we can clearly attach a subinterval $I_{\alpha}\stackrel{{\scriptstyle\textrm{\tiny{def}}}}{{=}}\\{i,i+1,\dots,j-1\\}\subseteq\Delta_{n}$ and this induces a bijection between the set of positive roots (with respect to $(B_{n}^{+},T_{n})$) and the set of (non-empty) subintervals of $\Delta_{n}$. More generally, for each subset $I\subseteq\Delta_{n}$, we can clearly attach an element in the root lattice $\alpha_{I}\stackrel{{\scriptstyle\textrm{\tiny{def}}}}{{=}}\sum_{i\in I}(i,i+1)$. For each $\alpha\in\Phi^{+}$ with $\\#I_{\alpha}\geq 2$, we choose a set $\overline{X}_{\alpha}\stackrel{{\scriptstyle\textrm{\tiny{def}}}}{{=}}\\{x_{\alpha,\iota}\mid\iota\in S\\}\subseteq\mathbf{E}_{I_{\alpha},\emptyset}$ whose image in $\mathbf{E}_{I_{\alpha},\emptyset}/\mathbf{E}_{I_{\alpha},\emptyset}^{<}$ naturally corresponds to $\\{x_{\Omega}\mid\Omega\in\Psi_{\Delta_{n}\setminus I_{\alpha},\Delta_{n}}^{-n+2,2\\#I_{\alpha}-1}\\}$ via the bijection described in item (i) in the proof of Lemma 5.3. For each $\alpha\in\Phi^{+}$ with $\\#I_{\alpha}=1$, we write $x_{\alpha}^{\infty}$ (resp. $x_{\alpha,\iota}$) for the elements in $\mathbf{E}_{I_{\alpha},\emptyset}$ corresponding to $\mathrm{val}$ (resp. $\log_{\iota}$) under the isomorphism $\mathbf{E}_{I_{\alpha},\emptyset}\cong\mathrm{Hom}_{\rm{cont}}(K^{\times},E)$ (see Lemma 5.2), and then set $\overline{X}_{\alpha}\stackrel{{\scriptstyle\textrm{\tiny{def}}}}{{=}}\\{x_{\alpha}^{\infty}\\}\sqcup\\{x_{\alpha,\iota}\mid\iota\in S\\}$. For each $I_{2}\subseteq I_{0}\subseteq\Delta_{n}$ with $I_{0}\setminus I_{2}=I_{\alpha}$, we abuse notation and write $x_{\alpha,\iota}$ (and possibly $x_{\alpha}^{\infty}$) for the vector in $\mathbf{E}_{I_{0},I_{2}}$ obtained from the isomorphism $\mathbf{E}_{I_{0},I_{2}}\cong\mathbf{E}_{I_{\alpha},\emptyset}$. Consequently, for each $I_{2}\subseteq I_{0}\subseteq\Delta_{n}$ and each partition into positive roots $\alpha_{I_{0}\setminus I_{2}}=\alpha_{1}+\cdots+\alpha_{t}$, we obtain a well-defined element $x_{\alpha_{1}}\cup x_{\alpha_{2}}\cup\cdots\cup x_{\alpha_{t}}\in\mathbf{E}_{I_{0},I_{2}}$ for each $x_{\alpha_{t^{\prime}}}\in\overline{X}_{\alpha_{t^{\prime}}}$ and $1\leq t^{\prime}\leq t$. ###### Lemma 5.4. For each $I_{2}\subseteq I_{0}\subseteq\Delta_{n}$, $\mathbf{E}_{I_{0},I_{2}}$ admits a basis of the form (5.1) $X_{I_{0},I_{2}}\stackrel{{\scriptstyle\textrm{\tiny{def}}}}{{=}}\\{x_{\alpha_{1}}\cup x_{\alpha_{2}}\cup\cdots\cup x_{\alpha_{t}}\\}_{\alpha_{I_{0}\setminus I_{2}}=\alpha_{1}+\cdots+\alpha_{t}}$ where $x_{\alpha_{t^{\prime}}}\in\overline{X}_{\alpha_{t^{\prime}}}$ for each $1\leq t^{\prime}\leq t$ and $\\{\alpha_{1},\dots,\alpha_{t}\\}$ runs through all the (unordered) partitions of $\alpha_{I_{0}\setminus I_{2}}$ into positive roots. ###### Proof. We argue by an increasing induction on $\\#I_{0}\setminus I_{2}$.
The case when $\\#I_{0}\setminus I_{2}=1$ is clear. Thanks to item (iii) we may assume without loss of generality that $I_{0}\setminus I_{2}=I_{\alpha}$ for some $\alpha\in\Phi^{+}$ with $\\#I_{\alpha}\geq 2$. According to Lemma 5.3, it suffices to show that $X_{I_{\alpha},\emptyset}\setminus\overline{X}_{\alpha}$ forms a basis of $\mathbf{E}_{I_{\alpha},\emptyset}^{<}$. We write $\alpha=(i,j)$ for some $1\leq i<j\leq n$ and note that $\mathbf{E}_{I_{\alpha},\emptyset}^{<}=\sum_{i<k<j}\mathbf{E}_{I_{\alpha},I_{(k,j)}}\otimes\mathbf{E}_{I_{(k,j)},\emptyset}$ admits an increasing filtration $\mathrm{Fil}_{\ell}\mathbf{E}_{I_{\alpha},\emptyset}^{<}=\sum_{i<k\leq\ell}\mathbf{E}_{I_{\alpha},I_{(k,j)}}\otimes\mathbf{E}_{I_{(k,j)},\emptyset}$ with $i\leq\ell\leq j-1$. Then we observe that $\mathrm{Fil}_{\ell}\mathbf{E}_{I_{\alpha},\emptyset}^{<}/\mathrm{Fil}_{\ell-1}\mathbf{E}_{I_{\alpha},\emptyset}^{<}=(\mathbf{E}_{I_{(i,\ell)},\emptyset}/\mathbf{E}_{I_{(i,\ell)},\emptyset}^{<})\otimes\mathbf{E}_{I_{(\ell,j)},\emptyset}$ which admits a basis induced from $X_{I_{\alpha},\emptyset}^{\ell}\stackrel{{\scriptstyle\textrm{\tiny{def}}}}{{=}}\\{x_{(i,\ell)}\otimes x_{(\ell,j)}^{\prime}\mid x_{(i,\ell)}\in\overline{X}_{(i,\ell)},x_{(\ell,j)}^{\prime}\in X_{I_{(\ell,j)},\emptyset}\\}$ for each $i+1\leq\ell\leq j-1$. We conclude by the observation that $X_{I_{\alpha},\emptyset}\setminus\overline{X}_{\alpha}=\bigsqcup_{\ell=i+1}^{j-1}X_{I_{\alpha},\emptyset}^{\ell}$. ∎ Note that $\mathrm{val}$ spans a canonical line in $\mathrm{Hom}_{\rm{cont}}(K^{\times},E)$, and thus induces a canonical line $\mathbf{E}_{I_{0},I_{2}}^{\infty}\subseteq\mathbf{E}_{I_{0},I_{2}}$ for each $I_{2}\subseteq I_{0}\subseteq\Delta_{n}$ with $\\#I_{0}\setminus I_{2}=1$ according to Lemma 5.2. For a general pair $I_{2}\subseteq I_{0}\subseteq\Delta_{n}$, we choose a sequence $I_{2}=I_{2,0}\subsetneq I_{2,1}\subsetneq\cdots\subsetneq I_{2,t}=I_{0}$ for $t=\\#I_{0}-\\#I_{2}$ and thus $\\#I_{2,t^{\prime}}\setminus I_{2,t^{\prime}-1}=1$ for each $1\leq t^{\prime}\leq t$. Then we define $\mathbf{E}_{I_{0},I_{2}}^{\infty}$ as the image of the composition $\mathbf{E}_{I_{2,t},I_{2,t-1}}^{\infty}\otimes\cdots\otimes\mathbf{E}_{I_{2,1},I_{2,0}}^{\infty}\hookrightarrow\mathbf{E}_{I_{2,t},I_{2,t-1}}\otimes\cdots\otimes\mathbf{E}_{I_{2,1},I_{2,0}}\xrightarrow{\cup}\mathbf{E}_{I_{0},I_{2}},$ which gives a canonical line in $\mathbf{E}_{I_{0},I_{2}}$. Note that item (ii) of Corollary 5.1 implies that $\mathbf{E}_{I_{0},I_{2}}^{\infty}$ is independent of the choice of $I_{2}=I_{2,0}\subsetneq I_{2,1}\subsetneq\cdots\subsetneq I_{2,t}=I_{0}$. We write $\widehat{\mathbf{E}}_{n}\stackrel{{\scriptstyle\textrm{\tiny{def}}}}{{=}}\mathbf{E}_{\Delta_{n},\emptyset}$. For each $I_{2}\subseteq I_{0}\subseteq\Delta_{n}$ we define $\widehat{\mathbf{E}}_{I_{0},I_{2}}$ as the image of $\mathbf{E}_{\Delta_{n},I_{0}}^{\infty}\otimes\mathbf{E}_{I_{0},I_{2}}\otimes\mathbf{E}_{I_{2},\emptyset}^{\infty}\xrightarrow{\cup}\mathbf{E}_{\Delta_{n},\emptyset}=\widehat{\mathbf{E}}_{n}$ which gives a canonical subspace of $\widehat{\mathbf{E}}_{n}$. There exists clearly a non-canonical isomorphism $\iota_{I_{0},I_{2}}:\mathbf{E}_{I_{0},I_{2}}\xrightarrow{\sim}\widehat{\mathbf{E}}_{I_{0},I_{2}}$ by item (i) of Corollary 5.1 and the definition of $\widehat{\mathbf{E}}_{I_{0},I_{2}}$, depending on our choice of $\mathrm{val}\in\mathrm{Hom}_{\rm{cont}}(K^{\times},E)$. 
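To make Lemma 5.4 and the canonical line explicit in the smallest non-trivial case (an illustration of ours): for $n=3$ and $(I_{0},I_{2})=(\Delta_{3},\emptyset)$, the element $\alpha_{\Delta_{3}}=(1,3)$ has exactly two partitions into positive roots, namely $\\{(1,3)\\}$ and $\\{(1,2),(2,3)\\}$, so that $X_{\Delta_{3},\emptyset}=\\{x_{(1,3),\iota}\mid\iota\in S\\}\sqcup\\{x_{(1,2)}\cup x_{(2,3)}\mid x_{(1,2)}\in\overline{X}_{(1,2)},\,x_{(2,3)}\in\overline{X}_{(2,3)}\\}$ and $\operatorname{dim}_{E}\mathbf{E}_{\Delta_{3},\emptyset}=[K:\mathbb{Q}_{p}]+(1+[K:\mathbb{Q}_{p}])^{2}$. The canonical line $\mathbf{E}_{\Delta_{3},\emptyset}^{\infty}$ is spanned by $x_{(1,2)}^{\infty}\cup x_{(2,3)}^{\infty}$.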
Note that we use the convention $\mathbf{E}_{I_{0},I_{0}}\cong\mathbf{E}_{\emptyset,\emptyset}\cong E$, and thus $\widehat{\mathbf{E}}_{I_{0},I_{0}}=\mathbf{E}_{\Delta_{n},\emptyset}^{\infty}\subseteq\widehat{\mathbf{E}}_{n}$ for each $I_{0}\subseteq\Delta_{n}$. ###### Lemma 5.5. * • We have $\widehat{\mathbf{E}}_{I_{0},I_{2}}=\widehat{\mathbf{E}}_{I_{0}\setminus I_{2},\emptyset}$ for each $I_{2}\subseteq I_{0}\subseteq\Delta_{n}$. * • We have $\widehat{\mathbf{E}}_{I_{0}^{\prime},I_{2}^{\prime}}\subseteq\widehat{\mathbf{E}}_{I_{0},I_{2}}$ for each $I_{2}\subseteq I_{0}\subseteq\Delta_{n}$ and $I_{2}^{\prime}\subseteq I_{0}^{\prime}\subseteq\Delta_{n}$ satisfying $I_{0}^{\prime}\setminus I_{2}^{\prime}\subseteq I_{0}\setminus I_{2}$. ###### Proof. The first part follows immediately from item (ii) of Corollary 5.1 as the image of $\mathbf{E}_{I_{0},I_{2}}\otimes\mathbf{E}_{I_{2},\emptyset}^{\infty}\xrightarrow{\cup}\mathbf{E}_{I_{0},\emptyset}$ clearly equals that of $\mathbf{E}_{I_{0},I_{0}\setminus I_{2}}^{\infty}\otimes\mathbf{E}_{I_{0}\setminus I_{2},\emptyset}\xrightarrow{\cup}\mathbf{E}_{I_{0},\emptyset}$. Using the first part, we may assume that $I_{2}=I_{2}^{\prime}=\emptyset$ while checking the second part. We finish the proof by the observation that the map $\mathbf{E}_{\Delta_{n},I_{0}^{\prime}}^{\infty}\otimes\mathbf{E}_{I_{0}^{\prime},\emptyset}\xrightarrow{\cup}\widehat{\mathbf{E}}_{n}$ factors as $\mathbf{E}_{\Delta_{n},I_{0}^{\prime}}^{\infty}\otimes\mathbf{E}_{I_{0}^{\prime},\emptyset}\cong\mathbf{E}_{\Delta_{n},I_{0}}^{\infty}\otimes\mathbf{E}_{I_{0},I_{0}^{\prime}}^{\infty}\otimes\mathbf{E}_{I_{0}^{\prime},\emptyset}\rightarrow\mathbf{E}_{\Delta_{n},I_{0}}^{\infty}\otimes\mathbf{E}_{I_{0},\emptyset}\xrightarrow{\cup}\widehat{\mathbf{E}}_{n}.$ ∎ ###### Definition 5.6. A _Breuil-Schraen $\mathscr{L}$-invariant_ is a codimension one subspace $W\subseteq\widehat{\mathbf{E}}_{n}$ such that 1. (i) $W\cap\widehat{\mathbf{E}}_{I_{0},I_{2}}\subsetneq\widehat{\mathbf{E}}_{I_{0},I_{2}}$ and thus $W_{I_{0},I_{2}}\stackrel{{\scriptstyle\textrm{\tiny{def}}}}{{=}}\iota_{I_{0},I_{2}}^{-1}(W\cap\widehat{\mathbf{E}}_{I_{0},I_{2}})$ satisfies $\operatorname{dim}_{E}\mathbf{E}_{I_{0},I_{2}}/W_{I_{0},I_{2}}=1$ for each $I_{2}\subseteq I_{0}\subseteq\Delta_{n}$; 2. (ii) the composition $\mathbf{E}_{I_{0},I_{2}}\otimes\mathbf{E}_{I_{2},I_{4}}\xrightarrow{\cup}\mathbf{E}_{I_{0},I_{4}}\twoheadrightarrow\mathbf{E}_{I_{0},I_{4}}/W_{I_{0},I_{4}}$ factors through an isomorphism of lines $(\mathbf{E}_{I_{0},I_{2}}/W_{I_{0},I_{2}})\otimes(\mathbf{E}_{I_{2},I_{4}}/W_{I_{2},I_{4}})\xrightarrow{\sim}\mathbf{E}_{I_{0},I_{4}}/W_{I_{0},I_{4}}$ for each $I_{4}\subseteq I_{2}\subseteq I_{0}\subseteq\Delta_{n}$. ###### Remark 5.7. Based on our Corollary 5.1, one can immediately generalize the definition of automorphic (simple) $\mathscr{L}$-invariants in Section 3.3 of [Geh21] to all automorphic higher $\mathscr{L}$-invariants, at least when the fixed global setup is locally $\mathrm{GL}_{n}$ in nature. One key idea in [Geh21] is to define an $\mathscr{L}$-invariant as the kernel of a certain cup product map, and condition (ii) is very natural from this point of view. In other words, our definition of Breuil-Schraen $\mathscr{L}$-invariants comes from an attempt to combine [Geh21] with representation theoretic computations in [Schr11].
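For orientation, let us make the case $n=2$ explicit (this is our own reformulation, in the spirit of [Bre04], [Schr10] and [Ding16]): then $\widehat{\mathbf{E}}_{2}=\mathbf{E}_{\\{1\\},\emptyset}\cong\mathrm{Hom}_{\rm{cont}}(K^{\times},E)$, condition (ii) of Definition 5.6 is automatic, and condition (i) amounts to $\mathrm{val}\notin W$; a Breuil-Schraen $\mathscr{L}$-invariant is thus a hyperplane of the form $W=\bigoplus_{\iota\in S}E(\log_{\iota}-\mathscr{L}_{\iota}\,\mathrm{val})$ for a uniquely determined tuple $(\mathscr{L}_{\iota})_{\iota\in S}\in E^{S}$ (see also Theorem 5.9 below), recovering the classical $\mathscr{L}$-invariants.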
Note that codimension one subspaces $W\subseteq\widehat{\mathbf{E}}_{n}$ satisfying condition (i) of Definition 5.6 are clearly parameterized by a Zariski open subvariety $\mathbb{P}(\widehat{\mathbf{E}}_{n})^{\circ}$ of the projective space $\mathbb{P}(\widehat{\mathbf{E}}_{n})$. Adding the condition (ii) of Definition 5.6 cut out a closed subvariety $\mathcal{B}\mathcal{S}\subseteq\mathbb{P}(\widehat{\mathbf{E}}_{n})^{\circ}$. ###### Lemma 5.8. Let $W\subseteq\widehat{\mathbf{E}}_{n}$ be a hyperplane. Assume that * • $\operatorname{dim}_{E}\mathbf{E}_{I_{0},\emptyset}/W_{I_{0},\emptyset}=1$ for each $I_{0}\subseteq\Delta_{n}$ which is an (possibly empty) interval (see Definition 2.10); * • $W_{I_{0},\emptyset}$ contains the image of $\mathbf{E}_{I_{0},I_{2}}\otimes W_{I_{2},\emptyset}\xrightarrow{\cup}\mathbf{E}_{I_{0},\emptyset}$ for each pair of (possibly empty) subintervals $I_{2}\subseteq I_{0}\subseteq\Delta_{n}$ such that $I_{0}\setminus I_{2}$ is also an interval. Then $W$ is a Breuil-Schraen $\mathscr{L}$-invariant. ###### Proof. To check condition (i) of Definition 5.6, we may assume without loss of generality that $I_{2}=\emptyset$ thanks to Lemma 5.5. We consider $I_{0,1}$ which is a maximal subinterval of $I_{0}$. If $I_{0,1}=I_{0}$, then we have nothing to prove. Otherwise we have $\widehat{\mathbf{E}}_{I_{0,1},\emptyset}\subseteq\widehat{\mathbf{E}}_{I_{0},\emptyset}$. As $\widehat{\mathbf{E}}_{I_{0,1},\emptyset}\not\subseteq W$, we clearly have $\widehat{\mathbf{E}}_{I_{0},\emptyset}\not\subseteq W$, which finishes the proof of condition (i) of Definition 5.6. Now we check condition (ii) of Definition 5.6. Again we may assume using Lemma 5.5 that $I_{4}=\emptyset$. If $I_{0}\setminus I_{2}$ is not an interval, then there exists $I_{2}\subsetneq I\subsetneq I_{0}$ such that $I\setminus I_{2}$ is a maximal subinterval of $I_{0}\setminus I_{2}$ and $\mathbf{E}_{I_{0},I_{2}}\cong\mathbf{E}_{I_{0},I}\otimes\mathbf{E}_{I,I_{2}}$, and thus we can reduce this case to that of the pair $I\subseteq I_{0}$ and the pair $I_{2}\subseteq I$. We assume that $I_{0}\setminus I_{2}$ is an interval from now on. We observe that the kernel of $\mathbf{E}_{I_{0},I_{2}}\otimes\mathbf{E}_{I_{2},\emptyset}\twoheadrightarrow(\mathbf{E}_{I_{0},I_{2}}/W_{I_{0},I_{2}})\otimes(\mathbf{E}_{I_{2},\emptyset}/W_{I_{2},\emptyset})$ is simply $\mathbf{E}_{I_{0},I_{2}}\otimes W_{I_{2},\emptyset}+W_{I_{0},I_{2}}\otimes\mathbf{E}_{I_{2},\emptyset}\cong\mathbf{E}_{I_{0},I_{2}}\otimes W_{I_{2},\emptyset}+\mathbf{E}_{I_{0},I_{0}\setminus I_{2}}\otimes W_{I_{0}\setminus I_{2},\emptyset}\subseteq W_{I_{0},\emptyset}$ by our assumption. Here we identify $\mathbf{E}_{I_{0},I_{2}}\otimes\mathbf{E}_{I_{2},\emptyset}$ with a subspace of $\mathbf{E}_{I_{0},\emptyset}$ using item (i) of Corollary 5.1, and then use item (ii) of Corollary 5.1 to transform the second direct summand. We also use the fact that $W_{I_{0},I_{2}}$ is sent to $W_{I_{0}\setminus I_{2},\emptyset}$ under the isomorphism $\mathbf{E}_{I_{0},I_{2}}\cong\mathbf{E}_{I_{0}\setminus I_{2},\emptyset}$. ∎ For each $I=\\{i_{1}<i_{2}<\cdots<i_{\ell}\\}\subseteq\Delta_{n}$, we set $x_{\alpha_{I}}^{\infty}\stackrel{{\scriptstyle\textrm{\tiny{def}}}}{{=}}x_{(i_{1},i_{1}+1)}^{\infty}\cup x_{(i_{2},i_{2}+1)}^{\infty}\cup\cdots\cup x_{(i_{\ell},i_{\ell}+1)}^{\infty}\in\mathbf{E}_{I,\emptyset}^{\infty}.$ ###### Theorem 5.9. 1. (i) Let $W\subseteq\widehat{\mathbf{E}}_{n}$ be a Breuil-Schraen $\mathscr{L}$-invariant. 
For each $\alpha\in\Phi^{+}$ and each $\iota\in S$, there exists a unique $\mathscr{L}_{\alpha,\iota}\in E$ such that $x_{\alpha,\iota}-\mathscr{L}_{\alpha,\iota}x_{\alpha}^{\infty}\in W_{I_{\alpha},\emptyset}$. 2. (ii) The map $\mathcal{B}\mathcal{S}\cong U_{n,E}^{+}:~{}W\mapsto(\mathscr{L}_{\alpha,\iota})_{\alpha\in\Phi^{+},\iota\in S}$ is an isomorphism, where $U_{n,E}^{+}$ is the unipotent radical of the Borel subgroup $B_{n,E}^{+}\subseteq G_{n,E}$. ###### Proof. Let $W\subseteq\widehat{\mathbf{E}}_{n}$ be a Breuil-Schraen $\mathscr{L}$-invariant, $\alpha\in\Phi^{+}$ be a positive root and $\iota\in S$ an embedding. If there exists $\mathscr{L}_{\alpha,\iota}\neq\mathscr{L}_{\alpha,\iota}^{\prime}$ such that $x_{\alpha,\iota}-\mathscr{L}_{\alpha,\iota}x_{\alpha}^{\infty},x_{\alpha,\iota}-\mathscr{L}_{\alpha,\iota}^{\prime}x_{\alpha}^{\infty}\in W_{I_{\alpha},\emptyset}$, then we have $x_{\alpha}^{\infty}\in W_{I_{\alpha},\emptyset}$ and thus $\mathbf{E}_{I_{\alpha},\emptyset}^{\infty}\subseteq W_{I_{\alpha},\emptyset}$. This forces $\widehat{\mathbf{E}}_{\emptyset,\emptyset}=\mathbf{E}_{\Delta_{n},I_{\alpha}}^{\infty}\otimes\mathbf{E}_{I_{\alpha},\emptyset}^{\infty}\subseteq\iota_{I_{\alpha},\emptyset}(W_{I_{\alpha},\emptyset})\subseteq W$ and contradicts condition (i) of Definition 5.6. Hence $\mathscr{L}_{\alpha,\iota}\in E$, if it exists, is unique. Now we prove the existence by induction on the natural partial order on $\Phi^{+}$. If $\\#I_{\alpha}=1$, then $W_{I_{\alpha},\emptyset}$ is a hyperplane in $\mathbf{E}_{I_{\alpha},\emptyset}$ not containing $Ex_{\alpha}^{\infty}$ (as $\widehat{\mathbf{E}}_{\emptyset,\emptyset}\not\subseteq W$), and thus $W_{I_{\alpha},\emptyset}\cap(Ex_{\alpha}^{\infty}\oplus Ex_{\alpha,\iota})$ is a hyperplane in $Ex_{\alpha}^{\infty}\oplus Ex_{\alpha,\iota}$ not containing $Ex_{\alpha}^{\infty}$, which implies the existence of a unique $\mathscr{L}_{\alpha,\iota}\in E$ such that $x_{\alpha,\iota}-\mathscr{L}_{\alpha,\iota}x_{\alpha}^{\infty}\in W_{I_{\alpha},\emptyset}$. Now assume that $\\#I_{\alpha}\geq 2$ and $\mathscr{L}_{\alpha^{\prime},\iota}\in E$ exists for each $\alpha^{\prime}<\alpha$ and each $\iota\in S$. Note that $\mathbf{E}_{I_{\alpha},\emptyset}^{<}\not\subseteq W_{I_{\alpha},\emptyset}$ as we clearly have $W_{I_{\alpha},\emptyset}+Ex_{\alpha}^{\infty}=\mathbf{E}_{I_{\alpha},\emptyset}$. Recall from (the proof of) Lemma 5.4 that $X_{I_{\alpha},\emptyset}\setminus\overline{X}_{\alpha}$ forms a basis of $\mathbf{E}_{I_{\alpha},\emptyset}^{<}$. For each partition $\alpha=\alpha_{1}+\cdots+\alpha_{t}$ with $t\geq 2$ and $x_{\alpha_{1}}\cup x_{\alpha_{2}}\cup\cdots\cup x_{\alpha_{t}}\in X_{I_{\alpha},\emptyset}\setminus\overline{X}_{\alpha}$ (with $x_{\alpha_{t^{\prime}}}\in\overline{X}_{\alpha_{t^{\prime}}}$ for each $1\leq t^{\prime}\leq t$), we define a new element $y_{\alpha_{1}}\cup y_{\alpha_{2}}\cup\cdots\cup y_{\alpha_{t}}\in\mathbf{E}_{I_{\alpha},\emptyset}^{<}$ by taking $y_{\alpha_{t^{\prime}}}\stackrel{{\scriptstyle\textrm{\tiny{def}}}}{{=}}x_{\alpha_{t^{\prime}}}-\mathscr{L}_{\alpha_{t^{\prime}},\iota}x_{\alpha_{t^{\prime}}}^{\infty}$ if $x_{\alpha_{t^{\prime}}}=x_{\alpha_{t^{\prime}},\iota}$ and $y_{\alpha_{t^{\prime}}}\stackrel{{\scriptstyle\textrm{\tiny{def}}}}{{=}}x_{\alpha_{t^{\prime}}}^{\infty}$ if $x_{\alpha_{t^{\prime}}}=x_{\alpha_{t^{\prime}}}^{\infty}$. Hence, we obtain a new set of vectors $Y_{I_{\alpha},\emptyset}^{<}$ which is clearly a basis of $\mathbf{E}_{I_{\alpha},\emptyset}^{<}$ as it differs from $X_{I_{\alpha},\emptyset}\setminus\overline{X}_{\alpha}$ by a triangular matrix.
As $x_{\alpha^{\prime},\iota}-\mathscr{L}_{\alpha^{\prime},\iota}x_{\alpha^{\prime}}^{\infty}\in W_{I_{\alpha^{\prime}},\emptyset}$ for each $\alpha^{\prime}<\alpha$ and $\iota\in S$, we deduce that $Y_{I_{\alpha},\emptyset}^{<}\setminus\\{x_{\alpha}^{\infty}\\}\subseteq W_{I_{\alpha},\emptyset}\cap\mathbf{E}_{I_{\alpha},\emptyset}^{<}$, which implies that $Y_{I_{\alpha},\emptyset}^{<}\setminus\\{x_{\alpha}^{\infty}\\}$ is a basis of $W_{I_{\alpha},\emptyset}\cap\mathbf{E}_{I_{\alpha},\emptyset}^{<}$ as $\\#\left(Y_{I_{\alpha},\emptyset}^{<}\setminus\\{x_{\alpha}^{\infty}\\}\right)=\\#\left(X_{I_{\alpha},\emptyset}\setminus\overline{X}_{\alpha}\right)-1=\operatorname{dim}_{E}\mathbf{E}_{I_{\alpha},\emptyset}^{<}-1=\operatorname{dim}_{E}W_{I_{\alpha},\emptyset}\cap\mathbf{E}_{I_{\alpha},\emptyset}^{<}.$ As $\mathbf{E}_{I_{\alpha},\emptyset}^{<}\not\subseteq W_{I_{\alpha},\emptyset}$, the inclusion $W_{I_{\alpha},\emptyset}\subseteq\mathbf{E}_{I_{\alpha},\emptyset}$ induces an isomorphism $W_{I_{\alpha},\emptyset}/W_{I_{\alpha},\emptyset}\cap\mathbf{E}_{I_{\alpha},\emptyset}^{<}\xrightarrow{\sim}\mathbf{E}_{I_{\alpha},\emptyset}/\mathbf{E}_{I_{\alpha},\emptyset}^{<}.$ Consequently, for each $\iota\in S$, $W_{I_{\alpha},\emptyset}$ contains a vector of the form $x_{\alpha,\iota}-x^{\prime}$ for $x^{\prime}$ a linear combination of vectors in $X_{I_{\alpha},\emptyset}\setminus\overline{X}_{\alpha}$, or equivalently a linear combination of vectors in $Y_{I_{\alpha},\emptyset}^{<}$. However, as $Y_{I_{\alpha},\emptyset}^{<}\setminus\\{x_{\alpha}^{\infty}\\}\subseteq W_{I_{\alpha},\emptyset}$, we may choose $x^{\prime}$ to have the form $\mathscr{L}_{\alpha,\iota}x_{\alpha}^{\infty}$ for some $\mathscr{L}_{\alpha,\iota}\in E$. Consequently, $\mathscr{L}_{\alpha,\iota}$ exists, and the proof of item (i) is finished by induction. Conversely, given a tuple $(\mathscr{L}_{\alpha,\iota})_{\alpha\in\Phi^{+},\iota\in S}\in U_{n,E}^{+}(E)$, we can define $\overline{Y}_{I_{\alpha},\emptyset}\stackrel{{\scriptstyle\textrm{\tiny{def}}}}{{=}}\\{x_{\alpha,\iota}-\mathscr{L}_{\alpha,\iota}x_{\alpha}^{\infty}\mid\iota\in S\\}$ for each $\alpha\in\Phi^{+}$. Then we consider the set $Z_{I_{\alpha},\emptyset}\stackrel{{\scriptstyle\textrm{\tiny{def}}}}{{=}}\bigsqcup_{\alpha^{\prime}\leq\alpha}\overline{Y}_{I_{\alpha^{\prime}},\emptyset}$ and define $W_{I_{\alpha},\emptyset}$ as the span of $Z_{I_{\alpha},\emptyset}$ for each $\alpha\in\Phi^{+}$. It suffices to check the second condition in Lemma 5.8 to conclude that $W_{\Delta_{n},\emptyset}$ is a Breuil-Schraen $\mathscr{L}$-invariant. This is clear as for each pair of subintervals $\emptyset\neq I_{2}\subsetneq I_{0}\subseteq\Delta_{n}$ with $I_{0}\setminus I_{2}$ being an interval, $\mathbf{E}_{I_{0},I_{2}}\otimes W_{I_{2},\emptyset}$ admits a basis of the form $(Z_{I_{0}\setminus I_{2},\emptyset}\sqcup\\{x_{\alpha_{I_{0}\setminus I_{2}}}^{\infty}\\})\otimes Z_{I_{2},\emptyset}\subseteq Z_{I_{0},\emptyset}.$ The proof is thus finished. ∎ ###### Remark 5.10. In the proof of Theorem 5.9, we have shown that $Y_{I_{\alpha},\emptyset}^{<}\setminus\\{x_{\alpha}^{\infty}\\}$ is a basis of $W_{I_{\alpha},\emptyset}\cap\mathbf{E}_{I_{\alpha},\emptyset}^{<}$ for each $\alpha\in\Phi^{+}$ with $\\#I_{\alpha}\geq 2$. This actually implies the following equality (5.2) $\mathbf{E}_{I_{\alpha},\emptyset}^{<}\cap W_{I_{\alpha},\emptyset}=\sum_{\alpha^{\prime}<\alpha}\mathbf{E}_{I_{\alpha},I_{\alpha^{\prime}}}\otimes W_{I_{\alpha^{\prime}},\emptyset}.$ The RHS is clearly contained in the LHS by condition (ii) of Definition 5.6.
To see that the LHS is contained in the RHS, it suffices to check this for an arbitrary element $y_{\alpha_{1}}\cup y_{\alpha_{2}}\cup\cdots\cup y_{\alpha_{t}}\in Y_{I_{\alpha},\emptyset}^{<}\setminus\\{x_{\alpha}^{\infty}\\}$, attached to some partition $\alpha=\alpha_{1}+\cdots+\alpha_{t}$ with $t\geq 2$. Then there clearly exists some $1\leq t^{\prime}\leq t$ such that $y_{\alpha_{t^{\prime}}}\neq x_{\alpha_{t^{\prime}}}^{\infty}$, which implies that $y_{\alpha_{t^{\prime}}}\in W_{I_{\alpha_{t^{\prime}}},\emptyset}$ and thus $y_{\alpha_{1}}\cup y_{\alpha_{2}}\cup\cdots\cup y_{\alpha_{t}}\in\mathbf{E}_{I_{\alpha},I_{\alpha_{t^{\prime}}}}\otimes W_{I_{\alpha_{t^{\prime}}},\emptyset}.$ Now we are ready to formulate our first main conjecture on the existence of a certain family of locally analytic representations parameterized by Breuil-Schraen $\mathscr{L}$-invariants. Note that $\mathcal{B}\mathcal{S}$ is an affine scheme isomorphic to $U_{n,E}^{+}$, and we write $\mathcal{O}(\mathcal{B}\mathcal{S})$ for its ring of global sections. Each closed point $x$ of $\mathcal{B}\mathcal{S}$ corresponds to a maximal ideal $\mathfrak{m}_{x}\subseteq\mathcal{O}(\mathcal{B}\mathcal{S})$ with residue field $E_{x}\stackrel{{\scriptstyle\textrm{\tiny{def}}}}{{=}}\mathcal{O}(\mathcal{B}\mathcal{S})/\mathfrak{m}_{x}$. ###### Conjecture 5.11. We fix a weight $\lambda\in X(T_{n,E})$ which is dominant with respect to $B_{n,E}^{+}$. There exists an $\mathcal{O}(\mathcal{B}\mathcal{S})\otimes_{E}D(G_{n})$-module $\mathcal{M}(\lambda)$, such that for each closed point $x$ of $\mathcal{B}\mathcal{S}$, we have $\mathcal{M}(\lambda)/\mathfrak{m}_{x}\mathcal{M}(\lambda)\cong\mathcal{W}_{x}(\lambda)^{\prime}$ for some admissible locally analytic representation $\mathcal{W}_{x}(\lambda)$ of $G_{n}$ satisfying 1. (i) $\mathcal{W}_{x}(\lambda)$ is of finite length and each of its Jordan–Hölder factors is Orlik-Strauch; 2. (ii) both the socle and the maximal locally algebraic subrepresentation of $\mathcal{W}_{x}(\lambda)$ are isomorphic to $\mathrm{St}_{n}^{\rm{alg}}(\lambda)$, and $\mathrm{St}_{n}^{\rm{alg}}(\lambda)$ has multiplicity one inside $\mathcal{W}_{x}(\lambda)$; 3. (iii) $\operatorname{dim}_{E}\mathrm{Hom}_{G_{n},\lambda}(\mathrm{St}_{n}^{\rm{an}}(\lambda),\mathcal{W}_{x}(\lambda))=1$, and any embedding $\mathrm{St}_{n}^{\rm{an}}(\lambda)\hookrightarrow\mathcal{W}_{x}(\lambda)$ induces a surjection $\widehat{\mathbf{E}}_{n}=\mathrm{Ext}_{G_{n},\lambda}^{n-1}(F_{n,\Delta_{n}}(\lambda),\mathrm{St}_{n}^{\rm{an}}(\lambda))\twoheadrightarrow\mathrm{Ext}_{G_{n},\lambda}^{n-1}(F_{n,\Delta_{n}}(\lambda),\mathcal{W}_{x}(\lambda))$ with kernel $W_{x}$, where $W_{x}\subseteq\widehat{\mathbf{E}}_{n}$ is the Breuil-Schraen $\mathscr{L}$-invariant attached to $x$. ###### Remark 5.12. Conjecture 5.11 is known for $n=2$ with $K=\mathbb{Q}_{p}$ by Breuil in [Bre04] and [Bre10], by Schraen and Ding for $n=2$ with general $K$ in [Schr10] and [Ding16], and for $n=3$ with $K=\mathbb{Q}_{p}$ by [Schr11], [Bre19], [BD20] and [Qian21]. We refer to Remark 1.4 for further details. ###### Remark 5.13. As in [Bre10], [Ding16], [Bre19], [BD20] and [Qian21], the representation $\mathcal{W}_{x}(\lambda)$ is expected to satisfy certain $p$-adic local-global compatibility. Let $F$ be a number field, $v|p$ a finite place of $F$ above $p$, $G_{/F}$ a reductive group satisfying $G(F_{v})\cong\mathrm{GL}_{n}(F_{v})$ and $U^{v}\subseteq G(\mathbf{A}^{\infty,v})$ a compact open subgroup.
We define $\widehat{S}(U^{v},\mathcal{O})$ to be the space of continuous $\mathcal{O}$-valued functions on the profinite set $(G(F)\backslash G(\mathbf{A}^{\infty}))/U^{v}$ and then define $\widehat{S}(U^{v},E)\stackrel{{\scriptstyle\textrm{\tiny{def}}}}{{=}}\widehat{S}(U^{v},\mathcal{O})\otimes_{\mathcal{O}}E$. The space $\widehat{S}(U^{v},\mathcal{O})$ admits commuting actions of $G(F_{v})\cong\mathrm{GL}_{n}(F_{v})$ and of a Hecke algebra $\mathbb{T}(U^{v})_{\mathcal{O}}$. Let $r:\mathrm{Gal}(\overline{F}/F)\rightarrow G^{\vee}(E)$ be a Galois representation with certain unramified conditions so that it determines a maximal ideal $\mathfrak{m}_{r}\subseteq\mathbb{T}(U^{v})_{\mathcal{O}}\otimes_{\mathcal{O}}E$. We consider the $\mathfrak{m}_{r}$-isotypic space $\widehat{S}(U^{v},E)[\mathfrak{m}_{r}]$ which is an admissible unitary Banach representation of $\mathrm{GL}_{n}(F_{v})$, whose set of locally analytic vectors $\widehat{S}(U^{v},E)[\mathfrak{m}_{r}]^{\rm{an}}$ is an admissible locally analytic representation of $\mathrm{GL}_{n}(F_{v})$. Suppose that $\mathrm{Hom}_{\mathrm{GL}_{n}(F_{v})}\left(\mathrm{St}_{n}^{\rm{alg}}(\lambda),\widehat{S}(U^{v},E)[\mathfrak{m}_{r}]^{\rm{an}}\right)\neq 0$ for some dominant weight $\lambda\in X(T_{n,E})$, which under favorable conditions on $G$ and $r$ might imply that $\rho\stackrel{{\scriptstyle\textrm{\tiny{def}}}}{{=}}r|_{\mathrm{Gal}(\overline{F_{v}}/F_{v})}$ is semi-stable with $N^{n-1}\neq 0$. Then we would expect the existence of an $x\in\mathcal{B}\mathcal{S}(E)$ uniquely determined by $\rho$ such that any embedding $\mathrm{St}_{n}^{\rm{alg}}(\lambda)\hookrightarrow\mathcal{W}_{x}(\lambda)$ induces an isomorphism $\mathrm{Hom}_{\mathrm{GL}_{n}(F_{v})}\left(\mathcal{W}_{x}(\lambda),\widehat{S}(U^{v},E)[\mathfrak{m}_{r}]^{\rm{an}}\right)\cong\mathrm{Hom}_{\mathrm{GL}_{n}(F_{v})}\left(\mathrm{St}_{n}^{\rm{alg}}(\lambda),\widehat{S}(U^{v},E)[\mathfrak{m}_{r}]^{\rm{an}}\right).$ ### 5.2. Breuil-Schraen $\mathscr{L}$-invariant and Galois representations In this section, we conjecture an isomorphism (see Conjecture 5.18) between the moduli of Breuil-Schraen $\mathscr{L}$-invariants and certain moduli of Galois representations _of Steinberg type_ (see Definition 5.16) via a universal Galois representation. For simplicity of presentation, we only treat the ordinary case, namely $\lambda=0$ and $V_{n,I}^{\rm{an}}\stackrel{{\scriptstyle\textrm{\tiny{def}}}}{{=}}V_{n,I}^{\rm{an}}(0)$ is the space of locally analytic vectors of a continuous generalized Steinberg representation $V_{n,I}^{\rm{cont}}$ defined in a way similar to (1.15). This saves us from considering $(\varphi,\Gamma)$-modules over the Robba ring. We write $G_{K}\stackrel{{\scriptstyle\textrm{\tiny{def}}}}{{=}}\mathrm{Gal}(\overline{K}/K)$ for the absolute Galois group of $K$ and $\varepsilon:G_{K}\hookrightarrow G_{\mathbb{Q}_{p}}\rightarrow\mathbb{Z}_{p}^{\times}$ for the cyclotomic character. We recall the following standard lemma. ###### Lemma 5.14. Let $\ell_{1},\ell_{2}$ be two integers. Then we have 1. (i) $\mathrm{Hom}_{G_{K}}(\varepsilon^{\ell_{1}},\varepsilon^{\ell_{2}})=0$ if $\ell_{1}\neq\ell_{2}$ and is one dimensional otherwise; 2. (ii) $\mathrm{Ext}_{G_{K}}^{1}(\varepsilon^{\ell_{1}},\varepsilon^{\ell_{2}})$ has dimension $[K:\mathbb{Q}_{p}]+1$ if $\ell_{1}\in\\{\ell_{2},\ell_{2}-1\\}$ and has dimension $[K:\mathbb{Q}_{p}]$ otherwise; 3. (iii) $\mathrm{Ext}_{G_{K}}^{2}(\varepsilon^{\ell_{1}},\varepsilon^{\ell_{2}})=0$ if $\ell_{1}\neq\ell_{2}-1$ and is one dimensional otherwise.
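Although we will not need it, let us recall why Lemma 5.14 holds (a standard computation): identifying $\mathrm{Ext}_{G_{K}}^{i}(\varepsilon^{\ell_{1}},\varepsilon^{\ell_{2}})$ with $H^{i}(G_{K},\varepsilon^{\ell_{2}-\ell_{1}})$, the local Euler characteristic formula gives $\operatorname{dim}_{E}H^{1}(G_{K},\varepsilon^{\ell_{2}-\ell_{1}})=[K:\mathbb{Q}_{p}]+\operatorname{dim}_{E}H^{0}(G_{K},\varepsilon^{\ell_{2}-\ell_{1}})+\operatorname{dim}_{E}H^{2}(G_{K},\varepsilon^{\ell_{2}-\ell_{1}})$, and by local Tate duality $H^{0}(G_{K},\varepsilon^{\ell_{2}-\ell_{1}})\neq 0$ if and only if $\ell_{1}=\ell_{2}$ while $H^{2}(G_{K},\varepsilon^{\ell_{2}-\ell_{1}})\neq 0$ if and only if $\ell_{1}=\ell_{2}-1$, each being one dimensional in the non-zero case.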
Given a filtered $E$-vector space $V$, we write $V^{\vee}\stackrel{{\scriptstyle\textrm{\tiny{def}}}}{{=}}\mathrm{Hom}_{E}(V,E)$ for its algebraic dual with the induced filtration. ###### Lemma 5.15. There exists a unique (up to isomorphism) indecomposable continuous $E$-representation $\mathbf{V}_{n}$ of $G_{K}$ that fits into the following short exact sequence $(\varepsilon^{n-1})^{\oplus\operatorname{dim}_{E}\widehat{\mathbf{E}}_{n}}\hookrightarrow\mathbf{V}_{n}\twoheadrightarrow\mathbf{V}_{n-1}$ for each $n\geq 2$. Here we understand $\mathbf{V}_{1}=1_{G_{K}}$ to be the trivial representation of $G_{K}$. ###### Proof. It suffices to prove that (5.3) $\operatorname{dim}_{E}\mathrm{Ext}_{G_{K}}^{1}(\mathbf{V}_{n-1},\varepsilon^{n-1})=\operatorname{dim}_{E}\widehat{\mathbf{E}}_{n}$ by induction on $n\geq 2$. Our inductive assumption (namely the existence of $\mathbf{V}_{1},\dots,\mathbf{V}_{n-1}$) gives an increasing filtration $0=\mathrm{Fil}_{0}(\mathbf{V}_{n-1})\subsetneq\mathrm{Fil}_{1}(\mathbf{V}_{n-1})\subsetneq\cdots\subsetneq\mathrm{Fil}_{n-1}(\mathbf{V}_{n-1})=\mathbf{V}_{n-1}$ such that $\mathbf{V}_{n-1}/\mathrm{Fil}_{\ell-1}(\mathbf{V}_{n-1})\cong\mathbf{V}_{n-\ell}$ and (5.4) $\mathrm{Fil}_{\ell}(\mathbf{V}_{n-1})/\mathrm{Fil}_{\ell-1}(\mathbf{V}_{n-1})\cong(\varepsilon^{n-\ell-1})^{\oplus\operatorname{dim}_{E}\widehat{\mathbf{E}}_{n-\ell}}$ for each $1\leq\ell\leq n-1$. Item (iii) of Lemma 5.14 together with a simple dévissage shows that (5.5) $\mathrm{Ext}_{G_{K}}^{2}(\mathbf{V}_{n-1}/\mathrm{Fil}_{\ell}(\mathbf{V}_{n-1}),\varepsilon^{n-1})=0$ for each $1\leq\ell\leq n-1$. Item (ii) of _loc. cit._ implies that $\operatorname{dim}_{E}\mathrm{Ext}_{G_{K}}^{1}(\mathrm{Fil}_{\ell}(\mathbf{V}_{n-1})/\mathrm{Fil}_{\ell-1}(\mathbf{V}_{n-1}),\varepsilon^{n-1})=[K:\mathbb{Q}_{p}]\operatorname{dim}_{E}\widehat{\mathbf{E}}_{n-\ell}$ for each $2\leq\ell\leq n-1$ and $\operatorname{dim}_{E}\mathrm{Ext}_{G_{K}}^{1}(\mathrm{Fil}_{1}(\mathbf{V}_{n-1}),\varepsilon^{n-1})=(1+[K:\mathbb{Q}_{p}])\operatorname{dim}_{E}\widehat{\mathbf{E}}_{n-1},$ which together with item (i) of _loc. cit._ and (5.5) inductively shows that (5.6) $\operatorname{dim}_{E}\mathrm{Ext}_{G_{K}}^{1}(\mathbf{V}_{n-1},\varepsilon^{n-1})=(1+[K:\mathbb{Q}_{p}])\operatorname{dim}_{E}\widehat{\mathbf{E}}_{n-1}+[K:\mathbb{Q}_{p}]\sum_{\ell=2}^{n-1}\operatorname{dim}_{E}\widehat{\mathbf{E}}_{n-\ell}.$ Now we recall the partition (5.7) $X_{I_{\alpha},\emptyset}=\overline{X}_{\alpha}\sqcup\bigsqcup_{\ell=i+1}^{j-1}X_{I_{\alpha},\emptyset}^{\ell}$ from the proof of Lemma 5.4 and take $\alpha=\alpha_{\Delta_{n}}=(1,n)$ (namely $i=1$ and $j=n$). The definition of $X_{\Delta_{n},\emptyset}^{\ell}$ forces $\\#X_{\Delta_{n},\emptyset}^{\ell}=\\#\overline{X}_{(i,\ell)}\\#X_{I_{(\ell,j)},\emptyset}$. As $\\#\overline{X}_{\beta}=1+[K:\mathbb{Q}_{p}]$ if $\beta$ is simple and $\\#\overline{X}_{\beta}=[K:\mathbb{Q}_{p}]$ otherwise, we deduce from (5.7) (and Lemma 5.4) that $\operatorname{dim}_{E}\widehat{\mathbf{E}}_{n}=\\#X_{\Delta_{n},\emptyset}=\\#\overline{X}_{(1,n)}+\sum_{\ell=2}^{n-1}\\#\overline{X}_{(1,\ell)}\\#X_{I_{(\ell,n)},\emptyset}\\\ =(1+[K:\mathbb{Q}_{p}])\operatorname{dim}_{E}\widehat{\mathbf{E}}_{n-1}+[K:\mathbb{Q}_{p}]\sum_{\ell=2}^{n-1}\operatorname{dim}_{E}\widehat{\mathbf{E}}_{n-\ell},$ which together with (5.6) clearly implies (5.3). ∎ ###### Definition 5.16. For each $n\geq 1$, we call $\mathbf{V}_{n}$ the _$n$ -th universal Steinberg representation of $G_{K}$_.
A continuous $\rho:G_{K}\rightarrow\mathrm{GL}_{n}(E)$ is called _of Steinberg type_ if it does not have a crystalline subquotient of dimension $\geq 2$ and admits an increasing filtration $0=\mathrm{Fil}_{0}(\rho)\subsetneq\mathrm{Fil}_{1}(\rho)\subsetneq\cdots\subsetneq\mathrm{Fil}_{n}(\rho)=\rho$ such that $\mathrm{Fil}_{\ell}(\rho)/\mathrm{Fil}_{\ell-1}(\rho)\cong\varepsilon^{n-\ell}$ for each $1\leq\ell\leq n$. The following result justifies our terminology in Definition 5.16. ###### Proposition 5.17. 1. (i) We have $\operatorname{dim}_{E}\mathrm{Hom}_{G_{K}}(\mathbf{V}_{n},\rho)=1$ for each $\rho:G_{K}\rightarrow\mathrm{GL}_{n}(E)$ which is of Steinberg type. 2. (ii) If $\rho:G_{K}\rightarrow\mathrm{GL}_{n}(E)$ does not have a crystalline subquotient of dimension $\geq 2$ and satisfies $\mathrm{Hom}_{G_{K}}(\mathbf{V}_{n},\rho)\neq 0$, then $\rho$ is of Steinberg type. ###### Proof. We first treat item (i). Let $\rho:G_{K}\rightarrow\mathrm{GL}_{n}(E)$ be of Steinberg type. Then it is maximally non-split and there exists a unique $(n-1)$-dimensional quotient $\rho^{\prime}$ of $\rho$ which is of Steinberg type. By induction on dimension we may assume that $\operatorname{dim}_{E}\mathrm{Hom}_{G_{K}}(\mathbf{V}_{n-1},\rho^{\prime})=1.$ Note that $\mathrm{Ext}_{G_{K}}^{1}(\rho^{\prime},\varepsilon^{n-1})\rightarrow\mathrm{Ext}_{G_{K}}^{1}(\mathbf{V}_{n-1},\varepsilon^{n-1})$ is an embedding (which is unique up to a scalar). Any quotient of $\mathbf{V}_{n}$ isomorphic to $\rho$ necessarily determines an $E$-line in $\mathrm{Ext}_{G_{K}}^{1}(\mathbf{V}_{n-1},\varepsilon^{n-1})$ which must land in $\mathrm{Ext}_{G_{K}}^{1}(\rho^{\prime},\varepsilon^{n-1})$. Such an $E$-line clearly exists and is unique, which implies that $\operatorname{dim}_{E}\mathrm{Hom}_{G_{K}}(\mathbf{V}_{n},\rho)=1$. For item (ii), it suffices to find the filtration as in Definition 5.16. The standard increasing filtration on $\mathbf{V}_{n}$ induces an $n$-step filtration on $\rho$. Our $\rho$ is clearly maximally non-split by assumption, which forces $\mathrm{Fil}_{\ell}(\rho)/\mathrm{Fil}_{\ell-1}(\rho)\cong\varepsilon^{n-\ell}$ for each $1\leq\ell\leq n$. ∎ We set $\mathcal{E}_{n}\stackrel{{\scriptstyle\textrm{\tiny{def}}}}{{=}}\mathrm{Ext}_{G_{K}}^{1}(\mathbf{V}_{n-1},\varepsilon^{n-1})^{\vee}$ for each $n\geq 2$. Then Lemma 5.15 (together with its proof) can be summarized as follows: * • The Galois representation $\mathbf{V}_{n}$ is defined inductively (for each $n\geq 2$) by the universal extension $\mathcal{E}_{n}\otimes\varepsilon^{n-1}\hookrightarrow\mathbf{V}_{n}\twoheadrightarrow\mathbf{V}_{n-1}$ where $G_{K}$ acts trivially on $\mathcal{E}_{n}$. * • The space $\mathcal{E}_{n}$ admits a canonical filtration $0=\mathrm{Fil}_{0}(\mathcal{E}_{n})\subsetneq\mathrm{Fil}_{1}(\mathcal{E}_{n})\subsetneq\cdots\subsetneq\mathrm{Fil}_{n-1}(\mathcal{E}_{n})=\mathcal{E}_{n}$ with $\mathrm{Fil}_{\ell}(\mathcal{E}_{n})\stackrel{{\scriptstyle\textrm{\tiny{def}}}}{{=}}\mathrm{Ext}_{G_{K}}^{1}(\mathrm{Fil}_{\ell}(\mathbf{V}_{n-1}),\varepsilon^{n-1})^{\vee}$ for each $1\leq\ell\leq n-1$. Moreover, we have a canonical isomorphism $\mathrm{Fil}_{\ell}(\mathcal{E}_{n})/\mathrm{Fil}_{\ell-1}(\mathcal{E}_{n})\cong\mathrm{Ext}_{G_{K}}^{1}(\varepsilon^{n-\ell-1},\varepsilon^{n-1})^{\vee}\otimes\mathcal{E}_{n-\ell}\cong\mathrm{Ext}_{G_{K}}^{1}(1,\varepsilon^{\ell})^{\vee}\otimes\mathcal{E}_{n-\ell}$ for each $1\leq\ell\leq n-1$.
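Before comparing the two sides, here is a concrete numerical check (our own, for $K=\mathbb{Q}_{p}$): Lemma 5.4 gives $\operatorname{dim}_{E}\widehat{\mathbf{E}}_{1}=1$, $\operatorname{dim}_{E}\widehat{\mathbf{E}}_{2}=2$, $\operatorname{dim}_{E}\widehat{\mathbf{E}}_{3}=5$ and $\operatorname{dim}_{E}\widehat{\mathbf{E}}_{4}=13$, and (5.3) says that $\operatorname{dim}_{E}\mathcal{E}_{n}=\operatorname{dim}_{E}\widehat{\mathbf{E}}_{n}$ for each $n\geq 2$; for instance $\mathbf{V}_{3}$ is a successive extension with graded pieces $(\varepsilon^{2})^{\oplus 5}$, $\varepsilon^{\oplus 2}$ and $1_{G_{K}}$, hence has dimension $8$.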
Recall that $\widehat{\mathbf{E}}_{n}$ satisfies conditions that are parallel to those of $\mathcal{E}_{n}$ above. More precisely, the space $\widehat{\mathbf{E}}_{n}$ admits a canonical filtration $0=\mathrm{Fil}_{0}(\widehat{\mathbf{E}}_{n})\subsetneq\mathrm{Fil}_{1}(\widehat{\mathbf{E}}_{n})\subsetneq\cdots\subsetneq\mathrm{Fil}_{n-1}(\widehat{\mathbf{E}}_{n})=\widehat{\mathbf{E}}_{n}$ as defined in the proof of Lemma 5.4. Moreover, we have a canonical isomorphism $\mathrm{Fil}_{\ell}(\widehat{\mathbf{E}}_{n})/\mathrm{Fil}_{\ell-1}(\widehat{\mathbf{E}}_{n})\cong P^{2\ell-1}(\mathfrak{g}_{\ell,E})\otimes\widehat{\mathbf{E}}_{n-\ell}$ for each $2\leq\ell\leq n-1$ (with $P^{2\ell-1}(\mathfrak{g}_{\ell,E})$ as in Theorem 2.3), and a canonical isomorphism $\mathrm{Fil}_{1}(\widehat{\mathbf{E}}_{n})\cong\mathrm{Hom}(K^{\times},E)\otimes\widehat{\mathbf{E}}_{n-1}$. Therefore it seems plausible that there should be a natural isomorphism $\widehat{\mathbf{E}}_{n}\cong\mathcal{E}_{n}$ of filtered $E$-vector spaces for each $n\geq 1$, and moreover such isomorphism should be of geometric nature. Following [Bre04] and [Schr11], it is natural to expect that such isomorphisms $\widehat{\mathbf{E}}_{n}\cong\mathcal{E}_{n}$ might be realized via the so- called Drinfeld upper half spaces $\mathcal{X}$. Recall that $\mathcal{X}$ is a rigid $K$-analytic space satisfying $\mathcal{X}(\mathbb{C}_{p})=\mathbb{P}^{n-1}(\mathbb{C}_{p})\setminus\bigcup_{H\in\mathscr{H}}H(\mathbb{C}_{p})$ where $\mathscr{H}$ is the set of hyperplanes of $\mathbb{P}^{n-1}_{K}$ defined over $K$. The $\mathrm{GL}_{n/K}$-action on $\mathbb{P}^{n-1}_{K}$ clearly induces a $G_{n}=\mathrm{GL}_{n}(K)$-action on $\mathcal{X}(\mathbb{C}_{p})$. We write $R\Gamma_{\rm{dR}}(\mathcal{X})\stackrel{{\scriptstyle\textrm{\tiny{def}}}}{{=}}[\mathcal{O}(\mathcal{X})\rightarrow\Omega^{1}(\mathcal{X})\rightarrow\cdots\rightarrow\Omega^{n-1}(\mathcal{X})]$ for the de Rham complex of $\mathcal{X}$ (with coefficients $E$), which is an object in the derived category $\mathcal{M}(G_{n})$ attached to the abelian category $\mathrm{Mod}_{D(G_{n})}$ of (abstract) $D(G_{n})$-modules. (As $\mathcal{X}$ is Stein, we only need to consider global sections of various $\Omega^{i}$.) We write $K_{0}$ for the maximal unramified subfield of $K$. According to Hyodo-Kato isomorphism (see the $\iota_{HK}$ in Theorem 1.8 of [CDN20] and [GK05]), there exists a complex $R\Gamma_{\rm{HK}}(\mathcal{X})$ of $K_{0}$-vector spaces with suitable $(\varphi,N)$-action (on the complex) satisfying $N\varphi=p\varphi N$, as well as a canonical isomorphism $R\Gamma_{\rm{HK}}(\mathcal{X})\otimes_{K_{0}}E\cong R\Gamma_{\rm{dR}}(\mathcal{X})$ in $\mathcal{M}(G_{n})$. Consequently, $R\Gamma_{\rm{dR}}(\mathcal{X})$ is an object in $\mathcal{M}(G_{n})$ equipped with a $(\varphi,N)$-action (that commutes with $D(G_{n})$-action), which induces a $(\varphi,N)$-action on the functor $\mathrm{Hom}_{\mathcal{M}(G_{n})}(-,R\Gamma_{\rm{dR}}(\mathcal{X}))$. We write $\mathbf{D}_{\rm{st}}$ for Fontaine’s (covariant) functor (see [Fon94]) that sends a semi-stable $G_{K}$-representation to a filtered $(\varphi,N)$-module (with coefficients extended to $E$). Motivated by Proposition 6.21, Théorème 6.23 and Remarque 6.24 of [Schr11], we have the following conjecture. ###### Conjecture 5.18. 
There exists an isomorphism of filtered $(\varphi,N)$-modules (5.8) $\mathrm{Hom}_{\mathcal{M}(G_{n})}((\mathrm{St}_{n}^{\rm{an}})^{\prime}[1-n],R\Gamma_{\rm{dR}}(\mathcal{X}))\cong\mathbf{D}_{\rm{st}}(\varepsilon^{1-n}\otimes_{E}\mathbf{V}_{n}).$ In the following, we use the term “motivic” to indicate that a certain map is compatible with the conjectural $p$-adic Langlands correspondence. ###### Remark 5.19. 1. (i) The existence of isomorphism (5.8) follows from [Bre04] if $n=2$ and $K=\mathbb{Q}_{p}$, from Théorème 0.1 of [Schr10] if $n=2$ with general $K$, and from Proposition 6.21 of [Schr11] if $n=3$ and $K=\mathbb{Q}_{p}$. In a forthcoming work, we plan to prove the _existence_ of one such isomorphism, which depends on detailed computations of $\mathrm{Ext}$ groups in $\mathcal{M}(G_{n})$. 2. (ii) As $\mathbf{V}_{n}$ has lots of automorphisms, such an isomorphism (5.8), if it exists, has many choices. But we expect that there exists a unique such (5.8) which is “motivic” (for example, can be interpreted as a certain $p$-adic regulator map). 3. (iii) For each $x\in\mathcal{B}\mathcal{S}(E)$, we recall $\mathcal{W}_{x}(\lambda)$ from Conjecture 5.11 and write $\mathcal{W}_{x}\stackrel{{\scriptstyle\textrm{\tiny{def}}}}{{=}}\mathcal{W}_{x}(0)$ for short. Each choice of (5.8) would induce a bijection $x\mapsto\rho_{x}$ between $\mathcal{B}\mathcal{S}(E)$ and the set of $\rho_{x}:G_{K}\rightarrow\mathrm{GL}_{n}(E)$ which are of Steinberg type, such that the following diagram commutes: $\begin{array}{ccc}\mathbf{D}_{\rm{st}}(\varepsilon^{1-n}\otimes_{E}\mathbf{V}_{n})&\longrightarrow&\mathbf{D}_{\rm{st}}(\varepsilon^{1-n}\otimes_{E}\rho_{x})\\\ {\scriptstyle\cong}\Big\uparrow&&\Big\uparrow\\\ \mathrm{Hom}_{\mathcal{M}(G_{n})}((\mathrm{St}_{n}^{\rm{an}})^{\prime}[1-n],R\Gamma_{\rm{dR}}(\mathcal{X}))&\longrightarrow&\mathrm{Hom}_{\mathcal{M}(G_{n})}(\mathcal{W}_{x}^{\prime}[1-n],R\Gamma_{\rm{dR}}(\mathcal{X}))\end{array}$ where the left vertical arrow is given by (5.8). We expect that the “motivic” choice of (5.8) should induce a bijection $x\mapsto\rho_{x}$ which is compatible with $p$-adic local-global compatibility (see Remark 5.13). 4. (iv) We have $H^{\ell}_{\rm{dR}}(\mathcal{X})\cong(V_{n,\\{1,2,\cdots,n-1-\ell\\}}^{\infty})^{\prime}$ for each $0\leq\ell\leq n-1$ by [SS91]. By the same argument as in Section 6.1 of [Schr11], based on [Dat06] and [Or05], there exists a splitting (5.9) $R\Gamma_{\rm{dR}}(\mathcal{X})\cong\bigoplus_{\ell=0}^{n-1}H^{\ell}_{\rm{dR}}(\mathcal{X})[-\ell].$ As the endomorphism algebra of $\bigoplus_{\ell=0}^{n-1}H^{\ell}_{\rm{dR}}(\mathcal{X})[-\ell]$ is easily shown to be isomorphic to the algebra of upper triangular nilpotent $n\times n$ matrices (see Corollaire 6.2 of [Schr11]), the splitting (5.9) is far from being canonical.
Nevertheless, (5.9) induces an isomorphism of $E$-vector spaces $\mathrm{Hom}_{\mathcal{M}(G_{n})}((\mathrm{St}_{n}^{\rm{an}})^{\prime}[1-n],R\Gamma_{\rm{dR}}(\mathcal{X}))\cong\bigoplus_{\ell=0}^{n-1}\mathrm{Hom}_{\mathcal{M}(G_{n})}((\mathrm{St}_{n}^{\rm{an}})^{\prime}[1-n],H^{\ell}_{\rm{dR}}(\mathcal{X})[-\ell])\cong\bigoplus_{\ell=0}^{n-1}\widehat{\mathbf{E}}_{n-\ell},$ which together with (5.8) induces an isomorphism $\widehat{\mathbf{E}}_{n-\ell}\cong\mathcal{E}_{n-\ell}$ for each $0\leq\ell\leq n-1$ (where we identify $\mathcal{E}_{n-\ell}$ with the canonical $\varepsilon^{n-\ell-1}$-isotypic sub-quotient of $\mathbf{V}_{n}$ by their definition). We expect such an isomorphism $\widehat{\mathbf{E}}_{n-\ell}\cong\mathcal{E}_{n-\ell}$ to respect filtration on each side, regardless of the choice of the splitting (5.9). This suggests that there should be a “motivic” isomorphism (5.10) $P^{2n-1}(\mathfrak{g}_{n,E})\cong\mathrm{Ext}_{G_{K}}^{1}(1,\varepsilon^{n})^{\vee}$ for each $n\geq 2$. There already exists a natural candidate for (5.10), see [HK11] and [Sou81]. 5. (v) Let $\Lambda(G_{n})$ be the dual of $p$-adic continuous functions on $G_{n}$ and we write $M(G_{n})$ for the derived category attached to the abelian category of (abstract) modules over $\Lambda(G_{n})$. Motivated by [CDN20], one may wish that there exists a “geometrically constructed” object $\mathbf{M}\in M(G_{n})$, equipped with a commuting continuous action of $G_{K}$, such that there exists a unique “motivic” $G_{K}$-equivariant isomorphism $\mathrm{Hom}_{M(G_{n})}((\mathrm{St}_{n}^{\rm{cont}})^{\prime}[1-n],\mathbf{M})\cong\varepsilon^{1-n}\otimes_{E}\mathbf{V}_{n}.$ 6. (vi) Given a central division algebra $D$ over $K$ with invariant $\frac{1}{n}$, Scholze ([Sch18]) has constructed a cohomological covariant $\delta$-functor $\\{\mathcal{S}^{i},i\geq 0\\}$ from the category of smooth $\mathcal{O}$-torsion $\mathcal{O}[G_{n}]$-modules to smooth $\mathcal{O}$-torsion $\mathcal{O}[D^{\times}]$-modules which carry a continuous and commuting action of $G_{K}$. More precisely, for each $\pi$, $\mathcal{S}^{i}(\pi)$ is defined as the cohomology group $H^{i}_{\acute{e}t}(\mathbb{P}^{n-1}_{\mathbb{C}_{p}},\mathcal{F}_{\pi})$, where $\mathcal{F}_{\pi}$ is a certain Weil-equivariant sheaf on the adic space $\mathbb{P}^{n-1}_{\mathbb{C}_{p}}$. His construction is expected to realize both $p$-adic local Langlands and Jacquet-Langlands correspondence. Moreover, Scholze has computed $\mathcal{S}^{0}(\pi)$ and showed that $\mathcal{S}^{i}(\pi)=0$ for each $i>2(n-1)$. Given an admissible unitary $E$-Banach representation $\Pi$ of $G_{n}$, one is particularly interested in the following limit $\varprojlim_{r}\mathcal{S}^{n-1}(\Pi/p^{r}\Pi)$ which is an admissible unitary $E$-Banach representation of $D^{\times}$, carrying a continuous and commutinng action of $G_{K}$. Concerning the relation between Scholze’s functor and cohomology of Drinfeld space, our Conjecture 5.18 seems to suggest that we could have a $G_{K}$-equivariant isomorphism (5.11) $\varprojlim_{r}\mathcal{S}^{n-1}(\mathrm{St}_{n}^{\rm{cont}}/p^{r}\mathrm{St}_{n}^{\rm{cont}})^{D^{\times}}\cong\varepsilon^{1-n}\otimes_{E}\mathbf{V}_{n}$ where we take $D^{\times}$-invariant on the LHS. Again, there should be many isomorphisms of the form (5.11), but there should be a unique one which is “motivic”. The mod $p$ version of this isomorphism holds when $n=2$ and $K=\mathbb{Q}_{p}$ according to Theorem 8.34 of [HW22]. 7. 
(vii) The Galois representation $\mathbf{V}_{n}$ might seem to be a $p$-adic counterpart of the mixed Tate motives considered in [De89] and [DG05] (with thanks to Ma Luo, Liang Xiao and Daxin Xu for guiding me to those references). Note that Deligne–Goncharov mentioned in [DG05] a result of Beilinson (see Proposition 3.4 of _loc.it._) for general connected Hausdorff topological spaces, which has a similar form compared with (5.8). In particular, we expect certain versions of $p$-adic polylogarithm functions to appear in an explicit description of the desired “motivic” isomorphism (5.8). ## References * [Ast] L. Berger, C. Breuil, P. Colmez, éditeurs, _Représentations $p$-adiques de groupes $p$-adiques III: méthodes globales et géométriques_, Astérisque 331, 2010. * [BD20] C. Breuil, Y. Ding, _Higher $\mathfrak{L}$-invariants for $\mathrm{GL}_{3}(\mathbb{Q}_{p})$ and local-global compatibility_, Cambridge J. of Math. 8, 2020, 775-951. * [BD19] C. Breuil, Y. Ding, _Sur un problème de compatibilité local-global localement analytique_ , to appear in Memoirs of Amer. Math. Soc. * [Bre04] C. Breuil, _Invariant L et série spéciale $p$-adique_, Ann. Scient. de l’E.N.S. 37, 2004, 559-610. * [Bre10] C. Breuil, _Série spéciale $p$-adique et cohomologie étale complétée_, Astérisque 331, 2010, 65-115. * [Bre19] C. Breuil, _$\mathrm{Ext}^{1}$ localment analytique et compatibilité local-global_, American J. of Math. 141, 2019, 611-703. * [BT82] R. Bott, L. Tu, _Differential forms in algebraic topology_ , GTM, vol 82. * [BZ77] I. N. Bernstein, A. V. Zelevinsky, _Induced representations of reductive $p$-adic groups I_, Ann. Sci. École Norm. Sup. (4), 10(4): 441-472, 1977. * [CDN20] P. Colmez, G. Dospinescu, W. Niziol, _Cohomology of $p$-adic Stein spaces_, Invent. Math. 219, 2020, no. 3, 873-985. * [Col10] P. Colmez, _Représentations de $\mathrm{GL}_{2}(\mathbb{Q}_{p})$ et $(\varphi,\Gamma)$-modules_, Astérisque 330 (2010), 281-509. * [CW74] W. Casselman, D. Wigner, _Continuous cohomology and a conjecture of Serre’s_ , Invent. Math. 25 (1974), 199-211. * [Dat06] J. F. Dat, _Espaces symétriques de Drinfeld et correspondance de Langlands locale_ , Ann. Sci. École Norm. Sup. 39 (2006), 1-74. * [De89] P. Deligne, _Le groupe fondamental de la droite projective moins trois points_ , Galois groups over $\mathbb{Q}$, MSRI publ., vol. 16, Springer-Verlag, 1989, 79-313. * [DG05] P. Deligne, A. Goncharov, _Groupes fondamentaux motiviques de Tate mixte_ , Ann. Sci, École Norm. Sup. (4), 38(1): 1-56, 2005. * [Ding16] Y. Ding, _$\mathscr{L}$ -invariants, partially de Rham families, and local-global compatibility_, Forum of Math. Sigma, Vol 4, e13, 2016, 49 pages. * [Ding19] Y. Ding, _Simple $\mathscr{L}$-invariants for $\mathrm{GL}_{n}$_, Transactions of the A.M.S., Vol. 372, No. 11, 2019, 7993-8042. * [Fon94] J. M. Fontaine, _Représentations $p$-adiques semi-stables_, Astérisque 223 (1994), 113-184. * [Eme11] M. Emerton, _Local-global compatibility in the $p$-adic Langlands programme for $\mathrm{GL}_{2}/\mathbb{Q}$_, preprint. * [Geh21] L. Gehrmann, _Automorphic $L$-invariants for reductive groups_, J. Reine. Angew. Math. 779 (2021), 57-103. * [GK05] E. Grosse-Klönne, _Frobenius and monodromy operators in rigid analysis, and Drinfeld’s symmetric space_ , J. Algebraic Geom. 14 (2005), 391-437. * [HK11] A. Huber, G. Kings, _A $p$-adic analogue of the Borel regulator and the Bloch-Kato exponential map_, J. Inst. Math. Jussieu, 10(1), 2011, 149-190. * [HW22] Y. Hu, H. 
Wang, _On some mod $p$ representations of quaternion algebra over $\mathbb{Q}_{p}$_, preprint. * [JL21] A. Jena, A. Lahiri, _Translation functors for locally analytic representations_ , preprint. * [Koh07] J. Kohlhaase, _Invariant distribution on $p$-adic analytic groups_, Duke Math. J. 137. No.1, 2007, 19-62. * [Koh11] J. Kohlhaase, _The cohomology of locally analytic representations_ , J. Reine Angew. Math. (Crelle) 651, 2011, 187-240. * [Kos50] J-L. Koszul, _Homologie et cohomologie des algèbres de Lie_ , Bull. Soc. Math. France 78 (1950), 65-127. * [La65] M. Lazard, _Groupes analytiques $p$-adiques_, Publ. Math. I.H.E.S. No. 26, 1965. * [MTT86] B. Mazur, J. Tate, J. Teitelbaum, _On $p$-adic analogues of the conjectures of Birch and Swinnerton-Dyer_, Invent. Math. 84, 1986, 1-48. * [Or05] S. Orlik, _On extensions of generalized Steinberg representations_ , J. Algebra 293 (2005), 611-630. * [OS13] S. Orlik, B. Schraen, _The Jordan–Hölder series of the locally analytic Steinberg representation_ , Documenta Math. 19, 2014, 647-671. * [OS15] S. Orlik, M. Strauch, _On Jordan–Hölder series of some locally analytic representations_ , J. Amer. Math. Soc. 28, 2015, 99-157. * [Qian21] Z. Qian, _Dilogarithm and higher $\mathscr{L}$-invariants for $\mathrm{GL}_{3}(\mathbb{Q}_{p})$_, Represent. Theory 25 (2021), 344-411. * [Ren] D. Renard, _Représentations des groupes réductifs $p$-adiques_, Cours spécialisés, Vol 17, S.M.F. * [S02] P. Schneider, _Nonarchimedean Functional Analysis_ , Springer Monographs in Mathematics. * [Sch18] P. Scholze, _On the $p$-adic cohomology of the Lubin-Tate tower_, Ann. Sci. Éc. Norm. Supér. (4) 51 (2018), no. 4, 811-863. * [Schr10] B. Schraen, _Représentations $p$-adiques de $\mathrm{GL}_{2}(L)$ et catégories dérivées_, Israel Journal of Math., vol. 176 (2010), 307-362. * [Schr11] B. Schraen, _Représentation localment analytiques de $\mathrm{GL}_{3}(\mathbb{Q}_{p})$_, Ann. Scient. É.N.S 44, 2011, 43-145. * [Sou81] C. Soulé, _On higher $p$-adic regulators_, Algebraic $K$-theory Evanston 1980, Springer, Berlin, Heidelberg, 1981, 372-401. * [SS91] P. Schneider, U. Stuhler, _The cohomology of $p$-adic symmetric spaces_, Invent. Math. 105, 1991, 47-122. * [ST03] P, Schneider, J, Teitelbaum, _Algebras of $p$-adic distributions and admissible representations_, Invent. math. 153, 145–196 (2003). * [ST05] P, Schneider, J, Teitelbaum, _Duality for admissible locally analytic representations_ , Represent. Theory 9 (2005), 297–326.
# Quantum Phases from Competing van der Waals and Dipole-Dipole Interactions of Rydberg Atoms

Zeki Zeybek <EMAIL_ADDRESS> The Hamburg Centre for Ultrafast Imaging, Universität Hamburg Luruper Chaussee 149, 22761 Hamburg, Germany Zentrum für Optische Quantentechnologien, Universität Hamburg Luruper Chaussee 149, 22761 Hamburg, Germany Rick Mukherjee <EMAIL_ADDRESS> Zentrum für Optische Quantentechnologien, Universität Hamburg Luruper Chaussee 149, 22761 Hamburg, Germany Peter Schmelcher The Hamburg Centre for Ultrafast Imaging, Universität Hamburg Luruper Chaussee 149, 22761 Hamburg, Germany Zentrum für Optische Quantentechnologien, Universität Hamburg Luruper Chaussee 149, 22761 Hamburg, Germany

###### Abstract

Competing short- and long-range interactions represent distinguished ingredients for the formation of complex quantum many-body phases. In this regard, Rydberg atoms are promising as their excited manifold of states has both density-density and exchange interactions whose strength and range can vary considerably. Focusing on one-dimensional systems, we leverage the van der Waals and dipole-dipole interactions of the Rydberg atoms to obtain the zero-temperature phase diagram for a uniform chain and a dimer model. For the uniform chain, we can influence the boundaries between ordered phases and the Luttinger liquid, while for the dimerized case, a new type of bond-order-density-wave phase is identified, all of which highlights the versatility of the Rydberg platform in studying physics involving short- and long-ranged interactions simultaneously.

Introduction.— The interplay of short- and long-range interactions gives rise to diverse phenomena with implications in different areas such as the study of electronic dynamics and stability in proteins Sheu _et al._ (2015); Gnandt and Koslowski (2019); Miyazawa and Jernigan (2003); Alshareedah _et al._ (2019), self-assembly in polymers Patsahan _et al._ (2021) and exotic quantum phases in condensed matter physics Baćani _et al._ (2017); Iglói _et al._ (2018); Nishino _et al._ (2019); Azouz _et al._ (2022); Zhu _et al._ (2022). However, the study of these phenomena in natural biochemical and solid-state setups is challenging due to the limited control and the finite-temperature environments. This has led to a rapid growth in the use of ultra-cold systems for quantum simulation of many-body problems Lewenstein _et al._ (2007); Bloch _et al._ (2008, 2012). These range from the highly tunable short-range interactions with atoms in optical lattices Landig _et al._ (2016); Gross and Bloch (2017) to long-range interacting dipolar gases Trefzger _et al._ (2011); Baier _et al._ (2016), polar molecules Yan _et al._ (2013); Hazzard _et al._ (2014); Doçaj _et al._ (2016); Lemeshko _et al._ (2012) and trapped ions Monroe _et al._ (2021); Blatt and Roos (2012); Roy _et al._ (2019). Although trapped ions have been used to simulate effective interactions that have power-law decay $1/r^{\alpha}$, such that $\alpha$ can be varied from $0$ to $3$, they can be remarkably sensitive to external fields and noise.

Figure 1: (a) Diagram depicting a uniform lattice of neutral atoms treated as two-level systems consisting of highly excited Rydberg states. A microwave laser with Rabi frequency $\Omega_{\mu w}$ and detuning $\Delta_{\mu w}$ couples the levels.
Atoms in the same Rydberg state experience vdW interactions with strengths $V^{s}_{ij}$, $V^{p}_{ij}$ while $V^{sp}_{ij}$ tunes the dipolar exchange interaction between different levels. The two-level system encodes the presence (absence) of a boson at a given site defined by $\hat{b}^{\dagger}(\hat{b})$. (b) Dimerized chain with alternating intra-cell $a_{1}$ and inter-cell $a_{2}$ lattice constants with corresponding hopping ($t,t\alpha$) and off-site interactions ($V,V\alpha^{2}$).

Platforms based on neutral Rydberg atoms have proven to be highly practical quantum simulators Weimer _et al._ (2010); Mukherjee _et al._ (2011); Löw _et al._ (2012); Browaeys _et al._ (2016); Morgado and Whitlock (2021) as their large dipole moments provide tunable strong interactions that range from dipole-dipole ($1/r^{3}$ scaling) to van der Waals ($1/r^{6}$ scaling). However, most applications of quantum simulation with Rydberg atoms have focused on exploiting either the van der Waals interaction Bernien _et al._ (2017); Lienhard _et al._ (2018); Ebadi _et al._ (2021); Semeghini _et al._ (2021); Scholl _et al._ (2021); Samajdar _et al._ (2020, 2021) or the dipole-dipole interaction de Léséleuc _et al._ (2019); Li _et al._ (2021); Bettelli _et al._ (2013); Abumwis _et al._ (2020), but rarely both together. Rydberg dressing Johnson and Rolston (2010); Mukherjee _et al._ (2016) allows for a certain flexibility in controlling short- and long-range interactions simultaneously, with applications in many-body physics Henkel _et al._ (2012); Mukherjee (2019), but it can be experimentally challenging to realize Balewski _et al._ (2014); Zeiher _et al._ (2016). In this letter, we propose an alternative approach to studying short- and long-range physics by combining the effects of the van der Waals and dipole-dipole interactions of Rydberg atoms. Using one-dimensional (1D) uniform and dimerized lattices, we study the ground-state phase diagram and unveil the flexibility in accessing different regimes of the ordered and liquid phases. For the uniform chain, the competition between the interactions is reflected in the competing boundaries between the gapless Luttinger liquid (LL) and the gapped density-wave (DW) ordered phases. In the dimerized chain, apart from realizing the individual phases of bond-order (BO) and DW, we find a unique bond-order-density-wave (BODW) phase that has not been previously explored in conventional dimerized models Hayashi _et al._ (2022).

Model and Hamiltonian.—We discuss the Rydberg setup and its mapping to an extended Bose-Hubbard model which distinguishes itself from existing bosonic models Rossini and Fazio (2012); Maik _et al._ (2013); Samajdar _et al._ (2020, 2021). Both uniform and dimerized lattices are considered, the latter being known for rich physics involving topological and insulating phases Su _et al._ (1979); Chen _et al._ (2010); Grusdt _et al._ (2013); Sugimoto _et al._ (2019); Fraxanet _et al._ (2022); Hayashi _et al._ (2022). As illustrated in Fig. 1, the setup consists of a linear chain of trapped atoms which could have either uniform or dimerized lattice configurations. Each atom is a two-level system made of $\ket{ns}$ and $\ket{n^{\prime}p}$ Rydberg states, where $n,n^{\prime}$ are principal quantum numbers.
Unlike most Rydberg simulators with one ground state and a Rydberg level, the pair of Rydberg states considered here allows the system to have two types of interactions which differ in range and character: (i) the short-range van der Waals (vdW) interactions between the $ns-ns$ and $n^{\prime}p-n^{\prime}p$ pairs and (ii) the long-range dipolar interaction which causes a state exchange between atoms in different Rydberg levels $ns-n^{\prime}p$. The Rydberg interactions along with the microwave laser coupling between the $\ket{ns}$ and $\ket{n^{\prime}p}$ levels are all schematically shown in Fig. 1(a). The corresponding atomic Hamiltonian describing the full setup with uniform lattice spacing $a$ is given as $\displaystyle\hat{H}_{A}$ $\displaystyle=\sum_{i}\Big{[}\frac{\Omega_{\mu w}}{2}(\hat{\sigma}^{sp}_{i}+\hat{\sigma}^{ps}_{i})-\Delta_{\mu w}\hat{\sigma}^{pp}_{i}\Big{]}+V^{p}\sum_{i<j}\frac{\hat{\sigma}^{pp}_{i}\hat{\sigma}^{pp}_{j}}{\absolutevalue{i-j}^{6}}$ $\displaystyle+V^{s}\sum_{i<j}\frac{\hat{\sigma}^{ss}_{i}\hat{\sigma}^{ss}_{j}}{\absolutevalue{i-j}^{6}}+V^{sp}\sum_{i<j}\Big{(}\frac{\hat{\sigma}^{sp}_{i}\hat{\sigma}^{ps}_{j}}{\absolutevalue{i-j}^{3}}+\text{h.c.}\Big{)}.$ (1) Here $\hat{\sigma}^{\alpha\beta}_{i}=\ket{\alpha}_{i}\bra{\beta}$ is the projection operator to the relevant atomic state with $\alpha,\beta\in\\{\ket{ns},\ket{n^{\prime}p}\\}$ at site $i$. $V^{p}=C^{p}_{6}/a^{6}$ and $V^{s}=C^{s}_{6}/a^{6}$ are the strengths of the vdW interactions, where $C^{s}_{6}$ and $C^{p}_{6}$ are the dispersion coefficients. $V^{sp}=C_{3}/a^{3}$ is the dipole-dipole interaction strength with $C_{3}$ as the exchange coefficient. There are experimental realizations of the above Hamiltonian de Léséleuc _et al._ (2019); Scholl _et al._ (2022). In order to represent Eq. (1) in the Bose-Hubbard picture, the occupation of state $\ket{n^{\prime}p}$ at site $i$ is associated with the presence of a boson at that site and denoted by $\ket{\bullet}_{i}$, while $\ket{\circ}_{i}$ means the absence of a boson, which implies the occupation of state $\ket{ns}$. With these definitions, an arbitrary state $\ket{ns~{}n^{\prime}p~{}n^{\prime}p~{}ns\dots}$ is written as $\ket{\circ\bullet\bullet\circ\dots}$. Since each atom cannot have more than one excitation $\ket{n^{\prime}p}$, having two particles at the same site is prohibited, which imposes a hard-core constraint. Defining $\hat{b}^{\dagger}(\hat{b})$ as the bosonic creation (annihilation) operator, $\hat{H}_{A}$ is re-written as follows, $\displaystyle\hat{H}_{eBH}$ $\displaystyle=\sum_{i<j}t_{ij}(\hat{b}^{\dagger}_{i}\hat{b}_{j}+\text{h.c.})+\sum_{i<j}V_{ij}\hat{n}_{i}\hat{n}_{j}$ $\displaystyle-\sum_{i}(\Delta_{\mu w}+\mathcal{I}_{i})\hat{n}_{i}+\frac{\Omega_{\mu w}}{2}\sum_{i}(\hat{b}^{\dagger}_{i}+\hat{b}_{i}),$ (2) where we used the mapping $\hat{\sigma}^{ps}\rightarrow\hat{b}^{\dagger}$, $\hat{\sigma}^{pp}\rightarrow\hat{n}=\hat{b}^{\dagger}\hat{b}$ and $\hat{\sigma}^{ss}\rightarrow\mathbb{1}-\hat{n}$ with $(\hat{b}_{i}^{\dagger})^{2}=0$. The first term in Eq. (2) is the long-range hopping $t_{ij}=V^{sp}/\absolutevalue{i-j}^{3}$, which is encoded by the dipolar exchange interaction. $V_{ij}=V/\absolutevalue{i-j}^{6}$ is the repulsive off-site density interaction, where $V=V^{s}+V^{p}=C_{6}/a^{6}$ and $C_{6}=C^{s}_{6}+C^{p}_{6}$ is the combined dispersion coefficient.
The chemical potential $(\Delta_{\mu w}+\mathcal{I}_{i})$ determines the density of excitations $\ket{n^{\prime}p}$ (number of bosons) in a lattice. The site-dependent contribution $\mathcal{I}_{i}=\sum_{j\neq i}\frac{V^{s}}{\absolutevalue{i-j}^{6}}$ is an energy offset for a fixed value of the chemical potential. The Hamiltonian $\hat{H}_{eBH}$ differs from other extended Bose-Hubbard models Rossini and Fazio (2012); Maik _et al._ (2013); Samajdar _et al._ (2020, 2021) in several aspects: $(i)$ the existence of longer-range hopping and interactions and $(ii)$ the last term in $\hat{H}_{eBH}$ breaks the global U$(1)$ symmetry, causing the number of bosons to be a non-conserved quantity. These aspects will play a role in the phase diagrams obtained later.

Figure 2: Phase diagrams showing the ground-state entanglement entropy $S_{vN}$ of $\hat{H}_{eBH}$ in the $(\Delta_{\mu w},\Omega_{\mu w})$ parameter space for system size $L=121$ with varying $t/V$ in (a), (b) and (c) respectively. The dark-shaded blue lobes in the top left part of the phase diagrams represent a vanishing $S_{vN}$ which correspond to different gapped ordered phases $\mathbb{Z}_{q=2,3,4}$. The yellow-green regions represent finite $S_{vN}$ corresponding to the gapless Luttinger liquid (LL) phase. For large values of $\Omega_{\mu w}$, one obtains the disordered phase which is shown as light-shaded blue. Verification of the individual phases is provided in the SM.

Fig. 1(b) depicts the dimerized configuration formed by two sub-lattices with alternating lattice constants $a_{1}$ and $a_{2}$. The dimerized version of Eq. (2) is provided in the SM (see the Supplemental Material for (1) physical parameters for the realization of the setup, (2) the mapping of the atomic Hamiltonian to the extended Bose-Hubbard model including the dimerized case, (3-4) additional analysis on the verification of individual phases and (5) numerical method details); its many-body energy spectrum for $\Omega_{\mu w}=0$ consists of many distinct manifolds, each of which is characterized by a fixed number of bosons. For large negative values of the microwave detuning $\Delta_{\mu w}$, one obtains a completely empty lattice (all atoms in the $\ket{ns}$ state). As $\Delta_{\mu w}$ increases, the number of bosons added to the lattice also increases. Similar to the experiment of de Léséleuc _et al._ (2019), an adiabatic sweep through the parameters $(\Omega_{\mu w}(t),\Delta_{\mu w}(t))$ can take the lattice system of size $L$ from one manifold with zero bosons to another manifold of $N$ bosons, giving a filling $\rho=N/L$. After reaching a given filling $\rho$, the microwave laser is switched off and the resulting Hamiltonian reads $\displaystyle\hat{H}_{dim}$ $\displaystyle=t\sum_{i\in odd}(\hat{b}^{\dagger}_{i}\hat{b}_{i+1}+\text{h.c.})+t\alpha\sum_{i\in even}(\hat{b}^{\dagger}_{i}\hat{b}_{i+1}+\text{h.c.})$ $\displaystyle+V\sum_{i\in odd}\hat{n}_{i}\hat{n}_{i+1}+V\alpha^{2}\sum_{i\in even}\hat{n}_{i}\hat{n}_{i+1}+\hat{H}_{LR}.$ (3) The odd and even sums represent the intra- and inter-cell terms respectively, and the dimerization constant $\alpha=(a_{1}/a_{2})^{3}$ controls the degree of dimerization in the lattice. $t=-C_{3}/a_{1}^{3}$ and $V=C_{6}/a_{1}^{6}$ are the intra-cell hopping and off-site interaction strengths respectively, with $C_{6},C_{3}>0$.
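To give a feel for the energy scales entering Eq. (3), the short Python sketch below (our own illustration, not code from this work) builds the intra-cell couplings and the full distance-dependent coupling matrices of a small dimerized chain. The dispersion coefficients are taken from the $\{60S,61P\}$ row of Table S1 in the SM, while the spacings $a_{1},a_{2}$ are hypothetical values chosen so that $\alpha\approx 0.4$.

```python
import numpy as np

# Illustrative values: C3 and C6 = C6^s + C6^p for the {60S, 61P} pair of Table S1 (SM);
# a1, a2 are hypothetical spacings chosen so that alpha = (a1/a2)^3 is close to 0.4.
C3, C6 = 0.04, 135.29 + 3.15       # GHz.um^3 and GHz.um^6
a1, a2 = 4.0, 5.4                  # intra-/inter-cell spacings in um
L = 12                             # sites in this toy chain

t = -C3 / a1**3                    # intra-cell hopping of Eq. (3)
V = C6 / a1**6                     # intra-cell off-site interaction of Eq. (3)
alpha = (a1 / a2)**3               # dimerization constant

# Site positions and the coupling matrices t_ij = -C3/r^3, V_ij = C6/r^6
# that enter H_dim together with its long-range tail H_LR.
pos = np.cumsum([0.0] + [a1 if i % 2 == 0 else a2 for i in range(L - 1)])
r = np.abs(pos[:, None] - pos[None, :])
off = ~np.eye(L, dtype=bool)       # exclude the diagonal (no on-site terms)
t_ij = np.zeros((L, L)); t_ij[off] = -C3 / r[off]**3
V_ij = np.zeros((L, L)); V_ij[off] = C6 / r[off]**6

print(f"t = {t:.2e} GHz, V = {V:.2e} GHz, V/|t| = {V/abs(t):.1f}, alpha = {alpha:.2f}")
```

The printed ratio $V/|t|$ is the quantity that fixes which regime of the phase diagrams in Figs. 2 and 3 is being probed.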
$\hat{H}_{dim}$ deviates from existing dimer models Grusdt _et al._ (2013); Sugimoto _et al._ (2019); Fraxanet _et al._ (2022); Hayashi _et al._ (2022) in the sense that it has both local and long-range hopping and off-site interactions, defined under $\hat{H}_{LR}$ (explicitly given in the SM). The fact that it has dimerization in the interaction and not just in the hopping will play a crucial role in the phase diagrams.

Results.— Figures 2 and 3 are the ground-state phase diagrams for $\hat{H}_{eBH}$ and $\hat{H}_{dim}$ obtained using finite-size DMRG White (1992, 1993). More details about the numerics are in the SM. In earlier works for a uniform lattice Weimer and Büchler (2010); Sela _et al._ (2011); Bernien _et al._ (2017); Yu _et al._ (2022), one finds the LL phase to be always dominated by the ordered phases for the entire region of allowed laser parameters. In contrast, here we show that the boundaries between ordered and LL phases are easily adjustable, and we find scenarios where the LL even dominates. This is possible due to the competition of the vdW and dipolar exchange terms. The same flexibility in the boundaries of the BODW phases is seen in the dimerized case. Moreover, the existence of a BODW phase at $\rho=1/3$ filling is shown, which does not occur in conventional models Hayashi _et al._ (2022).

Figure 3: (a) Gapped phases of $\hat{H}_{dim}$ are shown as a function of $\alpha$ with fixed $V/t=200$ for system size $L=240$. The red dashed-dotted line defines the lower boundary at $\rho=1/2$ with the vertical dashed black line that separates the BO and DW regions. The green dotted and the blue dashed lines determine the boundaries of the BODW phases at $\rho=1/3$ and $\rho=1/4$ respectively. Density and bond formation in each phase are symbolically represented with partially filled circles (superposition of $ns$ and $n^{\prime}p$ states) and curved lines between sites respectively. (b) Expectation values of the bond energy (red, diamond) and density (blue, square) operators for the BODW phase at $\rho=1/3$ for $\alpha=0.4$ and $V/t=200$ are displayed. The corresponding structure factors are shown in (c). (d,e) Comparison of the gap $\delta_{L}$ for BODW phases at fillings $\rho=1/4$ and $\rho=1/3$ with different types of couplings in Eq. (3). The gap $\delta_{L}$ is plotted as a function of $V/t$ with fixed $\alpha$ and system size $L=240$. Cases with (squares) and without (circles) dimerization in the interaction are considered for different ranges of interactions: nearest-neighbour (dashed line with NN) and long-range (solid line with LR).

In Fig. 2(a-c), we compute the half-chain bipartite entanglement entropy $S_{vN}\equiv-\Tr(\rho_{r}\ln{\rho_{r}})$ of the ground state over the parameter space $(\Omega_{\mu w},\Delta_{\mu w})$ for fixed hopping $t$, where $\rho_{r}$ is the reduced density matrix of half of the chain. This has been performed for a varying relative strength of the hopping $t$ as shown in Fig. 2(a-c). DW phases are many-body ground states that are ordered (crystalline) and are characterized by a unit cell $p/q$, where $p$ denotes the number of bosons and $q$ is the size of the unit cell. For example, the circle markers in Fig. 2(a-c) correspond to phases that break $\mathbb{Z}_{2}$ translational symmetry with $p/q=1/2$. The $\mathbb{Z}_{2}$ phase is described by the state $\ket{\bullet\circ\bullet\dots\bullet\circ\bullet}$, which is a product state and thus possesses a vanishing $S_{vN}$.
Similarly, higher period DW phases ($\mathbb{Z}_{q=3,4}$) are also shown in Fig. 2(a) and (b). Although both the hopping term $t$ and the Rabi coupling $\Omega_{\mu w}$ introduce quantum fluctuations to the system, they have different effects on the ordered states. For example, if $\Omega_{\mu w}\gg t$, then one obtains a disordered state. However, when $\Omega_{\mu w}$ becomes comparable to $t$, then we have either an ordered phase or a LL phase depending on the value of $\Delta_{\mu w}$. Close to the classical regime ($\Omega_{\mu w}=t\simeq 0$), the range of the ordered phase $\mathbb{Z}_{q}$ in terms of the detuning is given as $\delta\Delta_{\mu w}\sim V_{i,i+q-1}+\mathcal{O}(V_{i,i+q})$ Bak and Bruinsma (1982); Sela _et al._ (2011). Thus for low values of $(t,\Omega_{\mu w})$, one finds a host of ordered phases $\mathbb{Z}_{q=2,3,4}$ as seen in Fig. 2(a). As $t$ increases such that $t\geq V_{i,i+q-1}$, the ordered phases with unit cells larger than $q$ get washed out and instead the LL phase takes over as seen in Fig. 2(b-c). This condition is satisfied as the vdW interaction has the combined effect of the $ns$ and $n^{\prime}p$ states for different $n$ and $n^{\prime}$ (see the SM). Universal properties of the LL phase such as the power-law decay of correlations and the central charge $c=1$ Calabrese and Cardy (2009); Pollmann _et al._ (2009) have been verified in the SM.

Figure 3(a) is obtained by determining the single-particle excitation gap $\delta_{L}(\alpha,V)=\mu^{+}(\alpha,V)-\mu^{-}(\alpha,V)$ as a function of $\alpha$ with fixed $V$. Thus, the extent of the gapped phases in the phase diagram scales as $\delta_{L}$. Here $\mu^{+}=E(N+1)-E(N)$ and $\mu^{-}=E(N)-E(N-1)$ are the chemical potentials that define the boundaries of the gapped phases for a given filling $\rho$, and $E(N)$ is the ground state energy for a system of $N$ bosons defined by $\hat{H}_{dim}$. In Fig. 3(a), four types of gapped phases (DW, BO, BODW1, BODW2) are obtained for different values of the filling $\rho$ in the $(\alpha,\mu)$ parameter space with constant $V$. DW phases are the ordered phases as discussed before, while the BO phase is a product of independent dimers, $\prod_{i}(\frac{\hat{b}^{\dagger}_{2i}+\hat{b}^{\dagger}_{2i+1}}{\sqrt{2}})\ket{\circ\circ\dots\circ}$, where each dimer corresponds to two sites sharing a single delocalized boson. The bond-order-density-wave (BODW) phase has the characteristics of both bond ordering and density-wave ordering. Numerical verification of the individual phases is provided in the SM. In Fig. 3(a), at $\rho=1/2$, the gap remains open for all values of $\alpha$ and hosts two different ordered phases, BO and DW. Low values of $\alpha$ are indicative of a highly dimerized lattice where the nearest-neighbour processes within a unit cell dominate over long-range processes such as inter-cell hopping and extended off-site interactions. At $\rho=1/2$ filling, this means significant energy costs in adding/removing bosons, which leads to a gapped region corresponding to the BO phase as seen in Fig. 3(a). As $\alpha$ is increased, the long-range effects of hopping and interaction become relevant. But if $V/t$ is sufficiently large, which is the case in Fig. 3(a), then the repulsive vdW interaction leads to a DW phase for the $\rho=1/2$ filling. For any other filling $\rho\neq 1/2$, the gap closes as $\alpha\rightarrow 0$ as seen in Fig.
3(a), which implies that there is no energy cost in adding/removing bosons; the free movement of bosons across the lattice (the LL phase) is then favored over the BO phase, as seen for the $\rho=1/4,1/3$ fillings. As $\alpha$ increases, long-range processes become dominant, and for sufficiently large $V/t$, BODW phases are obtained for the $\rho=1/4,1/3$ fillings, in contrast to the DW phase that we get for the $\rho=1/2$ filling. BODW phases arise from the cumulative effect of dominant long-range repulsive interactions at large values of $\alpha$ and the constraint of sharing a fixed number of bosons across the lattice due to the fixed filling fraction. The BODW1 phase for $\rho=1/4$ consists of dimers in alternating unit cells and is described by the state $\prod_{i}(\frac{\hat{b}^{\dagger}_{4i}+\hat{b}^{\dagger}_{4i+1}}{\sqrt{2}})\ket{\circ\circ\dots\circ}$, while in the BODW2 phase for $\rho=1/3$ a pair of dimers is shared between every three sites. The latter BODW2 phase has not been explored before. In Figs. 3(b,c), the characterization of the BODW phase at $\rho=1/3$ is shown (see also the SM). The BO nature is probed with the bond order structure factor $\mathcal{S}_{BO}(k)=(1/L^{2})\sum_{i,j}e^{ikr}\braket{\hat{B}_{i}\hat{B}_{j}}$, where $\hat{B}_{i}=\hat{b}^{\dagger}_{i}\hat{b}_{i+1}+\text{h.c.}$ is the bond energy operator, while the density wave structure factor is $\mathcal{S}_{DW}(k)=(1/L^{2})\sum_{i,j}e^{ikr}\braket{\hat{n}_{i}\hat{n}_{j}}$. In Fig. 3(b), oscillations in $\hat{n}_{i}$ imply that the bosons primarily occupy every third site on the chain (analogous to $\mathbb{Z}_{q=3}$), thus giving the DW character of the phase. BO oscillations point to a state with $p=2$ bonds for a unit cell of size $q=3$, as two bonds form among three sites. These findings are also reflected in the peaks of the structure factors $\mathcal{S}_{DW}(k)$ and $\mathcal{S}_{BO}(k)$ at $k=2\pi/3$ as shown in Fig. 3(c). Figure 3(d) shows the gap $\delta_{L}$ for the BODW1 phase as a function of $V/t$ with fixed dimerization $\alpha=0.4$ for different cases. One finds large energy gaps for the model with dimerized long-range interactions when compared to the almost vanishing gap for the non-dimerized and nearest-neighbour dimerized interacting models. A similar analysis applies to BODW2 in Fig. 3(e). Beyond-nearest-neighbour contributions, along with dimerization in the interaction, lead to a constrained packing of bosons favoring BODW phases at different fillings. This highlights the key merits of our setup when compared to existing dimer models where only the hopping term is dimerized Chen _et al._ (2010); Grusdt _et al._ (2013); Sugimoto _et al._ (2019); Fraxanet _et al._ (2022); Hayashi _et al._ (2022). Experimentally relevant parameters to observe these phases are discussed in the SM.

Conclusion and outlook.— Many-body systems with interactions operating over different length scales host a wide range of phenomena in nature. This work promotes the quantum simulation of such phenomena using Rydberg atoms, where the interplay between vdW and dipolar interactions provides a long-range dimerized Hubbard model. The ground state phase diagram of this model is characteristically distinct from conventional models, highlighting the benefit of studying short- and long-range interactions together.
Future works utilizing Rydberg simulators to probe emerging quantum phases from competing interactions include the investigation of higher dimensional lattices, different geometries and out-of-equilibrium dynamics. ###### Acknowledgements. This work is funded by the Cluster of Excellence “CUI: Advanced Imaging of Matter” of the Deutsche Forschungsgemeinschaft (DFG) - EXC 2056 - Project ID 390715994. ## References * Sheu _et al._ (2015) S.-Y. Sheu, E. W. Schlag, and D.-Y. Yang, Phys. Chem. Chem. Phys. 17, 23088 (2015). * Gnandt and Koslowski (2019) D. Gnandt and T. Koslowski, Phys. Chem. Chem. Phys. 21, 18595 (2019). * Miyazawa and Jernigan (2003) S. Miyazawa and R. L. Jernigan, Proteins 50, 35 (2003). * Alshareedah _et al._ (2019) I. Alshareedah, T. Kaur, J. Ngo, H. Seppala, L.-A. D. Kounatse, W. Wang, M. M. Moosa, and P. R. Banerjee, J. Am. Chem. Soc. 141, 14593 (2019). * Patsahan _et al._ (2021) O. Patsahan, M. Litniewski, and A. Ciach, Soft Matter 17, 2883 (2021). * Baćani _et al._ (2017) M. Baćani, M. Novak, F. Orbanić, K. Prša, I. Kokanović, and D. Babić, Phys. Rev. B 96, 035104 (2017). * Iglói _et al._ (2018) F. Iglói, B. Blaß, G. m. H. Roósz, and H. Rieger, Phys. Rev. B 98, 184415 (2018). * Nishino _et al._ (2019) M. Nishino, C. Enachescu, and S. Miyashita, Phys. Rev. B 100, 134414 (2019). * Azouz _et al._ (2022) Y. Azouz, M. Benhamida, and K. Zanat, J. Magn. Magn. Mater. 559, 169518 (2022). * Zhu _et al._ (2022) X. Zhu, Y. Huang, H. Guo, and S. Feng, Phys. Rev. B 106, 075109 (2022). * Lewenstein _et al._ (2007) M. Lewenstein, A. Sanpera, V. Ahufinger, B. Damski, A. Sen(De), and U. Sen, Adv. Phys. 56, 243 (2007). * Bloch _et al._ (2008) I. Bloch, J. Dalibard, and W. Zwerger, Rev. Mod. Phys. 80, 885 (2008). * Bloch _et al._ (2012) I. Bloch, J. Dalibard, and S. Nascimbène, Nat. Phys. 8, 267 (2012). * Landig _et al._ (2016) R. Landig, L. Hruby, N. Dogra, M. Landini, R. Mottl, T. Donner, and T. Esslinger, Nature 532, 476 (2016). * Gross and Bloch (2017) C. Gross and I. Bloch, Science 357, 995 (2017). * Trefzger _et al._ (2011) C. Trefzger, C. Menotti, B. Capogrosso-Sansone, and M. Lewenstein, J. Phys. B: At. Mol. Opt. Phys. 44, 193001 (2011). * Baier _et al._ (2016) S. Baier, M. J. Mark, D. Petter, K. Aikawa, L. Chomaz, Z. Cai, M. Baranov, P. Zoller, and F. Ferlaino, Science 352, 201 (2016). * Yan _et al._ (2013) B. Yan, S. A. Moses, B. Gadway, J. P. Covey, K. R. A. Hazzard, A. M. Rey, D. S. Jin, and J. Ye, Nature 501, 521 (2013). * Hazzard _et al._ (2014) K. R. A. Hazzard, B. Gadway, M. Foss-Feig, B. Yan, S. A. Moses, J. P. Covey, N. Y. Yao, M. D. Lukin, J. Ye, D. S. Jin, and A. M. Rey, Phys. Rev. Lett. 113, 195302 (2014). * Doçaj _et al._ (2016) A. Doçaj, M. L. Wall, R. Mukherjee, and K. R. A. Hazzard, Phys. Rev. Lett. 116, 135301 (2016). * Lemeshko _et al._ (2012) M. Lemeshko, R. V. Krems, and H. Weimer, Phys. Rev. Lett. 109, 035301 (2012). * Monroe _et al._ (2021) C. Monroe, W. C. Campbell, L.-M. Duan, Z.-X. Gong, A. V. Gorshkov, P. W. Hess, R. Islam, K. Kim, N. M. Linke, G. Pagano, P. Richerme, C. Senko, and N. Y. Yao, Rev. Mod. Phys. 93, 025001 (2021). * Blatt and Roos (2012) R. Blatt and C. F. Roos, Nat. Phys. 8, 277 (2012). * Roy _et al._ (2019) N. Roy, A. Sharma, and R. Mukherjee, Phys. Rev. A 99, 052342 (2019). * Weimer _et al._ (2010) H. Weimer, M. Müller, I. Lesanovsky, P. Zoller, and H. P. Büchler, Nat. Phys. 6, 382 (2010). * Mukherjee _et al._ (2011) R. Mukherjee, J. Millen, R. Nath, M. P. A. Jones, and T. Pohl, J. Phys. B: At. Mol. Opt. Phys. 44, 184010 (2011). 
* Löw _et al._ (2012) R. Löw, H. Weimer, J. Nipper, J. B. Balewski, B. Butscher, H. P. Büchler, and T. Pfau, J. Phys. B: At. Mol. Opt. Phys. 45, 113001 (2012). * Browaeys _et al._ (2016) A. Browaeys, D. Barredo, and T. Lahaye, J. Phys. B: At. Mol. Opt. Phys. 49, 152001 (2016). * Morgado and Whitlock (2021) M. Morgado and S. Whitlock, AVS Quantum Sci. 3, 023501 (2021). * Bernien _et al._ (2017) H. Bernien, S. Schwartz, A. Keesling, H. Levine, A. Omran, H. Pichler, S. Choi, A. S. Zibrov, M. Endres, M. Greiner, V. Vuletić, and M. D. Lukin, Nature 551, 579 (2017). * Lienhard _et al._ (2018) V. Lienhard, S. de Léséleuc, D. Barredo, T. Lahaye, A. Browaeys, M. Schuler, L.-P. Henry, and A. M. Läuchli, Phys. Rev. X 8, 021070 (2018). * Ebadi _et al._ (2021) S. Ebadi, T. T. Wang, H. Levine, A. Keesling, G. Semeghini, A. Omran, D. Bluvstein, R. Samajdar, H. Pichler, W. W. Ho, S. Choi, S. Sachdev, M. Greiner, V. Vuletić, and M. D. Lukin, Nature 595, 227 (2021). * Semeghini _et al._ (2021) G. Semeghini, H. Levine, A. Keesling, S. Ebadi, T. T. Wang, D. Bluvstein, R. Verresen, H. Pichler, M. Kalinowski, R. Samajdar, A. Omran, S. Sachdev, A. Vishwanath, M. Greiner, V. Vuletić, and M. D. Lukin, Science 374, 1242 (2021). * Scholl _et al._ (2021) P. Scholl, M. Schuler, H. J. Williams, A. A. Eberharter, D. Barredo, K.-N. Schymik, V. Lienhard, L.-P. Henry, T. C. Lang, T. Lahaye, A. M. Läuchli, and A. Browaeys, Nature 595, 233 (2021). * Samajdar _et al._ (2020) R. Samajdar, W. W. Ho, H. Pichler, M. D. Lukin, and S. Sachdev, Phys. Rev. Lett. 124, 103601 (2020). * Samajdar _et al._ (2021) R. Samajdar, W. W. Ho, H. Pichler, M. D. Lukin, and S. Sachdev, Proc. Natl. Acad. Sci. USA 118, e2015785118 (2021). * de Léséleuc _et al._ (2019) S. de Léséleuc, V. Lienhard, P. Scholl, D. Barredo, S. Weber, N. Lang, H. P. Büchler, T. Lahaye, and A. Browaeys, Science 365, 775 (2019). * Li _et al._ (2021) K. Li, J.-H. Wang, Y.-B. Yang, and Y. Xu, Phys. Rev. Lett. 127, 263004 (2021). * Bettelli _et al._ (2013) S. Bettelli, D. Maxwell, T. Fernholz, C. S. Adams, I. Lesanovsky, and C. Ates, Phys. Rev. A 88, 043436 (2013). * Abumwis _et al._ (2020) G. Abumwis, M. T. Eiles, and A. Eisfeld, Phys. Rev. Lett. 124, 193401 (2020). * Johnson and Rolston (2010) J. E. Johnson and S. L. Rolston, Phys. Rev. A 82, 033412 (2010). * Mukherjee _et al._ (2016) R. Mukherjee, T. C. Killian, and K. R. A. Hazzard, Phys. Rev. A 94, 053422 (2016). * Henkel _et al._ (2012) N. Henkel, F. Cinti, P. Jain, G. Pupillo, and T. Pohl, Phys. Rev. Lett. 108, 265301 (2012). * Mukherjee (2019) R. Mukherjee, Phys. Rev. A 100, 013403 (2019). * Balewski _et al._ (2014) J. B. Balewski, A. T. Krupp, A. Gaj, S. Hofferberth, R. Löw, and T. Pfau, New J. Phys. 16, 063012 (2014). * Zeiher _et al._ (2016) J. Zeiher, R. van Bijnen, P. Schauß, S. Hild, J. Choi, T. Pohl, I. Bloch, and C. Gross, Nat. Phys. 12, 1095 (2016). * Hayashi _et al._ (2022) A. Hayashi, S. Mondal, T. Mishra, and B. P. Das, Phys. Rev. A 106, 013313 (2022). * Rossini and Fazio (2012) D. Rossini and R. Fazio, New J. Phys. 14, 065012 (2012). * Maik _et al._ (2013) M. Maik, P. Hauke, O. Dutta, M. Lewenstein, and J. Zakrzewski, New J. Phys. 15, 113041 (2013). * Su _et al._ (1979) W. P. Su, J. R. Schrieffer, and A. J. Heeger, Phys. Rev. Lett. 42, 1698 (1979). * Chen _et al._ (2010) B.-L. Chen, S.-P. Kou, Y. Zhang, and S. Chen, Phys. Rev. A 81, 053608 (2010). * Grusdt _et al._ (2013) F. Grusdt, M. Höning, and M. Fleischhauer, Phys. Rev. Lett. 110, 260405 (2013). * Sugimoto _et al._ (2019) K. Sugimoto, S. Ejima, F. 
Lange, and H. Fehske, Phys. Rev. A 99, 012122 (2019). * Fraxanet _et al._ (2022) J. Fraxanet, D. González-Cuadra, T. Pfau, M. Lewenstein, T. Langen, and L. Barbiero, Phys. Rev. Lett. 128, 043402 (2022). * Scholl _et al._ (2022) P. Scholl, H. J. Williams, G. Bornet, F. Wallner, D. Barredo, L. Henriet, A. Signoles, C. Hainaut, T. Franz, S. Geier, A. Tebben, A. Salzinger, G. Zürn, T. Lahaye, M. Weidemüller, and A. Browaeys, PRX Quantum 3, 020303 (2022). * (56) See Supplemental Material for (1) Physical parameters for the realization of the setup, (2) Mapping of the atomic Hamiltonian to extended Bose-Hubbard model including the dimerized case, (3-4) Additional analysis on verification of individual phases and (5) Numerical method details. * White (1992) S. R. White, Phys. Rev. Lett. 69, 2863 (1992). * White (1993) S. R. White, Phys. Rev. B 48, 10345 (1993). * Weimer and Büchler (2010) H. Weimer and H. P. Büchler, Phys. Rev. Lett. 105, 230403 (2010). * Sela _et al._ (2011) E. Sela, M. Punk, and M. Garst, Phys. Rev. B 84, 085434 (2011). * Yu _et al._ (2022) X.-J. Yu, S. Yang, J.-B. Xu, and L. Xu, Phys. Rev. B 106, 165124 (2022). * Bak and Bruinsma (1982) P. Bak and R. Bruinsma, Phys. Rev. Lett. 49, 249 (1982). * Calabrese and Cardy (2009) P. Calabrese and J. Cardy, J. Phys. A 42, 504005 (2009). * Pollmann _et al._ (2009) F. Pollmann, S. Mukerjee, A. M. Turner, and J. E. Moore, Phys. Rev. Lett. 102, 255701 (2009). * Saffman _et al._ (2010) M. Saffman, T. G. Walker, and K. Mølmer, Rev. Mod. Phys. 82, 2313 (2010). * de Léséleuc (2018) S. de Léséleuc, Ph.d. thesis, Université Paris-Saclay (ComUE) (2018). * Šibalić _et al._ (2017) N. Šibalić, J. Pritchard, C. Adams, and K. Weatherill, Comput. Phys. Commun. 220, 319 (2017). * Weber _et al._ (2017) S. Weber, C. Tresp, H. Menke, A. Urvoy, O. Firstenberg, H. P. Büchler, and S. Hofferberth, J. Phys. B: At. Mol. Opt. Phys. 50, 133001 (2017). * Hauschild and Pollmann (2018) J. Hauschild and F. Pollmann, SciPost Phys. Lect. Notes , 5 (2018). Supplemental Material: Quantum Phases from Competing van der Waals and Dipole-Dipole Interactions of Rydberg Atoms ## 1 Realization of the Rydberg Hamiltonian In this section, the experimental realization of the Rydberg Hamiltonian is discussed. In the main article, $\hat{H}_{A}$ (see Eq. 1 in the main article) is a collection of two-level systems consisting of Rydberg states $\\{\ket{r},\ket{r^{\prime}}\\}$. As a precursor, the setup will have all atoms in their electronic ground state $\ket{gg\dots g}$ which is coupled to the Rydberg level $\ket{rr\dots r}$ using either a single-photon or two-photon excitation scheme Browaeys _et al._ (2016); Löw _et al._ (2012) with effective Rabi frequency $\Omega$ and effective detuning $\Delta$. The Hamiltonian describing the precursor setup is as follows, $\hat{H}_{0}=\frac{\Omega}{2}\sum(\ket{r}_{i}\bra{g}+\ket{g}_{i}\bra{r})-\Delta\sum_{i}\ket{r}_{i}\bra{r}+\sum_{i,j}V_{ij}(\ket{r}_{i}\bra{r}\otimes\ket{r}_{j}\bra{r}),$ (S1) where $V_{ij}=C^{rr}_{6}/(a\absolutevalue{i-j})^{6}$ is the interaction between two atoms in Rydberg state $\ket{r}$ (which in our case is the $nS$ state) and $a$ is the lattice constant. The Hamiltonian $\hat{H}_{A}$ is realized by coupling two Rydberg levels $\ket{r}\leftrightarrow\ket{r^{\prime}}$ with a microwave laser with parameters $\Omega_{\mu w}$ and $\Delta_{\mu w}$ as mentioned in the main article. 
Since all atoms need to be excited into a Rydberg state, the lattice spacing $a$ must be chosen larger than the blockade radius, $r_{b}=(C^{nS-nS}_{6}/2\Omega)^{1/6}$. But large values of the lattice spacing imply that dipole-dipole interactions become dominant, whereas in this work we are mainly interested in achieving $V\gg t$ for finite values of $t$. For this purpose, we choose different principal quantum numbers for the two Rydberg states, $\ket{r}=\ket{nS}$ and $\ket{r^{\prime}}=\ket{n^{\prime}P}$, and take advantage of the different scaling laws $V\propto n^{11}/a^{6}$ and $t\propto n^{4}/a^{3}$. For $nS$ states Saffman _et al._ (2010) and $nP$ states with $n>42$ de Léséleuc (2018), the vdW interactions are known to be repulsive. Some of the prototypical dispersion coefficients for Rb atoms useful for this work are provided in Table S1.

Table S1: Dispersion coefficients for different $n,n^{\prime}$ for $\prescript{87}{}{\text{Rb}}$ using Šibalić _et al._ (2017); Weber _et al._ (2017).

$\ket{\downarrow}$ | $\ket{\uparrow}$ | $C_{3}[GHz.\mu m^{3}]$ | $C_{6}^{s}[GHz.\mu m^{6}]$ | $C_{6}^{p}[GHz.\mu m^{6}]$
---|---|---|---|---
$\ket{60S_{1/2},1/2}$ | $\ket{59P_{1/2},-1/2}$ | $2.51$ | $135.29$ | $1.89$
$\ket{60S_{1/2},1/2}$ | $\ket{60P_{1/2},-1/2}$ | $3.04$ | $135.29$ | $2.68$
$\ket{60S_{1/2},1/2}$ | $\ket{61P_{1/2},-1/2}$ | $0.04$ | $135.29$ | $3.15$
$\ket{90S_{1/2},1/2}$ | $\ket{89P_{1/2},-1/2}$ | $13.85$ | $16500.87$ | $521$
$\ket{90S_{1/2},1/2}$ | $\ket{90P_{1/2},-1/2}$ | $16.35$ | $16500.87$ | $597$
$\ket{90S_{1/2},1/2}$ | $\ket{91P_{1/2},-1/2}$ | $0.23$ | $16500.87$ | $682$

Assuming a Rabi frequency $\Omega=100$ MHz, the two levels $\\{\ket{60S_{1/2},1/2},\ket{61P_{1/2},-1/2}\\}$ with $a\in[4,7]$ $\mu$m can lead to $t/V\in[10^{-2},10^{-1}]$. By increasing $n$, even lower $t/V$ can be obtained. The setup $\\{\ket{90S_{1/2},1/2},\ket{91P_{1/2},-1/2}\\}$ with $a\in[9,19]$ $\mu$m can roughly provide $t/V\in[10^{-3},10^{-1}]$. The experimental realization mentioned above should be carried out within the lifetimes of the Rydberg states, which are roughly $200-800$ $\mu s$ for $n=60-90$.

## 2 Boson mapping of Rydberg Hamiltonian to uniform and dimerized models

Here the hard-core boson mapping of $\hat{H}_{A}$ to $\hat{H}_{eBH}$ is discussed. The atomic Hamiltonian $\hat{H}_{A}$ as given in the main article is $\hat{H}_{A}=\frac{\Omega_{\mu w}}{2}\sum_{i}(\hat{\sigma}^{sp}_{i}+\hat{\sigma}^{ps}_{i})-\Delta_{\mu w}\sum_{i}\hat{\sigma}^{pp}_{i}+V^{p}\sum_{i<j}\frac{\hat{\sigma}^{pp}_{i}\hat{\sigma}^{pp}_{j}}{\absolutevalue{i-j}^{6}}+V^{s}\sum_{i<j}\frac{\hat{\sigma}^{ss}_{i}\hat{\sigma}^{ss}_{j}}{\absolutevalue{i-j}^{6}}+V^{sp}\sum_{i<j}\Bigg{(}\frac{\hat{\sigma}^{sp}_{i}\hat{\sigma}^{ps}_{j}}{\absolutevalue{i-j}^{3}}+\text{h.c.}\Bigg{)}.$ Identifying boson creation with the excitation of atoms to the $\ket{n^{\prime}p}$ level, we can reformulate the problem in terms of the extended Bose-Hubbard model in which we deal with hard-core bosons.
Making use of the transformation $\hat{\sigma}^{ps}\rightarrow\hat{b}^{\dagger}$, $\hat{\sigma}^{pp}\rightarrow\hat{n}=\hat{b}^{\dagger}\hat{\hat{b}}$, $\hat{\sigma}^{ss}\rightarrow\mathbb{1}-\hat{n}$, $\mathbb{1}=\ket{ns}\bra{ns}+\ket{n^{\prime}p}\bra{n^{\prime}p}$ and collecting the common terms lead to, $\displaystyle\hat{H}_{eBH}$ $\displaystyle=\frac{\Omega_{\mu w}}{2}\sum_{i}(\hat{b}^{\dagger}_{i}+\hat{b}_{i})-\Delta_{\mu w}\sum_{i}\hat{n}_{i}+V^{p}\sum_{i<j}\frac{\hat{n}_{i}\hat{n}_{j}}{\absolutevalue{i-j}^{6}}+V^{s}\sum_{i<j}\frac{(1-\hat{n}_{i})(1-\hat{n}_{j})}{\absolutevalue{i-j}^{6}}$ $\displaystyle+V^{sp}\sum_{i<j}\Bigg{(}\frac{\hat{b}^{\dagger}_{i}\hat{b}_{j}}{\absolutevalue{i-j}^{3}}+\text{h.c.}\Bigg{)}$ $\displaystyle=\frac{\Omega_{\mu w}}{2}\sum_{i}(\hat{b}^{\dagger}_{i}+\hat{b}_{i})-\Delta_{\mu w}\sum_{i}\hat{n}_{i}+(V^{p}+V^{s})\sum_{i<j}\frac{\hat{n}_{i}\hat{n}_{j}}{\absolutevalue{i-j}^{6}}-\sum_{i}\Bigg{(}\sum_{i\neq j}\frac{V^{s}}{\absolutevalue{i-j}^{6}}\Bigg{)}\hat{n}_{i}$ $\displaystyle+V^{sp}\sum_{i<j}\Bigg{(}\frac{\hat{b}^{\dagger}_{i}\hat{b}_{j}}{\absolutevalue{i-j}^{3}}+\text{h.c.}\Bigg{)}$ $\displaystyle=\frac{\Omega_{\mu w}}{2}\sum_{i}(\hat{b}^{\dagger}_{i}+\hat{b}_{i})-\sum_{i}(\Delta_{\mu w}+\mathcal{I}_{i})\hat{n}_{i}+\sum_{i<j}V_{ij}\hat{n}_{i}\hat{n}_{j}+\sum_{i<j}t_{ij}(\hat{b}^{\dagger}_{i}\hat{b}_{j}+\text{h.c.}),$ (S2) where $V_{ij}=(V^{s}+V^{p})/\absolutevalue{i-j}^{6}$, $t_{ij}=V^{sp}/\absolutevalue{i-j}^{3}$ and $\mathcal{I}_{i}=\sum_{i\neq j}\frac{V^{s}}{\absolutevalue{i-j}^{6}}$ with $V^{s}$, $V^{p}$ and $V^{sp}$ as defined in the main article. In the second line above, we dropped the constant term that comes from the fourth term in the first line. Dimerizing the above Hamiltonian gives $\hat{H}_{dim}=\frac{\Omega_{\mu w}}{2}\sum_{i}(\hat{b}^{\dagger}_{i}+\hat{b}_{i})-\sum_{i}(\Delta_{\mu w}+\mathcal{I}_{i})\hat{n}_{i}+\sum_{i<j}\frac{C_{3}(\hat{b}^{\dagger}_{i}\hat{b}_{j}+\text{h.c.})}{(k_{i}a_{1}+m_{j}a_{2})^{3}}+\sum_{i<j}\frac{C_{6}(\hat{n}_{i}\hat{n}_{j})}{(k_{i}a_{1}+m_{j}a_{2})^{6}},$ (S3) where the distance between a pair of sites $(i,j)$ is given by $k_{i}a_{1}+m_{j}a_{2}$ with $k_{i},m_{j}\in\mathbb{N}$. Here, the lattice can be split into two sublattices consisting of odd and even sites. Writing nearest-neighbour terms for even and odd sublattices separately yields $\displaystyle\hat{H}_{dim}$ $\displaystyle=\frac{\Omega_{\mu w}}{2}\sum_{i}(\hat{b}^{\dagger}_{i}+\hat{b}_{i})-\sum_{i}(\Delta_{\mu w}+\mathcal{I}_{i})\hat{n}_{i}+\frac{C_{3}}{a_{1}^{3}}\sum_{i\in odd}(\hat{b}^{\dagger}_{i}\hat{b}_{i+1}+\text{h.c.})+\frac{C_{6}}{a_{1}^{6}}\sum_{i\in odd}\hat{n}_{i}\hat{n}_{i+1}$ $\displaystyle+\frac{C_{3}}{a_{2}^{3}}\sum_{i\in even}(\hat{b}^{\dagger}_{i}\hat{b}_{i+1}+\text{h.c.})+\frac{C_{6}}{a_{2}^{6}}\sum_{i\in even}\hat{n}_{i}\hat{n}_{i+1}+\sum_{\begin{subarray}{c}i<j\\\ k_{i},m_{j}\neq 0\end{subarray}}\frac{C_{3}(\hat{b}^{\dagger}_{i}\hat{b}_{j}+\text{h.c.})}{(k_{i}a_{1}+m_{j}a_{2})^{3}}+\sum_{\begin{subarray}{c}i<j\\\ k_{i},m_{j}\neq 0\end{subarray}}\frac{C_{6}(\hat{n}_{i}\hat{n}_{j})}{(k_{i}a_{1}+m_{j}a_{2})^{6}}.$ (S4) In the above equation, the sign of the $C_{3}$ coefficient can be changed by using a specific quantization axis. Here for the dimerized case, we set it to $-C_{3}/a_{1}^{3}$. 
Expressing all the interaction terms with respect to intra-cell interactions and defining $\alpha_{l}=a_{1}^{3}/(k_{l}a_{1}+m_{l}a_{2})^{3}$ with $\alpha\equiv\alpha_{1}(k_{1}=0,m_{1}=1)$ leads to the final form of the Hamiltonian $\displaystyle\hat{H}$ $\displaystyle=\frac{\Omega_{\mu w}}{2}\sum_{i}(\hat{b}^{\dagger}_{i}+\hat{b}_{i})-\sum_{i}(\Delta_{\mu w}+\mathcal{I}_{i})\hat{n}_{i}+t\sum_{i\in odd}(\hat{b}^{\dagger}_{i}\hat{b}_{i+1}+\text{h.c.})+t\alpha\sum_{i\in even}(\hat{b}^{\dagger}_{i}\hat{b}_{i+1}+\text{h.c.})+V\sum_{i\in odd}\hat{n}_{i}\hat{n}_{i+1}$ $\displaystyle+V\alpha^{2}\sum_{i\in even}\hat{n}_{i}\hat{n}_{i+1}+\underbrace{\sum_{\begin{subarray}{c}i\\\ l=2\end{subarray}}t\alpha_{l}(\hat{b}^{\dagger}_{i}\hat{b}_{i+l}+\text{h.c.})+\sum_{\begin{subarray}{c}i\\\ l=2\end{subarray}}V\alpha_{l}^{2}\hat{n}_{i}\hat{n}_{i+l}}_{\hat{H}_{LR}},$ (S5) where $t=-C_{3}/a_{1}^{3}$ and $V=C_{6}/a_{1}^{6}$. As mentioned in the main article, $C_{6}=C^{s}_{6}+C^{p}_{6}$ is the combined vdW dispersion coefficient. Figure S1: (a) Structure factor $\mathcal{S}_{DW}(k)$ for the phases shown in the main article (Fig. $2$(b)) showing pronounced peaks for the ordered phases but remaining featureless for the disordered phase (b) Hopping correlation function displays power-law decay behavior in the LL phase. (c) Scaling of $S_{vN}$ as a function of the correlation length $\xi$ in the LL phase. iDMRG simulations are performed with increasing bond dimensions to capture the scaling. The dashed line has been obtained via fitting according to the equation $S=\frac{c}{6}\log(\xi)+$ const., where $c$ is the central charge. ## 3 Verification of phases for uniform lattice In this section, numerical verification of different phases observed in the uniform lattice is provided in the thermodynamic limit. Ordered (crystalline) phases are identified with the structure factor $\mathcal{S}_{DW}(k)=(1/L^{2})\sum_{i,j}e^{ikr}\braket{\hat{n}_{i}\hat{n}_{j}}$. Crystalline phases exhibit long-range order which translates into sharp peaks of $\mathcal{S}_{DW}(k)$ at commensurate wave vectors $k=2\pi n/q,n=0,\dots,q-1$ corresponding to DW order with $\mathbb{Z}_{q}$ translational symmetry breaking with $q=2,3$ as shown in Fig. S1(a). For the disordered phase, $\mathcal{S}_{DW}(k)$ is featureless without any peaks. LL phase exhibits universal behavior such as power-law decay of correlations and the central charge $c=1$ . In Fig S1(b) we show the hopping correlation function $\braket{\hat{b}^{\dagger}_{i}\hat{b}_{j}}$ which displays a power- law decay. We probed the growth of the entanglement entropy $S_{vN}$ as a function of the correlation length $\xi$ and extracted the universal central charge $c$ as displayed in Fig. S1(c). ## 4 Verification of phases for dimerized lattice In this section, the numerical verification of different phases obtained in the dimerized lattice is provided in the thermodynamic limit and the BO to DW transition point is given. Analysis on BODW phases for different types of couplings and dimerization values is provided. To identify the phases, the structure factors and expectation values of order operators are calculated. The BO is probed with the bond order structure factor $\mathcal{S}_{BO}(k)=(1/L^{2})\sum_{i,j}e^{ikr}\braket{\hat{B}_{i}\hat{B}_{j}}$, where $\hat{B}_{i}=\hat{b}^{\dagger}_{i}\hat{b}_{i+1}+\text{h.c.}$ is the bond energy operator. 
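Both structure factors are straightforward post-processing of two-point correlators measured on the DMRG ground state. The sketch below is our own illustration of this step, assuming the standard discrete-Fourier form written above; the correlation matrices are random placeholders rather than data from this work.

```python
import numpy as np

def structure_factor(corr, k):
    """S(k) = (1/L^2) sum_{i,j} exp(i k (i-j)) <O_i O_j> for a correlation matrix corr[i, j]."""
    L = corr.shape[0]
    d = np.arange(L)[:, None] - np.arange(L)[None, :]
    return float(np.real(np.sum(np.exp(1j * k * d) * corr))) / L**2

# corr_nn[i, j] = <n_i n_j> and corr_BB[i, j] = <B_i B_j>, with B_i = b_i^dag b_{i+1} + h.c.,
# would be measured on the (i)DMRG ground state; random placeholders are used here.
L = 60
corr_nn = np.random.rand(L, L)
corr_BB = np.random.rand(L, L)

k = 2 * np.pi / 3   # wave vector where the rho = 1/3 BODW phase shows peaks (cf. Fig. S2)
print(structure_factor(corr_nn, k), structure_factor(corr_BB, k))
```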
Figure S2: (a, c, e) Bond energy $\expectationvalue{B_{i}}$ and site density $\expectationvalue{n_{i}}$ expectation values for the gapped phases at each filling given in the main article (Fig. $3$(a)) are displayed. Oscillations corresponding to DW, BO, and BODW orders are shown respectively. (b,d,f) Structure factors $\mathcal{S}_{DW}$ and $\mathcal{S}_{BO}$ exhibit pronounced peaks for the ordered phases. (g) $\mathcal{S}_{BO}(\pi)$ and $\mathcal{S}_{DW}(\pi)$ are determined as a function of $\alpha$. (h) Derivatives of $\mathcal{S}_{BO}(\pi)$ and $\mathcal{S}_{DW}(\pi)$ are given as a function of $\alpha$ . In Fig. S2 (a,c), oscillations in $\hat{n}_{i}$ and $\hat{B}_{i}$ corresponds to DW and BO phases at $\rho=1/2$ respectively. This oscillatory behavior is reflected in the peaks of $\mathcal{S}_{DW}(k)$ and $\mathcal{S}_{BO}(k)$ at the wave vector $k=\pi$ as shown in Fig. S2 (b,d). The critical point where BO to DW transition occurs is reached when $\alpha$ attains a value beyond which $\mathcal{S}_{DW}(\pi)$ is finite and $\mathcal{S}_{BO}(\pi)$ vanishes (Fig. S2 (g)). In order to obtain the critical point, the derivative of $\mathcal{S}_{BO}(\pi)$ and $\mathcal{S}_{DW}(\pi)$ as a function of $\alpha$ is computed. The peak obtained for $d\mathcal{S}_{DW}(\pi)/d\alpha$ and $-d\mathcal{S}_{BO}(\pi)/d\alpha$ is around $\alpha\sim 0.16$ as shown in Fig. S2 (h). Oscillations in both $\hat{n}_{i}$ and $\hat{B}_{i}$ (Fig. S2 (e)) and sharp peaks of $\mathcal{S}_{BO}(k)$ and $\mathcal{S}_{DW}(k)$ (Fig. S2 (f)) verify the coexistence of both DW and BO characteristics in BODW phases. For the BODW phase at $\rho=1/4$, $\mathcal{S}_{BO}(k)$ makes a pronounced peak at $k=\pi/2,\pi$ and $\mathcal{S}_{DW}(k)$ at $k=\pi/2$ as shown in FIG S2 (f). This implies that the bosons are restricted to be found in every alternate unit cell, thus providing the DW character of the phase. Inside each alternating unit cell, bosons delocalize to minimize the ground state energy, which results in the bond formation. For the non-dimerized interacting model Hayashi _et al._ (2022) shown with the yellow shade in Fig. S3(a), the gap vanishes as $\alpha$ increases because hopping is favoured causing delocalization of bosons. With dimerized nearest-neighbour interactions, we get a non-vanishing gap for small values of $\alpha$ but it gradually tapers off as shown by the red shaded region of Fig. S3(a). Finally, for the dimerized long-range interactions, one finds the gap to persist for finite values of $\alpha$ and is significant compared to the other two models. A similar analysis applies to Fig. S3(b) for a $\rho=1/3$ filling where it is necessary to have dimerization in the interactions in order to obtain the BODW phase. In Fig. S3(c,d), we analyze the effect of different fixed dimerization values on the BODW phases and showcase the importance of strong off-site interactions by varying $V/t$. The BODW phase at $\rho=1/4$ shrinks as the dimerization decreases as shown in Fig. S3(c). Though, for sufficient $V/t$ the gap opens and gets larger as $V/t$ increases. In contrast with the previous case, the BODW phase at $\rho=1/3$ occurs only for $\alpha=0.4$ as shown in Fig. S3(d). This shows that beyond nearest-neighbor contributions play a more significant role when compared to the $\rho=1/4$ case. Figure S3: Comparison of BODW phases for different types of couplings (NN- Nearest Neighbour, LR- Long Range, $t_{dim}$\- dimerized hopping, $V_{dim}$\- dimerized interaction) in Eq. 
$3$ in the main article with and without dimerization in the interaction for $\rho=1/4$ and $\rho=1/3$ as a function of $\alpha$ with fixed $V/t=200$ ($V/t=10$ for $t_{dim};NN$ case) is shown in (a) and (b) respectively. (c,d) Comparison of BODW phases for different fixed dimerization values $\alpha=0.1,0.4$ is shown as a function of $V/t$ for dimerized long-range interactions. ## 5 Numerical Methods In this work, both finite and infinite matrix product states (MPS) are used for studying the ground state properties of the model in uniform and dimerized lattice configurations. All of the DMRG simulations are performed by using the TeNPy library Hauschild and Pollmann (2018). In the uniform lattice, a chain of length $L=121$ with open boundary conditions and units $a=1$,$V=1$ are adopted. The Hamiltonian is represented as a matrix product operator in which the power-law decaying dipolar and vdW interactions are expressed in terms of sum of exponentials. A decomposition that involves 15 exponentials is used to fit the interactions. The maximum MPS bond dimension is set to $\chi=320$. We set the relative energy error to be smaller than $10^{-10}$ to ensure convergence. During truncation, Schmidt values smaller than $10^{-10}$ are discarded. Since open boundary conditions are employed, there will be defects induced by the edge excitations. In order to prevent that from happening, a system size in the form of $L=12n+1$ with $n=10$ is chosen. By doing so, the ground state $q$-fold degeneracy is split for phases with $\mathbb{Z}_{q}$ order with $q=2,3,4$. A single ground state in the product state form is obtained. This enables us to distinguish the ordered phases since they exhibit vanishing $S_{vN}$. The LL phase is verified by determining the central charge $c$ Calabrese and Cardy (2009). Infinite DMRG (iDMRG) simulations with increasing bond dimensions are performed to compute the scaling of the $S_{vN}$ vs. the logarithm of the correlation length $\xi$. The central charge $c$ Pollmann _et al._ (2009) is extracted by a fitting according to the equation $S=\frac{c}{6}\log(\xi)+$ const. In the dimerized lattice, a system of $L=240$ sites with open boundary conditions and $t=1$ are used. The finite DMRG simulations are performed for obtaining the ordered regions whose boundaries indicate the closing of the single particle excitation gap $\delta_{L}$. The ground state for a system of chain length $L$ with $N$ bosons in the $\rho=1/2$ phase is obtained by considering a state $\ket{\Psi}_{0}=\ket{\circ\bullet\dots\circ\bullet}$ as an initial state. The ground state is then calculated by performing DMRG on this state while conserving the particle number $\sum_{i}^{L}n_{i}=N$. The energy for this ground state with $\braket{\sum_{i}^{L}n_{i}}=N$ is given as $E_{L}(N)$. The boundaries of the filling are determined by calculating the cost of creating a boson $\bullet$ or a hole $\circ$ in the system. For the upper (lower) boundary of the filling, the initial state for the DMRG is obtained by acting on $\ket{\Psi}_{0}$ with $\hat{b}^{\dagger}_{i}$($\hat{b}_{i}$) operators. Denoting the ground state energies for the particle and hole cases as $E_{L}(N+1)$ and $E_{L}(N-1)$, the cost of boson creation can be given as $\mu_{L}^{+}=E_{L}(N+1)-E_{L}(N)$ and for the hole creation $\mu_{L}^{-}=E_{L}(N)-E_{L}(N-1)$. This way the gap can be calculated and defined as $\delta_{L}=\mu_{L}^{+}-\mu_{L}^{-}$. Similar calculations are performed in order to determine the boundaries of the gapped phases in other fillings. 
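To make the gap computation above concrete, the bookkeeping can be sketched as follows. The `ground_state_energy` callable is a hypothetical wrapper around the finite-DMRG run at fixed particle number described in the text; only the arithmetic for $\mu_{L}^{\pm}$ and $\delta_{L}$ is shown, not the authors' actual TeNPy setup.

```python
def charge_gap(ground_state_energy, L=240, N=120):
    """Single-particle excitation gap delta_L = mu_L^+ - mu_L^-.

    `ground_state_energy(L, N)` is a hypothetical wrapper that runs finite DMRG
    for a chain of length L at fixed particle number N (starting from a product
    state at the target filling) and returns E_L(N).
    """
    E_N = ground_state_energy(L, N)
    E_plus = ground_state_energy(L, N + 1)    # ground state with one extra boson
    E_minus = ground_state_energy(L, N - 1)   # ground state with one boson removed
    mu_plus = E_plus - E_N                    # cost of creating a boson
    mu_minus = E_N - E_minus                  # cost of creating a hole
    return mu_plus - mu_minus
```

The phase boundaries at a given filling are then traced by the points where this gap closes.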
For the DMRG calculations, the maximum bond dimension is set to $\chi=350$ and a system of length $L=240$ is considered. After determining the lobes with ordered phases in both uniform and dimerized cases, iDMRG simulations inside these regions are performed to compute observables such as the structure factor and the expectation values of certain operators in the thermodynamic limit.
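Two post-processing steps used repeatedly above, locating the BO-to-DW transition from the derivative peaks of the structure factors and extracting the central charge from the entanglement scaling, amount to simple curve analysis. The following is a possible numpy sketch; the loaded arrays are placeholders standing in for data produced by the simulations.

```python
import numpy as np

# Hypothetical data files produced by the simulations described above.
alphas = np.load("alphas.npy")          # grid of alpha values
S_BO_pi = np.load("S_BO_pi.npy")        # S_BO(pi) on that grid
S_DW_pi = np.load("S_DW_pi.npy")        # S_DW(pi) on that grid

# BO-to-DW transition: peaks of dS_DW(pi)/dalpha and -dS_BO(pi)/dalpha.
dS_DW = np.gradient(S_DW_pi, alphas)
dS_BO = np.gradient(S_BO_pi, alphas)
alpha_c_DW = alphas[np.argmax(dS_DW)]
alpha_c_BO = alphas[np.argmax(-dS_BO)]

# Central charge from entanglement scaling: S_vN = (c/6) log(xi) + const,
# with (xi, S_vN) pairs obtained from iDMRG runs at increasing bond dimension.
xi = np.load("xi.npy")
S_vN = np.load("S_vN.npy")
slope, const = np.polyfit(np.log(xi), S_vN, 1)
c = 6.0 * slope
print(alpha_c_DW, alpha_c_BO, c)
```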
11footnotetext: These authors contributed equally.22footnotetext: At OpenAI.33footnotetext: Correspondence<EMAIL_ADDRESS><EMAIL_ADDRESS> # Can Generalist Foundation Models Outcompete Special-Purpose Tuning? Case Study in Medicine Harsha Nori*‡ Yin Tat Lee* Sheng Zhang* Dean Carignan Richard Edgar Nicolo Fusi Nicholas King Jonathan Larson Yuanzhi Li Weishung Liu Renqian Luo Scott Mayer McKinney† Robert Osazuwa Ness Hoifung Poon Tao Qin Naoto Usuyama Chris White Eric Horvitz‡ (November 2023) ###### Abstract Generalist foundation models such as GPT-4 have displayed surprising capabilities in a wide variety of domains and tasks. Yet, there is a prevalent assumption that they cannot match specialist capabilities without intensive training of models with specialty knowledge. For example, most explorations to date on medical competency benchmarks have leveraged domain-specific training, as exemplified by efforts on BioGPT and Med-PaLM. We build on a prior study of the specialist capabilities of GPT-4 on medical challenge benchmarks in the absence of special training. In distinction to the intentional use of simple prompting to highlight the model’s out-of-the-box capabilities, we perform a systematic exploration of prompt engineering to boost performance. We find that prompting innovation can unlock deeper specialist capabilities and show that GPT-4 easily tops prior leading results for medical question-answering datasets. The prompt engineering methods we explore are general purpose, and make no specific use of domain expertise, removing the need for expert-curated content. Our experimental design carefully controls for overfitting during the prompt engineering process. As a culmination of the study, we introduce Medprompt, based on a composition of several prompting strategies. Medprompt greatly enhances GPT-4’s performance and achieves state of the art results on all nine of the benchmark datasets in the MultiMedQA suite. The method outperforms state-of-the-art specialist models such as Med-PaLM 2 by a large margin with an order of magnitude fewer calls to the model. Steering GPT-4 with Medprompt achieves a 27% reduction in error rate on the MedQA dataset (USMLE exam) over the best methods to date achieved with specialist models, and surpasses a score of 90% for the first time. Moving beyond medical challenge problems, we show the power of Medprompt to generalize to other domains and provide evidence for the broad applicability of the approach via studies of the strategy on competency exams in electrical engineering, machine learning, philosophy, accounting, law, nursing, and clinical psychology. ## 1 Introduction A long-term aspiration in AI research is to develop principles of computational intelligence and to harness these to build learning and reasoning systems that can perform general problem solving across a diversity of tasks [21, 22]. In line with this goal, large language models, also referred to as foundation models, such as GPT-3 [3] and GPT-4 [24], have demonstrated surprising competencies on a broad swath of tasks without requiring heavy specialized training [4]. These models build on the text-to- text paradigm [31] with investments in compute and data to learn at scale from indiscriminate consumption of large amounts of public web data. Some of these models are tuned via a learning objective to perform general instruction- following via prompts. (a) (b) Figure 1: (a) Comparison of performance on MedQA. 
(b) GPT-4 with Medprompt achieves SoTA on a wide range of medical challenge questions. A core metric for characterizing the performance of foundation models is the accuracy of next word prediction. Accuracy with next word prediction is found to increase with scale in training data, model parameters, and compute, in accordance with empirically derived “neural model scaling laws” [3, 12]). However, beyond predictions of scaling laws on basic measures such as next word prediction, foundation models show the sudden emergence of numerous problem-solving capabilities at different thresholds of scale [33, 27, 24]. Despite the observed emergence of sets of general capabilities, questions remain about whether truly exceptional performance can be achieved on challenges within specialty areas like medicine in the absence of extensive specialized training or fine-tuning of the general models. Most explorations of foundation model capability on biomedical applications rely heavily on domain- and task-specific fine-tuning. With first-generation foundation models, the community found an unambiguous advantage with domain-specific pretraining, as exemplified by popular models in biomedicine such as PubMedBERT [10] and BioGPT [19]. But it is unclear whether this is still the case with modern foundation models pretrained at much larger scale. We focus in this paper on steering foundation models via prompt engineering to excel on a set of medical challenge benchmarks. Med-PaLM 2 attains competitive results on MedQA and other medical challenge problems, via expensive, task- specific fine-tuning of the general PaLM [6] foundation model [29, 30]. In addition to reliance on fine-tuning of the base PaLM model, results on the medical benchmarks for Med-PaLM 2 were generated via use of sophisticated, complex prompting strategies, leveraging exemplars crafted by experts. For example, many of the answers rely on an elaborate two-stage prompt scheme of 44 calls for answering each question. Shortly after GPT-4 was made public in March 2023, several co-authors of this study showed that the model had impressive biomedical competencies “out-of- the-box” on medical challenge benchmarks. To demonstrate the latent power of GPT-4 on specialty medical expertise, the co-authors purposefully employed a rudimentary prompting strategy [23]. Despite the strong results demonstrated in that study, questions remain about the depth of GPT-4’s domain-specific capabilities in the absence of additional special training or tuning. We present results and methods of a case study on steering GPT-4 to answer medical challenge questions with innovative prompting strategies. We include a consideration of best practices for studying prompting in an evaluative setting, including the holding out of a true eyes-off evaluation set. We discover that GPT-4 indeed possesses deep specialist capabilities that can be evoked via prompt innovation. The performance was achieved via a systematic exploration of prompting strategies. As a design principle, we chose to explore prompting strategies that were inexpensive to execute and not customized for our benchmarking workload. We converged on a top prompting strategy for GPT-4 for medical challenge problems, which we refer to as Medprompt. Medprompt unleashes medical specialist skills in GPT-4 in the absence of expert crafting, easily topping existing benchmarks for all standard medical question-answering datasets. 
The approach outperforms GPT-4 with the simple prompting strategy and state-of-the-art specialist models such as Med-PaLM 2 by large margins. On the MedQA dataset (USMLE exam), Medprompt produces a 9 absolute point gain in accuracy, surpassing 90% for the first time on this benchmark. As part of our investigation, we undertake a comprehensive ablation study that reveals the relative significance of the contributing components of Medprompt. We discover that a combination of methods, including in-context learning and chain-of-thought, can yield synergistic effects. Perhaps most interestingly, we find that the best strategy for steering a generalist model like GPT-4 to excel on the medical specialist workload that we study is to use a generalist prompt. We find that GPT-4 benefits significantly from being allowed to design its prompt, specifically by coming up with its own chain-of-thought to be used for in-context learning. This observation echoes other reports that GPT-4 has an emergent self-improving capability via introspection, such as self-verification [9]. We note that the automated chain-of-thought reasoning removes dependency on special human expertise and medical datasets. Thus, despite the name Medprompt, which reflects the framing context and research trajectory of our investigation of the capabilities of GPT-4 on medical challenge problems, the methodology does not include any components specifically oriented towards medicine. As we explore in Section 5.3, the approach can be applied readily to other domains. We present details on Medprompt to facilitate future studies on steering generalist foundation models to provide specialist advice.

## 2 Background

### 2.1 Foundation Models on Medical Challenge Problems

In the era of first-generation foundation models, limited model size and computational resources made domain-specific pretraining advantageous. Models such as PubMedBERT [10], BioLinkBERT [37], DRAGON [36], BioGPT [19], and BioMedLM [2] were pretrained with self-supervised objectives using domain-specific data sources, such as the PubMed corpus and UMLS knowledge graph. Despite their small size and limited computational power, these models demonstrate strong performance in biomedical NLP tasks. More powerful, general-domain foundation models have demonstrated significantly elevated performance in medical challenges without requiring domain-specific pretraining. Several studies have explored the performance of generalist foundation models on medical challenge problems. In [17], ChatGPT-3.5 was evaluated on questions drawn from the United States Medical Licensing Exam (USMLE), and performed at or near the passing threshold without any specialized training. In [23], GPT-4 was shown to exceed the USMLE passing score by over 20 points using simple 5-shot prompting. Other studies have explored the power of relying on explicit tuning with medical knowledge. Med-PaLM [29] and Med-PaLM 2 [30] leverage fine-tuning of the 540B-parameter Flan-PaLM, using instruction prompt tuning. With Med-PaLM, the authors asked a panel of five clinicians to prepare their instruction prompt tuning dataset. Med-PaLM 2, built similarly on PaLM 2, relied on instruction-following full fine-tuning and achieved state-of-the-art performance on medical QA datasets. We re-examine the capabilities of generalist foundation models without resorting to extensive fine-tuning.
We explore diverse prompting strategies to best steer powerful generalist foundation models toward delivering strong performance in specialized domains. ### 2.2 Prompting Strategies _Prompting_ in the context of language models refers to the input given to a model to guide the output that it generates. Empirical studies have shown that the performance of foundation models on a specific task can be heavily influenced by the prompt, often in surprising ways. For example, recent work shows that model performance on the GSM8K benchmark dataset can vary by over 10% without any changes to the model’s learned parameters [35]. _Prompt engineering_ refers to the process of developing effective prompting techniques that enable foundation models to better solve specific tasks. Here, we briefly introduce a few key concepts that serve as building blocks for our Medprompt approach. _In-Context Learning_ (ICL) is a key capability of foundation models, allowing the models to solve new tasks from just a few task demonstrations [3]. For example, an ICL prompt can be created by preceding a test question with several different examples of questions and desired results. ICL does not require updating model parameters but can offer effects similar to fine- tuning. The choice of examples used in few-shot prompting can substantially influence model performance. In our prior investigation of the performance of GPT-4 on medical challenge problems [23], we expressly limited prompting to basic in-context learning methods such as fixed one-shot and five-shot prompting to demonstrate the ease with which GPT-4 could be steered to perform with excellence. _Chain of Thought_ (CoT) is a prompting methodology that employs intermediate reasoning steps prior to introducing the sample answer [34]. By breaking down complex problems into a series of smaller steps, CoT is thought to help a foundation model to generate a more accurate answer. CoT ICL prompting integrates the intermediate reasoning steps of CoT directly into the few-shot demonstrations. As an example, in the Med-PaLM work, a panel of clinicians was asked to craft CoT prompts tailored for complex medical challenge problems [29]. Building on this work, we explore in this paper the possibility of moving beyond reliance on human specialist expertise to mechanisms for generating CoT demonstrations automatically using GPT-4 itself. As we shall describe in more detail, we can do this successfully by providing [question, correct answer] pairs from a training dataset. We find that GPT-4 is capable of autonomously generating high-quality, detailed CoT prompts, even for the most complex medical challenges. _Ensembling_ is a technique for combining the outputs of multiple model runs to arrive at a more robust or accurate result via combining the separate outputs with functions like averaging, consensus, or majority vote. Ensembling methods employing a technique referred to as _self-consistency_ [32] use a sampling method to produce multiple outputs that are then consolidated to identify a consensus output. The diversity of the outputs can be controlled by shifting the “temperature” parameter in a model’s generation, where higher temperatures can be viewed as injecting greater amounts of randomness into the generation process. By reordering or _shuffling_ components of a few-shot prompt, ensembling techniques can also address the order sensitivity commonly found with foundation models [26, 39], thus improving robustness. 
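Before turning to the cost of ensembling, the in-context learning and chain-of-thought building blocks just described can be illustrated schematically. The formats below are generic sketches of how such prompts can be assembled, not the exact prompts used in this study.

```python
def few_shot_prompt(exemplars, test_question):
    """Plain in-context learning: exemplars is a list of (question, answer) pairs."""
    parts = [f"Question: {q}\nAnswer: {a}" for q, a in exemplars]
    parts.append(f"Question: {test_question}\nAnswer:")
    return "\n\n".join(parts)

def few_shot_cot_prompt(exemplars, test_question):
    """CoT-augmented few-shot prompt: exemplars is a list of
    (question, rationale, answer) triples, where the rationale is the
    intermediate chain of thought shown before each final answer."""
    parts = [
        f"Question: {q}\nLet's think step by step. {r}\nTherefore, the answer is {a}."
        for q, r, a in exemplars
    ]
    parts.append(f"Question: {test_question}\nLet's think step by step.")
    return "\n\n".join(parts)
```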
While ensembling can enhance performance, it comes at the cost of increased computational demands. For example, Med-PaLM 2’s Ensemble Refinement method used as many as 44 separate inferences for a single question. Due to this computational overhead, we have pursued as a design principle using simpler techniques to avoid excessive inference costs. We report an ablation study in Section 5.2 which explores the potential of further increased performance under increased computational load. ## 3 Experimental Design We start with an overview of the medical challenge problem datasets and then outline our testing methodology, designed to avoid overfitting that can occur with intensive iteration on a fixed evaluation dataset. ### 3.1 Datasets Our benchmarks, as reported in Section 5 are primarily based on performance of GPT-4 on 9 multiple-choice, biomedical datasets from the MultiMedQA benchmark suite [29]. Specifically, the benchmarks include the following: * • MedQA [14] contains multiple choice questions in the style of the Medical Licensing Examination questions used to test medical specialist competency in the United States, Mainland China, and Taiwan. For fair comparison with prior work [29, 30, 23], we focus on the United States subset of the dataset, which has questions in English in the style of the United States Medical Licensing Exam (USMLE). This dataset contains 1273 questions with four multiple choice answers each. * • MedMCQA [25] presents mock and historic exam questions in the style of two Indian medical school entrance exams—the AIIMS and NEET-PG. The “dev” subset of the dataset, upon which we report benchmark results (consistent with prior studies), contains 4183 questions, each with four multiple choice answers. * • PubMedQA [15] contains tests requiring a yes, no, or maybe answer to biomedical research questions when given context provided from PubMed abstracts. There are two settings for PubMedQA tests called _reasoning- required_ and _reasoning-free_. In the reasoning-free setting, a long-form answer that contains explanations of the abstracts is provided. We report results for the reasoning-required setting, in which the model is only given context from abstracts to use when answering the question. This dataset contains a total of 500 questions. * • MMLU [11] is a multitask benchmark suite of 57 different datasets spanning domains across STEM, humanities, and social sciences. We follow prior work [29] and benchmark against a medically relevant subset of MMLU tasks: clinical knowledge, medical genetics, anatomy, professional medicine, college biology, and college medicine. As we shall see in Section 5.3, we can test the generality of the Medprompt approach by studying its efficacy for competency exams outside the primary focus on medical challenge problems. We test our methodology on two nursing datasets focused on answering NCLEX (National Council Licensure Examinaton) questions and six additional datasets from MMLU covering topics like accounting and law. Details of these datasets are presented in Section 5.3. ### 3.2 Sound Testing Methodology While prompting and in-context learning does not change model parameters, a specific choice of prompting strategy can be viewed as a high-level setting or hyperparameter of the end-to-end testing process. As a result, we must be cautious about overfitting as part of training and testing, thus providing results that would not generalize out of the training and test sets under consideration. 
Concerns about overfitting with studies of foundation model performance are similar to the valid concerns in traditional machine learning with overfitting during the hyperparameter optimization process [8]. We wish to avoid analogous overfitting in the prompt engineering process. Intuitively, a prompt harnessing for examples a lookup table of specific benchmark questions will naturally perform much better on those questions than on unseen problems. A common technique to address this problem in traditional machine learning is to create “test” sets, _which are only evaluated against at the end of the model selection process_. We adopt this important aspect of sound testing methodology for machine learning studies and randomly carved out 20% of each benchmark dataset as an “eyes-off” split that is completely held out from consideration until the final testing phase. That is, the eyes-off data is kept hidden until the end-stage. The data is not examined or optimized against during the prompt engineering process. For simplicity, we apply the same methodology to every dataset in MultiMedQA, as many of the datasets were not published with dedicated train/test splits by the authors. In Section 5.1, we show the stratified performance of Medprompt on “eyes-on” vs. “eyes-off” splits of the MultiMedQA datasets. We find that our performance is quite similar between the two, and that GPT-4 with Medprompt actually performs marginally better on the eyes-off, held out data suggesting that the methods will generalize well to similar questions in the “open world.” We have not seen evidence of the use of a similar eyes-off approach in prior studies. ## 4 Power of Prompting: Exploration and Results In this section, we detail the three major techniques employed in Medprompt: Dynamic few-shot selection, self-generated chain of thought, and choice shuffle ensembling. After discussing each technique, we review our approach to composing the three methods into the integrated Medprompt. ### 4.1 Dynamic Few-shot Few-shot learning [3] is arguably the most effective in-context learning method. With the prompting approach, through a few demonstrations, foundation models quickly adapt to a specific domain and learn to follow the task format. For simplicity and efficiency, the few-shot examples applied in prompting for a particular task are typically fixed; they are unchanged across test examples. This necessitates that the few-shot examples selected are broadly representative and relevant to a wide distribution of text examples. One approach to meeting these requirements is to have domain experts carefully hand-craft _exemplars_ [29]. Even so, this approach cannot guarantee that the curated, fixed few-shot examples will be appropriately representative of every test example. In comparison, when available, the task training set can serve as an inexpensive, high-quality source for few-shot examples. If the training set is sufficiently large, we can select different few-shot examples for different task inputs. We refer to this approach as employing dynamic few-shot examples. The method makes use of a mechanism to identify examples based on their similarity to the case at hand [18]. For Medprompt, we did the following to identify representative few shot examples: Given a test example, we choose $k$ training examples that are semantically similar using a $k$-NN clustering in the embedding space. 
Specifically, we first use text-embedding-ada-002 (https://openai.com/blog/new-and-improved-embedding-model) to embed training questions and test questions as vector representations. Then, for each test question $x$, we retrieve its nearest $k$ neighbors $x_{1},x_{2},...,x_{k}$ from the training set (according to distance in the embedding space of text-embedding-ada-002). Given a pre-defined similarity measure $d$ such as cosine similarity, the neighbors are ordered in such a way that $d(x_{i},x)\leq d(x_{j},x)$ when $i<j$. Compared with fine-tuning, dynamic few-shot leverages the training data, but does not require billions of updates to model parameters.

### 4.2 Self-Generated Chain of Thought

Figure 2: Comparison of expert-crafted and GPT-4-generated chain-of-thought (CoT) prompts. Using a [question, correct answer] pair from a training set, GPT-4 is capable of generating a detailed explanation suitable for use in few-shot CoT demonstrations.

Chain-of-thought (CoT) [34] uses natural language statements, such as “_Let’s think step by step_,” to explicitly encourage the model to generate a series of intermediate reasoning steps. The approach has been found to significantly improve the ability of foundation models to perform complex reasoning. Most approaches to chain-of-thought center on the use of experts to manually compose few-shot examples with chains of thought for prompting [30]. Rather than rely on human experts, we pursued a mechanism to automate the creation of chain-of-thought examples. We found that we could simply ask GPT-4 to generate a chain of thought for the training examples using the following prompt:

Self-generated Chain-of-thought Template
`## Question:`
`{{question}}`
`{{answer_choices}}`
`## Answer`
(model-generated chain-of-thought explanation)
Therefore, the answer is [final model answer (e.g. A, B, C, D)]

Figure 3: Template used to prompt the foundation model to generate chain-of-thought explanations automatically (detailed in Section 4.2).

A key challenge with this approach is that self-generated CoT rationales have an implicit risk of including hallucinated or incorrect reasoning chains. We mitigate this concern by having GPT-4 generate both a rationale and an estimation of the most likely answer to follow from that reasoning chain. If this answer does not match the ground truth label, we discard the sample entirely, under the assumption that we cannot trust the reasoning. While hallucinated or incorrect reasoning can still yield the correct final answer (i.e., false positives), we found that this simple label-verification step acts as an effective filter for false negatives. We observe that, compared with the CoT examples used in Med-PaLM 2 [30], which are hand-crafted by clinical experts, CoT rationales generated by GPT-4 are longer and provide finer-grained step-by-step reasoning logic. Concurrent with our study, recent works [35, 7] also find that foundation models write better prompts than experts do.

### 4.3 Choice Shuffling Ensemble

While less severe than in other foundation models, GPT-4 can exhibit a propensity to favor certain options in multiple choice answers over others (regardless of the option content), i.e., the model can show position bias [1, 16, 40]. To reduce this bias, we propose shuffling the choices and then checking the consistency of the answers for the different sort orders of the multiple-choice options. As a result, we perform choice shuffle and self-consistency prompting.
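Before detailing the ensembling procedure, the self-generation and label-verification step of Section 4.2 can be sketched as follows. Here `llm` is a hypothetical callable that wraps a GPT-4 completion request and returns text (not a specific SDK call), and the parsing of the final answer letter is deliberately simplistic.

```python
def self_generate_cot(question, answer_choices, gold_answer, llm):
    """Generate a CoT rationale for a training item and keep it only if the
    model's own final answer matches the ground-truth label.

    `llm(prompt)` is a hypothetical wrapper returning the model's completion
    as a string.
    """
    prompt = (
        "## Question:\n" + question + "\n" + "\n".join(answer_choices) + "\n"
        "## Answer\n"
        "Let's think step by step, then finish with "
        "'Therefore, the answer is <letter>'."
    )
    output = llm(prompt)
    rationale, _, tail = output.rpartition("Therefore, the answer is")
    predicted = tail.strip().strip(".")[:1]          # first character after the phrase
    if predicted == gold_answer:                     # label-verification filter
        return {"question": question, "cot": rationale.strip(), "answer": predicted}
    return None                                      # discard untrusted reasoning
```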
Self-consistency [32] replaces the naive single-path or _greedy_ decoding with a diverse set of reasoning paths when prompted multiple times at some temperature$>0$, a setting that introduces a degree of randomness in generations. With choice shuffling, we shuffle the relative order of the answer choices before generating each reasoning path. We then select the most consistent answer, i.e., the one that is least sensitive to choice shuffling. Choice shuffling has an additional benefit of increasing the diversity of each reasoning path beyond temperature sampling, thereby also improving the quality of the final ensemble [5]. We also apply this technique in generating intermediate CoT steps for training examples. For each example, we shuffle the choices some number of times and generate a CoT for each variant. We only keep the examples with the correct answer. ### 4.4 Putting it all together: Medprompt Figure 4: Visual illustration of Medprompt components and additive contributions to performance on the MedQA benchmark. The prompting strategy combines $k$NN-based few-shot example selection, GPT-4–generated chain-of- thought prompting, and answer-choice shuffled ensembling (see details in Section 4). Relative contributions of each component are shown at the bottom (details in Section 5.2). Medprompt combines intelligent few-shot exemplar selection, self-generated chain of thought steps, and a majority vote ensemble, as detailed above in Sections 4.1, 4.2, and 4.3, respectively. The composition of these methods yields a general purpose prompt-engineering strategy. A visual depiction of the performance of the Medprompt strategy on the MedQA benchmark, with the additive contributions of each component, is displayed in Figure 4. We provide an a corresponding algorithmic description in Algorithm 1. Medprompt consists of two stages: a preprocessing phase and an inference step, where a final prediction is produced on a test case. During preprocessing, each question in the training dataset is passed through a lightweight embedding model to generate an embedding vector (Line 4 in Algorithm 1). We employed OpenAI’s text-embedding-ada-002 to create an embedding. For each question, GPT-4 is harnessed to create a chain of thought and a prediction of the final answer (Line 5). If the generated answer is correct and matches the ground truth label, we store the associated question, its embedding vector, the chain of thought, and the answer. Otherwise, we discard the question entirely from our retrieval pool, with the assumption that we cannot trust the reasoning if the model ultimately arrives at the wrong final answer (Lines 6-7). At inference time, given a test question, we re-embed the test sample with the same embedding model used during pre-processing, and utilize $k$NN to retrieve similar examples from the preprocessed pool (Lines 12-13). These examples, and their corresponding GPT-4 generated reasoning chains, are structured as context for GPT-4 (Line 14). The test question and corresponding answer choices are then appended at the end, which serves as the final prompt (Line 17). The model, following the few shot exemplars, then outputs a chain of thought and a candidate answer. Finally, we perform an ensembling process consisting of repeating the steps described above multiple times. We increase diversity by shuffling the answer choices of the test question (Lines 15-16), as detailed in Section 4.3 and Figure 4. To determine the final predicted answer, we select the most frequent answer (Line 20). 
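Putting the pieces together, the inference stage described above (dynamic kNN few-shot selection, choice shuffling, and majority voting) can be sketched as below. `embed` and `llm` are hypothetical wrappers around the embedding model and GPT-4, `pool` is the preprocessed set of training questions with verified chains of thought, and four answer options labeled A-D are assumed for simplicity.

```python
import random
from collections import Counter
import numpy as np

def cosine_distance(u, v):
    u, v = np.asarray(u), np.asarray(v)
    return 1.0 - float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def medprompt_answer(test_question, choices, pool, embed, llm, k=5, n_ensemble=5):
    """pool: preprocessed items {'embedding', 'question', 'cot', 'answer'};
    `embed` and `llm` are hypothetical wrappers around the embedding model and GPT-4."""
    v = embed(test_question)
    exemplars = sorted(pool, key=lambda ex: cosine_distance(ex["embedding"], v))[:k]
    context = "\n\n".join(
        f"## Question:\n{ex['question']}\n## Answer\n{ex['cot']}\n"
        f"Therefore, the answer is {ex['answer']}." for ex in exemplars
    )
    votes = []
    for _ in range(n_ensemble):
        shuffled = choices[:]
        random.shuffle(shuffled)                       # choice-shuffle for diversity
        labels = dict(zip("ABCD", shuffled))
        prompt = (
            context + f"\n\n## Question:\n{test_question}\n"
            + "\n".join(f"{letter}) {text}" for letter, text in labels.items())
            + "\n## Answer\n"
        )
        reply = llm(prompt)
        letter = reply.rpartition("the answer is")[2].strip().strip(".")[:1]
        votes.append(labels.get(letter))               # map the letter back to its text
    valid = [a for a in votes if a is not None]
    return Counter(valid).most_common(1)[0][0] if valid else None
```

Voting over the option text rather than the option letter is what makes the ensemble insensitive to the shuffled orderings.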
Algorithm 1 Algorithmic specification of Medprompt, corresponding to the visual representation of the strategy in Figure 4. 1:Input: Development data $\mathcal{D}$, Test question $Q$ 2:Preprocessing: 3:for each question $q$ in $\mathcal{D}$ do 4: Get an embedding vector $v_{q}$ for $q$. 5: Generate a chain-of-thought $C_{q}$ and an answer $A_{q}$ with the LLM. 6: if Answer $A_{q}$ is correct then 7: Store the embedding vector $v_{q}$, chain-of-thought $C_{q}$, and answer $A_{q}$. 8: end if 9:end for 10: 11:Inference Time: 12:Compute the embedding $v_{Q}$ for the test question $Q$. 13:Select the 5 most similar examples $\\{(v_{Q_{i}},C_{Q_{i}},A_{Q_{i}})\\}_{i=1}^{5}$ from the preprocessed training data using KNN, with the distance function as the cosine similarity: $\text{dist}(v_{q},v_{Q})=1-\frac{\langle v_{q},v_{Q}\rangle}{\|v_{q}\|\|v_{Q}\|}$. 14:Format the 5 examples as context $\mathcal{C}$ for the LLM. 15:for 5 times do 16: Shuffle the answer choices of the test question. 17: Generate a chain-of-thought $C_{q}^{k}$ and an answer $A_{q}^{k}$ with the LLM and context $\mathcal{C}$. 18:end for 19:Compute the majority vote of the generated answers $\\{A_{q}^{k}\\}_{k=1}^{K}$: $A^{\text{Final}}=\operatorname{mode}(\\{A_{q}^{k}\\}_{k=1}^{K}),$ where $\operatorname{mode}(X)$ denotes the most common element in the set $X$. 20:Output: Final answer $A^{\text{Final}}$. The Medprompt results we report here are configured to use 5 $k$NN selected few shot exemplars and 5 parallel API calls as part of the choice-shuffle ensemble procedure, which we find strikes a reasonable balance between minimizing inference cost and maximizing accuracy. Our ablation studies, detailed in Section 5.2, suggest that further improvements may be achieved by increasing these hyperparameter values. For example, by increasing to 20 few-shot exemplars and 11 ensemble items, we achieve a further $+0.4\%$ performance on MedQA, setting a new state-of-the- art performance threshold of $\mathbf{90.6\%}$. We note that, while Medprompt achieves record performance on medical benchmark datasets, the algorithm is general purpose and is not restricted to the medical domain or to multiple choice question answering. We believe the general paradigm of combining intelligent few-shot exemplar selection, self- generated chain of thought reasoning steps, and majority vote ensembling can be broadly applied to other problem domains, including less constrained problem solving tasks (see Section 5.3 for details on how this framework can be extended beyond multiple choice questions). ## 5 Results Table 1: Performance of different foundation models on multiple choice components of MultiMedQA [29]. GPT-4 with Medprompt outperforms all other models on every benchmark. Dataset | Flan-PaLM 540B* | Med-PaLM 2* | GPT-4 | GPT-4 ---|---|---|---|--- (choose best) | (choose best) | (5 shot) | (Medprompt) MedQA | | | | US (4-option) | 67.6 | 86.5 | 81.4 | 90.2** PubMedQA | | | | Reasoning Required | 79.0 | 81.8 | 75.2 | 82.0 MedMCQA | | | | Dev | 57.6 | 72.3 | 72.4 | 79.1 MMLU | | | | Clinical Knowledge | 80.4 | 88.7 | 86.4 | 95.8 Medical Genetics | 75.0 | 92.0 | 92.0 | 98.0 Anatomy | 63.7 | 84.4 | 80.0 | 89.6 Professional Medicine | 83.8 | 95.2 | 93.8 | 95.2 College Biology | 88.9 | 95.8 | 95.1 | 97.9 College Medicine | 76.3 | 83.2 | 76.9 | 89.0 * * Sourced directly from [29] and [30]. 
“Choose best” refers to a process used in the Med-PaLM studies of executing several distinct approaches and selecting the best-performing strategy for each dataset among the variety of experimental methods tried. Flan-PaLM 540B and Med-PaLM 2 are also both fine-tuned on subsets of these benchmark datasets. By contrast, every GPT-4 reported number uses a single, consistent strategy across all datasets. * ** We achieve 90.6%, as discussed in Section 5.2, with $k=20$ and 11x ensemble steps. The 90.2% represents “standard” Medprompt performance with $k=5$ few-shot examples and a 5x ensemble.

By harnessing the prompt engineering methods described in Section 4 and their effective combination as Medprompt, GPT-4 achieves state-of-the-art performance on every one of the nine benchmark datasets in MultiMedQA.

### 5.1 Performance on Eyes-Off Data

Figure 5: Medprompt evaluation against the 20% eyes-off holdout. Medprompt performs better on the eyes-off dataset in the majority of cases.

As introduced in Section 3.2, we evaluated the Medprompt prompting design on a held-out “eyes-off” subset of each benchmark dataset to check for overfitting risk. GPT-4 with Medprompt achieved an average performance of 90.6% on the eyes-on data, and an average performance of 91.3% on the eyes-off data, suggesting that the prompt engineering process likely did not lead to overfitting on the MultiMedQA datasets. As additional evidence, the performance on eyes-off data was better in 6/9 of the benchmark datasets (Figure 5).

### 5.2 Insights about Medprompt Components via Ablation Studies

Figure 6: Identification of the relative contributions of different components of Medprompt via an ablation study.

Figure 6 shows the results of an ablation study conducted on the MedQA dataset, in an attempt to understand the relative contributions of each technique in Medprompt. The blue bars represent prior work from [23], and establish baselines for the Medprompt methodology. We then iteratively layered in each technique, and measured the relative difference in performance from each incremental change. As outlined in Section 4.4, our base Medprompt strategy uses 5 kNN-curated few-shot exemplars and ensembles 5 API calls together. We also experimented with using up to 20 few-shot exemplars and up to 11 steps in the ensemble. We found that performance does increase marginally to 90.6% with additional few-shot exemplars and more ensemble steps. This suggests that further improvements on benchmarks may yet be possible, with a corresponding increase in inference-time cost and complexity. The introduction of chain-of-thought steps, as described in Section 4, contributed the most to performance ($+3.4\%$), followed by few-shot prompting and choice shuffle ensembling ($+2.2\%$ each). The techniques we use are not statistically independent – therefore, the order in which we test the contribution of each method matters. Our choice of ordering for this ablation study is subjective and based on the relative complexity of the technique introduced. A more theoretically sound method for credit allocation in the ablation study would involve the calculation of game-theoretic Shapley values [28], which takes exponentially more model evaluations to test every potential permutation of orderings. We leave this to future work and encourage readers to think of the specific numbers in the ablation studies as reasonable approximations of relative contributions.

Table 2: Ablation study on expert-crafted chain-of-thought (CoT) vs. GPT-4 self-generated CoT.
Both use fixed 5-shot examples, with no ensemble.

| | MedQA US (4-option) |
|---|---|
| Expert-crafted CoT prompt from [30] | 83.8 |
| GPT-4’s self-generated CoT prompt | 86.9 (+3.1) |

Apart from the stack of incremental changes, we compare the expert-crafted chain-of-thought (CoT) prompt used in Med-PaLM 2 [30] with the CoT prompt automatically generated by GPT-4 (Section 4.2). We evaluate GPT-4 using both prompts, with fixed 5-shot examples and no ensemble. Table 2 reports their accuracy on the MedQA dataset. GPT-4’s self-generated CoT outperforms the expert-crafted one by 3.1 absolute points. We notice that, compared with the expert-crafted CoT used in Med-PaLM 2, CoT rationales generated by GPT-4 are longer and provide finer-grained step-by-step reasoning logic. One potential explanation is that GPT-4 generated CoT may be better suited to the model’s own strengths and limitations, which could lead to improved performance when compared to the expert-crafted one. Another potential explanation is that expert-crafted CoT may contain implicit biases or assumptions that may not hold for all questions in the MedQA dataset, whereas GPT-4 generated CoT may be more neutral and generalizable across different questions.

### 5.3 Generalization: Cross-Domain Exploration of Medprompt

We argue that the composition of prompt engineering techniques employed in Medprompt, based on a combination of dynamic few-shot selection, self-generated chain of thought, and choice shuffle ensembling, has general-purpose applicability. These techniques are not custom-tailored to the MultiMedQA benchmark datasets. To validate this, we further tested the final Medprompt methodology on six additional, diverse datasets from the MMLU benchmark suite covering challenge problems in the following subjects: electrical engineering, machine learning, philosophy, professional accounting, professional law, and professional psychology. We further sourced two additional datasets answering NCLEX (National Council Licensure Examination) style questions, the exam required to practice as a registered nurse in the United States.

Figure 7: GPT-4 performance with three different prompting strategies on out-of-domain datasets. Zero-shot and five-shot approaches represent baselines and mirror the methodology followed in [23].

Figure 7 shows GPT-4’s performance on these diverse, out-of-domain datasets with Medprompt alongside zero-shot and five-shot prompts (with random exemplar selection). Across these datasets, Medprompt provides an average improvement of $+7.3\%$ over baseline zero-shot prompting. By comparison, Medprompt provided a $+7.1\%$ improvement over the same zero-shot baseline on the MultiMedQA datasets studied in this paper. We emphasize that the similarity of improvement across datasets from different distributions demonstrates the generality of the Medprompt approach. While beyond the scope of this paper, we believe the general framework underlying Medprompt (a combination of few-shot learning and chain-of-thought reasoning wrapped in an ensemble layer) can further generalize in applicability beyond the multiple choice question/answer setting with minor algorithmic modifications. For example, in an open-text generation setting, the ensemble layer may not be able to rely on a direct majority vote, but instead may aggregate by selecting the answer closest to all other answers in an embedding space.
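One way to realize the embedding-space aggregation just mentioned is to pick the generated answer whose embedding is, on average, closest to all of the others (a medoid). A minimal sketch follows, with `embed` again a hypothetical embedding wrapper rather than a specific API call.

```python
import numpy as np

def medoid_answer(answers, embed):
    """Pick the generated answer whose embedding is, on average, closest
    (in cosine similarity) to all the other generated answers.

    `embed` is a hypothetical wrapper returning a vector for a piece of text;
    `answers` must contain at least two candidates.
    """
    vecs = np.array([embed(a) for a in answers], dtype=float)
    vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)
    sims = vecs @ vecs.T                                     # pairwise cosine similarities
    avg_sim = (sims.sum(axis=1) - 1.0) / (len(answers) - 1)  # exclude self-similarity
    return answers[int(np.argmax(avg_sim))]
```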
Another option would be to concatenate each of the $K$ generated pieces of text in a structured format and ask the model to select the most likely option, in the style of Ensemble Refinement [30]. We leave as future work exploration of the space of algorithmic modifications to other settings. ## 6 Limitations and Risks Our paper highlights the power of systematic prompt engineering for steering generalist foundation models to amplify the specialist abilities of GPT-4 on medical challenge problems. We now share reflections on limitations and future directions from our assessment. As foundation models are trained on massive, internet-scale datasets, strong performance on benchmark problems may be due to memorization or leakage effects, where direct test samples have previously been observed by the model during training. In our previous study, which assessed the performance of GPT-4 on the datasets studied in this work with basic prompting [23], we introduced and ran a blackbox testing algorithm (MELD) which was unable to discover evidence of memorization. However, blackbox testing approaches like MELD are unable to guarantee that data has not been seen before. We also separately assessed GPT-4’s performance on USMLE questions that were behind a paywall and, thus, not available on the public internet, and saw similarly strong performance [23]. In this study, we adopted standard machine learning best practices to control for overfitting and leakage during the prompt engineering process (Section 5.1). However, concerns of benchmark contamination during training remain. Further, we note that the strong performance of GPT-4 with Medprompt cannot be taken to demonstrate real-world efficacy of the model and methods on open- world healthcare tasks [23]. While we are excited about the ability to steer foundations models to become top specialists on the benchmarks, we are cautious about taking the performance of the prompting strategies and model output to mean that the methods will be valuable in the practice of medicine in the open world, whether for automated or assisting healthcare professionals with administrative tasks, clinical decision support, or patient engagement in the open world. To be clear, the medical challenge problems that we and others have studied are designed for testing human competencies in selected domains. Such competency tests are typically framed as sets of multiple choice questions. Although such challenge problems are a common evaluation method and cover diverse topics, they do not capture the range and complexity of medical tasks that healthcare professionals face in actual practice. Thus, the pursuit of tests as proxies for real-world competency and the focus on multiple-choice style answers are limitations when it comes to transferring strong performance on speciality benchmarks to real-world performance. Futhermore, while we believe that the MedPrompt strategy can be adapted to non-multiple choice settings, we did not explicitly test these proposed adaptations on benchmarks in this work. We note that foundation models can generate erroneous information (sometimes referred to as _hallucinations_) which may compromise generations and advice. While improvements in prompting strategies may lead to reductions in hallucinations and better overall accuracy, they may also make any remaining hallucinations even harder to detect. 
Promising directions include efforts on probabilistic calibration of generations, providing end-users with trustworthy measures of confidence in output. In our prior study, we found that GPT-4 was well-calibrated and could provide trustable measures of its confidence on multiple choice test questions [23]. We must also remain aware of biases in the output of foundation models. We do not yet understand how optimization in pursuit of top-level performance could influence other goals, such as equitable performance. It is vital to balance the pursuit of overall accuracy with equitable performance across different subpopulations to avoid exacerbating existing disparities in healthcare. Prior work has highlighted the need to understand and address biases in AI systems. The challenge of bias and fairness remains relevant and pressing in the context of model optimization, fine-tuning, and prompt engineering [13, 20, 38]. ## 7 Summary and Conclusions We presented background, methods, and results of a study of the power of prompting to unleash top-performing specialist capabilities of GPT-4 on medical challenge problems, without resorting to special fine-tuning nor reliance on human specialist expertise for prompt construction. We shared best practices for evaluating performance, including the importance of evaluating model capabilities on an eyes-off dataset. We reviewed a constellation of prompting strategies and showed how they could be studied and combined via a systematic exploration. We found a significant amount of headroom in boosting specialist performance via steering GPT-4 with a highly capable and efficient prompting strategy. We described the composition of a set of prompting methods into Medprompt, the best performing prompting strategy we found for steering GPT-4 on medical challenge problems. We showed how Medprompt can steer GPT-4 to handily top existing charts for all standard medical question-answering datasets, including the performance by Med-PaLM 2, a specialist model built via fine- tuning with specialist medical data and guided with handcrafted prompts authored by expert clinicians. Medprompt unlocks specialty skills on MedQA delivering significant gains in accuracy over the best performing model to date, surpassing 90% for the first time on the benchmark. During our exploration, we found that GPT-4 can be tasked with authoring sets of custom-tailored chain-of-thought prompts that outperform hand-crafted expert prompts. We pursued insights about the individual contributions of the distinct components of the Medprompt strategy via ablation studies that demonstrate the relative importance of each component. We set aside eyes-off evaluation case libraries to avoid overfitting and found that the strong results by Medprompt are not due to overfitting. We explored the generality of Medprompt via performing studies of its performance on a set of competency evaluations in six fields outside of medicine, including electrical engineering, machine learning, philosophy, accounting, law, nursing, and clinical psychology. The findings in disparate fields suggests that Medprompt and its derivatives will be valuable in unleashing specialist capabilities of foundation models for numerous disciplines. We see further possibilities for refining prompts to unleash speciality capabilities from generalist foundation models, particularly in the space of adapting the general MedPrompt strategy to non multiple choice questions. 
For example, we see an opportunity to build on the Medprompt strategy of using GPT-4 to compose its own powerful chain of thought examples and then employ them in prompting. Research directions moving forward include further investigation of the abilities of foundation models to reflect about and compose few-shot examples and to weave these into prompts. While our investigation focuses on exploring the power of prompting generalist models, we believe that fine-tuning, and other methods of making parametric updates to foundation models are important research avenues to explore, and may offer synergistic benefits to prompt engineering. We maintain that both approaches should be judiciously explored for unleashing the potential of foundation models in high-stakes domains like healthcare. ## Acknowledgments We thank Sébastien Bubeck, Peter Durlach, Peter Lee, Matthew Lungren, Satya Nadella, Joe Petro, Kevin Scott, Desney Tan, and Paul Vozila for discussion and feedback. ## References * [1] Niels J. Blunch. Position bias in multiple-choice questions. Journal of Marketing Research, 21(2):216–220, 1984. * [2] Elliot Bolton, David Hall, Michihiro Yasunaga, Tony Lee, Chris Manning, and Percy Liang. Biomedlm, 2022. Stanford Center for Research on Foundation Models. * [3] Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners, 2020. * [4] Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712, 2023. * [5] Rich Caruana, Alexandru Niculescu-Mizil, Geoff Crew, and Alex Ksikes. Ensemble selection from libraries of models. In Proceedings of the twenty-first international conference on Machine learning, page 18, 2004. * [6] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022. * [7] Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, and Tim Rocktäschel. Promptbreeder: Self-referential self-improvement via prompt evolution. arXiv preprint arXiv:2309.16797, 2023. * [8] Matthias Feurer and Frank Hutter. Hyperparameter optimization. Automated machine learning: Methods, systems, challenges, pages 3–33, 2019. * [9] Zelalem Gero, Chandan Singh, Hao Cheng, Tristan Naumann, Michel Galley, Jianfeng Gao, and Hoifung Poon. Self-verification improves few-shot clinical information extraction. In ICML 3rd Workshop on Interpretable Machine Learning in Healthcare (IMLH), 2023. * [10] Yu Gu, Robert Tinn, Hao Cheng, Michael Lucas, Naoto Usuyama, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, and Hoifung Poon. Domain-specific language model pretraining for biomedical natural language processing. ACM Transactions on Computing for Healthcare (HEALTH), 3(1):1–23, 2021. 
* [11] Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300, 2020. * [12] Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. Training compute-optimal large language models. arXiv preprint arXiv:2203.15556, 2022. * [13] Ayanna M. Howard, Cha Zhang, and Eric Horvitz. Addressing bias in machine learning algorithms: A pilot study on emotion recognition for intelligent systems. 2017 IEEE Workshop on Advanced Robotics and its Social Impacts (ARSO), pages 1–7, 2017. * [14] Di Jin, Eileen Pan, Nassim Oufattole, Wei-Hung Weng, Hanyi Fang, and Peter Szolovits. What disease does this patient have? a large-scale open domain question answering dataset from medical exams. Applied Sciences, 11(14):6421, 2021. * [15] Qiao Jin, Bhuwan Dhingra, Zhengping Liu, William W Cohen, and Xinghua Lu. Pubmedqa: A dataset for biomedical research question answering. arXiv preprint arXiv:1909.06146, 2019. * [16] Miyoung Ko, Jinhyuk Lee, Hyunjae Kim, Gangwoo Kim, and Jaewoo Kang. Look at the first sentence: Position bias in question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1109–1121, Online, November 2020\. Association for Computational Linguistics. * [17] Tiffany H Kung, Morgan Cheatham, Arielle Medenilla, Czarina Sillos, Lorie De Leon, Camille Elepaño, Maria Madriaga, Rimel Aggabao, Giezel Diaz-Candido, James Maningo, et al. Performance of chatgpt on usmle: Potential for ai-assisted medical education using large language models. PLoS digital health, 2(2):e0000198, 2023. * [18] Jiachang Liu, Dinghan Shen, Yizhe Zhang, Bill Dolan, Lawrence Carin, and Weizhu Chen. What makes good in-context examples for gpt-$3$?, 2021. * [19] Renqian Luo, Liai Sun, Yingce Xia, Tao Qin, Sheng Zhang, Hoifung Poon, and Tie-Yan Liu. Biogpt: generative pre-trained transformer for biomedical text generation and mining. Briefings in Bioinformatics, 23(6):bbac409, 2022. * [20] Michael Madaio, Lisa Egede, Hariharan Subramonyam, Jennifer Wortman Vaughan, and Hanna Wallach. Assessing the fairness of ai systems: AI practitioners’ processes, challenges, and needs for support. In 25th ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW 2022), February 2022. * [21] John McCarthy, Marvin L Minsky, Nathaniel Rochester, and Claude E Shannon. A proposal for the Dartmouth summer research project on artificial intelligence, August 31, 1955. AI magazine, 27(4):12–12, 2006. * [22] Allen Newell, John C Shaw, and Herbert A Simon. Report on a general problem solving program. In IFIP congress, volume 256, page 64. Pittsburgh, PA, 1959. * [23] Harsha Nori, Nicholas King, Scott Mayer McKinney, Dean Carignan, and Eric Horvitz. Capabilities of GPT-4 on medical challenge problems. arXiv preprint arXiv:2303.13375, 2023. * [24] OpenAI. Gpt-4 technical report. ArXiv, abs/2303.08774, 2023. * [25] Ankit Pal, Logesh Kumar Umapathi, and Malaikannan Sankarasubbu. Medmcqa: A large-scale multi-subject multi-choice dataset for medical domain question answering. In Conference on Health, Inference, and Learning, pages 248–260. PMLR, 2022. * [26] Pouya Pezeshkpour and Estevam Hruschka. Large language models sensitivity to the order of options in multiple-choice questions. arXiv preprint arXiv:2308.11483, 2023. 
* [27] Rylan Schaeffer, Brando Miranda, and Sanmi Koyejo. Are emergent abilities of large language models a mirage?, 2023. * [28] Lloyd S Shapley et al. A value for n-person games. 1953\. * [29] Karan Singhal, Shekoofeh Azizi, Tao Tu, S Sara Mahdavi, Jason Wei, Hyung Won Chung, Nathan Scales, Ajay Tanwani, Heather Cole-Lewis, Stephen Pfohl, et al. Large language models encode clinical knowledge. arXiv preprint arXiv:2212.13138, 2022. * [30] Karan Singhal, Tao Tu, Juraj Gottweis, Rory Sayres, Ellery Wulczyn, Le Hou, Kevin Clark, Stephen Pfohl, Heather Cole-Lewis, Darlene Neal, Mike Schaekermann, Amy Wang, Mohamed Amin, Sami Lachgar, Philip Mansfield, Sushant Prakash, Bradley Green, Ewa Dominowska, Blaise Aguera y Arcas, Nenad Tomasev, Yun Liu, Renee Wong, Christopher Semturs, S. Sara Mahdavi, Joelle Barral, Dale Webster, Greg S. Corrado, Yossi Matias, Shekoofeh Azizi, Alan Karthikesalingam, and Vivek Natarajan. Towards expert-level medical question answering with large language models, 2023. * [31] Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. Advances in neural information processing systems, 27, 2014. * [32] Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models, 2023. * [33] Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, and William Fedus. Emergent abilities of large language models. Transactions on Machine Learning Research, 2022. Survey Certification. * [34] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. Chain-of-thought prompting elicits reasoning in large language models, 2023. * [35] Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V Le, Denny Zhou, and Xinyun Chen. Large language models as optimizers. arXiv preprint arXiv:2309.03409, 2023. * [36] Michihiro Yasunaga, Antoine Bosselut, Hongyu Ren, Xikun Zhang, Christopher D. Manning, Percy Liang, and Jure Leskovec. Deep bidirectional language-knowledge graph pretraining. In Neural Information Processing Systems (NeurIPS), 2022. * [37] Michihiro Yasunaga, Jure Leskovec, and Percy Liang. Linkbert: Pretraining language models with document links. In Association for Computational Linguistics (ACL), 2022. * [38] Travis Zack, Eric Lehman, Mirac Suzgun, Jorge A. Rodriguez, Leo Anthony Celi, Judy Gichoya, Dan Jurafsky, Peter Szolovits, David W. Bates, Raja-Elie E. Abdulnour, Atul J. Butte, and Emily Alsentzer. Coding inequity: Assessing gpt-4’s potential for perpetuating racial and gender biases in healthcare. medRxiv, 2023. * [39] Chujie Zheng, Hao Zhou, Fandong Meng, Jie Zhou, and Minlie Huang. Large language models are not robust multiple choice selectors, 2023. * [40] Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric. P Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. Judging llm-as-a-judge with mt-bench and chatbot arena, 2023.
# Transonic and supershear crack propagation driven by geometric nonlinearities Mohit Pundir Institute for Building Materials, ETH Zurich, Switzerland Mokhtar Adda-Bedia Laboratoire de Physique, CNRS, ENS de Lyon, Université de Lyon, 69342 Lyon, France David S. Kammer<EMAIL_ADDRESS>Institute for Building Materials, ETH Zurich, Switzerland ###### Abstract Linear elastic fracture mechanics theory predicts that the speed of crack growth is limited by the Rayleigh wave speed. Although many experimental observations and numerical simulations have supported this prediction, some exceptions have raised questions about its validity. The underlying reasons for these discrepancies and the precise limiting speed of dynamic cracks remain unknown. Here, we demonstrate that tensile (mode I) cracks can exceed the Rayleigh wave speed and propagate at supershear speeds. We show that taking into account geometric non-linearities, inherent in most materials, is sufficient to enable such propagation modes. These geometric non-linearities modify the crack-tip singularity, resulting in different crack-tip opening displacements, cohesive zone behavior, and energy flows towards the crack tip. The speed at which cracks propagate is a fundamental characteristic that has implications in various fields such as material design [1], earthquake mechanics [2], and even the phenomenon of popping balloons [3]. Linear Elastic Fracture Mechanics (LEFM) [4] plays a crucial role in predicting crack speed $c_{\mathrm{f}}$ by establishing an energy balance between the energy release rate, which drives crack growth, and the fracture energy ($\Gamma$), which resists it. This framework, which assumes that $\Gamma$ is dissipated solely at the crack tip, predicts that the material Rayleigh wave speed $c_{\mathrm{R}}$ serves as a limiting speed for crack propagation. This prediction has been experimentally confirmed [5]. Crack growth occurring at speeds between $c_{\mathrm{R}}$ and the shear wave speed $c_{\mathrm{s}}$ is considered physically inadmissible, as it would generate energy rather than dissipate it. Nevertheless, LEFM predicts that cracks can propagate at supershear speeds $c_{\mathrm{f}}>c_{\mathrm{s}}$ if one assumes that dissipation occurs within a spatially extended zone around the crack tip [4, 6]. However, the specific conditions that allow for supershear propagation of, particularly, opening (mode I) cracks and the processes involved in the transition through the forbidden speed range remain largely unknown. Supershear crack growth is predominantly observed in cracks under shear (mode II) loading conditions, as described theoretically [7, 8] and widely supported by numerical simulations [9, 10, 8], experimental studies [11, 12, 13, 14, 15], and natural observations [2, 16, 17, 18, 19]. Supershear propagation is generally associated with high-stress states [9, 8]. In contrast, supershear propagation in cracks under mode I loading conditions is relatively rare. Molecular dynamics (MD) simulations [20] and lattice models [21, 22, 23, 24] have shown instances of supershear crack speeds, while experimental observations have been reported for rubber-like materials [25, 3, 26], hydrogels [27] and structural materials where the loading is applied directly at the crack tip by some extreme conditions [28]. 
The presence of some type of non-linearity, extending beyond the limits of LEFM, is a recurring feature in both simulations and experiments, indicating its potential contribution to enabling supershear growth in tensile cracks. However, the specific type of material non-linearity required for this phenomenon, as well as its generality across different materials, remains unknown. Here, we investigate the minimal requirements for the transition to supershear propagation of tensile cracks using numerical simulations. Our simulations reveal that the presence of geometric non-linearities alone is the primary factor driving supershear crack growth resulting from a continuous acceleration through the transonic speed range. Since such non-linearities are generally present in materials, these findings demonstrate that supershear propagation is an inherent characteristic in dynamic crack problems, independent of the specific material constitutive laws. Figure 1: Model setup and illustrative examples. (a) 2D model configuration with an elastic material and a weak cohesive interface. (b) Temporal evolution of interface for simulations at an imposed stretch $\lambda=1.125$ with (left) linear elastic material ($\alpha=0$) and (right) geometric non-linear elastic material ($\alpha=1$). Blue indicates intact material, turquoise the cohesive zone area, and yellow the broken interface. The crack speed $c_{\mathrm{f}}$ is indicated by white lines, and the Rayleigh wave speed $c_{\mathrm{R}}$ by a black dashed line. We consider the most generic and simple model without introducing a non-linear material constitutive law or any additional material parameter. The material deformation is described by a two-dimensional plane-strain tensor $E_{ij}$, defined as $E_{ij}=\frac{1}{2}\left(\frac{\partial u_{i}}{\partial x_{j}}+\frac{\partial u_{j}}{\partial x_{i}}+\alpha\frac{\partial u_{k}}{\partial x_{j}}\frac{\partial u_{k}}{\partial x_{i}}\right)~{},$ (1) where $u_{i}$ and $x_{i}$ are the $i$-th displacement and coordinate component $(i\equiv x,y)$, respectively. To easily switch between linear and geometrically non-linear (GNL) cases, we introduce the factor $\alpha\in\\{0,1\\}$. Therefore, for $\alpha=1$, $E_{ij}$ corresponds to the Green-Lagrangian strain tensor, while for $\alpha=0$, it is its linear approximation, the infinitesimal strain tensor $\varepsilon_{ij}$ (see Fig. 1a). In our model, we employ a linear elastic constitutive law, as described by the shear modulus $\mu$ and Poisson’s ratio $\nu$, to relate the two- dimensional $2^{\mathrm{nd}}$-Piola-Kirchhoff stress tensor $\sigma_{ij}$ to $E_{ij}$, through $\sigma_{ij}=2\mu\left(E_{ij}+\frac{\nu}{1-2\nu}\delta_{ij}E_{kk}\right)~{},$ (2) where $\delta_{ij}$ is the Kronecker delta. Notice that, as sketched in Fig. 1a, the GNL model ($\alpha=1$) induces a strain-enhancing effect with respect to the linear case ($\alpha=0$). The advantage of this model, compared to one with a non-linear constitutive law (e.g., neo-Hookean), is that it allows isolation of the effect of non-linearity on the crack propagation. In the following, we choose $\mu=39.2~{}\mathrm{kPa}$ and $\nu=0.35$, and then solve the problem for the conservation of linear momentum (full details provided in [29]). Fracture of the material is modeled using a cohesive approach, where cohesive tractions across the crack plane represent the progressive failure of the material. In our model, see Fig. 
1a, we adopt a linear cohesive law with $\sigma_{c}=20~{}\mathrm{kPa}$ and $\Gamma=15~{}\mathrm{J/m^{2}}$, which gives a critical opening distance of $\delta_{\mathrm{c}}=1.5~{}\mathrm{mm}$ (for details, see [29]). This cohesive approach allows for the representation of a cohesive zone that captures localized spatially distributed dissipation, providing an approximation of the process zone observed in natural fractures. We use standard numerical techniques, detailed in [29], to accurately simulate crack growth. We examine the behavior of fracture growth in a two-dimensional plane-strain system of height $H=102.6~{}\mathrm{mm}$ and length $L=154~{}\mathrm{mm}$ mimicking the most common experimental configuration (see Fig. 1a). The dimensions are chosen sufficiently large to avoid any wave reflections that could affect the results. We apply a uniform and constant remote displacement $\Delta\gg\delta_{\mathrm{c}}$, which results in a uniform stretch $\lambda=1+(\Delta/H)$ on the entire sample. We initiate crack growth at time $t=0$ by artificially introducing a seed crack that slightly exceeds Griffith’s critical length for plane-strain conditions given by $L_{\mathrm{G}}=2\mu\Gamma/\pi(1-\nu)\sigma_{\infty}^{2}=5.1~{}\mathrm{mm}$ where $\sigma_{\infty}$ is the applied stress induced by the imposed remote displacement $\Delta$. The growth of the crack is confined to a (weak) plane perpendicular to the imposed stretch and aligned with the seed crack, restricting its propagation to a straight path $y=0$ (as illustrated in Fig. 1a). This constraint effectively prevents crack branching instabilities, commonly observed [30, 31], imitates the grooves used in experiments [30, 27], and enables a thorough exploration of crack speeds across the full range. First, we consider the linear elastic case ($\alpha=0$). Immediately after the seed crack is introduced, it becomes unstable, accelerates, and propagates through the entire interface (Fig. 1b-left). The crack-tip position, as defined by the transition from intact material to the cohesive zone (see Fig. 1b-left), moves through the interface, leading to a growing crack length $a(t)$. We observe that the crack speed, as computed by $c_{\mathrm{f}}=\mathrm{d}a/\mathrm{d}t$, approaches $c_{\mathrm{R}}$ (see Fig. 1b-left) but does not exceed it respecting the limiting speed given by LEFM. Considering the exact same model with the sole difference of including geometric nonlinearities ($\alpha=1$), we observe a different crack propagation (Fig. 1b-right). In this case, the crack continuously surpasses both $c_{\mathrm{R}}$ and $c_{\mathrm{s}}$, and propagates at supershear speeds. These results reveal an unknown mechanism for supershear propagation, which is simply due to geometric non-linearities. Figure 2: Crack-tip dynamics for three different values of stretches $\lambda=1.0875,1.1$ and $1.125$. The crack dynamics are shown in green shades for $\alpha=0$, and purple shades for $\alpha=1$, respectively. The red solid line is the LEFM crack-tip equation of motion. For a quantitative evaluation of the different crack growth behavior, we consider the instantaneous crack-tip dynamics, as shown in Fig. 2. For uniform systems, LEFM predicts that $c_{\mathrm{f}}/c_{\mathrm{R}}\approx 1-L_{\mathrm{G}}/a$ (details provided in [29]), which shows that a LEFM- governed crack remains sub-Rayleigh – even if it gets infinitely long. We observe that the simulation with a linear elastic material ($\alpha=0$) agrees quantitatively well with the LEFM prediction (Fig. 2). 
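As a quick numerical illustration, the Griffith length and the approximate LEFM crack-tip equation of motion quoted above can be evaluated directly; the minimal Python sketch below is our own (not part of the simulation code), and the remote stress value is only a placeholder, since in the simulations it follows from the imposed stretch.

```python
import numpy as np

# Material and interface parameters quoted in the text (assumed here for illustration).
mu = 39.2e3       # shear modulus [Pa]
nu = 0.35         # Poisson's ratio
Gamma = 15.0      # fracture energy [J/m^2]
sigma_c = 20.0e3  # cohesive strength [Pa]

# Critical opening of the linear cohesive law: Gamma = 0.5 * sigma_c * delta_c.
delta_c = 2.0 * Gamma / sigma_c  # = 1.5e-3 m, as stated in the text

def griffith_length(sigma_inf):
    """Plane-strain Griffith length L_G = 2*mu*Gamma / (pi*(1-nu)*sigma_inf**2)."""
    return 2.0 * mu * Gamma / (np.pi * (1.0 - nu) * sigma_inf**2)

def lefm_speed_ratio(a, sigma_inf):
    """Approximate LEFM crack-tip equation of motion, c_f/c_R ~ 1 - L_G/a."""
    return np.clip(1.0 - griffith_length(sigma_inf) / a, 0.0, None)

sigma_inf = 1.0e3                 # placeholder remote stress [Pa]
L_G = griffith_length(sigma_inf)
print(delta_c)                            # 0.0015 m
print(lefm_speed_ratio(10 * L_G, sigma_inf))  # 0.9: a crack ten Griffith lengths long runs at ~0.9 c_R
```

Because $L_{\mathrm{G}}/a$ only tends to zero asymptotically, the predicted speed stays strictly below $c_{\mathrm{R}}$ for any finite crack length.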
Specifically, it asymptotically approaches $c_{\mathrm{R}}$, satisfying the theoretical limit by remaining at sub-Rayleigh speeds for all crack lengths. In contrast, the simulations with geometric non-linearities ($\alpha=1$) do not follow the LEFM prediction and exceed the limiting speed $c_{\mathrm{R}}$. These results reveal a few important mechanisms for crack dynamics of geometrically non-linear materials. First, simulations at different stretch levels are superimposed when normalized by Griffith’s length (see Fig. 2), which suggests that there is a crack-tip equation of motion for geometrically non-linear materials. Second, the crack speeds in the GNL case are consistently and significantly above the LEFM prediction even in the sub- Rayleigh regime (i.e. $c_{\mathrm{f}}<c_{\mathrm{R}}$), indicating that the LEFM energy balance is fundamentally changed. Third, the crack accelerates simply through the transonic speed range $\left[c_{\mathrm{R}},c_{\mathrm{s}}\right]$ to reach supershear speeds ($c_{\mathrm{f}}>c_{\mathrm{s}}$). This is fundamentally different from the subRayleigh-to-supershear transition observed in shear cracks, in which a secondary crack ahead of the main crack is required to allow for a crack speed jump (i.e. discontinuity) across the forbidden transonic speed range [9, 12, 15]. Figure 3: Crack-tip opening displacement, $\delta$, for simulations at $\lambda=1.1$ with small strain $\alpha=0$ (shown in green) and geometric non- linear strain $\alpha=1$ (shown in purples), respectively. For $\alpha=1$, the light purple corresponds to the subRayleigh speed, $c_{\mathrm{f}}\approx 0.93\,c_{\mathrm{R}}$, and the dark purple corresponds to subsonic speed, $c_{\mathrm{f}}\approx 1.07\,c_{\mathrm{R}}$. The cross marks indicate the ends of the cohesive zone. (inset) A log-log plot of $\delta$ as a function of $r=(a(t)-x)/L_{G}$, the distance from the crack tip. The black and red lines indicate $r^{1/2}$ scaling. Next, we focus on the crack-tip opening displacement to determine the mechanisms allowing for propagation through the transonic regime. For the linear case ($\alpha=0$), see Fig. 3, the crack opening $\delta$ follows a square-root behavior outside the cohesive zone, which is, as expected, consistent with LEFM. The GNL material, however, presents a different behavior. Close to the crack-tip (but outside the cohesive zone), the exponent increases (see inset in Fig. 3). This effect becomes even stronger when the crack surpasses $c_{\mathrm{R}}$ (see Fig. 3). These results suggest that the square-root singular behavior of the strain and stress fields is not relevant when GNL effects are considered. This calls into question the foundations of brittle fracture mechanics, which are based mainly on the consequences of the square root behavior for energy budgeting and, thus, for the crack equation of motion. Such an observation may waver one of the fundamental corollaries of LEFM: the maximum speed allowed by the rate of energy flow toward the crack tip. To visualize further the modified near-tip fields and the associated energy flow to the crack tip, we compute the Poynting vector [4, 20] (details in [29]). For the linear material, the Poynting vector field takes the ordinary shape and values (see Fig. 4a). At the same sub-Rayleigh crack speed, the GNL material presents a significantly different pattern (see Fig. 4b). 
While the magnitude of the Poynting vector is lower in the vicinity of the crack tip (note the absence of blue color), it is somewhat increased further away (see brighter red at $|x|>a)$. Further changes occur at transonic crack speeds (see Fig. 4c), where the magnitude of the Poynting vector increases but remains below the values observed in the linear material, and the lobes are inclined to the back of the crack, which are forerunners of the Mach cone in the supershear propagation regime [20]. These modifications to the near-tip crack fields confirm that the crack dynamics changed due to the GNL material behavior and point to a totally different energy budgeting even in the subshear propagation regime. Figure 4: Snapshots of energy flow density, represented as the magnitude of Poynting vector $P_{j}$ (see [29] for more details). All the snapshots are for an imposed stretch $\lambda=1.125$ (a) $\alpha=0$ at subRayleigh speed $c_{\mathrm{f}}\approx 0.93c_{\mathrm{R}}$ (b) $\alpha=1$ at subRayleigh speed, $c_{\mathrm{f}}\approx 0.93c_{\mathrm{R}}$ (c) $\alpha=1$ at subsonic speed, $c_{\mathrm{f}}\approx 1.04c_{\mathrm{R}}$. (d) Evolution of cohesive zone size $\mathcal{X}(c_{\mathrm{f}})$ for $\lambda=1.125$ with small strain $\alpha=0$ and geometric non-linear strain $\alpha=1$, respectively. The cohesive zone size is normalized by static cohesive zone size $\mathcal{X}_{0}$. The red solid line shows the analytical solution for cohesive zone size from LEFM. The modified energy flux to the crack tip also causes changes to the local dissipation, which manifests itself in the properties of the cohesive zone. From the near-tip Poynting vector fields (see Fig. 4a-c), we observe an increase in the cohesive-zone size $\mathcal{X}(c_{\mathrm{f}})$ for the GNL case. Quantitatively, the cohesive zone size for the linear material follows, after some initial perturbations from the nucleation, the LEFM prediction [4] with a Lorentz contraction from its static size $\mathcal{X}_{0}$ to zero towards $c_{\mathrm{R}}$ (see Fig. 4d). In contrast, the cohesive zone in the GNL material is considerably larger and appears to be relatively constant at $\mathcal{X}\approx 0.4\,\mathcal{X}_{0}$ while the crack traverses the transonic regime and reaches supershear propagation. This suggests that kinetic energy and bulk wave interference responsible for the Lorentz contraction in LEFM yield completely different results in materials where GNL effects are dominant. The fact that $\mathcal{X}$ is finite in the transonic regime indicates that the Lorentz contraction is superseded by a different mechanism that could be related to either the cohesive zone response or the non-square root singular behavior near the crack tip. The present study confirms previous numerical simulations related to the limiting speed of propagating cracks [20, 21, 22, 23, 24]. In MD simulations, where hyperelasticity was assumed, supershear propagation was related to an increased local wave speed in the highly stretched region at the crack tip compared to the far-field wave speed. In simulations using a lattice model, a completely different branch of supershear solutions for propagating cracks was found for stretches above a critical value. However, the novelty of the present work is to show that supershear propagation is possible within a continuum elastic framework. We show that GNL is the principal ingredient to “break” the barrier of Rayleigh wave speed for tensile crack propagation and that there is no forbidden interval velocity. 
Tensile cracks accelerate smoothly from the sub-Rayleigh regime to the supershear regime. These observations are robust as they do not depend on the specific choice of material parameter values (see [29]). Moreover, they agree with experiments on crack propagation in hydrogels [27] and rubber-like materials [25, 26]. The common thread between our model and these experiments is that the materials considered exhibit a nonlinear elastic response, for which LEFM is possibly not an adequate framework. Note that in the case of hydrogels, experiments show that supershear rupture is enabled above an applied stretch level [27]. We believe that the existence of such a critical stretch is caused by the velocity dependence of the fracture energy of the hydrogel. Our model is consistent with a constant $\Gamma$ that allows the crack to always be in an accelerating phase (both in LEFM and certainly in GNL frameworks). For GNL simulations, our system size and other crack nucleation aspects prevent us from determining the terminal supershear speed of the crack, which is beyond the scope of this study. Let us conclude with a discussion of possible directions for future work. The results for the crack-tip opening displacement, energy release rate, and cohesive zone size point toward the conclusion that the elastic field distribution and energy budgeting in the vicinity of the crack tip of a GNL material exhibit completely different behaviors from those of a linear elastic material. Recent attempts to uncover the underlying mechanisms tackled such problems perturbatively [32]; however, our results lean toward a non-perturbative effect. We believe that the methods developed for static cracks in nonlinear materials [33] should be generalized to the dynamic problem. Indeed, important questions arise related to a material's non-linear response, which may produce either strain stiffening or strain softening at high stretches. How does this affect our findings, and is any non-linearity capable of allowing supershear cracks? For example, it is believed that dynamic crack growth in a purely neo-Hookean material follows LEFM solutions [32]. How does this reconcile with the fact that the non-linearity used in our model is present in any material? Why is it believed that supershear crack propagation is not possible in engineering brittle materials such as glass? In such materials, other instabilities that occur at lower speeds (such as microbranching, oscillatory instabilities, and instabilities along the crack front) may prevent cracks from reaching transonic and supershear speeds. Moreover, the brittleness of such materials does not allow them to store enough potential energy prior to crack propagation. This would explain why supershear cracks are commonly associated with direct extreme loading on the crack surface [28, 11]. Finally, the classical LEFM theory developed over the last century [4, 6] has been a strong backbone for our understanding of material fracture. Here, we demonstrate its breakdown by showing that naturally occurring geometric non-linear strain causes supershear crack propagation. This study provides a starting point for the development of a new framework within the mechanics of continuous media: nonlinear elastic fracture mechanics (NLEFM).
###### Acknowledgements.
DSK and MP acknowledge support from the Swiss National Science Foundation under the SNSF starting grant (TMSGI2_211655). We thank J. Fineberg and M. Wang for sharing their experimental results [27] prior to publication and for fruitful discussions.
## References * Baumberger _et al._ [2006] T. Baumberger, C. Caroli, and D. Martina, Nature Materials 5, 552 (2006). * Bao _et al._ [2022] H. Bao, L. Xu, L. Meng, J.-P. Ampuero, L. Gao, and H. Zhang, Nature Geoscience 15, 942 (2022). * Moulinet and Adda-Bedia [2015] S. Moulinet and M. Adda-Bedia, Physical Review Letters 115, 184301 (2015). * Freund [1998] L. B. Freund, _Dynamic Fracture Mechanics_ (1998). * Goldman _et al._ [2010] T. Goldman, A. Livne, and J. Fineberg, Physical Review Letters 104, 114301 (2010). * Broberg [1999] K. B. Broberg, _Cracks and Fracture_ (Elsevier, 1999). * Burridge [1973] R. Burridge, Geophysical Journal International 35, 439 (1973). * Kammer _et al._ [2018] D. S. Kammer, I. Svetlizky, G. Cohen, and J. Fineberg, Science Advances 4, eaat5622 (2018). * Andrews [1976] D. J. Andrews, Journal of Geophysical Research (1896-1977) 81, 5679 (1976). * Abraham and Gao [2000] F. F. Abraham and H. Gao, Physical Review Letters 84, 3113 (2000). * Rosakis _et al._ [1999] A. J. Rosakis, O. Samudrala, and D. Coker, Science 284, 1337 (1999). * Xia _et al._ [2004] K. Xia, A. J. Rosakis, and H. Kanamori, Science 303, 1859 (2004). * Ben-David _et al._ [2010] O. Ben-David, G. Cohen, and J. Fineberg, Science 330, 211 (2010). * Passelègue _et al._ [2013] F. X. Passelègue, A. Schubnel, S. Nielsen, H. S. Bhat, and R. Madariaga, Science 340, 1208 (2013). * Svetlizky _et al._ [2016] I. Svetlizky, D. P. Muñoz, M. Radiguet, D. S. Kammer, J.-F. Molinari, and J. Fineberg, Proceedings of the National Academy of Sciences 113, 542 (2016). * Bouchon and Vallée [2003] M. Bouchon and M. Vallée, Science 301, 824 (2003). * Walker and Shearer [2009] K. T. Walker and P. M. Shearer, Journal of Geophysical Research: Solid Earth 114, 10.1029/2008JB005738 (2009). * Dunham [2004] E. M. Dunham, Bulletin of the Seismological Society of America 94, S256 (2004). * Wang _et al._ [2016] D. Wang, J. Mori, and K. Koketsu, Earth and Planetary Science Letters 440, 115 (2016). * Buehler _et al._ [2003] M. J. Buehler, F. F. Abraham, and H. Gao, Nature 426, 141 (2003). * Guozden _et al._ [2010] T. M. Guozden, E. A. Jagla, and M. Marder, International Journal of Fracture 162, 107 (2010). * Slepyan [1981] L. I. Slepyan, Doklady Akademii Nauk 260, 566 (1981). * Marder [2005] M. Marder, Physical Review Letters 94, 048001 (2005). * Marder [2006] M. Marder, Journal of the Mechanics and Physics of Solids 54, 491 (2006). * Petersan _et al._ [2004] P. J. Petersan, R. D. Deegan, M. Marder, and H. L. Swinney, Physical Review Letters 93, 015504 (2004). * Mai _et al._ [2020] T.-T. Mai, K. Okuno, K. Tsunoda, and K. Urayama, ACS Macro Letters 9, 762 (2020). * Wang _et al._ [2023] M. Wang, S. Shi, and J. Fineberg, Science 381, 415 (2023), publisher: American Association for the Advancement of Science. * Winkler _et al._ [1970] S. Winkler, D. A. Shockey, and D. R. Curran, International Journal of Fracture Mechanics 6, 151 (1970). * [29] See supplemental material at [url will be inserted by publisher] for details on the formulation of the numerical framework for modeling dynamic crack propagation in geometrical nonlinear materials, the LEFM equation of motion, and the near-tip Poynting vector field. * Washabaugh and Knauss [1994] P. D. Washabaugh and W. G. Knauss, International Journal of Fracture 65, 97 (1994). * Sharon and Fineberg [1999] E. Sharon and J. Fineberg, Nature 397, 333 (1999). * Livne _et al._ [2010] A. Livne, E. Bouchbinder, I. Svetlizky, and J. 
Fineberg, Science 327, 1359 (2010), publisher: American Association for the Advancement of Science. * Long and Hui [2015] R. Long and C.-Y. Hui, Extreme Mechanics Letters 4, 131 (2015). * Bathe [1996] K.-J. Bathe, _Finite element procedures_ , second edition ed. (Prentice Hall, Englewood Cliffs, New Jersey, 1996). * Hansbo and Hansbo [2004] A. Hansbo and P. Hansbo, Computer Methods in Applied Mechanics and Engineering 193, 3523 (2004). * Hansbo and Salomonsson [2015] P. Hansbo and K. Salomonsson, Finite Elements in Analysis and Design 102-103, 1 (2015). * Ten Eyck and Lew [2006] A. Ten Eyck and A. Lew, International Journal for Numerical Methods in Engineering 67, 1204 (2006). * Liu _et al._ [2009] R. Liu, M. F. Wheeler, and C. N. Dawson, Computers & Structures 87, 141 (2009). * Nguyen [2014] V. P. Nguyen, Engineering Fracture Mechanics 128, 37 (2014). * Camacho and Ortiz [1996] G. T. Camacho and M. Ortiz, International Journal of Solids and Structures 33, 2899 (1996). ## Appendix A Supplemental Material ## Appendix B Total Lagrangian formulation for geometrical non-linearties Let $\tau_{ij}$ denote the true (Cauchy) stress tensor. The equation of motion in the absence of body forces reads: $\tau_{ij,j}-\prescript{t+\Delta t}{}{\rho}\ddot{u}_{i}=0\quad\text{on}~{}\prescript{t+\Delta t}{}{\Omega}$ (3) where $\prescript{t+\Delta t}{}{\Omega}$ is the body domain in the deformed (actual) configuration at time $t+\Delta t$, $\prescript{t+\Delta t}{}{\rho}$ is the material density in that configuration, and the differentiation is with respect to the deformed configuration. The weak form of Eq. 3 reads $\begin{split}\int_{\prescript{t+\Delta t}{}{\Omega}}\delta u_{i}\tau_{ij,j}\mathrm{d}\Omega~{}-\int_{\prescript{t+\Delta t}{}{\Omega}}\delta u_{i}\prescript{t+\Delta t}{}{\rho}\ddot{u}_{i}\mathrm{d}\Omega=0~{}.\end{split}$ (4) Applying integration by parts and divergence theorem, while omitting the boundary terms, we find $\int_{\prescript{t+\Delta t}{}{\Omega}}\delta e_{ij}\tau_{ij}~{}\mathrm{d}\Omega~{}+\int_{\prescript{t+\Delta t}{}{\Omega}}\delta u_{i}\prescript{t+\Delta t}{}{\rho}\ddot{u}_{i}~{}\mathrm{d}\Omega=0~{},$ (5) where $e_{ij}$ is the linear strain tensor in the deformed configuration. Since a body can undergo large displacements and rotations, the above relation cannot be solved directly. It is solved incrementally where the solution at $t+\Delta t$ is approximated from already known equilibrium configurations. Thus, it is more suitable to employ the Total Lagrangian (T.L.) formulation where the known equilibrium configuration at $t=0$ is considered, and all the kinematic variables are thus computed relative to this initial configuration. The T.L. formulation uses the $2^{\mathrm{nd}}$-Piola-Kirchhoff stress measure $\sigma_{ij}$, and Green-Lagrange strain measure $E_{ij}$ to formulate Eq. 5 with respect to the initial configuration at time $t=0$. This yields $\int_{\prescript{0}{}{\Omega}}\delta\prescript{t+\Delta t}{}{E_{ij}}\prescript{t+\Delta t}{}{\sigma_{ij}~{}\mathrm{d}\Omega}~{}+\int_{\prescript{0}{}{\Omega}}\delta\prescript{t+\Delta t}{}{u_{i}}\prescript{0}{}{\rho}\prescript{t+\Delta t}{}{\ddot{u}_{i}}~{}\mathrm{d}\Omega=0$ (6) where for a geometrical non-linear case $\displaystyle\delta\prescript{t+\Delta t}{}{E_{ij}}$ $\displaystyle=\delta\dfrac{1}{2}(\prescript{t+\Delta t}{}{u_{i,j}}+\prescript{t+\Delta t}{}{u_{j,i}}+\prescript{t+\Delta t}{}{u_{k,i}}\prescript{t+\Delta t}{}{u_{k,j}})~{}.$ (7) Notice that the differentiation is with respect to the initial configuration. 
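To make the strain-enhancing effect of the quadratic term concrete, the following short Python sketch (our own illustration, not part of the solver) evaluates the strain measure of Eq. 1 for a homogeneous uniaxial stretch and compares the Green-Lagrange strain with its infinitesimal approximation:

```python
import numpy as np

def strain(grad_u, alpha):
    """E_ij = 0.5*(u_{i,j} + u_{j,i} + alpha * u_{k,i} u_{k,j}); alpha=0 is the
    infinitesimal strain, alpha=1 the Green-Lagrange strain (cf. Eq. 1)."""
    return 0.5 * (grad_u + grad_u.T + alpha * grad_u.T @ grad_u)

# Homogeneous uniaxial stretch lambda along y: u_y = (lam - 1) * y, u_x = 0.
lam = 1.125
grad_u = np.array([[0.0, 0.0],
                   [0.0, lam - 1.0]])

eps = strain(grad_u, alpha=0)   # infinitesimal strain
E   = strain(grad_u, alpha=1)   # Green-Lagrange strain

print(eps[1, 1])   # lam - 1                      = 0.125
print(E[1, 1])     # lam - 1 + 0.5*(lam - 1)**2   = (lam**2 - 1)/2 ~ 0.133
```

The quadratic term always increases the tensile strain for the same imposed stretch, which is the strain-enhancing effect sketched in Fig. 1a of the main text.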
A complete derivation of Eq. 6 from Eq. 5 can be found in [34]. The stress and strain at $t+\Delta t$ is expressed as a sum of stress and strain at a previous time $t$ and their incremental change at $\Delta t$, i.e. $\displaystyle\prescript{t+\Delta t}{}{\sigma_{ij}}$ $\displaystyle=\prescript{t}{}{\sigma_{ij}}+\Delta\sigma_{ij}$ (8) $\displaystyle\prescript{t+\Delta t}{}{E_{ij}}$ $\displaystyle=\prescript{t}{}{E_{ij}}+\Delta E_{ij};~{}\Delta E_{ij}=\Delta\varepsilon_{ij}+\Delta\eta_{ij}$ (9) where $\Delta\varepsilon_{ij}$, $\Delta\eta_{ij}$ is the linear and non-linear incremental strains referred to the configuration at $t=0$ (for details, see [34]). The stress-strain relationship for a linear-elastic material is given by Eq. 2. From Eq. 9, we can say that $\delta\prescript{t+\Delta t}{}{E_{ij}}=\delta\Delta E_{ij}$ and consequently, linearizing $\delta\Delta E_{ij}$, i.e. assuming $\delta\Delta E_{ij}=\delta\Delta\varepsilon_{ij}$, the equation of motion is given as $\int_{\prescript{0}{}{\Omega}}\delta\Delta\varepsilon_{ij}\prescript{t+\Delta t}{}{\sigma_{ij}~{}\mathrm{d}\Omega}~{}+\int_{\prescript{0}{}{\Omega}}\delta\prescript{t+\Delta t}{}{u_{i}}\prescript{0}{}{\rho}\prescript{t+\Delta t}{}{\ddot{u}_{i}}~{}\mathrm{d}\Omega=0~{}.$ We employ an explicit time integration (central difference method) along with a predictor-corrector scheme to solve the above equation of motion. The predictor stage is given as : $\displaystyle\prescript{t+\Delta t}{}{u_{i}}$ $\displaystyle=\prescript{t}{}{u_{i}}+\Delta t\prescript{t}{}{\dot{u}_{i}}+\dfrac{\Delta t^{2}}{2}\prescript{t}{}{\ddot{u}_{i}}$ $\displaystyle\prescript{t+\Delta t}{}{\dot{u}_{i}}$ $\displaystyle=\prescript{t}{}{\dot{u}_{i}}+\Delta t\prescript{t}{}{\ddot{u}_{i}}$ $\displaystyle\prescript{t+\Delta t}{}{\ddot{u}_{i}}$ $\displaystyle=\prescript{t}{}{\ddot{u}_{i}}$ Solving the equation of motion for incremental acceleration $\Delta\ddot{u}_{i}$: $\begin{split}\int_{\prescript{0}{}{\Omega}}\delta\Delta u_{i}\prescript{0}{}{\rho}\Delta\ddot{u}_{i}~{}\mathrm{d}\Omega=-\int_{\prescript{0}{}{\Omega}}\delta\Delta\varepsilon_{ij}\prescript{t+\Delta t}{}{\sigma_{ij}~{}\mathrm{d}\Omega}\\\ -\int_{\prescript{0}{}{\Omega}}\delta\Delta u_{i}\prescript{0}{}{\rho}\prescript{t}{}{\ddot{u}_{i}}~{}\mathrm{d}\Omega\end{split}$ (10) The corrector stage is given as : $\displaystyle\prescript{t+\Delta t}{}{\ddot{u}_{i}}$ $\displaystyle=\prescript{t}{}{\ddot{u}_{i}}+\Delta\ddot{u}_{i}$ $\displaystyle\prescript{t+\Delta t}{}{\dot{u}_{i}}$ $\displaystyle=\prescript{t+\Delta t}{}{\dot{u}_{i}}+\Delta t\Delta\ddot{u}_{i}$ $\displaystyle\prescript{t+\Delta t}{}{u_{i}}$ $\displaystyle=\prescript{t+\Delta t}{}{u_{i}}$ We employ the Finite Element method to solve the above equations. To ensure numerical stability, we chose the incremental time step $\Delta t$ based on the Courant–Friedrichs–Lewy condition: $\Delta t=0.05\times h_{\mathrm{min}}\sqrt{\prescript{0}{}{\rho}(1-\nu^{2})/2\mu(1+\nu)}$, where $h_{\mathrm{min}}$ is the minimum element size, $\mu$ is the shear modulus of the material, and $\nu$ is its Poisson’s ratio. ## Appendix C Implementation of cohesive elements using discontinuous Galerkin method We adopt a discontinuous Galerkin finite element method for cohesive zone modelling [35, 36]. We apply Nitsche’s method to tie the meshes together, i.e. to weakly enforce the continuity of displacements across the internal elemental interfaces [37, 36]. To this end, additional terms are added to the second term in Eq. 10. 
For simplicity, we have dropped the superscript $t+\Delta t$ and subscript $0$. The replaced expression is given as $\begin{split}\int_{\Omega}\delta\varepsilon_{ij}\sigma_{ij}\mathrm{d}\Omega&~{}-\int_{\Gamma}\left([\\![v]\\!]_{i}n^{+}_{j}\right)\langle\sigma_{ij}\rangle\mathrm{d}\Gamma~{}-\\\ &\int_{\Gamma}\left([\\![u]\\!]_{i}n^{+}_{j}\right)\langle\sigma_{ij}(v)\rangle\mathrm{d}\Gamma+\vartheta\int_{\Gamma}[\\![v]\\!]_{i}[\\![u]\\!]_{i}\mathrm{d}\Gamma\end{split}$ (11) where $[\\![x]\\!]=x^{+}-x^{-}$ represents the jump across the interface ($\Gamma^{+},\Gamma^{-}$), $\langle x\rangle=\frac{1}{2}(x^{+}+x^{-})$ represents the average across an interface and $n_{j}^{+}$ is the outward normal from the interface $\Gamma^{+}$. The second term in the above equation makes the formulation consistent. The third term is added to make the formulation symmetric, which improves convergence, and the fourth term enforces the displacement continuity. The stabilization parameter $\vartheta$ that ensures that the formulation remains positive-definite. For a linear elastic material, $\vartheta$ depends on the material parameters and the element size $h$, i.e. $\vartheta=\theta\mu(1+2\nu/(1-2\nu))/h$ where $\mu$ is shear modulus, $\nu$ is Poisson’s ratio and $\theta$ an arbitrarily chosen positive number [38]. Equation 11 is valid only prior to fracture [39]. Here, since we simulate crack propagation along a weak plane represented as $\Gamma_{f}$, the above expression is further modified to allow a jump in displacements along $\Gamma_{f}$. For fracture along an internal interface $\Gamma_{f}$, the modified expression is given as $\begin{split}\int_{\Omega}\delta\varepsilon_{ij}\sigma_{ij}\mathrm{d}\Omega&~{}-\int_{\Gamma/\Gamma_{f}}\left([\\![v]\\!]_{i}n^{+}_{j}\right)\langle\sigma_{ij}\rangle\mathrm{d}\Gamma~{}-\\\ \int_{\Gamma/\Gamma_{f}}&\left([\\![u]\\!]_{i}n^{+}_{j}\right)\langle\sigma_{ij}(v)\rangle\mathrm{d}\Gamma~{}+\vartheta\int_{\Gamma/\Gamma_{f}}[\\![v]\\!]_{i}[\\![u]\\!]_{i}\mathrm{d}\Gamma~{}+\\\ &\beta\left\\{\int_{\Gamma_{f}}T_{i}([\\![u]\\!])[\\![v]\\!]_{i}d\Gamma\right\\}~{}+\\\ &(1-\beta)\Bigg{\\{}-\int_{\Gamma_{f}}([\\![v]\\!]_{i}n^{+}_{j})\langle\sigma_{ij}\rangle\mathrm{d}\Gamma~{}-\\\ &\int_{\Gamma_{f}}([\\![u]\\!]_{i}n^{+}_{j})\langle\sigma_{ij}(v)\rangle\mathrm{d}\Gamma~{}+\vartheta\int_{\Gamma_{f}}[\\![v]\\!]_{i}[\\![u]\\!]_{i}\mathrm{d}\Gamma\Bigg{\\}}\end{split}$ (12) where $\beta=1$ if the average traction $\langle\sigma_{ij}\rangle n_{j}$ along the interface $\Gamma_{f}$ is greater than the critical stress, i.e. $\langle\sigma_{ij}\rangle n_{j}\geq\sigma_{c}$, otherwise $\beta=0$. The displacement jump $[\\![u]\\!]_{i}$ ($[\\![u]\\!]$ represents the crack tip opening displacement $\delta$ in Fig. 1) along $\Gamma_{f}$. The traction- separation relation $T_{i}([\\![u]\\!])$ (note that $T_{i}([\\![u]\\!])$ represents $\sigma_{yy}(y=0)$ in Fig. 
1) is a linear cohesive law [40], characterized by a material’s critical strength $\sigma_{c}$ and fracture energy $\Gamma$ and is described as $\displaystyle T_{i}([\\![u]\\!])$ $\displaystyle=\dfrac{\sigma_{c}}{g(t)}\left(1-\dfrac{g(t)}{2G_{c}}\sigma_{c}\right)[\\![u]\\!]_{i}~{}\mathrm{for~{}}g(t)=g_{\mathrm{max}}$ (13) $\displaystyle T_{i}([\\![u]\\!])$ $\displaystyle=\dfrac{\sigma_{c}}{g_{\mathrm{max}}}\left(1-\dfrac{g_{\mathrm{max}}}{2G_{c}}\sigma_{c}\right)[\\![u]\\!]_{i}~{}\mathrm{for~{}}g(t)<g_{\mathrm{max}}$ (14) where $g(t)=\sqrt{[\\![u_{n}(t)]\\!]^{2}+\beta[\\![u_{t}(t)]\\!]^{2}}$ and $g_{\mathrm{max}}=\underset{t^{\prime}\in[0,t]}{\mathrm{max}}~{}g(t^{\prime})$. Thus, replacing the second term in Eq. 10 with the expression Eq. 12 gives the final equation of motion. ## Appendix D LEFM crack-tip equation of motion Linear Elastic Fracture Mechanics (LEFM) describes the growth of a crack by an energy balance $\Gamma=G(a,c_{\mathrm{f}},\sigma_{\infty})~{},$ (15) where $\Gamma$ is the fracture energy, assumed to be a constant material property, and $G$ is the energy release rate that depends on crack (half-)length $a$, crack speed $c_{\mathrm{f}}$, and remote load $\sigma_{\infty}$. Under assumptions of time-invariant loading, the dynamic energy release rate $G$ can be related to its static equivalent $G_{0}$ via $G=g(c_{\mathrm{f}})\,G_{0}(a,\sigma_{\infty})~{},$ (16) where $g(c_{\mathrm{f}})$ is a known universal function [4], and can be approximated by $g(c_{\mathrm{f}})\approx 1-c_{\mathrm{f}}/c_{\mathrm{R}}$. Using Eqs. (15) and (16), one can formulate the crack-tip equation of motion as $c_{\mathrm{f}}/c_{\mathrm{R}}\approx 1-\Gamma/G_{0}~{}.$ (17) In a uniformly loaded system, where plane-strain Griffith’s nucleation length is $L_{\mathrm{G}}=2\mu\Gamma/\pi(1-\nu)\sigma_{\infty}^{2}$, the static energy release rate is given by $G_{0}=\pi(1-\nu)\sigma_{\infty}^{2}a/2\mu$. Therefore, the specific crack-tip equation of motion for a uniformly loaded system can be written as: $c_{\mathrm{f}}/c_{\mathrm{R}}\approx 1-\frac{L_{\mathrm{G}}}{a}~{}.$ (18) ## Appendix E Effect of Fracture Energy The fracture energy value does not affect the crack dynamics (normalized by Griffith’s length) both in linear materials, as expected from LEFM, and GNL materials (see Fig. 5). Figure 5: Crack-tip dynamics for two different values of fracture energy $\Gamma=15~{}\mathrm{J/m^{2}}$ (indicated by hollow square and circle markers) and $\Gamma=8~{}\mathrm{J/m^{2}}$ (indicated by cross markers). The crack dynamics is shown in green and purple shades for $\alpha=0$ and $\alpha=1$, respectively. All simulations were conducted at stretch value $\lambda=1.125$. The red solid line is the LEFM crack-tip equation of motion. ## Appendix F The near-tip Poynting vector field The instantaneous rate of energy flow towards the crack tip through an arbitrary contour $\Gamma$ is given by $F(\Gamma)=\int_{\Gamma}\underbrace{\Big{[}S_{ji}\dfrac{\partial u_{i}}{\partial t}+(U+K)c_{\mathrm{f}}\delta_{xj}\Big{]}}_{P_{j}}\mathrm{d}\Gamma~{},$ (19) where $S_{ji}$ is the first Piola-Kirchhoff stress tensor, $u_{i}$ is the displacement, $U$ is the strain energy density, $K$ is the kinetic energy density, $c_{\mathrm{f}}$ is the crack speed, and $\delta_{xj}$ is the Kronecker’s delta operator, which is 1 only along the crack propagation direction $x$. In Eq. 19, the term inside the integration, denoted as dynamic Poynting vector $P_{i}$, represents the direction of the energy flow near the vicinity of the crack tip [4, 20]. 
The magnitude of the Poynting vector $\|P\|=\sqrt{P_{i}P_{i}}$ is a measure of the local energy flow. The quantities in the Poynting vector expression are computed with respect to the reference configuration [4].
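As an illustration of how Eq. 19 is evaluated in post-processing, a minimal Python sketch (ours; the field arrays are hypothetical solver outputs, with the propagation direction $x$ taken as the first component) is:

```python
import numpy as np

def poynting(S, v, U, K, c_f):
    """Dynamic Poynting vector P_j = S_ji * v_i + (U + K) * c_f * delta_xj (Eq. 19).

    S : (..., 2, 2) first Piola-Kirchhoff stress field
    v : (..., 2)    material velocity du/dt
    U, K : (...)    strain and kinetic energy densities
    c_f : float     instantaneous crack speed
    """
    P = np.einsum('...ji,...i->...j', S, v)
    P[..., 0] += (U + K) * c_f   # the delta_xj term acts only along the propagation direction x
    return P

def poynting_magnitude(P):
    # ||P|| = sqrt(P_i P_i), the local measure of energy flow used in Fig. 4.
    return np.sqrt(np.einsum('...i,...i->...', P, P))
```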
Verified Compilation of Quantum Oracles

Liyi Li, Finn Voichick, Kesha Hietala, Yuxiang Peng, Xiaodi Wu, and Michael Hicks (University of Maryland)

Quantum algorithms often apply classical operations, such as arithmetic or predicate checks, over a quantum superposition of classical data; these so-called oracles are often the largest components of a quantum program. To ease the construction of efficient, correct oracle functions, this paper presents VQO, a high-assurance framework implemented with the Coq proof assistant. The core of VQO is OQASM, the oracle quantum assembly language. OQASM operations move qubits between two different bases via the quantum Fourier transform, thus admitting important optimizations, but without inducing entanglement and the exponential blowup that comes with it. OQASM's design enabled us to prove correct VQO's compilers—from a simple imperative language called OQIMP to OQASM, and from OQASM to SQIR, a general-purpose quantum assembly language—and allowed us to efficiently test properties of OQASM programs using the QuickChick property-based testing framework. We have used VQO to implement a variety of arithmetic and geometric operators that are building blocks for important oracles, including those used in Shor's and Grover's algorithms. We found that VQO's QFT-based arithmetic oracles require fewer qubits, sometimes substantially fewer, than those constructed using “classical” gates; VQO's versions of the latter were nevertheless on par with or better than (in terms of both qubit and gate counts) oracles produced by Quipper, a state-of-the-art but unverified quantum programming platform.

§ INTRODUCTION

Quantum computers offer unique capabilities that can be used to program substantially faster algorithms compared to those written for classical computers. For example, Grover's search algorithm <cit.> can query unstructured data in sub-linear time (compared to linear time on a classical computer), and Shor's algorithm <cit.> can factorize a number in polynomial time (compared to the sub-exponential time for the best known classical algorithm). An important source of speedups in these algorithms is the quantum computer's ability to apply an oracle function coherently, i.e., to a superposition of classical queries, thus carrying out in one step a function that would potentially take many steps on a classical computer. For Grover's, the oracle is a predicate function that determines when the searched-for data is found. For Shor's, it is a classical modular exponentiation function; the algorithm finds the period of this function where the modulus is the number being factored. While the classical oracle function is perhaps the least interesting part of a quantum algorithm, it contributes a significant fraction of the final program's compiled quantum circuit. For example, <cit.> estimated that Shor's modular exponentiation function constitutes 90% of the final code. In our own experiments with Grover's, our oracle makes up over 99% of the total gate count (the oracle has 3.3 million gates). Because quantum computers will be resource-limited for the foreseeable future <cit.>, programmers and programming tools will be expected to heavily optimize their quantum circuits, especially the oracles. Such optimizations, including ones that involve approximation, risk bugs that can be hard to detect.
This is because quantum programs are inherently difficult to simulate, test, and debug—qubits on real quantum computers are noisy and unreliable; observing a quantum program state mid-execution may change that state; and simulating a general quantum program on a classical computer is intractable because quantum states can require resources exponential in the number of qubits.

[Figure: The VQO high-assurance compiler stack. Checkbox means verified; gear means property-tested.]

In this paper, we report on a framework we have been developing called VQO, the Verified Quantum Oracle framework, whose goal is to help programmers write quantum oracles that are correct and efficient. VQO is part of QVM, for Quantum Verified Machine, which has several elements, as shown in <Ref>.

* Using VQO, an oracle can be specified in a simple, high-level programming language we call OQIMP, which has standard imperative features and can express arbitrary classical programs. It distinguishes quantum variables from classical parameters, allowing the latter to be partially evaluated <cit.>, thereby saving qubits during compilation.

* The resulting OQIMP program is compiled to OQASM (pronounced “O-chasm”), the oracle quantum assembly language. OQASM was designed to be efficiently simulatable while nevertheless admitting important optimizations; it is our core technical contribution and we say more about it below. The generated code links against implementations of standard operators (addition, multiplication, sine, cosine, etc.) also written in OQASM.

* The oracle is then translated to SQIR, the Simple Quantum Intermediate Representation, which is a circuit language embedded in the Coq proof assistant. SQIR has been used to prove correct both quantum algorithms <cit.> and optimizations <cit.>, the latter as part of VOQC, the Verified Optimizer for Quantum Circuits. After linking the oracle with the quantum program that uses it, the complete program can be optimized and extracted to OpenQASM 2.0 <cit.> to run on a real quantum machine.

Both VQO's compilation from OQIMP to OQASM and translation from OQASM to SQIR have been proved correct in Coq. VQO helps programmers ensure their oracles are correct by supporting both testing and verification, and ensures they are efficient by supporting several kinds of optimization. Both aspects are captured in the design of OQASM, a quantum assembly language specifically designed for oracles. Because oracles are classical functions, a reasonable approach would have been to design OQASM to be a circuit language comprised of “classical” gates; e.g., prior work has targeted the gates X (“not”), CX (“controlled not”), and CCX (“controlled controlled not,” aka Toffoli). Doing so would simplify proofs of correctness and support efficient testing by simulation because an oracle's behavior could be completely characterized by its behavior on computational basis states (essentially, classical bitstrings). ReverC <cit.> and ReQWIRE <cit.> take this approach. However, doing so cannot support optimized oracle implementations that use fundamentally quantum functionality, e.g., as in quantum Fourier transform (QFT)-based arithmetic circuits <cit.>. These circuits employ quantum-native operations (e.g., controlled-phase operations) in the QFT basis. Our key insight is that expressing such optimizations does not require expressing all quantum programs, as is possible in a language like SQIR. Instead, OQASM's type system restricts programs to those that admit important optimizations while keeping simulation tractable.
OQASM also supports virtual qubits; its type system ensures that position shifting operations, commonly used when compiling arithmetic functions, require no extra gates when compiled to SQIR, so there is no added run-time cost. Leveraging OQASM's efficient simulatability, we implemented a property-based random testing (PBT) framework for OQASM programs in QuickChick <cit.>, a variant of Haskell's QuickCheck <cit.> for Coq programs. This framework affords two benefits. First, we can test that an operator or program is correct according to its specification. Formal proof in Coq can be labor-intensive, so PBT provides an easy-to-use confidence boost, especially prior to attempting formal proof. Second, we can use testing to assess the effect of approximations when developing oracles. For example, we might like to use approximate QFT, rather than full-precision QFT, in an arithmetic oracle in order to save gates. PBT can be used to test the effect of this approximation within the overall oracle by measuring the distance between the fully-precise result and the approximate one. To assess VQO's effectiveness we have used it to build several efficient oracles and oracle components, and have either tested or proved their correctness.

* Using OQIMP we implemented sine, cosine, and other geometric functions used in Hamiltonian simulation <cit.>, leveraging the arithmetic circuits described below. Compared to a sine function implemented in Quipper <cit.>, a state-of-the-art quantum programming framework, VQO's uses far fewer qubits thanks to OQIMP's partial evaluation.

* We have implemented a variety of arithmetic operators in OQASM, including QFT-, approximate QFT-, and Toffoli-based multiplication, addition, modular multiplication, and modular division. Overall, circuit sizes are competitive with, and oftentimes better than, those produced by Quipper. Qubit counts for the final QFT-based circuits are always lower, sometimes significantly so (up to 53%), compared to the Toffoli-based circuits.

* We have proved correct both QFT- and Toffoli-based adders, and QFT- and Toffoli-based modular multipliers (which are used in Shor's algorithm). These constitute the first proved-correct implementations of these functions, as far as we are aware.

* We used PBT to test the correctness of various operators. Running 10,000 generated tests on 8- or 16-bit versions of the operators takes just a few seconds. Testing 60-bit versions of the adders and multipliers takes just a few minutes, whereas running a general quantum simulator on the final circuits fails. We found several interesting bugs in the process of doing PBT and proof, including in the original algorithmic description of the QFT-based modular multiplier <cit.>.

* We used PBT to analyze the precision difference between QFT and approximate QFT (AQFT) circuits, and the suitability of AQFT in different algorithms. We found that the AQFT adder (which uses AQFT in place of QFT) is not an accurate implementation of addition, but that it can be used as a subcomponent of division/modulo with no loss of precision, reducing gate count by 4.5–79.3%.

* Finally, to put all of the pieces together, we implemented the ChaCha20 stream cipher <cit.> in OQIMP and used it as an oracle for Grover's search, previously implemented and proved correct in SQIR <cit.>. We used PBT to test the oracle's correctness. Combining its tested property with Grover's correctness property, we demonstrate that Grover's is able to invert the ChaCha20 function and find collisions.

The rest of the paper is organized as follows.
We begin with some background on quantum computing (<Ref>) and then present OQASM's syntax, typing, and semantics (<Ref>). Then we discuss VQO's implementation: OQASM's translator and property-based tester, and OQIMP (<Ref>). Finally, we present our results (<Ref>), compare against related work (<Ref>), and conclude. All code presented in this paper is freely available at <https://github.com/inQWIRE/VQO>.

§ BACKGROUND

We begin with some background on quantum computing and quantum algorithms.

Quantum States A quantum state consists of one or more quantum bits (qubits). A qubit can be expressed as a two dimensional vector $\begin{psmallmatrix} \alpha \\ \beta \end{psmallmatrix}$ where $\alpha,\beta$ are complex numbers such that $|\alpha|^2 + |\beta|^2 = 1$. The $\alpha$ and $\beta$ are called amplitudes. We frequently write the qubit vector as $\alpha\ket{0} + \beta\ket{1}$ where $\ket{0} = \begin{psmallmatrix} 1 \\ 0 \end{psmallmatrix}$ and $\ket{1} = \begin{psmallmatrix} 0 \\ 1 \end{psmallmatrix}$ are computational basis states. When both $\alpha$ and $\beta$ are non-zero, we can think of the qubit as being “both 0 and 1 at once,” a.k.a. a superposition. For example, $\frac{1}{\sqrt{2}}(\ket{0} + \ket{1})$ is an equal superposition of $\ket{0}$ and $\ket{1}$. We can join multiple qubits together to form a larger quantum state with the tensor product ($\otimes$) from linear algebra. For example, the two-qubit state $\ket{0} \otimes \ket{1}$ (also written as $\ket{01}$) corresponds to vector $[~0~1~0~0~]^T$. Sometimes a multi-qubit state cannot be expressed as the tensor of individual states; such states are called entangled. One example is the state $\frac{1}{\sqrt{2}}(\ket{00} + \ket{11})$, known as a Bell pair. Entangled states lead to exponential blowup: A general $n$-qubit state must be described with a $2^n$-length vector, rather than $n$ vectors of length two. The latter is possible for unentangled states like $\ket{0} \otimes \ket{1}$; OQASM's type system guarantees that qubits remain unentangled.

[Figure: Example quantum circuits: QFT over 4 qubits (left) and approximate QFT with 3 qubit precision (right). $R_m$ is a $z$-axis rotation by $2\pi / 2^m$.]

Quantum Circuits Quantum programs are commonly expressed as circuits, like those shown in <Ref>. In these circuits, each horizontal wire represents a qubit, and boxes on these wires indicate quantum operations, or gates. Gates may be controlled by a particular qubit, as indicated by a filled circle and connecting vertical line. The circuits in <Ref> use four qubits and apply 10 (left) or 7 (right) gates: four Hadamard ($H$) gates and several controlled $z$-axis rotation (“phase”) gates. When programming, circuits are often built by meta-programs embedded in a host language, e.g., Python (for Qiskit <cit.>, Cirq <cit.>, PyQuil <cit.>, and others), Haskell (for Quipper <cit.>), or Coq (for SQIR and our work).

Quantum Fourier Transform The quantum Fourier transform (QFT) is the quantum analogue of the discrete Fourier transform. It is used in many quantum algorithms, including the phase estimation portion of Shor's factoring algorithm <cit.>. The standard implementation of a QFT circuit (for 4 qubits) is shown on the left of <Ref>; an approximate QFT (AQFT) circuit can be constructed by removing select controlled phase gates <cit.>. This produces a cheaper circuit that implements an operation mathematically similar to the QFT.
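For reference, the exact QFT unitary and the gate count of the standard circuit construction can be reproduced with a few lines of NumPy; this is a textbook-level sketch of our own, unrelated to the VQO code base.

```python
import numpy as np

def qft_matrix(n):
    """Unitary of the exact QFT on n qubits: entries omega^{jk}/sqrt(N), omega = exp(2*pi*i/N)."""
    N = 2 ** n
    rows, cols = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.exp(2j * np.pi * rows * cols / N) / np.sqrt(N)

def qft_gate_count(n):
    # The standard circuit uses n Hadamards and n*(n-1)/2 controlled rotations R_m;
    # an AQFT circuit is obtained by dropping some of the controlled rotations.
    return n + n * (n - 1) // 2

Q = qft_matrix(4)
print(np.allclose(Q.conj().T @ Q, np.eye(16)))  # True: the transform is unitary
print(qft_gate_count(4))                        # 10, matching the 4-qubit circuit described in the text
```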
The AQFT circuit we use in (for 4 qubits) is shown on the right of <Ref>. When it is appropriate to use AQFT in place of QFT is an open research problem, and one that is partially addressed by our work on , which allows efficient testing of the effect of AQFT inside of oracles. Computational and QFT Bases The computational basis is just one possible basis for the underlying vector space. Another basis is the Hadamard basis, written as a tensor product of $\{\ket{+}, \ket{-}\}$, obtained by applying a Hadamard transform to elements of the computational basis, where $\ket{+}=\frac{1}{\sqrt{2}}(\ket{0}+\ket{1})$ and $\ket{-}=\frac{1}{\sqrt{2}}(\ket{0}-\ket{1})$. A third useful basis is the Fourier (or QFT) basis, obtained by applying a quantum Fourier transform (QFT) to elements of the computational basis. Applying a gate to a state evolves the state. The semantics of doing so is expressed by multiplying the state vector by the gate's corresponding matrix representation; single-qubit gates are 2-by-2 matrices, and two-qubit gates are 4-by-4 matrices. A gate's matrix must be unitary, ensuring that it preserves the unitarity invariant of quantum states' amplitudes. An entire circuit can be expressed as a matrix by composing its constituent gates. Measurement A special, non-unitary measurement operation extracts classical information from a quantum state, typically when a computation completes. Measurement collapses the state to a basis states with a probability related to the state's amplitudes. For example, measuring $\frac{1}{\sqrt{2}}(\ket{0} + \ket{1})$ in the computational basis will collapse the state to $\ket{0}$ with probability $\frac{1}{2}$ and likewise for $\ket{1}$, returning classical value 0 or 1, respectively. In all the programs discussed in this paper, we leave the final measurement operation implicit. Quantum Algorithms and Oracles Quantum algorithms manipulate input information encoded in “oracles,” which are callable black box circuits. For example, Grover's algorithm for unstructured quantum search <cit.> is a general approach for searching a quantum “database,” which is encoded in an oracle for a function $f : \{0, 1\}^n \to \{0, 1\}$. Grover's finds an element $x \in \{0, 1\}^n$ such that $f(x) = 1$ using $O(2^{n/2})$ queries, a quadratic speedup over the best possible classical algorithm, which requires $\Omega(2^n)$ queries. An oracle can be constructed for an arbitrary function $f$ simply by constructing a reversible classical logic circuit implementing $f$ and then replacing classical logic gates with corresponding quantum gates, e.g., for “not,” for “xor,” and (aka Toffoli) for “and.” However, this approach does not always produce the most efficient circuits; for example, quantum circuits for arithmetic can be made more space-efficient using the quantum Fourier transform <cit.>. Transforming an irreversible computation into a quantum circuit often requires introducing ancillary qubits, or ancillae, to store intermediate information <cit.>. Oracle algorithms typically assume that the oracle circuit is reversible, so any data in ancillae must be uncomputed by inverting the circuit that produced it. Failing to uncompute this information leaves it entangled with the rest of the state, potentially leading to incorrect program behavior. To make this uncomputation more efficient and less error-prone, recent programming languages such as Silq <cit.> have developed notions of implicit uncomputation. 
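To make the “classical gates” construction and the need for uncomputation concrete, here is a small Python sketch (purely illustrative, with a hypothetical bit layout) that simulates X, CNOT, and Toffoli on classical bitstrings, computes an AND into an ancilla, and then uncomputes it:

```python
def x(bits, t):
    bits[t] ^= 1                      # "not"

def cnot(bits, c, t):
    bits[t] ^= bits[c]                # "xor": t := t XOR c

def toffoli(bits, c1, c2, t):
    bits[t] ^= bits[c1] & bits[c2]    # "and": t := t XOR (c1 AND c2)

# Oracle for f(a, b) = a AND b, written into an ancilla initialized to 0.
# Bit layout (hypothetical): [a, b, ancilla].
def and_oracle(bits):
    toffoli(bits, 0, 1, 2)

def uncompute_and_oracle(bits):
    toffoli(bits, 0, 1, 2)            # Toffoli is its own inverse

state = [1, 1, 0]
and_oracle(state)
print(state)                          # [1, 1, 1]
uncompute_and_oracle(state)
print(state)                          # back to [1, 1, 0]: the ancilla is clean again
```

Because every gate acts classically here, the oracle's behavior is fully characterized by its action on bitstrings, which is exactly what makes such circuits easy to test; the QFT-based circuits discussed below give up this property in exchange for efficiency.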
We have similar motivations in developing : we aim to make it easier for programmers to write efficient quantum oracles, and to assure, through verification and randomized testing, that they are correct. § : AN ASSEMBLY LANGUAGE FOR QUANTUM ORACLES We designed to be able to express efficient quantum oracles that can be easily tested and, if desired, proved operations leverage both the standard computational basis and an alternative basis connected by the quantum Fourier transform (QFT). 's type system tracks the bases of variables in programs, forbidding operations that would introduce entanglement. states are therefore efficiently represented, so programs can be effectively tested and are simpler to verify and analyze. In addition, uses virtual qubits to support position shifting operations, which support arithmetic operations without introducing extra gates during translation. All of these features are novel to quantum assembly This section presents states and the language's syntax, semantics, typing, and soundness results. As a running example, we use the QFT adder <cit.> shown in <Ref>. The Coq function rz_adder generates an program that adds two natural numbers a and b, each of length n qubits. @C=0.5em @R=0.75em |a_n-1⟩ 5 |a_n-1⟩ |a_n-2⟩ 4 |a_n-2⟩ ⋮ ⋮ |a_0⟩ 1 |a_0⟩ |b_n-1⟩ 5 3 5 5^-1 |a_n-1 + b_n-1⟩ |b_n-2⟩ ^-1 |a_n-2 + b_n-2⟩ ⋮ ⋮ |b_0⟩ ^-1 |a_0 + b_0⟩ Quantum circuit Fixpoint rz_adder' (a b:var) (n:nat) := match n with | 0 => ID (a,0) | S m => CU (a,m) (SR m b); rz_adder' a b m Definition rz_adder (a b:var) (n:nat) := Rev a ; Rev b ; $\texttt{QFT}$ b ; rz_adder' a b n; $\texttt{QFT}^{-1}$ b; Rev b ; Rev a. metaprogram (in Coq) Example program: QFT-based adder §.§ States \[\hspace*{-0.5em} \begin{array}{l>{$} p{1.2cm} <{$} c l} \text{Bit} & b & ::= & 0 \mid 1 \\ \text{Natural number} & n & \in & \mathbb{N} \\ \text{Real} & r & \in & \mathbb{R}\\ \text{Phase} & \alpha(r) & ::= & e^{2\pi i r} \\ \text{Basis} & \tau & ::= & \texttt{Nor} \mid \texttt{Phi}\;n \\ \text{Unphased qubit} & \overline{q} & ::= & \ket{b} ~~\mid~~ \qket{r} \\ \text{Qubit} & q & ::= &\alpha(r) \overline{q}\\ \text{State (length $d$)} & \varphi & ::= & q_1 \otimes q_2 \otimes \cdots \otimes q_d \end{array} \] \caption{\oqasm state syntax} \label{fig:vqir-state} \end{figure} An \oqasm program state is represented according to the grammar in \Cref{fig:vqir-state}. A state $\varphi$ of $d$ qubits is a length-$d$ tuple of qubit values $q$; the state models the tensor product of those values. This means that the size of $\varphi$ is $O(d)$ where $d$ is the number of qubits. A $d$-qubit state in a language like \sqir is represented as a length $2^d$ vector of complex numbers, which is $O(2^d)$ in the number of qubits. Our linear state representation is possible because applying any well-typed \oqasm program on any well-formed \oqasm state never causes qubits to be A qubit value $q$ has one of two forms $\overline{q}$, scaled by a global phase $\alpha(r)$. The two forms depend on the \emph{basis} $\tau$ that the qubit is in---it could be either \texttt{Nor} or \texttt{Phi}. A \texttt{Nor} qubit has form $\ket{b}$ (where $b \in \{ 0, 1 \}$), which is a computational basis value. A \texttt{Phi} qubit has form $\qket{r} = \frac{1}{\sqrt{2}}(\ket{0}+\alpha(r)\ket{1})$, which is a value of the (A)QFT basis. The number $n$ in \texttt{Phi}$\;n$ indicates the precision of the state $\varphi$. 
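The linear, per-qubit representation can be mimicked in a few lines of Python; the sketch below is our own illustration (not VQO code) of a state stored as a tuple of \texttt{Nor}/\texttt{Phi} qubit values together with a global phase, so that a $d$-qubit state costs $O(d)$ rather than $O(2^d)$:

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Nor:
    b: int          # computational-basis bit, 0 or 1

@dataclass
class Phi:
    r: float        # |Phi(r)> = (|0> + e^{2*pi*i*r}|1>) / sqrt(2)

@dataclass
class Qubit:
    phase: float    # global phase alpha(r) = e^{2*pi*i*r}, stored as r
    val: Union[Nor, Phi]

# A 3-qubit state is just a length-3 tuple of qubit values: O(d) storage.
state = (
    Qubit(phase=0.0, val=Nor(1)),
    Qubit(phase=0.0, val=Nor(0)),
    Qubit(phase=0.25, val=Phi(0.5)),
)

def apply_X(state, k):
    """X flips a Nor qubit; the type system only permits it on Nor-basis qubits."""
    q = state[k]
    assert isinstance(q.val, Nor)
    new = Qubit(q.phase, Nor(1 - q.val.b))
    return state[:k] + (new,) + state[k + 1:]

print(apply_X(state, 0))
```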
As shown by~\citet{qft-adder}, arithmetic on the computational basis can sometimes be more efficiently carried out on the QFT basis, which leads to the use of quantum operations (like QFT) when implementing circuits with classical input/output behavior. \subsection{\oqasm Syntax, Typing, and Semantics}\label{sec:oqasm-syn} \begin{figure}[t] \begin{minipage}[t]{0.5\textwidth} {\small \centering $ \hspace*{-0.8em} \begin{array}{llcl} \text{Position} & p & ::= & (x,n) \qquad \text{Nat. Num}~n \qquad \text{Variable}~x\\ \text{Instruction} & \instr & ::= & \iskip{p} \mid \inot{p} \mid \iseq{\instr}{\instr}\\ & & \mid & \isr[\lbrack -1 \rbrack]{n}{x} \mid \iqft[\lbrack -1 \rbrack]{n}{x} \mid \ictrl{p}{\instr} \\ & & \mid & \ilshift{x} \mid \irshift{x} \mid \irev{x} \end{array} \caption{\oqasm syntax. For an operator \texttt{OP}, $\texttt{OP}^{\lbrack -1 \rbrack}$ indicates that the operator has a built-in inverse available.} \label{fig:vqir} \end{minipage} \hfill \begin{minipage}[t]{0.45\textwidth} \centering \begin{tabular}{c@{$\quad=\quad$}c} \begin{minipage}{0.3\textwidth} \Small \Qcircuit @C=0.5em @R=0.5em { \lstick{} & \qw & \multigate{4}{\texttt{SR m}} & \qw & \qw \\ \lstick{} & \qw & \ghost{\texttt{SR m}} & \qw & \qw \\ \lstick{} & \vdots & & \vdots & \\ \lstick{} & & & & \\ \lstick{} & \qw & \ghost{\texttt{SR m}} & \qw & \qw \end{minipage} & \begin{minipage}{0.3\textwidth} \Small \Qcircuit @C=0.5em @R=0.5em { \lstick{} & \qw & \gate{\texttt{RZ (m+1)}} & \qw & \qw \\ \lstick{} & \qw & \gate{\texttt{RZ m}} & \qw & \qw \\ \lstick{} & & \vdots & & \\ \lstick{} & & & & \\ \lstick{} & & & & \\ \lstick{} & \qw & \gate{\texttt{RZ 1}} & \qw & \qw \end{minipage} \end{tabular} \caption{\texttt{SR} unfolds to a series of \texttt{RZ} instructions} \label{fig:sr-meaning} \end{minipage} \end{figure} \Cref{fig:vqir} presents \oqasm's syntax. An \oqasm program consists of a sequence of instructions $\instr$. Each instruction applies an operator to either a variable $x$, which represents a group of qubits, or a \emph{position} $p$, which identifies a particular offset into a variable $x$. The instructions in the first row correspond to simple single-qubit quantum gates---$\iskip{p}$ and $\inot{p}$---and instruction sequencing. The instructions in the next row apply to whole variables: $\iqft{n}{x}$ applies the AQFT to variable $x$ with $n$-bit precision and $\iqft[-1]{n}{x}$ applies its inverse. If $n$ is equal to the size of $x$, then the AQFT operation is exact. $\isr[\lbrack -1 \rbrack]{n}{x}$ applies a series of \texttt{RZ} gates (\Cref{fig:sr-meaning}). Operation $\ictrl{p}{\instr}$ applies instruction $\instr$ \emph{controlled} on qubit position $p$. All of the operations in this row---\texttt{SR}, \texttt{QFT}, and \texttt{CU}---will be translated to multiple \sqir gates. Function \coqe{rz_adder} in \Cref{fig:circuit-example}(b) uses many of these instructions; e.g., it uses \texttt{QFT} and \texttt{QFT}$^{-1}$ and applies \texttt{CU} to the $m$th position of variable \texttt{a} to control instruction \texttt{SR m b}. In the last row of \Cref{fig:vqir}, instructions $\ilshift{x}$, $\irshift{x}$, and $\irev{x}$ are \emph{position shifting operations}. Assuming that $x$ has $d$ qubits and $x_k$ represents the $k$-th qubit state in $x$, $\texttt{Lshift}\;x$ changes the $k$-th qubit state to $x_{(k + 1)\% d}$, $\texttt{Rshift}\;x$ changes it to $x_{(k + d - 1)\% d}$, and \texttt{Rev} changes it to $x_{d-1-k}$. In our implementation, shifting is \emph{virtual} not physical. 
The \oqasm translator maintains a logical map of variables/positions to concrete qubits and ensures that shifting operations are no-ops, introducing no extra gates. Other quantum operations could be added to \oqasm to allow reasoning about a larger class of quantum programs, while still guaranteeing a lack of entanglement. In \Cref{sec:extended-oqasm}, we show how \oqasm can be extended to include the Hadamard gate \texttt{H}, $z$-axis rotations \texttt{Rz}, and a new basis \texttt{Had} to reason directly about implementations of QFT and AQFT. However, this extension compromises the property of type reversibility (\Cref{thm:reversibility}, \Cref{sec:metatheory}), and we have not found it necessary in oracles we have developed.
\begin{figure}[t]
\begin{minipage}[t]{0.6\textwidth}
\begin{mathpar}
\inferrule[X]{\Omegaty(x)=\texttt{Nor} \\ n < \Omegasz(x)}{\Sigma;\Omega \vdash \inot{(x,n)}\triangleright \Omega}

\inferrule[SR]{\Omegaty(x)=\tphi{n} \\ m < n}{\Sigma;\Omega \vdash \texttt{SR}\;m\;x\triangleright \Omega}

\inferrule[QFT]{\Omegaty(x)=\texttt{Nor}\\n \le \Omegasz(x)}{\Sigma; \Omega \vdash \iqft{n}{x}\triangleright \Omega[x\mapsto \tphi{n}]}

\inferrule[RQFT]{\Omegaty(x)=\tphi{n}\\n \le \Omegasz(x)}{\Sigma; \Omega \vdash \iqft[-1]{n}{x}\triangleright \Omega[x\mapsto \texttt{Nor}]}

\inferrule[CU]{\Omegaty(x)=\texttt{Nor} \\ \texttt{fresh}~(x,n)~\instr \\\\ \Sigma; \Omega\vdash \instr\triangleright \Omega \\ \texttt{neutral}(\instr)}{\Sigma; \Omega \vdash \texttt{CU}\;(x,n)\;\instr \triangleright \Omega}

\inferrule[LSH]{\Omegaty(x)=\texttt{Nor}}{\Sigma; \Omega \vdash \texttt{Lshift}\;x\triangleright \Omega}

\inferrule[SEQ]{\Sigma; \Omega\vdash \instr_1\triangleright \Omega' \\ \Sigma; \Omega'\vdash \instr_2\triangleright \Omega''}{\Sigma; \Omega \vdash \instr_1\;;\;\instr_2\triangleright \Omega''}
\end{mathpar}
\caption{Select \oqasm typing rules}
\label{fig:exp-well-typed}
\end{minipage}
\hfill
\begin{minipage}[t]{0.35\textwidth}
\begin{center}\hspace*{-1em}
\begin{tikzpicture}[->,>=stealth',shorten >=1pt,auto,node distance=3.2cm, semithick]
\tikzstyle{every state}=[fill=black,draw=none,text=white]
\node[state] (A) {$\texttt{Nor}$};
\node[state] (C) [left of=A] {$\tphi{n}$};
\path (A) edge [loop above] node {$\Big\{\begin{array}{l}\texttt{ID},~\texttt{X},~\texttt{CU},~\texttt{Rev},\\ \texttt{Lshift},\texttt{Rshift}\end{array}\Big\}$} (A)
      edge node [above] {\{$\texttt{QFT}\;n$\}} (C);
\path (C) edge [loop above] node {$\{\texttt{ID},~\texttt{SR}^{\lbrack -1 \rbrack}\}$} (C)
      edge [bend right] node {$\{\texttt{QFT}^{-1}\;n\}$} (A);
\end{tikzpicture}
\end{center}
\caption{Type rules' state machine}
\label{fig:state-machine}
\end{minipage}
\end{figure}
\myparagraph{Typing}
\label{sec:vqir-typing}
In \oqasm, typing is with respect to a \emph{type environment} $\Omega$ and a \emph{size environment} $\Sigma$, which map \oqasm variables to their basis and size (number of qubits), respectively. The typing judgment is written $\Sigma; \Omega\vdash \instr \triangleright \Omega'$, which states that $\instr$ is well-typed under $\Omega$ and $\Sigma$, and transforms the variables' bases to be as in $\Omega'$ ($\Sigma$ is unchanged). Select type rules are given in \Cref{fig:exp-well-typed}; the rules not shown (for \texttt{ID}, \texttt{Rshift}, \texttt{Rev}, and \texttt{SR}$^{-1}$) are similar. The type system enforces three invariants.
First, it enforces that instructions are well-formed, meaning that gates are applied to valid qubit positions (the second premise in \rulelab{X}) and that any control qubit is distinct from the target(s) (the \texttt{fresh} premise in \rulelab{CU}). This latter property enforces the quantum \emph{no-cloning rule}. For example, we can apply the \texttt{CU} in \code{rz\_adder'} (\Cref{fig:circuit-example}) because position \code{a,m} is distinct from variable \code{b}.

Second, the type system enforces that instructions leave affected qubits in a proper basis (thereby avoiding entanglement). The rules implement the state machine shown in \Cref{fig:state-machine}. For example, $\texttt{QFT}\;n$ transforms a variable from \texttt{Nor} to $\tphi{n}$ (rule \rulelab{QFT}), while $\texttt{QFT}^{-1}\;n$ transforms it from $\tphi{n}$ back to \texttt{Nor} (rule \rulelab{RQFT}). Position shifting operations are disallowed on variables $x$ in the \texttt{Phi} basis because the qubits that make up $x$ are internally related (see \Cref{def:well-formed}) and cannot be rearranged. Indeed, applying a \texttt{Lshift} and then a $\texttt{QFT}^{-1}$ on $x$ in \texttt{Phi} would entangle $x$'s qubits.

Third, the type system enforces that the effect of position shifting operations can be statically tracked. The \texttt{neutral} condition of \rulelab{CU} requires that any shifting within $\instr$ is restored by the time $\instr$ completes. For example, $\sseq{\ictrl{p}{(\ilshift{x})}}{\inot{(x,0)}}$ is not well-typed, because knowing the final physical position of qubit $(x,0)$ would require statically knowing $p$. On the other hand, the program $\sseq{\ictrl{c}{(\sseq{\ilshift{x}}{\sseq{\inot{(x,0)}}{\irshift{x}}})}}{\inot{(x,0)}}$ is well-typed because the effect of the \texttt{Lshift} is ``undone'' by an \texttt{Rshift} inside the body of the \texttt{CU}.
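The basis transitions of \Cref{fig:state-machine} can be read as a small partial function; the following Coq fragment is a simplified sketch of that reading (the \texttt{basis} and \texttt{op} types here are our own stand-ins, covering only some of the instructions):
\begin{verbatim}
Inductive basis : Type := Nor | Phi (n : nat).
Inductive op : Type :=
  OpID | OpX | OpCU | OpSR | OpLshift | OpQFT (n : nat) | OpRQFT (n : nat).

(* step returns the basis after applying op, or None if the op is rejected. *)
Definition step (b : basis) (o : op) : option basis :=
  match b, o with
  | Nor,   OpQFT n  => Some (Phi n)                            (* Nor -> Phi n *)
  | Phi n, OpRQFT m => if Nat.eqb n m then Some Nor else None  (* Phi n -> Nor *)
  | Nor,   OpID     => Some Nor
  | Nor,   OpX      => Some Nor
  | Nor,   OpCU     => Some Nor
  | Nor,   OpLshift => Some Nor
  | Phi n, OpID     => Some (Phi n)
  | Phi n, OpSR     => Some (Phi n)
  | _,     _        => None    (* e.g. Lshift on a Phi variable is rejected *)
  end.
\end{verbatim}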
\myparagraph{Semantics}\label{sec:pqasm-dsem} \begin{figure}[t] \[ \begin{array}{lll} \llbracket \iskip{p} \rrbracket\varphi &= \varphi\\[0.2em] \llbracket \inot{(x, i)} \rrbracket\varphi &= \app{\uparrow\xsem(\downarrow\varphi(x,i))}{\varphi}{(x,i)} & \texttt{where }\xsem(\ket{0})=\ket{1} \qquad\, \xsem(\ket{1})=\ket{0} \\[0.5em] \llbracket \ictrl{(x,i)}{\instr} \rrbracket\varphi &= \csem(\downarrow\varphi(x,i),\instr,\varphi) \texttt{where } \csem({\ket{0}},{\instr},\varphi)=\varphi\quad\;\, \csem({\ket{1}},{\instr},\varphi)=\llbracket \instr \rrbracket\varphi \\[0.4em] \llbracket \isr{m}{x} \rrbracket\varphi & \multicolumn{2}{l}{= \app{\uparrow \qket{r_i+\frac{1}{2^{m-i+1}}}}{\varphi}{\forall i \le m.\;(x,i)} \qquad \texttt{when } \downarrow\varphi(x,i) = \qket{r_i}}\\[0.5em] \llbracket \isr[-1]{m}{x} \rrbracket\varphi&\multicolumn{2}{l}{= \app{\uparrow \qket{r_i-\frac{1}{2^{m-i+1}}}}{\varphi}{\forall i \le m.\;(x,i)} \qquad \texttt{when } \downarrow\varphi(x,i) = \qket{r_i}}\\[0.5em] \llbracket \iqft{n}{x} \rrbracket\varphi &= \app{\uparrow\qsem(\Sigma(x),\downarrow\varphi(x),n)}{\varphi}{x} & \texttt{where }\qsem(i,\ket{y},n)=\bigotimes_{k=0}^{i-1}(\qket{\frac{y}{2^{n-k}}}) \\[0.5em] \llbracket \iqft[-1]{n}{x} \rrbracket\varphi &= \app{\uparrow\qsem^{-1}(\Sigma(x),\downarrow\varphi(x),n)}{\varphi}{x} \\[0.5em] \llbracket \ilshift{x} \rrbracket\varphi &= \app{{\psem}_{l}(\varphi(x))}{\varphi}{x} \texttt{where }{\psem}_{l}(q_0\otimes q_1\otimes \cdots \otimes q_{n-1})=q_{n-1}\otimes q_0\otimes q_1 \otimes \cdots \\[0.5em] \llbracket \irshift{x} \rrbracket\varphi &= \app{{\psem}_{r}(\varphi(x))}{\varphi}{x} \texttt{where }{\psem}_{r}(q_0\otimes q_1\otimes \cdots \otimes q_{n-1})=q_1\otimes \cdots \otimes q_{n-1} \otimes q_0 \\[0.5em] \llbracket \irev{x} \rrbracket\varphi &= \app{{\psem}_{a}(\varphi(x))}{\varphi}{x} \texttt{where }{\psem}_{a}(q_0\otimes \cdots \otimes q_{n-1})=q_{n-1}\otimes \cdots \otimes q_0 \\[0.5em] \llbracket \iota_1; \iota_2 \rrbracket\varphi &= \llbracket \iota_2 \rrbracket (\llbracket \iota_1 \rrbracket\varphi) \end{array} \] \begin{array}{l} \\[0.2em] \downarrow \alpha(b)\overline{q}=\overline{q} \qquad \downarrow (q_1\otimes \cdots \otimes q_n) = \downarrow q_1\otimes \cdots \otimes \downarrow q_n \\[0.2em] \app{\uparrow \overline{q}}{\varphi}{(x,i)}=\app{\alpha(b)\overline{q}}{\varphi}{(x,i)} \qquad \texttt{where }\varphi(x,i)=\alpha(b)\overline{q_i} \\[0.2em] \app{\uparrow \alpha(b_1)\overline{q}}{\varphi}{(x,i)}=\app{\alpha(b_1+b_2)\overline{q}}{\varphi}{(x,i)} \qquad \texttt{where }\varphi(x,i)=\alpha(b_2)\overline{q_i} \\[0.2em] \app{q_x}{\varphi}{x}=\app{q_{(x,i)}}{\varphi}{\forall i < \Sigma(x).\;(x,i)} \\[0.2em] \app{\uparrow q_x}{\varphi}{x}=\app{\uparrow q_{(x,i)}}{\varphi}{\forall i < \Sigma(x).\;(x,i)} \end{array} \vspace*{-0.5em} \caption{\oqasm semantics} \label{fig:deno-sem} \end{figure} We define the semantics of an \oqasm program as a partial function $\llbracket\rrbracket$ from an instruction $\instr$ and input state $\varphi$ to an output state $\varphi'$, written $\llbracket \instr \rrbracket\varphi=\varphi'$, shown in \Cref{fig:deno-sem}. % The definition for $\llbracket\rrbracket$ is syntax-driven, meaning that it is defined in terms of the state syntax presented in \Cref{fig:vqir-state}. % defines the denotational semantics of \oqasm, which maps a \oqasm instruction $\instr \in \{\instr\}$ to its unitary operator on $\varphi \in \hsp{S}^d$. 
Recall that a state $\varphi$ is a tuple of $d$ qubit values, modeling the tensor product $q_1\otimes \cdots \otimes q_d$. The rules implicitly map each variable $x$ to a range of qubits in the state, e.g., $\varphi(x)$ corresponds to some sub-state $q_k\otimes \cdots \otimes q_{k+n-1}$ where $\Omegasz(x)=n$. Many of the rules in \Cref{fig:deno-sem} update a \emph{portion} of a state. We write $\app{q_{(x,i)}}{\varphi}{(x,i)}$ to update the $i$-th qubit of variable $x$ to be the (single-qubit) state $q_{(x,i)}$, and $\app{q_{x}}{\varphi}{x}$ to update variable $x$ according to the qubit \emph{tuple} $q_x$. $\app{\uparrow q_{(x,i)}}{\varphi}{(x,i)}$ and $\app{\uparrow q_{x}}{\varphi}{x}$ are similar, except that they also accumulate the previous global phase of $\varphi(x,i)$ (or $\varphi(x)$). We use $\downarrow$ to convert a qubit $\alpha(b)\overline{q}$ to an unphased qubit $\overline{q}$. $\app{\uparrow q_{x}}{\varphi}{x}$ applies the same treatment to the whole tuple $q_k\otimes \cdots \otimes q_{k+n-1}$ where $\Omegasz(x)=n$.

Function $\xsem$ updates the state of a single qubit according to the rules for the standard quantum gate $X$. \texttt{CU} is a conditional operation depending on the \texttt{Nor}-basis qubit $(x,i)$ (function $\csem$ in \Cref{fig:deno-sem}). \texttt{SR} (or $\texttt{SR}^{-1}$) applies a series of $m+1$ \texttt{RZ} (or $\texttt{RZ}^{-1}$) rotations, where the $i$-th rotation applies a phase of $\alpha({\frac{1}{2^{m-i+1}}})$ (or $\alpha({-\frac{1}{2^{m-i+1}}})$). $\qsem$ applies an approximate quantum Fourier transform; $\ket{y}$ is an abbreviation of $\ket{b_1}\otimes \cdots \otimes \ket{b_i}$ (assuming $\Omegasz(y)=i$) and $n$ is the degree of approximation. If $n = i$, then the operation is the standard QFT. Otherwise, each qubit in the state is mapped to $\qket{\frac{y}{2^{n-k}}}$, which is equal to $\frac{1}{\sqrt{2}}(\ket{0} + \alpha(\frac{y}{2^{n-k}})\ket{1})$ when $k < n$ and $\frac{1}{\sqrt{2}}(\ket{0} + \ket{1}) = \ket{+}$ when $n \leq k$ (since $\alpha(m) = 1$ for any natural number $m$). $\qsem^{-1}$ is the inverse function of $\qsem$. Note that the input state to $\qsem^{-1}$ is guaranteed to have the form $\bigotimes_{k=0}^{i-1}(\qket{\frac{y}{2^{n-k}}})$ because it has type $\tphi{n}$. $\psem_l$, $\psem_r$, and $\psem_a$ are the semantics for \itext{Lshift}, \itext{Rshift}, and \itext{Rev}, respectively.
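For intuition, with phases stored as natural-number numerators over $2^n$ (the representation also used for testing in \Cref{sec:rand-testing}), $\qsem$ and \texttt{SR} become simple modular arithmetic. The following Coq sketch uses our own function names and assumes $m - i + 1 \le n$ for \texttt{SR}:
\begin{verbatim}
Require Import List Arith.
Import ListNotations.

(* After QFT n on a variable holding value y, qubit k carries phase y / 2^(n-k),
   i.e. numerator (y * 2^k) mod 2^n. *)
Definition qft_numerators (n d y : nat) : list nat :=
  map (fun k => (y * 2 ^ k) mod 2 ^ n) (seq 0 d).

(* SR m adds phase 1 / 2^(m-i+1) to qubit i, i.e. 2^(n-(m-i+1)) to its numerator. *)
Definition sr_step (n m i v : nat) : nat :=
  (v + 2 ^ (n - (m - i + 1))) mod 2 ^ n.

Compute qft_numerators 3 3 5.   (* = [5; 2; 4], i.e. phases 5/8, 2/8, 4/8 *)
\end{verbatim}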
% For any operation application within the space domain $\hsp{S}^d$, the semantic application $U$ only has effect on the specific qubit ($\varphi_{(x,n)}$) / qubit array ($\varphi_{x}$) that it targets at, which does not create entanglement with other subsystems. % This clear separation only works for the domain $\hsp{S}^d$. % When we compile these operations to \sqir and see their effects on a general Hilbert space $\hsp{H}$, they might have entanglement effects. % \yxp{Even if we turn it into unitary over the Hilbert space, it still does not generate entanglement with other subsystems.} % \liyi{Can you have CNOT x y when you have x is Had and y is in Nor, then you will definitely have entanglement. } % However, the clear separation in $\hsp{S}^d$ provides us a decompositional and analytical way of verifying and validating quantum oracles; thus, each sub-oracle-component can be analyzed individually. The potential entanglements in a general Hilbert space becomes the naturally extended (additive) superposition effects. % In addition, all semantic functions in Fig.~\ref{fig:deno-sem} are carefully engineered to only target qubits in a register $\varphi$, and does not target on invidual vectors in the vector space $\varphi$ represents. % For example, $\xsem$ is defined for a basis phase space case $\ket{c}$, and we also define the case for superposition $\frac{1}{\sqrt{2}}(\ket{0}+(-1)^c\ket{1})$. We do not assume the the semantics of the basis phase space is automatically extended to dealing with individual elements in the superposition case. % By using the semantics to prove quantum oracle properties, we only need to consider $O(n)$ qubits instead of the possible $2^n$ expanded vector elements. % The semantics of a universal quantum assembly language like \sqir, by contrast, represents a quantum state as a unitary matrix whose size is \emph{exponential} in the number of vectors by expanding qubits to vectors in a register. \sqir's semantics also relies on the use of concrete qubits; using a unitary matrix and virtual positions would inject a virtual-to-physical mapping into the semantic definition, which can severely complicate proofs~\cite{PQPC}. This leads to the successful correctness proof of the QFT-adder for the first time (Sec.~\ref{sec:op-verification}). % We only define semantic functions for qubit forms when it is possible to apply. For example, we do not define $\xsem$ for the form $\frac{1}{\sqrt{2}}(\ket{0}+e^{2\pi{i} b}\ket{1})$, because the \oqasm type system does not allow it. \subsection{\oqasm Metatheory}\label{sec:metatheory} \myparagraph{Soundness} We prove that well-typed \oqasm programs are well defined; i.e., the type system is sound with respect to the semantics. We begin by defining the well-formedness of an \oqasm state. 
\begin{definition}[Well-formed \oqasm state]\label{def:well-formed}\rm
A state $\varphi$ is \emph{well-formed}, written $\Sigma;\Omega \vdash \varphi$, iff:
\begin{itemize}
\item For every $x \in \Omega$ such that $\Omegaty(x) = \texttt{Nor}$, for every $k <\Omegasz(x)$, $\varphi(x,k)$ has the form $\alpha(r)\ket{b}$.
\item For every $x \in \Omega$ such that $\Omegaty(x) = \tphi{n}$ and $n \le \Omegasz(x)$, there exists a value $\upsilon$ such that for every $k < \Omegasz(x)$, $\varphi(x,k)$ has the form $\alpha(r)\qket{\frac{\upsilon}{ 2^{n- k}}}$.\footnote{Note that $\Phi(r) = \Phi(r + m)$ for any integer $m$, since an integer $m$ corresponds to a phase of $2 \pi m$; so multiple choices of $\upsilon$ are possible.}
\end{itemize}
\end{definition}
\noindent
Type soundness is stated as follows; the proof is by induction on $\instr$, and is mechanized in Coq.
\begin{theorem}\label{thm:type-sound-oqasm}\rm[\oqasm type soundness]
If $\Sigma; \Omega \vdash \instr \triangleright \Omega'$ and $\Sigma;\Omega \vdash \varphi$ then there exists $\varphi'$ such that $\llbracket \instr \rrbracket\varphi=\varphi'$ and $\Sigma;\Omega' \vdash \varphi'$.
\end{theorem}
\myparagraph{Algebra} Mathematically, the set of well-formed $d$-qubit \oqasm states for a given $\Omega$ can be interpreted as a subset $\hsp{S}^d$ of a $2^d$-dimensional Hilbert space $\hsp{H}^d$,\footnote{A \emph{Hilbert space} is a vector space with an inner product that is complete with respect to the norm defined by the inner product. $\hsp{S}^d$ is a sub\emph{set}, not a sub\emph{space} of $\hsp{H}^d$ because $\hsp{S}^d$ is not closed under addition: Adding two well-formed states can produce a state that is not well-formed.} and the semantics function $\llbracket \rrbracket$ can be interpreted as a $2^d \times 2^d$ unitary matrix, as is standard when representing the semantics of programs without measurement~\cite{PQPC}. Because \oqasm's semantics can be viewed as a unitary matrix, correctness properties extend by linearity from $\hsp{S}^d$ to $\hsp{H}^d$---an oracle that performs addition for classical \texttt{Nor} inputs will also perform addition over a superposition of \texttt{Nor} inputs.

We have proved that $\hsp{S}^d$ is closed under well-typed \oqasm programs. Given a qubit size map $\Sigma$ and type environment $\Omega$, the set of \oqasm programs that are well-typed with respect to $\Sigma$ and $\Omega$ (i.e., $\Sigma;\Omega \vdash \instr \triangleright \Omega'$) forms a groupoid $(\{\instr\},\Sigma, \Omega,\hsp{S}^d)$, where $\hsp{S}^d$ is the set of $d$-qubit states that are well-formed ($\Omega \vdash \varphi$) according to \Cref{def:well-formed}. We can extend the groupoid to $(\{\instr\},\Sigma,\hsp{H}^d)$ by defining a general $2^d$ dimensional Hilbert space $\hsp{H}^d$, such that $\hsp{S}^d \subseteq \hsp{H}^d$, and removing the typing requirements on $\{\instr\}$. Clearly, $(\{\instr\},\Sigma,\hsp{H}^d)$ is still a groupoid because every \oqasm operation is valid in a traditional quantum language like \sqir. We then have the following two theorems to connect \oqasm operations with operations in the general Hilbert space:
\begin{theorem}\label{thm:subgroupoid}\rm
$(\{\instr\},\Sigma, \Omega,\hsp{S}^d) \subseteq (\{\instr\},\Sigma,\hsp{H}^d)$ is a subgroupoid.
\end{theorem}
\begin{theorem}\label{thm:sem-same}\rm
Let $\ket{y}$ be an abbreviation of $\bigotimes_{m=0}^{d-1} \alpha(r_m) \ket{b_m}$ for $b_m \in \{0,1\}$.
If for every $i\in [0,2^d)$, $\llbracket \instr \rrbracket\ket{y_i}=\ket{y'_i}$, then $\llbracket \instr \rrbracket (\sum_{i=0}^{2^d-1} \ket{y_i})=\sum_{i=0}^{2^d-1} \ket{y'_i}$. \end{theorem} We prove these theorems as corollaries of the compilation correctness theorem from \oqasm to \sqir (\Cref{thm:vqir-compile}). \Cref{thm:subgroupoid} suggests that the space $\hsp{S}^d$ is closed under the application of any well-typed \oqasm operation. \Cref{thm:sem-same} says that \oqasm oracles can be safely applied to superpositions over classical states.\footnote{Note that a superposition over classical states can describe \emph{any} quantum state, including entangled states.} \begin{figure}[t] \begin{mathpar} \inferrule[ ]{}{\inot{(x,n)}\xrightarrow{\text{inv}} \inot{(x,n)}} \inferrule[ ]{}{\texttt{SR}\;m\;x\xrightarrow{\text{inv}} \texttt{SR}^{-1}\;m\;x} \inferrule[ ]{}{\iqft{n}{x} \xrightarrow{\text{inv}} \iqft[-1]{n}{x}} \inferrule[ ]{}{\texttt{Lshift}\;x\xrightarrow{\text{inv}} \texttt{Rshift}\;x} \inferrule[ ]{\instr \xrightarrow{\text{inv}} \instr'}{\texttt{CU}\;(x,n)\;\instr \xrightarrow{\text{inv}} \texttt{CU}\;(x,n)\;\instr'} \inferrule[ ]{\instr_1 \xrightarrow{\text{inv}} \instr'_1 \\ \instr_2 \xrightarrow{\text{inv}} \instr'_2}{\instr_1\;;\;\instr_2\xrightarrow{\text{inv}} \instr'_2\;;\;\instr'_1} \end{mathpar} \caption{Select \oqasm inversion rules} \label{fig:exp-reversed-fun} \end{figure} \begin{figure}[t] \centering \begin{tabular}{c@{$\quad=\quad$}c@{\qquad}c@{$\quad=\quad$}c} \begin{minipage}{0.25\textwidth} \footnotesize \Qcircuit @C=0.25em @R=0.35em { & \qw & \multigate{3}{(x+a)_n} & \qw \\ & \vdots & & \\ & & & \\ & \qw & \ghost{(x+a)_n} & \qw \\ \end{minipage} \begin{minipage}{.45\textwidth} % \includegraphics[width=1\textwidth]{qft-adder.png} \footnotesize \Qcircuit @C=0.35em @R=0.55em { & \qw & \gate{\texttt{SR}\;0} & \multigate{3}{\texttt{SR}\;1} & \qw & \qw & \qw & \multigate{5}{\texttt{SR}\;(n-1)} & \qw \\ & & & & & \dots & & & \\ & \qw & \qw & \ghost{\texttt{SR}\; 1} & \qw & \qw & \qw & \ghost{\texttt{SR}\;(n-1)} & \qw \\ & & & & & & & & \\ & & & & & & & & \\ & \qw & \qw & \qw & \qw & \qw & \qw & \ghost{\texttt{SR}\;(n-1)} & \qw \end{minipage} \begin{minipage}{0.25\textwidth} \footnotesize \Qcircuit @C=0.25em @R=0.35em { & \qw & \multigate{3}{(x-a)_n} & \qw \\ & \vdots & & \\ & & & \\ & \qw & \ghost{(x+a)_n} & \qw \\ \end{minipage} \begin{minipage}{.45\textwidth} % \includegraphics[width=1\textwidth]{qft-adder.png} \footnotesize \Qcircuit @C=0.35em @R=0.55em { & \qw & \multigate{5}{\texttt{SR}^{-1} (n-1)} & \qw & \qw & \qw & \multigate{3}{\texttt{SR}^{-1} 1} & \gate{\texttt{SR}^{-1} 0} & \qw \\ & & & & \dots & & & & \\ & \qw & \ghost{\texttt{SR}^{-1} (n-1)} & \qw & \qw & \qw & \ghost{\texttt{SR}^{-1} 1} & \qw & \qw \\ & & & & & & & & \\ & & & & & & & & \\ & \qw & \ghost{\texttt{SR}^{-1} (n-1)} & \qw & \qw & \qw & \qw & \qw & \qw \end{minipage} \end{tabular} \caption{Addition/subtraction circuits are inverses} \label{fig:circuit-add-sub} \end{figure} \oqasm programs are easily invertible, as shown by the rules in \Cref{fig:exp-reversed-fun}. This inversion operation is useful for constructing quantum oracles; for example, the core logic in the QFT-based subtraction circuit is just the inverse of the core logic in the addition circuit (\Cref{fig:exp-reversed-fun}). This allows us to reuse the proof of addition in the proof of subtraction. 
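The inversion rules are straightforward to express as a recursive function; the following Coq fragment is a sketch over a pared-down instruction type (the constructor names are ours, and only a subset of \oqasm is covered):
\begin{verbatim}
Inductive instr : Type :=
  | IX (p : nat)
  | ISR (m : nat) | ISRinv (m : nat)
  | IQFT (n : nat) | IRQFT (n : nat)
  | ILshift | IRshift
  | ICU (p : nat) (body : instr)
  | ISeq (i1 i2 : instr).

Fixpoint invert (i : instr) : instr :=
  match i with
  | IX p       => IX p
  | ISR m      => ISRinv m
  | ISRinv m   => ISR m
  | IQFT n     => IRQFT n
  | IRQFT n    => IQFT n
  | ILshift    => IRshift
  | IRshift    => ILshift
  | ICU p body => ICU p (invert body)
  | ISeq i1 i2 => ISeq (invert i2) (invert i1)   (* sequences are reversed *)
  end.
\end{verbatim}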
The inversion function satisfies the following properties: \begin{theorem}\label{thm:reversibility}\rm[Type reversibility] For any well-typed program $\instr$, such that $\Sigma; \Omega \vdash \instr \triangleright \Omega'$, its inverse $\instr'$, where $\instr \xrightarrow{\text{inv}} \instr'$, is also well-typed and we have $\Sigma;\Omega' \vdash \instr' \triangleright \Omega$. Moreover, $\llbracket \instr ; \instr' \rrbracket \varphi=\varphi$. \end{theorem} \section{\name Quantum Oracle Framework} \label{sec:implementation} This section presents \name, our framework for specifying, compiling, testing, and verifying quantum oracles, whose architecture was given in \Cref{fig:arch}. We start by considering translation from \oqasm to \sqir and proof of its correctness. Next, we discuss \name's property-based random testing framework for \oqasm programs. Finally, we discuss \vqimp, a simple imperative language for writing oracles, which compiles to \oqasm. We also present its proved-correct compiler and means to test the correctness of \vqimp oracles. \subsection{Translation from \oqasm to \sqir}\label{sec:vqir-compilation} \newcommand{\tget}{\texttt{get}} \newcommand{\tstart}{\texttt{start}} \newcommand{\tfst}{\texttt{fst}} \newcommand{\tsnd}{\texttt{snd}} \newcommand{\tucom}[1]{\texttt{ucom}~{#1}} \newcommand{\tif}{\texttt{if}} \newcommand{\tthen}{\texttt{then}} \newcommand{\telse}{\texttt{else}} \newcommand{\tlet}{\texttt{let}} \newcommand{\tin}{\texttt{in}} \name translates \oqasm to \sqir by mapping \oqasm positions to \sqir concrete qubit indices and expanding \oqasm instructions to sequences of \sqir gates. % Most \oqasm instructions are easily mapped to operations in \sqir, with the exception of the position shifting instructions. % The difficulty there is the virtual-to-physical qubit compilation. % In \oqasm, a position $p$ is a pair of a qubit variable and offset, not % a physical qubit location in a quantum circuit. We keep track of a map % from each \oqasm position to a concrete \sqir qubit index. Translation is expressed as the judgment $\Sigma\vdash (\gamma,\instr) \steps (\gamma',\epsilon)$ where $\Sigma$ maps \oqasm variables to their sizes, $\epsilon$ is the output \sqir circuit, and $\gamma$ maps an \oqasm position $p$ to a \sqir concrete qubit index (i.e., offset into a global qubit register). At the start of translation, for every variable $x$ and $i < \Sigma(x)$, $\gamma$ maps $(x,i)$ to a unique concrete index chosen from 0 to $\sum_{x}(\Sigma(x))$. 
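A minimal sketch of such a map in Coq (the helper names \texttt{layout} and \texttt{lshift\_map} are ours): variables are laid out contiguously to obtain the initial $\gamma$, and a shifting instruction only rewires the map, mirroring the left-shift translation rule in the figure that follows.
\begin{verbatim}
Require Import List String Arith.
Import ListNotations.
Open Scope string_scope.

(* Lay out variables contiguously: ((x,i), concrete index) pairs. *)
Fixpoint layout (vars : list (string * nat)) (start : nat)
  : list ((string * nat) * nat) :=
  match vars with
  | [] => []
  | (x, sz) :: rest =>
      map (fun i => ((x, i), start + i)) (seq 0 sz) ++ layout rest (start + sz)
  end.

Compute layout [("a", 3); ("b", 1)] 0.
(* = [("a", 0, 0); ("a", 1, 1); ("a", 2, 2); ("b", 0, 3)] *)

(* Lshift x only rewires the map: (x,i) now points where (x,(i+1) mod sz) pointed. *)
Definition lshift_map (sz : nat) (x : string)
           (g : string * nat -> nat) : string * nat -> nat :=
  fun p => let '(y, i) := p in
           if String.eqb y x then g (x, (i + 1) mod sz) else g p.
\end{verbatim}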
\begin{figure}[t] \begin{mathpar} \inferrule{ }{\Sigma\vdash(\gamma,\inot{p}) \to (\gamma,\textcolor{blue}{\inot{\gamma(p)}})} \inferrule{\gamma'=\gamma[\forall i.\; i < \Sigma(x) \Rightarrow (x,i)\mapsto \gamma(x,(i+1)\%\Sigma(x))]}{\Sigma\vdash(\gamma,\ilshift{x}) \to (\gamma',\textcolor{blue}{\iskip{(\gamma'(x,0))}})} \inferrule{\Sigma\vdash(\gamma,\instr) \to (\gamma,\textcolor{blue}{\epsilon})\\ \textcolor{blue}{\epsilon' = \texttt{ctrl}(\gamma(p),\epsilon)}}{\Sigma\vdash(\gamma,\ictrl{p}{\instr}) \to (\gamma,\textcolor{blue}{\epsilon')}} \inferrule{ \Sigma\vdash (\gamma,\instr_1) \to (\gamma',\textcolor{blue}{\epsilon_1}) \\ \Sigma\vdash(\gamma',\instr_2) \to (\gamma'',\textcolor{blue}{\epsilon_2})}{\Sigma\vdash(\gamma,\sseq{\instr_1}{\instr_2}) \to (\gamma'',\textcolor{blue}{\sseq{\epsilon_1}{\epsilon_2}})} \end{mathpar} \vspace*{-1em} \caption{Select \oqasm to \sqir translation rules (\sqir circuits are marked blue)} \label{fig:compile-vqir} \end{figure} \Cref{fig:compile-vqir} depicts a selection of translation rules.\footnote{Translation in fact threads through the typing judgment, but we elide that for simplicity.} The first rule shows how to translate $\inot{p}$, which has a directly corresponding gate in \sqir. The second rule left-shifts the qubits of the target variable in the map $\gamma$, and produces an identity gate (which will be removed in a subsequent optimization pass). For example, say we have variables $x$ and $y$ in the map $\gamma$ and variable $x$ has three qubits so $\gamma$ is $\{(x,0)\mapsto 0,(x,1)\mapsto 1, (x,2)\mapsto 2,(y,0)\mapsto 3,...\}$. Then after $\ilshift{x}$ the $\gamma$ map becomes $\{(x,0)\mapsto 1,(x,1)\mapsto 2, (x,2)\mapsto 0,(y,0)\mapsto 3,...\}$. The last two rules translate the \texttt{CU} and sequencing instructions. In the \texttt{CU} translation, the rule assumes that $\instr$'s translation does not affect the $\gamma$ position map. This requirement is assured for well-typed programs per rule \rulelab{CU} in \Cref{fig:exp-well-typed}. \texttt{ctrl} generates the controlled version of an arbitrary \sqir program using standard decompositions \cite[Chapter 4.3]{mike-and-ike}. \newcommand{\transs}[3]{[\!|{#1}|\!]^{#2}_{#3}} We have proved \oqasm-to-\sqir translation correct. To formally state the correctness property we relate $d$-qubit \oqasm states to \sqir states, which are vectors of $2^d$ complex numbers, via a function $\transs{-}{d}{\gamma}$, where $\gamma$ is the virtual-to-physical qubit map. For example, say that our program uses two variables, $x$ and $y$, and both have two qubits. The qubit states are $\ket{0}$ and $\ket{1}$ (meaning that $x$ has type \texttt{Nor}), and $\qket{r_1}$ and $\qket{r_2}$ (meaning that $y$ has type \texttt{Phi}). Furthermore, say that $\gamma = \{(x,0)\mapsto 0,(x,1)\mapsto 1, (y,0)\mapsto 2, (y,1)\mapsto 3\}$. This \oqasm program state will be mapped to the $2^4$-element vector $\ket{0}\otimes \ket{1}\otimes (\ket{0}+e^{2\pi i r_1}\ket{1})\otimes (\ket{0}+e^{2\pi i r_2}\ket{1})$. \begin{theorem}\label{thm:vqir-compile}\rm[\oqasm translation correctness] Suppose $\Sigma; \Omega \vdash \instr \triangleright \Omega'$ and $\Sigma \vdash(\gamma,\instr) \to (\gamma',\epsilon)$. %, and $\overline{\gamma}$ and $\overline{\gamma'}$ are the inverse functions of $\gamma$ and $\gamma'$, respectively.\footnote{$\gamma$ and $\overline{\gamma}$ form a finite bijection. 
I.e., for every $k<d$, $\overline{\gamma}(k)$ is defined, $\gamma(\overline{\gamma}(k))=k$, and for every $p$ that appears in $\instr$, $\gamma(p)$ is defined, $\gamma(p)< d$, and $\overline{\gamma}(\gamma(p)) = p$.} Then for $\Sigma; \Omega \vdash \varphi$, $\llbracket \instr \rrbracket\varphi=\varphi'$, and we have $\llbracket \epsilon \rrbracket \times \transs{\varphi}{d}{\gamma} = \transs{\varphi'}{d}{\gamma'}$ where $\llbracket \epsilon \rrbracket$ is the matrix interpretation of $\epsilon$ per \sqir's semantics. \end{theorem} The proof of translation correctness is by induction on the \oqasm program $\instr$. Most of the proof simply shows the correspondence of operations in $\instr$ to their translated-to gates $\epsilon$ in \sqir, except for shifting operations, which update the virtual-to-physical map. % Notice that a \oqasm shifting operation on variable $x$ changes the virtual to physical map from $\gamma$ to $\gamma'$ while generating only \texttt{ID} gates. The map shifting changes the ``world view'' of later operations on $x$, because the qubit physical positions are different between $\gamma$ and $\gamma'$. % To prove the correctness, for physical positions in $x$, we compare their virtual positions before and after the shifting by using the inverse maps of $\gamma$ and $\gamma'$. Then, we show that difference implements the shifting operation semantics. Note that to link a complete, translated oracle $\instr$ into a larger \sqir program may require that $\gamma = \gamma'$, i.e., $\texttt{neutral}(\instr)$, so that logical inputs match logical outputs. This requirement is naturally met for programs written to be reversible, as is the case for all arithmetic circuits in this paper, e.g., \coqe{rz_adder} from \Cref{fig:circuit-example}. % If necessary, the programmer could add dynamic swap instructions manually (encodable in \oqasm). \ignore{ \begin{lemma}\label{thm:subgroupoid-lemma}\rm For all $\epislon \in \{\epsilon^{(\Sigma,\Omega)}\}$, if $\epislon$ is valid operation in $\hsp{S}_n$, $n \le m$, and $\hsp{S}_m$ and it every qubit in $\hsp{S}_m$ satisfies $\Omega$'s restriction, then \end{lemma} We view $(\mathcal{H}, \instr )$ as a groupoid over Hilbert space $\mathcal{H}$, we can then defined a subset of $\mathcal{H}$ as $\mathcal{H}^n_s$, where it has the following conditions: \begin{itemize} \item Each element in $\mathcal{H}^n_s$ has the form: $\ket{q_1}\otimes ... \otimes \ket{q_n}$, where $\ket{q_1}$,...,$\ket{q_n}$ are 1-dimensional qubit. \item For any element $\ket{q_1}$,...,$\ket{q_n}$ in $\mathcal{H}^n_s$, $\ket{q_i}$ has three possible forms: $\alpha\ket{c}$, $\frac{1}{\sqrt{2}} \alpha( s_1 \ket{0}+ s_2 \ket{1})$, or $\frac{1}{\sqrt{2}}\alpha~(\ket{0}+\beta\ket{1})$. \end{itemize} We view $\Sigma;\Omega\vdash \iota \triangleright \Omega'$ as a predicate for each \oqasm operation $\iota$ on where a program $\iota$ is defined given a subspace $\mathcal{H}_{(x, p)}$, then $(\mathcal{H}^n_s, \instr )$ is a sub-groupoid of $(\mathcal{H}, \instr )$ for all $\instr$ that is type-checked in $\mathcal{H}^n_s$. We then define a superoperator over $\instr$ as $\instr^*(\rho)= \llbracket \instr \rrbracket \rho \llbracket \instr \rrbracket^{\dag}$ where $\rho \in (\mathcal{H}^n_s)^*$. $(\mathcal{H}^n_s)^*$ is the collection of density matrices seen as linear transformations from $\mathcal{H}^n_s$ to $\mathcal{H}^n_s$. The superoperator gives the density matrix interpretation of the \oqasm semantics. 
We define a $2^m$ dimensional database $D$ as $\mathcal{H}^n_s \otimes D$, and $D$ has the format $\ket{q_1}$,...,$\ket{q_{2^n}}$ where $q_i$ is a $k$ array bitstring, each of the bitstring position is either $0$ or $1$. We define a new operation in $\instr$ as $\texttt{read}\;y\;x$, such that $y$ is a $k$-length qubit, and $x$ is a $m$ length qubit representing the position. The desity matrix semantics of the $\texttt{read}$ operation is given as: \[\Sigma^{2^m-1}_{0}\ket{i}\bra{i}\otimes D_{i}\] With finite bijection mapping $\tget(\rho)$ and $\varrho$, we develop the translation process as the function $(d * \Sigma * \rho * \instr * \varrho) \to (\tucom{d}* \rho * \varrho)$, where $d$ is the dimension number indicating the number of qubits in the system, $\Sigma$ maps variables to qubit numbers in \oqasm, $\rho$ is the position mapping database, $\varrho$ is the inverse function of $\tget(\rho)$, $\instr$ is an \oqasm program, and $\delta\in\tucom{d}$ is a \sqir circuit. create a mapping database $\rho$ that maps positions $p$ to a data structure $\coqe{nat} * (\coqe{nat} \to \coqe{nat}) * (\coqe{nat} \to \coqe{nat})$. We assume that all qubit locations in \sqir are managed as a number line counting from $0$. The first \coqe{nat} value is the starting position for an \oqasm variable $x$ on the number line. We assume that $\texttt{start}(\rho,x)$ is a function to get the start position of $x$ in the map $\rho$. The second function ($\mu$, $\coqe{nat} \to \coqe{nat}$) is a mapping from position offset to the offset number of the physical position in \sqir. \khh{Moved from earlier text: Function $\tstart(\rho,x)$ is equivalent to $\rho(x,0)$. } For example, a position $(x,i)$ is mapped to $\tstart(\rho,x)+\mu(i)$ in the number line. The third function ($\nu$, $\coqe{nat} \to \coqe{nat}$) is the inverse function mapping from an offset in \sqir back to the offset in \oqasm. For every offset $i$ for $x$ in \oqasm, if $\mu$ and $\nu$ are the two maps in $\rho(x)$, then $\mu(\nu(i)) = i$, and vice versa. We assume that the actual virtual to physical position mapping is $ \tget(\rho)$, which gets the physical position in \sqir for $p$. $\tget(\rho,p)$ gives us the \sqir position for $p$ and its definition is $\texttt{start}(\rho,\texttt{fst}(p))+\texttt{get\_}\mu(\rho,\texttt{fst}(p))(\texttt{snd}(p))$. On the other hand, since different virtual positions map to different physical positions, the function $\tget(\rho)$ is bijective; there is an inverse function $\varrho$ for $\tget(\rho)$, such that $\tget(\rho,p)=i \Rightarrow \varrho(i) = p$. 
The functions $\rho$, $\tget(\rho)$, and its inverse function $\varrho$ are also useful in the translation process, and we assume that they satisfy \textbf{finite bijection}, where for a set of positions $\overline{p}$, there exists a mapping database $\rho$, a dimension number $d$ and an inverse function $\varrho$, such that for all $p$ in $\overline{p}$, $\tget(\rho,p)=i$, $i<d$, $\varrho(\tget(\rho,p))=p$, and $\tget(\rho,\varrho(i))=i$.} \ignore{ \begin{definition}\label{def:vars-def}\rm (\textbf{finite bijection}) Given a virtual to physical position mapping database $\rho$, and the mapping function $\tget(\rho)$, its inverse function $\varrho$, a map from \oqasm variables to its qubit size $\Sigma$, and $d$ is the dimension of the qubits in \sqir and it is a maximum number that is larger than all physical position number in the image of $\tget{\rho}$, we say that $\rho$ and $\varrho$ is finitely bijective iff: \begin{itemize} \item For all $p$, if $\tfst(p))$ is in the domain of $\rho$ and $\tsnd(p)< \Sigma(\tfst(p))$, then $\tget(\rho,p)<d$. \item For all $i$, if $i < d$, then $\tfst(\varrho(i))$ is in the domain of $\rho$ and $\tsnd(\varrho(i))< \Sigma(\tfst(\varrho(i)))$ \item For all $p$, if $\tfst(p))$ is in the domain of $\rho$ and $\tsnd(p)< \Sigma(\tfst(p))$, then $\varrho(\tget(\rho,p)) = p$. \item For all $i$, if $i < d$, then $\tget(\rho, \varrho(i))=i$. \item For all $p_1$ $p_2$, if $p_1 \neq p_2$, then $\tstart(\rho,p_1) \neq \tstart(\rho,p_1)$. \item For all $x$ $y$, if $x \neq y$, then for all $i$ $j$, such that $i < \Sigma(x)$ and $j < \Sigma(y)$, $\tget(\rho,(x,i)) \neq \tget(\rho,(y, j))$. \item For all $p$, if $\tsnd(p) < \Sigma(\tfst(p))$, then $\tget\_\mu(\rho,\tfst(p))(\tsnd(p))<\Sigma(\tfst(p))$. \item For all $p$, if $i < \Sigma(x)$, then $\tget\_\nu(\rho,x)(i)<\Sigma(x)$. \end{itemize} \end{definition} % \mwh{The bits that were here about proof engineering were not % understandable to someone without context. If you want to say that % somehow our design made things easier, you have to present the % alternative, to indicate why it would be otherwise be hard. Being % more concrete (e.g., framing it for a particular proof) will make it % more understandable too. If this is a general benefit, I would % suggest saying something back in section 3, or speaking about the % proofs of particular operators in section 5.} % \subsection{\oqasm Proof Engineering} % \label{sec:oqasm-sem} % We benefit from three key features when writing proofs about \oqasm programs: \textit{separability}, \textit{discreteness}, and \textit{well-formedness}. % Separability means that when reasoning about state $\varphi$, we can consider each qubit separately. % This is reflected in the data structures in Coq that represent the three qubit forms in \Cref{def:well-formed}: % {\footnotesize % \noindent % $ % \inval{c}{b} \equiv e^{2 \pi i b}\ket{c} \quad % \insttwo{Hval}{h}{b} \equiv e^{2\pi{i} b}\ket{h} \quad % \iqval{b_1}{b_2} \equiv e^{2\pi{i} b_1}\qket{b_2} % $ % } % \noindent % In this representation, each qubit has its own complex global phase factors ($b, b_1$), which are independent of other qubits'. % Discreteness refers to the fact that the bits $c$ and phase values $b$ can be represented by natural numbers, which is a benefit for randomized testing in \Cref{sec:rand-testing}. % Well-formedness means that we can use assumptions about a state's form from \Cref{def:well-formed} to simplify proofs. 
% For example, consider applying a $\texttt{SR}^{[-1]}$ gate to a variable $x$ of type \texttt{Phi}. % In the result of this application, the $b_2$ value of every qubit $k$ in $x$ will be of the form $\frac{\upsilon}{2^{n - k}}$ for some $\upsilon$ (which is the same over all $k$). % Therefore, proving a property about the $b_2$ values of qubits in $x$ only requires reasoning about $\varphi(x,0)$. % If $\varphi(x,0)$'s local phase is $\frac{\upsilon}{2^{n}}$ for some $\upsilon$, then the local phase for $\varphi(x,k)$ is $\frac{\upsilon}{2^{n- k}}$. \subsection{Property-based Random Testing}\label{sec:rand-testing} % Proofs of operator correctness can be time-consuming and repetitive. \oqasm's type system ensures that states can be efficiently represented. We leverage this fact to implement a testing framework for \oqasm programs using QuickChick \cite{quickchick}, which is a property-based testing (PBT) framework for Coq in the style of Haskell's QuickCheck~\cite{10.1145/351240.351266}. We use this framework for two purposes: To test correctness properties of \oqasm programs and to experiment with the consequences of approximation for efficiency and correctness. \myparagraph{Implementation} PBT randomly generates inputs using a hand-crafted \emph{generator}, and confirms that a property holds for the given inputs. Since arithmetic/oracle operations are defined over \texttt{Nor}-basis inputs, we wrote a generator for these inputs. A single test of an \oqasm program involves five steps: (1) generate (or specify) $n$, which is the number of qubits in the input; (2) for each input variable $x$, generate uniformly random bitstrings $b_0b_1...b_{n-1}$ of length $n$, representing $x$'s initial qubit value $\bigotimes_{i=0}^{n-1} \alpha(0)\ket{b_i}$; (3) prepare an \oqasm state $\varphi$ containing all input variables' encoded bitstrings; (4) execute the \oqasm program with the prepared state; and (5) check that the resulting state satisfies the desired property. %\footnote{Given a binary operation $\oplus$, its bitstring form $\overline{\oplus}$, and operation $[-]_n$ to turn a number into a size $n$ bitstring, we aim to prove the following property about \oqasm program $\oplus(z,x,y)$: For variables $x$ and $y$, state $\varphi$, if $\varphi(x)=e^{2 \pi i b_x}\ket{[v_x]_n}$, $\varphi(y)=e^{2 \pi i b_y}\ket{[v_y]_n}$, and $\varphi(z)=e^{2 \pi i b_z}\ket{0}$, then $\llbracket \oplus(z,x,y) \rrbracket\varphi=\varphi[z\mapsto \ket{[[v_x]_n\overline{\oplus}[v_y]_n]_n}]$.} We took several steps to improve testing performance. First, we streamlined the representation of states: Per the semantics in \Cref{fig:deno-sem}, in a state with $n$ qubits, the phase associated with each qubit can be written as $\alpha(\frac{\upsilon}{2^n})$ for some natural number $\upsilon$. Qubit values in both bases are thus pairs of natural numbers: the global phase $\upsilon$ (in range $[0,2^n)$) and $b$ (for $\ket{b}$) or $y$ (for $\qket{\frac{y}{2^n}}$). An \oqasm state $\varphi$ is a map from qubit positions $p$ to qubit values $q$; in our proofs, this map is implemented as a partial function, but for testing, we use an AVL tree implementation (proved equivalent to the functional map). To avoid excessive stack use, we implemented the \oqasm semantics function tail-recursively. To run the tests, QuickChick runs OCaml code that it \emph{extracts} from the Coq definitions; during extraction, we replace natural numbers and operations thereon with machine integers and operations. 
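As a small illustration of step (5), the property for an $n$-qubit adder oracle can be phrased as a boolean checker over the natural-number encoding of the \texttt{Nor} inputs; \texttt{add\_spec} below is our own example name, and a QuickChick harness would supply randomly generated values for \texttt{x} and \texttt{a}:
\begin{verbatim}
Require Import Arith.

(* The adder property on encoded Nor inputs: result = (x + a) mod 2^n. *)
Definition add_spec (n x a result : nat) : bool :=
  Nat.eqb result ((x + a) mod 2 ^ n).

Compute add_spec 4 9 5 14.   (* = true,  since (9 + 5) mod 16 = 14 *)
Compute add_spec 4 9 8 1.    (* = true,  since (9 + 8) mod 16 = 1  *)
Compute add_spec 4 9 8 17.   (* = false: the oracle must wrap around *)
\end{verbatim}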
We present performance results in \Cref{sec:arith-oqasm}.

\myparagraph{Testing Correctness} A full formal proof is the gold standard for correctness, but it is also laborious. It is especially deflating to be well into a proof only to discover that the intended property does not hold and, worse, that nontrivial changes to the program are necessary. Our PBT framework gives assurance that an \oqasm program property is correct by attempting to falsify it using thousands of randomly generated instances, with good coverage of the program's input space. We have used PBT to test the correctness of a variety of operators useful in oracle programs, as presented in \Cref{sec:arith-oqasm}. When implementing a QFT-adder circuit, using PBT revealed that we had encoded the wrong endianness. We have also used PBT with \vqimp programs by first compiling them to \oqasm and then testing their correctness at that level.

\myparagraph{Assessing the Effect of Approximation} Because of the resource limitations of near-term machines, programmers may wish to \emph{approximate} the implementation of an operation to save qubits or gates, rather than implement it exactly. For example, a programmer may prefer to substitute QFT with an approximate QFT, which requires fewer gates. Of course, this substitution will affect the circuit's semantics, and the programmer will want to understand the \emph{maximum distance} (similarity) between the approximate and exact implementations, to see if it is tolerable. To this end, we can test a relational property between the outputs of an exact and approximate circuit, on the same inputs, to see if the difference is acceptable. \Cref{sec:approx-circs} presents experiments comparing the effect of approximation on circuits using QFT-based adders.
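One simple relational check of this kind (our own sketch, not necessarily the exact metric used in \Cref{sec:approx-circs}) counts on how many of the $n$ output bits the exact and approximate results agree:
\begin{verbatim}
Require Import Arith.

(* Number of bit positions, out of n, on which x and y agree. *)
Fixpoint matching_bits (n x y : nat) : nat :=
  match n with
  | 0 => 0
  | S m => (if negb (xorb (Nat.testbit x m) (Nat.testbit y m)) then 1 else 0)
           + matching_bits m x y
  end.

Compute matching_bits 8 200 201.   (* = 7: 11001000 vs 11001001 differ in one bit *)
\end{verbatim}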
% The remaining issue is that we need to find a good number representation of $b$, so that phase rotation gate applications ($\texttt{RZ}^{[-1]}$ and $\texttt{SR}^{[-1]}$) can be implemented by natural number operations. % We choose to record $b$ as a number $\upsilon'$ whose bitstring representation is $\texttt{rev}([\upsilon]_n)$ \footnote{$\texttt{rev}([c_1,...,c_n])=[c_n,...,c_1]$}, i.e, $[\upsilon']_n=\texttt{rev}([\upsilon]_n)$, where $[\upsilon]_n$ is the bitstring mentioned above. % Then, applying the operation $(\upsilon'+2^{n - j})\%2^{n}$ is exactly the same as applying a phase rotation $e^{\frac{2 \pi i}{2^j}}$ to $b$ as $e^{2\pi i * (b+\frac{1}{2^j})}$, and applying $(\upsilon'+2^{n}-2^{n - j})\%2^{n}$ is the same as applying a phase rotation $e^{-\frac{2 \pi i}{2^j}}$ to $b$ as $e^{2\pi i * (b-\frac{1}{2^j})}$. % Since $\texttt{SR}^{[-1]}$ is a series of $\texttt{RZ}^{[-1]}$, the $\upsilon'$ representation allows us to represent phases (and phase rotations) as natural numbers (and arithmetic operations). \subsection{\vqimp: A High-level Oracle Language}\label{sec:qimp} \begin{figure}[t] \centering \[\hspace*{-1em} \begin{array}{c} \begin{array}{l} \texttt{fixedp sin}(\textcolor{red}{Q~\texttt{fixedp }x_{/8}},\;\textcolor{red}{Q~\texttt{fixedp }x_r},\;C~\texttt{nat }n)\{ \\[0.2em] \;\;\textcolor{red}{x_r\texttt{ = }x_{/8};} \;\;C~\texttt{fixedp }n_y;\;\;\textcolor{red}{Q~\texttt{fixedp }x_z;}\;\; \textcolor{red}{Q~\texttt{fixedp }x_1;} \\[0.2em] \;\; C~\texttt{nat }n_1;\;\; C~\texttt{nat }n_2;\;\; C~\texttt{nat }n_3;\;\; C~\texttt{nat }n_4;\;\; C~\texttt{nat }n_5; \\[0.2em] \;\;\texttt{for }(C~\texttt{nat }i\texttt{ = }0;\; i\texttt{ < }n;\;i\texttt{++})\{\\ \;\;\quad n_1\texttt{ = }i+1;\;\;n_2\texttt{ = }2*n_1;\;\;n_3\texttt{ = }\texttt{pow}(8,n_2);\;\; n_4\texttt{ = }n_2+1; \\[0.2em] \;\;\quad n_5\texttt{ = }n_4!;\;\;n_y\texttt{ = }n_3 / n_5;\;\; \textcolor{red}{x_z\texttt{ = }\texttt{pow}(x_{/8},n_4);}\\[0.2em] \;\;\quad \texttt{if }(\texttt{even}(n_1))\;\;{\{ \textcolor{red}{{x_1}\texttt{ = }{n_y}*{x_z};\;\; x_r\texttt{ += }{x_1};} \}}\\[0.2em] \;\;\quad\texttt{else } {\{\textcolor{red}{{x_1}\texttt{ = }{n_y}*{x_z};\;\;{x_r}\texttt{ -= }{x_1};}\};}\\[0.2em] \;\;\quad\textcolor{red}{\texttt{inv}(x_1);\;\;\texttt{inv}(x_z);} \}\\ \;\;\texttt{return }\textcolor{red}{(8*x_r)};\\ \} \end{array}\\[8em] \sin{x}\approx 8*(\frac{x}{8}-\frac{8^2}{3!}(\frac{x}{8})^3+\frac{8^4}{5!}(\frac{x}{8})^5-\frac{8^6}{7!}(\frac{x}{8})^7+...+(-1)^{n-1}\frac{8^{2n-2}}{(2n-1)!}(\frac{x}{8})^{2n-1}) \end{array} \] \vspace*{-1em} \caption{Implementing sine in \vqimp} \label{fig:sine-impl} \end{figure} It is not uncommon for programmers to write oracles as metaprograms in a quantum assembly's host language, e.g., as we did for \coqe{rz_adder} in \Cref{fig:circuit-example}. But this process can be tedious and error-prone, especially when trying to write optimized code. To make writing efficient arithmetic-based quantum oracles easier, we developed \vqimp, a higher-level imperative language that compiles to \oqasm. Here we discuss \vqimp's basic features, describe how we optimize \vqimp programs during compilation using partial evaluation, and provide correctness guarantees for \vqimp programs. Using \vqimp, we have defined operations for the ChaCha20 hash-function \cite{chacha}, exponentiation, sine, arcsine, and cosine, and tested program correctness by running inputs through \vqimp's semantics. More details about \vqimp are available in \Cref{sec:appendix}. 
\myparagraph{Language Features} An \vqimp program is a sequence of function definitions, with the last acting as the ``main'' function. Each function definition is a series of statements that concludes by returning a value $v$. \vqimp statements contain variable declarations, assignments (e.g., $x_r\texttt{ = }x_{/8}$), arithmetic computations ($n_1\texttt{ = }i+1$), loops, conditionals, and function calls. Variables $x$ have types $\tau$, which are either primitive types $\omega^m$ or arrays thereof, of size $n$. A primitive type pairs a base type $\omega$ with a \emph{quantum mode} $m$. There are three base types: type $\tnat$ indicates non-negative (natural) numbers; type $\tfixed$ indicates fixed-precision real numbers in the range $(-1,1)$; and type $\tbool$ represents booleans. The programmer specifies the number of qubits to use to represent $\tnat$ and $\tfixed$ numbers when invoking the \vqimp compiler. The mode $m \in\{C, Q\}$ on a primitive type indicates when a type's value is expected to be known: $C$ indicates that the value is based on a classical parameter of the oracle, and should be known at compile time; $Q$ indicates that the value is a quantum input to the oracle, computed at run-time.

\Cref{fig:sine-impl} shows the \vqimp implementation of the sine function, which is used in quantum applications such as Hamiltonian simulation. Because $\tfixed$ types must be in the range $(-1,1)$, the function takes $\frac{1}{8}$ times the input angle in variable $x_{/8}$ (the input angle $x$ is in $[0,2\pi)$). The result, stored in variable $x_r$, is computed by a Taylor expansion of $n$ terms. The standard formula for the Taylor expansion is $\sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \frac{x^7}{7!} + \cdots$; the loop in the algorithm computes an equivalent formula given input $\frac{1}{8}x$, as shown at the bottom of the figure (term by term, $8\cdot\frac{8^{2i}}{(2i+1)!}\big(\frac{x}{8}\big)^{2i+1} = \frac{x^{2i+1}}{(2i+1)!}$).

\myparagraph{Reversible Computation}
\label{sec:revcomp}
Since programs that run on quantum computers must be \emph{reversible}, \vqimp compiles functions to reverse their effects upon returning.
In \Cref{fig:sine-impl}, after the \texttt{main} function returns, only the return value is copied and stored to a target variable. For other values, like $x_{/8}$, the compiler will insert an \emph{inverse circuit} to revert all side effects. When variables are reused within a function, they must be \emph{uncomputed} using \vqimp's $\sinv{x}$ operation. For example, in \Cref{fig:sine-impl}, the second \texttt{inv} operation returns $x_z$ to its state prior to the execution of $\textcolor{red}{x_z\texttt{=}\texttt{pow}(x_{/8},n_4)}$ so that $x_z$ can be reassigned in the next iteration. We plan to incorporate automatic uncomputation techniques to insert $\sinv{x}$ calls automatically, but doing so requires care to avoid blowup in the generated circuit \cite{unqomp}.

The \name compiler imposes three restrictions on the use of $\sinv{x}$, which aim to ensure that each use uncomputes just one assignment to $x$. First, since the semantics of an \texttt{inv} operation reverses the most recent assignment, we require that every \texttt{inv} operation have a definite predecessor. Example \texttt{(1)} in \Cref{fig:inv-examples} shows an \texttt{inv} operation on a variable that does not have a predecessor; \texttt{(2)} shows a variable $z$ whose predecessor is not always executed. Both are invalid in \vqimp. Second, the statements between an \texttt{inv} operation and its predecessor cannot write to any variables used in the body of the predecessor. Example \texttt{(3)} presents an invalid case where $x$ is used in the predecessor of $z$, and is assigned between the \texttt{inv} and the predecessor. The third restriction is that, while sequenced \texttt{inv} operations are allowed, the number of \texttt{inv} operations must match the number of predecessors. Example \texttt{(4)} is invalid, while \texttt{(5)} is valid, because the first \texttt{inv} in \texttt{(5)} matches the multiplication assignment and the second \texttt{inv} matches the addition assignment.
\begin{figure}[t]
\footnotesize
\[
\begin{array}{c}
\texttt{(1)}
\begin{array}{l}
a\texttt{=}x\texttt{ * }y; \\
\texttt{inv}(z); \textcolor{red}{\xmark}
\end{array}
\quad
\texttt{(2)}
\begin{array}{l}
\texttt{if}(x<y)\\
\;\;\;a\texttt{=}x\texttt{ * }y;\\
\texttt{else}\\
\;\;\;z\texttt{=}x\texttt{ * }y; \\
\texttt{inv}(z); \textcolor{red}{\xmark}
\end{array}
\quad
\texttt{(3)}
\begin{array}{l}
z\texttt{=}x\texttt{ * }y; \\
x\texttt{=}x\texttt{ + }1; \textcolor{red}{\xmark} \\
\texttt{inv}(z);
\end{array}
\quad
\texttt{(4)}
\begin{array}{l}
z\texttt{=}x\texttt{ * }y; \\
\texttt{inv}(z); \\
\texttt{inv}(z); \textcolor{red}{\xmark}
\end{array}
\quad
\texttt{(5)}
\begin{array}{l}
z\texttt{=}x\texttt{ + }y; \\
z\texttt{=}x\texttt{ * }y; \\
\texttt{inv}(z); \\
\texttt{inv}(z);
\end{array}
\textcolor{green}{\cmark}
\end{array}
\]
\caption{Example (in)valid uses of \texttt{inv}}\label{fig:inv-examples}
\end{figure}
To implement these well-formedness checks, \name's \vqimp compiler maintains a stack of assignment statements. Once the compiler hits an \texttt{inv} operation, it pops statements from the stack to find a match for the variable being uncomputed. It also checks that none of the popped statements contain an assignment of variables used in the predecessor statement.
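A much-simplified sketch of that stack discipline in Coq (our own toy model): \texttt{Assign} pushes a variable and \texttt{inv} must find its match on top of the stack. The real check is more permissive (it pops past unrelated statements) and additionally rejects intermediate writes to variables used by the predecessor, which this sketch omits:
\begin{verbatim}
Require Import List String.
Import ListNotations.
Open Scope string_scope.

Inductive stmt : Type := Assign (x : string) | Inv (x : string).

Fixpoint check (stk : list string) (prog : list stmt) : bool :=
  match prog with
  | [] => true
  | Assign x :: rest => check (x :: stk) rest
  | Inv x :: rest =>
      match stk with
      | y :: stk' => if String.eqb x y then check stk' rest else false
      | [] => false               (* no predecessor at all, as in example (1) *)
      end
  end.

Compute check [] [Assign "a"; Inv "z"].                       (* = false, cf. (1) *)
Compute check [] [Assign "z"; Inv "z"; Inv "z"].              (* = false, cf. (4) *)
Compute check [] [Assign "z"; Assign "z"; Inv "z"; Inv "z"].  (* = true,  cf. (5) *)
\end{verbatim}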
\myparagraph{Compilation from \vqimp to \oqasm} The \vqimp compiler performs \emph{partial evaluation} \cite{partialeval} on the input program given classical parameters; the residual program is compiled to a quantum circuit. In particular, we compile an \vqimp program by evaluating its $C$-mode components, storing the results in a store $\sigma$, and then using these results while translating its $Q$-mode components into \oqasm code. For example, when compiling the \texttt{for} loop in \Cref{fig:sine-impl}, the compiler will look up the value of loop-bound variable $n$ in the store and update $i$'s value in the store for each iteration. When compiling the loop-body statement $n_1\texttt{=} i + 1$, variable $n_1$ will simply be updated in the store, and no code generated. When compiling statement $\textcolor{red}{x_z \texttt{=} \texttt{pow}({x_{/8}},n_4)}$, the fact that $x_z$ has mode $Q$ means that \oqasm code must be generated. Thus, each iteration will compile the non-$C$-mode components of the body, essentially inlining the loop. As an illustration, if we were to initialize $n$ to 3, the partially evaluated program would be equivalent to the following (in \oqasm rather than \vqimp).

\begin{footnotesize} \begin{center} $ \begin{array}{l@{~}l} \textcolor{red}{x_r\texttt{ = }x_{/8};}\; & \textcolor{red}{{x_z}\texttt{ = }\texttt{pow}(x_{/8},3);}\; \textcolor{red}{{x_1}\texttt{ = }{\frac{8^2}{3!}}*{x_z};\;{x_r}\texttt{ -= }{x_1};}\; \textcolor{red}{\sinv{x_1};\;\sinv{x_z};} \\ &\textcolor{red}{{x_z}\texttt{ = }\texttt{pow}(x_{/8},5);}\; \textcolor{red}{{x_1}\texttt{ = }{\frac{8^4}{5!}}*{x_z};\;{x_r}\texttt{ += }{x_1};}\; \textcolor{red}{\sinv{x_1};\;\sinv{x_z};}\; \\ &\textcolor{red}{{x_z}\texttt{ = }\texttt{pow}(x_{/8},7);}\; \textcolor{red}{{x_1}\texttt{ = }{\frac{8^6}{7!}}*{x_z};\;{x_r}\texttt{ -= }{x_1};}\; \textcolor{red}{\sinv{x_1};\;\sinv{x_z};} \end{array} $ \end{center} \end{footnotesize}

We have verified that compilation from \vqimp to \oqasm is correct, in Coq, with a caveat: Proofs for assignment statements are parameterized by correctness statements about the involved operators. Each Coq operator function has a correctness statement associated with it; e.g., we state that the \oqasm code produced by invoking \coqe{rz_adder} for addition corresponds to an addition at the \vqimp level. In the case of \coqe{rz_adder} and a few others, we have a proof of this in Coq; for the rest, we use PBT to provide some assurance that the statement is true. Further details about \vqimp compilation and its correctness claims can be found in \Cref{sec:appendix}.
% Programmers can also use PBT to test \vqimp programs, by first compiling them to \oqasm (see \Cref{sec:grovers}).

\section{Evaluation: Arithmetic Operators in \oqasm} \label{sec:arith-oqasm} We evaluate \name by (1) demonstrating how it can be used for validation, both by verification and random testing, and (2) by showing that it gets good performance in terms of resource usage compared to Quipper, a state-of-the-art quantum programming framework~\cite{Green2013}. This section presents the arithmetic operators we have implemented in \oqasm, while the next section discusses the geometric operators and expressions implemented in \vqimp. The following section presents an end-to-end case study applying Grover's search.

\subsection{Implemented Operators} \Cref{fig:circ-evaluation,fig:op-table} summarize the operators we have implemented in \vqir.
The addition and modular multiplication circuits (parts (a) and (d) of \Cref{fig:circ-evaluation}) are components of the oracle used in Shor's factoring algorithm~\cite{shors}, which accounts for most of the algorithm's cost \cite{Gidney2021howtofactorbit}. The oracle performs modular exponentiation on natural numbers via modular multiplication, which takes a quantum variable $x$ and two co-prime constants $M, N \in \mathbb{N}$ and produces $(x * M) \% N$. We have implemented two modular multipliers---inspired by \citet{qft-adder} and \citet{ripple-carry-mod}---in \vqir. Both modular multipliers are constructed using controlled modular addition by a constant, which is implemented in terms of controlled addition and subtraction by a constant, as shown in \Cref{fig:mod-mult}. The two implementations differ in their underlying adder and subtractor circuits: the first (QFT) uses a quantum Fourier transform-based circuit for addition and subtraction \cite{Draper2000AdditionOA}, while the second (TOFF) uses a ripple-carry adder \cite{ripple-carry-mod}, which uses classical controlled-controlled-not (Toffoli) gates. Part (b) of \Cref{fig:circ-evaluation} shows results for \oqasm implementations of multiplication (without the modulo) and part (c) shows results for modular division by a constant, which is useful in Taylor series expansions used to implement operators like sine and cosine. \Cref{fig:op-table} lists additional operations we have implemented in \vqir for arithmetic and Boolean comparison using natural and fixed-precision numbers. \begin{figure*}[t] \centering \centering \begin{tabular}{c@{$\quad=\quad$}c} \begin{minipage}{0.2\textwidth} \Small \Qcircuit @C=0.5em @R=0.5em { & \qw & \ctrl{1} & \qw \\ & \qw & \multigate{3}{\texttt{ADD(c)\%n}} & \qw \\ & \vdots & & \\ & & & \\ & \qw & \ghost{\texttt{ADD(c)\%n}} & \qw \\ \end{minipage} & \begin{minipage}{0.75\textwidth} \Small \Qcircuit @C=0.5em @R=0.5em { & \ket{x_i}\quad & & \qw & \ctrl{1} & \qw & \qw & \qw & \ctrl{1} & \qw & \qw & \qw & \ctrl{1} & \qw & \ket{x_i} \\ & && \qw & \multigate{4}{\texttt{ADD(c)}} & \multigate{4}{\texttt{SUB(n)}} & \qw & \multigate{4}{\texttt{ADD(n)}} & \multigate{4}{\texttt{SUB(c)}} & \qw & \qw & \qw & \multigate{4}{\texttt{ADD(c)}} & \qw & \\ & \push{\ket{b}\quad} & & \qw & \ghost{\texttt{ADD(c)}} & \ghost{\texttt{SUB(n)}} & \qw & \ghost{\texttt{ADD(n)}} & \ghost{\texttt{SUB(c)}} & \qw & \qw & \qw & \ghost{\texttt{ADD(c)}} & \qw & \push{\quad\ket{(c+b)\%n}} \\ & & & \vdots & & & & & & & & & & & & \\ & & & & & & & & & & & & & & & \\ & & & \qw & \ghost{\texttt{ADD(c)}} & \ghost{\texttt{SUB(n)}} & \ctrl{1} & \ghost{\texttt{ADD(n)}} & \ghost{\texttt{SUB(c)}} & \targ & \ctrl{1} & \targ & \ghost{\texttt{ADD(c)}} & \qw & \\ & \ket{0}\quad && \qw & \qw & \qw & \targ & \qw & \qw & \qw & \targ & \qw & \qw & \qw & \ket{0} \gategroup{2}{3}{6}{3}{1em}{\{} \gategroup{2}{14}{6}{14}{1em}{\}} \end{minipage} \end{tabular} \vspace{1em} \begin{tabular}{c} \begin{minipage}{0.5\textwidth} \Small \Qcircuit @C=0.5em @R=0.5em { & & & \qw & \ctrl{6} & \qw & \qw & \qw & \qw & \qw & \qw & \\ & \push{\ket{x}\quad} & & \qw & \qw & \ctrl{5} & \qw & \qw & \qw & \qw & \qw & \ket{x} \\ & & & \vdots & & & & \dots & & & & \\ & & & & & & & & & & & \\ & & & & & & & & & & & \\ & & & \qw & \qw & \qw & \qw & \qw & \qw & \ctrl{1} & \qw & \\ & & & \qw & \multigate{4}{\texttt{ADD($2^0$c)\%n}} & \multigate{4}{\texttt{ADD($2^1$c)\%n}} & \qw & \qw & \qw & \multigate{4}{\texttt{ADD($2^{n-1}$c)\%n}} & \qw & \\ & \push{\ket{0}\quad} & & \qw & 
\ghost{\texttt{ADD($2^0$c)\%n}} & \ghost{\texttt{ADD($2^1$c)\%n}} & \qw & \qw & \qw & \ghost{\texttt{ADD($2^{n-1}$c)\%n}} & \qw & \push{\quad\ket{cx\%n}} \\ & & & \vdots & & & & \dots & & & & \\ & & & & & & & & & & & \\ & & & \qw & \ghost{\texttt{ADD($2^0$c)\%n}} & \ghost{\texttt{ADD($2^1$c)\%n}} & \qw & \qw & \qw & \ghost{\texttt{ADD($2^{n-1}$c)\%n}} & \qw \gategroup{1}{3}{5}{3}{1em}{\{} \gategroup{7}{3}{11}{3}{1em}{\{} \gategroup{1}{11}{5}{11}{1em}{\}} \gategroup{7}{11}{11}{11}{1em}{\}} \end{minipage} \end{tabular} \caption{Structure of modular multiplication circuits} \label{fig:mod-mult} \end{figure*} \begin{figure*}[t] \hspace*{-1em} \begin{tabular}{c @{\quad} c} \begin{minipage}[b]{0.48\textwidth} \centering \begin{tabular}{|l|c|c|c|} \hline & \# qubits & \# gates & Verified \\ \hline \oqasm TOFF & 33 & 423 & \cmark \\ \oqasm QFT & 32 & 1206 & \cmark \\ \oqasm QFT (const) & 16 & $756 \pm 42$ & \cmark \\ \hline Quipper TOFF & 47 & 768 & \\ Quipper QFT & 33 & 6868 & \\ Quipper TOFF (const) & 31 & $365 \pm 11$ & \\ \hline \end{tabular} \subcaption{Addition circuits (16 bits)} \end{minipage} \begin{minipage}[b]{0.52\textwidth} \centering \begin{tabular}{|l|c|c| >{\centering\arraybackslash} m{1cm} |} \hline & \# qubits & \# gates & QC time (16 / 60 bits)\\ \hline \oqasm TOFF & 49 & 11265 & 6 / 74 \\ \oqasm TOFF (const) & 33 & $1739 \pm 367$ & 3 / 31 \\ \oqasm QFT & 48 & 4339 & 4 / 138 \\ \oqasm QFT (const) & 32 & $1372 \pm 26$ & 4 / 158 \\ \hline Quipper TOFF & 63 & 8060 & \\ Quipper TOFF (const) & 41 & $2870\pm 594$ & \\ \hline \end{tabular} \subcaption{Multiplication circuits (16 bits)} \end{minipage} \vspace{0.5mm} \end{tabular} \\ \hspace*{-1em} \begin{tabular}{c @{\;\;} c} \begin{minipage}[b]{0.52\textwidth} \centering \begin{tabular}{|l|c|c| >{\centering\arraybackslash} m{1cm} |} \hline & \# qubits & \# gates & QC time (16 / 60 bits) \\ \hline \oqasm TOFF (const) & 49 & $28768 $ & 16 / 397 \\ \oqasm QFT (const) & 34 & 15288 & 5 / 412 \\ \oqasm AQFT (const) & 34 & 5948 & 4 / 323 \\\hline Quipper TOFF & 98 & 37737 & \\ \hline \end{tabular} \subcaption{Division/modulo circuits (16 bits)} \end{minipage} \begin{minipage}[b]{0.52\textwidth} \centering \begin{tabular}{|l|c|c|c|} \hline & \# qubits & \# gates & Verified \\ \hline \oqasm TOFF (const) & 41 & 56160 & \cmark \\ \oqasm QFT (const) & 19 & 18503 &\cmark \\ \hline \end{tabular} \subcaption{Modular multiplication circuits (8 bits)} \end{minipage} \end{tabular} \caption{Comparison of \oqasm and Quipper arithmetic operators. In the ``const'' case, one argument is a classically-known constant parameter. For (a)-(b) we present the average ($\pm$ standard deviation) over 20 randomly selected constants $c$ with $0 < c < 2^{16}$. For division/modulo, $x \textsf{ mod } n$, we only consider the case when $n=1$, which results in the maximum number of circuit iterations; the Quipper version assumes $n$ is a variable, but uses the same number of iterations as the constant case when $n=1$. In (d), we use the constant 255 ($=2^8-1$) for the modulus and set the other constant to 173 (which is invertible mod 255). Quipper supports no QFT-based circuits aside from an adder. 
``QC time'' is the time (in seconds) for QuickChick to run 10,000 tests.} \label{fig:circ-evaluation} \end{figure*} \begin{figure}[t] \centering \hspace*{-1em} \begin{tabular}{|l|>{\centering\arraybackslash}p{3.2cm}<{}|>{\centering\arraybackslash}p{3.3cm}<{}|} \hline type &Verified & Randomly Tested \\[0.5em] \hline Nat /Bool & \vspace{-0.5em} \begin{adjustwidth}{-1em}{} \begin{tabular}{l} $[x\texttt{-}N]_{q}\;$ $[N\texttt{-}x]_{q}\;$ $[x\texttt{-}y]_{q,t}\;$\\ $[x\texttt{<}N]_{q,t}\;$ \\$[x\texttt{=}y]_{q,t}\;$ $[x\texttt{<}y]_{q,t}$ \end{tabular} \end{adjustwidth} \vspace{-0.5em} \begin{adjustwidth}{-1em}{} \begin{tabular}{l} $[x\texttt{-}N]_{t}$ \\ \end{tabular} \end{adjustwidth} \\[0.5em] \hline FixedP & \vspace{-0.5em} \begin{adjustwidth}{-1em}{} \begin{tabular}{l} $[x\texttt{+}N]_{q}\;$ $[x\texttt{+}y]_{t}\;$ $[x\texttt{-}N]_{q}$ \\ $[N\texttt{-}x]_{q}\;$ $[x\texttt{-}y]_{t}\;$ $[x\texttt{=}N]_{q,t}$\\ $[x\texttt{<}N]_{q,t}$ $[x\texttt{=}y]_{q,t}$ $[x\texttt{<}y]_{q,t}$ \end{tabular} \end{adjustwidth} \vspace{-0.5em} \begin{adjustwidth}{-1em}{} \begin{tabular}{l} $[x\texttt{+}N]_{t}\;$ $[x\texttt{+}y]_{q}\;$ $[x\texttt{-}N]_{t}$\\ $[N\texttt{-}x]_{t}\;$ $[x\texttt{-}y]_{q}\;$ $[x\texttt{*}N]_{q,t}$\\ $[x\texttt{*}y]_{q,t}\;$ $[x\texttt{/}N]_{q,t}$ \end{tabular} \end{adjustwidth} \\[0.5em] \hline \end{tabular}\\[1em] {\scriptsize $x$,$y$ = variables, $N$ = constant,\\ $[]_{a,q,t}$ = AQFT-based ($a$), QFT-based ($q$), or Toffoli-based ($t$)\\ All testing is done with 16-bit/60-bit circuits. \caption{Other verified \& tested operations} \label{fig:op-table} \end{figure} \subsection{Validating Operator Correctness} As shown in \Cref{fig:circ-evaluation}, we have fully verified the adders and modular multipliers used in Shor's algorithm. These constitute the first proved-correct implementations of these functions, as far as we are aware. % Virtual qubits, and invariants enforced by \oqasm's type system, were of significant help in completing these proofs. % \liyi{we could say: The QFT-based modular multiplier is verified by utilizing the state well-formedness property (\Cref{def:well-formed}). For qubits in \texttt{Phi} space, since there is a uniformity across all qubits of qubits ($\frac{\upsilon}{2^{n - k}}$), we are able to know $\upsilon$ by only looking at qubit $(x,0)$ without looking at other qubits. This saves many proof cases to analyze. } % We benefit from three key features when writing proofs about \oqasm programs: \textit{separability}, \textit{discreteness}, and \textit{well-formedness}. % Separability means that when reasoning about state $\varphi$, we can consider each qubit separately. % This is reflected in the data structures in Coq that represent the three qubit forms in \Cref{def:well-formed}: % {\footnotesize % \noindent % $ % \inval{c}{b} \equiv e^{2 \pi i b}\ket{c} \quad % \insttwo{Hval}{h}{b} \equiv e^{2\pi{i} b}\ket{h} \quad % \iqval{b_1}{b_2} \equiv e^{2\pi{i} b_1}\qket{b_2} % $ % } % \noindent % In this representation, each qubit has its own complex global phase factors ($b, b_1$), which are independent of other qubits'. % Discreteness refers to the fact that the bits $c$ and phase values $b$ can be represented by natural numbers, which is a benefit for randomized testing in \Cref{sec:rand-testing}. % Well-formedness means that we can use assumptions about a state's form from \Cref{def:well-formed} to simplify proofs. % For example, consider applying a $\texttt{SR}^{[-1]}$ gate to a variable $x$ of type \texttt{Phi}. 
% In the result of this application, the $b_2$ value of every qubit $k$ in $x$ will be of the form $\frac{\upsilon}{2^{n - k}}$ for some $\upsilon$ (which is the same over all $k$).
% Therefore, proving a property about the $b_2$ values of qubits in $x$ only requires reasoning about $\varphi(x,0)$.
% If $\varphi(x,0)$'s local phase is $\frac{\upsilon}{2^{n}}$ for some $\upsilon$, then the local phase for $\varphi(x,k)$ is $\frac{\upsilon}{2^{n- k}}$.

All other operations in the figure were tested with Quick\-Chick. To ensure these tests were efficacious, we confirmed they could find hand-injected bugs; e.g., we reversed the input bitstrings for the QFT adder (\Cref{fig:circuit-example}) and confirmed that testing found the endianness bug. The tables in \Cref{fig:circ-evaluation} give the running times for the QuickChick tests---the times include the cost of extracting the Coq code to OCaml, compiling it, and running it with 10,000 randomly generated inputs. We tested these operations both on 16-bit inputs (the number that's relevant to the reported qubit and gate sizes) and 60-bit inputs. For the smaller sizes, tests complete in a few seconds; for the larger sizes, in a few minutes. For comparison, we translated the operators' \vqir programs to \sqir, converted the \sqir programs to OpenQASM 2.0 \cite{Cross2017}, and then attempted to simulate the resulting circuits on test inputs using DDSim~\cite{ddsim}, a state-of-the-art quantum simulator. Unsurprisingly, the simulation of the 60-bit versions did not complete when run overnight.

We also verified and property-tested several other operations, as shown in \Cref{fig:op-table}. During development, we found two bugs in the original presentation of the QFT-based modular multiplier \cite{qft-adder}. The first issue was discovered via random testing and relates to assumptions about the endianness of stored integers. The binary number in Figure 6 of the paper uses a little-endian format whereas the rest of the circuit assumes big-endian. Quipper's implementation of this algorithm solves the problem by creating a function in their Haskell compiler to reverse the order of qubits.
%\footnote{This is one of the reasons why Quipper's QFT-based adder uses many gates in \Cref{fig:circ-evaluation}. }
In \vqir, we can use the \texttt{Rev} operation (which does not insert \texttt{SWAP}s) to correct the format of the input binary number. The second issue was discovered during verification. \citet{qft-adder} indicates that the input $x$ should be less than $2^n$ where $n$ is the number of bits. However, to avoid failure, the input must \emph{actually} be less than $N$, where $N$ is the modulus defined in Shor's algorithm. To complete the proof of correctness, we needed to insert a preprocessing step to change the input to $x \% N$. The original on-paper implementation of the ripple-carry-based modular multiplier \cite{ripple-carry-mod} has the same issue.

\ignore{ In addition to formal verification and random testing, we also ``spot-checked'' results by translating \vqir programs to \sqir, converting the \sqir programs to OpenQASM 2.0 \cite{Cross2017}, and simulating the resulting circuits on test inputs. For this manual testing, we generated circuits for 8 bits and simulated each circuit with four manually generated inputs using DDSIM \cite{ddsim}. This helped to prevent bugs in the parts of our toolchain not implemented in Coq (e.g., extraction from Coq to OCaml and OpenQASM file I/O). }
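To illustrate the shape of these randomized checks, here is a classical Python sketch of an endianness-sensitive adder property; \texttt{simulate\_adder\_bits} is an assumed stand-in for \oqasm's efficiently simulable semantics and is not part of the actual Coq artifact.

\begin{verbatim}
# Sketch of a randomized property check of the kind that caught the
# endianness bug: interpret the adder's registers with a chosen bit order
# and compare against integer addition modulo 2^n.
import random

def to_bits(v, n, msb_first):
    bits = [(v >> i) & 1 for i in range(n)]   # little-endian by construction
    return bits[::-1] if msb_first else bits

def from_bits(bits, msb_first):
    bits = bits[::-1] if msb_first else bits
    return sum(b << i for i, b in enumerate(bits))

def adder_matches_spec(simulate_adder_bits, n, msb_first, trials=1000):
    for _ in range(trials):
        x, y = random.randrange(1 << n), random.randrange(1 << n)
        out = simulate_adder_bits(to_bits(x, n, msb_first),
                                  to_bits(y, n, msb_first))
        if from_bits(out, msb_first) != (x + y) % (1 << n):
            return False   # counterexample, e.g., the register was big-endian
    return True
\end{verbatim}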
\subsection{Operator Resource Usage} \Cref{fig:circ-evaluation} compares the resources used by \vqir operators with counterparts in Quipper. In both cases, we compiled the operators to OpenQASM 2.0 circuits,\footnote{We converted the output Quipper files to OpenQASM 2.0 using a compiler produced at Dalhousie University \cite{quipper-qasm}.} and then ran the circuits through the \voqc optimizer~\cite{VOQC} to ensure that the outputs account for inefficiencies in automatically-generated circuit programs (e.g., no-op gates inserted in the base case of a recursive function). \voqc outputs the final result to use gates preferred by the Qiskit compiler~\cite{Qiskit}, which are the single-qubit gates $U_1, U_2, U_3$ and the two-qubit gate $CNOT$. We also provide resource counts (computed by the same procedure) for our implementations of 8-bit modular multiplication. Quipper does not have a built-in operation for modular multiplication (which is different from multiplication followed by a modulo operator in the presence of overflow). We define all of the arithmetic operations in \Cref{fig:circ-evaluation} for arbitrary input sizes; the limited sizes in our experiments (8 and 16 bits) are to account for inefficiencies in \voqc. For the largest circuits (the modular multipliers), running \voqc takes about 10 minutes. \myparagraph{Comparing QFT and Toffoli-based operators} The results show that the QFT-based implementations always use fewer qubits. This is because they do not need ancillae to implement reversibility. For both division/modulo and modular multiplication (used in Shor's oracle), the savings are substantial because those operators are not easily reversible using Toffoli-based gates, and more ancillae are needed for uncomputation. The QFT circuits also typically use fewer gates. This is partially due to algorithmic advantages of QFT-based arithmetic, partially due to \voqc (\voqc reduced QFT circuit gate counts by 57\% and Toffoli circuit gate counts by 28\%) and partially due to the optimized decompositions we use to convert many-qubit gates to the one- and two-qubit gates supported by \voqc.% \footnote{We use the decompositions for Toffoli and controlled-Toffoli at \url{https://qiskit.org/documentation/_modules/qiskit/circuit/library/standard_gates/x.html}; the decomposition for controlled-$Rz$ at \url{https://qiskit.org/documentation/_modules/qiskit/circuit/library/standard_gates/u1.html}; and the decomposition for controlled-controlled-$Rz$ at \url{https://quantumcomputing.stackexchange.com/questions/11573/controlled-u-gate-on-ibmq}. The decompositions we use are all proved correct in the \sqir development. All of the decompositions are ancilla free.} We found during evaluation that gate counts are highly sensitive to the decompositions used: Using a more na\"{i}ve decomposition of the controlled-Toffoli gate (which simply computes the controlled version of every gate in the standard Toffoli decomposition) increased the size of our Toffoli-based modular multiplication circuit by 1.9x, and a similarly na\"{i}ve decomposition of the controlled-controlled-$Rz$ gate increased the size of our QFT-based modular multiplication circuit by 4.4x. We also found that gate counts (especially for the Toffoli-based circuits) are sensitive to choice of constant parameter: The QFT-based constant multiplication circuits had between 1320 and 1412 gates, while the Toffoli-based circuits had between 988 and 2264. 
Unlike gate counts, qubit counts are more difficult to optimize because they require fundamentally changing the structure of the circuit; this makes QFT's qubit savings for modular multiplication even more impressive. Overall, our results suggest that QFT-based arithmetic provides better and more consistent performance, so when compiling \vqimp programs (like the sine function in \Cref{fig:sine-impl}) to \oqasm, we should bias towards using the QFT-based operators. \myparagraph{Comparing to Quipper} Overall, \Cref{fig:circ-evaluation}(a)-(c) shows that operator implementations in \vqir consume resources comparable to those available in Quipper, often using fewer qubits and gates, both for Toffoli- and QFT-based operations. In the case of the QFT adder, the difference in results is because the Quipper-to-OpenQASM converter we use has a more expensive decomposition of controlled-$Rz$ gates.\footnote{\citet{quipper-qasm} decomposes a controlled-$Rz$ gate into a circuit that uses two Toffoli gates, an $Rz$ gate, and an ancilla qubit. In \voqc, each Toffoli gate is decomposed into 9 single-qubit gates and 6 two-qubit gates. In contrast, \name's decomposition for controlled-$Rz$ uses 3 single-qubit gates, 2 two-qubit gates, and no ancilla qubits.} In the other cases (all Toffoli-based circuits), we made choices when implementing the oracles that improved their resource usage. Nothing fundamental stopped the Quipper developers from having made the same choices, but we note they did not have the benefit of the \oqasm type system and PBT framework. Quipper has recently begun to develop a random testing framework based on QuickCheck~\cite{10.1145/351240.351266}, but it only applies to Toffoli-based (i.e., classical) gates. % In the Quipper development, we found that they are developing a random testing kit. This work is under construction. From what we see so far, it can test small circuits (a 5 bit addition circuit). The random testing kit is based on classical gates, including \texttt{X}, \texttt{CNOT} and Toffoli gates. % Our random testing kit is a product that can finish different tasks mentioned in the paper (\Cref{sec:rand-testing} and \Cref{sec:approx-circs}). \subsection{Approximate Operators} \label{sec:approx-circs} % One use case for \name is in the selection between competing circuit implementations. % For example, someone implementing a quantum algorithm involving something as simple as \textit{integer addition} must choose from at least three different circuit implementations \cite{ripple-carry-mod,qft-adder,Gidney2019ApproximateEP}, and different contexts may favor different implementations. % Complicating the decision, one may wish to use \textit{approximate} components (such as the approximate QFT, see \Cref{fig:background-circuit-example}) to improve performance, but it may not be clear precisely what effect such an approximation will have on an algorithm as a whole. \oqasm's efficiently-simulable semantics can be used to predict the effect of using approximate components, which enables a new workflow for optimizing quantum circuits: Given an exact circuit implementation, replace a subcomponent with an approximate implementation; use \name's PBT framework to compare the outputs between the exact and approximate circuits; and finally decide whether to accept or reject the approximation based on the results of these tests, iteratively improving performance. 
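The following Python sketch shows one way such a test-and-accept loop could look; \texttt{run\_exact} and \texttt{run\_approx} are assumed stand-ins for simulating the exact and approximate circuits, and the tolerance policy is illustrative rather than prescribed by \name.

\begin{verbatim}
# Sketch of the accept/reject workflow described above: compare an exact and
# an approximate circuit on random inputs and accept the approximation only
# if the worst observed error stays within a chosen tolerance.
import random

def max_observed_error(run_exact, run_approx, n_bits, trials=10_000):
    worst = 0
    modulus = 1 << n_bits
    for _ in range(trials):
        x = random.randrange(modulus)
        y = random.randrange(modulus)
        diff = abs(run_exact(x, y) - run_approx(x, y))
        worst = max(worst, min(diff, modulus - diff))  # error modulo overflow
    return worst

def accept_approximation(run_exact, run_approx, n_bits, tolerance):
    return max_observed_error(run_exact, run_approx, n_bits) <= tolerance
\end{verbatim}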
In this section, we use \name's PBT framework to study the effect of replacing QFT circuits with AQFT circuits (\Cref{fig:circuit-example}) in addition and division/modulo circuits. \myparagraph{Approximate Addition} \Cref{fig:approx-results}(a) shows the results of replacing QFT with AQFT in the QFT adder from \Cref{fig:circ-evaluation}(a). As expected, a decrease in precision leads to a decrease in gate count. On the other hand, our testing framework demonstrates that this also increases error (measured as absolute difference accounting for overflow, maximized over randomly-generated inputs). Random testing over a wider range of inputs suggests that dropping $b$ bits of precision from the exact QFT adder always induces an error of at most $\pm 2^b - 1$. This exponential error suggests that the ``approximate adder'' is not particularly useful on its own, as it is effectively ignoring the least significant bits in the computation. However, it computes the most significant bits correctly: if the inputs are both multiples of $2^b$ then an approximate adder that drops $b$ bits of precision will always produce the correct result. \begin{figure*}[t] \hspace*{-1em} \begin{tabular}{c @{\quad} c} \begin{minipage}[b]{0.4\textwidth} \centering \begin{tabular}{|l|c|c|} \hline Precision & \# gates & Error \\ \hline 16 bits (full) & 1206 & $\pm$ 0 \\ 15 bits & 1063 & $\pm$ 1 \\ 14 bits & 929 & $\pm$ 3 \\ \hline \end{tabular} \subcaption{Varying the precision in a 16-bit adder} \end{minipage} \begin{minipage}[b]{0.55\textwidth} \centering \begin{tabular}{|l|c|c|c|c|} \hline \# iters. ($I+1$) & TOFF & QFT & AQFT & \% savings \\ \hline 1 & 1798 & 1794 & 1717 & 4.5 / 4.5 \\ $4$ & 7192 & 4432 & 3488 & 48.5 / 21.2 \\ $8$ & 14384 & 8017 & 4994 & 65.2 / 37.7 \\ ${12}$ & 21576 & 11637 & 5684 & 73.6 / 51.1 \\ ${16}$ & 28768 & 15288 & 5948 & 79.3 / 61.1 \\ \hline \end{tabular} \subcaption{Gate counts for TOFF vs. QFT vs. AQFT division/modulo circuits; the righthand column shows the savings for TOFF vs. AQFT and QFT vs. 
AQFT} \end{minipage} \vspace{0.5mm} \end{tabular} \caption{Effects of approximation} \label{fig:approx-results} \end{figure*} \myparagraph{Exact Division/Modulo using an Approximate Adder} \label{sec:qft-moder} \begin{figure*}[t] \begin{tabular}{c } \begin{minipage}{.4\textwidth} % \includegraphics[width=1\textwidth]{qft-adder.png} \footnotesize \Qcircuit @C=0.25em @R=0.4em { \lstick{\qket{x_{n-1}}} & \multigate{4}{\texttt{$x-2^{I-i} n$}} & \multigate{4}{ \texttt{QFT}^{-1}\;N } & \ctrl{9} & \multigate{4}{ \texttt{QFT}\;N } & \multigate{4}{\texttt{$x+2^{I-i} n$}} & \qw & \qw & \qw \\ \lstick{\qket{x_{n-2}}} & \ghost{\texttt{$x-2^{I-i} n$}} & \ghost{ \texttt{QFT}^{-1}\;N } & \qw & \ghost{ \texttt{QFT}\;N } & \ghost{\texttt{$x+2^{I-i} n$}} & \qw & \qw & \qw \\ \lstick{\vdots} & & & & & & & & \rstick{\vdots} \\ \lstick{} & & & & & & & & \\ \lstick{\qket{x_0}} & \ghost{\texttt{$x-2^{I-i} n$}} & \ghost{ \texttt{QFT}^{-1}\;N } & \qw & \ghost{ \texttt{QFT}\;N } & \ghost{\texttt{$x+2^{I-i} n$}} & \qw & \qw & \qw \\ \lstick{} & & & & & & & & &\\ \lstick{\ket{b_{n-1}}} & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \qw \\ \lstick{\vdots} & & & & \dots & & & \\ \lstick{} & & & & & & & & \\ \lstick{\ket{b_{1}}} & \qw & \qw & \targ & \qw & \ctrl{-5} & \qw & \targ & \qw \\ \lstick{\vdots} & & & & & & & & \rstick{\vdots} \\ \lstick{} & & & & & & & & & \\ \lstick{\ket{b_0}} & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \qw \subcaption{QFT-based} \end{minipage} \\\\ \begin{minipage}{.7\textwidth} % \includegraphics[width=1\textwidth]{qft-adder.png} \footnotesize \Qcircuit @C=0.25em @R=0.4em { \lstick{\qket{x_{n-1}}} & \multigate{4}{\texttt{$x-2^{I-i} n$}} & \multigate{4}{ \texttt{QFT}^{-1}\;(N-i) } & \qw & \qswap & \qw & \multigate{4}{ \texttt{RSH} } & \multigate{4}{ \texttt{QFT}\;(N-i-1) } & \multigate{4}{x+(2^{I-i} n \!\mod 2^{I-i-1})} & \qw & \qw & \qw & \qw \\ \lstick{\qket{x_{n-2}}} & \ghost{\texttt{$x-2^{I-i} n$}} & \ghost{ \texttt{QFT}^{-1}\;(N-i) } & \qw & \qw \qwx &\qw & \ghost{ \texttt{RSH} } & \ghost{ \texttt{QFT}\;(N-i-1) } & \ghost{x+(2^{I-i} n \!\mod 2^{I-i-1})} & \qw & \qw & \qw & \qw \\ \lstick{\vdots} & & & & \qwx & & & & & & & & \rstick{\vdots} \\ \lstick{} & & & & \qwx & & & & & & & & \\ \lstick{\qket{x_0}} & \ghost{\texttt{$x-2^{I-i} n$}} & \ghost{ \texttt{QFT}^{-1}\;(N-i) } & \qw & \qw \qwx & \qw & \ghost{\texttt{RSH}} & \ghost{ \texttt{QFT}\;(N-i-1) } & \ghost{x+(2^{I-i} n \!\mod 2^{I-i-1})} & \qw & \qw & \qw & \qw \\ \lstick{} & & & & \qwx & & & & & & & & & \\ \lstick{\ket{b_{n-1}}} & \qw & \qw & \qw & \qw \qwx & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \qw \\ \lstick{\vdots} & & & & \qwx & & & \dots & & & & & \\ \lstick{} & & & & \qwx & & & & & & & & & \\ \lstick{\ket{b_{1}}} & \qw & \qw & \qw & \qswap \qwx & \qw & \qw & \qw & \ctrl{-5} & \qw & \targ & \qw & \qw \\ \lstick{\vdots} & & & & & & & & && & & \rstick{\vdots} \\ \lstick{} & & & & & & & & && & & \\ \lstick{\ket{b_0}} & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \qw & \qw \subcaption{AQFT-based (addition and subtraction are approximate)} \end{minipage} \end{tabular} \caption{One step of the QFT/AQFT division/modulo circuit} \label{fig:qft-moder} \end{figure*} \ignore{ \begin{figure*}[t] \centering \begin{coq} Fixpoint appx_moder' i (n:nat) (b:nat) (x ex:var) (M:nat -> bool) := match i with 0 => (SKIP (x,0)) | S j => appx_compare_half3 x n b (ex,j) M ; Rshift x; QFT x b; (CU (ex,j) ((appx_adder x n b M))); (X (ex,j)); appx_moder' j n (b+1) x ex (cut_n (div_two_spec M) n) \end{coq} \caption{Approximate QFT Modulo 
Operation (Core of Addition/Subtraction in \Cref{fig:circuit-add-sub})} \label{fig:qft-moder} \end{figure*} }

Even though the approximate adder is not particularly useful for addition, there are still cases where it can be useful as a subcomponent. For example, the modulo/division circuit relies on an addition subcomponent, but does not need every bit to be correctly added. \Cref{fig:qft-moder}(a) shows one step of an $N$-bit QFT-based modulo circuit that computes $x\!\!\mod n$ for constant $n$. The algorithm runs for $I+1$ iterations, where $2^{N-1} \le 2^I n <2^N$, with the iteration counter $i$ increasing from 0 to $I$ (inclusive). In each iteration, the circuit in \Cref{fig:qft-moder}(a) computes $x-2^{I-i} n$ and uses the result's most significant bit (MSB) to check whether $x < 2^{N-1-i}$. If the MSB is $0$, then $x \ge 2^{N-1-i}$ and the circuit continues to the next iteration; otherwise, it adds $2^{I-i} n$ to the result and continues. We can improve the resource usage of the circuit in \Cref{fig:qft-moder}(a) by replacing the addition, subtraction, and QFT components with approximate versions, as shown in \Cref{fig:qft-moder}(b). At the start of each iteration, $x < 2^{N-i}$, so it is safe to replace components with versions that will perform the intended operation on the lowest $(N-i)$ bits. The circuit in \Cref{fig:qft-moder}(b) begins by subtracting the top $(N-i)$ bits, and then converts $x$ back to the \texttt{Nor} basis using an $(N-i)$-bit precision $\texttt{QFT}^{-1}$. It then swaps the MSB with an ancilla, guaranteeing that the MSB is 0. Next, it uses a \texttt{Rshift} to move the cleaned MSB to become the least significant bit (effectively, multiplying $x$ by 2) and uses a $(N-i-1)$-bit precision QFT to convert back to the \texttt{Phi} basis. Finally, it conditionally adds back the top $(N-i-1)$ bits of the value $(2^{I-i} n \!\mod 2^{I-i-1})$, ignoring the original MSB. The result is a division/modulo circuit that uses approximate components, but, as our testing assures, is exactly correct. \Cref{fig:approx-results}(b) shows the required resources for varying numbers of iterations. Compared to the QFT-based circuit, the approximation provides a 4.5\% savings for a single iteration, and the savings increase with more iterations. In the case of the maximum number of iterations (16 for $n=1$), the AQFT-based division/modulo circuit uses 61.1\% fewer gates than the QFT-based implementation and 79.3\% fewer gates than the Toffoli-based implementation.

\section{Evaluation: \vqimp Oracles and Partial Evaluation} \label{sec:partial-eval} The prior section considered arithmetic operators implemented in \oqasm, which are the building blocks for operators we have programmed using \vqimp, including sine, arcsine, cosine, arccosine, and exponentiation on fixed-precision numbers. These operators are useful in near-term applications; for example, the arcsine and sine functions are used in the quantum walk algorithm~\cite{Childs_2009}. We used \vqimp's source semantics to test each operator's correctness. As discussed in \Cref{sec:qimp}, one of the key features of \vqimp is \emph{partial evaluation} during compilation to \vqir. The simplest optimization similar to partial evaluation happens for a binary operation $x := x\odot y$, where $y$ is a constant value. \Cref{fig:circ-evaluation} hints at the power of partial evaluation for this case---all constant operations (marked ``const'') generate circuits with significantly fewer qubits and gates.
Languages like Quipper take advantage of this by producing special circuits for operations that use classically-known constant parameters. Partial evaluation takes this one step further, pre-evaluating as much of the circuit as possible. For example, consider the fixed precision operation $\frac{x*y}{M}$ where $M$ is a constant natural number, and $x$ and $y$ are two fixed precision numbers that may be constants. This is a common pattern, appearing in many quantum oracles (recall the $\frac{8^n*x}{n!}$ in the Taylor series decomposition of sine). In Quipper, this expression is compiled to ${r_1}\texttt{ = }{\frac{x}{M}}; {r_2}\texttt{ = }{r_1*y}$. The \vqimp compiler produces different outputs depending on whether $x$ and $y$ are constants. If both are constants, \vqimp simply assigns the result of computing $\frac{x*y}{M}$ to a quantum variable. If $x$ is a constant, but $y$ is not, \vqimp evaluates $\frac{x}{M}$ classically, assigns the value to $r_1$, and evaluates $r_2$ using a constant multiplication circuit. If they are both quantum variables, \vqimp generates a circuit to evaluate the division first and then the multiplication.

In \Cref{fig:self-data}~(a) we show the size of the circuit generated for $\frac{x*y}{M}$ where zero, one, or both variables are classically known. It is clear that more classical variables in a program lead to a more efficient output circuit. If $x$ and $y$ are both constants, then only a constant assignment circuit is needed, which is a series of \texttt{X} gates. Even if only one variable is constant, it may lead to substantial savings: In this example, if $x$ is constant, the compiler can avoid the division circuit and use a constant multiplier instead of a general multiplier. These savings quickly add up: \Cref{fig:self-data}~(b) shows the qubit size difference between our implementation of sine and Quipper's. Both the TOFF and QFT-based circuits use fewer than $7\%$ of the qubits used by Quipper's sine implementation.\footnote{\vqimp also benefits from its representation of fixed-precision numbers (\Cref{sec:qimp}), which is more restrictive than Quipper's. Our representation of fixed-precision numbers reduces the qubit usage of the sine function by half, so about half of the qubit savings can be attributed to this.}

\begin{figure}[t] \begin{subfigure}[b]{.6\textwidth} \centering \begin{tabular}{| l | c | c |} \hline & \# qubits & \# gates \\ \hline OQIMP ($x$, $y$ const) & 16 & 16\\ OQIMP TOFF ($x$ const) & 33 & $1739 \pm 376$ \\ OQIMP QFT ($x$ const) & 16 & $1372 \pm 26$ \\ OQIMP TOFF & 33 & 61470 \\ OQIMP QFT & 32 & 25609 \\ \hline \end{tabular} \subcaption{Fixed-precision circuits for $\frac{x*y}{M}$ with $M=5$ (16 bits)} \end{subfigure} \hfill \begin{subfigure}[b]{.35\textwidth} \centering \begin{tabular}{| l | c|} \hline & \# qubits \\ \hline OQIMP TOFF & 418 \\ OQIMP QFT & 384 \\ Quipper & 6142 \\ \hline \end{tabular} \subcaption{Sine circuits (64 bits)} \end{subfigure} \caption{Effects of partial evaluation} \label{fig:self-data} \end{figure}

\section{Case Study: Grover's Search} \label{sec:grovers} Here we present a case study of integrating an oracle implemented with \name into a full quantum algorithm, Grover's search algorithm, implemented and verified in \sqir. Grover's search algorithm \cite{grover1996,grover1997}, described in \Cref{sec:background}, has implications for cryptography, in part because it can be used to find collisions in cryptographic hash functions \cite{grover-hash}.
Thus, the emergence of quantum computers may require lengthening hash function outputs. We have used \vqimp to implement the ChaCha20 stream cipher \cite{chacha} as an oracle for Grover's search algorithm.
%\name proved especially useful for this task, as \vqimp contains many of the operations commonly used by cryptographic hash functions, and any oracles we wrote can be efficiently tested on a classical machine.
This cipher computes a hash of a 256-bit key, a 64-bit message number, and a 64-bit block number, and it is actively being used in the TLS protocol \cite{rfc7905,rfc8446}. The procedure consists of twenty cipher rounds, most easily implemented when segmented into quarter-round and double-round subroutines. The only operations used are bitwise \textsc{xor}, bit rotations, and addition modulo $2^{32}$, all of which are included in \vqimp; the implementation is given in \Cref{fig:chacha-qr}.

To test our oracle implementation, we wrote our specification as a Coq function on bitstrings. We then defined a correspondence between these bitstrings and program states in \vqir semantics and conjectured that for any inputs, the semantics of our compiled oracle matches the corresponding outputs from our specification function. Using random testing (\Cref{sec:rand-testing}), we individually tested the quarter-round and double-round subroutines as well as the whole twenty-round cipher, performing a sort of unit testing. We also tested the oracle for the boolean-valued function that checks whether the ChaCha20 output matches a known bitstring rather than producing the output directly. This oracle can be compiled to \sqir using our verified compiler, and then the compiled oracle can be used by Grover's algorithm to invert the ChaCha20 function and find collisions. Grover's algorithm was previously implemented and verified in \sqir \cite{PQPC}, and we have modified this implementation and proof to allow for oracles with ancillae like the ones generated by our compiler; thus, our successful QuickChick tests combined with the previously proved theorems for Grover's algorithm provide confidence that we can find ChaCha20's hash collisions with a certain probability using Grover's algorithm.
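For reference, the standard ChaCha20 quarter-round on 32-bit words (RFC 8439), which the \vqimp oracle in \Cref{fig:chacha-qr} mirrors and against which the bitstring specification is checked, can be written classically as the following Python sketch (this is not the Coq specification itself).

\begin{verbatim}
# Standard ChaCha20 quarter-round on 32-bit words (RFC 8439); a classical
# sketch of the kind of reference specification the oracle is tested against.
MASK32 = 0xFFFFFFFF

def rotl32(w, r):
    return ((w << r) | (w >> (32 - r))) & MASK32

def quarter_round(a, b, c, d):
    a = (a + b) & MASK32; d = rotl32(d ^ a, 16)
    c = (c + d) & MASK32; b = rotl32(b ^ c, 12)
    a = (a + b) & MASK32; d = rotl32(d ^ a, 8)
    c = (c + d) & MASK32; b = rotl32(b ^ c, 7)
    return a, b, c, d
\end{verbatim}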
\begin{figure}[t] \[\footnotesize \begin{array}{l} \\ \quad{x_1}\texttt{ += }{x_2};\; {x_4}{\; \oplus\texttt{= } }{x_1};\; x_4\,\texttt{<{}<{}<=}\,16;\\ \quad{x_3}\texttt{ += }{x_4};\; {x_2}{\; \oplus\texttt{= } }{x_3};\; x_2\,\texttt{<{}<{}<=}\,12;\\ \quad{x_1}\texttt{ += }{x_2};\; {x_4}{\; \oplus\texttt{= } }{x_1};\; x_4\,\texttt{<{}<{}<=}\,8;\\ \quad{x_3}\texttt{ += }{x_4};\; {x_2}{\; \oplus\texttt{= } }{x_3};\; x_2\,\texttt{<{}<{}<=}\,7\\ \quad\texttt{return}\;[x_1,x_2,x_3,x_4];\\ \}\\\\ \texttt{void}\;chacha20(Q\;\tnat[16]\;x)\;\{ \\ \quad\texttt{for}(C\;\tnat\;i=20;\;i>0;\;i \texttt{ -= } 2)\;\{\\ \qquad [x[0], x[4], x[8], x[12]] = qr(x[0], x[4], x[8], x[12]);\\ \qquad [x[1], x[5], x[9], x[13]] = qr(x[1], x[5], x[9], x[13]);\\ \qquad [x[2], x[6], x[10], x[14]] = qr(x[2], x[6], x[10], x[14]);\\ \qquad [x[3], x[7], x[11], x[15]] = qr(x[3], x[7], x[11], x[15]);\\ \qquad [x[0], x[5], x[10], x[15]] = qr(x[0], x[5], x[10], x[15]);\\ \qquad [x[1], x[6], x[11], x[12]] = qr(x[1], x[6], x[11], x[12]);\\ \qquad [x[2], x[7], x[8], x[13]] = qr(x[2], x[7], x[8], x[13]);\\ \qquad [x[3], x[4], x[9], x[14]] = qr(x[3], x[4], x[9], x[14]);\\ \quad\}\\ \} \end{array} \] \caption{ChaCha20 implementation in \vqimp} \label{fig:chacha-qr} \end{figure}

\section{Related Work} \label{sec:related} \myparagraph{Oracles in Quantum Languages} Quantum programming languages have proliferated in recent years. Many of these languages (e.g. Quil~\cite{quilc}, OpenQASM 2.0~\cite{Cross2017}, \sqir~\cite{VOQC}) describe low-level circuit programs and provide no abstractions for describing quantum oracles. Higher-level languages may provide library functions for performing common oracle operations (e.g. Q\# \cite{qsharp}, Scaffold~\cite{scaffold,scaffCCnew}) or support compiling from classical programs to quantum circuits (e.g. Quipper~\cite{Green2013}), but still leave some important details (like uncomputation of ancilla qubits) to the programmer. There has been some work on type systems to enforce that uncomputation happens correctly (e.g. Silq~\cite{sliqlanguage}), and on automated insertion of uncomputation circuits (e.g. Quipper~\cite{Green2013}, Unqomp~\cite{unqomp}), but while these approaches provide useful automation, they also lead to inefficiencies in compiled circuits. For example, all of these tools force compilation into the classical gate set \texttt{X}, \texttt{CNOT}, and \texttt{CCNOT} (or ``Toffoli''), which precludes the use of QFT-based arithmetic, which uses fewer qubits than Toffoli-based approaches. Of course, programmers are not obligated to use automation for constructing oracles---they can do it by hand for greater efficiency---but this risks mistakes. \name allows programmers to produce oracles automatically from \vqimp using \texttt{inv} to uncompute, or to manually implement oracle functions in \vqir, in both cases supporting formal verification and testing.
% VARIOUS OLD TEXT:
% Quipper's approach is efficacious, but it can be inefficient and risks
% bugs. It compiles to a circuit whose gates do not leverage a quantum
% computer's specific capabilities. For example, addition between
% integers compiles to a classical ripple-carry adder rather than one
% based on the \emph{quantum fourier transform} (QFT), which can be more
% space-efficient. Quipper's compilation strategy also blows up the use
% of ancillae.
For example, implementing cosine as a Haskell function % and then building a Quipper circuit from it uses $n^2$ ancilla qubits % for an $n$ qubit-encoded number, and usage increases linearly with the % number of steps of the Taylor expansion. Of course, programmers are % not obligated to use the above recipe for constructing oracles---they % can do it by hand for greater efficiency---but this risks % mistakes. While writing this paper we found a bug in Quipper's adder: % When adding numbers of two different precisions, the lower-precision % number is shifted incorrectly.\footnote{The \texttt{k} on the last line of % \texttt{qdouble\_align} should be \texttt{h}; \url{https://www.mathstat.dal.ca/~selinger/quipper/doc/src/Quipper/Algorithms/QLS/QDouble.html\#line-413}} % \item Quipper~\cite{10.1145/2491956.2462177} -- see section 4.6. It % uses Template Haskell to take a Haskell function $f$ of type % \emph{list of bool} $\rightarrow$ \emph{list of bool} (or just a % single \emph{bool}), and converts it to $U_f$ for a fixed number of % qubits. A subsequent step ``uncomputes'' any ancillae that are no % longer needed. Note that Quipper has implemented \texttt{sin}, % \texttt{cos}, etc. But: This is low-level, since the programmer must % manage lists of (physical) qubits and ancillae. No % verification. Look at current code and see what it can do now? % Quipper \cite{Green2013} is a Haskell-like functional quantum language. Many quantum oracles have been defined in Quipper. Users are able to generate quantum circuits by using the Quipper compiler. We have mentioned several Quipper limitations in Sec.~\ref{sec:evaluation}. The major limitations are two. First, the circuits generated from Quipper oracles are not effective in terms of qubits and gates, and most Quipper oracle definitions are not verified. Quipper has a new development of compiling the language to QPMC \cite{Anticoli2017}, which is a model checker that is capable of verifying algorithms defined in Quipper. However, the oracles defined in Quipper are largely not verified. \myparagraph{Verified Quantum Programming} Recent work on formally verifying quantum programs includes \qwire~\cite{RandThesis}, \sqir~\cite{PQPC}, and \qbricks~\cite{qbricks}. These tools have been used to verify a range of quantum algorithms, from Grover's search to quantum phase estimation. Like these tools, properties of \vqir programs are expressed and verified in a proof assistant. But, unlike these tools, we focus on a quantum sub-language that, while not able to express any quantum program, is efficiently simulatable. This allows us to reuse existing infrastructure (like QuickChick~\cite{quickchick}) for testing Coq properties. % We design \vqir with both efficiency and verification in mind: on one hand, \vqir allows users to build more efficient quantum circuit constructions by leveraging native quantum operations such as Hadamard and quantum Fourier transformation; on the other hand, % we identify a class of such circuit constructions whose semantics can be succinctly expressed and efficiently simulated, the specific form of which is enforced by a type system on \vqir. % The latter eases the verification of the compilation and enables % random testing, for any well-formed \vqir program. 
\myparagraph{Verified Compilation of Quantum Programs} Recent work has looked at verified optimization of quantum circuits (e.g., \voqc~\cite{VOQC}, CertiQ~\cite{Shi2019}), but the problem of verified \emph{compilation} from high-level languages to quantum circuits has received less attention. The only examples of verified compilers for quantum circuits are ReVerC~\cite{reverC} and ReQWIRE~\cite{Rand2018ReQWIRERA}. Both of these tools support verified translation from a low-level Boolean expression language to circuits consisting of \texttt{X}, \texttt{CNOT}, and \texttt{CCNOT} gates.
# Adaptive Lookahead Pure-Pursuit for Autonomous Racing

Varundev Sukhil & Madhur Behl Dept. of Computer Science, University of Virginia {varundev<EMAIL_ADDRESS>

###### Abstract

This paper presents an adaptive lookahead pure-pursuit lateral controller for optimizing racing metrics such as lap time, average lap speed, and deviation from a reference trajectory in an autonomous racing scenario. We propose a greedy algorithm to compute and assign optimal lookahead distances for the pure-pursuit controller for each waypoint on a reference trajectory to improve the race metrics. We use a ROS based autonomous racing simulator to evaluate the adaptive pure-pursuit algorithm and compare our method with several other pure-pursuit based lateral controllers. We also demonstrate our approach on a scaled real testbed using an F1/10 autonomous racecar. Our method results in a significant improvement ($20\%$) in the racing metrics for an autonomous racecar.

## I Introduction

Autonomous racing can be considered the extreme version (high speeds, and close proximity to other self-driving agents) of the self-driving car problem, and therefore making progress here will enable breakthroughs in agile and safe autonomy. Autonomous racing is already becoming a futuristic motor-sport [1]. Roborace [2] is Formula E's sister series, which features fully autonomous race cars. International autonomous racing competitions such as F1/10 autonomous racing [3, 4] and Autonomous Formula SAE [5] are becoming proving grounds for testing perception, planning, and control algorithms at higher speeds. Amazon has also recently announced a 1/18 scale DeepRacer testbed [6] for end-to-end driving and reinforcement learning methods for autonomous racing.

For a single vehicle to race autonomously around a track, the environment around the car on the racetrack must be perceived. This is typically done using a Simultaneous Localization And Mapping (SLAM) algorithm ([7, 8, 9, 10]). Next, the map is used to obtain a reference trajectory ([11, 12, 13]) that the race car can follow. Finally, the vehicle's steering and velocity controller is fed with small trajectory parts with a defined time horizon while the car is driving around the track. This combination of path planning and motion control is a critical capability for autonomous vehicles.

Pure-pursuit controllers are a prevalent class of geometric lateral control algorithms for steering autonomous cars. This paper focuses on advancing the design of an adaptive version of the Ackermann-adjusted pure-pursuit controller [14] to make it suitable for the purpose of autonomous racing. The analysis and scope of this paper are limited to the single agent setting, where a single autonomous race car is tasked with following a reference trajectory (often the raceline), with the minimum lap time. This is known as the _time-trial_ racing problem.

Research contributions of this paper: With the autonomous racing time-trial scenario in mind, this paper has the following novel contributions:

1. A greedy algorithm for adaptive lookahead pure-pursuit: given a reference trajectory, our offline algorithm produces the optimal lookahead distance assignment for each waypoint on the reference trajectory based on a tunable convex racing objective.

2.
We demonstrate the increased performance in lap time and average speed of the adaptive lookahead pure-pursuit implementations and compare them to a baseline Ackermann-adjusted pure-pursuit in a Gazebo-based racing simulator [15] and on a real scaled F1/10 autonomous racecar [4].

## II Related Work

Autonomous racing has received attention in recent years from the robotics, control systems, autonomous vehicles, and deep learning communities. In [16], the authors present the use of a nonlinear model predictive controller (NMPC) for the control of 1:43 scale RC race cars. Using a dynamical model of the vehicle, the authors compute racing trajectories and control inputs using a receding horizon based controller. A similar MPC-based controller is also presented in [17, 18]. In [19], the authors design a controller to drive an autonomous vehicle at the limits of its control and traction. AutoRally, an open-source 1:5 scale vehicle platform for aggressive autonomous driving, is presented in [20]. In all of this work, the MPC directly generates the steering and throttle control inputs based on the reference trajectory and the state of the vehicle. With these approaches, an accurate and detailed dynamical model is required.

Researchers have also analyzed the problem of computing the optimal (fastest) raceline for a given track layout. A minimum curvature trajectory controller for the Roborace DevBot autonomous racecar is described in [21]. [22, 23, 24, 25, 26] address the problem of computing the optimal racing line. In our work, we assume that the race line is known a priori and provided as a reference trajectory. Our proposed adaptive lookahead pure-pursuit algorithm can work for any reference trajectory.

Path tracking is the problem concerned with determining speed and steering inputs at each instant of time in order for the robot to follow a certain path. In [27], the authors describe a model-based receding horizon controller for pure-pursuit tracking. They accommodate the vehicle's steady-state lateral dynamics to improve tracking performance at high speeds. [28] investigates the application of the pure-pursuit technique for reactive tracking of paths for nonholonomic mobile robots. Researchers have also analyzed the stability of mobile robot path tracking algorithms [29], including pure-pursuit. We guide the reader towards [30] for a detailed review of the applications of pure-pursuit. Limitations of pure-pursuit such as corner cutting and limited maximum speed are addressed in [31, 32], and those approaches have been successful within the stated scope of their projects. However, the metrics for an autonomous racecar as defined in this paper require a different approach, one that combines elements of the previous work with a novel method to maximize a global racing objective.

## III Problem Formulation

We present a brief overview of the pure-pursuit algorithm in order to provide the background and motivation for our work on adaptive lookahead pure-pursuit.

### III-A Pure-Pursuit Algorithm

Pure-pursuit is a seminal algorithm for geometric lateral control that can be easily implemented in several applications including autonomous robots. It dates back to the pursuit of a missile toward a target [33]. This algorithm is popular for its ability to recover if the robot moves too far away from the reference trajectory.

Seminal Pure-Pursuit
Pure-pursuit computes the angular velocity command that moves the robot from its current position to reach a lookahead waypoint in front of the robot.
The linear velocity is assumed constant. As the robot pursues the goal, the algorithm then moves the lookahead point further on the path based on the current position of the robot. The original pure-pursuit algorithm [34] was implemented on a full differential-drive robot while taking into account its associated kino-dynamic constraints. Consider a robot $R$ whose pose is $(x_{1},y_{1},\phi)$, where $(x_{1},y_{1})$ represent the 2D position of the robot and $\phi$ is its current heading in the local frame, and a goal position ($x_{2},y_{2}$) that is lookahead distance $l_{d}$ away on the reference trajectory. The pure-pursuit controller is tasked with finding the curvature of the circular arc that will guide the robot from its current position to the goal. The relative angular offset, $\alpha$, between the robot's current heading and the goal, and the curvature $k$ are calculated using:

$\alpha=\tan^{-1}(\frac{y_{2}-y_{1}}{x_{2}-x_{1}});\quad k=\frac{2\sin(\alpha)}{l_{d}}$ (1)

The curvature provided by equation (1) is used to calculate the heading required to move the robot at a constant speed along the circular arc. Once the arc is computed, the robot follows the arc at a fixed velocity for a certain time $\tau$, before recomputing the goal based on the lookahead distance. The lookahead distance parameter, $l_{d}$, controls how far along the reference path the robot should look from the current location to compute the steering/lateral correction commands. Changing this parameter affects the tracking behaviour of the robot: if the distance is low, it can lead to oscillations around the reference path, and if it is too high, it can cause large deviations and lead to corner-cutting [31, 32].

Figure 1: Calculating the desired heading $\theta$ using Ackermann-adjusted pure-pursuit from the racecar's $base\\_link$ at the center of the rear axle

Ackermann-Steering Adjustment
The seminal pure-pursuit produces undesired driving behavior like cutting corners [35] when implemented in an Ackermann-steering [36] robot and when the look-ahead parameter is not well tuned [37]. To implement a pure-pursuit path tracking controller on non-holonomic Ackermann-steering robots, we need to add the geometric constraints of the robot to equation (1). To do so, we use the Ackermann-adjusted pure-pursuit implementation as described in [14]. We define the $base\\_link$ ($x_{1},y_{1}$) as the center of the rear axle of the racecar. By including the robot wheelbase $L$ (distance between the front and the rear axle), the pure-pursuit controller calculates the heading $\theta$ required to guide the robot along the curvature as:

$\theta=\tan^{-1}(kL)=\tan^{-1}(\frac{2L\sin(\alpha)}{l_{d}})$ (2)

This is depicted in Figure 1. The racecar finds the nearest point to its $base\\_link$ in the reference trajectory and identifies a goal waypoint on the trajectory that is distance $l_{d}$ away from the $base\\_link$. It then computes the arc of radius $R$ that joins the $base\\_link$ to the goal to find the angular offset $\alpha$. Adjusting for the racecar's wheelbase $L$, the angular offset to the goal is calculated with reference to the front axle, which is distance $L$ away from the $base\\_link$ along the heading of the racecar. This heading is $\theta$, and it is calculated from equation (2). The curvature $k$, the goal $(x_{2},y_{2})$, and the angle $\theta$ are continuously updated as the racecar follows the reference trajectory.

### III-B Autonomous racing problem setup

We define a race-track as any closed-loop drivable environment.
The reference trajectory is a sequence of way-points that the car can follow. As described earlier, there are several ways of choosing the reference trajectory: mathematical race lines such as the minimum-distance or minimum-curvature line, or more complicated race lines computed while taking into account the dynamics of the vehicle. Let $\mathcal{W}$ denote the set of $N$ waypoints $w_{i}$ that collectively form the reference trajectory:

$\mathcal{W}=\\{w_{i},\quad i\in 1\dots N\\}$ (3)

Each waypoint $w_{i}$ represents the coordinates $(x\\_map_{i},y\\_map_{i})$, measured relative to the start/finish line, and the heading $\theta_{i}$ to the next waypoint $w_{i+1}$, i.e.

$w_{i}=\\{x\\_map_{i},y\\_map_{i},\theta_{i}\\}$ (4)

Our approach is agnostic to whether the reference trajectory is optimal or not, and will work as long as the reference trajectory is a closed loop.

### III-C Adaptive Pure-Pursuit Problem Statement

In racing, the ultimate objective is to be faster than your opponents. This can be translated into having a lower lap time than the opponents. The lap time depends on many factors, including average velocity around the track, total distance travelled, etc. In the absence of other opponents, the goal is to stick to the reference trajectory and be as fast as possible. For this paper we assume a single racecar on the track at any time (time-trial mode).

As described in Section III-A, the lookahead distance $l_{d}$ of the pure-pursuit controller is the most important parameter determining the behavior of the autonomous racecar. We pose the following question: _What is the optimal value of the lookahead distance for a pure-pursuit controller that will result in the fastest lap around the track?_ One can think of this as an offline label assignment problem, where we want to assign each waypoint $w_{i}$ on the reference trajectory an associated optimal lookahead distance $l_{j}$ that the pure-pursuit controller will take as input when it arrives at that waypoint. This idea forms the basis for an _adaptive lookahead pure-pursuit_ controller.

Given a reference trajectory $\mathcal{W}$ that consists of way-points described in equation (4), consider a set of $K$ lookahead distances (labels) $\mathcal{L}$:

$\mathcal{L}=\\{l_{j},\quad j\in 1\dots K\\}$ (5)

A lookahead label informs the underlying pure-pursuit controller about the control horizon. The pose of the racecar at any given time in the race-track is denoted as the tuple $\mathcal{T}_{i}$, such that:

$\mathcal{T}_{i}=<x_{i},y_{i},\phi_{i},v_{i}>$ (6)

where $(x_{i},y_{i})$ is the position of the racecar in the race-track relative to the start/finish line, $\phi_{i}$ is the heading of the racecar at the given position, and $v_{i}$ is the current velocity of the racecar. Given each way-point $w_{i}$ and the lookahead distance set $\mathcal{L}$, we want to compute a function assignment $\gamma$ such that:

$\gamma(w_{i},l_{j},\mathcal{T}_{i})\rightarrow(v_{exit\\_i},\delta_{i})$ (7)

where $v_{exit\\_i}$ is the exit velocity of the racecar, and $\delta_{i}$ is the deviation from the reference trajectory of the racecar for the given way-point and lookahead distance. The label assignment policy $\pi$ can be defined as a mapping for every way-point with an optimal lookahead distance from the set $\mathcal{L}$.
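To make the setup concrete, the following Python sketch shows how a per-waypoint lookahead label (the output of the policy $\pi$) plugs into the Ackermann-adjusted pure-pursuit law of equation (2); the data layout and function name are illustrative and are not the actual ROS implementation.

```python
# Sketch: one control step of adaptive pure-pursuit. Waypoints are
# (x_map, y_map, theta) tuples; lookahead_labels[i] is the label assigned
# to waypoint i by the policy. Names and structure are illustrative.
import math

def adaptive_pure_pursuit_step(pose, waypoints, lookahead_labels, wheelbase):
    x1, y1, phi = pose                       # base_link position and heading
    # the nearest waypoint selects which lookahead label to use
    i = min(range(len(waypoints)),
            key=lambda j: (waypoints[j][0] - x1) ** 2 + (waypoints[j][1] - y1) ** 2)
    l_d = lookahead_labels[i]
    # goal: first waypoint ahead of the nearest one that is at least l_d away
    j = i
    for _ in range(len(waypoints)):
        if math.hypot(waypoints[j][0] - x1, waypoints[j][1] - y1) >= l_d:
            break
        j = (j + 1) % len(waypoints)
    x2, y2 = waypoints[j][0], waypoints[j][1]
    # angular offset of the goal in the vehicle frame, then Eq. (2)
    alpha = math.atan2(y2 - y1, x2 - x1) - phi
    theta = math.atan2(2.0 * wheelbase * math.sin(alpha), l_d)
    return theta
```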
## IV Adaptive Lookahead Pure-Pursuit
Algorithm 1: Lookahead Label Assignment
Input: $\mathcal{T}_{init}$, $v_{init}=0$, $\mathcal{W}$, $\mathcal{L}$
while $i<N$ do
  $R=SpawnCar(\mathcal{T}_{i})$
  for each $l_{j}\in\mathcal{L}$ do
    PurePursuit($l_{j}$, $R$)
    if CrashDetected() then
      ResetCar($R$); $v_{exit\\_i}=0$; $\delta_{i}=\infty$
    else
      when $R$ reaches the goal set by $l_{j}$ on $\mathcal{W}$, calculate $\\{v_{current},\delta_{current}\\}$ and set $v_{exit\\_i}=v_{current}$, $\delta_{i}=\delta_{current}$, $v_{i+1}=v_{exit\\_i}$
    end if
  end for
  $\pi(w_{i})=\pi^{*}(\beta,v_{exit\\_i},\delta_{i})$ over all $l_{j}\in\mathcal{L}$
end while
Result: $\pi$ assigns a label $l_{i}\in\mathcal{L}$ to every $w_{i}\in\mathcal{W}$
Race-tracks have sections of lengthy corridors with no turns or only small-angle turns, and a racecar must utilize these sections of the race-track to achieve higher speeds in order to minimize lap times. Longer lookahead distances yield higher speeds; but at tight turns, the racecar will attempt to cut corners, leading to collisions with the bounds of the race-track. This means that the lookahead distance has to be tuned to work with the most difficult section of the race-track. Consequently, we use multiple lookahead distances for different sections of the track. We find the labelling policy $\pi$ which assigns lookahead distances to different sections of the track based on the desired racing objectives. We first define the racing objectives: #### IV-1 Maximum Velocity Pure-Pursuit (vel*) The labelling policy $\pi_{v}^{*}$ maximizes the exit velocity $v_{exit\\_i}$ for each waypoint $w_{i}$ by selecting the appropriate lookahead distance $l_{i}$ from the set $\mathcal{L}$, i.e. $\pi_{v}^{*}=\arg\max_{\mathcal{L}}(\sum_{i=1}^{N}v_{exit\\_i})\quad\forall\;l_{i}\in\mathcal{L}$ (8) #### IV-2 Minimum Deviation Pure-Pursuit (dev*) The labelling policy $\pi_{\delta}^{*}$ minimizes the deviation $\delta_{i}$ for each waypoint $w_{i}$ by selecting the lookahead distance $l_{i}$ which produces the minimum $\delta_{i}$ over all lookaheads in $\mathcal{L}$. We define deviation as the area between the reference trajectory and the actual trajectory taken by the racecar. $\pi_{\delta}^{*}=\arg\min_{\mathcal{L}}(\sum_{i=1}^{N}\delta_{i})\quad\forall\;l_{i}\in\mathcal{L}$ (9) #### IV-3 Convex Combination Pure-Pursuit This objective is a convex combination of the previous two objectives from equations (8) and (9), governed by the trade-off factor $\beta$. $\pi_{v-\delta}^{*}=\beta(\pi_{v}^{*})+(1-\beta)(\pi_{\delta}^{*});\quad\beta\in[0,1]$ (10) Figure 2: An iteration of the lookahead label assignment algorithm, with the racecar spawned using $\mathcal{T}$, the goal set by the current lookahead $l_{j}$, and the actual trajectory taken by the racecar using the current lookahead until the goal - where the exit pose, deviation ($\delta$) and $v_{exit}$ are logged Depending on the application, the trade-off factor $\beta$ can be adjusted such that $\beta=0$ produces minimum deviation and $\beta=1$ produces maximum achievable velocity. Having defined the different objectives for the adaptive lookahead label assignment, we now present a novel lookahead label assignment algorithm which assigns the optimal lookahead distance label to each waypoint based on the specified objective function (Eqs. 8, 9, and 10). An overview of our method is presented in Algo. 1, and a visual representation is provided in Fig. 2.
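The sketch below illustrates Algorithm 1 combined with the greedy selection of Eqs. (8)-(10). The simulator interface (`sim.spawn_car`, `sim.run_pure_pursuit`) and the normalization of the velocity and deviation terms are illustrative assumptions: the paper does not specify how the two terms are scaled before forming the convex combination.

```python
import math

def assign_lookahead_labels(waypoints, lookaheads, sim, beta=0.5):
    """Sketch of Algorithm 1: evaluate every lookahead label at every waypoint
    in simulation, then greedily pick the label maximizing the convex
    combination objective of Eq. (10) (beta=1 ~ vel*, beta=0 ~ dev*)."""
    labels = []
    for w in waypoints:
        results = []                                    # (l_j, v_exit, deviation)
        for l_j in lookaheads:
            car = sim.spawn_car(w)                      # SpawnCar(T_i)
            crashed, v_exit, dev = sim.run_pure_pursuit(car, l_j)
            if crashed:                                 # CrashDetected(): label is ruled out
                v_exit, dev = 0.0, math.inf
            results.append((l_j, v_exit, dev))
        # Normalize so velocity and deviation are comparable (one simple choice).
        v_max = max((v for _, v, _ in results), default=0.0) or 1.0
        d_max = max((d for _, _, d in results if math.isfinite(d)), default=0.0) or 1.0

        def score(entry):
            _, v, d = entry
            dev_term = (1.0 - d / d_max) if math.isfinite(d) else 0.0
            return beta * (v / v_max) + (1.0 - beta) * dev_term

        labels.append(max(results, key=score)[0])       # greedy pi*(w_i)
    return labels                                       # pi: one lookahead label per waypoint
```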
Figure 3: Clockwise from Top Left: The F1/10 platform is manually driven around the race track to create a ROS map using traditional SLAM, the ROS map is exported to CAD where the map bounds are extruded and exported as a 3D mesh, the label assignment algorithm is performed on the new map using the set simulation parameters, & the labels are exported to be validated on the F1/10 platform For a waypoint $w_{i}$, we spawn the autonomous car in the simulator with the pose tuple $\mathcal{T}_{i}$ using the function $SpawnCar()$. At this waypoint, we simulate the function $\gamma$ (Eq. 7) for each of the possible lookahead values in the set $\mathcal{L}$. Each iteration of $\gamma$ makes the racecar follow Ackermann-adjusted pure-pursuit with the current lookahead until it approaches the goal on the reference trajectory originally set when the racecar was spawned (at both the spawn and the goal, the Euclidean distance between the racecar's $base\\_link$ and the corresponding waypoint on the reference trajectory is minimal compared to all other waypoints in $\mathcal{W}$). During this time the algorithm continuously computes the racecar's deviation from the reference trajectory and its current velocity. At the end of the current iteration, when the racecar is closest to the original goal set at spawn, the exit velocity $v_{exit\\_i}$ (the current velocity when the racecar is closest to the original goal) and the total deviation from the reference trajectory $\delta_{i}$ (accumulated while the racecar travelled from the spawn location to the goal) are logged. If the racecar collides with the race-track boundaries at any time during the current iteration of the algorithm, the corresponding lookahead distance is not considered as a candidate for selection at the current waypoint $w_{i}$. This is captured by the $CrashDetected()$ subroutine in Algorithm 1. When the algorithm has iterated through all lookaheads in $\mathcal{L}$ for all waypoints in $\mathcal{W}$, the logged data containing [$v_{exit_{i}}$, $\delta_{i}$] is matched to the corresponding lookahead and stored for offline tuning. Next, we greedily select the lookahead distance which is best suited for the objective function using the policy $\pi^{*}$. For example, for $vel*$ we pick the lookahead distance with the maximum exit velocity at each waypoint and assign the corresponding lookahead as its label. The same criterion applies to the $dev*$ and $convex\\_combination$ objectives. The policy is applied to assign the best corresponding lookahead label from $\mathcal{L}$ for all waypoints in $\mathcal{W}$. Figure 4: [Left Half]: The labels generated using the lookahead label assignment for various values of $\beta$; [Right Half]: Race metrics performance of the various pure-pursuit implementations compared to the baseline Ackermann-adjusted pure-pursuit on the simulator and testbed ## V Implementation on Simulator & Testbed The race-track used in the experiment is a small indoor setup with tight turns, and to ease computation on the racecar's onboard embedded computer (we use the NVIDIA Jetson TX2), we decided to limit the number of lookaheads to 3. While this is not a limitation of the algorithm, tightly grouped lookahead values did not produce a racing performance increase large enough to justify the additional computation demanded of the onboard computer. We chose the lookahead distance set $\mathcal{L}=\\{1.0,1.5,2.0\\}$ (in meters), with $K=3$.
Empirically, the racecar tracked the reference trajectory best at the $1.0$ m lookahead, and at the $2.0$ m lookahead the racecar was able to achieve the maximum permissible velocity. ### V-A Experiment Setup Fig. 3 provides an overview of the experiment workflow, where the major steps are described below: 1. Mapping the Race Track: The F1/10 racecar is manually driven around the race track to build a 2D occupancy grid map using the Hector SLAM algorithm [38]. 2. ROSMap2Gazebo: We extrude the map bounds by using a smoothing filter and export the resulting 3D mesh to Gazebo as a world model. 3. Label Assignment: The lookahead label assignment algorithm is run on the virtual race track in the ROS F1Tenth simulator for $\beta=[0.0,0.25,0.5,0.75,1.0]$, and the resulting lookahead label sets are benchmarked for performance. 4. Validation on Testbed: The labels generated by our algorithm are exported to the F1/10 testbed and verified against the simulation results. In doing so, we can go from a real track, to a real map, to a simulated track, and back to the testbed (Fig. 3). ### V-B Testbed Execution & Results For accurate localization at high speeds, the F1/10 testbed was equipped with the CDDT particle filter using a GPU-enabled ray-tracing algorithm [39]. In Fig. 4, the left half shows the reference trajectory overlaid with the lookahead labels, where red, yellow, and green represent short, medium, and long lookahead distances respectively, along with a chart showing the effect of the trade-off factor $\beta$ on the best lap time for the current setting. Note that extreme emphasis on either velocity or deviation optimization leads to worse lap times than a balanced emphasis. Observed lap time differences between the simulation and the real-world implementation were within 0.5 seconds, and the total lap deviation during the real-world implementation was within $5\%$ of the simulated deviation. This can be seen in the right half of Fig. 4, which compares race metrics of the F1/10 autonomous racecar on the real race-track. The $convex\\_combination$ label assignment has better performance in both lap time and average lap speed on the F1/10 testbed, with a $20\%$ improvement over the baseline implementation. The impact of the convex factor $\beta$ on the lap time is shown in Fig. 4. As $\beta$ changes from 0.25 (minimum deviation) to 0.75 (maximum velocity), the label assignments produce varying lap times, with the best performance on all metrics at around $\beta=0.5$. At $\beta=0.0$, the racecar's performance was very similar to the baseline Ackermann-adjusted pure-pursuit, and several lookahead labels for $\beta=1$ led to undesirable behaviors including oscillations, drifting, and general loss of path tracking on multiple turns. ## VI Conclusion & Future Work In this paper we have demonstrated that adaptive lookahead pure-pursuit outperforms Ackermann-adjusted pure-pursuit in terms of race-related metrics such as lap time and average lap speed, and is well suited for autonomous racing, both in simulation and on the F1/10 testbed. The analysis focuses on a single-agent setting, where a single racecar is tasked with following a reference trajectory with the minimum lap time. Our future work involves using adaptive lookahead pure-pursuit for multiple autonomous racecars and creating a formal framework for autonomous overtaking at high speeds and in close-proximity situations. ## References * [1] Walt Scacchi. Autonomous emotorsports racing games: Emerging practices as speculative fictions.
Journal of Gaming & Virtual Worlds, 10(3):261–285, 2018. * [2] Global championship of driverless cars. url=https://roborace.com/, journal=Roborace. * [3] Madhur Behl. F1/10 autonomous racing. 2018\. http://f1tenth.org. * [4] Matthew O’Kelly, Varundev Sukhil, Houssam Abbas, Jack Harkins, Chris Kao, Yash Vardhan Pant, Rahul Mangharam, Dipshil Agarwal, Madhur Behl, Paolo Burgio, et al. F1/10: An open-source autonomous cyber-physical platform. arXiv preprint arXiv:1901.08567, 2019. * [5] Skanda Koppula. Learning a cnn-based end-to-end controller for a formula sae racecar. arXiv preprint arXiv:1708.02215, 2017. * [6] Aws deepracer - the fastest way to get rolling with machine learning. url=https://aws.amazon.com/deepracer/. * [7] Hugh Durrant-Whyte and Tim Bailey. Simultaneous localization and mapping: part i. IEEE robotics & automation magazine, 13(2):99–110, 2006. * [8] Michael Montemerlo, Sebastian Thrun, Daphne Koller, Ben Wegbreit, et al. Fastslam: A factored solution to the simultaneous localization and mapping problem. Aaai/iaai, 593598, 2002. * [9] Tim Bailey and Hugh Durrant-Whyte. Simultaneous localization and mapping (slam): Part ii. IEEE robotics & automation magazine, 13(3):108–117, 2006. * [10] Wolfgang Hess, Damon Kohler, Holger Rapp, and Daniel Andor. Real-time loop closure in 2d lidar slam. In 2016 IEEE International Conference on Robotics and Automation (ICRA), pages 1271–1278. IEEE, 2016. * [11] Bruce Krogh and Charles Thorpe. Integrated path planning and dynamic steering control for autonomous vehicles. In Proceedings. 1986 IEEE International Conference on Robotics and Automation, volume 3, pages 1664–1669. IEEE, 1986. * [12] Zvi Shiller and Y-R Gwo. Dynamic motion planning of autonomous vehicles. IEEE Transactions on Robotics and Automation, 7(2):241–249, 1991\. * [13] Emilio Frazzoli, Munther A Dahleh, and Eric Feron. Real-time motion planning for agile autonomous vehicles. Journal of guidance, control, and dynamics, 25(1):116–129, 2002\. * [14] Myungwook Park, Sangwoo Lee, and Wooyong Han. Development of steering control system for autonomous vehicle using geometry-based path tracking algorithm. Etri Journal, 37(3):617–625, 2015. * [15] Madhur Behl Varundev Suresh Babu. F1tenth.dev - an open-source ros based f1/10 autonomous racing simulator. IEEE CASE 2020 Proceedings, 2020. * [16] Alexander Liniger, Alexander Domahidi, and Manfred Morari. Optimization-based autonomous racing of 1: 43 scale rc cars. Optimal Control Applications and Methods, 36(5):628–647, 2015. * [17] Ugo Rosolia, Ashwin Carvalho, and Francesco Borrelli. Autonomous racing using learning model predictive control. In 2017 American Control Conference (ACC), pages 5115–5120. IEEE, 2017. * [18] Gabriel M Hoffmann, Claire J Tomlin, Michael Montemerlo, and Sebastian Thrun. Autonomous automobile trajectory tracking for off-road driving: Controller design, experimental validation and racing. In 2007 American Control Conference, pages 2296–2301. IEEE, 2007\. * [19] Jun Ni and Jibin Hu. Dynamics control of autonomous vehicle at driving limits and experiment on an autonomous formula racing car. Mechanical Systems and Signal Processing, 90:154–174, 2017. * [20] Brian Goldfain, Paul Drews, Changxi You, Matthew Barulic, Orlin Velev, Panagiotis Tsiotras, and James M Rehg. Autorally an open platform for aggressive autonomous driving. arXiv preprint arXiv:1806.00678, 2018. * [21] Alexander Heilmeier, Alexander Wischnewski, Leonhard Hermansdorfer, Johannes Betz, Markus Lienkamp, and Boris Lohmann. 
Minimum curvature trajectory planning and control for an autonomous race car. Vehicle System Dynamics, 0(0):1–31, 2019. * [22] DP Kelly and RS Sharp. Time-optimal control of the race car: a numerical method to emulate the ideal driver. Vehicle System Dynamics, 48(12):1461–1474, 2010. * [23] Michael E Tipping, Mark Andrew Hatton, and Ralf Herbrich. Racing line optimization, February 22 2011. US Patent 7,892,078. * [24] Ying Xiong et al. Racing line optimization. PhD thesis, Massachusetts Institute of Technology, 2010. * [25] Luigi Cardamone, Daniele Loiacono, and Pier Luca Lanzi. On-line neuroevolution applied to the open racing car simulator. In 2009 IEEE Congress on Evolutionary Computation, pages 2622–2629. IEEE, 2009. * [26] Paul A Theodosis and J Christian Gerdes. Nonlinear optimization of a racing line for an autonomous racecar using professional driving techniques. In ASME 2012 5th Annual Dynamic Systems and Control Conference joint with the JSME 2012 11th Motion and Vibration Conference, pages 235–241. American Society of Mechanical Engineers Digital Collection, 2013. * [27] M Elbanhawi, M Simic, and R Jazar. Receding horizon lateral vehicle control for pure pursuit path tracking. Journal of Vibration and Control, 24(3):619–642, 2018. * [28] Jesús Morales, Jorge L Martínez, María A Martínez, and Anthony Mandow. Pure-pursuit reactive path tracking for nonholonomic mobile robots with a 2d laser scanner. EURASIP Journal on Advances in Signal Processing, 2009:3, 2009. * [29] Anibal Ollero and Guillermo Heredia. Stability analysis of mobile robot path tracking. In Proceedings 1995 IEEE/RSJ International Conference on Intelligent Robots and Systems. Human Robot Interaction and Cooperative Robots, volume 3, pages 461–466. IEEE, 1995. * [30] Moveh Samuel, Mohamed Hussein, and Maziah Binti Mohamad. A review of some pure-pursuit based path tracking techniques for control of autonomous vehicle. International Journal of Computer Applications, 135(1):35–38, 2016\. * [31] Myungwook Park, Sangwoo Lee, and Wooyong Han. Development of steering control system for autonomous vehicle using geometry-based path tracking algorithm. Etri Journal, 37(3):617–625, 2015. * [32] M. Park, S. Lee, and W. Han. Development of lateral control system for autonomous vehicle based on adaptive pure pursuit algorithm. In 2014 14th International Conference on Control, Automation and Systems (ICCAS 2014), pages 1443–1447, 2014. * [33] Louis L Scharf, William P Harthill, and Paul H Moose. A comparison of expected flight times for intercept and pure pursuit missiles. IEEE Transactions on Aerospace and Electronic Systems, (4):672–673, 1969. * [34] R Craig Coulter. Implementation of the pure pursuit path tracking algorithm. Technical report, Carnegie-Mellon UNIV Pittsburgh PA Robotics INST, 1992\. * [35] Chieh Chen and Han-Shue Tan. Experimental study of dynamic look-ahead scheme for vehicle steering control. In Proceedings of the 1999 American Control Conference (Cat. No. 99CH36251), volume 5, pages 3163–3167. IEEE, 1999. * [36] Wm C Mitchell, Allan Staniforth, and Ian Scott. Analysis of ackermann steering geometry. Technical report, SAE Technical Paper, 2006. * [37] Stefan Forrest Campbell. Steering control of an autonomous ground vehicle with application to the DARPA urban challenge. PhD thesis, Massachusetts Institute of Technology, 2007. * [38] Stefan Kohlbrecher, Johannes Meyer, Thorsten Graber, Karen Petersen, Uwe Klingauf, and Oskar von Stryk. 
Hector open source modules for autonomous mapping and navigation with rescue robots. In Robot Soccer World Cup, pages 624–631. Springer, 2013. * [39] Corey Walsh and Sertac Karaman. Cddt: Fast approximate 2d ray casting for accelerated localization. arXiv preprint arXiv:1705.01167, 2017.
# Aquaculture field robotics: Applications, lessons learned and future prospects Herman B. Amundsen Dept. Aquaculture Technology SINTEF Ocean Trondheim, Norway <EMAIL_ADDRESS>Marios Xanthidis Dept. Aquaculture Technology SINTEF Ocean Trondheim, Norway <EMAIL_ADDRESS>Martin Føre Sveinung J. Ohrem Dept. Engineering Cybernetics NTNU Trondheim, Norway <EMAIL_ADDRESS>Dept. Aquaculture Technology SINTEF Ocean Trondheim, Norway <EMAIL_ADDRESS>Eleni Kelasidi Dept. Aquaculture Technology SINTEF Ocean Trondheim, Norway <EMAIL_ADDRESS> ###### Abstract Aquaculture is a big marine industry and contributes to securing global food demands. Underwater vehicles such as remotely operated vehicles (ROVs) are commonly used for inspection, maintenance, and intervention (IMR) tasks in fish farms. However, underwater vehicle operations in aquaculture face several unique and demanding challenges, such as navigation in dynamically changing environments with time-varying sealoads and poor hydroacoustic sensor capabilities, challenges yet to be properly addressed in research. This paper will present various endeavors to address these questions and improve the overall autonomy level in aquaculture robotics, with a focus on field experiments. We will also discuss lessons learned during field trials and potential future prospects in aquaculture robotics. ###### Index Terms: Aquaculture, Marine Robotics, Field Robotics, Autonomous systems ## I Introduction Aquaculture is an important source of protein, and will likely play an even more important role in securing global food demands going forward. In 2020, global aquaculture reached a production of 122.6 million tons, with a total value of 281.5 billion USD [1]. In Norway, aquaculture has been an industrial success story; from its humble beginnings in the 1970s, it has grown to be Norway’s second-largest export industry, exceeded only by oil and gas. Atlantic salmon (Salmo salar) is the dominating species, and most of the production is conducted in floating fish farms along the Norwegian coast. Farms consist of net cages, which are flexible structures that can deform with waves and currents. A net cage typically consists of a net enclosure that is suspended from a floating collar and whose lower edge is attached to bottom weights or a bottom ring to maintain a sufficient volume [2]. Further components in fish farms include mooring systems, automatic feeding systems, and various instrumentation. While dimensions vary, a typical fish farm can consist of more than 10 net cages, all measuring 50 m in diameter and 30 m in depth, and containing up to 200,000 individuals [3]. Fish farms are typically situated in sheltered coastal waters. However, due to a lack of available sites and because of environmental concerns, there is a trend of moving fish farms further offshore where the facilities are more exposed to weather and sealoads [4, 5]. Figure 1: SINTEF ACE is a full-scale aquaculture laboratory consisting of several fish farms at various locations along the Norwegian coast. Pictured is the facility at Rataren, Frøya. Traditionally, fish farms have been dependent on divers for monitoring and intervention operations inside the cages. As diving is associated with risk and because of stricter governmental regulations, unmanned underwater vehicles (UUVs) such as ROVs have been replacing divers for the last decades and are now indispensable tools for Norwegian fish farmers. 
Common ROV operations include cleaning the nets from biofouling [6], mooring line inspections, and fish monitoring [7]. The most important operation, however, is likely to inspect the nets for holes and structural failures, as such deficiencies can lead to the escape of farmed fish, which both represent a production loss and an environmental concern, as escapees may impact wild fish populations [8]. Net inspections are therefore performed on a regular basis. Since the complex mooring systems outside the net cages increase the risk of tether entanglement, operations are usually performed inside the net cages [9]. The pilot typically steers the ROV based on the video from a forward-looking camera and telemetry such as depth and heading readings. Apart from depth and heading hold, operations are without automatic control. Motivated by the need to reduce costs, mitigate risk from human errors, and increase the weather window when moving further offshore, there is currently a scientific effort to increase the autonomy level in ROV operations [7]. This aligns well with the concept of precision fish farming (PFF) [10], an ongoing effort to move the operational principles of aquaculture from manual operations and experiences-based reasoning, to autonomous operations, objective data interpretation, and decision support systems (Fig. 2). Research efforts include autonomous net inspection by using forward-looking sensors [11, 12, 13, 14] and camera [15, 16, 17, 18, 19, 20, 21, 22], autonomous navigating by installing instrumentation on the net cages [23, 24], and the use of autonomous net-cleaning robots that crawl along the net surface [25, 26]. Figure 2: Precision fish farming (PFF) aims at moving the operational principles of aquaculture from the current standard of manual operations and experience-based reasoning to a more control-oriented approach [10]. In the underwater robotic domain, aquaculture represents an especially demanding environment with many unique challenges that are non-trivial to solve. Operations take place in the wave-zone [27] and in irregular ocean currents induced by the dense biomass [28], which greatly affects control performance. Further, sensor capabilities are often degraded, as hydroacoustic sensors suffer from scattering and multipath propagation effects due to the air-filled swim-bladder of the fish [9], and cameras are affected by frequent occlusions [20]. Finally, operations take place in a highly dynamic and cluttered environment consisting of flexible structures and dense biomass, where safety is critical as collisions can harm the vehicle, the fish, and the net structure. In this article, we will present results from a decade of field experiments in aquaculture robotics. These field experiments have been conducted in SINTEF ACE [29], an industrial-scale aquaculture laboratory consisting of several fish farms dedicated to aquaculture research and development (Fig. 1). We will discuss lessons we have learned over the years and future prospects, with the aim that this paper can contribute to further interest in aquaculture robotics, a domain in which autonomy holds huge potential. The paper is organized as follows: In Section II, we present results and experiences from field experiments divided into four topics; localization, planning, control, and fish monitoring. Then, lessons learned are presented in Section III, while Section IV discusses future prospects. Finally, the paper is concluded in Section V. 
## II Field work ### II-A Localization, perception, and mapping A fundamental challenge of most autonomous robotic systems is that of localization; without knowing the states of the robot, one cannot make informed decisions on future actions. In aquaculture, this is further complicated by the fact that the surroundings of the robot are in a constant state of motion and that most operational objectives are usually defined relative to the moving net structure [11]. It is therefore not enough to determine a georeferenced position of the robot, one must also determine the relative position between the robot and the net structure. This challenge has generated different methods that can largely be split into a local and global category. Local approaches rely on a forward-looking sensor or camera to estimate the robot’s pose relative to a local section of the net right in front of the robot, and in turn, make decisions based on this estimate. In contrast, global approaches are based upon concurrently estimating both the robot position and the net shape, and then navigating using these estimations. #### II-A1 Evaluation of hydroacoustic instrumentation Figure 3: ROVs are commonly used for net inspections in aquaculture. The image is from an experiment in SINTEF ACE where an ROV is performing autonomous net inspection using a DVL [11]. Our lab’s first endeavor in localization for aquaculture robotics was an experimental evaluation of hydroacoustic instrumentation in fish farms [9]. The first part of the experiment was an evaluation of an ultrashort baseline (USBL) positioning system. A Sonardyne Scout Plus system was used in the experiments, and the performance was evaluated by analyzing the position measurements of a transponder that was placed on various sections of a net cage or attached to an ROV performing various maneuvers. The USBL measurements had an acceptable precision and update rate, and, though both the standard deviation and dropout periods of the measurements increased with denser biomass between the transceiver and the transponder, the trials suggested that acoustic positioning systems such as USBL can be part of an ROV navigation system in aquaculture. The second part of the trials in [9] featured an evaluation of a Doppler velocity logger (DVL). In underwater navigation, DVLs have proven very useful tools for navigation; by sending hydroacoustic beams toward the seabed, it is possible to measure the linear velocity from the Doppler shift of the reflected beams. When the seabed is in the operational range of the DVL such that the hydroacoustic beams are reflected, the sensor is considered to have a bottom-lock. In an aquaculture cage bottom-lock will never be achieved as the fish and cage will interfere with the hydroacoustic beams. Instead, the DVL was tested by mounting it forward-looking, such that the beams would be reflected by the net cage in front of the ROV, yielding a net-lock. In the trials, the DVL was able to provide accurate measurements when the field-of- view (FOV) was unobstructed by fish, which was largely the case when the ROV was closer than 3 meters from the net. Further, the trials also showed that one could use the measured length of the DVL beams to estimate the pose of the ROV relative to the net in front, which was the inspiration for our first local method. (a) An acoustic point cloud of a net section. (b) The point cloud converted into a voxel map. (c) Orthomosaic of camera images. 
Figure 4: Mapping during autonomous net inspection using multibeam sonar [14]. #### II-A2 Local positioning Inspired by the results of [9] and DVL-aided altitude control strategies [30], an approach for locally estimating the net-relative pose of the ROV was developed in [11] (Fig. 3). From the measured distance of the reflected beams during net-lock, the local section of the net in front of the ROV could be approximated as a plane. The plane approximation is updated online as the ROV moves, such that the approximation remains accurate regardless of displacements. As the plane is defined in the body-fixed reference frame of the ROV, it is possible to navigate relative to the plane, even without a georeferenced state estimate. The method provided good accuracy and robustness when the FOV was unobstructed, but was susceptible to fish swimming between the net and the sensor, which caused outliers or loss of net-lock. In [14], we experimented with using a forward-looking multibeam sonar (FL-MBS) to approximate the net section in front of the ROV. The FL-MBS measurements yielded a higher resolution of the local net approximation than the DVL measurements, which could be utilized for mapping purposes as seen in Figure 4. Collecting the measurements in a dense point map, we were able to generate a voxel map representation of the net structure. Finally, during post-processing of the video, we were able to create an orthomosaic of the camera images. The FL-MBS measurements also provided more robustness to occlusions than the DVL, as the measurements contain more acoustic points, so outliers can easily be rejected. #### II-A3 Global positioning Figure 5: Online estimation of net cage deformation by using measurements from an ADCP and SBL transponders attached to the net [31]. The methods presented thus far can give a local estimate of the net-relative pose, but the estimates do not hold any validity outside of a local region. An alternative can be to estimate the state of the ROV and the shape of the net cage in an inertial frame, as this holds more information, making it easier to make optimal and safe plans. While state estimation of ROVs is a widely researched topic and can largely be solved with inertial sensors and auxiliary sensors such as DVL and USBL to avoid drift, real-time estimation of net cage structures has been less studied. Over the years, different models of net structure deformation have been developed [32, 2, 31], but these have either been used in simulations with known current velocities or validated by post-processing experimental data. Without any measurement, the estimation error of purely mathematical models is unbounded. In [24], a method for online estimation of net structure deformation was developed. To bound the estimation error, three short baseline (SBL) transponders were installed at different positions on the net structure as seen in Fig. 5. From their position measurements and measurements from an acoustic Doppler current profiler (ADCP), the model of [31] was used to extrapolate the full net structure. Furthermore, a fourth transponder was installed on the vehicle, such that its position was measured. This approach provided holistic localization with more information compared to the local approaches, at the extra cost of integrating transponders into the net structure.
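As a concrete illustration of the local, net-relative plane estimate discussed in Section II-A2, the following minimal sketch fits a plane to a handful of acoustic returns (e.g., the four DVL beam endpoints or FL-MBS points, expressed in the ROV body frame) and derives a stand-off distance and relative yaw, together with one simple way such an estimate could feed a net-following command. This illustrates the idea only and is not the algorithms of [11] or [14]; all function names, gains, and speeds are our own assumptions.

```python
import numpy as np

def net_relative_plane(points_body):
    """Fit a plane to acoustic returns from the net (N x 3 array in the ROV
    body frame: x forward, y starboard, z down) and return the stand-off
    distance and the yaw of the net relative to the ROV heading."""
    P = np.asarray(points_body, dtype=float)
    centroid = P.mean(axis=0)
    _, _, vt = np.linalg.svd(P - centroid)   # plane normal = last right-singular vector
    n = vt[-1]
    m = n if n[0] > 0 else -n                # unit normal pointing from the ROV towards the net
    distance = float(np.dot(centroid, m))    # distance from the ROV origin to the plane
    rel_yaw = float(np.arctan2(m[1], m[0]))  # 0 when the ROV faces the net squarely
    return distance, rel_yaw

def net_following_command(distance, rel_yaw, d_des=1.5, u_traverse=0.3,
                          k_d=0.5, k_psi=1.0):
    """Turn the plane estimate into a body-frame setpoint that keeps the
    desired stand-off distance d_des while traversing sideways along the net.
    Gains and speeds are illustrative, not the controllers used in [11]."""
    surge = k_d * (distance - d_des)         # close/open the gap to the net
    sway = u_traverse                        # constant lateral traverse speed
    yaw_rate = -k_psi * rel_yaw              # keep the camera/sensor facing the net
    return surge, sway, yaw_rate
```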
### II-B Planning and autonomy Provided information on the vehicle state and its surroundings, one can plan autonomous operations. A typical objective for mobile robots is safely reaching a set of waypoints, which also applies in aquaculture robotics, for instance if a certain region of the net cage is of special interest or for inspection of instrumentation or mooring lines. A more specialized operation is net inspection, which can either be solved by defining a dense set of waypoints or by planning trajectories relative to a local net-relative pose. #### II-B1 Net inspection The industry standard is to perform net inspection by manually piloting the ROV along vertical or horizontal segments of the net structure while keeping the net within the FOV of the camera and at a safe distance from the ROV. Since the pilot must perform several tasks simultaneously, such as controlling the vehicle, tether management, inspecting the video feed, and keeping track of which parts of the structure have been covered, missions are prone to human errors [33]. In [11], the net-relative pose estimation from the DVL was used to perform semi-autonomous net inspections. The operator specified a desired distance, direction, and inspection speed, and, using the plane approximation of the net, desired vertical or horizontal straight-line trajectories were computed. Since the net approximation will update as the vehicle moves, the desired trajectory will update accordingly, essentially simplifying the mission to a set of straight-line path-following segments. This approach was able to relieve the operator from steering the vehicle, though the operator still had to monitor the operation and change direction (up/down/starboard/port) to gain complete coverage. An attempt at increasing the autonomy level further was presented in [13]. The idea was based on monitoring the progress of an inspection from the depth of the ROV and the azimuth angle of the net, which can be identified from the local approximation and will be unique under the assumption that horizontal slices of the net structure maintain concavity. To this end, a lawn-mower pattern for inspection of the net was defined at start-up, and the direction of the inspection was changed by monitoring the depth and azimuth angle. While the approach worked well in simulations, outliers in the azimuth angle estimations caused problems in the field trials, such that parts of the net structure were not covered during the inspections. In [14], a different attempt was made. Similarly to [13], an inspection pattern was defined at start-up. In contrast, the progress was monitored by using the estimated inertial position of the ROV. This approach proved more successful, and full coverage of the inspected sections of the net structure was achieved. #### II-B2 Obstacle avoidance While the methods presented above provide ways to traverse the net at a safe distance, they do not take into account other parts of the net cage environment, such as instrumentation, ropes and cables, feeding systems, and the biomass itself. Because of the safety-critical nature of aquaculture operations, it is important to also incorporate more general obstacle avoidance into fully autonomous systems. Path planning and obstacle avoidance is a widely researched topic, but there are few examples where methods have been applied to aquaculture. In [34], the elastic band path planning method [35] was tested in an aquaculture setting. The experiments represented one of the first times general collision-free path planning was tested in fish farming.
In the experiments, a separate intercepting vehicle, whose position was measured with a USBL transponder, acted as a dynamic obstacle. The experiments showed that the method was quickly able to plan safe paths such that the ROV was able to reach waypoints in the presence of obstacles. In real-world scenarios, obstacles have to be detected with the sensors of the ROV, which remains future work. A further effort to introduce safe planning with obstacle avoidance in aquaculture was the experimental testing of ResiPlan, a motion planning framework that improves safety by adaptively changing the required clearance to obstacles with errors in the path tracking performance of the vehicle, effectively taking into account control errors, uncertainty, and environmental disturbances [36]. In particular, the required clearance increased with state uncertainty stemming from noisy and infrequent USBL measurements that would cause jumps in state estimations, thus leading to more conservative but safer paths. ### II-C Control systems While control of underwater vehicles is generally challenging due to nonlinear hydrodynamics, this is even more challenging in aquaculture as operations take place in the wave zone and external disturbances are time-varying. We quickly learned this in practice during net inspection trials. PID controllers proved insufficient when controlling the velocity of the vehicle, which reduced the ROV’s ability to follow the desired trajectories and thus compromised safety. This sparked the development and testing of a set of control laws aimed at improving this performance. In [37], an adaptive backstepping dynamic positioning (DP) controller for ROVs was developed which can estimate both the vehicle model parameters and the direction and strength of external disturbances. This was further developed into an adaptive velocity controller, also able to measure the same parameters [38]. Finally, a generalized super- twisting sliding mode controller was developed and tested in [39], which provided robustness to external disturbances with unknown bounds. These efforts were important to improve our ability to control the vehicle even in the presence of harsh environmental disturbances. ### II-D Fish monitoring and fish-machine interaction The most important component to monitor for fish farmers is naturally the fish population, and many operations are conducted to improve the growth and/or welfare of the fish. Usually, the fish population is monitored on a group level using static cameras or passive sensors such as hydrophones or sonars [3], and parameters such as feeding activity, swimming activity, and size are assessed. The population can also be monitored on an individual level, for instance with bio-implants [40]. However, the spatial extent of the farms and the large populations make it challenging to obtain an accurate holistic picture of the complete population status. As such, underwater vehicles hold an unlocked potential, as they represent moving platforms able to carry a wide array of sensors. Research, however, implies that underwater vehicles affect the behavior of farmed salmon and that the fish adjust their swimming pattern to maintain a distance from ROVs [41]. This may indicate that underwater vehicles stress the fish, which has negative health consequences [42]. To understand this relationship better, we have conducted field trials aimed at quantifying the fish’s behavioral responses to different influence factors. 
In [43], we installed objects of various shapes and colors in SINTEF ACE’s facilities and recorded sonar and image data of the fish. Using deep-learning methods, we calculated the fish’s avoidance distance to the objects, from which we could conclude that in the trials, the fish kept greater distances to large objects, and to yellow objects versus white objects, and that the object shape had no apparent effect. Further, the avoidance distance grew proportional to the fish weight. In our latest trials, we explored similar fish behavioral responses to an ROV fitted with sonars, hydrophones, and cameras (Fig. 6). This data is still under processing. The experiments represent early steps to increase the knowledge of the dynamics between fish and vehicles in aquaculture, which should govern basic guidelines for operations in environments where fish and vehicles must coexist. ## III Lessons learned Over the years, we have had our fair share of successes and failures from which we have gathered experience. In this section, we will discuss some of the lessons we have learned. Figure 6: The Argus Mini ROV fitted with sonars, hydrophones, and cameras during trials aimed at identifying behavioural responses of fish to ROV operations. Underwater engineering remains challenging due to the harsh and unforgiving nature of the environment. Particularly, control is challenging due to hydrodynamics, communication capabilities are limited due to the high signal attenuation in water, and seawater corrosion may break equipment. Aquaculture engineering is also exposed to weather conditions (Fig 7), which can increase health, safety and environment (HSE) risks [44] and reduce the weather window where operational conditions are considered acceptable. We’ve had persistent challenges with acoustic position systems such as USBL. Specifically, the accuracy and the dropout rate of the measurements have varied between experiments; from experiments where the performance has been well within the error tolerances of the system, to experiments where it has been challenging to get converging measurements. These problems are likely due to multipath propagation and scattering effects. We have learned that USBL system settings should be tuned relative to the environment, and that it is often best to start with a lower transmit power, and then tune until an acceptable signal-to-noise ratio (SNR) is achieved. Still, frequent and long dropout periods can occur, so state estimators must have acceptable performance during dead-reckoning. Our experience with DVL has shown that it is able to accurately measure velocity and distance relative to the net. By setting a high transmit power, net-lock will be achieved when the FOV is unobstructed. When DVL measurements are intercepted by fish schools during inspections, outliers are generally easy to detect and reject, and dropout periods will usually be short. Bottom- lock can only be obtained outside of a net cage, and net-lock can only be obtained when reasonably close to the net (typically closer than 5 m). Experiments have also shown that water turbidity and lighting conditions have considerable variations in net cages and can highly affect camera images. In aquaculture, the water turbidity can be quite high due to feed spills and feces from the fish, while operations near the surface may be strongly affected by daylight. One specific time this gave us problems was when testing a localization method based on laser-camera triangulation. 
We installed front-facing lasers and then estimated the net-relative pose from the reflection of the lasers seen in the camera images [12]. In the first trial, we captured data, and post-processing the data yielded very good estimates. In the next trials, we aimed to test this method in closed-loop control but were unsuccessful, as we were unable to see the laser reflections due to light attenuation in the water. Tether management is a difficult task, particularly when navigating outside of the net. Fish farms have complex mooring systems, and power lines and feeding lines also connect the net cages with the feeding barge. It is possible to perform inspections outside of the net, but tether management will then require constant focus, and it is usually best to split inspection tasks into segments where there are fewer cables and ropes that can cause entanglement. We have experienced one case of tether entanglement, as seen in Fig. 8, which was caused by a sudden increase in current velocity while the ROV was briefly left unsupervised. Figure 7: Operations are exposed to weather conditions. The risk of entanglement is lower when navigating inside net cages, though navigation on the inside of a net calls for a greater responsibility towards the fish population. Fast maneuvers can cause flight responses in the fish, which may be related to an increase in stress levels. Further, we have also observed that when operating in a specific part of a net cage over time, the fish population appears to avoid this area. Finally, vehicles should be designed in a way that reduces the risk of damaging the net cage structure or harming the fish. In particular, sharp edges on the vehicle or its tools should be avoided, as these can tear the net, and thrusters should be covered with lattices to avoid fish getting caught by the propellers.
Methods implemented for aquaculture (Section II-A) have demonstrated the ability to estimate the position relative to the net structure, either through sensors attached to the vehicle or the net structure, but no method has yet been able to map the entire complexity of a net cage, which also should include obstacles such as instrumentation, cables, and ropes. During net inspections, the FOV of the vehicle is rarely aligned with the direction of the vehicle, which dictates that the vehicle also must maintain an awareness of potential obstacles in its path which is not captured by front-facing cameras or sensors. Figure 8: The article author pictured attempting to solve a case of tether entanglement. SLAM methods for aquaculture also holds great potential for improving reporting after ROV inspections of net cages. The current standard is that ROV service companies report inspection results to the fish farmers through a written report and video recording from the ROV camera. Research has shown, however, that pilots are not always able to identify structural failures, either due to poor visibility or incomplete coverage [33]. Mapping techniques such as photogrammetry may be a valuable tool for improving reporting, making it easier to assess operations and detect structural failures post-mission. Similar to other industries, underwater vehicle operations remain expensive in aquaculture. There are many drivers to this; expensive vehicles, expensive sensors, and, most importantly, costs related to the team of operators needed. A current trend in underwater robotics is the introduction of new commercial vehicles and sensors with significantly lower costs compared to previous standards, which help bring the overall costs down. Another trend that is bringing costs further down is remote operations where the ROV pilots operate from a land-based facility and control the ROV by wireless transfer of telemetry and video [45]. In the future, we may see more examples of this, which can be combined with an increased autonomy level and permanent resident ROVs at fish farms, removing the need to bring operators or equipment to the farms during missions. Realization of permanent resident robots in fish farms may also require development of docking and communication solutions tailored for aquaculture [7]. Aquaculture robotics have been inspired by underwater robotics in other industries, such as oil and gas, as the technology level of these industries has been more advanced. However, aquaculture is an inherently different environment, and, as such, it is not necessarily straightforward to adopt solutions from other industries. Especially, the interaction between robotic operations and the fish population is poorly understood. Further research is required to understand this relationship better, such that future aquaculture robotics have better vehicle designs and operational guidelines that are more friendly towards the fish. In Norwegian aquaculture, governmental incentives are encouraging the development of new production concepts, such as offshore aquaculture, submersible net cages, and rigid and closed structures [46]. These concepts have fundamental differences compared to the net cages that dominate the industry today. As such, operations will have to be adapted to these new concepts, which is expected to also affect aquaculture robotics in the future. Finally, due to the multiple mooring lines, ropes, and cables present at fish farms, tether management remains a difficult task. 
If the autonomy level can be increased to a point where the vehicle can safely operate without human surveillance, the tether can be removed, which would represent a milestone in aquaculture robotics that would mitigate completely the risk of entanglement. Due to the high safety requirements in aquaculture, this requires a higher level of autonomy than the current standards, as well as rigorous experimental validation. ## V Conclusion This paper has presented applications in aquaculture robotics, including localization, planning, control, and robotic interaction with the fish population, with a special focus on field experiments. These applications showcase potentials and challenges in aquaculture robotics. Further, lessons learned from the field are presented, and future prospects and directions in aquaculture robotics are discussed. ## Acknowledgement The results discussed in this paper have been collected through various project and funding schemes: projects funded by the Research Council of Norway (RCN) (pr. numbers: 217541, 256241, 269087, 313737, 327292), the SFI Exposed Center for Research-based Innovation funded by RCN, and internal funding through the SINTEF RACE funding scheme. We are grateful to the personell of SINTEF ACE for their help during experiments and to all our collaborators. ## References * [1] FAO, _The State of World Fisheries and Aquaculture 2022. Towards Blue Transformation_. Food and Agriculture Organization of the United Nations, 2022. * [2] O. M. Faltinsen and Y. Shen, “Wave and current effects on floating fish farms,” _J. Marine Science and Applications_ , vol. 17, pp. 284–296, 2018. * [3] M. Føre, M. O. Alver, J. A. Alfredsen, A. Rasheed, T. Hukkelås, H. V. Bjelland, B. Su, S. J. Ohrem, E. Kelasidi, T. Norton, and N. Papandroulakis, “Digital twins in intensive aquaculture — challenges, opportunities and future prospects,” _Computers and Electronics in Agriculture_ , vol. 218, p. 108676, 2024. * [4] H. V. Bjelland, M. Føre, P. Lader, D. Kristiansen, I. M. Holmen, A. Fredheim, E. I. Grøtli, D. E. Fathi, F. Oppedal, I. B. Utne, and I. Schjølberg, “Exposed aquaculture in Norway,” in _OCEANS 2015 - MTS/IEEE Washington_ , 2015, pp. 1–10. * [5] B. Morro, K. Davidson, T. P. Adams, L. Falconer, M. Holloway, A. Dale, D. Aleynik, P. R. Thies, F. Khalid, J. Hardwick, H. Smith, P. A. Gillibrand, and S. Rey-Planellas, “Offshore aquaculture of finfish: Big expectations at sea,” _Rev. Aquaculture_ , vol. 14, no. 2, pp. 791–815, 2021. * [6] J. Bannister, M. Sievers, F. Bush, and N. Bloecher, “Biofouling in marine aquaculture: a review of recent research and developments,” _Biofouling_ , vol. 35, pp. 631–648, 2019. * [7] E. Kelasidi and E. Svendsen, “Robotics for sea-based fish farming,” in _Encyclopedia of Smart Agriculture Technologies_ , Q. Zhang, Ed. Cham: Springer International Publishing, 2022, pp. 1–20. * [8] H. Moe Føre and T. Thorvaldsen, “Causal analysis of escape of Atlantic salmon and rainbow trout from Norwegian fish farms during 2010–2018,” _Aquaculture_ , vol. 532, p. 736002, 2021. * [9] P. Rundtop and K. Frank, “Experimental evaluation of hydroacoustic instruments for ROV navigation along aquaculture net pens,” _Aquaculture Engineering_ , vol. 74, pp. 143–156, Sep. 2016. * [10] M. Føre, K. Frank, T. Norton, E. Svendsen, J. A. Alfredsen, T. Dempster, H. Eguiraun, W. Watson, A. Stahl, L. M. Sunde, C. Schellewald, K. R. Skøien, M. O. Alver, and D. 
and $B$ say. Moreover, the complementary regions incident to one endpoint of $e$ are all in $A$ or $B$. Hence, there is some edge $e^{\prime}$ of $\Gamma_{i}$ emanating from this endpoint of $e$ that has $A$ on one side and $B$ on the other. Let $X^{\prime}_{i}=X_{i}\cup\\{e^{\prime}\\}\setminus\\{e\\}$. Let $\Gamma^{\prime}_{i}=\mathcal{P}_{i}\setminus X^{\prime}_{i}$. Then $\Gamma_{i}^{\prime}$ is obtained from $\Gamma_{i}$ by an edge swap. Moreover, we may set $X_{i+1}$ to be $X^{\prime}_{i}$. Then $\Gamma_{i+1}$ is obtained from $\Gamma_{i}^{\prime}$ by contracting $e$. 5. (v) Suppose $\mathcal{P}_{i}\rightarrow\mathcal{P}_{i+1}$ is obtained by Dehn twisting about a curve $\alpha$ that is the closure of an embedded edge. Then we may perturb $\alpha$ so that it intersects $\mathcal{P}_{i}$ in the vertex at the endpoints of $\alpha$. Hence, $\alpha$ intersects $\Gamma_{i}$ in exactly this vertex. We let $X_{i+1}$ be the image of $X_{i}$ under these Dehn twists. Suppose now that $\partial S$ is non-empty. Cases (i), (iii) and (v) are identical to the situation where $S$ is closed. We now explain how (ii) is modified in the case where $\partial S$ is non- empty. Again suppose that $\mathcal{P}_{i}\rightarrow\mathcal{P}_{i+1}$ is the removal of a diagonal $e$. Again, the difficult case is where $e$ does not lie in $X_{i}$, and hence lies in $\Gamma_{i}$. If the two sides of $e$ lie in the same annular region of $S\setminus\setminus\Gamma_{i}$, then there must be an arc $\alpha$ consisting of a union of edges of $X_{i}$ in this annulus separating these two copies of $e$ in the boundary of the annulus. In that case, we set $X_{i+1}$ to be equal to $X_{i}$ minus the edges of $\alpha$, and then $\Gamma_{i+1}$ is obtained from $\Gamma_{i}$ by an edge swap. So suppose that the two sides of $e$ lie in distinct annular regions of $S\setminus\setminus\Gamma_{i}$. One side of $e$ lies in a disc component $D$ of $S\setminus\setminus\mathcal{P}_{i}$. Say that this lies in a component $A$ of $S\setminus\setminus\Gamma_{i}$. Then the edges in $X_{i}$ separate $D$ from $\partial S\cap A$. Hence, there is an arc $\alpha$ properly embedded in $A$ consisting of a union of edges of $X_{i}$ that separates $D$ from $\partial S\cap A$. Again set $X_{i+1}$ to be equal to $X_{i}$ minus the edges of $\alpha$, and again $\Gamma_{i+1}$ is obtained from $\Gamma_{i}$ by an edge swap. The argument in case (iv) is very similar to the case when $S$ is closed. Suppose that $\mathcal{P}_{i}\rightarrow\mathcal{P}_{i+1}$ is the contraction of an embedded edge $e$. The difficult situation is where $e$ is in $X_{i}$. Then $e$ is an arc properly embedded in an annulus component of $S\setminus\setminus\Gamma_{i}$. It is disjoint from $\partial S$, and hence it separates the annulus into an annulus and a disc $D$. Emanating from the vertex at one endpoint of $e$, there is an edge $e^{\prime}$ that has $D$ on one side and an annulus of $S\setminus\setminus(\Gamma_{i}\cup e)$ on the other. Let $X^{\prime}_{i}=X_{i}\cup\\{e^{\prime}\\}\setminus\\{e\\}$ and let $\Gamma^{\prime}_{i}=\mathcal{P}_{i}\setminus X^{\prime}_{i}$. Then $\Gamma^{\prime}_{i}$ is obtained from $\Gamma_{i}$ by an edge swap. Setting $X_{i+1}$ to be $X_{i}^{\prime}$, $\Gamma_{i+1}$ is obtained from $\Gamma_{i}$ by the contraction of $e$. ∎ ## 7\. One-vertex and ideal triangulations In this section, we improve Theorem 6.2 by showing that one can stay within the class of one-vertex triangulations or ideal triangulations. ###### Theorem 1.4. 
Let $S$ be a compact orientable surface. When $S$ is closed (respectively, has non-empty boundary), let $\mathcal{T}$ and $\mathcal{T}^{\prime}$ be one-vertex (respectively, ideal) triangulations of $S$. There is a sequence $\mathcal{T}=\mathcal{T}_{0},\mathcal{T}_{1},\cdots,\mathcal{T}_{n}=\mathcal{T}^{\prime}$ of one-vertex (respectively, ideal) triangulations of $S$ such that: 1. (1) Each $\mathcal{T}_{i+1}$ is obtained from $\mathcal{T}_{i}$ by either a flip or a power of a Dehn twist. When $\mathcal{T}_{i+1}$ is obtained from $\mathcal{T}_{i}$ by Dehn twisting $k$ times about a curve $\alpha$, then $\alpha$ is a normal curve intersecting each edge of $\mathcal{T}_{i}$ at most three times, and the absolute value of $k$ is bounded above by $i(\mathcal{T},\mathcal{T}^{\prime})$. 2. (2) $n=O(|\chi(S)|^{3}\log(i(\mathcal{T},\mathcal{T}^{\prime}))+|\chi(S)|^{3})$. Moreover, there is an algorithm that constructs the sequence $\mathcal{T}_{1},\cdots,\mathcal{T}_{n}$ in time that is a polynomial function of $\log(i(\mathcal{T},\mathcal{T}^{\prime}))$ and $|\chi(S)|$. Here we assume that $\mathcal{T}$ and $\mathcal{T}^{\prime}$ have the same vertex (when $S$ is closed), $\mathcal{T}$ is given as a union of (possibly ideal) triangles with gluing instructions, and $\mathcal{T}^{\prime}$ is given in terms of its normal coordinates with respect to $\mathcal{T}$. For the output, each $\mathcal{T}_{i}$ is given as a union of (possibly ideal) triangles with gluing instructions together with the flip or twist move from $\mathcal{T}_{i}$ to $\mathcal{T}_{i+1}$. We will prove this by dualising the spines of Theorem 6.7. Recall that the _dual_ of a spine $\Gamma$ is a polygonal decomposition that has a complementary disc for each vertex of $\Gamma$ and an edge dual to each edge of $\Gamma$. When the surface is closed, the dual of $\Gamma$ has a single vertex in the disc region $S\setminus\setminus\Gamma$. When the surface has non-empty boundary, the vertices of the dual of $\Gamma$ are all 1-valent and lie on $\partial S$. The dual 1-complex to a one-vertex triangulation of a closed surface $S$ or an ideal triangulation of a surface with boundary is a _trivalent spine_; i.e. a spine in which every vertex has degree 3. A flip on a triangulation or ideal triangulation corresponds to a _Whitehead move_ on its dual spine. Therefore, Theorem 1.4 can be rephrased in terms of two given trivalent spines and the existence of a ‘short’ sequence of Whitehead moves and twist maps taking one to the other. Each edge swap between trivalent spines can be written as a composition of a controlled number of Whitehead moves. This is the content of the next lemma, which is an analogue of Lemma 8.3 of [LP19] for _trivalent_ spines. ###### Lemma 7.1. Let $\Gamma$ and $\Gamma^{\prime}$ be trivalent spines for a compact orientable surface $S$ such that $\Gamma^{\prime}$ is obtained from $\Gamma$ by an edge swap. Then $\Gamma^{\prime}$ can be obtained from $\Gamma$ by $O(|\chi(S)|)$ Whitehead moves. Moreover, there is an algorithm that constructs such a sequence of Whitehead moves in time that is a polynomial function of $|\chi(S)|$. ###### Proof. We follow the proof of Lemma 8.3 in [LP19]. Assume that $\Gamma^{\prime}$ is obtained from $\Gamma$ by deleting the edge $e$ and adding $e^{\prime}$. Let $A$ be the surface obtained by cutting $S$ along $\Gamma\setminus\\{e\\}$. This is an annulus (when $S$ is closed) or a three-times punctured sphere or a once-punctured annulus (when $\partial S\not=\emptyset$).
Then $e$ and $e^{\prime}$ are two essential properly embedded arcs in $A$, and hence they are isotopic in $A$. Indeed, there is a disc component of $A\backslash\backslash(e\cup e^{\prime})$ with boundary equal to the concatenation of an arc in $\partial A$, the edge $e$, another arc in $\partial A$ and the other edge $e^{\prime}$. First we move one endpoint of $e$ across this disc to one endpoint of $e^{\prime}$. Then we move the other endpoint of $e$ across this disc to the other endpoint of $e^{\prime}$. Finally we isotope $e$ to $e^{\prime}$ keeping its endpoints fixed; this third step only requires an isotopy and no Whitehead moves. It is enough to show that the first step can be done with $O(|\chi(S)|)$ Whitehead moves, as the second step is similar. There are $O(|\chi(S)|)$ vertices of $\Gamma$ between the first endpoints of $e$ and $e^{\prime}$ along $\partial A$. Passing the endpoint of $e$ across any one of these vertices can be seen as a Whitehead move. Therefore, the total number of Whitehead moves needed is $O(|\chi(S)|)$. ∎ ###### Lemma 7.2. Let $D$ be a polygon with $n$ sides. Let $\mathcal{T}$ and $\mathcal{T}^{\prime}$ be triangulations of $D$ where each side of $D$ is an edge and with no vertices in the interior of $D$. Then $\mathcal{T}$ and $\mathcal{T}^{\prime}$ differ by a sequence of at most $2n-6$ flips. Moreover, there is an algorithm that constructs such a sequence of flips in time that is polynomial in $n$. We term triangulations $\mathcal{T}$ and $\mathcal{T}^{\prime}$ as above _diagonal subdivisions_ of $D$. ###### Proof. The proof is as in [STT88, Lemma 2] and we repeat it to make the algorithmic part of the statement clear. Given a diagonal subdivision of $D$ and a vertex $x$, if the degree $\deg(x)$ is not equal to $n-3$, then we can increase $\deg(x)$ by performing a flip. Hence after $n-3-\deg(x)$ flips, we can convert the subdivision into a new subdivision where all diagonals have one endpoint at $x$. Applying this to both $\mathcal{T}$ and $\mathcal{T}^{\prime}$ and concatenating one flip sequence with the reverse of the other, $\mathcal{T}$ can be converted to $\mathcal{T}^{\prime}$ using at most $2n-6$ flips. ∎ The dual of a triangulation $\mathcal{T}$ as in the above lemma is a tree embedded within the disc $D$ that has 1-valent vertices on $\partial D$ and trivalent vertices in the interior of $D$. We can view the lemma as providing a sequence of Whitehead moves between any two such trees; a concrete sketch of the flip procedure is given below. ###### Proof of Theorem 1.4. We are given 1-vertex or ideal triangulations $\mathcal{T}$ and $\mathcal{T}^{\prime}$. Let $\Gamma$ and $\Gamma^{\prime}$ be the spines dual to $\mathcal{T}$ and $\mathcal{T}^{\prime}$ respectively. By Theorem 6.7, there is a sequence $\Gamma=\Gamma_{0},\Gamma_{1},\cdots,\Gamma_{n}=\Gamma^{\prime}$ of spines for $S$ such that: 1. (1) Each $\Gamma_{i+1}$ is obtained from $\Gamma_{i}$ by an edge swap, an expansion or contraction of an embedded edge, or a power of a Dehn twist. When $\Gamma_{i+1}$ is obtained from $\Gamma_{i}$ by Dehn twisting $k$ times about a curve $\alpha$, then $\alpha\cap\Gamma_{i}$ is a vertex of $\Gamma_{i}$, and the absolute value of $k$ is bounded above by $i(\Gamma,\Gamma^{\prime})$. 2. (2) The number $n$ of steps in this sequence is $O(\chi(S)^{2}\log(i(\mathcal{T},\mathcal{T}^{\prime}))+\chi(S)^{2})$. We will remove a small regular neighbourhood of each vertex of $\Gamma_{i}$ and replace it by a tree. This tree has 1-valent vertices where it meets the remnants of the edges of $\Gamma_{i}$, and its remaining vertices are trivalent. Let $F_{i}$ be the union of these trees. Set $\mathcal{Q}_{i}$ to be the resulting trivalent spine.
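As an aside, the flip procedure in the proof of Lemma 7.2 is straightforward to realise in code. The following is a minimal Python sketch (our own illustration, not part of the original argument), assuming a diagonal subdivision of the convex $n$-gon is stored as a list of triangles on the vertices $0,\dots,n-1$; it converts the subdivision into the fan at a chosen vertex and records the diagonals that were flipped.

```python
def flip_to_fan(triangles, n, apex=0):
    """Flip a diagonal subdivision of the convex n-gon (a list of triangles,
    each a tuple of three vertices in 0..n-1) into the fan at `apex`.
    Returns the list of flipped diagonals; at most n-3 flips are needed."""
    tris = [frozenset(t) for t in triangles]

    def is_side(u, v):                        # is {u, v} a side of the polygon?
        return (u - v) % n in (1, n - 1)

    flips = []
    while True:
        # find a triangle containing the apex whose opposite edge is a diagonal
        found = None
        for t in tris:
            if apex in t:
                a, b = sorted(t - {apex})
                if not is_side(a, b):
                    found = (t, a, b)
                    break
        if found is None:                     # every diagonal meets the apex: done
            return flips
        t1, a, b = found
        t2 = next(t for t in tris if {a, b} <= t and t != t1)
        x = next(iter(t2 - {a, b}))           # fourth vertex of the quadrilateral
        tris.remove(t1)
        tris.remove(t2)
        tris.append(frozenset({apex, a, x}))  # flip {a, b} to {apex, x}
        tris.append(frozenset({apex, x, b}))
        flips.append((a, b))

# Example: the fan at vertex 1 in a hexagon is converted to the fan at vertex 0
# by three flips; doing this for both subdivisions and concatenating one flip
# sequence with the reverse of the other realises the 2n-6 bound of Lemma 7.2.
fan_at_1 = [(1, 2, 3), (1, 3, 4), (1, 4, 5), (1, 5, 0)]
print(flip_to_fan(fan_at_1, 6))               # [(1, 5), (1, 4), (1, 3)]
```

Each flip strictly increases the number of diagonals incident to the apex, which is why at most $n-3$ flips are performed per subdivision.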
Initially, each component of $F_{0}$ has a single trivalent vertex and three 1-valent vertices. To define $F_{i+1}$, we consider various cases: 1. (1) Suppose that $\Gamma_{i}\rightarrow\Gamma_{i+1}$ is an edge swap, removing an edge $e$ and inserting an edge $e^{\prime}$. Then $e$ corresponds to an edge (also called $e$) of $\mathcal{Q}_{i}$. The edge $e^{\prime}$ may have one or both of its endpoints on a vertex of $\Gamma_{i}$, in which case when we remove a regular neighbourhood of these vertices, we also remove the end segments of $e^{\prime}$, but we can then extend the remnant of $e^{\prime}$ to an edge $e^{\prime\prime}$ with one or both endpoints on the interior of an edge of $F_{i}$. If we remove $e$ from $\mathcal{Q}_{i}$ and attach on $e^{\prime\prime}$, the result is a trivalent spine $\mathcal{Q}_{i+1}$. The forest $F_{i+1}$ is defined to be the intersection between $\mathcal{Q}_{i+1}$ and the regular neighbourhood of the vertices of $\Gamma_{i+1}$. By construction, $\mathcal{Q}_{i+1}$ is obtained from $\mathcal{Q}_{i}$ by an edge swap. Hence by Lemma 7.1, $\mathcal{Q}_{i}$ and $\mathcal{Q}_{i+1}$ differ by a sequence of $O(|\chi(S)|)$ Whitehead moves. 2. (2) Suppose $\Gamma_{i}\rightarrow\Gamma_{i+1}$ is the contraction of an edge $e$. In $\mathcal{Q}_{i}$, there is a copy of $e$ and at its endpoints there are two components of $F_{i}$. We amalgamate them into a single tree by attaching the edge $e$, and we declare that this is a component of $F_{i+1}$. The remaining components of $F_{i}$ become components of $F_{i+1}$. In this way, $\mathcal{Q}_{i+1}$ is isotopic to $\mathcal{Q}_{i}$. 3. (3) Now consider the case where $\Gamma_{i}\rightarrow\Gamma_{i+1}$ is the expansion of an edge $e$ from a vertex $v$. Let $T$ be the component of $F_{i}$ in a regular neighbourhood of $v$. Let $v_{1}$ and $v_{2}$ be the vertices at the endpoint of $e$. Pick trees $T_{1}$ and $T_{2}$ for these regular neighbourhoods to be components of $F_{i+1}$. We can view $T_{1}\cup e\cup T_{2}$ to be a tree lying in a regular neighbourhood of $v$. Using Lemma 7.2, $T$ can be transformed into $T_{1}\cup e\cup T_{2}$ using $O(|\chi(S)|)$ Whitehead moves. Hence, $\mathcal{Q}_{i}$ and $\mathcal{Q}_{i+1}$ differ by a sequence of $O(|\chi(S)|)$ Whitehead moves. 4. (4) Finally suppose that $\Gamma_{i}\rightarrow\Gamma_{i+1}$ is a power of a Dehn twist along a curve $\alpha$ that intersects $\Gamma_{i}$ in a vertex. In $\mathcal{Q}_{i}$, this vertex is replaced by a tree, and $\alpha$ can be arranged to intersect this tree in a connected union of edges or a single vertex. We set $F_{i+1}$ to be the image of $F_{i}$ under this power of a Dehn twist. We now dualise this sequence of trivalent spines to form a sequence of 1-vertex or ideal triangulations of $S$. The dual of each Whitehead move is a flip. Hence, we need at most $O(|\chi(S)|^{3}\log(i(\mathcal{T},\mathcal{T}^{\prime}))+|\chi(S)|^{3})$ flips and powers of Dehn twists. When we Dehn twist, the curves that we twist along intersects the spine in a connected union of edges or a single vertex. When we dualise, this curve is a concatenation of normal arcs, together with a part that runs into a (possibly ideal) vertex of the triangulation. Push the curve off the (possibly ideal) vertex, and we obtain a normal curve that intersects each edge of the triangulation at most three times. To see this note that when we push the curve off the vertex, it skirts around the vertex, and in doing so it picks up at most one new normal arc of each type in each triangle. 
Since such a triangle may already have had a normal arc in it, we get at most four normal arcs, and these may intersect each edge at most three times. It is clear that this sequence of 1-vertex and ideal triangulations is constructible in polynomial time as a function of $|\chi(S)|$ and $\log(i(\mathcal{T},\mathcal{T}^{\prime}))$. ∎ ###### Lemma 7.3. Let $S$ be a compact connected orientable surface with non-empty boundary, and let $A$ be a collection of disjoint arcs properly embedded in $S$. Let $V$ be a finite collection of points on $\partial S$ disjoint from $A$, with at least one point of $V$ on each component of $\partial S$. In the case where $S$ is a disc, suppose also $|V|\geq 3$. Then $V$ is the vertex set of a triangulation $\mathcal{T}$ of $S$ with the property that each edge of $\mathcal{T}$ intersects each component of $A$ at most twice. Moreover, given a triangulated surface $S$, a normal multi-arc $A$, and a set of points $V$ as above, there is an algorithm that constructs the triangulation $\mathcal{T}$. The algorithm runs in time that is a polynomial function of the number of triangles in some input triangulation of $S$, the cardinality of $V$, and the $\ell^{1}$-norm of the normal coordinates of $A$. ###### Proof. Consider first the case where $S$ is a disc. Note that each component of $\partial S\setminus V$ intersects each component of $A$ at most twice, in a subset of the endpoints of that component. These components of $\partial S\setminus V$ will be edges of $\mathcal{T}$, and so we have verified the required condition for these edges. If $|V|=3$, then we set $\mathcal{T}$ to be a single triangle. So suppose $|V|>3$. Pick three vertices in $V$ that are consecutive around $\partial S$. Join the outermost two by an edge that runs parallel to $\partial S$. This will be an edge of $\mathcal{T}$. It intersects each component of $A$ at most twice. These three vertices now span a triangle. Removing this triangle from $S$ gives a disc with one fewer vertices in its boundary. Hence, by induction, $S$ has the required triangulation. Now suppose that $S$ is not a disc. Suppose also that some component of $\partial S$ contains more than one vertex in $V$. Pick three consecutive vertices on this component of $\partial S$ (where the outermost two may be equal) and join the outermost two by an edge and then remove a triangle, as above. In this way, we may suppose that each component of $\partial S$ contains a single vertex. We now modify $A$ to a new set of arcs $A^{\prime}$ as follows: 1. (1) remove any inessential arcs; 2. (2) replace parallel essential arcs by a single arc; 3. (3) if any complementary region is not a disc or is a disc with more than three arcs in its boundary, then add in an essential arc not parallel to a previous one, and avoiding $A$. Repeat this as much as possible. The resulting arcs form the 1-skeleton of an ideal triangulation of $S$. By construction, each either is equal to a component of $A$ or is disjoint from $A$. We now add further arcs to $A^{\prime}$, one for each component of $\partial S$. Consider any component $C$ of $\partial S$ and the vertex $v$ in $V$ that it contains. Pick some orientation on $C$. Let $p_{1}$ and $p_{2}$ be the endpoints of $A^{\prime}$ that are adjacent to $v$, where the orientation on $C$ runs from $p_{1}$ to $v$. Say that $p_{i}$ lies in the arc $a_{i}$ in $A^{\prime}$. 
We add the following arc to $A^{\prime}$: it starts at the end of $a_{1}$ that is not $p_{1}$, it runs along $a_{1}$ and then along the sub- arc of $\partial S$ containing $v$ up to $p_{2}$. This sub-arc of $\partial S$ may contain endpoints of inessential arcs of $A$, in which case we modify this new arc so that it avoids these inessential arcs of $A$. We repeat this for each component of $\partial S$. Let $A^{\prime\prime}$ be the union of $A^{\prime}$ and these new arcs, perturbed a little so that they are disjoint from each other. By construction, they are disjoint from $A$. We now slide the endpoints of $A^{\prime\prime}$ along $\partial S$, using the chosen orientations on the components of $\partial S$. We stop when all the endpoints of $A^{\prime\prime}$ lie in $V$. The result is the 1-skeleton of the required triangulation $\mathcal{T}$. Note that this sliding operation may introduce points of intersection between the edges of $\mathcal{T}$ and $A$, but each edge of $\mathcal{T}$ intersects each component of $A$ at most twice, near the endpoints of that component of $A$. ∎ ###### Theorem 7.4. Let $S$ be a closed orientable surface, and $\mathcal{T}$ be a one-vertex triangulation of $S$. Let $\gamma$ be a simple closed normal curve given by its normal vector $(\gamma)$ with respect to $\mathcal{T}$, and denote the bit-sized complexity of $(\gamma)$ by $|(\gamma)|_{\mathrm{bit}}$ and its $\ell^{1}$-norm by $|(\gamma)|_{1}$. There is an algorithm that constructs a sequence of one-vertex triangulations $\mathcal{T}=\mathcal{T}_{0},\mathcal{T}_{1},\cdots,\mathcal{T}_{n}$ of $S$ and a sequence of curves $\gamma=\gamma_{0},\gamma_{1},\cdots,\gamma_{n}$ such that 1. (1) $\gamma_{i}$ is isotopic to $\gamma$ for every $i$. 2. (2) $\gamma_{i}$ is in normal form with respect to $\mathcal{T}_{i}$ for every $i<n$. 3. (3) $\gamma_{n}$ lies in the 1-skeleton of $\mathcal{T}_{n}$. 4. (4) Each $\mathcal{T}_{i+1}$ is obtained from $\mathcal{T}_{i}$ by either a flip or a power of a Dehn twist. When $\mathcal{T}_{i+1}$ is obtained from $\mathcal{T}_{i}$ by Dehn twisting $k$ times about a curve $\alpha$, then $\alpha$ intersects each edge of $\mathcal{T}_{i}$ at most three times, and the absolute value of $k$ is bounded above by a polynomial function of $|(\gamma)|_{1}$ and $|\chi(S)|$. 5. (5) The algorithm runs in time that is a polynomial function of $|(\gamma)|_{\mathrm{bit}}$ and $|\chi(S)|$. For the output, each $\mathcal{T}_{i}$ is given as a union of triangles with gluing instructions together with the flip or twist move from $\mathcal{T}_{i}$ to $\mathcal{T}_{i+1}$, and $\gamma_{i}$ is given by its normal coordinates with respect to $\mathcal{T}_{i}$. ###### Proof. The idea is to extend $\gamma$ to a one-vertex triangulation $\mathcal{T}^{\prime}$ of $S$, and then repeat the proof of Theorem 1.4. More precisely, we show that there is an algorithm that extends $\gamma$ to a one- vertex triangulation $\mathcal{T}^{\prime}$ of $S$ such that: 1. (1) $\mathcal{T}^{\prime}$ is in normal form with respect to $\mathcal{T}$. 2. (2) The algorithm runs in time that is a polynomial function of $|(\gamma)|_{\mathrm{bit}}$ and $\chi(S)$. In particular the bit-sized complexity of the normal coordinates of $\mathcal{T}^{\prime}$ with respect to $\mathcal{T}$ are bounded from above by such a polynomial function. Step 1: Construction of $\mathcal{T}^{\prime}$. The normal arcs of $\gamma$ decompose triangles of $\mathcal{T}$ into several $0$-handles (or $2$-cells). 
A $0$-handle of $S\setminus\setminus(\mathcal{T}\cup\gamma)$ is called a _parallelity handle_ if it is a 4-gon with two of its opposite sides being parallel normal arcs of $\gamma$ and the other two sides lying in the edges of $\mathcal{T}$. Each parallelity 0-handle comes with the structure of an $I$-bundle over an interval, where the interval base is parallel to a normal arc of $\gamma$. The $I$-bundle structures on parallelity $0$-handles glue together to form an $I$-bundle $\mathcal{B}$, called the _parallelity bundle_ , whose base $B$ is a possibly disconnected compact 1-manifold. Moreover, the base $B$ can be considered as a normal (possibly not closed) multicurve. Since we assumed that $\gamma$ is connected, the base $B$ is a finite union of closed intervals. The _vertical boundary $\partial_{v}\mathcal{B}$ of $\mathcal{B}$_ is defined as the restriction of the $I$-bundle $\mathcal{B}$ to $\partial B$. The _horizontal boundary $\partial_{h}\mathcal{B}$ of $\mathcal{B}$_ is the $(\partial I)$-bundle over $B$, obtained by the restriction of the $I$-bundle. Therefore, the boundary of $\mathcal{B}$ is the union of its vertical boundary and horizontal boundary. Similarly, for each component of $\mathcal{B}$, we can speak of its vertical and horizontal boundary. In each triangle of $\mathcal{T}$, there are at most four 0-handles that are not parallelity handles. The union of the $0$-handles that are not parallelity handles forms a 2-complex called the _gut region_. Therefore, the number of 0-handles of the gut region is at most $4t$, where $t$ is the number of triangles in $\mathcal{T}$. Denote the vertex of $\mathcal{T}$ by $v$. Let $\Delta$ be a triangle $0$-handle of the gut region, and $x$ be the side of $\Delta$ opposite the vertex $v$. Place a vertex $w$ on $x$, and isotope $\gamma$ by dragging the vertex $w$ to $v$ along a straight line in $\Delta$. After this isotopy, $\gamma$ is a simple closed normal curve with one vertex on it that coincides with the vertex of $\mathcal{T}$. We will apply the Agol–Hass–Thurston algorithm to compute the following data about the parallelity bundle: Denote the components of $\mathcal{B}$ by $\mathcal{B}_{1},\cdots,\mathcal{B}_{k}$. For each $\mathcal{B}_{i}$, we compute the normal coordinates for its base, together with the attachment of the vertical boundary of $\mathcal{B}_{i}$ to the gut region, and the relative $I$-direction for the two components of the vertical boundary of $\mathcal{B}$. Therefore, we have a handle decomposition $\mathcal{H}$ of $X=S\setminus\setminus\gamma$ into 0-handles, where each 0-handle is either a 0-handle of the gut region, or it is a 4-gon that is equal to a component of $\mathcal{B}$. This handle decomposition $\mathcal{H}$ of $X$ has $O(t)$ 0-handles. There is a natural immersion $i\colon X\rightarrow S$ whose restriction to the interior of $X$ is an embedding, and such that $\gamma$ lies in the image of the boundary of $X$ under the map $i$. Note that $X$ is a compact orientable surface with two or one connected components according to whether $\gamma$ is separating in $S$ or not. Let $V$ be the copies of the vertex $v$ in $X$. By Lemma 7.3, $X$ admits a triangulation $\mathcal{T}^{\prime}_{X}$ with vertex set $V$ and where each edge intersects each component of $\partial_{v}\mathcal{B}$ at most twice. Define the one-vertex triangulation $\mathcal{T}^{\prime}$ of $S$ as the image of the triangulation $\mathcal{T}^{\prime}_{X}$ of $S\setminus\setminus\gamma$ under the map $i$. 
We can now read off the normal coordinate with respect to $\mathcal{T}$ of each edge $e$ of $\mathcal{T^{\prime}}$. To see this, consider two cases: 1. i) For each part of $e$ in the gut region, we can read its normal coordinate with respect to $\mathcal{T}$. 2. ii) Let $H$ be a 0-handle of $\mathcal{H}$ that forms a component of the parallelity bundle $\mathcal{B}$. We previously computed the base of $H$ as a normal arc with respect to $\mathcal{T}$, using the Agol–Hass–Thurston algorithm. Each time $e$ runs through $H$, it enters and exists $H$ via $\partial_{v}H$ and so each component of $e\cap H$ is normally parallel to the base of $H$. So, we can read off the normal coordinate of $e\cap H$ with respect to $\mathcal{T}$. Summing these coordinates over each 0-handle of $\mathcal{H}$ gives the normal vector of $e$ with respect to $\mathcal{T}$. Finally, the normal coordinate of $\mathcal{T}^{\prime}$ with respect to $\mathcal{T}$ can be obtained by summing up the normal coordinates of its edges. Note that by construction we have (2) $\displaystyle i(\mathcal{T},\mathcal{T}^{\prime})\leq O(|\chi(S)|)\cdot|(\gamma)|_{1}+O(\chi(S)^{2}).$ To see this note that the base of the parallelity bundle is a normal multi-arc of $\ell^{1}$-norm at most $|(\gamma)|_{1}$. Moreover, $\mathcal{T}^{\prime}$ has $O(|\chi(S)|)$ edges and each edge of $\mathcal{T}^{\prime}$ passes through each component of $\mathcal{B}$ at most twice. Therefore the intersection of $\mathcal{T}^{\prime}$ with $\mathcal{T}\cap\mathcal{B}$ contributes at most $O(|\chi(S)|)\cdot|(\gamma)|_{1}$ intersection points. Additionally, each edge of $\mathcal{T}^{\prime}$ intersects the gut region of $\mathcal{H}$ at most $O(|\chi(S)|)$ times, and so $\mathcal{T}^{\prime}$ intersects the restriction of $\mathcal{T}$ to the gut region at most $O(\chi(S)^{2})$ times. Step 2: Construction of $\mathcal{T}_{i}$ and $\gamma_{i}$. Let $\mathcal{T}=\mathcal{T}_{0},\mathcal{T}_{1},\cdots,\mathcal{T}_{n}=\mathcal{T}^{\prime}$ be the sequence of one-vertex triangulations given by Theorem 1.4. Set $\gamma_{0}:=\gamma$. Each $\mathcal{T}_{i+1}$ is obtained from $\mathcal{T}_{i}$ by either a flip or a twist map. Given the normal curve $\gamma_{i}$ with respect to $\mathcal{T}_{i}$, we put $\gamma_{i}$ in normal form with respect to $\mathcal{T}_{i+1}$ and define $\gamma_{i+1}$ as this normal representative. Consider two cases: 1. a) If $\mathcal{T}_{i+1}$ is obtained from $\mathcal{T}_{i}$ by a flip, then it is easy to put $\gamma_{i}$ in normal form with respect to $\mathcal{T}_{i+1}$ and find its normal coordinates. 2. b) If $\mathcal{T}_{i+1}$ is obtained from $\mathcal{T}_{i}$ by a twist map $(T_{\alpha})^{k}$, then the normal coordinates of $\gamma_{i}$ with respect to $\mathcal{T}_{i+1}$ are equal to the normal coordinates of $(T_{\alpha})^{-k}(\gamma_{i})$ with respect to $\mathcal{T}_{i}$. The normal coordinates of $(T_{\alpha})^{-k}(\gamma_{i})$ with respect to $\mathcal{T}_{i}$ can be read off from the normal coordinates of $\gamma_{i}$ with respect to $\mathcal{T}_{i}$, the normal coordinates of $\alpha$ with respect to $\mathcal{T}_{i}$, and the value of $k$, and all this information is given to us by Theorem 1.4. By construction, $\gamma_{i}$ satisfy conditions (1)–(3) of the statement of the theorem. By Theorem 1.4 and equation (2), conditions (4)–(5) of the statement are satisfied as well. ∎ We can similarly prove a version of Theorem 7.4 for ideal triangulations of surfaces with boundary. ###### Theorem 7.5. 
Let $S$ be a compact orientable surface with non-empty boundary, and $\mathcal{T}$ be an ideal triangulation of $S$. Let $\gamma$ be a simple closed normal curve or a simple normal arc given by its normal vector $(\gamma)$ with respect to $\mathcal{T}$, and denote the bit-sized complexity of $(\gamma)$ by $|(\gamma)|_{\mathrm{bit}}$ and its $\ell^{1}$-norm by $|(\gamma)|_{1}$. There is an algorithm that constructs a sequence of ideal triangulations $\mathcal{T}=\mathcal{T}_{0},\mathcal{T}_{1},\cdots,\mathcal{T}_{n}$ of $S$ and a sequence of curves $\gamma=\gamma_{0},\gamma_{1},\cdots,\gamma_{n}$ such that 1. (1) $\gamma_{i}$ is isotopic to $\gamma$ for every $i$. 2. (2) $\gamma_{i}$ is in normal form with respect to $\mathcal{T}_{i}$ for every $i<n$. 3. (3) $\gamma_{n}$ intersects each edge of $\mathcal{T}_{n}$ at most twice. 4. (4) Each $\mathcal{T}_{i+1}$ is obtained from $\mathcal{T}_{i}$ by either a flip or a power of a Dehn twist. When $\mathcal{T}_{i+1}$ is obtained from $\mathcal{T}_{i}$ by Dehn twisting $k$ times about a curve $\alpha$, then $\alpha$ intersects each edge of $\mathcal{T}_{i}$ at most three times, and the absolute value of $k$ is bounded above by a polynomial function of $|(\gamma)|_{1}$ and $|\chi(S)|$. 5. (5) The algorithm runs in time that is a polynomial function of $|(\gamma)|_{\mathrm{bit}}$ and $|\chi(S)|$. For the output, each $\mathcal{T}_{i}$ is given as a union of ideal triangles with gluing instructions together with the flip or twist move from $\mathcal{T}_{i}$ to $\mathcal{T}_{i+1}$, and $\gamma_{i}$ is given by its normal coordinates with respect to $\mathcal{T}_{i}$. ###### Proof. The proof is similar to the proof of Theorem 7.4. When $\gamma$ is a closed curve, we first isotope it to create an arc that passes through an ideal vertex $v$ of $\mathcal{T}$. We then repeat the argument as in the proof of Theorem 7.4 to construct a sequence of ideal triangulations $\mathcal{T}=\mathcal{T}_{0},\cdots,\mathcal{T}_{n}$ and arcs $\gamma_{0},\cdots,\gamma_{n}$ such that $\gamma_{n}$ is an edge of $\mathcal{T}_{n}$. Finally when $\gamma$ is a closed curve, we perturb $\gamma_{n}$ off the ideal vertex $v$ to create a curve that intersects each edge of $\mathcal{T}^{\prime}$ at most twice. When $\gamma$ is an arc, we perturb $\gamma_{n}$ so that it is normal and disjoint from the edges of $\mathcal{T}^{\prime}$. Note that if $\gamma$ is a separating curve and $\mathcal{T}$ has exactly one ideal vertex, the geometric intersection number between $\gamma_{n}$ and an edge of $\mathcal{T}_{n}$ is even. Therefore, assuming further that $\gamma$ is essential, there is an edge $e$ of $\mathcal{T}_{n}$ such that $\gamma$ intersects $e$ at least twice. ∎ ## 8\. Application to volumes of hyperbolic 3-manifolds Theorem 1.1 together with Agol’s explicit construction of hyperbolic structures in [Ago03] has the following corollary. ###### Corollary 1.5. Let $\Sigma$ a closed orientable surface of genus $g$, and $P$ and $P^{\prime}$ be pants decompositions of $\Sigma$ with no curve in common. Assume that $M$ is a maximal cusp obtained from a quasi-Fuchsian 3-manifold homeomorphic to $\Sigma\times\mathbb{R}$ by pinching the multicurves $P$ and $P^{\prime}$ to annular cusps on the two conformal boundary components of $M$. The volume of the convex core of $M$ is $O(g^{2})\hskip 2.84526pt(1+\log(i(P,P^{\prime}))).$ ###### Proof. We recall Agol’s bound from [Ago03]. 
Let $\phi\colon\Sigma\rightarrow\Sigma$ be a homeomorphism such that $\phi(P)=P^{\prime}$, and denote the mapping torus of $\phi$ by $T_{\phi}=(\Sigma\times[0,1])/\\{(x,0)\sim(\phi(x),1)\\}$. By Theorem 1.1, there is a path $C$ consisting of $P_{0}=P,P_{1},\cdots,P_{m}=P^{\prime}$ in the pants graph of $\Sigma$ with length $m=O(g^{2})\hskip 2.84526pt(1+\log(i(P,P^{\prime})))$. Assume that no curve appears in all of $P_{i}$ for $1\leq i\leq m$. Define the sequence of circles $\beta_{1},\cdots,\beta_{m}$ in $\Sigma$ where $\beta_{i+1}$ is the circle in $P_{i+1}$ replacing a circle in $P_{i}$ ($i$ is taken mod $m$). For each $\beta_{i}$ define the curve $B_{i}=\beta_{i}\times\\{\frac{i}{m}\\}$ in $T_{\phi}$, and consider the link complement $M_{C}:=T_{\phi}\setminus N(\cup B_{i})$ where $N(\cup B_{i})$ is a regular neighbourhood of $\cup B_{i}$. Then $T_{\phi}$ is obtained from $M_{C}$ by Dehn filling boundary components corresponding to $\cup B_{i}$. Agol [Ago03, Lemma 2.3] constructed a complete hyperbolic structure of finite volume on $M_{C}$ by gluing together ‘model pieces’; the hyperbolic structure is unique by Mostow–Prasad rigidity. The explicit construction of the hyperbolic structure shows that $\textrm{Vol}(M_{C})=2A+S\leq 2m,$ where Vol denotes the volume of the hyperbolic structure, and $A$ and $S$ indicate the number of _associativity/simple_ moves in the path $C$; see the proof of [Ago03][Corollary 2.4]. Define the subset $\mathcal{A}$ of $\cup B_{i}$ as follows: $B_{i}=\beta_{i}\times\frac{i}{m}$ belongs to $\mathcal{A}$ whenever $\beta_{i}$ is isotopic to a curve in $P^{\prime}$ and for no $j>i$, $\beta_{j}$ is isotopic to $\beta_{i}$. Let $N$ be the manifold obtained by filling in the boundary components of $M_{C}$ corresponding to $(\cup B_{i})\setminus\mathcal{A}$; then $N$ is obtained from $T_{\phi}$ by removing a neighbourhood of the curves $P^{\prime}\times\\{1\\}\subset S\times\\{1\\}$ (or equivalently the curves $P\times\\{0\\}\subset S\times\\{0\\}$ ). By Thurston’s hyperbolisation theorem for Haken manifolds, $N$ is hyperbolic since it is Haken and atoroidal. Hence by a theorem of Thurston [Thu79] $\textrm{Vol}(N)<\textrm{Vol}(M_{C}).$ By Adams [Ada85], the open pairs of pants in $(S\times\\{0\\})\setminus(P\times\\{0\\})$ are totally geodesic in $N$, and hence cutting along them gives a hyperbolic manifold of the same volume. But the manifold obtained by cutting $N$ along the pairs of pants in $(S\times\\{0\\})\setminus(P\times\\{0\\})$ is the maximal cusp $M$, implying that $\textrm{Vol}(M)=\textrm{Vol}(N)<\textrm{Vol}(M_{C})\leq 2m=O(g^{2})\hskip 2.84526pt(1+\log(i(P,P^{\prime}))).$ ∎ The next result shows that our bound in Corollary 1.5 is sharp up to a multiplicative factor of $g$. ###### Proposition 8.1. Fix $0<\alpha<1$ and $C>0$. For any large $g$, there are maximal cusps $M$ obtained from $\Sigma_{g}\times\mathbb{R}$ by pinching the multicurves $P$ and $P^{\prime}$ to annular cusps such that the volume of the convex core of $M$ is greater than $C\medspace g^{\alpha}(1+\log(i(P,P^{\prime}))).$ ###### Proof. Let $P_{1}$ be a pants decomposition of a surface $\Sigma_{2}$ of genus $2$, and denote the simple closed curves in $P_{1}$ by $\\{\alpha_{1},\alpha_{2},\alpha_{3}\\}$. Let $f\colon\Sigma_{2}\rightarrow\Sigma_{2}$ be a pseudo-Anosov mapping class that acts trivially on the first homology group $H_{1}(\Sigma_{2};\mathbb{Z})$. After an isotopy, we can assume that $f$ fixes the base point $b$, and so it acts on $\pi_{1}(\Sigma_{2},b)$. 
Define the pants decomposition $P^{\prime}_{1}:=f(P_{1})$. Since $f$ is pseudo-Anosov, it does not fix the isotopy class of any essential simple closed curve on $\Sigma_{2}$. So, after possibly replacing $f$ by a power of itself, we can assume that $f(\alpha_{i})$ is not isotopic to $\alpha_{j}$ for any $1\leq i,j\leq 3$. Picking a base point $b\in\Sigma_{2}$ and a tree connecting $\alpha_{i}$ to $b$ in $\Sigma_{2}$, we can identify $\alpha_{i}$ with elements of $\pi_{1}(\Sigma_{2},b)$. Let $\phi\colon\pi_{1}(\Sigma_{2},b)\rightarrow\mathbb{Z}$ be a surjective homomorphism such that $\phi(\alpha_{i})=0$ for $1\leq i\leq 3$, for example $\phi$ can be taken to be the algebraic intersection pairing with some $\alpha_{j}$ that is non-separating. Denote by $\phi_{k}$ the composition of $\phi$ with the reduction map $\mathbb{Z}\rightarrow\mathbb{Z}/k\mathbb{Z}$ modulo $k$, and define $G$ as the kernel of $\phi_{k}$. Therefore, $G$ is an index $k$ subgroup of $\pi_{1}(\Sigma,b)$ that contains all $\alpha_{i}$ for $1\leq i\leq 3$. The image of $\phi_{k}$ is abelian and so it factors through the abelianisation map $\pi_{1}(\Sigma_{2},b)\rightarrow H_{1}(\Sigma_{2};\mathbb{Z})$. Since $f$ acts trivially on homology, we conclude that the curves in $P^{\prime}_{1}=f(P_{1})$ also lie in $G=\ker(\phi_{k})$. Let $M_{1}$ be the maximal cusp obtained from $\Sigma_{2}\times\mathbb{R}$ by pinching the pants decompositions $P_{1}$ and $P^{\prime}_{1}$ to annular cusps, here we use the fact that $P_{1}$ and $P^{\prime}_{1}$ have no curve in common. Let $M$ be the $k$-sheeted cover of $M_{1}$ corresponding to the subgroup $G<\pi_{1}(\Sigma_{2},b)\cong\pi_{1}(M_{1})$. The hyperbolic structure on $M_{1}$ lifts to a hyperbolic structure on $M$, and so $M$ is obtained from $\Sigma_{g}\times\mathbb{R}$ by pinching the lifts $P:=\tilde{P_{1}}$ and $P^{\prime}:=\tilde{P^{\prime}_{1}}$ of respectively $P_{1}$ and $P^{\prime}_{1}$ to annular cusps. Since the curves in $P_{1}$ and $P^{\prime}_{1}$ lie in $G=\ker(\phi_{k})$ by construction, we deduce that $P:=\tilde{P_{1}}$ and $P^{\prime}:=\tilde{P^{\prime}_{1}}$ are pants decompositions (that is, they cut $\Sigma$ into pairs of pants). By comparing Euler characteristics we have $\chi(\Sigma_{g})=k\cdot\chi(\Sigma_{2})\implies g-1=k.$ Denote the convex core of a hyperbolic manifold $N$ by $\mathrm{CC}(N)$, and its hyperbolic volume by $\mathrm{Vol}(N)$. Since $M$ is a $k$-sheeted cover of $M_{1}$ $\mathrm{Vol}(\mathrm{CC}(M))=k\cdot\mathrm{Vol}(\mathrm{CC}(M_{1}))=(g-1)\mathrm{Vol}(\mathrm{CC}(M_{1})).$ To see this, let $\Gamma_{1}$ be a discrete group of isometries of hyperbolic three-space $\mathbb{H}^{3}$ such that $M_{1}=\mathbb{H}^{3}/\Gamma_{1}$, and $\Gamma<\Gamma_{1}$ be a subgroup of index $k$ with $M=\mathbb{H}^{3}/\Gamma$. Denote the limit sets of $\Gamma_{1}$ and $\Gamma$ by respectively $\Lambda(\Gamma_{1})$ and $\Lambda(\Gamma)$. It is easy to see that $\Lambda(\Gamma_{1})=\Lambda(\Gamma)$, since $\Gamma<\Gamma_{1}$ is of finite index. Denote the convex hull of $\Lambda(\Gamma_{1})$ by $\mathrm{CH}(\Lambda(\Gamma_{1}))$, and define $\mathrm{CH}(\Lambda(\Gamma))$ similarly. Then the convex core of the hyperbolic manifold $\mathbb{H}^{3}/\Gamma_{1}$ is the image of $\mathrm{CH}(\Lambda(\Gamma_{1}))$ under the projection $p_{1}\colon\mathbb{H}^{3}\rightarrow\mathbb{H}^{3}/\Gamma_{1}$, see [Thu79, Chapter 8.3]. 
Similarly, the convex core of $\mathbb{H}^{3}/\Gamma$ is the image of $\mathrm{CH}(\Lambda(\Gamma))$ under the projection $p\colon\mathbb{H}^{3}\rightarrow\mathbb{H}^{3}/\Gamma$. It follows from $\Lambda(\Gamma_{1})=\Lambda(\Gamma)$ that $\mathrm{Vol}(\mathrm{CH}(\Lambda(\Gamma))/\Gamma)=k\cdot\mathrm{Vol}(\mathrm{CH}(\Lambda(\Gamma_{1}))/\Gamma_{1})$, proving the claim. We also have $i(P,P^{\prime})=k\cdot i(P_{1},P^{\prime}_{1})=(g-1)i(P_{1},P^{\prime}_{1}).$ Fix the pants decompositions $P_{1},P^{\prime}_{1}$ on $\Sigma_{2}$ and the homomorphism $\phi$, and allow $k$ to vary. Therefore, for any fixed $0<\alpha<1$ and $C>0$, for $k=g-1$ sufficiently large we have $\mathrm{Vol}(\mathrm{CC}(M))=(g-1)\mathrm{Vol}(\mathrm{CC}(M_{1}))>Cg^{\alpha}(1+\log((g-1)i(P_{1},P^{\prime}_{1})))\geq C\medspace g^{\alpha}(1+\log(i(P,P^{\prime}))).$ ∎ ## 9\. Application to Teichmüller geometry Our next application is to Teichmüller space, endowed with the Weil-Petersson metric. Denote by $\mathrm{Teich}(S)$ the Teichmüller space of a compact orientable surface $S$ (in other words, the space of marked hyperbolic metrics possibly with cusps but with no boundary). Denote its Weil-Petersson metric by $d_{\mathrm{WP}}$. ###### Theorem 1.6. Let $S$ be a compact orientable surface, and let $X,Y\in\mathrm{Teich}(S)$. Let $P_{X}$ and $P_{Y}$ be pants decompositions for $X$ and $Y$ respectively, in which each curve has length at most $2\pi|\chi(S)|$, which exist by a theorem of Parlier. Then $d_{\mathrm{WP}}(X,Y)\leq O(|\chi(S)|^{2})\hskip 2.84526pt(1+\log(i(P_{X},P_{Y}))).$ ###### Proof. Set $L=2\pi|\chi(S)|$. By Parlier’s quantified version of Bers’s theorem [Par23], which builds on and improves the work of Buser and Seppälä [BS92], every hyperbolic metric $X$ on $S$ has a pants decomposition $P$ in which every curve has length at most $L$. Let $\ell_{X}(P)$ denote the sum of the lengths of the curves in $P$. Hence, $\ell_{X}(P)\leq 3|\chi(S)|L$. Let $N(P)$ be the nodal surface where each curve in $P$ has length pinched to zero. We may view $N(P)$ as a point in the metric completion of $\mathrm{Teich}(S)$ with the Weil-Petersson distance. Wolpert showed (Corollary 4.10 in [Wol08]) that the Weil-Petersson distance from $X$ to $N(P)$ is at most $\sqrt{2\pi\ell_{X}(P)}\leq\sqrt{12}\pi|\chi(S)|.$ Therefore, $d_{\mathrm{WP}}(X,Y)\leq d_{\mathrm{WP}}(N(P_{X}),N(P_{Y}))+O(|\chi(S)|)$. Cavendish and Parlier [CP12] introduced a metric graph which they term the _cubical pants graph_ $\mathcal{CP}(S)$. This is obtained from the pants graph $\mathcal{P}(S)$ by adding edges (which may have length greater than one). Hence, for the two pants decompositions $P_{X}$ and $P_{Y}$, their distance in $\mathcal{CP}(S)$ is at most their distance in $\mathcal{P}(S)$. In Lemma 4.1 of [CP12], it is shown that there is an absolute constant $C$ such that $d_{\mathrm{WP}}(N(P_{X}),N(P_{Y}))\leq C\,d_{\mathcal{CP}(S)}(P_{X},P_{Y}).$ So, $\displaystyle d_{\mathrm{WP}}(X,Y)$ $\displaystyle\leq d_{\mathrm{WP}}(N(P_{X}),N(P_{Y}))+O(|\chi(S)|)$ $\displaystyle\leq C\,d_{\mathcal{CP}(S)}(P_{X},P_{Y})+O(|\chi(S)|)$ $\displaystyle\leq C\,d_{\mathcal{P}(S)}(P_{X},P_{Y})+O(|\chi(S)|)$ $\displaystyle\leq O(|\chi(S)|^{2})\hskip 2.84526pt(1+\log(i(P_{X},P_{Y}))),$ where the latter inequality is from Theorem 1.1. ∎ ## 10\. Questions ###### Question 10.1.
Is there an algorithm that takes as input a compact connected orientable surface $S$ and two pants decompositions $P$ and $P^{\prime}$, and computes the distance $d_{\mathcal{P}}(P,P^{\prime})$ in the pants graph, in time that is a polynomial function of $|\chi(S)|$ and $\log(1+i(P,P^{\prime}))$? Saul Schleimer has communicated to us that there is an algorithm, using Masur–Minsky’s work, that computes the distance in the pants graph. A simple argument shows that the distance in the flip graph of one-vertex triangulations is bounded from above by the intersection number; see [DP19, Lemma 2.1]. This prompts the following question. ###### Question 10.2. Is there a universal constant $C>0$ such that for every compact orientable surface $S$ and pants decompositions $P$ and $P^{\prime}$ of $S$, the distance between $P$ and $P^{\prime}$ in the pants graph is at most $Ci(P,P^{\prime})$? ## References * [Ada85] Colin C Adams. Thrice-punctured spheres in hyperbolic 3-manifolds. Transactions of the American Mathematical Society, 287(2):645–656, 1985. * [Ago03] Ian Agol. Small 3-manifolds of large genus. Geometriae Dedicata, 102(1):53–64, 2003. * [AHT06] Ian Agol, Joel Hass, and William Thurston. The computational complexity of knot genus and spanning area. Transactions of the American Mathematical Society, 358(9):3821–3850, 2006. * [Bel21] Mark C Bell. Simplifying triangulations. Discrete & Computational Geometry, 66(1):1–11, 2021. * [Bro03] Jeffrey Brock. The Weil-Petersson metric and volumes of 3-dimensional hyperbolic convex cores. Journal of the American Mathematical Society, 16(3):495–535, 2003\. * [BS92] Peter Buser and Mika Seppälä. Symmetric pants decompositions of riemann surfaces. 1992\. * [CCHS03] Richard D Canary, Marc Culler, Sa’ar Hersonsky, and Peter B Shalen. Approximation by maximal cusps in boundaries of deformation spaces of Kleinian groups. Journal of Differential Geometry, 64(1):57–109, 2003. * [CP12] William Cavendish and Hugo Parlier. Growth of the Weil-Petersson diameter of moduli space. Duke Math. J., 161(1):139–171, 2012. * [DP19] Valentina Disarlo and Hugo Parlier. The geometry of flip graphs and mapping class groups. Transactions of the American Mathematical Society, 372(6):3809–3844, 2019. * [DT06] Nathan M Dunfield and Dylan P Thurston. A random tunnel number one 3–manifold does not fiber over the circle. Geometry & Topology, 10(4):2431–2499, 2006. * [DW07] Ivan Dynnikov and Bert Wiest. On the complexity of braids. Journal of the European Mathematical Society, 9(4):801–840, 2007\. * [EN13] Jeff Erickson and Amir Nayyeri. Tracing compressed curves in triangulated surfaces. Discrete Comput Geom, 49:823–863, 2013. * [FLM01] Benson Farb, Alexander Lubotzky, and Yair Minsky. Rank-1 phenomena for mapping class groups. Duke Mathematical Journal, 106(3):581 – 597, 2001. * [Har81] Willam J Harvey. Boundary structure of the modular group. In Riemann surfaces and related topics: Proceedings of the 1978 Stony Brook Conference (State Univ. New York, Stony Brook, NY, 1978), volume 97, pages 245–251, 1981. * [Hem01] John Hempel. 3-manifolds as viewed from the curve complex. Topology, 40(3):631–657, 2001. * [HT80] Allen Hatcher and William Thurston. A presentation for the mapping class group of a closed orientable surface. Topology, 19(3):221–237, 1980. * [Lic62] WB Raymond Lickorish. A representation of orientable combinatorial 3-manifolds. Annals of Mathematics, pages 531–540, 1962. * [LP19] Marc Lackenby and Jessica S Purcell. The triangulation complexity of fibred 3-manifolds. 
arXiv preprint arXiv:1910.10914, 2019. * [Par23] Hugo Parlier. A shorter note on shorter pants. arXiv preprint arXiv:2304.06973, 2023. * [PH92] Robert C Penner and John L Harer. Combinatorics of train tracks. Number 125. Princeton University Press, 1992. * [STT88] Daniel D Sleator, Robert E Tarjan, and William P Thurston. Rotation distance, triangulations, and hyperbolic geometry. Journal of the American Mathematical Society, 1(3):647–681, 1988\. * [Thu79] William P Thurston. The geometry and topology of three-manifolds. Princeton University Princeton, NJ, 1979. * [Wol08] Scott A. Wolpert. Behavior of geodesic-length functions on Teichmüller space. J. Differential Geom., 79(2):277–334, 2008.
# Self-supervised Group Meiosis Contrastive Learning for EEG-Based Emotion Recognition *Corresponding author. 1 https://github.com/kanhaoning/Self-supervised-group-meiosis-contrastive-learning-for-EEG-based-emotion-recognition Haoning Kan Faculty of Science Beijing University of Technology Beijing, China <EMAIL_ADDRESS> Jiale Yu Beijing-Dublin International College Beijing University of Technology Beijing, China <EMAIL_ADDRESS> Jiajin Huang Faculty of Information Technology Beijing University of Technology Beijing, China <EMAIL_ADDRESS> Zihe Liu Faculty of Science Beijing University of Technology Beijing, China <EMAIL_ADDRESS> Haiyan Zhou Faculty of Information Technology Beijing University of Technology Beijing, China <EMAIL_ADDRESS> ###### Abstract The progress of EEG-based emotion recognition has received widespread attention from the fields of human-machine interaction and cognitive science in recent years. However, how to recognize emotions with limited labels has become a new research and application bottleneck. To address this issue, this paper proposes a Self-supervised Group Meiosis Contrastive learning framework (SGMC) based on the stimulus-consistent EEG signals shared across human subjects. In the SGMC, a novel genetics-inspired data augmentation method, named Meiosis, is developed. It takes advantage of the alignment of stimuli among the EEG samples in a group to generate augmented groups by pairing, cross exchanging, and separating. The model adopts a group projector to extract group-level feature representations from groups of EEG samples triggered by the same emotion video stimuli. Contrastive learning is then employed to maximize the similarity of group-level representations of augmented groups with the same stimuli. The SGMC achieves state-of-the-art emotion recognition results on the publicly available DEAP dataset, with accuracies of 94.72% and 95.68% in the valence and arousal dimensions, and also reaches competitive performance on the public SEED dataset with an accuracy of 94.04%. It is worth noting that the SGMC shows significant performance even when using limited labels. Moreover, the results of feature visualization suggest that the model might have learned video-level emotion-related feature representations to improve emotion recognition. The effect of group size is further evaluated in the hyperparameter analysis. Finally, a control experiment and an ablation study are carried out to examine the rationality of the architecture. The code is provided publicly online1. ###### Index Terms: EEG-based emotion recognition, group-level representation, contrastive learning, self-supervised learning, data augmentation, meiosis ## I Introduction Emotion plays a crucial role in human cognition and is involved in many application fields. For example, in the field of human-machine interaction [1], emotion recognition enables the machine to provide more humanized interaction. In consumer neuroscience, emotion analysis is a common tool to assess the user experience for product design [2]. Recently, emotion recognition methods based on electroencephalography (EEG) signals have shown their advantages. Compared to conscious behavior signals such as facial expression and body language, the EEG has the advantage of being difficult to hide or disguise. Compared with other physiological signals such as fMRI (functional magnetic resonance imaging) and ECG (electrocardiogram), the EEG is more convenient for sampling and has a higher time resolution.
There has been great progress in the field of EEG-based emotion recognition. With traditional machine learning techniques, handcrafted features are calculated and selected carefully, which is critical for emotion recognition. However, these approaches rely heavily on the researcher's experience with EEG signals and cognition-related knowledge. In recent years, deep learning methods have achieved competitive accuracy without requiring handcrafted features. Guided by a large amount of labeled data, deep learning models can learn high-level emotion-related feature representations for precise affective computing [3, 4, 5, 6, 7]. Generally, artificial labels are crucial for training deep learning models with common supervised methods. However, in settings that require accurate and real-time recognition, obtaining qualified labels is expensive. For example, in neuroscience, EEG is frequently used to explore the process of emotion, such as in tasks involving empathy and reading comprehension. Participants are usually required to answer questions so that researchers can assess their emotional states. However, obtaining emotion labels in this way is time-consuming and laborious, and it easily introduces subjective bias, which might decrease the reliability of the labels [8, 9]. Similarly, in consumer neuroscience applications, EEG signals are recorded to evaluate participants' emotional states while they are playing games, listening to music, and watching movies and advertisements, which aims to provide instructive references to content creators [10, 11, 12, 13]. In these conditions, precise and time-intensive labels are also required. Therefore, the lack of qualified labels hinders the application of machine-learning-based models in many fields that require precision. Previous studies have explored how to reduce the dependence on artificial labels [14, 15]. Several neuroscience studies have shown the exploitable consistency of stimuli in emotional EEG signals. They have discovered that the EEG signals of a group of subjects who watched the same emotional video clips share similar group-level stimuli-related features [16, 17]. Such features, correlated with preference, arousal, valence, etc., have the potential to make up for the lack of artificial labels. Existing methods mainly adopt self-supervised learning (SSL) to exploit such stimulus consistency. SSL can generate labels from the attributes of the data themselves for learning. For example, Shen et al. proposed a novel contrastive learning framework [18] that learns representations by making the model maximize the similarity between representations of EEG signals corresponding to the same stimuli. However, there are random effects in emotion-related EEG signals. For example, distraction during the emotional tasks and fatigue increase the noise of the signals. Moreover, the responses of different participants cannot be exactly the same, which increases the difficulty of maximizing the similarity across subjects in contrastive learning. To further improve EEG-based emotion recognition under the SSL framework, we propose a Self-supervised Group Meiosis Contrastive learning (SGMC) framework for EEG-based emotion recognition.
First, since larger samples can better represent the characteristics of signals from a statistical point of view, we design a group projector in SGMC that collects a group of EEG samples and extracts a group-level representation for contrastive learning. Second, we propose a novel data augmentation method to provide augmented group samples for contrastive learning. Applying data augmentation to enhance contrastive learning is a basic paradigm; however, there are few studies on augmenting group samples. Inspired by the meiosis mechanism in genetics [19], we augment data without changing stimuli-related features by pairing, cross exchanging, and separating. In this way, data augmentation enables contrastive learning to further take advantage of the alignment of stimuli within a group of EEG signals. The SGMC thus enables the model to learn critical representations and achieve competitive emotion recognition performance with a significant improvement. Here we summarize the contributions of this paper as follows: * • To reduce the dependence on emotion labels, we introduce a self-supervised contrastive learning framework to further exploit the consistency of stimuli for EEG-based emotion recognition. * • To decrease the effects of individual differences and random effects in EEG signals, we design a group-based contrastive learning framework to extract group-level stimuli-related feature representations. * • To augment group samples, we design a genetics-inspired data augmentation method named Meiosis. It utilizes the alignment of stimuli to augment group samples without changing their stimuli-related features, which provides augmented group samples to enhance contrastive learning. * • The SGMC achieves state-of-the-art emotion classification results on the publicly available DEAP dataset, with accuracies of 94.72% and 95.68% in the valence and arousal dimensions. On the public SEED dataset it also reaches a competitive 94.04% accuracy, and it achieves 91.01% accuracy when fine-tuned with 50 labeled samples per category (0.14% of the full training set), exceeding the 89.83% accuracy of fully-supervised learning with the full training set. ## II Related Work ### II-A EEG-based Emotion Recognition In earlier studies, emotional features of EEG signals were usually extracted and recognized by traditional machine learning strategies. For example, the support vector machine (SVM) [20], Gaussian Naive Bayes classification [21], and k-nearest neighbor (k-NN) [22] are widely used to classify emotion from EEG signals. Compared with traditional machine learning methods, deep learning models have advantages in extracting high-level emotional features. In recent years, more and more deep neural network models have achieved good performance on EEG-based emotion recognition tasks [3, 4, 5, 6, 7]. Recently popular methods focus on recurrent neural networks (RNNs/LSTMs) and convolutional neural networks (CNNs). In 2017, Alhagry et al. [23] adopted a two-layer long short-term memory (LSTM) network to achieve satisfactory emotion classification with raw EEG signals as input. In 2020, Li et al. [24] constructed the BiHDM model, which adopts four RNN modules to capture each hemisphere's EEG electrode data from horizontal and vertical streams, and achieved the SOTA. CNNs are also widely used for extracting spatial features from EEG signals.
In 2016, Li et al. [25] proposed a hybrid network structure based on CNN and RNN for emotion recognition from multi-channel EEG signals, which showed the effectiveness of a hybrid network on trial-level emotion recognition tasks. In 2017, Tripathi et al. [26] explored a convolutional neural network and a simple deep neural network; the CNN model showed stronger performance and achieved state-of-the-art results. In 2018, Shawky et al. [27] proposed a 3D CNN model, which divides raw signals into 6-s segments as input. In the same year, Yang et al. [28] proposed a hybrid model combining CNN and RNN networks to learn spatial-temporal representations for emotion recognition. It utilized a sparse matrix as input to reflect the relative positions of the electrodes. Compared with the complex inputs of RNNs and 2D/3D CNNs, Cheah et al. [29] proposed a 1D-CNN-based ResNet18, which adopts a simple input (channel $\times$ time) to train a deeper neural network. It is well suited to pre-training, with simple data processing and a faster training process.

### II-B Self-supervised Learning

Self-supervised learning aims to learn representations without relying on artificial labels. The latest research in machine learning and deep learning has shown the potential of SSL methods for learning generalized and robust representations [30, 31, 32, 33, 34, 35]. SSL has been widely used in many fields. For example, in computer vision (CV), Gidaris et al. [30] designed an SSL task based on spatial properties that rotates the original image and requires the model to predict the rotation angle. Based on the temporal properties of video, an SSL task [31] was designed that requires the model to predict whether two video frames are close in time. In natural language processing (NLP), $word2vec$ [32] designed SSL tasks such as predicting the center word and adjacent words, and $BERT$ [33] designed two SSL tasks, masked language prediction and next-sentence prediction, achieving state-of-the-art results on 11 NLP tasks. In EEG signal processing, Zhang et al. [36] applied a generative adversarial network to design an SSL method: the generator augments masked original signals to obtain simulated signals, and the model is required to distinguish real from simulated signals, which alleviates the problem of EEG data scarcity and achieves state-of-the-art results. Recently, contrastive-learning-based SSL has made progress in EEG signal processing. Contrastive learning defines any two samples with an internal relation as a positive pair and any other two as a negative pair; its loss function aims to maximize the similarity of representations within positive pairs and minimize the similarity of representations within negative pairs. Shen et al. [18] proposed a self-supervised contrastive learning framework, CLISA, to improve inter-subject prediction, which requires the model to predict whether two EEG signals were recorded while two subjects watched the same video clip. In this way, the model learned good inter-subject representations and achieved state-of-the-art inter-subject prediction after fine-tuning. In [14], several self-supervised contrastive learning methods were proposed to improve performance on tasks with limited labeled samples. Among them, Relative Positioning (RP) requires the model to predict whether two EEG signals were recorded close in time, and Contrastive Predictive Coding (CPC) requires the model to predict the representations of adjacent EEG signals from an anchor signal.
They confirmed that the models learned physiologically and clinically meaningful feature representations through SSL pre-training without label guidance. Further, they fine-tuned the pre-trained models, which significantly outperformed the fully-supervised baseline on tasks with fewer labeled samples. In [15], an augmentation-based SSL method was proposed, which requires the model to predict whether two augmented EEG signals come from the same original signal. It applies classical data augmentations such as time warping, permutation, and crop&resize. The generalization ability of the model improved significantly and exceeded fully-supervised learning on sleep staging in both the full and the limited labeled-sample settings. In EEG signal processing, contrastive learning thus shows its strength in improving inter-subject prediction, learning physiological feature representations without labels, and related tasks.

## III Proposed Method

### III-A Overall Framework

This paper designs a Self-supervised Group Meiosis Contrastive learning (SGMC) framework for EEG-based emotion recognition. As illustrated in Fig. 1, the proposed framework consists of a contrastive learning pre-training process and an emotion recognition fine-tuning process. In the pre-training process, SGMC contains five components: a group sampler, the Meiosis data augmentation, a base encoder, a group projector, and a contrastive loss function. First, the group sampler generates a minibatch containing several groups of EEG signals for augmentation. Second, Meiosis augments each EEG group to generate two groups for constructing the positive and negative pairs. Next, the base encoder extracts individual-level stimuli-related representations from each EEG signal. Then the group projector aggregates each group of representations to extract group-level stimuli-related representations and maps them into another latent space for computing the similarity. Together, the parameters of the base encoder and the group projector are optimized by minimizing the contrastive loss. In the fine-tuning process, the model consisting of the pre-trained base encoder and an initialized classifier performs the emotion classification training.

Figure 1: Illustration of the proposed SGMC. During pre-training, each group of samples is sampled from EEG samples corresponding to the same video clip stimuli. Each group of EEG samples is then augmented by the genetics-inspired Meiosis data augmentation to generate two augmented group samples. Each augmented group is sent to the base encoder to extract individual representations of each individual sample, and the group projector then aggregates them to obtain the group-level representation. The model is required to maximize the similarity of representations between groups sharing the same stimuli and to minimize the similarity of representations of groups corresponding to different stimuli, so as to minimize the contrastive loss. The pre-trained base encoder is then fine-tuned with a classifier for emotion recognition.

### III-B Group Sampler

Generally, it is difficult to perform contrastive learning by extracting stimuli-related features from individual EEG samples. We therefore take the strategy of extracting them from groups of EEG samples; to achieve this, we construct a sampler that provides the input for each minibatch. In the processed dataset, video clips and subjects correspond to two axes of the dataset tensor.
Along these axes, each EEG sample is defined as $\boldsymbol{X}^{s}_{v}$ $\in$ $\mathbb{R^{M\times C}}$, corresponding to a 1-second signal recorded when subject $s$ watched a 1-second video clip $v$, where $M$ is the number of time samples and $C$ is the dimension of the signals ($\it e.g.$, channels). To obtain a minibatch, as illustrated in Fig. 2, the sampler first randomly samples $P$ video clips $v_{1},v_{2},...,v_{P}$ that have not yet been sampled in the current epoch. To sample two equal groups and construct a positive pair for each clip's stimuli, the sampler next randomly selects $2Q$ subjects $s_{1},s_{2},...,s_{2Q}$ for grouping. The sampler then extracts the EEG signals corresponding to the selected subjects and video clips, yielding $2PQ$ samples $\mathcal{D}=\\{\boldsymbol{{X}}^{s_{k}}_{v_{i}}|i=1,2,...,P;k=1,2,...,2Q\\}$, recorded by the $2Q$ subjects while watching the $P$ video clips. Furthermore, we denote by $\boldsymbol{G}_{i}=\\{\boldsymbol{X}^{s_{1}}_{v_{i}},\boldsymbol{X}^{s_{2}}_{v_{i}},...,\boldsymbol{X}^{s_{2Q}}_{v_{i}}\\}$ the group of samples corresponding to video clip $v_{i}$. Within $\boldsymbol{G}_{i}$, each individual sample shares similar stimuli-related features. In this way, the sampler provides a minibatch of $P$ group samples $\\{\boldsymbol{G}_{1},\boldsymbol{G}_{2},...,\boldsymbol{G}_{P}\\}$ corresponding to $P$ different stimuli for pre-training.

Figure 2: Illustration of sampling a minibatch. The sampler first samples $P$ video clips and $2Q$ subjects. For each sampled video clip, it then collects the group of EEG signals recorded while the $2Q$ sampled subjects watched it. $P$ groups of EEG samples are thus obtained for a minibatch.

### III-C Meiosis Data Augmentation

Meiosis aims to augment one group sample into two groups that preserve the same stimuli-related features, utilizing the alignment of stimuli in the EEG group to construct the positive pair.

Figure 3: Illustration of the Meiosis data augmentation. A group of EEG samples sharing the same stimuli is randomly paired, part of the signal is cross-exchanged within each pair, and the samples are then separated into two groups.

To increase the meaningful difficulty of decoding the EEG samples, we want to mix signals of different subjects. Moreover, to preserve the original stimuli-related features for extraction by the SGMC, we only split and splice signals corresponding to the same stimuli. We therefore design the crossover transformation as follows. We represent any EEG signal $\boldsymbol{A}$ as $\\{\boldsymbol{a}_{1},\boldsymbol{a}_{2},...,\boldsymbol{a}_{M}\\}$, where $\boldsymbol{a}_{i}$ is the data at the $i^{th}$ sampling point (i=1,2,…,M). Similarly, we represent any other signal $\boldsymbol{B}$ as $\\{\boldsymbol{b}_{1},\boldsymbol{b}_{2},...,\boldsymbol{b}_{M}\\}$. We then exchange the data of the first $c$ sampling points of the two samples $\boldsymbol{A}$ and $\boldsymbol{B}$ to obtain $\widetilde{\boldsymbol{A}}=\\{\boldsymbol{b}_{1},\boldsymbol{b}_{2},...,\boldsymbol{b}_{c},\boldsymbol{a}_{c+1},\boldsymbol{a}_{c+2},...,\boldsymbol{a}_{M}\\}$ and $\widetilde{\boldsymbol{B}}=\\{\boldsymbol{a}_{1},\boldsymbol{a}_{2},...,\boldsymbol{a}_{c},\boldsymbol{b}_{c+1},\boldsymbol{b}_{c+2},...,\boldsymbol{b}_{M}\\}$, where $c$ is a given split position.
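For concreteness, the crossover exchange just described can be sketched in a few lines of NumPy. This is only an illustrative sketch under our own naming assumptions (the array layout `(M, C)` and the helper name `crossover` are ours, not the authors' released code):

```python
import numpy as np

def crossover(A: np.ndarray, B: np.ndarray, c: int):
    """Exchange the first c sampling points of two EEG samples.

    A, B: arrays of shape (M, C) -- M time samples, C channels.
    c:    split position with 1 <= c <= M - 1.
    Returns the two mixed samples (A_tilde, B_tilde) of the same shape.
    """
    A_tilde = np.concatenate([B[:c], A[c:]], axis=0)
    B_tilde = np.concatenate([A[:c], B[c:]], axis=0)
    return A_tilde, B_tilde
```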
This transformation of any two EEG signals is encapsulated in the following function expression: $\displaystyle\\{\widetilde{\boldsymbol{A}},\widetilde{\boldsymbol{B}}\\}=T({\boldsymbol{A}},{\boldsymbol{B}},c)$ (1)

Furthermore, to take advantage of the diversity of group combinations, we pair samples randomly for crossover and separation. As illustrated in Fig. 3, the overall Meiosis data augmentation is designed as follows:

1) Individual pairing: For one original group of EEG signals $\boldsymbol{G}_{i}$=$\\{\boldsymbol{X}^{s_{k}}_{v_{i}}|k\\!=\\!1,2,...,2Q\\}$ (corresponding to a video clip $v_{i}$), the individual signals are randomly paired to form $Q$ pairs $\\{\boldsymbol{X}^{s_{1}}_{v_{i}},\boldsymbol{X}^{s_{1+Q}}_{v_{i}}\\},\\{\boldsymbol{X}^{s_{2}}_{v_{i}},\boldsymbol{X}^{s_{2+Q}}_{v_{i}}\\},...,\\{\boldsymbol{X}^{s_{Q}}_{v_{i}},\boldsymbol{X}^{s_{2Q}}_{v_{i}}\\}$ for crossover.

2) Crossover: Meiosis receives a randomly given split position $c$ and performs transformation (1) on each pair to obtain {$\\{\widetilde{\boldsymbol{X}}^{s_{k}}_{v_{i}},\widetilde{\boldsymbol{X}}^{s_{k+Q}}_{v_{i}}\\}|k=1,2,...,Q$}.

3) Separation: The transformed signals are randomly divided into two groups, with the two members of each pair required to enter different groups $A$ and $B$. Two homologous groups of EEG samples, $\boldsymbol{\widetilde{G}}_{i}^{A}=\\{\widetilde{\boldsymbol{X}}^{s_{k}}_{v_{i}}|k=1,2,...,Q\\}$ and $\boldsymbol{\widetilde{G}}_{i}^{B}=\\{\widetilde{\boldsymbol{X}}^{s_{k}}_{v_{i}}|k=Q+1,Q+2,...,2Q\\}$, are obtained that share similar group-level stimuli-related features.

We represent this augmentation of a group sample by the following function expression: $\displaystyle\\{\boldsymbol{\widetilde{G}}_{i}^{A},\boldsymbol{\widetilde{G}}_{i}^{B}\\}=Meiosis({\boldsymbol{G}}_{i})$ (2)

With Meiosis in place, from one minibatch of $P$ group samples $\mathcal{G}$, $2P$ group samples $\mathcal{\widetilde{G}}$ can be obtained as follows: $\displaystyle\mathcal{\widetilde{G}}=\\{\boldsymbol{\widetilde{G}}_{i}^{t}|i=1,2,...,P;t\in\\{A,B\\}\\}=Meiosis(\mathcal{G})$ (3)

$\boldsymbol{\widetilde{G}}_{i}^{A}$ forms a positive pair with $\boldsymbol{\widetilde{G}}_{i}^{B}$ and forms negative pairs with any of the other $2(P-1)$ group samples.

### III-D Base Encoder

To extract group-level stimuli-related features for contrastive learning, we first design a base encoder to extract individual-level stimuli-related features from each individual EEG sample. We introduce the base encoder $f$ : $\mathbb{R^{M\times C}}$ $\rightarrow$ $\mathbb{R^{D}}$, which maps an individual EEG sample $\boldsymbol{X}$ to its representation $\boldsymbol{h}$ in a 512-dimensional feature space. Based on the existing ResNet18-1D model [29], the base encoder is designed as follows. As illustrated in Fig. 4, it mainly contains 17 convolutional layers (Conv) with 1D kernels. The kernels of the first convolutional layer parallel the time axis of the EEG signal tensor and have a length of 9. Each residual block contains two convolutional layers with the same number and length of kernels. In each residual block, the kernels of the first layer parallel the time axis of the input EEG tensor, and those of the second layer parallel the channel axis. For the eight residual blocks, the kernel lengths are 15, 15, 11, 11, 7, 7, 3, and 3 in descending order.
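As an illustration of the kind of 1D-kernel residual block described above, a minimal PyTorch-style sketch is given below. It is a sketch under our own assumptions only: we realize the "1D kernels" as 2D convolutions with $(1,k)$ and $(k,1)$ kernels on an input of shape (batch, channels, C, M), the class name `ResBlock1D` is ours, and the exact channel widths, strides, and pooling positions of the authors' encoder follow Fig. 4 and are not reproduced here.

```python
import torch.nn as nn

class ResBlock1D(nn.Module):
    """One residual block in the spirit of the 1D-kernel ResNet described above.

    Input shape: (batch, channels, C, M), with C electrodes and M time samples.
    The first conv kernel lies along the time axis, the second along the
    electrode (channel) axis; k should be odd so the spatial size is preserved.
    """
    def __init__(self, channels: int, k: int):
        super().__init__()
        self.conv_time = nn.Conv2d(channels, channels, kernel_size=(1, k), padding=(0, k // 2))
        self.conv_chan = nn.Conv2d(channels, channels, kernel_size=(k, 1), padding=(k // 2, 0))
        self.bn1 = nn.BatchNorm2d(channels)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv_time(x)))   # convolution along the time axis
        out = self.bn2(self.conv_chan(out))            # convolution along the electrode axis
        return self.relu(out + x)                      # residual connection
```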
Max pooling with a 1D kernel (Maxpool), average pooling with a 1D kernel (Avgpool), batch normalization (BN), and rectified linear unit (ReLU) layers are shown at the corresponding positions in the figure. Through the base encoder, for an augmented group sample $\boldsymbol{\widetilde{G}}_{i}^{t}$, its set of individual-level stimuli-related representations {${\boldsymbol{h}}_{1},{\boldsymbol{h}}_{2},...,{\boldsymbol{h}}_{Q}$} is obtained by: $\displaystyle\boldsymbol{H}_{i}^{t}=f(\boldsymbol{\widetilde{G}}_{i}^{t})$ (4)

This set is used for further extracting group-level features. The individual representations can also be used for extracting emotional features for emotion classification.

Figure 4: Details of the architecture of the base encoder, group projector, and classifier. $Conv$ denotes a convolutional layer with a 1D kernel. $Maxpool$ and $Avgpool$ denote max pooling and average pooling with 1D kernels. $BN$ denotes batch normalization. $FC$ denotes a fully-connected layer. $RELU$ denotes a rectified linear unit.

### III-E Group Projector

The group projector aims to accurately project stimuli-related representations into a latent space from just 1-second EEG signals, for calculating the similarity of video clip stimuli. To alleviate the obstacles to extracting stimuli-related features from individual samples (fatigue, distraction, etc.), the group projector is designed to extract group-level features from multiple samples. A group of samples is an unordered set of matrices, for which there is no standard extraction method. Most models focus on regular input representations: for multi-channel images there is a fixed order between channels, and for video there is a fixed sequence between frames. For the problem of unordered point cloud classification, [37] proposed PointNet, which adopts a symmetric function to build a network that extracts features from unordered point clouds. Inspired by it, we adopt a symmetric function to design a model suitable for extracting features from groups of EEG signals. As illustrated in Fig. 4, the group projector consists of a base projector and the symmetric function MaxPool1D. To mitigate the loss of individual features, the dimension of each individual representation is first upgraded before aggregation. We introduce the base projector $l$ : $\mathbb{R^{D}}$ $\rightarrow$ $\mathbb{R^{H}}$, which adopts a multilayer perceptron (MLP) to project each individual representation $\boldsymbol{h}$ into a 4096-dimensional feature space. The base projector contains three fully-connected layers with 1024, 2048, and 4096 hidden units in ascending order and adopts ReLU as the activation function of the first two layers. Batch normalization and Dropout with 0.5 are shown at the corresponding positions in the figure. To ensure an output that represents the group sample invariantly under any permutation of the inputs, 1-dimensional max pooling (MaxPool1D) is adopted to aggregate the information from the dimension-upgraded representations. As illustrated in Fig. 4, the 1D kernel of MaxPool1D is perpendicular to the dimension-upgraded representation vectors; the kernel scans parallel to the upgraded representation vectors with a stride of 1 and padding of 0. This max pooling extracts the maximum value in each of the 4096 feature dimensions over the $Q$ dimension-upgraded representations to obtain the group-level feature representation in the latent space. We denote the group projector by $g$ : $\mathbb{R^{Q\times D}}$ $\rightarrow$ $\mathbb{R^{H}}$.
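To make the group projector just described concrete, the following is a minimal PyTorch-style sketch combining the MLP base projector with a symmetric max aggregation over the group axis, which has the same effect as the MaxPool1D over upgraded representations described above. The tensor shapes, the class name `GroupProjector`, and the omission of the batch normalization and dropout placements of Fig. 4 are our own simplifying assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GroupProjector(nn.Module):
    """Sketch of the group projector: MLP base projector + symmetric max
    aggregation over the Q samples of each group (cf. PointNet).

    Input:  (P, Q, D) individual representations h for P groups of Q samples.
    Output: (P, H) permutation-invariant group representations z.
    """
    def __init__(self, d_in: int = 512, d_out: int = 4096):
        super().__init__()
        self.base_projector = nn.Sequential(
            nn.Linear(d_in, 1024), nn.ReLU(),
            nn.Linear(1024, 2048), nn.ReLU(),
            nn.Linear(2048, d_out),
        )

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        u = self.base_projector(h)   # (P, Q, 4096) dimension-upgraded representations
        z, _ = u.max(dim=1)          # max over the group axis: order-invariant
        return z
```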
The extracted group representation in the latent space can be obtained through $g$ as follows: $\displaystyle\boldsymbol{z}_{v}^{t}=g(\boldsymbol{H}_{v}^{t})=MaxPool1D(l(\boldsymbol{h}_{1}),l(\boldsymbol{h}_{2}),...,l(\boldsymbol{h}_{Q}))$ (5)

### III-F Classifier

In the emotion classification fine-tuning task, we use the classifier to extract emotional features and predict emotion labels from the representations extracted by the base encoder. As illustrated in Fig. 4, the classifier mainly contains three fully-connected layers with 512, 256, and 128 hidden units in descending order. Batch normalization, ReLU, and Dropout with 0.5 are shown at the corresponding positions in the figure.

### III-G The Contrastive Loss

To measure the similarity of group-level stimuli-related features between two group samples, we calculate the cosine similarity of their group representation vectors. The input group samples $\\{\boldsymbol{\widetilde{G}}_{i}^{t}|i=1,2,...,P;t\in\\{A,B\\}\\}$ are mapped to group feature representations $\\{\boldsymbol{z}_{i}^{t}|i=1,2,...,P;t\in\\{A,B\\}\\}$ via the base encoder and group projector. Then, the similarity of two augmented group samples $\boldsymbol{\widetilde{G}}_{i}^{A}$ and $\boldsymbol{\widetilde{G}}_{j}^{B}$ can be calculated on $\boldsymbol{z}_{i}^{A}$ and $\boldsymbol{z}_{j}^{B}$: $\displaystyle s(\boldsymbol{z}_{i}^{A},\boldsymbol{z}_{j}^{B})=\frac{\boldsymbol{z}_{i}^{A}\cdot\boldsymbol{z}_{j}^{B}}{\|\boldsymbol{z}_{i}^{A}\|\|\boldsymbol{z}_{j}^{B}\|},\quad s(\boldsymbol{z}_{i}^{A},\boldsymbol{z}_{j}^{B})\in[0,1]$ (6)

The contrastive loss is designed to maximize the similarity between the two group-level representations of a positive pair, i.e., of groups sharing the same stimuli. Similar to the SimCLR framework [38], we adopt the normalized temperature-scaled cross-entropy to define the loss function as follows: $\displaystyle\ell_{i}^{A}=-\log\frac{\exp(s(\boldsymbol{z}_{i}^{A},\boldsymbol{z}_{i}^{B})/\tau)}{\sum_{j=1}^{P}\mathbb{1}_{[j\neq i]}\exp(s(\boldsymbol{z}_{i}^{A},\boldsymbol{z}_{j}^{A})/\tau)+\sum_{j=1}^{P}\exp(s(\boldsymbol{z}_{i}^{A},\boldsymbol{z}_{j}^{B})/\tau)}$ (7)

where $\mathbb{1}_{[j\neq i]}\in\\{0,1\\}$ is an indicator function equal to 1 if $j\neq i$, and $\tau$ is the temperature parameter of the softmax. The smaller the loss, the larger the similarity between $\boldsymbol{z}_{i}^{A}$ and $\boldsymbol{z}_{i}^{B}$, and the smaller the similarity between $\boldsymbol{z}_{i}^{A}$ and the other group representations from the same minibatch. Finally, the total loss for an iteration, used for backpropagation, is the average of all contrastive losses: $\displaystyle\mathcal{L}=\frac{1}{2P}\sum_{i=1}^{P}(\ell_{i}^{A}+\ell_{i}^{B})$ (8)

### III-H Pre-training Process

Based on the constructed group sampler, data augmentation, base encoder, group projector, and loss function, the SGMC pre-training can be performed. In pre-training, we first set a number of epochs $T_{1}$ and then iterate over epochs. In each epoch, we continue to sample $P$ video clips per iteration until all video clips are enumerated. In each iteration, the sampler extracts $2PQ$ EEG samples $\mathcal{D}=\\{\boldsymbol{X}^{s_{k}}_{v_{i}}|i=1,2,...,P;k=1,2,...,2Q\\}$ and packs them into groups $\mathcal{G}=\\{\boldsymbol{G}_{i}|i=1,2,...,P\\}$.
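For concreteness, the group-level contrastive loss of Eqs. (6)-(8) can be sketched in PyTorch as below; the function name `sgmc_loss` and the batching convention (rows of `z_a` and `z_b` with the same index share the same stimuli) are our own assumptions rather than the authors' implementation. The description of the pre-training iteration continues after the sketch.

```python
import torch
import torch.nn.functional as F

def sgmc_loss(z_a: torch.Tensor, z_b: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """Group-level NT-Xent loss in the spirit of Eqs. (6)-(8).

    z_a, z_b: (P, H) group representations of the two augmented halves;
              row i of z_a and row i of z_b come from the same stimuli.
    """
    z_a = F.normalize(z_a, dim=1)      # so dot products become cosine similarities
    z_b = F.normalize(z_b, dim=1)
    sim_ab = z_a @ z_b.t() / tau       # s(z_i^A, z_j^B) / tau
    sim_aa = z_a @ z_a.t() / tau       # s(z_i^A, z_j^A) / tau
    sim_ba = z_b @ z_a.t() / tau
    sim_bb = z_b @ z_b.t() / tau
    P = z_a.size(0)
    eye = torch.eye(P, dtype=torch.bool, device=z_a.device)

    def one_side(cross, same):
        pos = cross.diag()                                        # positive-pair term
        denom = torch.cat([same.masked_fill(eye, float('-inf')), cross], dim=1)
        return -(pos - torch.logsumexp(denom, dim=1)).mean()      # mean of l_i over i

    # average over all 2P per-group losses, as in Eq. (8)
    return 0.5 * (one_side(sim_ab, sim_aa) + one_side(sim_ba, sim_bb))
```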
Next, for the Meiosis data augmentation, to prevent the model from cheating by recognizing the split position, we randomly generate a fixed split position $c$ and pass it to every application of Meiosis in this iteration ($1<c<M-1$). The $2P$ augmented group samples $\widetilde{\mathcal{G}}=\\{\widetilde{\boldsymbol{G}}_{i}^{t}|i=1,2,...,P;t\in\\{A,B\\}\\}$ are then obtained by (3). We further extract group-level features and project them into the latent space to obtain group representations by (4) and (5). We then calculate the loss $\mathcal{L}$ by (6)-(8). Finally, we backpropagate the loss $\mathcal{L}$ to compute the gradients with which the optimizer updates the parameters of $f$ and $g$. The detailed procedure is summarized in Algorithm 1.

Algorithm 1 Self-supervised Group Meiosis Contrastive Learning
0: Number of video clips $P$ per minibatch, number of subjects $Q$ per group. Initialized base encoder $f$ and group projector $g$.
1: for $epoch=1$ to $T_{1}$ do
2: repeat
3: Sample $P$ video clips $\\{v_{i}|i=1,2,...,P\\}$.
4: Randomly select $2Q$ subjects $\\{s_{k}|k=1,2,...,2Q\\}$.
5: The sampler packs the minibatch $\mathcal{G}=\\{\boldsymbol{G}_{i}|i=1,2,...,P\\}$ from $\mathcal{D}=\\{\boldsymbol{X}^{s_{k}}_{v_{i}}|i=1,2,...,P;k=1,2,...,2Q\\}$.
6: Randomly generate a split position $c$.
7: Obtain $\widetilde{\mathcal{G}}=\\{\widetilde{\boldsymbol{G}}_{i}^{t}|i=1,2,...,P;t\in\\{A,B\\}\\}$ from $\mathcal{G}$ through Meiosis with $c$ by (1)-(3).
8: Obtain $\mathcal{Z}=\\{\boldsymbol{z}_{i}^{t}|i=1,2,...,P;t\in\\{A,B\\}\\}$ from $\widetilde{\mathcal{G}}$ through $f$ and $g$ by (4) and (5).
9: Calculate the loss $\mathcal{L}$ by (6)-(8).
10: Backpropagate the loss $\mathcal{L}$ and let the optimizer update the parameters of $f$ and $g$.
11: until all video clips are enumerated.
12: end for
Output: the base encoder $f$; the group projector $g$ is discarded.

### III-I Fine-tuning Process

To achieve excellent emotion classification performance, we further fine-tune the model with labeled samples on top of the learned feature representations. As illustrated in Fig. 1, supervised emotion classification training is performed on the model consisting of an initialized classifier and the SGMC pre-trained base encoder. We denote the training data as $\boldsymbol{X}$, their labels as $\boldsymbol{y}$, and the classifier as $k(\cdot)$. The label $\boldsymbol{y}$ is a categorical variable; for example, if there are four emotional categories, $\boldsymbol{y}$ can take four values: 0, 1, 2, or 3. We need to predict the emotion category $\boldsymbol{y}$ for each sample $\boldsymbol{X}\in\mathbb{R^{M\times C}}$. The pre-trained base encoder $f$ extracts the representation of the original EEG signal $\boldsymbol{X}$, from which the classifier $k(\cdot)$ extracts predictive features to obtain the predicted category $\boldsymbol{y^{pre}}=k(f(\boldsymbol{X}))$. We apply the cross-entropy function to define the loss for the emotion classification task and apply an optimizer to minimize this loss and optimize the parameters of the model. Finally, when the loss converges, a predictive EEG-based emotion recognition model is obtained.

## IV Experiments

In this section, we introduce the implementation details on the DEAP and SEED datasets and our experimental evaluation. In our experiments, we verify the effectiveness of the SGMC by comparing it with other competitive emotion recognition methods and evaluating its performance on learning with limited labeled samples. Further, we explore the reason for its effectiveness by visualizing the feature representations learned by the SGMC.
Moreover, we explore meaningful patterns of the framework by evaluating different combinations of hyper parameters. Furthermore, we verify the rationality of the architecture design by conducting control and ablation experiments.

### IV-A Implementation Detail

In this section, we elaborate on our implementation details for the datasets, the data processing, and the basic hyper parameters utilized in the experiments.

TABLE I: Hyper parameters utilized in the proposed SGMC | $Epoch$ | $batchsize$ | $lr$ | $\tau$ | $P$ | $Q$ | $Shape_{tr}$ | $Shape_{te}/Shape_{val}$ ---|---|---|---|---|---|---|---|--- DEAP | Pre-training | 2800 | 32 | $10^{-4}$ | $10^{-1}$ | 8 | 2 | $(1680,32,1,32,128)$ | $(360,32,1,32,128)$ Fine-tuning | 60 | 2048 | $10^{-3}$ | - | - | - | $(53760,1,32,128)$ | $(11520,1,32,128)$ SEED | Pre-training | 3288 | 64 | $10^{-3}$ | $10^{-1}$ | 16 | 2 | $(2374,45,1,62,200)$ | $(510,45,1,62,200)$ Fine-tuning | 70 | 256 | $10^{-3}$ | - | - | - | $(106380,1,62,200)$ | $(22950,1,62,200)$

* • $Shape_{tr},Shape_{te},Shape_{val}$ respectively denote the tensor sizes of the training, test, and validation datasets for pre-training or fine-tuning. $Epoch$ denotes an appropriate number of pre-training or fine-tuning epochs for achieving good emotion recognition performance. $batchsize$ denotes the number of samples in a minibatch.

(1) Dataset

DEAP: The widely-used DEAP dataset [21] includes 32-channel EEG signals and 8-channel peripheral physiological signals recorded from 32 subjects while each watched 40 one-minute music videos. Each trial was recorded with 3 seconds of resting state followed by 60 seconds of stimuli. The recorded EEG signals are down-sampled to a 128 Hz sampling rate and processed with a bandpass frequency filter from 4-45 Hz by the provider. After watching each video, subjects were asked to rate their emotional levels of arousal, valence, liking, and dominance from 1 to 9. We adopt the EEG signals and the rating values of arousal and valence to perform emotion recognition. We set the threshold on the arousal and valence ratings at 5: when the rating value is more than 5.0, the corresponding EEG signals are labeled as high arousal or valence; otherwise, they are labeled as low arousal or valence. Each EEG signal thus carries both a valence and an arousal label, which can be used to construct binary or four-category classification tasks.

SEED: The SEED dataset is widely used for emotion recognition algorithms [39]. The dataset recorded the EEG signals of 15 subjects while they watched 15 video clips selected from movies covering three categories of emotion: positive, neutral, and negative. Each video is about 4 minutes long. Each subject repeated the experiment for three sessions, with an interval of more than one week. The EEG signals were recorded via 62 electrodes at a sampling rate of 1000 Hz and have been downsampled to 200 Hz and filtered from 0 to 75 Hz by the provider.

(2) Data Process

On DEAP, we use a 1-second-long sliding window to separate the 63 s signal of each trial into 63 non-overlapping EEG signal segments. To improve accuracy, following existing work [28], we remove the 3 s resting-state signal from the 60 s of emotional-stimuli EEG signal. In detail, in each trial we average the 3 s of baseline EEG segments to get a 1 s average baseline segment; the average baseline segment is then subtracted from each of the remaining 60 segments to form the input samples.
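A minimal NumPy sketch of this baseline removal for a single DEAP trial is given below; the function name, the argument layout `(C, 63*fs)`, and the returned shape are illustrative assumptions rather than the authors' preprocessing code:

```python
import numpy as np

def remove_baseline(trial: np.ndarray, fs: int = 128) -> np.ndarray:
    """Baseline removal for one DEAP trial, as described above (illustrative).

    trial: (C, 63*fs) array -- 3 s of resting-state baseline followed by
           60 s of stimulus EEG, sampled at fs Hz over C channels.
    Returns a (60, C, fs) array of 1-second input samples with the average
    1-second baseline segment subtracted from each.
    """
    C = trial.shape[0]
    segments = trial.reshape(C, 63, fs).transpose(1, 0, 2)  # (63, C, fs) 1-second segments
    baseline = segments[:3].mean(axis=0)                    # average of the three baseline seconds
    return segments[3:] - baseline                          # (60, C, fs) stimulus samples
```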
All samples correspond to a total of 2400 (40 videos $\times$ 60 seconds) 1-second-long video clips. From these 2400 video clips, 1680, 360, and 360 1-second video clips are randomly divided into three sets in the ratio 70:15:15. These three sets of video clips, watched by 32 subjects, correspond to 53760, 11520, and 11520 (70:15:15) EEG segments, which are used as the training set, testing set, and validation set respectively. On SEED, we first perform an L2 normalization of each trial of EEG signal in each channel. Similar to the DEAP dataset, we divide the movie videos into 1-second windows. Because the trial videos have different lengths, we segment adjacent windows from front to back along the time axis until the coverage of the windows exceeds the video range. 3394 video clips are obtained from the 15 movie videos and randomly divided into 2734, 510, and 510 clips, three sets of video clips in the ratio of 70:15:15. These three sets of video clips, watched by 15 subjects three times, correspond to 123030, 22950, and 22950 (70:15:15) EEG segments, which are used as the training set, testing set, and validation set respectively.

(3) Basic Configuration

To accurately evaluate the emotion recognition performance of a pre-training framework, we adopt two steps to evaluate the results. We first save pre-trained models at different numbers of epochs. Next, we select the model with the highest average emotion recognition accuracy obtained over five runs of fine-tuning; this average accuracy is reported as the result. To speed up sampling, in the pre-training process we set the five axes of the dataset tensor to correspond to $video$ $clip$, $subject$, $1$, $channel$, and $sampling$ $point$ respectively. In the fine-tuning process, the first two axes, $video$ $clip$ and $subject$, are reshaped into a sample axis, so that each axis of the reshaped dataset corresponds to $sample$, $1$, $channel$, and $sampling$ $point$ in turn. In the pre-training task, each epoch traverses every video clip of the dataset, and a good pre-training run generally needs more than 2000 epochs. To reduce the workload, we use the validation dataset to adjust the hyper parameters of the SGMC framework and use the test dataset to evaluate the model. The tensor shapes of the training set, testing set, and validation set are represented as $Shape_{tr}$, $Shape_{te}$, and $Shape_{val}$ and are listed in Table I. In this paper, we use PyTorch [40] to implement our experiments on an NVIDIA RTX3060 GPU. The Adam optimizer [41] is used to minimize the loss functions for both the pre-training and fine-tuning processes. We denote by $lr$ the learning rate of the optimizer. In the pre-training and fine-tuning processes, the number of epochs, batch size, temperature parameter $\tau$, learning rate $lr$, number of video clips per iteration $P$, number of samples per group $Q$, and size of the dataset tensor take different values; Table I lists all hyper parameters utilized in the two processes on the DEAP and SEED datasets.
### IV-B Emotion Classification Performance

(1) Performance on DEAP

As illustrated in Table II, on the DEAP dataset we first compare the SGMC with four state-of-the-art methods in the two emotion dimensions of valence and arousal: a parallel convolutional recurrent network, CNN-LSTM [28], a residual long short-term memory network utilizing multi-modal data, MMResLSTM [43], a channel-fused dense convolutional network, CDCN [42], and a hybrid network of convolutional and recurrent neural networks with a channel-wise attention mechanism, ACRNN [44]. From Table II, it can be found that the accuracy of the proposed SGMC is 1% higher than the second best in the valence dimension and 2.3% higher in the arousal dimension. The comparison results demonstrate the effectiveness of the SGMC for EEG-based emotion recognition. To verify the effectiveness of the proposed framework with respect to data augmentation and self-supervised learning, we further compare the SGMC with a GAN-based data augmentation method, MCLFS-GAN [45], and a self-supervised GAN-based data augmentation framework, GANSER [36]. In particular, following the experimental settings of MCLFS-GAN [45] and GANSER [36], we further perform a comparison on a four-category classification problem: distinguishing EEG signals of high valence and high arousal, high valence and low arousal, low valence and high arousal, and low valence and low arousal. In Table II, it can be found that the proposed method outperforms the existing data augmentation and self-supervised learning methods by $11.33\%$ and $2.91\%$ on four-category classification. Meanwhile, the confusion matrices of the SGMC on four-category classification are presented in Fig. 5. They show that the SGMC achieves good performance in each category, especially low arousal and high valence. Furthermore, we compare the proposed SGMC with our own fully-supervised baseline using the same network model without pre-training. In the valence, arousal, and four-category dimensions, the accuracy of the SGMC exceeds the fully-supervised baseline by $3.49\%$, $3.32\%$, and $4.97\%$, which shows a significant improvement in emotion recognition.

TABLE II: Performances on DEAP Method | Valence | Arousal | Four ---|---|---|--- CNN-LSTM (2020)[28] | 90.82 | 86.13 | - CDCN (2020)[42] | 92.24 | 92.92 | - MMResLSTM (2019)[43] | 92.87 | 92.30 | - ARCNN (2019)[44] | 93.72 | 93.38 | - MCLFS-GAN (2020)[45] | - | - | 81.32 GANSER (2022)[36] | 93.52 | 94.21 | 89.74 Proposed(Fully-supervised) | 91.23 | 92.36 | 87.68 Proposed(Fine-tuned) | 94.72 | 95.68 | 92.65

* • Average accuracy($\%$) of state-of-the-art methods on the DEAP dataset for valence classification, arousal classification, and four-category classification.

Figure 5: The confusion matrix of classification on DEAP

(2) Performance on SEED

As illustrated in Table III, similarly to DEAP, we first compare our proposed SGMC with four fully-supervised state-of-the-art studies: GRSLR [46], adopting a graph regularized sparse linear regression model, BiHDM [24], utilizing two independent recurrent networks for the left and right hemispheres of the brain, DGCNN [47], adopting a dynamic graph convolutional neural network, and a 1D CNN-based residual neural network, ResNet18 [29]. Results are reported as accuracy on the three-category classification task of positive, neutral, and negative emotions. The details of the classification results are shown in the confusion matrix in Fig. 6.
The SGMC achieves good accuracy in all three categories, performing better on positive than on negative and neutral. As illustrated in Table III, the proposed SGMC outperforms the four state-of-the-art studies, reflecting its good emotion recognition performance on SEED. Further, we compare the SGMC with our fully-supervised baseline using the same model. Notably, the SEED dataset has nearly five times the data volume of the DEAP dataset. It can therefore better reflect the ability of self-supervised learning to utilize a large number of unlabeled samples to make up for scarce, accurate artificial labels. We report results obtained from fine-tuning with four different percentages of labeled samples from the training set (based on pre-training on the full training set). With $1\%$, $10\%$, and $50\%$ of the labeled samples, the SGMC exceeds our fully-supervised baseline by $44.84\%$, $33.52\%$, and $8.24\%$ respectively. These results show that the proposed SGMC can take advantage of the consistency of stimuli to significantly make up for accurate artificial labels. Using the full set of labeled training samples for fine-tuning, the SGMC also significantly exceeds our fully-supervised baseline, by over $4.27\%$. This shows that the SGMC contributes a significant improvement by utilizing large amounts of unlabeled data.

TABLE III: Performances on SEED Method | Accuracy($\%$) | ---|---|--- Percentage of labels | $1\%$ | $10\%$ | $50\%$ | 100% | GRSLR(2018)[46] | - | - | - | 87.39 | DGCNN(2018)[47] | - | - | - | 90.40 | BiHDM(2019)[24] | - | - | - | 93.12 | ResNet18 1D kernel(2021)[29] | - | - | - | 93.43 | Proposed(Fully-supervised) | 44.81 | 59.77 | 85.47 | 89.83 | Proposed(Fine-tuned) | 89.65 | 93.29 | 93.71 | 94.04 |

* • Average accuracy($\%$) of state-of-the-art methods on the SEED dataset for positive, neutral, and negative three-category classification. Percentages of labels denote the labeled samples used to train emotion recognition as a percentage of the full training set.

Figure 6: The confusion matrix of classification on SEED

Figure 7: Effect of the labeled sample size on emotion recognition; the left panel shows results on DEAP and the right on SEED. The red line represents the model pre-trained with the SGMC on the full training set and fine-tuned with different numbers of labeled samples per category. The blue line represents the model trained only with full supervision on the same numbers of labeled samples per category. The results are the average test accuracy over five runs of emotion classification training, and the shaded area represents the standard deviation.

### IV-C Performance on Limited Labeled Sample Learning

Based on the above results on SEED, it can be found that fewer labeled samples can also lead to good results. To evaluate the performance of learning with limited labeled samples, we further evaluate the results on DEAP and SEED as the number of labeled samples per category increases. We compare a model pre-trained with the SGMC on the full training set against an initialized model, performing fine-tuning and fully-supervised training, respectively, with the same limited labeled samples. On DEAP the results use the four-category classification of arousal and valence; on SEED they use the three-category classification. As illustrated in Fig. 7, the results for different numbers of labeled samples per category for fine-tuning/fully-supervised learning are reported.
We find that in every labeled-sample regime, the accuracy of the SGMC after fine-tuning is significantly superior to the fully-supervised baseline, and the gap is larger in the low-label regime. On the DEAP dataset, when the number of labeled samples per category is over 10, the SGMC significantly outperforms the supervised baseline. When fine-tuned with 5000 labeled samples per category (37.2% of the full training set), the SGMC reaches a good accuracy of $87.51\%$, which is close to the $87.68\%$ accuracy of fully-supervised training with the full training set. On the SEED dataset, when fine-tuned with only one labeled sample per category ($0.00278\%$ of the training set), the SGMC reaches an accuracy of $59.42\%$. When fine-tuned with 50 samples per category ($0.14\%$ of the training set), its accuracy of 91.01% outperforms the fully-supervised baseline trained with 100% of the labeled data. Further, we observe that when the number of labeled samples per category exceeds 500, the curve has converged. This shows that the SGMC enables a significant decline in the demand for artificial labels and reflects that the consistency of stimuli has been well exploited to make up for artificial labels.

### IV-D Representation Visualization

To explore how the SGMC contributes to superior performance on emotion recognition, we visualize the learned feature representations of the SGMC fine-tuned model and of the purely fully-supervised model.

Figure 8: t-SNE visualization of feature representations on SEED for the fully-supervised model (left) and the SGMC fine-tuned model (right). The top row is colored by movie video, with different colors representing the 15 videos. The bottom row is colored by emotion label, with three colors representing positive, neutral, and negative videos respectively.

As illustrated in Fig. 8, the 512-dimensional feature representations extracted by the base encoder from the samples of the full SEED testing set are projected to two dimensions through t-SNE [48]. In the top row, 15 colors represent samples corresponding to the 15 trial videos (each about 4 minutes long). It can be found that in the visualization of the SGMC fine-tuned model (right), the feature representations of the same video tend to gather together and form 15 distinguishable groups. On the contrary, in the fully-supervised visualization (left), the representations corresponding to the different videos cannot be distinguished clearly. The visualization reveals that the SGMC not only learns stimuli-related feature representations but also enables the model to distinguish whether different stimuli come from the same continuous video. Further, we mark the corresponding emotion labels with three colors in the bottom row. In the fully-supervised visualization (left), more indistinguishable representations with different emotion labels are mixed together. In the SGMC fine-tuned visualization (right), fewer feature representations with different emotion labels are mixed together, showing better emotional discrimination. This reflects that the SGMC enables the model to learn video-level stimuli-related representations that improve emotion recognition performance.

### IV-E Effect of Hyper Parameters

To explore the effect of the number of samples per group ($Q$) and the number of selected video clips per iteration ($P$) on contrastive learning, we evaluate various combinations of these hyper parameters.
In our experimental strategy, for each given $Q$ we evaluate various values of $P$, including $2,4,8,16,32,64$, and select the one achieving the best emotion recognition result as the appropriate $P$ for that $Q$. The emotion recognition results for the different $Q$ are illustrated in Fig. 9. The appropriate $P$, the appropriate number of pre-training epochs, and the corresponding pre-training accuracy for the different $Q$ are reported in Table IV.

TABLE IV: Illustration of the appropriate combinations of the hyper parameters $Q$ and $P$ in the hyper parameter analysis on DEAP and SEED. DEAP | SEED ---|--- Q | P | $Epoch_{pre}$ | $acc_{pre}$ | Q | P | $Epoch_{pre}$ | $acc_{pre}$ 1 | 16 | 440 | 70.56 | 1 | 8 | 1992 | 71.58 2 | 8 | 2800 | 91.11 | 2 | 16 | 3288 | 80.68 3 | 8 | 3600 | 87.50 | 3 | 32 | 1296 | 66.24 4 | 4 | 800 | 93.06 | 4 | 32 | 744 | 69.82 8 | 4 | 475 | 96.94 | 7 | 32 | 548 | 72.45 16 | 4 | 450 | 97.08 | | | |

* • $Epoch_{pre}$ denotes the appropriate number of pre-training epochs, $acc_{pre}$ the accuracy of the pre-training task, $Q$ the number of samples per group, and $P$ the number of sampled video clips per iteration.

Figure 9: The effect of group size on accuracy for DEAP (left) and SEED (right). The x-axis represents the number of samples per group ($Q$). The four colored lines show the various percentages of labeled samples in the training set used for fine-tuning.

On DEAP, when $Q=2$ and $P=4$, the SGMC achieves the best performance. On SEED, when $Q=2$ and $P=16$, the SGMC achieves the best performance. Further, it can be observed that opposite trends exist on the DEAP and SEED datasets: for a larger $Q$, the appropriate $P$ on DEAP tends to be smaller, whereas on SEED it tends to be larger. A possible reason is the difference in labeling between the two datasets. On SEED, the emotional labels are assigned by the experiment designers and are determined by the emotional attributes of the video stimuli. On DEAP, emotional labels are assigned from the ratings of the subjects; such labeling is more related to the personalized differences of the subjects than the labeling of SEED is. Moreover, the larger $P$ is, the more difficult contrastive learning becomes, and the more the model is encouraged to focus on extracting stimuli-related features and to ignore personalized features that are irrelevant to the stimuli. So a larger $P$ leads to better results on SEED and hinders better results on DEAP. This indicates that a smaller $P$ should be considered first when the data are labeled by the subjects, and a larger $P$ should be considered first when the data are labeled by the emotional attributes of the stimuli. Furthermore, it can be found that, generally, the greater the $Q$ (with $P$ held constant), the greater the pre-training accuracy. A possible reason is that a larger group sample contains more comprehensive group-level stimuli-related features, alleviating the interference of random distractions, fatigue, and individual differences. However, good pre-training accuracy is not always beneficial to emotion recognition. Too small a $Q$ leads to lower pre-training accuracy, which hinders the learning of meaningful representations. Too large a $Q$ leads the model to focus on the aggregation of group-level stimuli-related features and leads the base encoder to neglect learning some emotion-related features, hindering better emotion recognition.
It is therefore critical to select an appropriate $Q$ for constructing the group-sample-based contrastive learning.

### IV-F Architecture Design Analysis

In this section, we validate our design choices through control and ablation experiments. We first verify the rationality of the symmetric function we chose. We then evaluate the rationality of the strategies of constructing the group sample, utilizing Meiosis augmentation, and constructing the positive-negative pairs.

(1) Comparison with Various Symmetric Functions

The SGMC selects the symmetric function MaxPool1D to construct the group projector. To verify this choice, we compare MaxPool1D with the common, similar AvgPool1D and with an opposite MinPool1D, implemented by taking the minimum value in each dimension of the upgraded representations. As illustrated in Fig. 10 and Table V, MaxPool1D is significantly better than the others. A possible reason is that MaxPool1D is more beneficial for the model in selecting emotion-related features to extract from the upgraded feature representations. Although MinPool1D also has such a selection ability, the features it selects are more detrimental to learning emotion-related representations. This verifies the rationality of using MaxPool1D to aggregate group features for contrastive learning.

TABLE V: Illustration of the appropriate number of pre-training epochs on DEAP and SEED when comparing the symmetric functions. Symmetric function | $Epoch_{pre}$ ---|--- DEAP | SEED MinPool1D | 2440 | 1480 AvgPool1D | 1720 | 2472 MaxPool1D | 2880 | 3288

Figure 10: Emotion classification accuracy of the fully-supervised model and the SGMC fine-tuned model with various symmetric functions on DEAP (left) and SEED (right). The x-axis represents the percentage of labeled samples in the training set used for fine-tuning/supervised training.

(2) Ablation Study

To investigate the rationality of some novel designs of the architecture, we conduct an ablation study on three components: the group sample, Meiosis data augmentation, and stimuli consistency. We obtain new versions by removing one or two components; the evaluation strategy is consistent with the basic configuration. When the group sample is ablated, we use individual samples for contrastive learning (i.e., we set $Q=1$). When Meiosis data augmentation is ablated, to augment the group/individual samples we skip the crossover process and go directly to the separation process after completing individual pairing. After removing stimuli consistency, we change the way positive pairs are constructed: instead of using samples sharing the same stimuli, the sampler randomly samples EEG signals with any stimuli to form the sample groups for augmentation and pair construction.

TABLE VI: The components of the five new versions and the complete SGMC, and the appropriate number of epochs for pre-training each version on DEAP and SEED. Method | Group | Augment | Consistent | $Epoch_{pre}$ ---|---|---|---|--- DEAP | SEED Non-group | ✗ | Crossover | ✓ | 440 | 1848 Non-augment | ✓ | No augment | ✓ | 800 | 2280 Mixup-augment | ✓ | Mixup | ✓ | 275 | 1304 Non-consistent | ✓ | Crossover | ✗ | 60 | 2752* Consistent-only | ✗ | No augment | ✓ | 1740 | 2368 Proposed | ✓ | Crossover | ✓ | 2800 | 3288

* • * Non-consistent leads to worse emotion recognition performance than fully-supervised training on the SEED dataset, so we adopt the result obtained when the loss function of pre-training converges.
* • $Epoch_{pre}$ denotes the appropriate number of pre-training epochs.
* • "No augment" denotes ablating the crossover, "Crossover" denotes adopting the crossover for data augmentation, and "Mixup" denotes substituting Mixup for the crossover.

Figure 11: Emotion classification accuracy of the five new versions, the fully-supervised model, and the complete SGMC on DEAP (left) and SEED (right). The x-axis represents the percentage of labeled samples in the training set used for fine-tuning/supervised training.

The results of the four-category classification on DEAP and the three-category classification on SEED are reported in Fig. 11. The details of the ablations and the numbers of pre-training epochs are reported in Table VI. To verify the effectiveness of the group sample in the SGMC, we design a version Non-group by removing the group sample. It can be observed that the emotion recognition performance declines significantly, by more than 1.5% on both DEAP and SEED. This reflects that the group sample is important for alleviating the obstacles to contrastive learning in the SGMC framework. To verify the effectiveness of Meiosis augmentation, we design a version Non-augment by removing the Meiosis augmentation. It can be observed that the accuracy decreases significantly, by more than 3% on DEAP and by more than 1.2% on SEED. This verifies the critical role of Meiosis data augmentation in improving emotion recognition in the SGMC. To verify the superiority of Meiosis in utilizing the stimuli alignment within the group sample, we design a competitive version Mixup-augment. For the mutual augmentation of two samples, one naturally thinks of Mixup [49] data augmentation, which mixes two samples to generate two new samples. We construct the Mixup-augment version by substituting Mixup for the crossover of Meiosis. The results show that the Meiosis-based SGMC significantly exceeds Mixup-augment, by more than 2.3% on DEAP and 2.6% on SEED. This shows the effectiveness of designing the Meiosis data augmentation by mimicking the physiological mechanism of meiosis. To verify the importance of constructing the positive-negative pairs based on consistent stimuli, we design a version Non-consistent by removing stimuli consistency. The results decrease significantly, by more than 2.7% on DEAP, and are even lower than the fully-supervised baseline on SEED. This reflects that stimuli consistency is critical for guiding the learning of meaningful stimuli-related feature representations by constructing instructive positive-negative pairs. Further, to investigate how exploitable the potential stimuli consistency is, we evaluate a version Consistent-only by removing the group sample and the Meiosis augmentation and keeping only stimuli consistency for contrastive learning. The result exceeds the fully-supervised baseline by more than 1.7% on DEAP and by more than 0.6% on SEED, which indicates that the consistency of stimuli is exploitable but hindered.

## Conclusion and Future Work

In this work, a Self-supervised Group Meiosis Contrastive learning (SGMC) framework is designed to improve emotion recognition. In the proposed framework, Meiosis data augmentation is introduced to augment EEG group samples without changing stimuli features. A base encoder and a group projector are designed in the model to extract group-level feature representations. Using the consistency of stimuli, contrastive learning is designed to learn stimuli-related feature representations.
The proposed framework achieves state-of-the-art emotion recognition results on the DEAP, and also reaches competitive performance on the SEED datasets. Compared to the fully-supervised baseline, the SGMC improves emotion recognition significantly, especially when there are limited labels. In addition, the results of feature visualization suggest that the model might have learned the video-level feature representations, and improves the performance of the model. The hyper parametric analysis further demonstrates the role of group samples during emotion recognition. Finally, the rationality of the framework design including the selection of symmetric functions, the construction of the positive-negative pairs, and Meiosis data augmentation are verified. In the future, we will continue to develop such kinds of group-sample-based SSL frameworks while with low calculation costs. ## References * [1] U. Cowie, E. Douglas-Cowie, N. Tsapatsoulis, G. Votsis, S. Kollias, W. Fellenz, and J. G. Taylor, ”Emotion recognition in human-computer interaction”, IEEE Signal processing magazine, vol. 18, no. 1, pp. 32–80, 2001\. * [2] I. Ariely and G. S. Berns,”Neuromarketing: the hope and hype of neuroimaging in business”, Nature reviews neuroscience, vol. 11, no. 4, pp. 284–292, 2010. * [3] D. Song, W. Zheng, P. Song, and Z. Cui, ”EEG emotion recognition using dynamical graph convolutional neural networks,” Trans. Affective Computing, 2020, vol. 11, no. 3, pp. 532–541. * [4] E. Du, C. Ma, G. Zhang, J. Li, Y.-K. Lai, G. Zhao, X. Deng, Y.-J. Liu, and H. Wang, “An efficient LSTM network for emotion recognition from multichannel EEG signals,” Trans. Affective Computing, pp. 1–1, 2020. * [5] Y. Tao, C. Li, R. Song, J. Cheng, Y. Liu, F. Wan, and X. Chen, ”EEG-based emotion recognition via channel-wise attention and self attention” , Trans. Affective Computing, 2020, pp. 1–1. * [6] F. Becker, J. Fleureau, P. Guillotel, F. Wendling, I. Merlet, and L. Al bera, ”Emotion recognition based on high-resolution EEG recordings and reconstructed brain sources” , Trans. Affective Computing, vol. 11, no.2, 2020, pp. 244–257. * [7] H. Zhang, M. Yu, Y.-J. Liu, G. Zhao, D. Zhang, and W. Zheng, ”SparseDGCNN: Recognizing emotion from multichannel EEG signals” , Trans. Affective Computing, 2021, pp. 1–1. * [8] I. Ye Z, Xie X, Y Liu, et al, ”Understanding Human Reading Comprehension with brain signals”,arXiv preprint, arXiv: 2108.01360, 2021. * [9] F. Alimardani. M , Hermans. A , Tinga. M, et al, ”Assessment of Empathy in an Affective VR Environment using EEG Signals”, arXiv preprint, arXiv: 2003.10886, 2020. * [10] B. Giakoumis D , Tzovaras D , Moustakas K , et al, ”Automatic Recognition of Boredom in Video Games Using Novel Biosignal Moment-Based Features”, Trans. Affective Computing, 2011, pp. 119–133. * [11] L. Kalaganis F P , Adamos D A , Laskaris N A . ”Musical NeuroPicks: a consumer-grade BCI for on-demand music streaming services”, Neurocomputing, 2017, pp. 65–75. * [12] H. Pandey P , Swarnkar R , Kakaria S , et al. Understanding Consumer Preferences for Movie Trailers from EEG using Machine Learning, Annual Conference of Cognitive Science 2020. * [13] P. Balasubramanian S , Gullapuram S S , Shukla A . ”Engagement Estimation in Advertisement Videos with EEG”,arXiv preprint arXiv:1812.03364, 2018. * [14] J. Banville H , Chehab O , Hyvrinen A , et al, ”Uncovering the structure of clinical EEG signals with self-supervised learning”, Journal of Neural Engineering, 2021, 18(4): 046020 (22pp). * [15] I. 
# Floquet Nonequilibrium Green’s functions with Fluctuation-Exchange Approximation: Application to Periodically Driven Capacitively Coupled Quantum Dots Thomas D. Honeychurch and Daniel S. Kosov, College of Science and Engineering, James Cook University, Townsville, QLD, 4811, Australia ###### Abstract We study the dynamics of two capacitively coupled quantum dots, each coupled to a lead. A Floquet Green’s function approach is used to describe the system’s dynamics, with the electron-electron interactions handled within the fluctuation-exchange approximation. While electrons cannot move between the separate sections of the device, energy transfer occurs under periodic driving of one of the leads. This process was found to proceed in four stages. The energy transfer was also found to be sensitive to the driving frequency of the leads, with an optimal frequency corresponding to the most effective completion of the four stages of the identified process. ## I Introduction Capacitive coupling offers a unique tool for investigating and designing open quantum systems. Of particular interest are systems where energy transport occurs between regions without accompanying charge transport. Interesting phenomena that utilize capacitive coupling include Coulomb drag Keller2016 ; Sierra2019 and heat rectification Tesser2022 . Capacitively coupled quantum dots offer a simple testbed for such phenomena: the heat current across capacitively coupled quantum dots Arrachea2020 ; Harbola2019 and their use in energy-harvesting devices Dare2019 ; Sanchez2011 ; Thierschmann2015 ; Sothmann2015 ; Kaasbjerg2017 have both been investigated. Time-dependent driving of lead and gate voltages offers a further avenue for exploring particle and energy transport within quantum devices, most commonly in the form of periodic driving Ludovico2016_review : energy transport and entropy production of a noninteracting single electronic level with a periodically modulated gate voltage have been discussed Ludovico2016 ; Ludovico2014 ; the periodic modulation of parameters has also been utilized to investigate nanoscale thermal machines Dare2016 ; Ludovico2018 ; Juergens2013 ; and the AC linear response of both particle and heat currents has been investigated for a mesoscopic capacitor Sanchez2013 . In the context of capacitively coupled devices, the electrothermal admittance has been calculated for a nanoscale parallel-plate capacitor in the linear-response regime Chen2015 , and, most recently, the energy transfer in a system of capacitively coupled dots was investigated when the gate voltage of one dot is modulated periodically Ludovico2022 . This paper investigates the energy transfer between two capacitively coupled quantum dots, each connected to a respective lead [see Fig. 1]. We study the energy and particle transport within the system due to the periodic driving of one lead’s energies. While particles cannot move between the two subsystems, the capacitive coupling between the dots allows energy transfer through the system. We make use of a Floquet nonequilibrium Green’s function approach Brandes1997 ; Honeychurch2020 ; Honeychurch2023 ; Haughian2017 ; Aoki2014 , allowing for the exploration of nonadiabatic drivings. The Coulomb interaction is handled with self-consistent perturbation theory, using the fluctuation-exchange (FLEX) approximation Schlunzen2020 . A self-consistent approximation, FLEX includes particle-particle and particle-hole T-matrix terms as well as GW terms [see Fig. 2].
FLEX subsumes the advantages of its constituent terms, making it applicable to a wide variety of interaction strengths and occupationsSchlunzen2020 . It was found that the average energy current through the system is sensitive to the driving frequency, with a frequency corresponding to the maximum energy transference observed. This energy transfer was found to be described by a four-stage process. The effects of the other parameters were also investigated. The paper is organized as follows: Section II lays out theory and implementation; Section III investigates the energy transfer while one lead is driven periodically; and within section IV the paper’s results are summarized and extensions suggested. Natural units for quantum transport are used throughout the paper, with $\hbar$, $e$, and $k_{B}$ set to unity. $\mu_{\alpha}$$T$$\epsilon_{k\alpha}(t)$$\mu_{\beta}$$T$$\Gamma_{\alpha}$$\Gamma_{\beta}$$\epsilon_{k\beta}$$\epsilon_{A}$$\epsilon_{B}$$U$ Figure 1: Schematic representation of the model investigated. The two quantum dots are coupled to noninteracting electron reservoirs and coupled to each other by Coulomb interaction. Within the investigation, energies of reservoir $A$ are driven harmonically, resulting in a nonzero current between the dots and reservoirs and energy transfer between the reservoirs. ## II Theory ### II.1 Hamiltonian and NEGF For simplicity, we focus on two spinless dots, $A$ and $B$, coupled to an associated lead, labeled $\alpha$ and $\beta$, and coupled capacitively: $\begin{split}H(t)=H_{A}+H_{B}+H_{int}+H_{A\alpha}+H_{B\beta}\\\ +H_{\alpha}(t)+H_{\beta},\end{split}$ (1) $H_{S}=\epsilon_{S}\hat{d}_{S}^{\dagger}\hat{d}_{S},\;\;\;\;\;\;\;\;\;H_{int}=U\hat{d}_{A}^{\dagger}\hat{d}_{A}\hat{d}_{B}^{\dagger}\hat{d}_{B},$ (2) $H_{S\sigma}=\sum_{k}t_{k\sigma S}\;\hat{c}^{\dagger}_{k\sigma}\hat{d}_{S}+t^{*}_{k\sigma S}\;\hat{d}^{\dagger}_{S}\hat{c}_{k\sigma},$ (3) and $H_{\sigma}=\sum_{k}\left(\epsilon_{k\sigma}+\psi_{\sigma}\left(t\right)\right)\hat{c}^{\dagger}_{k\sigma}\hat{c}_{k\sigma}.$ (4) Here, $S$ refers to a dot, $\sigma$ refers to its corresponding lead, and $\bar{S}$ and $\bar{\sigma}$ refer to the opposing dot and lead, respectively. The two interacting dots’ energies are given by $\epsilon_{S}$, and the electron-electron repulsion between the sites, given by $H_{int}$, has a strength $U$. The coupling of the quantum dots to their respective leads is governed by $H_{k\sigma S}$, with $t_{k\sigma S}$ denoting a hopping between the lead site $k\sigma$ and the dot $S$. The leads are taken as noninteracting, with the explicit time-dependence entering the Hamiltonian via $\psi_{\sigma}(t)$, which varies the energies $\epsilon_{k\sigma}$. 
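To make the role of the capacitive term in Eq. (2) concrete, the isolated double dot can be written as a diagonal matrix in its four-dimensional Fock space: the interaction only shifts the doubly occupied state by $U$. The following minimal NumPy sketch (not part of the original paper; the parameter values follow those used later in Figs. 4 and 5) verifies this.

```python
import numpy as np

# Illustrative parameters (in units where hbar = e = k_B = 1, as in the paper)
eps_A, eps_B, U = -0.2, -0.2, 0.4

# Fock basis of the isolated double dot: |n_A, n_B> with n = 0 or 1
basis = [(0, 0), (1, 0), (0, 1), (1, 1)]

# H_A + H_B + H_int is diagonal in this basis (no tunnelling between the dots)
H_dots = np.diag([eps_A * nA + eps_B * nB + U * nA * nB for nA, nB in basis])

print(H_dots.diagonal())
# [ 0.  -0.2 -0.2  0. ]  -> the doubly occupied state is shifted up by U = 0.4
```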
Figure 2: The Feynman diagrams considered within the investigation: the single-bubble ($\Sigma^{SB}_{S}$), GW ($\Sigma^{GW}_{S}$), particle-particle T-matrix ($\Sigma^{TPP}_{S}$), and particle-hole T-matrix ($\Sigma^{TPH}_{S}$) contributions to the interaction self-energy.
Here, $S$ refers to the red fermionic line and corresponds to $G_{S}(\tau,\tau^{\prime})$. The blue fermionic line corresponds to the opposing dot’s Green’s function, $G_{\bar{S}}(\tau,\tau^{\prime})$. To model the system out of equilibrium, we make use of nonequilibrium Green’s functions: $G_{S}(\tau,\tau^{\prime})=-i\left\langle T_{c}\left(d_{S}\left(\tau\right)d^{\dagger}_{S}\left(\tau^{\prime}\right)\right)\right\rangle,$ (5) with the equation of motion, $\begin{split}\left(i\frac{\partial}{\partial\tau}-\epsilon_{S}\right)G_{S}\left(\tau,\tau^{\prime}\right)\\\ -\int_{c}d\tau_{1}\;\Sigma_{S}\left(\tau,\tau_{1}\right)G_{S}\left(\tau_{1},\tau^{\prime}\right)=\delta_{c}\left(\tau-\tau^{\prime}\right),\end{split}$ (6) where the self-energy term consists of contributions from the associated lead and the interaction between the quantum dots: $\Sigma_{S}\left(\tau,\tau^{\prime}\right)=\Sigma_{\sigma}\left(\tau,\tau^{\prime}\right)+\Sigma^{int}_{S}\left(\tau,\tau^{\prime}\right).$ (7) To account for the capacitive coupling between the dots, we make use of self- consistent perturbation theory: $\Sigma^{int}_{S}\left(\tau,\tau^{\prime}\right)=-iUG_{\bar{S}}\left(\tau,\tau^{+}\right)\delta(\tau,\tau^{\prime})+\Sigma^{corr}_{S}(\tau,\tau^{\prime}),$ (8) where the correlations were investigated with the FLEX approximationSchlunzen2020 : $\begin{split}\Sigma^{FLEX}_{S}(\tau,\tau^{\prime})=\Sigma^{TPP}_{S}(\tau,\tau^{\prime})+\Sigma^{TPH}_{S}(\tau,\tau^{\prime})\\\ +\Sigma^{GW}_{S}(\tau,\tau^{\prime})-2\Sigma^{SB}_{S}(\tau,\tau^{\prime}).\end{split}$ (9) The single-bubble, GW, particle-particle, and particle-hole T-matrix approximations follow standard definitions here [see Fig. 2]. The single- bubble approximation is given by $\Sigma^{SB}_{S}\left(\tau,\tau^{\prime}\right)=UG_{\bar{S}}\left(\tau,\tau^{\prime}\right)G_{\bar{S}}\left(\tau^{\prime},\tau\right)UG_{S}\left(\tau,\tau^{\prime}\right).$ (10) The $GW$ self-energy is given by $\begin{split}\Sigma^{GW}_{S}(\tau,\tau^{\prime})=iW_{S}^{ns}\left(\tau,\tau^{\prime}\right)G_{S}\left(\tau,\tau^{\prime}\right),\end{split}$ (11) $\begin{split}W^{ns}_{S}\left(\tau,\tau^{\prime}\right)=\Phi_{S}\left(\tau,\tau^{\prime}\right)\\\ +\int_{c}\tau_{1}\int_{c}\tau_{2}\;\Phi_{S}\left(\tau,\tau_{1}\right)P_{S}\left(\tau_{1},\tau_{2}\right)W_{S}^{ns}\left(\tau_{2},\tau^{\prime}\right),\end{split}$ (12) $\begin{split}\Phi_{S}\left(\tau,\tau^{\prime}\right)=UP_{\mkern 1.5mu\overline{\mkern-1.5muS\mkern-1.5mu}\mkern 1.5mu}\left(\tau,\tau^{\prime}\right)U\end{split}$ (13) and $\begin{split}P_{S}\left(\tau,\tau^{\prime}\right)=-iG_{S}\left(\tau,\tau^{\prime}\right)G_{S}\left(\tau^{\prime},\tau\right).\end{split}$ (14) The particle-particle T-matrix self-energy is given by $\Sigma^{PP}_{S}\left(\tau,\tau^{\prime}\right)=i\;T^{PP}(\tau,\tau^{\prime})G_{\bar{S}}\left(\tau^{\prime},\tau\right),$ (15) $\begin{split}T^{PP}\left(\tau,\tau^{\prime}\right)=-UG^{H}\left(\tau,\tau^{\prime}\right)U\\\ +\int d\tau_{1}UG^{H}\left(\tau,\tau_{1}\right)T^{PP}\left(\tau_{1},\tau^{\prime}\right)\end{split}$ (16) and $\begin{split}G^{H}(\tau,\tau^{\prime})=iG_{A}(\tau,\tau^{\prime})G_{B}(\tau,\tau^{\prime}).\end{split}$ (17) The particle-hole T-matrix self-energy is given by $\Sigma^{PH}_{S}\left(\tau,\tau^{\prime}\right)=i\;T_{S}^{PH}(\tau,\tau^{\prime})G_{\mkern 1.5mu\overline{\mkern-1.5muS\mkern-1.5mu}\mkern 1.5mu}\left(\tau,\tau^{\prime}\right),$ (18) $\begin{split}T_{S}^{PH}\left(\tau,\tau^{\prime}\right)=UG_{S\mkern 1.5mu\overline{\mkern-1.5muS\mkern-1.5mu}\mkern 
1.5mu}^{F}\left(\tau,\tau^{\prime}\right)U\\\ -\int d\tau_{1}UG_{S\mkern 1.5mu\overline{\mkern-1.5muS\mkern-1.5mu}\mkern 1.5mu}^{F}\left(\tau,\tau_{1}\right)T_{S}^{PH}\left(\tau_{1},\tau^{\prime}\right)\end{split}$ (19) and $\begin{split}G^{F}_{S\bar{S}}(\tau,\tau^{\prime})=-iG_{S}(\tau,\tau^{\prime})G_{\bar{S}}(\tau^{\prime},\tau).\end{split}$ (20) For the leads, time dependence within the energies, $\epsilon_{k\sigma}+\psi_{\sigma}(t)$, results in an additional phase to the otherwise static lead self-energies: $\begin{split}\Sigma_{\sigma}(t,t^{\prime})=\Sigma^{\prime}_{\sigma}(t-t^{\prime})e^{-i\int_{t^{\prime}}^{t}dt_{1}\psi_{\sigma}(t_{1})}\\\ =e^{-i\Psi_{\sigma}(t)}\Sigma^{\prime}_{\sigma}(t-t^{\prime})e^{i\Psi_{\sigma}(t^{\prime})},\end{split}$ (21) where $\Psi_{\sigma}(t)$ is the anti-derivative of $\psi_{\sigma}(t)$ and $\Sigma^{\prime}_{\sigma}(t-t^{\prime})$ is the self-energies of the leads without driving, given in Fourier space with the wide-band approximation as $\begin{split}\Sigma_{\sigma}^{R/A}\left(\omega\right)=\mp\frac{i}{2}\Gamma_{\sigma},\end{split}$ (22) $\begin{split}\Sigma^{<}_{\sigma}\left(\omega\right)=if_{\sigma}\left(\omega\right)\Gamma_{\sigma},\end{split}$ (23) $\begin{split}\Sigma^{>}_{\sigma}\left(\omega\right)=-i\left(1-f_{\sigma}\left(\omega\right)\right)\Gamma_{\sigma},\end{split}$ (24) where the Fermi distribution follows the standard definition of $f_{\sigma}(\omega)=1/\left[\exp\left(\left(\omega-\mu_{\sigma}\right)/T\right)+1\right]$. The particle current is given by, $\begin{split}I^{P}_{\sigma}(t)=2Re\left\\{\int^{\infty}_{-\infty}dt_{1}Tr\left[G_{S}^{<}(t,t_{1})\Sigma_{\sigma}^{A}(t_{1},t)\right.\right.\\\ \left.\left.+G_{S}^{R}(t,t_{1})\Sigma_{\sigma}^{<}(t_{1},t)\right]\right\\},\end{split}$ (25) and the occupation of the dots is given by $n_{S}(t)=-iG_{S}^{<}\left(t,t\right),$ (26) where continuity dictates that $I^{P}_{\sigma}(t)=-\frac{dn_{S}(t)}{dt}$. To calculate the energy that passes from the leads into the system, we use $\begin{split}I^{E}_{\sigma}(t)=-i\langle\left[H(t),H_{\sigma}(t)\right]_{-}\rangle\\\ =-2\operatorname{Re}\left\\{\int dt_{1}\left[i\frac{d}{dt}\Sigma^{<}_{\sigma}(t,t_{1})\right]G^{A}_{S}(t_{1},t)\right.\\\ \left.+\left[i\frac{d}{dt}\Sigma^{R}_{\sigma}(t,t_{1})\right]G^{<}_{S}(t_{1},t)\right\\}.\end{split}$ (27) Here, energy moves from the leads into the system and system-lead coupling, resulting in the continuity equation $\begin{split}I^{E}_{A}(t)+I^{E}_{B}(t)=\frac{d}{dt}\left(\langle H_{A\alpha}\rangle+\langle H_{B\beta}\rangle\right.\\\ \left.+\langle H_{A}\rangle+\langle H_{B}\rangle+E_{int}(t)\right)\end{split}$ (28) where $\begin{split}\langle H_{S\sigma}\rangle=2\operatorname{Im}\left\\{\int dt_{1}\Sigma^{<}_{\sigma}(t,t_{1})G^{A}_{S}(t_{1},t)\right.\\\ +\left.\Sigma^{R}_{\sigma}(t,t_{1})G^{<}_{S}(t_{1},t)\right\\}\end{split}$ (29) and $\begin{split}E_{int}(t)=\sum_{S=A,B}-\frac{i}{2}\left[\int dt_{1}\;\Sigma^{int,R}_{S}\left(t,t_{1}\right)G^{<}_{S}\left(t_{1},t\right)\right.\\\ \left.+\Sigma^{int,<}_{S}\left(t,t_{1}\right)G^{A}_{S}\left(t_{1},t\right)\right].\end{split}$ (30) Taking the time-average of equation (28), gives $\bar{I}^{E}_{A}=-\bar{I}^{E}_{B},$ (31) where $\bar{O}=\lim_{\tau\rightarrow\infty}\left(\int^{\tau}_{0}O(t)\;dt\right)/\tau$. ### II.2 Floquet approach We use a Floquet nonequilibrium Green’s function approach to solve the equations of motion, assuming the periodicity of the system’s dynamicsBrandes1997 ; Honeychurch2020 ; Honeychurch2023 ; Haughian2017 ; Aoki2014 . 
Within this context, Green’s functions are periodic in the central time, $T=\frac{t+t^{\prime}}{2}$: $\begin{split}A(t,t^{\prime})=A\left(T=\frac{t+t^{\prime}}{2},\tau=t-t^{\prime}\right)\\\ =\sum^{\infty}_{n=-\infty}A(\tau,n)e^{i\Omega nT}\end{split}$ (32) and $A\left(\omega,n\right)=\frac{1}{P}\int^{P}_{0}dT\;e^{-i\Omega nT}\int^{\infty}_{-\infty}d\tau e^{i\omega\tau}A(T,\tau),$ (33) which allows us to cast the terms $C_{+}(t,t^{\prime})=A(t,t^{\prime})B(t,t^{\prime})$ and $C_{-}(t,t^{\prime})=A(t,t^{\prime})B(t^{\prime},t)$ as $\begin{split}C_{\pm}\left(\omega,n\right)=\\\ \sum^{\infty}_{m=-\infty}\;\int\frac{d\omega^{\prime}}{2\pi}A\left(\omega^{\prime},m\right)B\left(\pm\omega\mp\omega^{\prime},n-m\right)\\\ =\left[A\;\square_{\pm}\;B\right]\left(\omega,n\right).\end{split}$ (34) Making a further transformation of the two-time objects into the Floquet matrix form $\widetilde{A}\left(\omega,m,n\right)=A\left(\omega+\frac{\Omega}{2}\left(m+n\right),n-m\right)$ (35) allows for convolutions of the type $C(t,t^{\prime})=\int dt_{1}A(t,t_{1})B(t_{1},t^{\prime})$ to be recast as matrix equations $\begin{split}\widetilde{C}\left(\omega,m,n\right)=\sum_{r=-\infty}^{\infty}\widetilde{A}\left(\omega,m,r\right)\widetilde{B}\left(\omega,r,n\right)\\\ =\left[\widetilde{A}\circ\widetilde{B}\right]\left(\omega,m,n\right).\end{split}$ (36) (a) (b) (c) Figure 3: Observables measured over a period with driving of the left lead. Here, red lines correspond to the observables relating to the driven section, while blue lines refer to the undriven section of the model. Here, the energy current refers to the energy transfer from the lead and coupling region into the central region. The black dashed line is given by $\cos(\Omega t)$, while the colored dashed lines of Fig. 3(c) correspond to the averages of similarly colored energy currents. The parameters are $\Gamma_{\alpha}=\Gamma_{\beta}=0.5$, $U=0.6$, $T=0.001$, $\mu_{\alpha}=\mu_{\beta}=0.3$, $\epsilon_{A}=\epsilon_{B}=0$, $\Delta_{\alpha}=0.2$ and $\Omega=0.32$. The discretization was taken at $0.01$, the bounds of integration between $-40$ and $40$, and 49 Fourier coefficients were used. The convergence for both dots was taken as $10^{-4}$. 
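As a concrete illustration of Eqs. (32)–(36), the sketch below (a minimal NumPy example, not from the paper; the truncation order and helper names are illustrative) builds the truncated Floquet matrix of Eq. (35) from a callable returning the Fourier coefficients $A(\omega,n)$, after which the two-time convolution of Eq. (36) reduces to an ordinary matrix product.

```python
import numpy as np

def floquet_matrix(A, omega, Omega, n_max):
    """Truncated Floquet matrix A~(omega, m, n) = A(omega + Omega*(m+n)/2, n-m), cf. Eq. (35).
    `A(w, n)` returns the n-th Fourier coefficient of the two-time function at frequency w."""
    idx = np.arange(-n_max, n_max + 1)
    F = np.zeros((len(idx), len(idx)), dtype=complex)
    for i, m in enumerate(idx):
        for j, n in enumerate(idx):
            F[i, j] = A(omega + 0.5 * Omega * (m + n), n - m)
    return F

# Example: a static retarded function, A(w, n) = delta_{n,0} / (w - e + i*eta)
def A_static(w, n, e=0.0, eta=0.05):
    return 1.0 / (w - e + 1j * eta) if n == 0 else 0.0

Omega, n_max = 0.4, 3
FA = floquet_matrix(A_static, omega=0.1, Omega=Omega, n_max=n_max)
FB = floquet_matrix(A_static, omega=0.1, Omega=Omega, n_max=n_max)

# The convolution C(t,t') = ∫ dt1 A(t,t1) B(t1,t') becomes a matrix product, Eq. (36)
FC = FA @ FB
```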
Taking real-time projections, we transform the equations of motion into matrix equations with the above transformations: $\begin{split}\left(\omega+\Omega m-\epsilon_{S}\right)\widetilde{G}_{S}^{R/A}\left(\omega,m,n\right)=\\\ \delta_{mn}+\left[\widetilde{\Sigma}_{S}^{R/A}\circ\widetilde{G}_{S}^{R/A}\right]\left(\omega,m,n\right),\end{split}$ (37) $\begin{split}\widetilde{G}_{S}^{<}(\omega,m,n)=\\\ \left[\widetilde{G}_{S}^{R}\circ\widetilde{\Sigma}^{<}_{S}\circ\widetilde{G}_{S}^{A}\right]\left(\omega,m,n\right).\end{split}$ (38) The interaction self-energies can be cast in terms of Fourier coefficients: for GW, $\begin{split}\Sigma^{R/A}_{GW,S}\left(\omega,n\right)\\\ =i\left[W_{S}^{ns,<}\;\square_{+}\;G^{R/A}_{S}+W_{S}^{ns,R/A}\;\square_{+}\;G^{>}_{S}\right]\left(\omega,n\right),\end{split}$ (39) $\Sigma^{</>}_{GW,S}\left(\omega,n\right)=i\left[W_{S}^{ns,</>}\;\square_{+}\;G^{</>}_{S}\right]\left(\omega,n\right),$ (40) $\begin{split}P_{S}^{R/A}(\omega,n)\\\ =-i\left[G_{S}^{<}\;\square_{-}\;G^{A/R}_{S}+G_{S}^{R/A}\;\square_{-}\;G^{<}_{S}\right](\omega,n),\end{split}$ (41) $\begin{split}P_{S}^{</>}(\omega,n)\\\ =-i\left[G_{S}^{</>}\;\square_{-}\;G^{>/<}_{S}\right](\omega,n),\end{split}$ (42) $\begin{split}\widetilde{W}_{S}^{ns,R/A}\left(\omega,m,n\right)=\widetilde{\Phi}^{R/A}_{S}\left(\omega,m,n\right)\\\ +\left[\widetilde{\Phi}^{R/A}_{S}\circ\widetilde{P}^{R/A}_{S}\circ\widetilde{W}_{S}^{ns,R/A}\right]\left(\omega,m,n\right)\end{split}$ (43) and $\begin{split}\widetilde{W}_{S}^{ns,</>}\left(\omega,m,n\right)=\widetilde{\Phi}^{<\>}_{S}\left(\omega,m,n\right)\\\ +\left[\widetilde{\Phi}^{R}_{S}\circ\widetilde{P}^{R}_{S}\circ\widetilde{W}_{S}^{ns,</>}\right]\left(\omega,m,n\right)\\\ +\left[\widetilde{\Phi}^{R}_{S}\circ\widetilde{P}^{</>}_{S}\circ\widetilde{W}_{S}^{ns,A}\right]\left(\omega,m,n\right)\\\ +\left[\widetilde{\Phi}^{</>}_{S}\circ\widetilde{P}^{A}_{S}\circ\widetilde{W}_{S}^{ns,A}\right]\left(\omega,m,n\right);\end{split}$ (44) for the particle-particle T-matrix, $\begin{split}\Sigma^{PP,R/A}_{S}\left(\omega,n\right)\\\ =i\left[T^{PP,<}\;\square_{-}\;G^{A/R}_{\bar{S}}+T^{PP,R/A}\;\square_{-}\;G^{<}_{\bar{S}}\right]\left(\omega,n\right),\end{split}$ (45) $\Sigma^{PP,</>}_{S}\left(\omega,n\right)=i\left[T^{PP,</>}\;\square_{-}\;G^{>/<}_{\bar{S}}\right]\left(\omega,n\right),$ (46) $\begin{split}\widetilde{T}^{PP,R/A}\left(\omega,m,n\right)=-U\widetilde{G}^{H,R/A}\left(\omega,m,n\right)U\\\ +U\left[\widetilde{G}^{H,R/A}\circ\widetilde{T}^{PP,R/A}\right](\omega,m,n),\end{split}$ (47) $\begin{split}\widetilde{T}^{PP,</>}\left(\omega,m,n\right)=-U\widetilde{G}^{H,</>}\left(\omega,m,n\right)U\\\ +U\left[\widetilde{G}^{H,R}\circ\widetilde{T}^{PP,</>}\right](\omega,m,n)\\\ +U\left[\widetilde{G}^{H,</>}\circ\widetilde{T}^{PP,A}\right](\omega,m,n),\end{split}$ (48) $\begin{split}G^{H,R/A}(\omega,n)\\\ =i\left[G^{<}_{A}\;\square_{+}\;G^{R/A}_{B}+G^{R/A}_{A}\;\square_{+}\;G^{>}_{B}\right],(\omega,n),\end{split}$ (49) and $\begin{split}G^{H,</>}(\omega,n)\\\ =i\left[G^{</>}_{A}\;\square_{+}\;G^{</>}_{B}\right](\omega,n);\end{split}$ (50) and for the particle-hole T-matrix, $\begin{split}\Sigma^{PH,R/A}_{S}\left(\omega,n\right)\\\ =i\left[T_{S}^{PH,<}\;\square_{+}\;G^{R/A}_{\bar{S}}+T_{S}^{PH,R/A}\;\square_{+}\;G^{>}_{\bar{S}}\right]\left(\omega,n\right),\end{split}$ (51) $\Sigma^{PH,</>}_{S}\left(\omega,n\right)=i\left[T_{S}^{PH,</>}\;\square_{+}\;G^{</>}_{\bar{S}}\right]\left(\omega,n\right),$ (52) 
$\begin{split}\widetilde{T}_{S}^{PH,R/A}\left(\omega,m,n\right)=U\widetilde{G}_{S\bar{S}}^{F,R/A}\left(\omega,m,n\right)U\\\ -U\left[\widetilde{G}_{S\bar{S}}^{F,R/A}\circ\widetilde{T}^{PH,R/A}_{S}\right](\omega,m,n),\end{split}$ (53) $\begin{split}\widetilde{T}_{S}^{PH,</>}\left(\omega,m,n\right)=U\widetilde{G}_{S\bar{S}}^{F,</>}\left(\omega,m,n\right)U\\\ -U\left[\widetilde{G}_{S\bar{S}}^{F,</>}\circ\widetilde{T}^{PH,A}_{S}\right](\omega,m,n)\\\ -U\left[\widetilde{G}_{S\bar{S}}^{F,R}\circ\widetilde{T}^{PH,</>}_{S}\right](\omega,m,n),\end{split}$ (54) $\begin{split}G_{S\bar{S}}^{F,R/A}(\omega,n)\\\ =-i\left[G^{<}_{S}\;\square_{-}\;G^{A/R}_{\bar{S}}+G^{R/A}_{S}\;\square_{-}\;G^{<}_{\bar{S}}\right](\omega,n),\end{split}$ (55) and $\begin{split}G_{S\bar{S}}^{F,</>}(\omega,n)\\\ =-i\left[G^{</>}_{S}\;\square_{-}\;G^{>/<}_{\bar{S}}\right](\omega,n).\end{split}$ (56) (a) (b) (c) Figure 4: Time-averaged energy current through the system due to the periodic driving of the left lead. The parameters, unless specified, are $\Gamma_{\alpha}=\Gamma_{\beta}=0.5$, $\epsilon_{A}=\epsilon_{B}$=-0.2, $U=0.4$, $T=0.001$, $\mu_{\alpha}=\mu_{\beta}=0$ and $\Delta_{\alpha}=0.2$. The discretization was taken at $0.01$, the bounds of integration between $-40$ and $40$, and 49 Fourier coefficients were used. The convergence for both dots was taken as $10^{-4}$ The time-dependent driving of the leads’ energies was taken as sinusoidal, $\psi_{\alpha}(t)=\Delta_{\alpha}\cos(\Omega t),$ (57) giving $\Psi_{\alpha}(t)=\left(\Delta_{\alpha}/\Omega\right)\sin\left(\Omega t\right)$, which can be expanded with the Jacobi-Anger expansion $e^{i\frac{\Delta_{\alpha}}{\Omega}\sin(\Omega t)}=\sum_{n=-\infty}^{n=\infty}J_{n}\left(\frac{\Delta_{\alpha}}{\Omega}\right)e^{in\Omega t},$ (58) where $J_{n}(x)$ are Bessel functions of the first kind. We can recast equation (21), as a Floquet matrices, $\bar{\Sigma}_{\sigma}\left(\omega,m,n\right)=\left[\widetilde{S}_{\sigma}\circ{\widetilde{\Sigma}^{\prime}}_{\sigma}\circ{\widetilde{S}_{\sigma}}^{\dagger}\right]\left(\omega,m,n\right),$ (59) where $\widetilde{S}_{\sigma}\left(m,n\right)=J_{m-n}\left(\Delta_{\sigma}/\Omega\right)$. In a similar manner to the equations of motion, the observables can be cast in terms of Fourier coefficients: $\begin{split}I^{P}_{\sigma}(n-m)=\widetilde{I}^{P}_{\sigma}\left(m,n\right)\\\ =2\int^{\infty}_{-\infty}\frac{d\omega}{2\pi}\;\left[\widetilde{G}_{S}^{R}\circ\widetilde{\Sigma}^{<}_{\sigma}+\widetilde{G}_{S}^{<}\circ\widetilde{\Sigma}^{A}_{\sigma}\right](\omega,m,n),\end{split}$ (60) $\begin{split}n_{S}\left(n-m\right)=\widetilde{n}_{S}\left(m,n\right)=-i\int^{\infty}_{-\infty}\frac{d\omega}{2\pi}\widetilde{G}_{S}^{<}(\omega,m,n),\end{split}$ (61) and $\begin{split}I^{E}_{\sigma}(n-m)=\widetilde{I}^{E}_{\sigma}\left(m,n\right)\\\ =-2\int^{\infty}_{-\infty}\frac{d\omega}{2\pi}\;\left(\omega+m\Omega\right)\left[\widetilde{\Sigma}_{\sigma}^{<}\circ\widetilde{G}^{A}_{S}+\widetilde{\Sigma}_{\sigma}^{R}\circ\widetilde{G}^{<}_{S}\right](\omega,m,n).\end{split}$ (62) ### II.3 Implementation To solve the equations of motion, we invert equations (37),(43),(44),(47),(48),(53) and (54) by first truncating the Floquet matrices, as defined in Eq. (35). Equations (39), (40), (41), (42),(45), (46), (49), (50), (51), (52), (55) and (56) are calculated by using the Fourier coefficient taken from the appropriate Floquet matrices. These Fourier coefficients were taken from the terms of the Floquet matrices given by $n+m=0,-1$ of equation (35). 
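For the sinusoidally driven lead, Eqs. (57)–(59) reduce the driving to Bessel-function phase matrices dressing the static wide-band self-energy. The sketch below is illustrative only: the truncation order is an assumption, while $\Gamma_{\alpha}$, $\Delta_{\alpha}$, and $\Omega$ take the values quoted in Fig. 3. It constructs $\widetilde{S}_{\sigma}(m,n)=J_{m-n}(\Delta_{\sigma}/\Omega)$ with SciPy and applies Eq. (59) to the retarded component of Eq. (22).

```python
import numpy as np
from scipy.special import jv  # Bessel functions of the first kind, J_n(x)

Gamma, Delta, Omega, n_max = 0.5, 0.2, 0.32, 24
idx = np.arange(-n_max, n_max + 1)

# Phase matrix S~_sigma(m, n) = J_{m-n}(Delta/Omega), Eq. (59)
S = jv(idx[:, None] - idx[None, :], Delta / Omega)

# Static wide-band retarded self-energy, Eq. (22): Sigma'^R = -i*Gamma/2,
# frequency independent, hence diagonal in Floquet space
Sigma_R_static = -0.5j * Gamma * np.eye(len(idx))

# Driven lead self-energy in Floquet form, Eq. (59): S o Sigma' o S^dagger
Sigma_R_driven = S @ Sigma_R_static @ S.conj().T

# For the wide-band retarded component the dressing is trivial (S is, up to
# truncation, orthogonal), but the same construction applies to the
# frequency-dependent lesser and greater self-energies of Eqs. (23)-(24).
```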
The self-energy terms were then transformed back to Floquet matrix form, using equation (35), for use in the equations of motion. The self-consistent process begins with calculating the noninteracting case followed by the interaction self-energies, as specified above. The interaction self-energies are then used to calculate successive iterations of the Green’s functions before the following convergence is satisfied: $\begin{split}\frac{\sum_{m}\left|n_{m}^{k}-n_{m}^{k-1}\right|}{\sum_{m}\left|n^{k}_{m}\right|}\leq\delta,\end{split}$ (63) where $n^{k}_{m}$ is the $k$th iteration of the $m$th Fourier coefficent of the occupation in question, with $\delta$ as the convergence. This convergence was satisfied for each dot’s occupation. (a) (b) (c) Figure 5: Time-averaged observables through the system, with driving of the left lead. The parameters are $\Gamma_{\alpha}=\Gamma_{\beta}=0.5$, $U=0.4$, $T=0.001$, $\mu_{\alpha}=\mu_{\beta}=0$, $\Delta_{\alpha}=0.2$ and $\Omega=0.4$. The discretization was taken at $0.01$, the bounds of integration between $-40$ and $40$ and 49 Fourier coefficients were used. The convergence for both dots was taken as $10^{-4}$. The red dashed line follows $\epsilon_{A}=\epsilon_{B}$. ## III Results and Discussion Within the system, energy moves from the driven to the undriven section via the capacitive coupling of the two dots. In particular, energy transfer occurs when the driven dot is occupied at a higher energy and unoccupied at a lower energy, and the undriven dot is unoccupied at a higher energy and occupied at a lower energy. This process is complicated because the energies at which a dot is occupied are informed by the occupation of the opposing dot. This relationship is transparent in the Hartree approximation, where the average energy current into the central region due to current into the dot $S$ is given by $\bar{I^{E}_{\sigma}}=U\int^{P}_{0}\frac{dt}{P}\;n_{\bar{S}}(t)I^{P}_{\sigma}(t).$ (64) These observations, coupled with the sinusoidal nature of the driving, suggest the following approximate cyclic stages in the energy transfer process: 1. 1. Following stage $4$, charge moves onto the driven dot while the undriven dot is largely occupied. 2. 2. The driven dot is largely occupied as charge moves off the undriven dot. 3. 3. Charge moves off the driven dot as the undriven dot is largely unoccupied. 4. 4. The driven dot is largely unoccupied as charge moves onto the undriven dot. Stages one and two capture the movement of higher energy electrons moving onto the driven dot and off the undriven dot, resulting in the energy transfer from the driven to the undriven region. Stages three and four capture the lower energy electrons moving off the driven dot and onto the undriven dot, resulting in a lower energy transfer than the first two steps in the cycle in the opposite direction. An example of this can be seen in Fig. 3, where the regions in which the stages are most prominent have been highlighted. The amount of energy transferred through the system is sensitive to the driving frequency, with the maximum transference a result of the balancing of stages of energy transfer [see in Fig. 4]. As the driving frequency decreases, electrons move between the dots and their respective leads quicker than the energy transfer stages can complete. 
In particular, the dots that remain largely occupied in stages one and two and largely unoccupied in processes three and four begin to change in occupation, resulting in less pronounced changes in the opposing dot’s occupation energy and outgoing energy current. Conversely, as the driving frequency increases, the charge has less time to move between the dots and their respective leads, resulting in smaller maxima and minima for the occupations over the period, which reduces the opposing dot’s occupation energy and its outgoing energy current. The driving profile and the system parameters beyond the driving frequency also inform the effectiveness of the energy transfer process. As expected, given Eq. (64), increases in the interaction strength $U$ were found to increase the average energy current through the system [see Fig. 4(b)]. It was also found that significant asymmetry of the coupling strengths diminished energy flow through the system [see Fig. 4(c)]. This is due to uneven transfer rates, $\tau_{\sigma}\sim 1/\Gamma_{\sigma}$, for the movement of electrons between the dots and their respective leads, resulting in the inefficient completion of the stages of the energy transfer process. For the energies of the dots, the largest transference of energy through the system was achieved with dots with equal energies situated below the chemical potential of the two leads, such that, on average, the dots are both around half filled [see Fig. 5]. ## IV Conclusion We have investigated two capacitively coupled quantum dots coupled to respective leads, where one lead’s energies are driven sinusoidally. While particles cannot move between the dots, Coulomb repulsion between the dots allows for the transfer of energy. The stages of the energy transfer were identified, and the effects of system parameters’ were investigated. In particular, it was found that energy transfer was maximized for a given driving, corresponding to the efficient completion of the identified energy transfer process. This work has focused on a regime of relatively weak coupling with $\Gamma>U$. Further work in a regime of $\Gamma<U$ may result in different energy transfer stages, as outlined in Sec. III, for various drivings, suggesting interesting possible avenues for further research. Moreover, more complicated driving profiles and statistics relating to energy transfer may prove valuable in understanding and manipulating energy transfer in systems like that investigated. This result furthers the understanding of particle and energy transfer in capacitively coupled quantum dots, particularly within the context of nonadiabatic driving. This is particularly important as the miniaturization of nanoelectronics brings active elements closer together, resulting in the potential for unwarranted capacitive coupling. ## References * [1] A. J. Keller, J. S. Lim, David Sánchez, Rosa López, S. Amasha, J. A. Katine, Hadas Shtrikman, and D. Goldhaber-Gordon. Cotunneling drag effect in coulomb-coupled quantum dots. Phys. Rev. Lett., 117:066602, Aug 2016. * [2] Miguel A. Sierra, David Sánchez, Antti-Pekka Jauho, and Kristen Kaasbjerg. Fluctuation-driven coulomb drag in interacting quantum dot systems. Phys. Rev. B, 100:081404(R), Aug 2019. * [3] Ludovico Tesser, Bibek Bhandari, Paolo Andrea Erdman, Elisabetta Paladino, Rosario Fazio, and Fabio Taddei. Heat rectification through single and coupled quantum dots. New Journal of Physics, 24(3):035001, mar 2022. * [4] A. A. Aligia, D. Pérez Daroca, Liliana Arrachea, and P. Roura-Bas. 
Heat current across a capacitively coupled double quantum dot. Phys. Rev. B, 101:075417, Feb 2020. * [5] Hari Kumar Yadalam and Upendra Harbola. Statistics of heat transport across a capacitively coupled double quantum dot circuit. Phys. Rev. B, 99:195449, May 2019. * [6] A.-M. Daré. Comparative study of heat-driven and power-driven refrigerators with coulomb-coupled quantum dots. Phys. Rev. B, 100:195427, Nov 2019. * [7] Rafael Sánchez and Markus Büttiker. Optimal energy quanta to current conversion. Phys. Rev. B, 83:085428, Feb 2011. * [8] Holger Thierschmann, Rafael Sánchez, Björn Sothmann, Fabian Arnold, Christian Heyn, Wolfgang Hansen, Hartmut Buhmann, and Laurens W. Molenkamp. Three-terminal energy harvester with coupled quantum dots. Nature Nanotechnology, 10(10):854–858, Oct 2015. * [9] Björn Sothmann, Rafael Sánchez, and Andrew N Jordan. Thermoelectric energy harvesting with quantum dots. Nanotechnology, 26(3):032001, dec 2014. * [10] Nicklas Walldorf, Antti-Pekka Jauho, and Kristen Kaasbjerg. Thermoelectrics in coulomb-coupled quantum dots: Cotunneling and energy-dependent lead couplings. Phys. Rev. B, 96:115415, Sep 2017. * [11] María Florencia Ludovico, Liliana Arrachea, Michael Moskalets, and David Sánchez. Periodic energy transport and entropy production in quantum electronics. Entropy, 18(11), 2016. * [12] María Florencia Ludovico, Michael Moskalets, David Sánchez, and Liliana Arrachea. Dynamics of energy transport and entropy production in ac-driven quantum electron systems. Phys. Rev. B, 94:035436, Jul 2016. * [13] María Florencia Ludovico, Jong Soo Lim, Michael Moskalets, Liliana Arrachea, and David Sánchez. Time resolved heat exchange in driven quantum systems. Journal of Physics: Conference Series, 568(5):052017, dec 2014. * [14] A.-M. Daré and P. Lombardo. Time-dependent thermoelectric transport for nanoscale thermal machines. Phys. Rev. B, 93:035303, Jan 2016. * [15] María Florencia Ludovico and Massimo Capone. Enhanced performance of a quantum-dot-based nanomotor due to coulomb interactions. Phys. Rev. B, 98:235409, Dec 2018. * [16] Stefan Juergens, Federica Haupt, Michael Moskalets, and Janine Splettstoesser. Thermoelectric performance of a driven double quantum dot. Phys. Rev. B, 87:245423, Jun 2013. * [17] Jong Soo Lim, Rosa López, and David Sánchez. Dynamic thermoelectric and heat transport in mesoscopic capacitors. Phys. Rev. B, 88:201304(R), Nov 2013. * [18] Jian Chen, Minhui ShangGuan, and Jian Wang. A gauge invariant theory for time dependent heat current. New Journal of Physics, 17(5):053034, may 2015. * [19] María Florencia Ludovico and Massimo Capone. Charge and energy transfer in ac-driven coulomb-coupled double quantum dots. The European Physical Journal B, 95(6):99, Jun 2022. * [20] Tobias Brandes. Truncation method for green’s functions in time-dependent fields. Phys. Rev. B, 56:1213–1224, Jul 1997. * [21] Thomas D. Honeychurch and Daniel S. Kosov. Full counting statistics for electron transport in periodically driven quantum dots. Phys. Rev. B, 102:195409, Nov 2020. * [22] Thomas D. Honeychurch and Daniel S. Kosov. Quantum transport in driven systems with vibrations: Floquet nonequilibrium green’s functions and the self-consistent born approximation. Phys. Rev. B, 107:035410, Jan 2023. * [23] Patrick Haughian, Han Hoe Yap, Jiangbin Gong, and Thomas L. Schmidt. Charge pumping in strongly coupled molecular quantum dots. Phys. Rev. B, 96:195432, Nov 2017. * [24] Hideo Aoki, Naoto Tsuji, Martin Eckstein, Marcus Kollar, Takashi Oka, and Philipp Werner. 
Nonequilibrium dynamical mean-field theory and its applications. Rev. Mod. Phys., 86:779–837, Jun 2014. * [25] N Schlünzen, S Hermanns, M Scharnke, and M Bonitz. Ultrafast dynamics of strongly correlated fermions—nonequilibrium green functions and selfenergy approximations. Journal of Physics: Condensed Matter, 32(10):103001, dec 2019.
# Cosmological parameters derived from the final (PR4) Planck data release

M. Tristram, A. J. Banday, M. Douspis, X. Garrido, K. M. Górski, S. Henrot-Versillé, S. Ilić, R. Keskitalo, G. Lagache, C. R. Lawrence, B. Partridge, and D. Scott

Affiliations: Université Paris-Saclay, CNRS/IN2P3, IJCLab, 91405 Orsay, France; IRAP, Université de Toulouse, CNRS, CNES, UPS, (Toulouse), France; Université Paris-Saclay, CNRS, Institut d’Astrophysique Spatiale, 91405, Orsay, France; Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, California, U.S.A.; Warsaw University Observatory, Aleje Ujazdowskie 4, 00-478 Warszawa, Poland; Centre National d’Etudes Spatiales – Centre spatial de Toulouse, 18 avenue Edouard Belin, 31401 Toulouse Cedex 9, France; Computational Cosmology Center, Lawrence Berkeley National Laboratory, Berkeley, California, U.S.A.; Space Sciences Laboratory, University of California, Berkeley, California, U.S.A.; Aix Marseille Univ, CNRS, CNES, LAM, Marseille, France; Dept. of Astronomy, Haverford College, Haverford PA 19041, USA; Department of Physics & Astronomy, University of British Columbia, 6224 Agricultural Road, Vancouver, British Columbia, Canada
We present constraints on cosmological parameters using maps from the last Planck data release (PR4). In particular, we detail an upgraded version of the cosmic microwave background likelihood, HiLLiPoP, based on angular power spectra and relying on a physical modelling of the foreground residuals in the spectral domain. This new version of the likelihood retains a larger sky fraction (up to 75 %) and uses an extended multipole range. Using this likelihood, along with low-$\ell$ measurements from LoLLiPoP, we derive constraints on $\Lambda$CDM parameters that are in good agreement with previous Planck 2018 results, but with 10 % to 20 % smaller uncertainties. We demonstrate that the foregrounds can be accurately described in the spectral domain with only negligible impact on $\Lambda$CDM parameters. We also derive constraints on single-parameter extensions to $\Lambda$CDM including $A_{\mathrm{L}}$, $\Omega_{K}$, $N_{\mathrm{eff}}$, and $\sum m_{\nu}$. Noteworthy results from this updated analysis include a lensing amplitude value of ${A_{\mathrm{L}}}=1.036\pm 0.051$, which aligns more closely with theoretical expectations within the $\Lambda$CDM framework. Additionally, our curvature measurement, ${\Omega_{K}}=-0.012\pm 0.010$, now demonstrates complete consistency with a flat universe, and our measurement of $S_{8}$ is closer to the measurements derived from large-scale structure surveys (at the 1.6 $\sigma$ level). ###### Key Words.: Cosmic background radiation – cosmological parameters – cosmology: observations – methods: data analysis ## 1 Introduction Since the first results were released in 2013, the Planck satellite’s measurements of the cosmic microwave background (CMB) anisotropies have provided highly precise constraints on cosmological models. These measurements have tested the cosmological-constant-dominated cold dark matter ($\Lambda$CDM) model, given tight constraints on its parameters, and ruled out many plausible extensions. As a consequence, the best-fitting 6-parameter $\Lambda$CDM model is now frequently used as the standard reference to be compared to new observational results, and when combining with other data sets to provide further constraints. Since the last Planck Collaboration cosmological analysis in 2018 (Planck Collaboration VI 2020), the very last version of the Planck data processing, called NPIPE, was released as the Planck Public Release 4 (PR4) and extensively detailed in Planck Collaboration Int. LVII (2020). As well as including previously neglected data from the repointing periods, NPIPE processed the entire set of Planck channels within the same framework, including the latest versions of corrections for systematics and data treatment. In this paper, our objective is to enhance the precision on cosmological parameters through the utilization of PR4 data. Indeed, we expect better sensitivity on almost all cosmological parameters owing to improved map sensitivity. Additionally, we look for better internal consistency for the lensing amplitude affecting the primordial CMB anisotropies. We thus derive constraints on cosmology using both low-$\ell$ and high-$\ell$ likelihoods based on Planck PR4.
The only part still relying on PR3 (also known as Planck 2018) is the low-$\ell$ temperature likelihood, Commander, as we do not anticipate significant improvements at large scales in temperature between PR3 and PR4. On the other hand, our analysis includes the large scales in polarization from PR4 for which the NPIPE processing provides a significant improvement compared to PR3. Since the foregrounds dominate polarization at large scales, for the low-$\ell$ likelihood, LoLLiPoP, we make use of component-separated CMB maps processed by Commander using the whole range of Planck polarized frequencies from 30 to 353 GHz. This has been extensively discussed in Tristram et al. (2021) and Tristram et al. (2022), where it was combined with the BICEP2/Keck likelihood (Ade et al. 2021) in order to provide constraints on the tensor-to- scalar ratio $r$. For the high-$\ell$ power-spectrum analysis, HiLLiPoP, we use a multi- frequency Gaussian likelihood approximation using sky maps at three frequencies (100, 143 and 217 GHz), while the channel at 353 GHz is used to derive a template for the dust power spectrum contaminating the CMB signal at large scales. HiLLiPoP is one of the likelihoods developed within the Planck collaboration and used to analyse previous Planck data sets (Planck Collaboration XV 2014; Planck Collaboration XI 2016). Here, we describe a new version adapted to PR4 and called “HiLLiPoP V4.2.” It differs from the previous one essentially by using a larger sky fraction (covering 75 % of the sky) and a refined model for the foregrounds (in particular for point sources and dust emission). We specifically use High Frequency Instrument “detsets”, which are splits of the detectors at each frequency into specific subsets. We compute cross-spectra for each of the CMB modes ($TT$, $TE$, $EE$), cross- correlating the two detset maps at each of the three Planck channels dominated by the CMB (100, 143, and 217 GHz), together with their associated covariance. As illustrated in Fig. 1, the variance of the cross-spectra is close to the expected sample variance for 75 % of the sky in temperature for $TT$, while the impact of the Planck noise in polarization is more visible in $TE$ and $EE$. However, at those scales ($\ell$ ¡ 2000), Planck PR4 is the most sensitive data set for CMB anisotropies as of today. Figure 1: Uncertainties on each angular cross-power spectrum (blue lines) and their combination (red line) for the Planck $TT$ (top), $TE$ (middle), and $EE$ (bottom) data, compared to sample variance for 75 % of the sky (black dashed line). The cross-spectra are then co-added into cross-frequency spectra and compared through a Gaussian likelihood to a model taking into account Galactic as well as extragalactic residual emission on top of the CMB signal. Even if the Planck PR4 data set is dominated by CMB anisotropies over the entire range of multipoles considered in the high-$\ell$ likelihood ($30\thinspace{<}\thinspace\ell\thinspace{<}\thinspace 2500$), using all cross-frequency spectra allows us to check the robustness of the results with respect to our knowledge of the astrophysical foregrounds. Indeed, even if the basic $\Lambda$CDM parameters are insignificantly affected by the details of the foreground modelling, the constraints on extensions to $\Lambda$CDM might depend more critically on the accuracy of the foreground description. Moreover, future ground-based experiments, measuring smaller scales than those accessible by Planck, will be even more sensitive to extragalactic foregrounds. 
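Schematically, the Gaussian likelihood approximation amounts to a $\chi^{2}$ between the co-added cross-frequency band powers and a model built from the CMB plus foreground residuals, with a covariance matrix propagating noise and sample variance. The sketch below is a minimal illustration of that step only; the array shapes and names are assumptions and do not reproduce the actual HiLLiPoP implementation.

```python
import numpy as np

def gaussian_loglike(cl_data, cl_model, cov):
    """Gaussian log-likelihood (up to a constant) for binned, stacked
    cross-frequency band powers (TT, TE, EE).

    cl_data, cl_model : 1-D arrays of the concatenated band powers
    cov               : full covariance matrix of cl_data
    """
    residual = cl_data - cl_model
    chi2 = residual @ np.linalg.solve(cov, residual)
    return -0.5 * chi2

# Toy usage with random placeholders (real inputs would be the PR4 cross-spectra
# and a model built from the CMB plus Galactic and extragalactic templates)
n_bins = 100
cov = np.eye(n_bins) * 1e-4
cl_data = np.random.normal(0.0, 1e-2, n_bins)
cl_model = np.zeros(n_bins)
print(gaussian_loglike(cl_data, cl_model, cov))
```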
We begin this paper by summarizing the Planck PR4 pipeline (NPIPE), focusing on the improvements as compared to PR3 (Sect. 2). Then, in Sect. 3, we explain how the angular power spectra are calculated, and describe the masks we use, the multipole ranges, the pseudo-$C_{\ell}$ algorithm, and the covariance matrix. The LoLLiPoP likelihood is briefly described in Sect. 4, with reference to Tristram et al. (2021, 2022). The HiLLiPoP likelihood is described in Sect. 5, including details of foreground modelling and instrumental effects. Results on the parameters for the $\Lambda$CDM model are described and commented on in Sect. 6. Constraints on foreground parameters and instrumental parameters are discussed in Sects. 7 and 8, respectively. Section 9 is dedicated to consistency checks with respect to previous Planck results. Finally we explore some extensions to $\Lambda$CDM in Sect. 10, specifically $A_{\mathrm{L}}$, $\Omega_{K}$, $N_{\mathrm{eff}}$, and $\sum m_{\nu}$. ## 2 The Planck PR4 data set The Planck sky measurements used in this analysis are the PR4 maps available from the Planck Legacy Archive111pla.esac.esa.int (PLA) and from the National Energy Research Scientific Computing Center (NERSC).222portal.nersc.gov/project/cmb/planck2020 They have been produced with the NPIPE processing pipeline, which creates calibrated frequency maps in temperature and polarization from both the Planck Low-Frequency Instrument (LFI) and the High-Frequency Instrument (HFI) data. As described in Planck Collaboration Int. LVII (2020), NPIPE processing includes data from the repointing periods that were neglected in previous data releases. There were additionally several improvements, resulting in lower levels of noise and systematics in both frequency and component-separated maps at essentially all angular scales, as well as notably improved internal consistency between the various frequencies. Moreover, PR4 also provides a set of “End-to-End” Monte Carlo simulations processed with NPIPE, which enables the characterization of potential biases and the uncertainties associated with the pipeline. To compute unbiased estimates of the angular power spectra, we perform cross- correlations of two independent splits of the data. As shown in Planck Collaboration Int. LVII (2020), the most appropriate split for the Planck data is represented by the detset maps, comprising two subsets of maps with nearly independent noise characteristics, made by combining half of the detectors at each frequency. This was obtained by processing each split independently, in contrast to the split maps produced in the previous Planck releases. We note that time-split maps (made from, e.g., “odd $-$ even rings” or “half-mission data”) share the same instrumental detectors, and therefore exhibit noise correlations due to identical spectral bandpasses and optical responses. As a consequence, the use of time-split maps gives rise to systematic biases in the cross-power spectra (see section 3.3.3 in Planck Collaboration V 2020), as well as underestimation of the noise levels in computing the half-differences (which needed to be compensated by a rescaling of the noise in PR3, as described in appendix A.7 of Planck Collaboration III 2020). For this reason, we cross-correlate using detset splits only. Nevertheless, in order to verify the level of noise correlation between detsets, we computed the detset cross-power spectra from the half-ring difference maps, which we show in Fig. 2. 
The spectra are computed on 75 % of the sky and are fully compatible with zero, ensuring that any correlated noise is much smaller than the uncorrelated noise over the range of multipoles from $\ell=30$ to 2500. As discussed above, this test is not sensitive to correlations at scales smaller than the half-ring period. Indeed, if both halves of a ring are affected by the same systematic effect, it will vanish in the half-ring difference map and thus will not be tested in cross-correlation with another detset.

Figure 2: Detset cross-spectra for half-ring differences computed on 75 % of the sky, divided by their uncertainties. From top to bottom we show $TT$, $EE$, $TE$, and $ET$. Spectra are binned with $\Delta\ell=40$. The projections on the right show the distribution for each unbinned spectrum over the range $\ell=30$–2500.

## 3 Planck PR4 angular power spectra

### 3.1 Large-scale polarized power spectra

Foregrounds are stronger in polarization relative to the CMB than in temperature, and cleaning the Planck frequencies using $C_{\ell}$ templates in the likelihood (as done at small scales) is not accurate enough, especially at large angular scales. In order to clean sky maps of polarized foregrounds, we use the Commander component-separation code (Eriksen et al. 2008), with a model that includes three polarized components, namely the CMB, synchrotron emission, and thermal dust emission. Commander is run on each detset map independently, as well as on each realization from the PR4 Monte Carlo simulations. We then compute unbiased estimates of the angular power spectra by cross-correlating the two detset-cleaned maps.

We compute power spectra using an extension of the quadratic maximum-likelihood estimator (Tegmark & de Oliveira-Costa 2001) adapted for cross-spectra in Vanneste et al. (2018). At multipoles below 40, this has been shown to produce unbiased polarized power spectra with almost optimal errors. We use downgraded ${N_{\rm side}}\thinspace{=}\thinspace 16$ maps (Górski et al. 2005) after convolution with a cosine apodizing kernel $b_{\ell}=\frac{1}{2}\left\{1+\cos\pi(\ell-1)/(3{N_{\rm side}}-1)\right\}$. The signal is then corrected with the PR4 transfer function, to compensate for the filtering induced by the degeneracies between the signal and the templates for systematics used in the mapmaking procedure (see Planck Collaboration Int. LVII 2020). The resulting power spectrum estimated on the cleanest 50 % of the sky is plotted in Fig. 3 up to $\ell\thinspace{=}\thinspace 30$ (for more details, see Tristram et al. 2021). We also performed the same estimation on each of the PR4 simulations and derived the $\ell$-by-$\ell$ covariance matrix that is then used to propagate uncertainties in LoLLiPoP, the low-$\ell$ CMB likelihood described in Sect. 4.

Figure 3: $EE$ power spectrum of the CMB computed on 50 % of the sky with the PR4 maps at low multipoles (Tristram et al. 2021). The Planck 2018 $\Lambda$CDM model is plotted in black. The grey band represents the associated sample variance. Error bars are deduced from the PR4 Monte Carlo simulations.

### 3.2 Small-scale power spectra

#### 3.2.1 Sky fractions

For small scales ($\ell\thinspace{>}\thinspace 30$), we use detset maps at frequencies 100, 143, and 217 GHz, and we select only a fraction of the sky in order to reduce the contamination from Galactic foregrounds. The main difference with respect to the masks used for the previous versions of HiLLiPoP (Couchot et al.
2017b) lies in two points: the new Galactic masks allow for a larger sky fraction; and the point-source mask is common to all three frequencies. The resulting masks applied to each frequency are made of a combination of four main components, which we now describe.

##### Galactic mask.

We apply a mask to remove the region of strongest Galactic emission, adapted to each frequency. We can keep a larger sky fraction at the lowest frequency (100 GHz), where Galactic emission is low. Since the Planck uncertainty is dominated by sample variance up to multipole $\ell\thinspace{\simeq}\thinspace 1800$ in temperature (and $\ell\thinspace{\simeq}\thinspace 1100$ in $TE$ polarization), this allows us to reduce the sample variance by retaining a larger sky fraction. However, we remove a larger fraction of the sky for the highest frequency channel (217 GHz), since it is significantly more contaminated by Galactic dust emission. We build Galactic masks using the Planck 353-GHz map as a tracer of the thermal dust emission in intensity. In practice, we smooth the Planck 353-GHz map to increase the signal-to-noise ratio before applying a threshold that depends on the frequency. Masks are then apodized using a $1.0^{\circ}$ Gaussian taper for power spectra estimation. For polarization, Planck dust maps show that the diffuse emission is strongly related to the Galactic magnetic field at large scales (Planck Collaboration Int. XIX 2015). However, at the smaller scales that matter here ($\ell\thinspace{>}\thinspace 30$), the orientation of dust grains is driven by local turbulent magnetic fields that produce a polarization intensity approximately proportional to the total intensity dust map. We thus use the same Galactic mask for polarization as for temperature.

##### CO mask.

We apply a mask for CO line emission. We consider the combination of maps of the two lines in the Planck frequency bands at 115 and 230 GHz. We smooth the Planck reconstructed CO maps to 30 arcmin before applying a threshold at $2\thinspace{\rm K}\thinspace{\rm km}\thinspace{\rm s}^{-1}$. The resulting masks are then apodized at 15 arcmin. The CO masks remove 17 % and 19 % of the sky at 100 and 217 GHz, respectively, although the removed pixels largely fall within the Galactic masks.

##### Point-source mask.

We use a common mask for the three CMB frequencies to cover strong sources (both radio and infrared). In contrast to the masks used in Plik or CamSpec, the point-source mask used in our analysis relies on a more refined procedure that preserves Galactic compact structures and ensures the completeness level at each frequency, but with a higher flux cut on sources (approximately 340, 250, and 200 mJy at 100, 143, and 217 GHz, respectively). The consequence is that these masks leave a slightly greater number of unmasked extragalactic sources, but more accurately preserve the power spectra of dust emission (see Sect. 5.2). We apodize these masks with a Gaussian taper of 15 arcmin. We produce a single point-source mask as the combination of the three frequency masks; in total, this removes 8.3 % of the sky.

##### Large objects.

We mask a limited number of resolved objects in the sky, essentially nearby galaxies including the LMC, SMC, and M31, as well as the Coma cluster. This removes less than 0.4 % of the sky. We use the same mask for temperature and polarization.
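The healpy sketch below indicates schematically how the four components described above could be combined into a single analysis mask. The inputs are synthetic stand-ins for the 353-GHz dust tracer, the CO map, and the point-source mask; the thresholds are illustrative; and a plain Gaussian smoothing of the binary mask is used as a crude substitute for the tapered apodizations quoted in the text.

```python
import numpy as np
import healpy as hp

nside = 256                       # low resolution for this illustration
npix = hp.nside2npix(nside)
rng = np.random.default_rng(0)

# Stand-ins for the real inputs (Planck 353-GHz intensity as a dust tracer,
# a CO line map in K km/s, and a binary point-source mask).
dust_353 = rng.lognormal(mean=0.0, sigma=1.0, size=npix)
co_map = rng.lognormal(mean=-1.0, sigma=1.0, size=npix)
ps_mask = (rng.random(npix) > 0.01).astype(float)   # ~1 % of pixels masked

# Galactic mask: smooth the dust tracer, then keep the cleanest fraction.
dust_smooth = hp.smoothing(dust_353, fwhm=np.radians(5.0))
gal_mask = (dust_smooth < np.percentile(dust_smooth, 80)).astype(float)

# CO mask: smooth to 30 arcmin and threshold at 2 K km/s, as in the text.
co_smooth = hp.smoothing(co_map, fwhm=np.radians(0.5))
co_mask = (co_smooth < 2.0).astype(float)

# Combine the binary masks; the real pipeline apodizes each component
# (1.0 deg Gaussian taper for the Galactic mask, 15 arcmin for the others).
mask = gal_mask * co_mask * ps_mask
mask_apo = np.clip(hp.smoothing(mask, fwhm=np.radians(1.0)), 0.0, 1.0)

print("effective sky fraction: %.2f" % mask_apo.mean())
```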
Even though masking point sources in polarization is not mandatory (given the Planck noise in $EE$ and $TE$), doing so makes the computation of the covariance matrix much simpler while not removing a significant part of the sky. The Galactic masks ultimately used for HiLLiPoP V4.2 cover 20 %, 30 %, and 45 % of the sky for the 100, 143, and 217 GHz channels, respectively. After combining with the other masks, the effective sky fractions used for computing cross-spectra are 75 %, 66 %, and 52 %, respectively (see Fig. 4). The sky fractions retained for the likelihood analysis are about 5 % larger than the ones used in the previous version of HiLLiPoP. Before extending the sky fraction used in the likelihood, we have checked the robustness of the results and the goodness-of-fit (through estimating $\chi^{2}$) using various combinations of Galactic masks (see Sect. 9).

Figure 4: Sky masks used for HiLLiPoP V4.2 as a combination of a Galactic mask (blue, green, and red for the 100, 143, and 217 GHz channel, respectively), a CO mask, a point-source mask, and a mask removing nearby galaxies. The effective sky fractions remaining at 100, 143, and 217 GHz are 75 %, 66 %, and 52 %, respectively.

#### 3.2.2 PR4 small-scale spectra

Figure 5: Frequency cross-power spectra with respect to the mean spectra for $TT$, $EE$, $TE$, and $ET$. Spectra are binned with $\Delta\ell=40$ for this figure.

We use Xpol (an extension to polarization of Xspect, described in Tristram et al. 2005) to compute the cross-power spectra in temperature and polarization ($TT$, $EE$, and $TE$). Xpol is a pseudo-$C_{\ell}$ method (see e.g. Hivon et al. 2002; Brown et al. 2005) that also computes an analytical approximation of the $C_{\ell}$ covariance matrix directly from data (gitlab.in2p3.fr/tristram/Xpol). Using the six maps presented in Sect. 2, we derive the 15 cross-power spectra for each CMB mode, as outlined below: one each for 100$\times$100, 143$\times$143, and 217$\times$217; and four each for 100$\times$143, 100$\times$217, and 143$\times$217.

From the coefficients of the spherical harmonic decomposition of the ($I$,$Q$,$U$) masked maps $\mathbf{\tilde{a}}_{\ell m}^{X}=\{\tilde{a}^{T}_{\ell m},\tilde{a}^{E}_{\ell m},\tilde{a}^{B}_{\ell m}\}$, we form the pseudo cross-power spectra between map $i$ and map $j$, $\widetilde{\mathbf{C}}_{\ell}^{ij}=\frac{1}{2\ell+1}\sum_{m}\mathbf{\tilde{a}}^{i*}_{\ell m}\mathbf{\tilde{a}}^{j}_{\ell m}\thinspace,$ (1) where the vector $\mathbf{\widetilde{C}}_{\ell}$ includes the four modes $\{\widetilde{C}^{\thinspace TT}_{\ell},\widetilde{C}^{\thinspace EE}_{\ell},\widetilde{C}^{\thinspace TE}_{\ell},\widetilde{C}^{\thinspace ET}_{\ell}\}$. We note that the $TE$ and $ET$ cross-power spectra do not carry the same information, since computing $T$ from map $i$ and $E$ from map $j$ is different from computing $E$ from map $i$ and $T$ from map $j$. They are computed independently and averaged afterwards using their relative weights for each cross-frequency. The pseudo-spectra are then corrected for beam and sky fraction using a mode-mixing coupling matrix, $\@tens{M}$, which depends on the masks used for each set of maps (Peebles 1973; Hivon et al. 2002), $\widetilde{\mathbf{C}}_{\ell}^{ij}=\sum_{\ell^{\prime}}\@tens{M}_{\ell\ell^{\prime}}\thinspace\mathbf{C}_{\ell^{\prime}}^{ij}\thinspace.$ (2)

The Planck data set suffers from leakage of $T$ to $E$ and $B$, essentially due to beam mismatch between the detectors used to construct the ($I,Q,U$) maps. We debias the beam leakage together with the beam transfer function using the beam window functions evaluated with QuickPol (Hivon et al. 2017), adapted to the PR4 data.
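A minimal healpy illustration of the pseudo cross-spectrum of Eq. (1) is given below. The full mode-coupling deconvolution of Eq. (2) is replaced by a simple $1/f_{\rm sky}$ scaling, which is only a diagonal approximation to the Xpol treatment; the simulated maps, noise level, and mask are toy inputs.

```python
import numpy as np
import healpy as hp

nside, lmax = 256, 500
rng = np.random.default_rng(1)

# Simulate two "detset" maps sharing the same CMB signal but with
# independent white noise (a toy version of the 100A/100B split).
cl_in = 2e3 * 2 * np.pi / (np.arange(1, lmax + 2) * np.arange(2, lmax + 3))  # muK^2
cmb = hp.synfast(cl_in, nside, lmax=lmax)
map_a = cmb + rng.normal(0.0, 5.0, hp.nside2npix(nside))
map_b = cmb + rng.normal(0.0, 5.0, hp.nside2npix(nside))

# Toy mask keeping ~75 % of the sky (a Galactic-plane cut in this illustration).
theta = hp.pix2ang(nside, np.arange(hp.nside2npix(nside)))[0]
mask = (np.abs(np.cos(theta)) > 0.25).astype(float)
fsky = mask.mean()

# Eq. (1): pseudo cross-power spectrum from the masked harmonic coefficients.
alm_a = hp.map2alm(map_a * mask, lmax=lmax)
alm_b = hp.map2alm(map_b * mask, lmax=lmax)
pseudo_cl = hp.alm2cl(alm_a, alm_b)

# Eq. (2) corrects for the mask with the mode-coupling matrix M; here we use
# the diagonal f_sky approximation instead of the full Xpol deconvolution.
cl_hat = pseudo_cl / fsky
print("recovered / input at ell=200: %.2f" % (cl_hat[200] / cl_in[200]))
```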
Once corrected, the cross-spectra are inverse-variance averaged for each frequency pair in order to form six unbiased (though correlated) estimates of the angular power spectrum. The resulting cross-frequency spectra are plotted in Fig. 5 with respect to the $C_{\ell}$ average. For $TT$, the agreement between the different spectra is better than $20\thinspace\mu{\rm K}^{2}$, except (as expected) for the 100$\times$100 and the 217$\times$217 cases, which are affected by residuals from point sources and, for the latter, Galactic emission. In $EE$, only the 217$\times$217 case is affected by Galactic emission residuals at low multipoles, but the spectra are still consistent at the few $\mu{\rm K}^{2}$ level. For $TE$ and $ET$, we can see various features at the level of $10\thinspace\mu{\rm K}^{2}$ (especially for the $100T\times 100E$ and $217E\times 217T$ spectra). Even though the consistency between the cross-frequencies is very good, the likelihood presented in Sect. 5 will take into account those residuals from foreground emission.

#### 3.2.3 Multipole ranges

The HiLLiPoP likelihood covers the multipoles starting from $\ell_{\rm min}=30$ up to $\ell_{\rm max}=2500$ in temperature and $\ell_{\rm max}=2000$ in polarization. The multipoles below $\ell=30$ are considered in the low-$\ell$ likelihoods (LoLLiPoP and Commander, see Sect. 4). Table 1 gives the HiLLiPoP multipole ranges, $[\ell_{\rm min},\ell_{\rm max}]$, considered for each of the six cross-frequencies in $TT$, $TE$, and $EE$. The multipole ranges used in the likelihood analysis have been chosen to limit the contamination by Galactic dust emission at low $\ell$ and instrumental noise at high $\ell$. In practice, we ignore the lowest multipoles for cross-spectra involving the 217 GHz map, where dust contamination is the highest, and cut out multipoles higher than $\ell=1500$ for cross-spectra involving the 100 GHz channel given its high noise level. In total, the number of multipoles considered is now 29 758 for $TT+TE+EE$, to be compared to the number in the HiLLiPoP analysis of PR3, which was 25 597. The spectra are sample-variance limited up to $\ell\simeq 1800$ in $TT$ and $\ell\simeq 1100$ in $TE$, while the $EE$ mode is essentially limited by instrumental noise.

Channels | $TT$ | $TE$ | $EE$
---|---|---|---
100$\times$100 | [30,1500] | [30,1500] | [100,1200]
100$\times$143 | [30,1500] | [30,1500] | [30,1500]
100$\times$217 | [250,1500] | [100,1500] | [250,1500]
143$\times$143 | [50,2000] | [30,2000] | [30,2000]
143$\times$217 | [250,2500] | [200,2000] | [250,2000]
217$\times$217 | [250,2500] | [300,2000] | [250,2000]
Total | 10646 | 9816 | 9296

Table 1: Multipole ranges used in the HiLLiPoP analysis and corresponding number of $\ell$s available ($n_{\ell}=\ell_{\rm max}-\ell_{\rm min}+1$). The total number of $\ell$s across all spectra is $29\thinspace 758$.

#### 3.2.4 The covariance matrix

We use a semi-analytical estimate of the $C_{\ell}$ covariance matrix computed using Xpol. The matrix captures the $\ell$-by-$\ell$ correlations between all the power spectra involved in the analysis. The computation relies directly on data for the estimates. It follows that contributions from noise (correlated and uncorrelated), sky emission (of astrophysical and cosmological origin), and the sample variance are implicitly taken into account in this computation without relying on any model or simulations.
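Before turning to the covariance matrix, the multipole bookkeeping of Table 1 above can be checked in a few lines; the ranges are copied from the table and $n_{\ell}=\ell_{\rm max}-\ell_{\rm min}+1$ as in the caption.

```python
# Multipole ranges per cross-frequency, copied from Table 1.
ranges = {
    "TT": [(30, 1500), (30, 1500), (250, 1500), (50, 2000), (250, 2500), (250, 2500)],
    "TE": [(30, 1500), (30, 1500), (100, 1500), (30, 2000), (200, 2000), (300, 2000)],
    "EE": [(100, 1200), (30, 1500), (250, 1500), (30, 2000), (250, 2000), (250, 2000)],
}

totals = {mode: sum(lmax - lmin + 1 for lmin, lmax in spans)
          for mode, spans in ranges.items()}
print(totals)                           # {'TT': 10646, 'TE': 9816, 'EE': 9296}
print("TTTEEE:", sum(totals.values()))  # 29758 multipoles in total
```

Summing the three modes indeed recovers the 29 758 multipoles quoted in the text.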
The covariance matrix $\@tens{\Sigma}$ of the cross-power spectra is directly related to the covariance $\@tens{\widetilde{\Sigma}}$ of the pseudo cross- power spectra through the coupling matrices: $\displaystyle\@tens{\Sigma}_{\ell_{1}\ell_{2}}^{ab,cd}\equiv\left<\Delta C_{\ell}^{ab}\Delta C_{\ell^{\prime}}^{cd*}\right>=\left(M_{\ell\ell_{1}}^{ab}\right)^{-1}\widetilde{\@tens{\Sigma}}_{\ell_{1}\ell_{2}}^{ab,cd}\left(M_{\ell^{\prime}\ell_{2}}^{cd*}\right)^{-1},$ (3) with $(a,b,c,d)\in\\{T,E\\}$ for each map. We compute $\@tens{\widetilde{\Sigma}}$ for each pseudo cross-spectra block independently, which includes $\ell$-by-$\ell$ correlation and four spectral mode correlations $\\{TT,EE,TE,ET\\}$. The matrix $\@tens{\widetilde{\Sigma}}$, which gives the correlations between the pseudo cross-power spectra ($ab$) and ($cd$), is an N-by-N matrix (where $N=4\ell_{\rm max}$) and reads $\displaystyle\@tens{\widetilde{\Sigma}}_{\ell\ell^{\prime}}^{ab,cd}$ $\displaystyle\equiv$ $\displaystyle\left<\Delta\tilde{C}_{\ell}^{ab}\Delta\tilde{C}_{\ell^{\prime}}^{cd*}\right>=\left<\tilde{C}_{\ell}^{ab}\tilde{C}_{\ell^{\prime}}^{cd*}\right>-\tilde{C}_{\ell}^{ab}\tilde{C}_{\ell^{\prime}}^{cd*}$ $\displaystyle=$ $\displaystyle\sum_{mm^{\prime}}\frac{\left<\tilde{a}_{\ell m}^{a}\tilde{a}_{\ell^{\prime}m^{\prime}}^{c*}\right>\left<\tilde{a}_{\ell m}^{b*}\tilde{a}_{\ell^{\prime}m^{\prime}}^{d}\right>+\left<\tilde{a}_{\ell m}^{a}\tilde{a}_{\ell^{\prime}m^{\prime}}^{d*}\right>\left<\tilde{a}_{\ell m}^{b*}\tilde{a}_{\ell^{\prime}m^{\prime}}^{c}\right>}{(2\ell+1)(2\ell^{\prime}+1)},$ by expanding the four-point Gaussian correlation using Isserlis’ formula (or Wick’s theorem). Each two-point correlation of pseudo-${\mathbf{a}_{\ell m}}$s can be expressed as the convolution of $\mathbf{C}_{\ell}$ with a kernel that depends on the polarization mode considered: $\displaystyle{\left\langle\tilde{a}^{T_{a}*}_{{\ell m}}\tilde{a}^{T_{b}}_{{\ell^{\prime}m^{\prime}}}\right\rangle}$ $\displaystyle=\sum_{{\ell_{1}m_{1}}}C_{\ell_{1}}^{T_{a}T_{b}}{W^{\scriptscriptstyle 0,T_{a}}_{\scriptscriptstyle{\ell m}{\ell_{1}m_{1}}}}{W^{\scriptscriptstyle 0,T_{b}*}_{\scriptscriptstyle{\ell^{\prime}m^{\prime}}{\ell_{1}m_{1}}}},$ $\displaystyle{\left\langle\tilde{a}^{E_{a}*}_{{\ell m}}\tilde{a}^{E_{b}}_{{\ell^{\prime}m^{\prime}}}\right\rangle}$ $\displaystyle=\frac{1}{4}\sum_{\ell_{1}m_{1}}\left\\{C_{\ell_{1}}^{E_{a}E_{b}}{W^{\scriptscriptstyle+,E_{a}*}_{\scriptscriptstyle{\ell m}{\ell_{1}m_{1}}}}{W^{\scriptscriptstyle+,E_{b}}_{\scriptscriptstyle{\ell^{\prime}m^{\prime}}{\ell_{1}m_{1}}}}+C_{\ell_{1}}^{B_{a}B_{b}}{W^{\scriptscriptstyle-,E_{a}*}_{\scriptscriptstyle{\ell m}{\ell_{1}m_{1}}}}{W^{\scriptscriptstyle-,E_{b}}_{\scriptscriptstyle{\ell^{\prime}m^{\prime}}{\ell_{1}m_{1}}}}\right\\},$ $\displaystyle{\left\langle\tilde{a}^{T_{a}*}_{{\ell m}}\tilde{a}^{E_{b}}_{{\ell^{\prime}m^{\prime}}}\right\rangle}$ $\displaystyle=\frac{1}{2}\sum_{\ell_{1}m_{1}}C_{\ell_{1}}^{T_{a}E_{b}}{W^{\scriptscriptstyle 0,T_{a}*}_{\scriptscriptstyle{\ell m}{\ell_{1}m_{1}}}}{W^{\scriptscriptstyle+,E_{b}}_{\scriptscriptstyle{\ell^{\prime}m^{\prime}}{\ell_{1}m_{1}}}},$ where the kernels $W^{0}$, $W^{+}$, and $W^{-}$ are defined as linear combinations of products of $Y_{\ell m}$ of spin 0 and $\pm 2$, weighted by the spherical transform of the window function in the pixel domain (the apodized mask). As suggested in Efstathiou (2006), by neglecting the gradients of the window function and applying the completeness relation for spherical harmonics (Varshalovich et al. 
1988), we can reduce the products of four $W$s into kernels similar to the coupling matrix $\@tens{M}$ defined in Eq. (2). In the end, the blocks of the $\@tens{\Sigma}$ matrices are $\displaystyle\@tens{\Sigma}^{T_{a}T_{b},T_{c}T_{d}}$ $\displaystyle\simeq C_{\ell\ell^{\prime}}^{T_{a}T_{c}}C_{\ell\ell^{\prime}}^{T_{b}T_{d}}\@tens{M}_{TT,TT}$ $\displaystyle+\ C_{\ell\ell^{\prime}}^{T_{a}T_{d}}C_{\ell\ell^{\prime}}^{T_{b}T_{c}}\@tens{M}_{TT,TT},$ $\displaystyle\@tens{\Sigma}^{E_{a}E_{b},E_{c}E_{d}}$ $\displaystyle\simeq C_{\ell\ell^{\prime}}^{E_{a}E_{c}}C_{\ell\ell^{\prime}}^{E_{b}E_{d}}\@tens{M}_{EE,EE}$ $\displaystyle+\ C_{\ell\ell^{\prime}}^{E_{a}E_{d}}C_{\ell\ell^{\prime}}^{E_{b}E_{c}}\@tens{M}_{EE,EE},$ $\displaystyle\@tens{\Sigma}^{T_{a}E_{b},T_{c}E_{d}}$ $\displaystyle\simeq C_{\ell\ell^{\prime}}^{T_{a}T_{c}}C_{\ell\ell^{\prime}}^{E_{b}E_{d}}\@tens{M}_{TE,TE}$ $\displaystyle+\ C_{\ell\ell^{\prime}}^{T_{a}E_{d}}C_{\ell\ell^{\prime}}^{E_{b}T_{c}}\@tens{M}_{TT,TT},$ $\displaystyle\@tens{\Sigma}^{T_{a}T_{b},T_{c}E_{d}}$ $\displaystyle\simeq C_{\ell\ell^{\prime}}^{T_{a}T_{c}}C_{\ell\ell^{\prime}}^{T_{b}E_{d}}\@tens{M}_{TT,TT}$ $\displaystyle+\ C_{\ell\ell^{\prime}}^{T_{a}E_{d}}C_{\ell\ell^{\prime}}^{T_{b}T_{c}}\@tens{M}_{TT,TT},$ $\displaystyle\@tens{\Sigma}^{T_{a}T_{b},E_{c}E_{d}}$ $\displaystyle\simeq C_{\ell\ell^{\prime}}^{T_{a}E_{c}}C_{\ell\ell^{\prime}}^{T_{b}E_{d}}\@tens{M}_{TT,TT}$ $\displaystyle+\ C_{\ell\ell^{\prime}}^{T_{a}E_{d}}C_{\ell\ell^{\prime}}^{T_{b}E_{c}}\@tens{M}_{TT,TT},$ $\displaystyle\@tens{\Sigma}^{E_{a}E_{b},T_{c}E_{d}}$ $\displaystyle\simeq C_{\ell\ell^{\prime}}^{E_{a}T_{c}}C_{\ell\ell^{\prime}}^{E_{b}E_{d}}\@tens{M}_{TE,TE}$ $\displaystyle+\ C_{\ell\ell^{\prime}}^{E_{a}E_{d}}C_{\ell\ell^{\prime}}^{E_{b}T_{c}}\@tens{M}_{TE,TE},$ which are thus directly related to the measured auto- and cross-power spectra (see the appendix in Couchot et al. 2017b). In practice, to avoid any correlation between $C_{\ell}$ estimates and their covariance, we use a smoothed version of each measured power spectrum (using a Gaussian filter with $\sigma_{\ell}=5$) to estimate the covariance matrix. We finally average the cross-power spectra covariance matrix to form the full cross-frequency power-spectra matrices for the three modes $\\{TT,TE,EE\\}$. The resulting covariance matrix (Fig. 6) has $29\thinspace 758\times 29\thinspace 758$ elements, and is symmetric as well as positive definite. This semi-analytical estimation has been tested against Monte Carlo simulations. In particular, we tested how accurate the approximations are in the case of a non-ideal Gaussian signal (due to the presence of small foregrounds residuals), Planck’s realistic (low) level of pixel-pixel correlated noise, and the apodization length used for the mask. We found no deviation to the sample covariance estimated from the 1000 realizations of the full focal plane Planck simulations that include anisotropic correlated noise and foreground residuals. To go further and check the detailed impact from the sky mask (including the choice of the apodization length), we simulated CMB maps from the Planck best-fit $\Lambda$CDM angular power spectrum, to which we added realistic anisotropic Gaussian noise (non-white, but without correlation) corresponding to each of the six data set maps. We then computed their cross-power spectra using the same foreground masks as for the data. A total of $15\thinspace 000$ sets of cross-power spectra were produced. 
When comparing the diagonal of the covariance matrix from the analytical estimation with the corresponding simulated variance, agreement to better than a few percent is found (see Couchot et al. 2017b). Since we are using a Gaussian approximation of the likelihood, the uncertainty of the covariance matrix will not bias the estimation of the cosmological parameters. The percent-level precision obtained here will then only propagate into a sub-percent error on the variance of the recovered cosmological parameters.

Figure 6: Full HiLLiPoP covariance matrix, including all correlations in multipoles between cross-frequencies and power spectra.

## 4 Large-scale CMB likelihoods: LoLLiPoP and Commander

LoLLiPoP (LOw-$\ell$ LIkelihood on POlarized Power spectra) is a Planck low-$\ell$ polarization likelihood based on cross-spectra. It was first applied to Planck PR3 $EE$ data for investigating the reionization history in Planck Collaboration Int. XLVII (2016). It was then upgraded to PR4 data and was described in detail in Tristram et al. (2021) and Tristram et al. (2022), where it was used to derive constraints on the tensor-to-scalar ratio. LoLLiPoP can include $EE$, $BB$, and $EB$ cross-power spectra calculated on component-separated CMB detset maps processed by Commander from the PR4 frequency maps. Here we are focusing only on the $E$-mode component.

Systematic effects are considerably reduced in cross-correlation compared to auto-correlation, and LoLLiPoP is based on cross-power spectra for which the bias is zero when the noise is uncorrelated between maps. It uses the approximation presented in Hamimeche & Lewis (2008), modified as described in Mangilli et al. (2015) to apply to cross-power spectra. The idea is to apply a change of variable $C_{\ell}\rightarrow X_{\ell}$ so that the new variable $X_{\ell}$ is nearly Gaussian-distributed. Similarly to Hamimeche & Lewis (2008), we define $X_{\ell}=\sqrt{C_{\ell}^{\rm f}+O_{\ell}}\thinspace\thinspace g{\left(\frac{\widetilde{C}_{\ell}+O_{\ell}}{C_{\ell}+O_{\ell}}\right)}\thinspace\thinspace\sqrt{C_{\ell}^{\rm f}+O_{\ell}}\thinspace,$ (4) where $g(x)=\sqrt{2(x-\ln(x)-1)}$, $\widetilde{C}_{\ell}$ are the measured cross-power spectra, $C_{\ell}$ are the power spectra of the model to be evaluated, $C_{\ell}^{\rm f}$ is a fiducial CMB model, and $O_{\ell}$ are the offsets needed in the case of cross-spectra. In the case of auto-power spectra, the offsets $O_{\ell}$ are given by the noise bias effectively present in the measured power spectra. For cross-power spectra, the noise bias is zero, and we use effective offsets defined from the $C_{\ell}$ noise variance: $\Delta C_{\ell}\equiv\sqrt{\frac{2}{2\ell+1}}O_{\ell}.$ (5)

The distribution of the new variable $X_{\ell}$ can be approximated as Gaussian, with a covariance given by the covariance of the $C_{\ell}$s. The likelihood function of the $C_{\ell}$ given the data $\widetilde{C}_{\ell}$ is then $-2\ln P(C_{\ell}|\widetilde{C}_{\ell})=\sum_{\ell\ell^{\prime}}X_{\ell}^{\sf T}\thinspace\@tens{M}^{-1}_{\ell\ell^{\prime}}\thinspace X_{\ell^{\prime}}\thinspace.$ (6)

Uncertainties are incorporated into the $C_{\ell}$ covariance matrix $\@tens{M}_{\ell\ell^{\prime}}$, which is evaluated after applying the same pipeline (including Commander component-separation and cross-spectrum estimation on each simulation) to the Monte Carlo simulations provided in PR4.
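A compact numpy sketch of the change of variable of Eq. (4) and the quadratic form of Eq. (6) is shown below for a single $EE$ spectrum, in which case the matrix square roots reduce to ordinary square roots. The fiducial spectrum, offsets, and covariance are placeholders, and the sign factor added to $g(x)$ follows the convention of Hamimeche & Lewis (2008).

```python
import numpy as np

def g(x):
    # g(x) = sign(x-1) * sqrt(2 (x - ln x - 1)); the sign factor (Hamimeche &
    # Lewis 2008) keeps the transform monotonic around x = 1.
    return np.sign(x - 1.0) * np.sqrt(2.0 * (x - np.log(x) - 1.0))

def x_ell(cl_data, cl_model, cl_fid, o_ell):
    # Eq. (4) for a single EE spectrum (scalars, so the matrix square roots
    # reduce to ordinary square roots).
    return (cl_fid + o_ell) * g((cl_data + o_ell) / (cl_model + o_ell))

ells = np.arange(2, 31)
cl_fid = 5e-3 / ells ** 2                    # placeholder fiducial EE spectrum
cl_model = 4.5e-3 / ells ** 2                # model being evaluated
rng = np.random.default_rng(2)
cl_data = cl_fid * (1.0 + 0.2 * rng.standard_normal(ells.size))
# Eq. (5): offsets derived from a placeholder C_ell noise variance Delta C_ell.
o_ell = np.sqrt((2.0 * ells + 1.0) / 2.0) * 1e-4

x = x_ell(cl_data, cl_model, cl_fid, o_ell)
cov = np.diag((0.2 * cl_fid + 1e-4) ** 2)    # placeholder C_ell covariance
chi2 = x @ np.linalg.solve(cov, x)           # Eq. (6): -2 ln P up to a constant
print("-2 ln P = %.1f for %d multipoles" % (chi2, ells.size))
```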
While foreground emission and the cleaning procedure are kept fixed in the simulations (so that we cannot include uncertainties arising from an imperfect foreground model), the resulting $C_{\ell}$ covariance consistently includes CMB sample variance, statistical noise, and systematic residuals, as well as uncertainties from the foreground-cleaning procedure, together with the correlations induced by masking. We further marginalize the likelihood over the unknown true covariance matrix (as proposed in Sellentin & Heavens 2016) in order to propagate the uncertainty in the estimation of the covariance matrix caused by a limited number of simulations. LoLLiPoP is publicly available on GitHub (github.com/planck-npipe/lollipop). In this work, we consider only the information from $E$ modes, and restrict the multipole range from $\ell=2$ to $\ell=30$.

To cover the low multipoles ($\ell\thinspace{<}\thinspace 30$) in temperature, we make use of the Commander $TT$ likelihood. It is based on a Bayesian posterior sampling that combines astrophysical component separation and likelihood estimation, and employs Gibbs sampling to map out the full joint posterior (Eriksen et al. 2008). It was extensively used in previous Planck analyses (Planck Collaboration XV 2014; Planck Collaboration XI 2016). For the 2018 analysis (the version used in this work), Commander makes use of all Planck frequency channels, with a simplified foreground model including CMB, a single low-frequency power-law component, thermal dust, and CO line emission (see Planck Collaboration V 2020).

## 5 Small-scale CMB likelihood: HiLLiPoP

This section describes HiLLiPoP (High-$\ell$ Likelihood on Polarized Power spectra), including the models used for the foreground residuals and the instrumental systematic residuals. HiLLiPoP was developed for the Planck 2013 results and then applied to PR3 and PR4 (e.g. Planck Collaboration XI 2016; Couchot et al. 2017c; Tristram et al. 2021). Here we focus on the latest version of HiLLiPoP, released as V4.2 (github.com/planck-npipe/hillipop).

We make use of the 15 cross-spectra computed from the six detset maps at 100, 143, and 217 GHz (see Sect. 3). From those 15 cross-spectra (one each for 100$\times$100, 143$\times$143, and 217$\times$217; four each for 100$\times$143, 100$\times$217, and 143$\times$217), we derive six cross-frequency spectra after recalibration and co-addition, and these are compared to the model. Using all cross-frequencies allows us to break some degeneracies in the foreground domain. However, because Planck spectra are dominated by sample variance, the six cross-frequency spectra are highly correlated. We use the full semi-analytic covariance matrix that includes $\ell$-by-$\ell$ correlation, and $\{TT,TE,EE\}$ mode correlation as described in Sect. 3.2.4.

### 5.1 The likelihood approximation

On the full sky, the distribution of auto-spectra is a scaled $\chi^{2}$ with $2\ell+1$ degrees of freedom. The distribution of the cross-spectra is slightly different (see appendix A in Mangilli et al. 2015); however, above $\ell\thinspace{=}\thinspace 30$ the number of modes is large enough that we can safely assume that the $\widetilde{C}_{\ell}$ are Gaussian-distributed.
Consequently, for high multipoles the resulting likelihood can be approximated by a multivariate Gaussian, including correlations between the values of $C_{\ell}$ arising from the cut-sky, and reads $-2\ln\mathcal{L}=\sum_{\begin{subarray}{c}i\leqslant j\\\ i^{\prime}\leqslant j^{\prime}\end{subarray}}\sum_{\ell\ell^{\prime}}\mathbf{R}_{\ell}^{ij}\thinspace\left[\@tens{\Sigma}^{-1}\right]_{\ell\ell^{\prime}}^{ij,{i^{\prime}}{j^{\prime}}}\thinspace\mathbf{R}_{\ell^{\prime}}^{{i^{\prime}}{j^{\prime}}}+\ln|\@tens{\Sigma}|,$ (7) where $\mathbf{R}^{ij}_{\ell}=\mathbf{\widetilde{C}}^{ij}_{\ell}-\mathbf{C}^{ij}_{\ell}$ denotes the residual of the estimated cross-power spectrum $\mathbf{\widetilde{C}}_{\ell}$ with respect to the model $\mathbf{C}_{\ell}$, which depends on the frequencies $\\{i,j\\}$ and is described in the next section. The matrix $\@tens{\Sigma}=\left<\mathbf{R}\mathbf{R}^{\@tens{T}}\right>$ is the full covariance matrix that includes the instrumental variance from the data as well as the cosmic variance from the model. The latter is directly proportional to the model so that the matrix $\@tens{\Sigma}$ should, in principle, depend on the model. In practice, given our current knowledge of the cosmological parameters, the theoretical power spectra typically differ from each other at each $\ell$ by less than they differ from the observed $\widetilde{C}_{\ell}$, so that we can expand $\@tens{\Sigma}$ around a reasonable fiducial model. As described in Planck Collaboration XV (2014), the additional terms in the expansion are small if the fiducial model is accurate and leaving it out entirely does not bias the likelihood. Using a fixed covariance matrix $\@tens{\Sigma}$, we can drop the constant term $\ln|\@tens{\Sigma}|$ and recover nearly optimal variance (see Carron 2013). Within the approximations discussed above, we expect the likelihood to be $\chi^{2}$-distributed with a mean equal to the number of degrees of freedom $n_{\rm dof}=n_{\ell}-n_{\rm p}$ ($n_{\ell}$ being the number of band powers in the power spectra and $n_{\rm p}$ the number of fitted parameters) and a variance equal to $2n_{\rm dof}$. ### 5.2 The model We now present the model ($\mathbf{\hat{C}}_{\ell}$) used in the likelihood of Eq. (7). The foreground emission is mitigated by masking the part of the sky with high foreground signal (Sect. 3.2.1) and using an appropriate choice of multipole range (Sect. 3.2.3). However, our likelihood function explicitly takes into account residuals of foreground emission in the power spectra, together with the CMB model and instrumental systematic effects. In practice, we consider the model and the data in the form $D_{\ell}=\ell(\ell+1)C_{\ell}/2\pi$. We include in the foregrounds, for the temperature likelihood, contributions from: * • Galactic dust; * • cosmic infrared background (CIB); * • thermal (tSZ) and kinetic (kSZ) Sunyaev-Zeldovich components; * • Poisson-distributed point sources from radio and infrared star-forming galaxies; * • the correlation between CIB and the tSZ effect (tSZ$\times$CIB). We highlight that this new version of HiLLiPoP, labelled V4.2, now includes a model for two point-source components, namely dusty star-forming galaxies and radio sources. Consequently the term “CIB” hereafter refers to the clustered part only. For all components, we take into account the bandpass response using effective frequencies as listed in table 4 of Planck Collaboration IX (2014). 
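The frequency scalings that enter the foreground model detailed below (Eqs. (8)-(13)) can be illustrated numerically. The snippet evaluates, at the nominal frequencies, a modified blackbody (used for dust, CIB, and infrared sources), the power law used for radio sources, and the tSZ spectral function; using the nominal rather than the effective bandpass frequencies is a simplification of this illustration, and the physical constants are standard values.

```python
import numpy as np

h, k_B, T_cmb = 6.62607e-34, 1.380649e-23, 2.7255   # SI constants, CMB temperature in K

def planck(nu_ghz, temp):
    """Planck function B_nu(T), up to an arbitrary overall normalization."""
    nu = nu_ghz * 1e9
    return nu ** 3 / np.expm1(h * nu / (k_B * temp))

def mbb(nu_ghz, beta, temp):
    """Modified blackbody nu^beta * B_nu(T), used for dust, CIB, and IR sources."""
    return nu_ghz ** beta * planck(nu_ghz, temp)

def tsz(nu_ghz):
    """tSZ spectral function x(e^x+1)/(e^x-1) - 4, with x = h nu / (k_B T_CMB)."""
    x = h * nu_ghz * 1e9 / (k_B * T_cmb)
    return x * (np.exp(x) + 1.0) / np.expm1(x) - 4.0

nu0 = 143.0
beta_s = -0.8                                        # radio population index (fixed)
for nu in (100.0, 143.0, 217.0):
    print("nu = %3d GHz  dust %.2f  CIB %.2f  radio %.2f  tSZ %+.2f"
          % (nu,
             mbb(nu, 1.51, 19.6) / mbb(nu0, 1.51, 19.6),   # beta_d^T prior, T_d = 19.6 K
             mbb(nu, 1.75, 25.0) / mbb(nu0, 1.75, 25.0),   # beta_CIB prior, T = 25 K
             (nu / nu0) ** (-beta_s),                      # radio: a_nu ∝ nu^(-beta_s)
             tsz(nu)))
```

The tSZ function changes sign close to 217 GHz, which is why that channel contributes little tSZ signal.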
Galactic emission from free-free or synchrotron radiation is expected to be weak at the frequencies considered here (above 100 GHz). Nevertheless, we implemented a model for such emission and were not able to detect any residuals from Galactic synchrotron or free-free emission. In the following, we therefore neglect these contributions.

##### Galactic dust emission.

At frequencies above 100 GHz, Galactic emission is dominated by dust. The dust template is fitted to the Planck 353-GHz data using a power-law model. In practice, we compute the 353-GHz cross-spectra $\mathbf{\hat{C}}_{\ell}^{353A\times 353B}$ for each pair of masks $(M_{i},M_{j})$ associated with the cross-spectra $\nu_{i}\times\nu_{j}$ (Fig. 7). We then subtract the Planck best-fit CMB power spectrum and fit a power-law model with a free constant $A\ell^{\alpha_{\rm d}}+B$ in the range $\ell=[30,1500]$ for $TT$, to account for the unresolved point sources at 353 GHz. A simple power law is used to fit the $EE$ and $TE$ power spectra in the range $\ell=[30,1000]$. Thanks to the use of the point-source mask (described in Sect. 3.2.1), our Galactic dust residual power spectrum is much simpler than in the case of other Planck likelihoods. Indeed, the point-source masks used in the Planck PR3 analysis remove some Galactic structures and bright cirrus, inducing an artificial knee in the residual dust power spectra around $\ell=200$ (see section 3.3.1 in Planck Collaboration XI 2016). In contrast, with our point-source mask, the Galactic dust power spectra are fully compatible with power laws (Fig. 7). While the $EE$ and $TE$ power spectra are directly comparable to those derived in Planck Collaboration Int. XXX (2016), with indices of $\alpha_{\rm d}=-2.3$ and $-2.4$ for $EE$ and $TE$, respectively, the indices for $TT$ vary with the sky fraction considered, ranging from $\alpha_{\rm d}=-2.2$ down to $-2.6$ for the largest sky fraction.

Figure 7: Dust power spectra, $D_{\ell}=\ell(\ell+1)C_{\ell}/2\pi$, at 353 GHz for $TT$ (top), $EE$ (middle), and $TE$ (bottom). The power spectra are computed from cross-correlation between detset maps at 353 GHz for different sets of masks, as defined in Sect. 3.2.1, and further corrected for the CMB power spectrum (solid black line) and CIB power spectrum (dashed black line). The coloured dashed lines are simple fits, as described in the text.

For each polarization mode ($TT$, $EE$, $TE$), we then extrapolate the dust templates at 353 GHz for each cross-mask to the cross-frequency considered: $D_{\ell}^{\rm dust}(\nu\times\nu^{\prime})=c_{\mathrm{dust}}\frac{a^{\rm dust}_{\nu}}{a^{\rm dust}_{353}}\frac{a^{\rm dust}_{\nu^{\prime}}}{a^{\rm dust}_{353}}\mathcal{D}_{\ell}^{\rm dust}(M_{\nu},M_{\nu^{\prime}}),$ (8) where $a^{\rm dust}_{\nu}=\nu^{\thinspace\beta_{\rm d}}B_{\nu}(T_{\rm d})$ is a modified blackbody with $T_{\rm d}$ fixed to $19.6\thinspace$K, while $c_{\mathrm{dust}}$ and $\beta_{\rm d}$ are sampled independently for temperature and polarization. We use Gaussian priors for the spectral indices $\beta_{\rm d}$ from Planck Collaboration Int. XXII (2015), which gives $\beta_{\rm d}^{T}=\mathcal{N}(1.51,0.01)$ and $\beta_{\rm d}^{P}=\mathcal{N}(1.59,0.02)$ for temperature and polarization, respectively. The coefficient $c_{\mathrm{dust}}$ allows us to propagate the uncertainty from fitting the 353-GHz dust spectrum with a power law. We sample $c_{\mathrm{dust}}$ with a Gaussian prior, $c_{\mathrm{dust}}=\mathcal{N}(1.0,0.1)$.

##### Cosmic infrared background (CIB).
We use a template based on the halo model fitted on Planck and Herschel data (Planck Collaboration XXX 2014), extrapolated with a power-law at high multipoles. The template is rescaled by $A^{\rm CIB}$, the amplitude of the contamination at our reference frequency ($\nu_{0}=143$ GHz) and $\ell\thinspace{=}\thinspace 3000$. The emission law is modelled by a modified blackbody $a_{\nu}^{\rm CIB}=\nu^{\thinspace\beta_{\mathrm{CIB}}}B_{\nu}(T)$ with a fixed temperature ($T=25\thinspace$K) and a variable index $\beta_{\mathrm{CIB}}$. We use a strong prior $\beta_{\mathrm{CIB}}=\mathcal{N}(1.75,0.06)$ (Planck Collaboration XXX 2014) and assume perfect correlation between the emission in the frequency range considered (from 100 to 217 GHz), $D_{\ell}^{\rm CIB}(\nu\times\nu^{\prime})=A^{\rm CIB}\frac{a_{\nu}^{\rm CIB}}{a_{\nu_{0}}^{\rm CIB}}\frac{a_{\nu^{\prime}}^{\rm CIB}}{a_{\nu_{0}}^{\rm CIB}}\mathcal{D}_{\ell}^{\rm CIB}.$ (9) ##### Thermal Sunyaev-Zeldovich (tSZ) effect. The template for the tSZ emission comes from the halo model fitted on Planck measurements in Planck Collaboration XXII (2016) and used more recently with PR4 data in Tanimura et al. (2022). The tSZ signal is parameterized by a single amplitude $A^{\rm tSZ}$, corresponding to the amplitude of the tSZ signal at our reference frequency ($\nu_{0}=143$ GHz) at $\ell=3000$, $D_{\ell}^{\rm tSZ}(\nu\times\nu^{\prime})=A^{\rm tSZ}\frac{a_{\nu}^{\rm tSZ}}{a_{\nu_{0}}^{\rm tSZ}}\frac{a_{\nu^{\prime}}^{\rm tSZ}}{a_{\nu_{0}}^{\rm tSZ}}\mathcal{D}_{\ell}^{\rm tSZ},$ (10) where $a_{\nu}^{\rm tSZ}=x[e^{x}+1]/[e^{x}-1]-4$ (with $x=h\nu/k_{\rm B}T_{\rm CMB}$). ##### Kinetic Sunyaev-Zeldovich (kSZ) effect. The kSZ emission is parameterized by $A^{\rm kSZ}$, the amplitude at $\ell\thinspace{=}\thinspace 3000$, scaling a fixed template including homogeneous and patchy reionization components from Shaw et al. (2012) and Battaglia et al. (2013), $D_{\ell}^{\rm kSZ}(\nu\times\nu^{\prime})=A^{\rm kSZ}\ \mathcal{D}_{\ell}^{\rm kSZ}\thinspace.$ (11) ##### Thermal SZ$\times$CIB correlation. The cross-correlation between the thermal SZ and the CIB is parameterized as $\displaystyle D_{\ell}^{\rm tSZ\times CIB}(\nu\times\nu^{\prime})$ $\displaystyle=$ $\displaystyle-\xi\sqrt{A^{\rm tSZ}A^{\rm CIB}}$ (12) $\displaystyle\times\ (\frac{a_{\nu}^{\rm tSZ}a_{\nu^{\prime}}^{\rm CIB}+a_{\nu}^{\rm CIB}a_{\nu^{\prime}}^{\rm tSZ}}{a_{\nu_{0}}^{\rm tSZ}a_{\nu_{0}}^{\rm CIB}})\ \mathcal{D}_{\ell}^{\rm tSZ\times CIB},$ with $\xi$ the correlation coefficient rescaling the template $\mathcal{D}_{\ell}^{\rm tSZ\times CIB}$ from Addison et al. (2012). ##### Point sources. Point-source residuals in CMB data sets consist of a combination of the emission coming from radio and infrared sources. For earlier Planck data releases HiLLiPoP used different point-source masks adapted to each frequency. This would require the estimation of the flux cut for each mask in order to use a physical model for the two point-source components. Since the flux-cut estimates are subject to large uncertainties, we used to fit one amplitude for the Poisson term at each cross-frequency in previous HiLLiPoP versions. In this new version of HiLLiPoP, we adopt a common mask for point sources (see Sect. 3.2.1). We then consider a flat Poisson-like power spectrum for each component and use a power law to describe the spectral energy distribution (SED) for the radio sources as $a_{\nu}^{\rm rad}\propto\nu^{-\beta_{\rm s}}$ (Tucci et al. 
2011) while we use $a_{\nu}^{\rm IR}=\nu^{\thinspace\beta_{\rm IR}}B_{\nu}(T)$ (Béthermin et al. 2012) for infrared dusty star-forming galaxies. The residual cross-power spectra for point sources are finally $C_{\ell}^{\rm PS}(\nu\times\nu^{\prime})=A^{\rm rad}\frac{a_{\nu}^{\rm rad}}{a_{\nu_{0}}^{\rm rad}}\frac{a_{\nu^{\prime}}^{\rm rad}}{a_{\nu_{0}}^{\rm rad}}+A^{\rm IR}\frac{a_{\nu}^{\rm IR}}{a_{\nu_{0}}^{\rm IR}}\frac{a_{\nu^{\prime}}^{\rm IR}}{a_{\nu_{0}}^{\rm IR}}\thinspace.$ (13)

Following Lagache et al. (2020), radio source emission is dominated at frequencies above about 100 GHz by radio quasars whose spectral indices can vary from $-1.0$ to $0.0$ (Planck Collaboration XIII 2011; Planck Collaboration Int. VII 2013). We constrain the SED by fixing $\beta_{\rm s}\thinspace{=}\thinspace-0.8$, following results from Reichardt et al. (2021). For infrared dusty star-forming galaxies, we adopt $\beta_{\rm IR}$ identical to $\beta_{\rm CIB}$ and $T=25$ K. The $C_{\ell}$s are then converted into $D_{\ell}$s such that the amplitudes $A^{\rm rad}$ and $A^{\rm IR}$ refer to the amplitude of $D_{3000}$ at 143 GHz. In polarization, we do not include any contribution from point sources, since it is negligible compared to Planck noise for both components (Tucci et al. 2004; Lagache et al. 2020).

With the frequencies and the range of multipoles used in the HiLLiPoP likelihood, the foreground residuals are small in amplitude and mostly degenerate in the SED domain. As a result, we choose to set priors on the SED parameters so that the correlation between the amplitudes of residuals is significantly reduced. The optimization of the foreground model and in particular the determination of the priors adopted for the baseline analysis have been driven by astrophysical knowledge and results from the literature. We have extensively tested the impact of the priors using the $\Lambda$CDM model as a baseline (without any of its extensions). The results of these tests are discussed in Sect. 8.

### 5.3 Instrumental effects

The main instrumental effects that we propagate to the likelihood are the calibration uncertainties of each of the frequency maps in temperature and polarization (through the polarization efficiency). As a consequence, we sample five inter-calibration coefficients while fixing the calibration of the most sensitive map (the first detset at 143 GHz, 143A) as the reference. In addition, we sample a Planck calibration parameter $A_{\textit{Planck}}$ with a strong prior, $A_{\textit{Planck}}=\mathcal{N}(1.0000,0.0025)$, in order to propagate the uncertainty coming from the absolute calibration based on the Planck orbital dipole. We also allow for a recalibration of the polarized maps using polarization efficiencies for each of the six maps considered. Those coefficients have been re-estimated in the NPIPE processing and we expect them to now be closer to unity and consistent within a frequency channel (Planck Collaboration Int. LVII 2020). By default, we fixed the polarization efficiencies to their best-fit values (unity at 100 and 143 GHz and 0.975 at 217 GHz, see Sect. 8 for details). Angular power spectra have been corrected for beam effects using the beam window functions, including the beam leakage, estimated with QuickPol (see Sect. 3.2.2). With the improvement of the beam-estimation pipeline in Planck Collaboration XI (2016), the associated uncertainties have been shown to be negligible in Planck data and are ignored in this analysis.
Discrete sampling of the sky can lead to a small additive (rather than multiplicative) noise contribution known as the “subpixel” effect. Its amplitude depends on the temperature gradient within each pixel. With a limited number of detectors per frequency (and even more so per detset), the Planck maps are affected by the subpixel effect. However, estimation of the size of the effect using QuickPol (Hivon et al. 2017), assuming fiducial spectra including CMB and foreground contributions, has shown it to be small (Planck Collaboration V 2020) and it is therefore neglected in this work.

## 6 Results on the 6-parameter $\Lambda$CDM model

In this section, we describe the constraints on cosmological parameters in the $\Lambda$CDM model using the Planck PR4 data. In combination with HiLLiPoP, we systematically include the polarized low-$\ell$ $EE$ likelihood LoLLiPoP (discussed in Sect. 4). The Commander low-$\ell$ likelihood (Planck Collaboration IV 2020) is added only for $TT$ and $TE$ (as well as the combination of the three mode spectra labelled $TTTEEE$), but not for $EE$. The model for the CMB is computed by numerically solving the background and perturbation equations for a specific cosmological model using CAMB (Lewis et al. 2000; Howlett et al. 2012). One can equally well use CLASS (Lesgourgues 2011; Blas et al. 2011) instead, except that the definition of $\theta_{*}$ differs slightly between the two codes. In this paper, we consider a $\Lambda$CDM model with six free parameters describing: the current physical densities of baryons ($\Omega_{\rm b}h^{2}$) and cold dark matter ($\Omega_{\rm c}h^{2}$); the angular acoustic scale ($\theta_{*}$); the reionization optical depth ($\tau$); and the amplitude and spectral index of the primordial scalar spectrum ($A_{\rm s}$ and $n_{\rm s}$). Here $h$ is the dimensionless Hubble constant, $h=H_{0}/(100\thinspace{\rm km}\thinspace{\rm s}^{-1}\thinspace{\rm Mpc}^{-1})$.

In addition, we fit for six inter-calibration parameters, seven foreground residual amplitudes in temperature ($c_{\mathrm{dust}}^{T}$, $A_{\mathrm{radio}}$, $A_{\mathrm{IR}}$, $A_{\mathrm{CIB}}$, $A_{\mathrm{tSZ}}$, $A_{\mathrm{kSZ}}$, and $\xi_{\mathrm{SZ\times CIB}}$), plus one in polarization ($c_{\mathrm{dust}}^{P}$), and three foreground spectral indices ($\beta_{\mathrm{dust}}^{T}$, $\beta_{\mathrm{dust}}^{P}$, and $\beta_{\mathrm{CIB}}$). Foreground and instrumental parameters are listed in Table 6, together with their respective priors.

To quantify the agreement between the data and the model, we computed the $\chi^{2}$ values with respect to the best-fit model for each of the data sets. The $\chi^{2}$ values and the number of standard deviations from unity are given in Table 2. The goodness-of-fit is better than for previous Planck releases, but we still find a relatively large $\chi^{2}$ value for $TT$ (corresponding to about 2.7$\thinspace\sigma$), while the $TE$ and $EE$ $\chi^{2}$ values are compatible with unity, at 1.8$\thinspace\sigma$ and 0.1$\thinspace\sigma$, respectively. For the full combination $TTTEEE$, we obtain a $\chi^{2}=30495$ for a data size of 29758, corresponding to a 3.02$\thinspace\sigma$ deviation. As described in Rosenberg et al. (2022), where the goodness of fit is also somewhat poor (4.07$\thinspace\sigma$ for $TT$ and 4.46$\thinspace\sigma$ for the $TTTEEE$), this could be explained by a slight misestimation of the instrumental noise, rather than a bias that could be fit by an improved foreground model or a different cosmology.
However, we emphasize that the level of this divergence is small, since the recovered reduced-$\chi^{2}$, $\chi^{2}/n_{\rm d}=1.02$, shows that the semi-analytical estimation of the covariance of the data is accurate at the percent level. The goodness-of-fit values for individual cross-spectra are given in Table 7.

Likelihood | $\chi^{2}$ | $n_{\rm d}$ | $\chi^{2}/n_{\rm d}$ | $\delta\sigma(\chi^{2})$
---|---|---|---|---
EE | 9290 | 9296 | 1.00 | 0.05
TE | 10072 | 9816 | 1.03 | 1.82
TT | 11046 | 10646 | 1.04 | 2.74
TTTEEE | 30495 | 29758 | 1.02 | 3.02

Table 2: $\chi^{2}$ values compared to the size of the data vector ($n_{\rm d}$) for each of the Planck likelihoods. Here $\delta\sigma(\chi^{2})=(\chi^{2}/n_{\rm d}-1)/\sqrt{2/n_{\rm d}}$.

Co-added CMB power spectra are shown in Figs. 9 and 10, for $TT$, $TE$, and $EE$; they are compared to the best-fit obtained with the full $TTTEEE$ combination. Planck spectra are binned with $\Delta\ell=30$ for the plots, but considered $\ell$-by-$\ell$ in the likelihood. The plots also show the residuals relative to the $\Lambda$CDM best-fit to $TTTEEE$, as well as the normalized residuals. We cannot identify any deviation from statistical noise or any bias from foreground residuals.

In Fig. 11, we compare the constraints on $\Lambda$CDM parameters obtained using $TT$, $TE$, and $EE$ and their combination. We find very good consistency between $TT$ and $TE$, while $EE$ constraints are wider, with a deviation in the acoustic scale $\theta_{*}$ toward lower values. This feature of the Planck PR4 data was previously reported in Rosenberg et al. (2022), in which the authors studied the correlation with other parameters and concluded that this is likely due to parameter degeneracies coupling to residual systematics in $EE$. However, the deviation of $\theta_{*}$ between $EE$ and $TT$ is now reduced with the increase of the sky fraction enabled by HiLLiPoP V4.2, though still present at the 1.9$\thinspace\sigma$ level. In addition, we have checked that this shift in $\theta_{*}$ is not related to any super-sample lensing effect (as described in Manzotti et al. 2014), or to any aberration correction (see Jeong et al. 2014), both of which are negligible for the large sky fraction considered in the Planck data set. We note that, interestingly, $\theta_{*}$ is the only parameter that deviates in $EE$; the others, including $H_{0}$, are compatible with $TT$ at much better than 1$\thinspace\sigma$. Given the weak sensitivity of the Planck $EE$ spectra as compared to $TT$ and $TE$, discrepancies in the $EE$ parameter reconstruction will have little impact on overall cosmological parameter results.

The HiLLiPoP V4.2 constraints on $\Lambda$CDM cosmological parameters are summarized in Table 3. As compared to the last Planck cosmological results in Planck Collaboration VI (2020), the constraints are tighter, with no major shifts. The error bars are reduced by 10 to 20 %, depending on the parameter. The reionization optical depth is now constrained at close to the 10 % level: $\tau=0.058\pm 0.006.$ (14) This is the result of the NPIPE treatment of the PR4 data associated with the low-$\ell$ likelihood LoLLiPoP (see Planck Collaboration Int. LVII 2020).
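The goodness-of-fit numbers of Table 2 above follow directly from the definition given in its caption, $\delta\sigma(\chi^{2})=(\chi^{2}/n_{\rm d}-1)/\sqrt{2/n_{\rm d}}$, and can be reproduced, up to rounding of the quoted values, as follows:

```python
import math

# (chi2, n_d) for each likelihood combination, as listed in Table 2.
table2 = {"EE": (9290, 9296), "TE": (10072, 9816),
          "TT": (11046, 10646), "TTTEEE": (30495, 29758)}

for name, (chi2, nd) in table2.items():
    delta_sigma = (chi2 / nd - 1.0) / math.sqrt(2.0 / nd)
    print("%-7s chi2/n_d = %.2f  deviation = %.2f sigma" % (name, chi2 / nd, delta_sigma))
```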
For the constraint on the Hubble constant, we obtain $H_{0}=(67.68\pm 0.52)\thinspace\text{km\thinspace s${}^{-1}$\thinspace Mpc${}^{-1}$},$ (15) consistent with previous Planck results and still significantly lower than the local distance-ladder measurements, which typically range from $H_{0}=70$ to $76\thinspace{\rm km}\thinspace{\rm s}^{-1}\thinspace{\rm Mpc}^{-1}$, depending on the data set and the calibration used for the first step of the distance ladder (see for instance Abdalla et al. 2022). The amplitude of density fluctuations is $\sigma_{8}=0.8077\pm 0.0061,$ (16) compatible with PR3 results ($\sigma_{8}=0.8120\pm 0.0073$) but lower by $0.7\thinspace\sigma$. The matter density, $\Omega_{\mathrm{m}}$, also shifts by roughly $1\thinspace\sigma$, so that $S_{8}\equiv\sigma_{8}(\Omega_{\rm m}/0.3)^{0.5}=0.818\pm 0.013.$ (17) Compared to PR3 ($S_{8}=0.834\pm 0.016$), this shift to a lower value of $S_{8}$ brings it closer to the measurements derived from galaxy clustering and weak lensing from the Dark Energy Survey Year 3 analysis ($S_{8}=0.782\pm 0.019$, for $\Lambda$CDM with fixed ${\sum m_{\nu}}$, Abbott et al. 2022), decreasing the CMB versus large-scale structure tension on $S_{8}$ from 2.1$\thinspace\sigma$ to 1.6$\thinspace\sigma$.

Before discussing results on the foreground parameters (Sect. 7) and instrumental parameters (Sect. 8), we show in Fig. 8 the correlation matrix for the fitted parameters. We can see that foreground parameters are only weakly correlated with the cosmological parameters and the inter-calibrations. This strengthens the robustness of the results with respect to the foreground model and ensures very low impact on cosmology.

Figure 8: Correlation matrix for the fitted parameters of the combined HiLLiPoP likelihood $TTTEEE$. The first block corresponds to cosmological parameters from the $\Lambda$CDM model, the second block gathers the foreground parameters, and the last block shows the instrumental parameters.

Figure 9: Maximum-likelihood frequency-co-added temperature power spectrum for HiLLiPoP V4.2. For the purposes of this figure, the power spectrum is binned with $\Delta\ell=30$. The middle panel shows the residuals with respect to the fiducial base-$\Lambda$CDM cosmology, and the bottom panel shows the residuals normalized by the uncertainties.

Figure 10: As in Fig. 9, but for $TE$ (left) and $EE$ (right) power spectra.

Figure 11: Posterior distributions for the cosmological parameters using power spectra from Planck PR4.
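The derived quantities quoted in Eqs. (16)-(17) and the quoted $S_{8}$ tensions can be checked with simple arithmetic, using the $TTTEEE$ means (see Table 3 below) and the DES Y3 value quoted above:

```python
import math

# TTTEEE means from Table 3 and the DES Y3 value quoted in the text.
sigma8, omega_m = 0.8077, 0.3081
s8_pr4, err_pr4 = 0.818, 0.013
s8_pr3, err_pr3 = 0.834, 0.016
s8_des, err_des = 0.782, 0.019

s8 = sigma8 * math.sqrt(omega_m / 0.3)
print("S8 = sigma8 (Omega_m/0.3)^0.5 = %.3f" % s8)           # ~0.818

def tension(a, ea, b, eb):
    # Difference between two independent measurements in units of their
    # combined uncertainty.
    return abs(a - b) / math.hypot(ea, eb)

print("PR4 vs DES Y3: %.1f sigma" % tension(s8_pr4, err_pr4, s8_des, err_des))  # ~1.6
print("PR3 vs DES Y3: %.1f sigma" % tension(s8_pr3, err_pr3, s8_des, err_des))  # ~2.1
```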
Parameter | TT | TE | EE | TTTEEE
---|---|---|---|---
$\Omega_{\mathrm{b}}h^{2}$ | $0.02217\pm 0.00020$ | $0.02240\pm 0.00021$ | $0.02282\pm 0.00080$ | $0.02226\pm 0.00013$
$\Omega_{\mathrm{c}}h^{2}$ | $0.1191\pm 0.0019$ | $0.1171\pm 0.0017$ | $0.1178\pm 0.0034$ | $0.1188\pm 0.0012$
$100\theta_{\ast}$ | $1.04116\pm 0.00040$ | $1.04146\pm 0.00039$ | $1.03982\pm 0.00058$ | $1.04108\pm 0.00024$
$\log(10^{10}A_{\mathrm{s}})$ | $3.043\pm 0.013$ | $3.036\pm 0.016$ | $3.052\pm 0.016$ | $3.043\pm 0.013$
$n_{\mathrm{s}}$ | $0.9645\pm 0.0054$ | $0.9703\pm 0.0086$ | $0.9737\pm 0.0106$ | $0.9682\pm 0.0039$
$\tau$ | $0.0571\pm 0.0063$ | $0.0575\pm 0.0064$ | $0.0579\pm 0.0066$ | $0.0578\pm 0.0061$
$H_{0}$ | $67.54\pm 0.84$ | $68.56\pm 0.75$ | $68.15\pm 1.87$ | $67.68\pm 0.52$
$\sigma_{8}$ | $0.8078\pm 0.0079$ | $0.7992\pm 0.0094$ | $0.8057\pm 0.0131$ | $0.8077\pm 0.0061$
$S_{8}$ | $0.821\pm 0.022$ | $0.795\pm 0.020$ | $0.811\pm 0.042$ | $0.818\pm 0.013$
$\Omega_{\mathrm{m}}$ | $0.3099\pm 0.0115$ | $0.2969\pm 0.0099$ | $0.3039\pm 0.0225$ | $0.3081\pm 0.0070$

Table 3: Parameter constraints in the 6-parameter $\Lambda$CDM model for each data set and their combination, using HiLLiPoP V4.2 in addition to LoLLiPoP and Commander. We report mean values and symmetrical 68 % confidence intervals.

## 7 Foreground parameters

All Planck cross-spectra are dominated by the CMB signal at all the scales we consider. This is illustrated for $TT$ in Fig. 22 of Appendix B, where we show each component of the model fitted in the likelihood with the best-fit parameters for the six cross-frequencies. It is also true for $TE$ and $EE$. Thanks to the multi-frequency analysis, we are able to break degeneracies related to the fact that some foreground-component power spectra are very similar. The resulting marginalized posteriors are plotted in Fig. 12. With the choice made for the multipole range and sky fraction, the Planck PR4 data set is sensitive to the CIB, the tSZ, and residual point sources (radio at 100 GHz and infrared at 217 GHz). Very low multipoles are sensitive to residuals from Galactic dust emission, especially at 217 GHz.

Figure 12: Posteriors for foreground amplitudes. Units are $\mu{\rm K}^{2}$ normalized at $\ell=3000$ and $\nu=143$ GHz.

We detect the emission of radio point sources at better than $10\thinspace\sigma$. The preferred radio power in $D_{\ell}$ at $\ell=3000$ for 143 GHz is $A_{\mathrm{radio}}=(63.3\pm 5.0)\thinspace\mu{\rm K}^{2},$ (18) with a population spectral index for the radio power fixed to $\beta_{\mathrm{s}}=-0.8$, close to the value recovered by the SPT team ($\beta_{\mathrm{s}}=-0.76\pm 0.15$, Reichardt et al. 2021). Allowing $\beta_{\mathrm{s}}$ to vary in Planck data gives $\beta_{\mathrm{s}}=-0.54\pm 0.08$, with a corresponding increase of the amplitude $A_{\mathrm{radio}}$. This also impacts the SZ-CIB cross-correlation amplitude with a significant increase of $\xi$.

We obtain a high-significance detection of CIB anisotropies, with amplitudes at 143 GHz and $\ell=3000$ given by $A_{\mathrm{CIB}}=(1.04\pm 0.31)\thinspace\mu{\rm K}^{2},$ (19) $A_{\mathrm{IR}}=(6.12\pm 0.52)\thinspace\mu{\rm K}^{2},$ (20) for the clustered and Poisson parts, respectively.
We note that these amplitudes cannot be directly compared to values in previous works because they strongly depend on the prior used for the $\beta_{\mathrm{CIB}}$ index for the former and on the flux cut applied by the point-source mask for the latter.

The thermal Sunyaev-Zeldovich effect is also significantly detected, with an amplitude at 143 GHz and $\ell=3000$ of $A_{\mathrm{tSZ}}=(6.1\pm 1.5)\thinspace\mu{\rm K}^{2}.$ (21) This is close to (but somewhat higher than) what is reported in Reichardt et al. (2021), with $A_{\mathrm{tSZ}}=(3.42\pm 0.54)\thinspace\mu{\rm K}^{2}$, although our uncertainties are larger. However, it is more closely comparable with ACTpol results, $A_{\mathrm{tSZ}}=(5.29\pm 0.66)\thinspace\mu{\rm K}^{2}$ (Choi et al. 2020). We find an upper limit for the kSZ effect, while the correlation between tSZ and CIB is compatible with zero: $A_{\mathrm{kSZ}}<7.9\thinspace\mu{\rm K}^{2}\quad\text{(at 95\thinspace\% CL)};$ (22) $\xi_{\mathrm{SZ\times CIB}}=0.45\pm 0.31.$ (23) We note that those last results are about 10 times less sensitive than the constraints from ground-based CMB measurements, such as those from SPT or ACTpol.

For the residuals of Galactic dust emission, with priors on the spectral indices driven by Planck Collaboration Int. XXII (2015), we find rescaling coefficients $c_{\mathrm{dust}}$ to be $1.08\pm 0.03$ and $1.20\pm 0.03$ for temperature and polarization, respectively. This indicates that we recover slightly more dust contamination than our expectations derived from the measurements at 353 GHz, especially in polarization. To estimate the impact on the reconstructed parameters (both cosmological and from foregrounds), we sampled the dust amplitudes at each frequency. The constraints are shown in Fig. 13 for temperature (top) and polarization (bottom). The figure illustrates that we have a good fit of the dust emission in temperature, while we are marginally sensitive to dust residuals in polarization. This explains why, given our prior on the SED for the polarized dust emission, $\beta_{\rm dust}^{P}=\mathcal{N}(1.59,0.02)$, we recover an amplitude higher than expected.

Figure 13: Amplitude of the dust emission relative to 353 GHz for a modified-blackbody dust model (blue line) as a function of the effective frequency (computed as the geometric mean of the two frequencies involved), compared to a fit using one amplitude per frequency (black dots). The top panel is for temperature and the bottom panel for polarization.

As discussed in Sect. 5.2, HiLLiPoP V4.2 also includes a 2-component model for point sources. Figure 14 shows how the model, as the sum of the two point-source components, matches the fit with one amplitude for each cross-frequency.

Figure 14: Point-source model as a function of the effective frequency (computed as the geometric mean of the two frequencies involved), compared to the fit of one amplitude per cross-spectrum.

When changing the models as described above, the impact on $\Lambda$CDM parameters is very limited. We find variations of less than $0.11\thinspace\sigma$ for all $\Lambda$CDM parameters, with the exception of $n_{\rm s}$, which can vary by $0.18\thinspace\sigma$ when changing the model for point sources.
Error bars on $\Lambda$CDM parameters are also stable with respect to foreground modelling, with variations limited to less than 2 % (4 % for $n_{\rm s}$).

## 8 Instrumental parameters

Inter-calibration parameters are fitted in HiLLiPoP with respect to the first detset at 143 GHz (see Sect. 5.3). The inter-calibrations are recovered at better than the percent level and are compatible with unity. Using the full $TTTEEE$ likelihood, we find $\displaystyle c_{\rm 100A}$ $\displaystyle=$ $\displaystyle 1.003\pm 0.007,$ (24) $\displaystyle c_{\rm 100B}$ $\displaystyle=$ $\displaystyle 1.004\pm 0.007,$ (25) $\displaystyle c_{\rm 143B}$ $\displaystyle=$ $\displaystyle 1.004\pm 0.006,$ (26) $\displaystyle c_{\rm 217A}$ $\displaystyle=$ $\displaystyle 1.001\pm 0.008,$ (27) $\displaystyle c_{\rm 217B}$ $\displaystyle=$ $\displaystyle 1.001\pm 0.008.$ (28)

HiLLiPoP also allows us to fit for the polarization efficiencies even though, by default, these are fixed. Using the full $TTTEEE$ likelihood, we constrain the polarization efficiencies for each map at the percent level. The mean posteriors show polarization efficiencies compatible with unity at better than $1\thinspace\sigma$, except for the two maps at 217 GHz, which differ from unity by about $2\thinspace\sigma$: $\displaystyle\eta_{\rm 100A}$ $\displaystyle=$ $\displaystyle 0.994\pm 0.013;$ (29) $\displaystyle\eta_{\rm 100B}$ $\displaystyle=$ $\displaystyle 0.987\pm 0.013;$ (30) $\displaystyle\eta_{\rm 143A}$ $\displaystyle=$ $\displaystyle 1.016\pm 0.013;$ (31) $\displaystyle\eta_{\rm 143B}$ $\displaystyle=$ $\displaystyle 1.001\pm 0.010;$ (32) $\displaystyle\eta_{\rm 217A}$ $\displaystyle=$ $\displaystyle 0.978\pm 0.013;$ (33) $\displaystyle\eta_{\rm 217B}$ $\displaystyle=$ $\displaystyle 0.972\pm 0.014.$ (34) Fixing polarization efficiencies to $1.00$, $1.00$, and $0.975$ (at 100, 143, and 217 GHz, respectively) increases the $\chi^{2}$ by $\Delta\chi^{2}=36$ for 29758 data points. However, this choice has no effect on either the $\Lambda$CDM parameters or the foreground parameters.

## 9 Consistency between Planck likelihoods

We now investigate the impact of the increased sky fraction used in this new version of HiLLiPoP. We repeat the analysis using more conservative Galactic masks, reducing the sky fraction at each frequency by 5 % (labelled “XL”) or 10 % (labelled “L”) with respect to our baseline (“XXL”, which masks 20 %, 30 %, and 45 % at 100, 143, and 217 GHz, respectively; see Sect. 3.2.1 for more details). Within $\Lambda$CDM, we obtain similar $\chi^{2}$ for the fits, demonstrating that the model used in HiLLiPoP V4.2 is valid for the considered sky fractions. For the $TTTEEE$ likelihood, the $\Delta\chi^{2}$ values are lower than 100 for 29758 data points.

The other Planck likelihood using PR4 data is CamSpec, described in detail in Rosenberg et al. (2022). Although CamSpec is focused on cleaning procedures to build co-added polarization spectra rather than on modelling foreground residuals in cross-frequency spectra, we find consistent constraints at better than the $1\thinspace\sigma$ level. This gives confidence in the robustness of our cosmological constraints. Figure 15 shows the 1-d posterior distributions for the $\Lambda$CDM parameters using different sky fractions. We also compare to the posteriors obtained from Planck PR3 and with CamSpec PR4 (where we use LoLLiPoP instead of the polarized low-$\ell$ constraint from PR3 used in Rosenberg et al. 2022).
We find good consistency between the different likelihoods and between the two data sets (PR3 and PR4).

Figure 15: Posterior distributions for the cosmological parameters from PR4 for HiLLiPoP (using different sky fractions labelled L, XL, and XXL) and CamSpec, as compared to Planck 2018 (Plik PR3).

Table 4 shows the relative difference in the cosmological parameters between Planck 2018 (Planck Collaboration VI 2020) and HiLLiPoP V4.2, together with the gain in accuracy. The largest difference with respect to Planck 2018 appears for $\Omega_{\rm c}h^{2}$, for which HiLLiPoP on PR4 finds a value $1.0\thinspace\sigma$ lower. Associated with LoLLiPoP and Commander, CamSpec on PR4 also gives a lower $\Omega_{\rm c}h^{2}$, by $0.45\thinspace\sigma$. The spectral index $n_{\rm s}$ is found to be slightly higher with HiLLiPoP, by $0.7\thinspace\sigma$. As discussed in Sect. 6, we obtain a slightly higher value for the Hubble constant ($+0.7\thinspace\sigma$), with $h=0.6768\pm 0.0052$, compared to $h=0.6727\pm 0.0060$ for PR3. The amplitude of density fluctuations, $\sigma_{8}$, and the matter density, $\Omega_{\rm m}$, are lower by $0.7\thinspace\sigma$ and $1.0\thinspace\sigma$, respectively, so that $S_{8}$ is also lower by about $1\thinspace\sigma$. The error bars shrink by more than 10 %, with a noticeable gain of 20 % for the acoustic scale ($\theta_{*}$) and the amplitude of the primordial scalar spectrum on a logarithmic scale ($\log A_{\mathrm{s}}$).

Parameter | $\Delta/\sigma$ | $\Delta\sigma$
---|---|---
$\Omega_{\mathrm{b}}h^{2}$ | $-$0.69 | $-$15.4 %
$\Omega_{\mathrm{c}}h^{2}$ | $-$1.00 | $-$15.8 %
$100\theta_{\ast}$ | $-$0.03 | $-$20.1 %
$\log(10^{10}A_{\mathrm{s}})$ | $-$0.11 | $-$20.4 %
$n_{\mathrm{s}}$ | $+$0.75 | $-$11.1 %
$\tau$ | $+$0.42 | $-$22.7 %
$H_{0}$ | $+$0.65 | $-$14.9 %
$\sigma_{8}$ | $-$0.57 | $-$16.8 %

Table 4: Relative variation and improvement in the error bars between Planck 2018 and HiLLiPoP V4.2 for each cosmological parameter.

## 10 Extensions

We now discuss constraints on some extensions to the base-$\Lambda$CDM model.

### 10.1 Gravitational lensing, $A_{\mathrm{L}}$

We sample the phenomenological extension $A_{\mathrm{L}}$ in order to check the consistency of the Planck PR4 data set with the smoothing of the power spectra by weak gravitational lensing as predicted by the $\Lambda$CDM model. A mild preference for ${A_{\mathrm{L}}}>1$ was seen in the Planck PR1 data (Planck Collaboration XVI 2014) and, since the analysis of Planck PR2 data (Planck Collaboration XI 2016; Planck Collaboration XIII 2016), HiLLiPoP has provided a significantly lower $A_{\mathrm{L}}$ value than the public Planck likelihood Plik, but still slightly higher than unity. The tension was at the 2.2$\thinspace\sigma$ level for PR3 (Couchot et al. 2017c).

Figure 16: Posterior distributions for $A_{\mathrm{L}}$ using HiLLiPoP PR4.

With Planck PR4, we find results even more compatible with unity compared to previous releases. Using HiLLiPoP $TTTEEE$, we now obtain ${A_{\mathrm{L}}}=1.036\pm 0.051,$ (35) which is compatible with the $\Lambda$CDM expectation (at the $0.7\thinspace\sigma$ level). As shown in Table 5, while the results for $EE$ and $TE$ are compatible with unity, the $A_{\mathrm{L}}$ value for $TT$ is still high by 0.8$\thinspace\sigma$. Figure 16 shows posterior distributions of $A_{\mathrm{L}}$ for each of the mode-spectra and for the $TTTEEE$ combination using Planck PR4.
Likelihood | $A_{\mathrm{L}}$ | $\Delta{A_{\mathrm{L}}}$ ---|---|--- TT | $1.068\pm 0.081$ | $\phantom{-}0.84\thinspace\sigma$ TE | $0.946\pm 0.160$ | $-0.34\thinspace\sigma$ EE | $0.899\pm 0.150$ | $-0.67\thinspace\sigma$ TTTEEE | $1.036\pm 0.051$ | $\phantom{-}0.71\thinspace\sigma$ Table 5: Mean values and 68 % confidence intervals for $A_{\mathrm{L}}$. The significance of the deviation from unity is given in the last column. In Rosenberg et al. (2022), the CamSpec likelihood associated with low-$\ell$ likelihoods from Planck 2018 also showed a decrease in the ${A_{\mathrm{L}}}$ parameter in Planck PR4 data compared to PR3 data, reducing the difference from unity from 2.4$\thinspace\sigma$ to 1.7$\thinspace\sigma$. When LoLLiPoP is adopted as the low-$\ell$ polarized likelihood, instead of the low-$\ell$ likelihoods from Planck 2018, the constraint on $A_{\mathrm{L}}$ from CamSpec changed from ${A_{\mathrm{L}}}=1.095\pm 0.056$ to ${A_{\mathrm{L}}}=1.075\pm 0.058$, still a 1.3$\thinspace\sigma$ difference from unity. We compare the posteriors for Plik (PR3), CamSpec (PR4), and HiLLiPoP (PR4) in Fig. 17. Figure 17: Posterior distributions for $A_{\mathrm{L}}$ from HiLLiPoP PR4, compared to CamSpec (PR4) and Plik (PR3). Previously, when there was a preference for ${A_{\mathrm{L}}}>1$, adding ${A_{\mathrm{L}}}$ as a seventh parameter could lead to shifts in other cosmological parameters (e.g. Planck Collaboration Int. LI 2017). However, we confirm that with LoLLiPoP on PR4, the $\Lambda$CDM parameters are only affected through a very slight increase of the error bars, without significantly affecting the mean posterior values. ### 10.2 Curvature, $\Omega_{K}$ For the spatial curvature parameter, we report a significant difference with respect to Planck Collaboration VI (2020), which used PR3 and reported a mild preference for closed models (i.e. $\Omega_{K}<0$). Indeed, with HiLLiPoP V4.2, the measurements are consistent with a flat universe ($\Omega_{K}=0$) for all spectra (Fig. 18). Figure 18: Posterior distributions for $\Omega_{K}$ using HiLLiPoP PR4. As noticed in Rosenberg et al. (2022), with Planck PR4, the constraint on ${\Omega_{K}}$ is more precise and shifts toward zero, along the so-called geometrical degeneracy with $H_{0}$ (Fig. 19). Indeed, with HiLLiPoP V4.2 on PR4, the posterior is more symmetrical and the mean value of the posterior for $TTTEEE$ is $\Omega_{K}=-0.012\pm 0.010,$ (36) which is only $1.2\thinspace\sigma$ discrepant from zero. This is to be compared to ${\Omega_{K}}=-0.044_{-0.015}^{+0.018}$ obtained for Plik on PR3 (Planck Collaboration VI 2020) and ${\Omega_{K}}=-0.025_{-0.010}^{+0.013}$ obtained with CamSpec on PR4 (Rosenberg et al. 2022). As a consequence, the tail of the 2-d posterior in the $H_{0}$–$\Omega_{K}$ plane at low $H_{0}$ and negative $\Omega_{K}$ is no longer favoured. Indeed, when fitting for a non-flat Universe, the recovered value for the Hubble constant is $H_{0}=(63.03\pm 3.60)\thinspace{\rm km}\thinspace{\rm s}^{-1}\thinspace{\rm Mpc}^{-1}$, only $1.3\thinspace\sigma$ away from the constraint with fixed ${\Omega_{K}}=0$. Figure 19: Posterior distributions in the $\Omega_{\rm K}$–$H_{0}$ plane using HiLLiPoP PR4, compared to CamSpec (PR4) and Plik (PR3). ### 10.3 Effective number of relativistic species, $N_{\mathrm{eff}}$ Figure 20 shows the posteriors for $TT$, $TE$, $EE$, and their combination when we consider the $N_{\mathrm{eff}}$ extension. 
Both $TT$ and $TE$ are compatible with similar uncertainties, while $EE$ is not sensitive to $N_{\mathrm{eff}}$. The mean posterior for $TTTEEE$ is ${N_{\mathrm{eff}}}=3.08\pm 0.17.$ (37) The uncertainties are comparable to Planck 2018 results (${N_{\mathrm{eff}}}=2.92\pm 0.19$, Planck Collaboration VI 2020) with a slight shift toward higher values, closer to the theoretical expectation ${N_{\mathrm{eff}}}=3.044$ (Froustey et al. 2020; Bennett et al. 2021), which was also reported with CamSpec analysis based on PR4 data (${N_{\mathrm{eff}}}=3.00\pm 0.21$, Rosenberg et al. 2022). Figure 20: Posterior distributions for $N_{\mathrm{eff}}$ using HiLLiPoP PR4. The vertical dashed line shows the theoretical expectation (${N_{\mathrm{eff}}}=3.044$). ### 10.4 Sum of the neutrino masses, $\sum m_{\nu}$ Figure 21 shows the posterior distribution for the sum of the neutrino masses, $\sum m_{\nu}$. There is no detection of the effects of neutrino mass and we report an upper limit of ${\sum m_{\nu}}<0.40\thinspace\text{eV}\quad\text{(95\thinspace\% CL)}.$ (38) Despite the increase in sensitivity associated with PR4, the constraint is slightly weaker (the upper limit is larger) than the one reported for Planck 2018: ${\sum m_{\nu}}<0.26$ eV at 95 % CL. Our constraint is comparable to CamSpec, which gives ${\sum m_{\nu}}<0.36$ eV at 95 % CL. As explained in Couchot et al. (2017a) and Planck Collaboration VI (2020), this is directly related to the value of $A_{\mathrm{L}}$. Indeed, the correlation between $A_{\mathrm{L}}$ and $\sum m_{\nu}$ pushes the peak posterior of $\sum m_{\nu}$ toward negative values when $A_{\mathrm{L}}$ is fixed to unity; the data, however, prefer values of $A_{\mathrm{L}}$ larger than 1. With HiLLiPoP V4.2, the value of $A_{\mathrm{L}}$ reported in this work is more compatible with unity (${A_{\mathrm{L}}}=1.036\pm 0.051$, see Sect. 10.1), thus, the posterior for $\sum m_{\nu}$ is shifted to higher values, with a peak closer to zero, increasing the upper limit accordingly. Figure 21: Posterior distributions for $\sum m_{\nu}$ using HiLLiPoP PR4. Units are electronvolts. ## 11 Conclusions In this paper, we have derived cosmological constraints using CMB anisotropies from the final Planck data release (PR4). We detailed a new version of a CMB high-$\ell$ likelihood based on cross-power spectra computed from the PR4 maps. This version of HiLLiPoP, labelled V4.2, uses more sky (75 %) and a wider range of multipoles. Our likelihood makes use of physically-motivated models for foreground-emission residuals. Using only priors on the foreground spectral energy distributions, we found amplitudes for residuals consistent with expectations. Moreover, we have shown that the impact of this modelling on cosmological $\Lambda$CDM parameters is negligible. Combined with the low-$\ell$ $EE$ likelihood LoLLiPoP, we derived constraints on $\Lambda$CDM and find good consistency with Planck 2018 results (based on PR3) with better goodness-of-fit and higher sensitivity (from 10 % to 20 %, depending on the parameters). In particular, we now constrain the reionization optical depth at the 10 % level. We found a value for the Hubble constant consistent with previous CMB measurements and thus still in tension with distance-ladder results. We also obtained a lower value for $S_{8}$, alleviating the CMB versus large-scale structure tension to 1.6$\thinspace\sigma$. We found good consistency with the other published CMB likelihood analysis based on PR4, CamSpec (Rosenberg et al. 
2022), which relies on a procedure to clean power spectra prior to constructing the likelihood. The consistency of the results using two different approaches reinforces the robustness of the results obtained with Planck data. We also provided constraints on some extensions to $\Lambda$CDM, including the lensing amplitude $A_{\mathrm{L}}$, the curvature $\Omega_{K}$, the effective number of relativistic species $N_{\mathrm{eff}}$, and the sum of the neutrino masses $\sum m_{\nu}$. For both $A_{\mathrm{L}}$ and $\Omega_{K}$, our results show a significant reduction of the so-called “tensions” with standard $\Lambda$CDM, together with a reduction of the uncertainties. The final constraints indeed are fully compatible with $\Lambda$CDM predictions. In particular, with the new version of the likelihood presented in this work, we report ${A_{\mathrm{L}}}=1.036\pm 0.051$, entirely compatible with the $\Lambda$CDM prediction. The better agreement is explained both by the improvement of the Planck maps thanks to the NPIPE processing (with less noise and better systematic control in polarization) and the use of the HiLLiPoP likelihood. ###### Acknowledgements. Planck is a project of the European Space Agency (ESA) with instruments provided by two scientific consortia funded by ESA member states and led by Principal Investigators from France and Italy, telescope reflectors provided through a collaboration between ESA and a scientific consortium led and funded by Denmark, and additional contributions from NASA (USA). Some of the results in this paper have been derived using the HEALPix package. We acknowledge use of the following packages: xQML, for the computation of large-scale power spectra (gitlab.in2p3.fr/xQML); Xpol, for the computation of large-scale power spectra (gitlab.in2p3.fr/tristram/Xpol); Cobaya, for the sampling of the likelihoods (github.com/CobayaSampler); and CLASS (github.com/lesgourg/class_public) and CAMB (github.com/cmbant/CAMB) for calculating power spectra. We gratefully acknowledge support from the CNRS/IN2P3 Computing Center for providing computing and data-processing resources needed for this work. The Planck PR4 data are publicly available on the Planck Legacy Archive (pla.esac.esa.int). Both likelihoods LoLLiPoP and HiLLiPoP based on PR4 are publicly available on github (github.com/planck- npipe) as external likelihoods for Cobaya. ## References * Abbott et al. (2022) Abbott, T. M. C., Aguena, M., Alarcon, A., et al., Dark Energy Survey Year 3 results: Cosmological constraints from galaxy clustering and weak lensing. 2022, Phys. Rev. D, 105, 023520, 2105.13549 * Abdalla et al. (2022) Abdalla, E., Abellán, G. F., Aboubrahim, A., et al., Cosmology intertwined: A review of the particle physics, astrophysics, and cosmology associated with the cosmological tensions and anomalies. 2022, Journal of High Energy Astrophysics, 34, 49, 2203.06142 * Addison et al. (2012) Addison, G. E., Dunkley, J., & Spergel, D. N., Modelling the correlation between the thermal Sunyaev Zel’dovich effect and the cosmic infrared background. 2012, MNRAS, 427, 1741, 1204.5927 * Ade et al. (2021) Ade, P. A. R., Ahmed, Z., Amiri, M., et al., Improved Constraints on Primordial Gravitational Waves using Planck, WMAP, and BICEP/Keck Observations through the 2018 Observing Season. 2021, Phys. Rev. Lett., 127, 151301 * Battaglia et al. (2013) Battaglia, N., Natarajan, A., Trac, H., Cen, R., & Loeb, A., Reionization on Large Scales. III. 
Predictions for Low-l Cosmic Microwave Background Polarization and High-l Kinetic Sunyaev-Zel’dovich Observables. 2013, ApJ, 776, 83, 1211.2832 * Bennett et al. (2021) Bennett, J. J., Buldgen, G., de Salas, P. F., et al., Towards a precision calculation of the effective number of neutrinos N_eff in the Standard Model. Part II. Neutrino decoupling in the presence of flavour oscillations and finite-temperature QED. 2021, J. Cosmology Astropart. Phys., 2021, 073, 2012.02726 * Béthermin et al. (2012) Béthermin, M., Daddi, E., Magdis, G., et al., A Unified Empirical Model for Infrared Galaxy Counts Based on the Observed Physical Evolution of Distant Galaxies. 2012, ApJ, 757, L23, 1208.6512 * Blas et al. (2011) Blas, D., Lesgourgues, J., & Tram, T., The Cosmic Linear Anisotropy Solving System (CLASS). Part II: Approximation schemes. 2011, J. Cosmology Astropart. Phys., 2011, 034, 1104.2933 * Brown et al. (2005) Brown, M. L., Castro, P. G., & Taylor, A. N., Cosmic microwave background temperature and polarization pseudo-Cl estimators and covariances. 2005, MNRAS, 360, 1262, astro-ph/0410394 * Carron (2013) Carron, J., On the assumption of Gaussianity for cosmological two-point statistics and parameter dependent covariance matrices. 2013, A&A, 551, A88, 1204.4724 * Choi et al. (2020) Choi, S. K., Hasselfield, M., Ho, S.-P. P., et al., The Atacama Cosmology Telescope: a measurement of the Cosmic Microwave Background power spectra at 98 and 150 GHz. 2020, J. Cosmology Astropart. Phys., 2020, 045, 2007.07289 * Couchot et al. (2017a) Couchot, F., Henrot-Versillé, S., Perdereau, O., et al., Cosmological constraints on the neutrino mass including systematic uncertainties. 2017a, A&A, 606, A104, 1703.10829 * Couchot et al. (2017b) Couchot, F., Henrot-Versillé, S., Perdereau, O., et al., Cosmology with the cosmic microwave background temperature-polarization correlation. 2017b, A&A, 602, A41, 1609.09730 * Couchot et al. (2017c) Couchot, F., Henrot-Versillé, S., Perdereau, O., et al., Relieving tensions related to the lensing of the cosmic microwave background temperature power spectra. 2017c, A&A, 597, A126, 1510.07600 * Efstathiou (2006) Efstathiou, G. P., Hybrid estimation of cosmic microwave background polarization power spectra. 2006, MNRAS, 370, 343 * Eriksen et al. (2008) Eriksen, H. K., Jewell, J. B., Dickinson, C., et al., Joint Bayesian Component Separation and CMB Power Spectrum Estimation. 2008, ApJ, 676, 10, 0709.1058 * Froustey et al. (2020) Froustey, J., Pitrou, C., & Volpe, M. C., Neutrino decoupling including flavour oscillations and primordial nucleosynthesis. 2020, J. Cosmology Astropart. Phys., 2020, 015, 2008.01074 * Górski et al. (2005) Górski, K. M., Hivon, E., Banday, A. J., et al., HEALPix: A Framework for High-Resolution Discretization and Fast Analysis of Data Distributed on the Sphere. 2005, ApJ, 622, 759, astro-ph/0409513 * Hamimeche & Lewis (2008) Hamimeche, S. & Lewis, A., Likelihood analysis of CMB temperature and polarization power spectra. 2008, Phys. Rev. D, 77, 103013, 0801.0554 * Hivon et al. (2002) Hivon, E., Górski, K. M., Netterfield, C. B., et al., MASTER of the Cosmic Microwave Background Anisotropy Power Spectrum: A Fast Method for Statistical Analysis of Large and Complex Cosmic Microwave Background Data Sets. 2002, ApJ, 567, 2, astro-ph/0105302 * Hivon et al. (2017) Hivon, E., Mottet, S., & Ponthieu, N., QuickPol: Fast calculation of effective beam matrices for CMB polarization. 2017, A&A, 598, A25, 1608.08833 * Howlett et al. 
(2012) Howlett, C., Lewis, A., Hall, A., & Challinor, A., CMB power spectrum parameter degeneracies in the era of precision cosmology. 2012, J. Cosmology Astropart. Phys., 2012, 027, 1201.3654 * Jeong et al. (2014) Jeong, D., Chluba, J., Dai, L., Kamionkowski, M., & Wang, X., Effect of aberration on partial-sky measurements of the cosmic microwave background temperature power spectrum. 2014, Phys. Rev. D, 89, 023003, 1309.2285 * Lagache et al. (2020) Lagache, G., Béthermin, M., Montier, L., Serra, P., & Tucci, M., Impact of polarised extragalactic sources on the measurement of CMB B-mode anisotropies. 2020, A&A, 642, A232, 1911.09466 * Lesgourgues (2011) Lesgourgues, J., The Cosmic Linear Anisotropy Solving System (CLASS) I: Overview. 2011, arXiv e-prints, arXiv:1104.2932, 1104.2932 * Lewis et al. (2000) Lewis, A., Challinor, A., & Lasenby, A., Efficient Computation of Cosmic Microwave Background Anisotropies in Closed Friedmann-Robertson-Walker Models. 2000, ApJ, 538, 473, astro-ph/9911177 * Mangilli et al. (2015) Mangilli, A., Plaszczynski, S., & Tristram, M., Large-scale cosmic microwave background temperature and polarization cross-spectra likelihoods. 2015, MNRAS, 453, 3174, 1503.01347 * Manzotti et al. (2014) Manzotti, A., Hu, W., & Benoit-Lévy, A., Super-sample CMB lensing. 2014, Phys. Rev. D, 90, 023003, 1401.7992 * Peebles (1973) Peebles, P. J. E., Statistical Analysis of Catalogs of Extragalactic Objects. I. Theory. 1973, ApJ, 185, 413 * Planck Collaboration XIII (2011) Planck Collaboration XIII, Planck early results. XIII. Statistical properties of extragalactic radio sources in the Planck Early Release Compact Source Catalogue. 2011, A&A, 536, A13, 1101.2044 * Planck Collaboration IX (2014) Planck Collaboration IX, Planck 2013 results. IX. HFI spectral response. 2014, A&A, 571, A9, 1303.5070 * Planck Collaboration XV (2014) Planck Collaboration XV, Planck 2013 results. XV. CMB power spectra and likelihood. 2014, A&A, 571, A15, 1303.5075 * Planck Collaboration XVI (2014) Planck Collaboration XVI, Planck 2013 results. XVI. Cosmological parameters. 2014, A&A, 571, A16, 1303.5076 * Planck Collaboration XXX (2014) Planck Collaboration XXX, Planck 2013 results. XXX. Cosmic infrared background measurements and implications for star formation. 2014, A&A, 571, A30, 1309.0382 * Planck Collaboration XI (2016) Planck Collaboration XI, Planck 2015 results. XI. CMB power spectra, likelihoods, and robustness of parameters. 2016, A&A, 594, A11, 1507.02704 * Planck Collaboration XIII (2016) Planck Collaboration XIII, Planck 2015 results. XIII. Cosmological parameters. 2016, A&A, 594, A13, 1502.01589 * Planck Collaboration XXII (2016) Planck Collaboration XXII, Planck 2015 results. XXII. A map of the thermal Sunyaev-Zeldovich effect. 2016, A&A, 594, A22, 1502.01596 * Planck Collaboration III (2020) Planck Collaboration III, Planck 2018 results. III. High Frequency Instrument data processing. 2020, A&A, 641, A3, 1807.06207 * Planck Collaboration IV (2020) Planck Collaboration IV, Planck 2018 results. IV. Diffuse component separation. 2020, A&A, 641, A4, 1807.06208 * Planck Collaboration V (2020) Planck Collaboration V, Planck 2018 results. V. Power spectra and likelihoods. 2020, A&A, 641, A5, 1907.12875 * Planck Collaboration VI (2020) Planck Collaboration VI, Planck 2018 results. VI. Cosmological parameters. 2020, A&A, 641, A6, 1807.06209 * Planck Collaboration Int. VII (2013) Planck Collaboration Int. VII, Planck intermediate results. VII. 
Statistical properties of infrared and radio extragalactic sources from the Planck Early Release Compact Source Catalogue at frequencies between 100 and 857 GHz. 2013, A&A, 550, A133, 1207.4706 * Planck Collaboration Int. XIX (2015) Planck Collaboration Int. XIX, Planck intermediate results. XIX. An overview of the polarized thermal emission from Galactic dust. 2015, A&A, 576, A104, 1405.0871 * Planck Collaboration Int. XXII (2015) Planck Collaboration Int. XXII, Planck intermediate results. XXII. Frequency dependence of thermal emission from Galactic dust in intensity and polarization. 2015, A&A, 576, A107, 1405.0874 * Planck Collaboration Int. XXX (2016) Planck Collaboration Int. XXX, Planck intermediate results. XXX. The angular power spectrum of polarized dust emission at intermediate and high Galactic latitudes. 2016, A&A, 586, A133, 1409.5738 * Planck Collaboration Int. XLVII (2016) Planck Collaboration Int. XLVII, Planck intermediate results. XLVII. Constraints on reionization history. 2016, A&A, 596, A108, 1605.03507 * Planck Collaboration Int. LI (2017) Planck Collaboration Int. LI, Planck intermediate results. LI. Features in the cosmic microwave background temperature power spectrum and shifts in cosmological parameters. 2017, A&A, 607, A95, 1608.02487 * Planck Collaboration Int. LVII (2020) Planck Collaboration Int. LVII, Planck intermediate results. LVII. NPIPE: Joint Planck LFI and HFI data processing. 2020, A&A, 643, A42, 2007.04997 * Reichardt et al. (2021) Reichardt, C. L., Patil, S., Ade, P. A. R., et al., An Improved Measurement of the Secondary Cosmic Microwave Background Anisotropies from the SPT-SZ + SPTpol Surveys. 2021, ApJ, 908, 199, 2002.06197 * Rosenberg et al. (2022) Rosenberg, E., Gratton, S., & Efstathiou, G., CMB power spectra and cosmological parameters from Planck PR4 with CamSpec. 2022, MNRAS, 517, 4620, 2205.10869 * Sellentin & Heavens (2016) Sellentin, E. & Heavens, A. F., Parameter inference with estimated covariance matrices. 2016, MNRAS, 456, L132, 1511.05969 * Shaw et al. (2012) Shaw, L. D., Rudd, D. H., & Nagai, D., Deconstructing the Kinetic SZ Power Spectrum. 2012, ApJ, 756, 15, 1109.0553 * Tanimura et al. (2022) Tanimura, H., Douspis, M., Aghanim, N., & Salvati, L., Constraining cosmology with a new all-sky Compton parameter map from the Planck PR4 data. 2022, MNRAS, 509, 300, 2110.08880 * Tegmark & de Oliveira-Costa (2001) Tegmark, M. & de Oliveira-Costa, A., How to measure CMB polarization power spectra without losing information. 2001, Phys. Rev. D, 64, 063001, arXiv:astro-ph/0012120 * Tristram et al. (2022) Tristram, M., Banday, A. J., Górski, K. M., et al., Improved limits on the tensor-to-scalar ratio using BICEP and Planck data. 2022, Phys. Rev. D, 105, 083524 * Tristram et al. (2021) Tristram, M., Banday, A. J., Górski, K. M., et al., Planck constraints on the tensor-to-scalar ratio. 2021, A&A, 647, A128, 2010.01139 * Tristram et al. (2005) Tristram, M., Macías-Pérez, J. F., Renault, C., & Santos, D., XSPECT, estimation of the angular power spectrum by computing cross-power spectra with analytical error bars. 2005, MNRAS, 358, 833, astro-ph/0405575 * Tucci et al. (2004) Tucci, M., Martínez-González, E., Toffolatti, L., González-Nuevo, J., & De Zotti, G., Predictions on the high-frequency polarization properties of extragalactic radio sources and implications for polarization measurements of the cosmic microwave background. 2004, MNRAS, 349, 1267, astro-ph/0307073 * Tucci et al. 
(2011) Tucci, M., Toffolatti, L., de Zotti, G., & Martínez-González, E., High-frequency predictions for number counts and spectral properties of extragalactic radio sources. New evidence of a break at mm wavelengths in spectra of bright blazar sources. 2011, A&A, 533, 57 * Vanneste et al. (2018) Vanneste, S., Henrot-Versillé, S., Louis, T., & Tristram, M., Quadratic estimator for CMB cross-correlation. 2018, Phys. Rev. D, 98, 103526, 1807.02484 * Varshalovich et al. (1988) Varshalovich, D. A., Moskalev, A. N., & Khersonskii, V. K. 1988, Quantum Theory of Angular Momentum (Singapore: World Scientific) ## Appendix A Foregrounds and instrumental parameters Here we describe the “nuisance” parameters relating to foreground emission components and the instrument. They are listed in Table 6 together with their prior and the recovered best-fit value for the combination $TTTEEE$. Name | Definition | Prior | Mean ---|---|---|--- $A_{\mathrm{planck}}$ | Absolute calibration | $1.0000\pm 0.0025$ | $0.9997\pm 0.0029$ $c_{\mathrm{100A}}$ | Map recalibration (100A) | [0.9,1.1] | $1.003\pm 0.007$ $c_{\mathrm{100B}}$ | Map recalibration (100B) | [0.9,1.1] | $1.004\pm 0.007$ $c_{\mathrm{143A}}$ | Map recalibration (143A) | 1.0 (fixed) | $c_{\mathrm{143B}}$ | Map recalibration (143B) | [0.9,1.1] | $1.004\pm 0.006$ $c_{\mathrm{217A}}$ | Map recalibration (217A) | [0.9,1.1] | $1.001\pm 0.008$ $c_{\mathrm{217B}}$ | Map recalibration (217B) | [0.9,1.1] | $1.001\pm 0.008$ $\eta_{\rm 100-A}$ | Cross-polarization (100-A) | $1.000$ (fixed) | $\eta_{\rm 100-B}$ | Cross-polarization (100-B) | $1.000$ (fixed) | $\eta_{\rm 143-A}$ | Cross-polarization (143-A) | $1.000$ (fixed) | $\eta_{\rm 143-B}$ | Cross-polarization (143-B) | $1.000$ (fixed) | $\eta_{\rm 217-A}$ | Cross-polarization (217-A) | $0.975$ (fixed) | $\eta_{\rm 217-B}$ | Cross-polarization (217-B) | $0.975$ (fixed) | $c_{\mathrm{dust}}^{T}$ | Rescaling for Galactic dust in temperature | $1.0\pm 0.1$ | $1.08\pm 0.03$ $c_{\mathrm{dust}}^{P}$ | Rescaling for Galactic dust in polarization | $1.0\pm 0.1$ | $1.20\pm 0.03$ $A_{\mathrm{radio}}$ | Amplitude for radio sources | [0,150] | $63.3\pm 5.0$ $A_{\mathrm{IR}}$ | Amplitude for IR sources | [0,150] | $6.1\pm 0.5$ $A_{\mathrm{CIB}}$ | Amplitude for the CIB | [0,20] | $1.04\pm 0.31$ $A_{\mathrm{tSZ}}$ | Amplitude for the tSZ effect | [0,50] | $6.12\pm 1.48$ $A_{\mathrm{kSZ}}$ | Amplitude for the kSZ effect | [0,50] | $<7.9$ $\xi_{\mathrm{SZ\times CIB}}$ | Cross-correlation tSZ$\times$CIB | [$-$1,1] | $0.45\pm 0.31$ $\beta_{\mathrm{dust}}^{T}$ | Spectral index for dust in temperature | $1.51\pm 0.01$ | $1.51\pm 0.01$ $\beta_{\mathrm{dust}}^{P}$ | Spectral index for dust in polarization | $1.59\pm 0.02$ | $1.59\pm 0.02$ $\beta_{\mathrm{CIB}}$ | Spectral index for CIB | $1.75\pm 0.06$ | $1.85\pm 0.06$ $\beta_{\mathrm{radio}}$ | Spectral index for radio sources | $-$0.8 | Table 6: Instrumental and foreground parameters for the HiLLiPoP likelihood with their respective priors. Amplitudes refer to $D_{\ell}=\ell(\ell+1)C_{\ell}/2\pi$ for $\ell=3000$ at 143 GHz, except for dust coefficients, $c_{\mathrm{dust}}$, for which the priors are found by rescaling the dust power spectrum at 353 GHz. ## Appendix B Best-fit model components Here we present our results for the best-fitting model components for each cross-power spectrum. These are shown in Fig. 22 and the corresponding $\chi^{2}$ values are given in Table 7. 
Figure 22: Best-fit model for each cross-frequency power spectrum in temperature, including emission from CMB, dust, tSZ, kSZ, CIB, SZ$\times$CIB, and Poisson-noise from radio sources and dusty galaxies. Negative components are shown as dashed lines. Vertical black dashed lines show the range of multipoles considered in HiLLiPoP V4.2. The bottom panels show the residuals normalized by the error bars. Data are binned with $\Delta\ell=20$ for this plot. Cross-spectrum | $TT$ | $EE$ | $TE$ | $ET$ ---|---|---|---|--- | $\chi^{2}/n_{\rm d}$ | $\delta\sigma(\chi^{2})$ | $\chi^{2}/n_{\rm d}$ | $\delta\sigma(\chi^{2})$ | $\chi^{2}/n_{\rm d}$ | $\delta\sigma(\chi^{2})$ | $\chi^{2}/n_{\rm d}$ | $\delta\sigma(\chi^{2})$ 100A$\times$100B | 1590.0 / 1471 | 2.19 | 1079.1 / 1101 | $-$0.47 | 1597.4 / 1471 | $-$2.33 | 1450.1 / 1471 | $-$0.39 100A$\times$143A | 1616.5 / 1471 | 2.68 | 1551.5 / 1471 | $-$1.48 | 1564.5 / 1471 | $-$1.72 | 1490.8 / 1471 | $-$0.37 100A$\times$143B | 1605.1 / 1471 | 2.47 | 1431.3 / 1471 | $-$0.73 | 1396.4 / 1471 | $-$1.38 | 1520.6 / 1471 | $-$0.92 100B$\times$143A | 1596.3 / 1471 | 2.31 | 1485.7 / 1471 | $-$0.27 | 1535.2 / 1471 | $-$1.18 | 1615.8 / 1471 | $-$2.67 100B$\times$143B | 1576.5 / 1471 | 1.94 | 1495.5 / 1471 | $-$0.45 | 1466.9 / 1471 | $-$0.08 | 1614.6 / 1471 | $-$2.65 100A$\times$217A | 1379.1 / 1251 | 2.56 | 1331.5 / 1251 | $-$1.61 | 1478.0 / 1401 | $-$1.45 | 1432.3 / 1401 | $-$0.59 100A$\times$217B | 1364.5 / 1251 | 2.27 | 1278.4 / 1251 | $-$0.55 | 1481.3 / 1401 | $-$1.52 | 1445.3 / 1401 | $-$0.84 100B$\times$217A | 1336.8 / 1251 | 1.71 | 1283.0 / 1251 | $-$0.64 | 1507.3 / 1401 | $-$2.01 | 1545.9 / 1401 | $-$2.74 100B$\times$217B | 1335.0 / 1251 | 1.68 | 1218.3 / 1251 | $-$0.65 | 1466.8 / 1401 | $-$1.24 | 1505.7 / 1401 | $-$1.98 143A$\times$143B | 2108.5 / 1951 | 2.52 | 1995.3 / 1971 | $-$0.39 | 2014.7 / 1971 | $-$0.70 | 1972.1 / 1971 | $-$0.02 143A$\times$217A | 2324.1 / 2251 | 1.09 | 1647.4 / 1751 | $-$1.75 | 1847.9 / 1801 | $-$0.78 | 1868.7 / 1801 | $-$1.13 143A$\times$217B | 2327.3 / 2251 | 1.14 | 1853.6 / 1751 | $-$1.73 | 1746.9 / 1801 | $-$0.90 | 1898.1 / 1801 | $-$1.62 143B$\times$217A | 2351.2 / 2251 | 1.49 | 1725.3 / 1751 | $-$0.43 | 1812.0 / 1801 | $-$0.18 | 1835.4 / 1801 | $-$0.57 143B$\times$217B | 2321.0 / 2251 | 1.04 | 1799.5 / 1751 | $-$0.82 | 1696.4 / 1801 | $-$1.74 | 1862.6 / 1801 | $-$1.03 217A$\times$217B | 2283.6 / 2251 | 0.49 | 1732.8 / 1751 | $-$0.31 | 1625.4 / 1701 | $-$1.30 | 1725.8 / 1701 | $-$0.43 Table 7: $\chi^{2}$ values for each cross-spectrum compared to the size of the data vector ($n_{\rm d}$).
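A useful sanity check on Table 7 is that its last column is consistent with quoting the $\chi^{2}$ excess in units of the expected scatter of a $\chi^{2}$ variable with $n_{\rm d}$ degrees of freedom, i.e. $\delta\sigma(\chi^{2})=(\chi^{2}-n_{\rm d})/\sqrt{2n_{\rm d}}$. The caption does not define the column explicitly, so the sketch below is an inferred reconstruction of that convention rather than a statement of the pipeline's definition; it reproduces the tabulated values.

```python
import math

def chi2_deviation(chi2, n_d):
    """Deviation of chi2 from its expectation n_d, in units of sqrt(2*n_d).

    Assumed convention for the delta-sigma(chi2) column of Table 7.
    """
    return (chi2 - n_d) / math.sqrt(2.0 * n_d)

# Spot checks against the TT column of Table 7:
print(round(chi2_deviation(1590.0, 1471), 2))  # 2.19 for 100A x 100B
print(round(chi2_deviation(2283.6, 2251), 2))  # 0.49 for 217A x 217B
```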
# Learning user’s confidence for active learning

Devis Tuia, Jordi Muñoz-Marí

Manuscript received XXXX; This work has been partly supported by the Swiss National Science Foundation (grant PZ00P2-136827) and by the Spanish Ministry of Science and Innovation under the projects CICYT-FEDER TEC2009-13696, AYA2008-05965-C04-03, and CSD2007-00018. DT is with the Laboratoire des Systèmes d’Information Géographique (LaSIG), Lausanne Institute of Technology (EPFL), Switzerland. E-mail: <EMAIL_ADDRESS>http://devis.tuia.googlepages.com, Phone: +41-216935785, Fax: +41-216935790. JMM is with the Image Processing Laboratory (IPL), Universitat de València, València, Spain. E-mail: <EMAIL_ADDRESS>http://isp.uv.es, Phone: +34-963544021, Fax: +34-963544353.

###### Abstract

This is the pre-acceptance version; to read the final version published in 2013 in the IEEE Transactions on Geoscience and Remote Sensing (IEEE TGRS), please go to: 10.1109/TGRS.2012.2203605

In this paper, we study the applicability of active learning in operative scenarios: more particularly, we consider the well-known contradiction between the active learning heuristics, which rank the pixels according to their uncertainty, and the user’s confidence in labeling, which is related to both the homogeneity of the pixel context and the user’s knowledge of the scene. We propose a filtering scheme based on a classifier that learns the confidence of the user in labeling, thus minimizing the queries where the user would not be able to provide a class for the pixel. The capacity of a model to learn the user’s confidence is studied in detail, also showing the effect of resolution in such a learning task. Experiments on two QuickBird images of different resolutions (with and without pansharpening) and considering committees of users prove the efficiency of the proposed filtering scheme, which maximizes the number of useful queries with respect to traditional active learning.

###### Index Terms:

Active learning, photointerpretation, user’s confidence, bad states, VHR imagery, SVM.

## I Introduction

The advent of remote sensing imagery has opened a wide range of possibilities for surveying and analyzing the processes occurring on the surface of the Earth, thus allowing significant advances in the monitoring of agricultural [1, 2] or urban processes [3]. Among all the products retrieved from very high resolution (VHR) imagery, classification maps describing landuse remain the most common. Several methods have been proposed to perform the classification task, but until now supervised methods remain the most successful approaches in the remote sensing community [4, 5]. However, these approaches rely on a set of pixels for which the class is known and that are used to train the classifier: the training set. The representativity of this set is crucial for the success of the classification [6, 7].

There are two major ways of obtaining a training set: one is to organize in-situ campaigns, where the landuse represented by the pixel is assessed and georeferenced by teams in the field; the other is to proceed by photointerpretation, i.e., having a human operator define labeled polygons on screen. Photointerpretation is particularly successful when dealing with VHR images, since the objects are recognizable on screen.
In this case, two problems arise: first, many redundant pixels are added to the training set and, second, the pixels added are not necessarily the most relevant for the classifier, but those that were most convenient (for a variety of reasons) in the photointerpretation phase. This last point is crucial, since the photointerpreter tends to label easily-recognizable pixels and to avoid areas of high variance/contrast (unless this variance is the specificity of the class) or underrepresented landuse classes.

To make photointerpretation efficient, active learning methods have recently been proposed in the community (a review in [8]): with active learning, the model and the user interact, the former ranking unlabeled pixels by their classification uncertainty and the latter providing the labels of the highest-ranked ones. After retraining with these difficult pixels (now labeled), the model is expected to improve its performance greatly. Several ranking criteria (heuristics) have been proposed in the remote sensing literature: some use committees of machines working on subsets of training pixels [9] or of input features [10, 11], others use the SVM decision function [12, 13, 14], posterior probabilities [15, 16, 17] or cluster coherence [18, 19] as a criterion to rank pixels. Questions of batch diversity [9, 20], inter-iteration diversity [21] and inter-dataset adaptation [16, 22] have also been considered. All these studies proved the efficiency of active learning heuristics in querying the most informative and diverse pixels in a pool of possible candidates.

Despite the theoretical appeal of this solution, the constraints of photointerpretation are often contradictory to the common active learning setting: while the former are driven by the user’s capacity to recognize the objects on the surface, the latter ranks the pixels by their uncertainty, i.e., the complexity and mixture of their signature. As a consequence, the user is constantly required to label the samples with the highest uncertainty, which is a very complex (and often unfeasible) task even for a trained operator. Figure 1 illustrates this principle for a 2.4 m QuickBird image: frequently, the pixels queried by an active learning heuristic are situated on the borders between objects, in areas which are not homogeneous, between several classes or in shadowed areas.

Figure 1: Six examples of pixels (in the white circles) selected to be labeled by a standard active learning criterion. Most of them are placed in areas under shadows, or at the boundary between several classes.

In all the recent active learning literature (see above), this problem was avoided since, in order to avoid tedious photointerpretation, a pre-labeled set of candidates was provided and the human user was replaced by the a-priori known labels. This produces two undesired effects: first, all the borders between the objects were avoided, since the pre-labeled set is usually defined by photointerpretation and does not cover the whole of the image; and second, the user was considered infallible, in the sense that he could always give the correct answer. Reality, however, is often very different, and even an experienced operator can have difficulties in labeling the pixels returned by an active learning heuristic.

In this paper, we propose a solution to maximize the chances of querying pixels that the user can label correctly. To do so, we consider the idea of _learning the confidence of the user in a photointerpreting task_.
Each time a user is unable to label a query, he gives information about a query he does not want to encounter again in the process, i.e., a bad state [23]. Avoiding bad states through estimation of the confidence of the oracle performing the task has been studied in robotics [24, 25, 26] and for time-evolving queries in policy evaluation [27]. To avoid bad states, we propose to train a second classifier learning to separate valid states (pixels that the user can label) from bad states. Using the confidence of this classifier, we build a mask for the active learning heuristic, which then avoids queries where the model learnt that the user was unable to give an answer. In short, we introduce the concept of learning the confidence of the user to avoid bad states when performing active learning while he/she is photointerpreting.

We apply the proposed technique to two VHR images of urban areas. In both cases, the proposed active learning system allows us to reconstruct the scene with a minimal number of queries, while also minimizing the number of bad states. This means that, for the same effort on the part of the user, the learning curve is steeper and all the queries presented are answered and are relevant for the model. Finally, the proposed algorithm also returns a confidence map that shows where the user would be capable of providing an answer.

The remainder of the paper is organized as follows: Section II illustrates the proposed methodology. Section III describes the data considered and the experimental setup. Section IV presents the results and Section V gives the conclusions of the work.

## II Active learning with user’s confidence

This section presents the proposed Active Learning with User’s Confidence (AL-UC) method. We first review the standard active learning framework and then present the strategy used to avoid bad states.

### II-A Active learning and uncertainty sampling

Active learning [28, 29] is a way to address the problem of ranking a set of unlabeled pixels, $U$, according to a score providing information about their classification uncertainty. Starting at iteration $\epsilon=1$ with a training set composed of $l$ labeled pixels, $X^{\epsilon}=\\{{\mathbf{x}}_{i},y_{i}\\}_{i=1}^{l}$, a supervised model is trained and the pixels in $U^{\epsilon}$ are ranked according to a heuristic accounting for the information content carried by every unlabeled pixel for the model at the current iteration $\epsilon$. The pixels related to maximal uncertainty are presented to an oracle (a photointerpreter in our case), who labels them. The $m$ newly labeled pixels form a batch $S^{\epsilon}=\\{{\mathbf{x}}_{j},y_{j}\\}_{j=1}^{m}$ that is added to the training set ($X^{\epsilon+1}=X^{\epsilon}\cup S^{\epsilon}$) and removed from the unlabeled set of candidates ($U^{\epsilon+1}=U^{\epsilon}\setminus S^{\epsilon}$).

As stated in the introduction, many heuristics exist to perform the ranking of the candidates. In this paper, we used two different heuristics:

* - The Multi-Class Level Uncertainty (MCLU) criterion proposed in [20], which is a state-of-the-art criterion for uncertainty sampling. This criterion, based on the SVM decision function [30], ranks the pixels by confronting the outputs of the two most confident classes in a One-Against-All setting.
For a given iteration $\epsilon$ and $\Omega$ possible classes, the most uncertain pixel is the one for which $\displaystyle\hat{{\mathbf{x}}}^{\text{MCLU}}=\arg\min_{{\mathbf{x}}_{i}\in U}\Big{\\{}f({\mathbf{x}}_{i})^{\text{MC}}\Big{\\}}$ (1) $\displaystyle\text{where}\qquad f({\mathbf{x}}_{i})^{\text{MC}}=\max_{\omega\in\Omega}|f({\mathbf{x}}_{i},\omega)|-\max_{\omega\in\Omega\backslash\omega^{+}}|f({\mathbf{x}}_{i},\omega)|$ (2) where $\omega^{+}$ is the class showing maximal confidence, i.e. the argument of the first term of Eq. (2). A high value of this criterion corresponds to samples assigned with high certainty to the most confident class, while a small value represents unreliable classification.

* - The Entropy-Query-by-Bagging (EQB) criterion proposed in [9]. This criterion ranks the candidates using a committee of classifiers, each trained with a subset of the training data $X^{\epsilon}$. Pixels related to maximal entropy in the predictions given by the committee ($H^{\text{BAG}}$) are retained. In this paper, we use the normalized version of the heuristic ($nEQB$) [8] $\hat{{\mathbf{x}}}^{\text{\emph{n}EQB}}=\arg\max_{{\mathbf{x}}_{i}\in U}\Big{\\{}\frac{H^{\text{BAG}}({\mathbf{x}}_{i})}{\log(N_{i})}\Big{\\}}$ (3) where $\displaystyle H^{\text{BAG}}({\mathbf{x}}_{i})=-\sum_{\omega=1}^{N_{i}}p^{\text{BAG}}(y_{i}^{*}=\omega|{\mathbf{x}}_{i})\log\left[p^{\text{BAG}}(y^{*}_{i}=\omega|{\mathbf{x}}_{i})\right]$ (4) $\displaystyle\text{with}\qquad p^{\text{BAG}}(y^{*}_{i}=\omega|{\mathbf{x}}_{i})=\frac{\sum_{m=1}^{k}\delta(y_{i,m}^{*},\omega)}{\sum_{m=1}^{k}\sum_{j=1}^{N_{i}}\delta(y_{i,m}^{*},\omega_{j})}$ Here $N_{i}$ is the number of classes predicted for pixel ${\mathbf{x}}_{i}$ by the committee, with $1\leq N_{i}\leq N$, $N$ is the total number of classes, and $\delta$ is a function returning $1$ if the class predicted by the $m$-th member of the committee is $\omega_{j}$ and $0$ otherwise.

Once the ranking of the $U$ set is provided, the user is asked to label the pixels minimizing the criterion in Eq. (1) or maximizing the one in Eq. (3), respectively. This paper only considers these two heuristics because i) heuristics based on pixel uncertainty produce similar results [8] and ii) including a criterion on pixel diversity would make the queries more effective [20], but would not change the conclusions of this study. Therefore, we limit the experimentation to these two heuristics.
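To make the two heuristics concrete, the sketch below (a simplified Python illustration with hypothetical array names, not the implementation used in the experiments) evaluates the MCLU score of Eq. (2) from the one-against-all decision values and the nEQB score of Eqs. (3)–(4) from the class labels predicted by a committee of $k$ classifiers.

```python
import numpy as np

def mclu(decision_values):
    """MCLU criterion of Eq. (2): difference between the two largest
    |f(x, omega)| over the one-against-all outputs.  Small values = uncertain.

    decision_values: array of shape (n_candidates, n_classes).
    """
    absval = np.abs(decision_values)
    top2 = np.sort(absval, axis=1)[:, -2:]        # two largest values per candidate
    return top2[:, 1] - top2[:, 0]

def neqb(committee_predictions):
    """Normalized entropy query-by-bagging, Eqs. (3)-(4).  Large values = uncertain.

    committee_predictions: array of shape (n_candidates, k) holding the class
    predicted by each of the k committee members for every candidate.
    """
    scores = np.zeros(committee_predictions.shape[0])
    for i, votes in enumerate(committee_predictions):
        _, counts = np.unique(votes, return_counts=True)
        p = counts / counts.sum()                 # p^BAG(y* = omega | x_i)
        n_i = len(counts)                         # number of classes predicted for x_i
        if n_i > 1:
            scores[i] = -(p * np.log(p)).sum() / np.log(n_i)
    return scores

# Most uncertain candidates: np.argmin(mclu(f_oaa)) and np.argmax(neqb(votes)).
```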
### II-B Learning user’s confidence

Assuming that the photointerpreter is able to label all the pixels in $U$ makes active learning algorithms efficient tools for semiautomatic training set composition. However, the pixels minimizing Eq. (1) are often difficult to label, since i) they correspond to pixels of maximal uncertainty and ii) they are situated on the border between classes. As a consequence, these pixels often lie on the border between objects in the spatial domain, as observed in the zooms reported in Fig. 1. These difficult pixels can be considered as bad states, in the sense that the user may be uncertain or agnostic about the response to give for these queries, since their choice does not depend on the ability of the user nor on his/her knowledge of the scene [23]. A user can become frustrated when encountering several bad states, since he/she is forced to waste resources (he/she repeatedly cannot provide an answer, and the time needed to get the necessary number of labeled pixels is therefore greatly lengthened), at the risk of degrading his/her performance (for example, through increased fatigue). Worse, the user can decide to give advice that is unreliable, thus degrading the model performance with mislabeled training samples.

To decide which states the user would or would not like to encounter, we consider a strategy close to the Confident Execution proposed in [27]: at each iteration, the confidence of the user is assessed and a query minimizing Eq. (1) is presented to him/her only if the confidence about that pixel exceeds a given threshold $\theta$. If the threshold is not met, the query is skipped and the definition of states is updated using this negative example. Practically, we train a second model that learns the user’s confidence in labeling. This model learns to separate situations where labeling is feasible (with current knowledge) from others where the user is not supposed to be able to provide a label ($Y_{\theta}=[-1;+1]$). Contrary to [27], we do not distinguish between states that are unfamiliar (which would correspond to new classes) and states that are merely ambiguous. To interpret the confidence as a probability and normalize it across iterations of the active learning loop, the outputs of the model are converted into probabilities: this operation is natural for models such as LDA or neural networks, but when using SVMs (as in this study) an estimation such as the one proposed by Platt [31] has to be used.

Algorithm 1 summarizes the flowchart of the proposed method. Note that in AL-UC there are two training sets: the first is the usual training set of the classifier, with output space $Y=[1,...,\Omega]$, while the second is a training set containing the confidence samples and a binary output $Y_{\theta}=[-1;1]$. While at the beginning the input samples coincide (${\mathbf{x}}^{\epsilon}={\mathbf{x}}^{\epsilon}_{\theta},\epsilon=1$), they start to diverge as soon as bad states are encountered: in this case, the batch of training examples for the multiclass classifier is not updated, but the uncertain sample is added to the confidence classifier as a negative example (line 16 of the Algorithm). This way, the confidence classifier is constantly updated as long as bad states are encountered. For this reason $|X^{\epsilon}|\leq|X^{\epsilon}_{\theta}|$.

Algorithm 1 AL-UC algorithm

Inputs
- Initial training set $X^{\epsilon}=\\{{\mathbf{x}}_{i},y_{i}\\}_{i=1}^{l}$ ($X\in\mathbb{R}^{d}$, $Y\in[1,...,\Omega]$, $\epsilon=1$, with $\epsilon$ the iteration index).
- Initial confidence training set $X^{\epsilon}_{\theta}=\\{{\mathbf{x}}_{i},\boldsymbol{1}_{i}\\}_{i=1}^{l}$ ($X\in\mathbb{R}^{d}$, $\epsilon=1$).
- Pool of candidates $U^{\epsilon}=\\{{\mathbf{x}}_{i}\\}_{i=l+1}^{l+u}$ ($U\in\mathbb{R}^{d}$, $\epsilon=1$).
- Number of pixels $m$ to add at each iteration (defining the size of the batch of selected pixels $S$).
1: repeat
2: Train the classifier with the current training set $X^{\epsilon}$;
3: Train the confidence classifier with the current $X^{\epsilon}_{\theta}$;
4: for each candidate in $U^{\epsilon}$ do
5: Evaluate the active learning _heuristic_;
6: Assess the assignment’s confidence $p(y_{\theta}=+1|X^{\epsilon}_{\theta})$;
7: end for
8: Rank the candidates in $U^{\epsilon}$ according to the score of the heuristic, obtaining the ranking $r$;
9: repeat
10: Select the next candidate in $r$, ${\mathbf{x}}_{r}$;
11: if $p(y_{\theta,r}=+1|X^{\epsilon}_{\theta})>\theta$ then
12: if the user can provide the label $y_{r}$ then
13: Add the labeled candidate to the batch $S^{\epsilon}=S^{\epsilon}\cup\\{{\mathbf{x}}_{r},y_{r}\\}$;
14: Add the positive example to the confidence training set $X^{\epsilon}_{\theta}=X^{\epsilon}_{\theta}\cup\\{{\mathbf{x}}_{r},1\\}$;
15: else
16: Add the negative example to the confidence training set $X^{\epsilon}_{\theta}=X^{\epsilon}_{\theta}\cup\\{{\mathbf{x}}_{r},-1\\}$;
17: end if
18: else
19: Add the negative example to the confidence training set $X^{\epsilon}_{\theta}=X^{\epsilon}_{\theta}\cup\\{{\mathbf{x}}_{r},-1\\}$;
20: end if
21: until the batch $S$ has $m$ candidates
22: Add the batch to the training set $X^{\epsilon+1}=X^{\epsilon}\cup S^{\epsilon}$;
23: Remove the batch from the pool of candidates $U^{\epsilon+1}=U^{\epsilon}\backslash S^{\epsilon}$;
24: $\epsilon=\epsilon+1$.
25: until a stopping criterion is met.

## III Data and setup

In this section, we present the datasets considered in the experiments, as well as the general experimental setup adopted.

### III-A Datasets

Two urban VHR images are considered for the experiments (Fig. 2). They describe urban environments at different spatial resolutions, in order to assess differences in confidence of labeling related to the resolution of objects.

* - QuickBird “Brüttisellen”. The first image is a 4-band optical image of a residential neighborhood of the city of Zurich named Brüttisellen, acquired by the QuickBird sensor in 2002. The image has a size of $329\times 347$ pixels and a geometrical resolution of 2.4 m. Nine classes of interest have been highlighted by photointerpretation and $40,762$ pixels are available (see Tab. I).

* - QuickBird “Highway”. The second image is another 4-band optical image of an industrial neighborhood of the city of Zurich, acquired by the QuickBird sensor in 2006. The image has a size of $828\times 889$ pixels. The original image was pansharpened using Bayesian Data Fusion [32] to attain a spatial resolution of 0.6 m. Seven classes of interest have been highlighted by photointerpretation and $254,469$ pixels are available (see Tab. I).

To account for the spatial context of the pixel, we stacked morphological features [33] onto the spectral vector: we added opening and closing features computed on the first principal component extracted from the multispectral image, as in [34, 35], which is a valid alternative to the use of a panchromatic image (as in [36, 37]). This operation allows us to separate land use classes of similar materials but with different spatial extents, for instance roads and parking lots. Since the images have different spatial resolutions, the structuring element sizes are $\\{1,3\\}$ in radius for the Brüttisellen image and $\\{3,6,9\\}$ for the Highway image. The shape is kept circular in both cases. The pixels highlighted by photointerpretation compose the test set on which the different approaches are evaluated; their specific quantities are reported in Tab. I.
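As an illustration of how such contextual features can be built, the sketch below (Python, using scikit-learn and scikit-image, with hypothetical variable names; the radii and the use of the first principal component follow the description above) computes an opening/closing profile on the first principal component with disk-shaped structuring elements and stacks it onto the spectral bands. This is a minimal sketch, not the exact feature-extraction code used in the experiments.

```python
import numpy as np
from sklearn.decomposition import PCA
from skimage.morphology import disk, opening, closing

def morphological_features(image, radii):
    """Opening/closing profile on the first principal component of `image`.

    image: (rows, cols, bands) multispectral array (hypothetical variable).
    radii: structuring-element radii, e.g. (1, 3) for Bruettisellen
           or (3, 6, 9) for the Highway image.
    Returns the spectral bands stacked with the morphological features.
    """
    rows, cols, bands = image.shape
    pc1 = PCA(n_components=1).fit_transform(
        image.reshape(-1, bands)).reshape(rows, cols)
    feats = [image]
    for r in radii:
        se = disk(r)                               # circular structuring element
        feats.append(opening(pc1, se)[..., None])  # grayscale opening
        feats.append(closing(pc1, se)[..., None])  # grayscale closing
    return np.concatenate(feats, axis=-1)
```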
Figure 2: Images considered in the experiments [(a) Brüttisellen, (b) Highway], along with their corresponding GTs used for testing purposes.

TABLE I: Number of labeled pixels used for evaluation of the Brüttisellen and Highway images.

Image | Class | GT pixels | Legend color
---|---|---|---
Brüttisellen | Trees | $1,095$ | Light green
 | Meadows | $13,123$ | Dark green
 | Harvested vegetation | $2,523$ | Light brown
 | Bare soil | $3,822$ | Brown
 | Residential buildings | $6,746$ | Orange
 | Commercial buildings | $5,277$ | Red
 | Asphalt | $6,158$ | Light gray
 | Parkings | $1,749$ | Dark gray
 | Pools | $269$ | Blue
Highway | Trees | $52,813$ | Light green
 | Meadows | $12,347$ | Dark green
 | Residential buildings | $78,018$ | Orange
 | Commercial buildings | $25,389$ | Red
 | Highway | $28,827$ | Dark gray
 | Asphalt | $43,005$ | Light gray
 | Shadows | $14,071$ | Cyan

### III-B Experimental setup

To test the proposed active learning with user’s confidence model, we built a MATLAB graphic user interface (Fig. 3), where real users have the task of labeling the VHR images of Zurich, Switzerland, presented above. Contrary to previously published active learning papers in remote sensing, the whole image can be sampled and a human user performs the labeling.

Figure 3: Graphic user interface developed for testing the AL-UC model. On the left, the image to be labeled; in the middle, class buttons and a zoom; on the right, the current classification map, the confidence map and the spectrum of the pixel in the white circle.

Figure 4: AL-UC ingredients, shown for iterations $\epsilon=1$ ($|X^{1}|=65,|X^{1}_{\theta}|=82$), $\epsilon=2$ ($|X^{2}|=85,|X^{2}_{\theta}|=111$) and $\epsilon=5$ ($|X^{5}|=165,|X^{5}_{\theta}|=207$). (a) A classical active learning heuristic: MCLU [uncertain samples correspond to dark colors]. (b) A user’s confidence estimation: SVM trained on the user’s responses [easily labelable samples correspond to light colors]. (c) Mask obtained by thresholding the confidence map at $\theta=0.6$. Only pixels in the black areas can now be selected by minimizing the MCLU criterion.

After initialization, the user is invited to enter an initial training set by photointerpretation. Owing to their difference in size, 5 pixels per class are queried for the “Brüttisellen” image ($|X^{1}|=45$), while 10 pixels per class are requested for the “Highway” image ($|X^{1}|=70$). He/she is invited to choose these pixels on the image through an interactive window. These samples receive a class label from the user and a positive label for the confidence classifier. Then, the active learning process starts. The user is asked to label the pixels selected by an active learning algorithm into one of the $\Omega$ classes of interest ($Y=[1,...,\Omega]$), detailing the different urban land use types, or into an “unknown class” if the user does not know which label to assign. Every 20 valid labeled pixels (i.e., every 20 answers other than “unknown class”), the classifier is retrained with the increased training set and produces a new ranking of the unlabeled pixels. For EQB, we considered a committee of $10$ models, each one using $75\%$ of the available training data drawn randomly from $X^{\epsilon}$. As a base classification model, we used a nonlinear SVM with an RBF Gaussian kernel. As a lower bound of performance, we used a random selection of locations (RS).
Since the first training set $X^{1}$ is very small, the RBF kernel parameter is estimated using the median distance between pixels in the image. The Torch library is used for the multiclass SVM [38], which implements the one-against-all (OAA) strategy. To avoid bad states on the active learning output, we learn the user’s confidence by training a second binary SVM classifier with RBF kernel. We consider the probabilistic output using Platt’s method [31]. The LibSVM solver is used, as it returns these posterior probabilities. This classifier is trained only since iteration $\epsilon=2$, because at the first iteration there are no negative samples ($X^{1}_{\theta}$ contains only positive samples chosen by the user). To avoid overfitting of specific situations, we kept the search range for the $\sigma$ kernel bandwidth large in a 4-folds crossvalidation strategy ($\sigma=[10^{-1},...,10^{3}]$). The threshold $\theta$ is fixed at $0.6$ after experimental testing: it constitutes a good tradeoff between a filtering that is too strong (which would return uninformative pixels) and one that is too weak (which would be identical to common AL). In the experiments reported in Sections IV-A and IV-B, a single user performed $5$ independent experiments, where he choose the initial training set by clicking on the image. The user knows the image, as well as the task to be performed (i.e. the ground truth). On the contrary, in Section IV-C we considered different users in the labeling task. In this case, we compared the performance of five users, three with experience in remote sensing and labeling tasks, and two who are not familiar with those tasks. Each user performed a single experiment with the three models (random, standard active learning and the proposed AL-UC, both with MCLU as a base heuristic). As stated above, the different models are compared in terms of estimated Kappa statistics on the entirety of labeled pixels shown in bottom row of Fig. 2 (quantities in Table I). ## IV Results This section reports the experimental results obtained on the two case studies. ### IV-A Zurich Brüttisellen Figure 4 illustrates the basic components of the proposed AL-UC on the Brüttisellen dataset: in the left column, the AL heuristic is reported at iterations $1$, $2$ and $5$. Even if during the iterations there is an increase in confidence on large spatially smooth areas, the heuristic remains fragmented in complex areas and many minima can be seen. Central column of Fig. 4 illustrates the posterior probability of the confidence classifier: during the iterations, this classifier specializes in detecting areas that are easily recognized by the user and not only large, smooth areas (for example, note how the roads become more and more confident, even if they are thin elongated structures). In Fig. 5, we show two examples of how the confidence map works: in the top example, the railway is easily labeled by the user and result in high confidence, while the train (in white/yellow) is not among the classes to be detected, so it is handled as a bad state with low confidence. In the bottom example, one can appreciate how the linear structures such as roads receive high confidence, as well as the parking lots, which are characterized by high variance (parked cars). | ---|--- | (a) Image | (b) Confidence map Figure 5: Zooms into the confidence map of Fig. 4b at iteration $\epsilon=5$. MCLU [20] | | | ---|---|---|--- EQB [9] | | | | a) | b) | c) Figure 6: Numeric results for the Brüttisellen dataset. 
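A minimal sketch of the two ingredients just described — the median-distance initialization of the RBF bandwidth and a Platt-calibrated confidence classifier with 4-fold cross-validation over $\sigma$ — is given below, assuming scikit-learn in place of the Torch/LibSVM implementations used in the experiments; the function names and the subsampling cap are illustrative.

```python
import numpy as np
from sklearn.metrics import pairwise_distances
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

def median_sigma(X, max_points=2000, rng=None):
    """Median pairwise distance, used to set the RBF bandwidth when the training set is tiny."""
    rng = np.random.default_rng(rng)
    idx = rng.choice(len(X), size=min(max_points, len(X)), replace=False)
    d = pairwise_distances(X[idx])
    return np.median(d[np.triu_indices_from(d, k=1)])

def train_confidence_classifier(X_theta, y_theta):
    """Binary RBF-SVM with Platt-scaled probabilistic outputs for the user's confidence."""
    # sigma grid spanning [1e-1, ..., 1e3], expressed as gamma = 1 / (2 sigma^2)
    grid = {"gamma": [1.0 / (2.0 * s ** 2) for s in np.logspace(-1, 3, 9)]}
    svm = SVC(kernel="rbf", probability=True)        # probability=True enables Platt scaling
    return GridSearchCV(svm, grid, cv=4).fit(X_theta, y_theta)
```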
a) Kappa statistic with traditional active learning setting; b) Number of queries per iteration involving 20 valid labeled samples; c) Kappa statistic related to average real effort provided by the user. Finally, both sets of information are fused by creating a confidence mask. The confidence maps of Fig. 4(b) are simply thresholded at a level of confidence $\theta$ and only the pixels for those $p((y_{\theta,i}=+1|{\mathbf{x}}))>\theta$ become presentable to the user. This way, the model continues to rank the pixels according to the active learning criterion, but only areas supposed to be easily understood by the user are made visible (in red in Fig. 4(c)). Numerical results on the Brüttisellen image are reported in Fig. 6 for the three methods considered and the two heuristics tested. The left-hand panel (Fig. 6(a)) illustrates the numerical performance at the end of each iteration. At first glance, the proposed AL-UC seems not to outperform the standard AL. These observations would be correct considering an omniscient user who can always label the pixels queried by the model (i.e. if $20$ labeled pixels could be obtained with $20$ queries per iteration). However, as illustrated in Fig. 1, the traditional active learning method often highlights border pixels that the user cannot label: after initialization, where $45$ pixels are queried by each method, the user needs to consider, on average, $45$ to $50$ pixels to provide $20$ labels at every iteration (Fig. 6(b)). Random sampling is much more efficient in this sense, since the user needs around $35$ queries to label $20$ pixels (40% of the queries were feasible for the user). Finally, the proposed AL-UC was the most efficient, since only $25$ to $30$ queries are necessary to obtain the $20$ labels. This shows that the second classifier has learned the confidence of the user, as it queries more useful pixels than the random strategy. This leads to a rescaling of the curves of Fig. 6(a) into a more realistic picture, that is illustrated in Fig. 6(c): the performance is plotted as a function of the real effort provided by the user, i.e., the total number of queries (both successful and unsuccessful). This figure shows that the proposed method has a steeper learning phase than the traditional active learning method, since the latter wastes several queries on pixels too difficult to label. When taking a performance objective of 0.75 in $\kappa$, the proposed method needed on the average $432$ queries to reach it, while traditional active learning needed one hundred additional queries. By looking back at the confidence maps in Fig. 4(b), we can also observe that during the iterations the confidence is spread in all the areas that are easy to label: regular areas, like buildings, bare soil or very contrasted structures, such as roads. These maps also illustrate that the threshold of confidence $\theta$ must not be chosen too high, since a risk of getting stuck on a single object in the first iterations increase: in this case, setting $\theta=0.9$ would have focused all the sampling in a few areas such as the commercial area in the bottom-left corner, thus trapping the solution in a local maximum. | ---|--- (a) MCLU | (b) EQB Figure 7: Comparison of the uncertainty maps returned by MCLU and EQB (multiplied by a -1 factor). Dark areas correspond to uncertain areas. Regarding comparison with the EQB heuristic (reported in the bottom row of Fig. 6) similar trends are observed. 
The learning curve is less steep, showing the adequacy of MCLU for active learning with SVM (the heuristic uses the decision function directly, while EQB works with committees of models), but in general both reach a similar performance after 200 valid queries. However, EQB seems to require less queries to find the 20 valid samples, at each iteration and independently from the approach tested. This can be explained by the differences in the nature of the two heuristics (Fig. 7): MCLU ranks pixels according to the (continuous) decision function obtained on the two most probable classes; this means that there are potentially as many different values as candidates in $U^{\epsilon}$. We saw that, on the average, 50% of the samples minimizing such function are easy to label. On the contrary, the EQB heuristic depends on a committee of predictions and the heuristic only has a limited number of possible entropy values (depending on the number of classes $\Omega$, the number of models $k$ and the number of classes predicted by the committee $N_{i}$ (see Eq. (3)). Therefore, the EQB function is a quantized function with few distinct values111In the case of the experiments reported, the number of partitions $P$ (corresponding to different entropy values) achievable by a committee of $n$ models predicting $K$ classes is $P(n,K)=\sum_{k=1}^{K}P(n,k)$. To compute this quantity, we use the following three properties: i) $P(n,k)=P(n-1,k-1)+P(n-k,k)$; ii) $P(n,k)=0$ if $n<k$; iii) $P(n,n)=P(n,1)=1$. Using the recursive formula in i), the maximum number of entropy values is $P(10,9)=\sum_{k=1}^{9}P(10,k)=41$.. As a consequence, several pixel candidates receive the maximal entropy value and the choice is then done randomly among those with maximal entropy. Consequently with EQB, the chances to query a pixel that the user can label increase. Figure 8 illustrates the importance of finding a good model to describe the confidence. It shows the crossvalidation surface of the parameter space of the confidence classifier at iteration $\epsilon=4$ for a given run of the AL-UC algorithm. The green circle highlights the area of solutions capable of describing the confidence of the user. A good selection of parameters results in confidence maps that successfully constrain the AL heuristic (in green/solid), whereas a bad choice of parameters leads to confidence maps that do not constrain the AL heuristic (in red/dashed). --- Figure 8: Crossvalidation surface showing the overall accuracy of the confidence classifier for the Brüttisellen dataset. On the right, confidence maps corresponding to the maximum (in green/solid) and two minima (in red/dashed). ### IV-B Zurich highway | | ---|---|--- a) | b) | c) Figure 9: Numeric results for the Highway dataset using the MCLU heuristic. a) Kappa statistic with traditional active learning setting; b) Number of queries per iteration involving 20 valid labeled samples; c) Kappa statistic related to average real effort provided by the user. Experimental results on the Highway dataset are reported in Fig. 9 for the three methods considered. Given the higher resolution, the difference between the compared methods is expected to be lower, since a higher proportion of uncertain samples are expected to be within the objects of interest (for instance chimneys on roofs). This is due to the increased intraclass variance of the classes in the Highway dataset, that has a spatial resolution four times higher ($0.6$ m for “Highway” vs. $2.4$ m for “Brüttisellen”). 
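The counting argument in the footnote above can be reproduced with a short script; this is a sketch of the stated recursion, where `partitions_exact(n, k)` counts the partitions of $n$ into exactly $k$ positive parts.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def partitions_exact(n, k):
    """Partitions of n into exactly k positive parts: P(n,k) = P(n-1,k-1) + P(n-k,k)."""
    if k == 0:
        return 1 if n == 0 else 0
    if n < k:
        return 0
    if k == 1 or n == k:
        return 1
    return partitions_exact(n - 1, k - 1) + partitions_exact(n - k, k)

def max_entropy_values(n_models, n_classes):
    """Upper bound on the number of distinct entropy values an EQB committee can produce."""
    return sum(partitions_exact(n_models, k) for k in range(1, n_classes + 1))

print(max_entropy_values(10, 9))  # prints 41, matching the footnote
```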
As a consequence, the ambiguous areas are more limited in space (Fig. 10), since the traditional AL tends to ask pixels that are anomalous, but within the urban objects: in this case, the user can respond more easily to the query, since the object itself is easily identifiable. Curves reported in Fig. 9 confirm this intuition: after initialization, where $70$ pixels are queried for each method, AL-UC performs similarly to classical active learning, but AL requires on average $10$ additional queries per iteration to obtain the $20$ labeled pixels. On average, AL-UC needs $25$ to $30$ queries (similar to RS), while AL requires about $40$. The right-hand plot illustrates the effort demanded to the used against the performance and, as for the previous dataset, AL-UC allows to retrieve higher accuracy with less queries. | ---|--- (a) Confidence map | (b) Confidence mask Figure 10: (a) Confidence map and (b) corresponding binary mask for the Highway dataset at iteration $\epsilon=5,|X^{5}|=170,|X_{\theta}^{5}|=218$. ### IV-C Considering a committee of users In this section, we consider a committee of five users, each one performing one experiment on the Brutisellen image with the three models. Three of them are trained remote sensing analysts, while the others two are not familiar with labeling tasks. Figure 11 illustrates the average number of queries required by the four users. The tendency observed in the single-user experiment presented above are confirmed. This shows that the proposed method efficiently evaluates the confidence of the user in labeling, avoiding bad states and significantly reducing the number of queries. Figure 11: Average number of queries for a committee of different users performing a single run of each model. ## V Conclusions In this paper, we studied a way to learn the confidence of a photointerpreter to make active learning routines effective in real-life scenarios. For the first time, an active learning method is assessed on the entirety of an image and shortcomings related to the uncertainty of a signal are put in relation with the capacity of a user to provide a reliable label for the pixel. By assessing the probability of having a bad state (a pixel that the photointerpreter is unable to label), the most uncertain pixels ranked by active learning are filtered, thus presenting to the user a set that he/she is capable of labeling. Experiments on two QuickBird images at different spatial resolutions showed the efficiency of the method, which significantly decreased the number of queries that the user must provide to fulfill his task. This work opens a wide range of possible studies for operative active learning: the effects of the user must be studied, as well as the effect of adapting the threshold $\theta$ along the iterations (see [27]). If the latter is more a technical question to be tackled in the future, the first opens avenues related to crowdsourcing [39, 40] and community-based online learning of surface signatures. ## Acknowledgements The authors would like to acknowledge M. Kanevski (University of Lausanne) for the access to the QuickBird images and M. Volpi (University of Lausanne) for the important inputs about the paper. We also would like to thank the five users for collaborating in the users committee experiments. ## References * [1] P. J. Pinter, J. L. Hatfield, J. S. Schepers, E. M. Barnes, S. Moran, C. S. T. Daughtry, and D. R. Upchurch, “Remote sensing for crop management,” Photogramm. Eng. Rem. S., vol. 69, no. 6, pp. 647–664, 2003. * [2] W. A. 
Dorigo, R. Zurita-Milla, A.J.W. de Wit, J. Brazile, R. Singh, and M. Schaepman, “A review on reflective remote sensing and data assimilation techniques for enhanced agroecosystem modeling,” Int. J. Appl. Earth Obs. Geoinf., vol. 9, pp. 166–193, 2007. * [3] H. Taubenbock, T. Esch, M. Wiesner, A. Roth, and S. Dech, “Monitoring urbanization in mega cities from space,” Remote Sensing of Environment, vol. 117, pp. 162–176, 2012. * [4] G. Camps-Valls, D. Tuia, L. Gómez-Chova, S. Jimenez, and J. Malo, Remote Sensing Image Processing, Synthesis Lectures on Image, Video, and Multimedia Processing. Morgan and Claypool, 2011, Available at: _http://www.morganclaypool.com/toc/ivm/5/1_. * [5] S. Prasad, L. Bruce, and J. Chanussot, Optical Remote Sensing, Advances in Signal Processing and Exploitation Techniques. Springer, 2011\. * [6] G. M. Foody and A. Mathur, “Toward intelligent training of supervised image classifications: directing training data acquisition for SVM classification,” Remote Sensing of Environment, vol. 93, no. 1-2, pp. 107–117, 2004\. * [7] G. M. Foody and A. Mathur, “The use of small training sets containing mixed pixels for accurate hard image classification: training on mixed spectral responses for classification by a SVM,” Remote Sensing of Environment, vol. 103, no. 2, pp. 179–189, 2006\. * [8] D. Tuia, M. Volpi, L. Copa, M. Kanevski, and J. Muñoz-Marí, “A survey of active learning algorithms for supervised remote sensing image classifications:,” IEEE J. Sel. Topics Signal Proc., vol. 5, no. 3, pp. 606–617, 2011. * [9] D. Tuia, F. Ratle, F. Pacifici, M. Kanevski, and W.J. Emery, “Active learning methods for remote sensing image classification,” IEEE Trans. Geosci. Remote Sens., vol. 47, no. 7, pp. 2218–2232, 2009. * [10] W. Di and M. M. Crawford, “Active learning via multi-view and local proximity co-regularization for hyperspectral image classification,” IEEE J. Sel. Topics Signal Proc., vol. 5, no. 3, pp. 618–628, 2011. * [11] W. Di and M. M. Crawford, “View generation for multiview maximum disagreement based active learning for hyperspectral image classification,” IEEE Trans. Geosci. Remote Sens., in press. * [12] P. Mitra, B. Uma Shankar, and S.K. Pal, “Segmentation of multispectral remote sensing images using active support vector machines,” Pattern Recogn. Lett., vol. 25, no. 9, pp. 1067–1074, 2004. * [13] E. Pasolli, F. Melgani, and Y. Bazi, “SVM active learning through significance space construction,” IEEE Geosci. Remote Sens. Lett., vol. 8, no. 3, pp. 431 – 435, 2011. * [14] S. Patra and L. Bruzzone, “A fast cluster-assumption based active-learning technique for classification of remote sensing images,” IEEE Trans. Geosci. Remote Sens., vol. 49, no. 5, pp. 1617–1626, 2011. * [15] J. Li, J.M. Bioucas-Dias, and A. Plaza, “Semisupervised hyperspectral image segmentation using multinomial logistic regression with active learning,” IEEE Trans. Geosci. Remote Sens., vol. 48, no. 11, pp. 4085 –4098, 2010. * [16] D. Tuia, E. Pasolli, and W. J. Emery, “Using active learning to adapt remote sensing image classifiers,” Remote Sensing of Environment, vol. 115, pp. 2232–2242, 2011. * [17] Jun Li, J.M. Bioucas-Dias, and A. Plaza, “Hyperspectral image segmentation using a new bayesian approach with active learning,” IEEE Trans. Geosci. Remote Sens., vol. 49, no. 10, pp. 3947 –3960, 2011. * [18] D. Tuia, J. Muñoz-Marí, and G. Camps-Valls, “Remote sensing image segmentation by active queries,” Pattern Recognition, vol. 45, no. 6, pp. 2180–2192, 2012. * [19] J. Muñoz-Marí, D. 
Tuia, and G. Camps-Valls, “Semisupervised classification of remote sensing images with active queries,” IEEE Transaction on Geoscience and Remote Sensing, 2012. * [20] B. Demir, C. Persello, and L. Bruzzone, “Batch mode active learning methods for the interactive classification of remote sensing images,” IEEE Trans. Geosci. Remote Sens., vol. 49, no. 3, pp. 1014–1032, 2011. * [21] M. Volpi, D. Tuia, and M. Kanevski, “Memory-based cluster sampling for remote sensing image classification,” IEEE Transactions on Geoscience and Remote Sensing, in press. * [22] C. Persello and L. Bruzzone, “A novel active learning strategy for domain adaptation in the classification of remote sensing images,” in Proc. IEEE IGARSS 2011, Vancouver, Canada, July 2011, pp. 3720–3723. * [23] K. Judah, A. Fern, and T. Diettrich, “Active imitation learning via state queries,” in Intl. Conf. Mach. Learn. ICML, Workshop on Combining Learning Strategies to Reduce Label Cost, Bellevue, WA, USA, 2011. * [24] T. Inamura, M. Inaba, and H. . Inoue, “Acquisition of probabilistic behavior decision model based on the interactive teaching method,” in Proceedings of the Ninth International Conference on Advanced Robotics, 1999, pp. 523–528. * [25] M. N. Nicolescu, A framework for learning from demonstration, generalization and practice in human-robot domains, Ph.D. thesis, University of Southern California, 2003. * [26] D. Grollman and O. Jenkins, “Dogged learning for robots,” in IEEE International Conference on Robotics and Automation, 2007, pp. 2483–2488. * [27] S. Chernova and M. Veloso, “Interactive policy learning through confidence-based autonomy,” J. Artificial Intelligence Res., vol. 2009, pp. 1–25, 34. * [28] D. Cohn, L. Atlas, and R. Ladner, “Improving generalization with active learning,” Mach. Learn., vol. 15, no. 2, pp. 201–221, 1994. * [29] B. Settles, “Active learning literature survey,” Computer Sciences Technical Report 1648, University of Wisconsin-Madison, 2010. * [30] B. Boser, I. Guyon, and V. Vapnik, “A training algorithm for optimal margin classifiers,” in 5th ACM Workshop on Computational Learning Theory, Pittsburgh, USA, 1992, pp. 144–152. * [31] J. Platt, “Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods,” in Advances in large margin classifiers, pp. 61–74. MIT press, 1999. * [32] D. Fasbender, J. Radoux, and P. Bogaert, “Bayesian data fusion for adaptable image pansharpening,” IEEE Trans. Geosci. Remote Sens., vol. 46, no. 6, pp. 1847–1857, 2008. * [33] P. Soille, Morphological image analysis, Springer-Verlag, Berlin-Heidelberg, 2004. * [34] J.A. Benediktsson, J. A. Palmason, and J. R. Sveinsson, “Classification of hyperspectral data from urban areas based on extended morphological profiles,” IEEE Trans. Geosci. Remote Sens., vol. 43, no. 3, pp. 480–490, 2005. * [35] G. Licciardi, F. Pacifici, D. Tuia, S. Prasad, T. West, F. Giacco, J. Inglada, E. Christophe, J. Chanussot, and P. Gamba, “Decision fusion for the classification of hyperspectral data: Outcome of the 2008 GRS-S data fusion contest,” IEEE Trans. Geosci. Remote Sens., vol. 47, no. 11, pp. 3857–3865, 2009. * [36] N. Longbotham, C. Chaapel, L. Bleiler, C. Padwick, W. J. Emery, and F. Pacifici, “Very high resolution multiangle urban classification analysis,” IEEE Trans. Geosci. Remote Sens., vol. 50, no. 4, pp. 1155–1170, 2012. * [37] M. Dalla Mura, J. Atli Benediktsson, B. Waske, and L.; Bruzzone, “Morphological attribute profiles for the analysis of very high resolution images,” IEEE Trans. 
Geosci. Remote Sens., vol. 48, no. 10, pp. 3747–3762, 2010. * [38] R. Collobert, S. Bengio, and J. Mariéthoz, “Torch: a modular machine learning software library,” Tech. Rep. RR 02-46, IDIAP, 2002. * [39] J. Abernethy and R. Frongillo, “A collaborative mechanism for crowdsourcing prediction problems,” in Advances in Neural Information Processing Systems (NIPS), 2011\. * [40] R. Gomes, P. Welinder, A. Krause, and P. Perona, “Crowdclustering,” in Proceedings of Advances in Neural Information Processing Systems (NIPS), 2011.
11institutetext: Wake Forest University Department of Statistical Sciences Winston-Salem, North Carolina, 27109, U.S.A 11email<EMAIL_ADDRESS> North Carolina State University Department of Statistics Raleigh, North Carolina 27695, U.S.A. 11email<EMAIL_ADDRESS> # Variational approximations of possibilistic inferential models Leonardo Cella Ryan Martin ###### Abstract Inferential models (IMs) offer reliable, data-driven, possibilistic statistical inference. But despite IMs’ theoretical/foundational advantages, efficient computation in applications is a major challenge. This paper presents a simple and apparently powerful Monte Carlo-driven strategy for approximating the IM’s possibility contour, or at least its $\alpha$-level set for a specified $\alpha$. Our proposal utilizes a parametric family that, in a certain sense, approximately covers the credal set associated with the IM’s possibility measure, which is reminiscent of variational approximations now widely used in Bayesian statistics. ###### Keywords: Confidence regions; credal set; Monte Carlo; statistical inference; stochastic approximation. ## 1 Introduction For a long time, despite Bayesians’ foundational advantages, few statisticians were actually using Bayesian methods. The computational burden for any serious Bayesian analysis was simply too high. Things changed significantly when Monte Carlo methods brought Bayesian solutions within reach. Things changed again more recently with the advances in various approximate Bayesian computational methods, in particular, the variational approximations in Blei et al., (2017) and the references therein. The once clear lines between what was computationally feasible for Bayesians and for others have now been blurred, reinvigorating Bayesians’ efforts in modern applications. Dennis Lindley predicted that “[statisticians] will all be Bayesians in 2020” (Smith, 1995)—his prediction did not come true, but the Bayesian community is stronger now than ever. While Bayesian and frequentist are currently the two mainstream schools of thought in statistical inference, these are not the only perspectives. For example, the Dempster–Shafer theory of belief functions originated as an improvement to and generalization of both Bayesian inference and Fisher’s fiducial argument. Of particular interest to us here are the recent advances in inferential models (IMs, Martin and Liu, 2013, 2015; Martin, 2021; Martin, 2022b ), a framework that offers Bayesian-like, data-dependent, possibilistic quantification of uncertainty about unknowns but with built-in, frequentist- like reliability guarantees. IMs and other new/non-traditional frameworks are currently facing the same computational challenges that Bayesians faced years ago. That is, we know what we want to compute and why, but we are lacking the tools to do so efficiently. Monte Carlo methods are still useful, but the imprecision that is central to the IM’s reliability guarantees implies that Monte Carlo methods alone are not enough. Similar to Lindley’s prediction, for Efron’s speculation about fiducial and related methods—“Maybe Fisher’s biggest blunder will become a big hit in the 21st century!” (Efron, 1998)—to come true, imprecision-accommodating advances in Monte Carlo computations are imperative. This paper offers a simple idea that we hope can be further developed into a general tool for computationally efficient and provably, statistically reliable possibilistic inference. 
Our idea leverages a well-known characterization of a possibility measure’s credal set in terms of the probabilities assigned to the associated contour’s upper level sets. If our goal is simply to identify those upper level sets—which, in the IM context, are confidence regions—then it can be done using Monte Carlo sampling from the “most diffuse” member of that credal set. Akin to variational Bayes, we propose to cover that credal set with a parametric family and then numerically solve for the parameter corresponding to our best approximation of that “most diffuse” member. This is inspired by the recent work in Jiang et al., (2023) and the seemingly unrelated developments in Syring and Martin, (2019). ## 2 Background: possibilistic IMs The first IMs (e.g., Martin and Liu, 2013, 2015) were formulated in terms of nested random sets and their corresponding belief functions, with certain connections to Dempster–Shafer theory. A more recent IM construction presented in Martin, 2022b defines the IM’s possibility contour using a probability-to- possibility transform applied to the relative likelihood. This section briefly reviews this possibilistic IM construction, its properties, and its shortcomings. Suppose that $X^{n}=(X_{1},\ldots,X_{n})$ consists of iid samples from a distribution $\mathsf{P}_{\Theta}$ depending on an unknown/uncertain $\Theta\in\mathbb{T}$. The model and observed data $X^{n}=x^{n}$ together determine a likelihood function $\theta\mapsto L_{x^{n}}(\theta)$ and a corresponding relative likelihood function $R(x^{n},\theta)=\frac{L_{x^{n}}(\theta)}{\sup_{\vartheta}L_{x^{n}}(\vartheta)}.$ The relative likelihood itself is a data-dependent possibility contour (e.g., Shafer, 1982; Wasserman, 1990; Denœux, 2006, 2014), which has a number of nice properties. What it lacks, however, is a calibration that gives (a) meaning to “possibility values” it assigns to different parameter values and (b) sufficient structure to establish frequentist-style error rate control guarantees. Fortunately, it is at least conceptually straightforward to achieve this calibration by applying what Martin, 2022a calls “validification,” which is just a version of the probability-to-possibility transform (e.g., Dubois et al., 2004; Hose, 2022). In particular, for observed data $X^{n}=x^{n}$, the possibilistic IM’s contour is $\pi_{x^{n}}(\theta)=\mathsf{P}_{\theta}\\{R(X^{n},\theta)\leq R(x^{n},\theta)\\},\quad\theta\in\mathbb{T},$ (1) and the corresponding possibility measure is $\overline{\Pi}_{x^{n}}(H)=\sup_{\theta\in H}\pi_{x^{n}}(\theta)$, for $H\subseteq\mathbb{T}$. Critical to the IM developments is the so-called validity property: $\sup_{\Theta\in\mathbb{T}}\mathsf{P}_{\Theta}\bigl{\\{}\pi_{X^{n}}(\Theta)\leq\alpha\bigr{\\}}\leq\alpha,\quad\text{for all $\alpha\in[0,1]$}.$ (2) Aside from providing meaning or inferential force to the numerical values returned by the possibilistic IM, the validity property (2) also ensures its safety from false confidence (Balch et al., 2019; Martin, 2019) and has some more familiar statistical consequences. 
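As a concrete illustration of (1), the contour can be computed exactly in simple models where the relative likelihood depends on the data only through a discrete sufficient statistic. The sketch below, assuming numpy/scipy, does this for the Bernoulli model revisited in Example 1 of Section 4.

```python
import numpy as np
from scipy.stats import binom

def contour(s_obs, n, theta):
    """Possibilistic IM contour (1) for the Bernoulli model, computed exactly via the
    binomial distribution of the success count S = X_1 + ... + X_n under P_theta."""
    s = np.arange(n + 1)
    # log relative likelihood R(s, theta); the binomial coefficients cancel in the ratio
    logR = binom.logpmf(s, n, theta) - binom.logpmf(s, n, s / n)
    r_obs = logR[s_obs]
    return binom.pmf(s, n, theta)[logR <= r_obs + 1e-12].sum()

# e.g. contour(6, 15, 0.4) == 1.0 (theta at the MLE), with values decaying away from 0.4
```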
Of particular relevance here is that the set $C_{\alpha}(x^{n})=\\{\theta\in\mathbb{T}:\pi_{x^{n}}(\theta)>\alpha\\},$ (3) indexed by a confidence level $\alpha\in[0,1]$, is a $100(1-\alpha)$% frequentist confidence set in the sense that its coverage probability is at least $1-\alpha$, i.e., $\sup_{\Theta\in\mathbb{T}}\mathsf{P}_{\Theta}\\{C_{\alpha}(X^{n})\not\ni\Theta\\}\leq\alpha,\quad\alpha\in[0,1].$ The IM construction and corresponding theoretical properties are quite clean. Where things start to get messy, however, is when it comes to computation of the IM’s possibility contour, the corresponding confidence set (3), etc. The point is that rarely do we have the sampling distribution of the relative likelihood $R(X^{n},\theta)$, under $\mathsf{P}_{\theta}$, available in closed form to facilitate exact computation of $\pi_{x^{n}}(\theta)$ for any $\theta$. So, instead, the go-to strategy is to approximate that sampling distribution using Monte Carlo at each value of $\theta$ on a sufficiently fine grid. That is, the possibility contour is approximated as $\pi_{x^{n}}(\theta)\approx\frac{1}{M}\sum_{m=1}^{M}1\\{R(X_{m,\theta}^{n},\theta)\leq R(x^{n},\theta)\\},\quad\theta\in\mathbb{T},$ where $1(\cdot)$ is the indicator function and $X_{m,\theta}^{n}$ consists of $n$ iid samples from $\mathsf{P}_{\theta}$ for $m=1,\ldots,M$. The above computation is feasible at one or a few different $\theta$ values, but frequently this needs to be carried out on a sufficiently fine grid covering the (relevant area) of the possibly multi-dimensional parameter space $\mathbb{T}$. For example, the confidence region in (3) requires that we can solve the equation $\pi_{x^{n}}(\theta)=\alpha$ and the naive approach is to compute the contour over a huge grid and then keep those that (approximately) solve the aforementioned equation. This amounts to lots of wasted computations. Simple tweaks to this most-naive approach can be employed in certain cases, e.g., importance sampling adjustments, but these adjustments require problem-specific considerations, so this does not offer any substantial improvements in computational efficiency. The next section proposes a new and general strategy that is much more efficient. ## 3 Variational approximations There are a number of different strategies one can employ to approximate the possibility contour. In addition to the Monte Carlo-based strategy described above, there are analytical approximations available based on large-sample theory (Martin and Williams, 2024). The goal here is to strike a balance between the “exact” but expensive Monte Carlo-based approximation and the rough but cheap large-sample approximation. To strike this balance, we must focus on a specific feature of the IM solution, in particular, the confidence sets $C_{\alpha}(x^{n})$ in (3). Our specific proposal resembles the variational approximations that are now widely used in Bayesian analysis and elsewhere in machine learning. 
Following Destercke and Dubois, (2014), Couso et al., (2001), and others, the possibilistic IM’s credal set, $\mathscr{C}(\overline{\Pi}_{x^{n}})$, has a remarkable characterization: $\mathsf{Q}_{x^{n}}\in\mathscr{C}(\overline{\Pi}_{x^{n}})\iff\mathsf{Q}_{x^{n}}\\{C_{\alpha}(x^{n})\\}\geq 1-\alpha,\quad\text{for all $\alpha\in[0,1]$}.$ That is, a data-dependent probability measure $\mathsf{Q}_{x^{n}}$ is consistent with $\overline{\Pi}_{x^{n}}$ if and only if, for each $\alpha\in[0,1]$, it assigns at least $1-\alpha$ probability to the IM’s confidence set $C_{\alpha}(x^{n})$ in (3). The best inner probabilistic approximation of the possibilistic IM, if it exists, corresponds to a $\mathsf{Q}_{x^{n}}^{\star}$ such that $\mathsf{Q}_{x^{n}}^{\star}\\{C_{\alpha}(x^{n})\\}=1-\alpha$ for all $\alpha\in[0,1]$. For a certain class of statistical models, Martin, (2023) identified this best inner approximation with Fisher’s fiducial distribution. Beyond this special class of models, however, it is not clear if a best inner approximation exists and, if so, how to find it. A less ambitious goal is to find, for a fixed choice of $\alpha$, a probability distribution $\mathsf{Q}_{x^{n},\alpha}^{\star}$ such that $\mathsf{Q}_{x^{n},\alpha}^{\star}\\{C_{\alpha}(x^{n})\\}=1-\alpha.$ (4) Our goal here is to develop a general strategy for finding, for a given $\alpha\in(0,1)$, a probability distribution $\mathsf{Q}_{x^{n},\alpha}^{\star}$ that (at least approximately) solves the equation in (4). Once identified, we can reconstruct relevant features of the possibilistic IM via (Bayesian-like) Monte Carlo sampling from this $\mathsf{Q}_{x^{n},\alpha}^{\star}$.

We propose to start with a parametric class of (data-dependent) probability distributions $\mathscr{Q}=\\{\mathsf{Q}_{x^{n}}^{\xi}:\xi\in\Xi\\}$, e.g., a Gaussian distribution with mean and covariance matrix depending in a particular way on the data and on $\xi$. More specifically, since the possibility contour’s mode is at the maximum likelihood estimator $\hat{\theta}_{x^{n}}$, it makes sense to fix the Gaussian family $\mathscr{Q}$’s mean at $\hat{\theta}_{x^{n}}$ but allow the covariance matrix to depend on both the data and a hyperparameter $\xi>0$ via, say, $\text{cov}(\mathsf{Q}_{x^{n}}^{\xi})=\xi^{2}\,J_{x^{n}}^{-1}$, where $J_{x^{n}}$ is the observed Fisher information matrix determined by the model. A “right” choice of $\mathscr{Q}$ is context-dependent.

Given a suitable choice of $\mathscr{Q}$, our proposed procedure is as follows. Define an objective function $f(\xi)=\mathsf{Q}_{x^{n}}^{\xi}(\\{\theta:\pi_{x^{n}}(\theta)>\alpha\\})-(1-\alpha),$ (5) so that solving (4) boils down to finding a root of $f$. If the probability on the right-hand side could be evaluated in closed form, then one could apply any of the standard root-finding algorithms, e.g., Newton–Raphson. However, this probability typically cannot be evaluated in closed form; instead, $f$ can be approximated via Monte Carlo with $\hat{f}$ defined as $\hat{f}(\xi)=\frac{1}{M}\sum_{m=1}^{M}1\\{\pi_{x^{n}}(\Theta_{m}^{\xi})>\alpha\\}-(1-\alpha),$ where $\Theta_{1}^{\xi},\ldots,\Theta_{M}^{\xi}\overset{\text{\tiny iid}}{\,\sim\,}\mathsf{Q}_{x^{n}}^{\xi}$. Presumably, these samples are cheap for every $\xi$ because the family $\mathscr{Q}$ has been specified by the user. But having only an unbiased estimator of the objective function requires some adjustments to the numerical routine.
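The estimator $\hat{f}$ itself is straightforward to compute; a minimal sketch for the Gaussian family above, assuming the user supplies the contour function $\theta\mapsto\pi_{x^{n}}(\theta)$ (itself possibly Monte Carlo-based), is given below. Names and defaults are illustrative.

```python
import numpy as np

def f_hat(xi, alpha, theta_hat, J_inv, contour, M=1000, rng=None):
    """Monte Carlo estimate of f(xi) = Q^xi{ theta : pi_xn(theta) > alpha } - (1 - alpha)."""
    rng = np.random.default_rng(rng)
    # draw M samples from the Gaussian member with mean theta_hat and covariance xi^2 * J^{-1}
    theta = rng.multivariate_normal(np.atleast_1d(theta_hat),
                                    (xi ** 2) * np.atleast_2d(J_inv), size=M)
    return np.mean([contour(t) > alpha for t in theta]) - (1.0 - alpha)
```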
In particular, rather than Newton–Raphson we must use a stochastic approximation algorithm (e.g., Syring and Martin, 2021; Martin and Ghosh, 2008; Kushner and Yin, 2003; Robbins and Monro, 1951) that is adapted to noisy function evaluations, such as $\hat{f}$. The basic Robbins–Monro algorithm, for instance, seeks the root of (5) through the updates $\xi_{t+1}=\xi_{t}\pm w_{t+1}\,\hat{f}(\xi_{t}),\quad t\geq 0,$ where “$\pm$” depends on whether $\xi\mapsto f(\xi)$ is decreasing or increasing, and $(w_{t})$ is a deterministic step size sequence that satisfies $\textstyle\sum_{t=1}^{\infty}w_{t}=\infty\quad\text{and}\quad\textstyle\sum_{t=1}^{\infty}w_{t}^{2}<\infty.$ Robbins and Monro, (1951) showed that, under certain conditions, the sequence $(\xi_{t})$ converges in probability to the root. If $\xi^{\star}=\xi^{\star}(x^{n},\alpha)$ is a solution to this root-finding problem, we set $\mathsf{Q}_{x^{n},\alpha}^{\star}=\mathsf{Q}_{x^{n}}^{\xi^{\star}}$. Then, for example, the probability-to-possibility transform applied to $\mathsf{Q}_{x^{n},\alpha}^{\star}$ should be a reasonable approximation to the contour $\pi_{x^{n}}$, at least in terms of their respective upper $\alpha$-level sets. Extensions of the proposed algorithm are discussed in Section 5.

## 4 Illustrations

For all but Example 4 below, we use the normal variational family $\mathscr{Q}$ with mean $\hat{\theta}_{x^{n}}$ and covariance $\xi^{2}J_{x^{n}}^{-1}$ as described above, with $\xi$ to be determined. All of the examples display variational approximations $\mathsf{Q}_{x^{n},\alpha}^{\star}$ based on $\alpha=0.1$.

###### Example 1 Let $X^{n}$ consist of iid Bernoulli samples, where $n=15$ in this case. If 6 events/successes are observed, then the exact IM possibility contour and that corresponding to the variational approximation are shown in Figure 1(a). As expected, the two contours closely agree.

Figure 1: Exact (black) and approximate (red) IM contours for (a) Example 1: Bernoulli, (b) Example 2: Bivariate normal, (c) Example 3: Logistic regression, and (d) Example 4: Multinomial; the approximations are based on $\mathsf{Q}_{x^{n},\alpha}^{\star}$ with $\alpha=0.1$. Panel (d) shows only the approximate contour (black), which is fully supported on the lower triangle corresponding to the probability simplex.

###### Example 2 Suppose $X^{n}$ consists of iid bivariate normal pairs with zero means and unit variances. Inference on the unknown correlation $\Theta$ is a surprisingly challenging goal (e.g., Basu, 1964). Figure 1(b) shows the exact IM contour and the approximation based on simulated data of size $n=50$ with true correlation 0.5. The exact contour has some asymmetry that the normal approximation cannot perfectly accommodate, but, as expected, it makes up for this imperfection with a slightly wider upper 0.1-level set.

###### Example 3 The data presented in Table 8.4 of Ghosh et al., (2006) concern the relationship between exposure to chloracetic acid and mouse mortality. A simple logistic regression model is fitted to relate the binary death indicator to the level of exposure to chloracetic acid for the dataset’s $n=120$ mice. Figure 1(c) presents the $0.1$-level set of the exact IM possibility contour for the regression coefficients, alongside the one corresponding to the variational approximation. The two contours line up almost perfectly.
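A minimal sketch of the Robbins–Monro recursion from Section 3 is given below; paired with the $\hat{f}$ sketch above, it could be used to reproduce, e.g., Example 1. The step-size schedule $w_{t}=w_{0}/t$ and the positivity clamp on $\xi$ are illustrative choices, not taken from the paper.

```python
def robbins_monro(noisy_f, xi0, T=2000, w0=1.0, decreasing=True):
    """Stochastic root-finding: xi_{t+1} = xi_t +/- w_{t+1} * f_hat(xi_t)."""
    sign = 1.0 if decreasing else -1.0   # '+' when f is decreasing in xi (typical for a spread parameter)
    xi = xi0
    for t in range(1, T + 1):
        w = w0 / t                        # satisfies sum w_t = inf and sum w_t^2 < inf
        xi = max(xi + sign * w * noisy_f(xi), 1e-6)   # keep the scale parameter positive
    return xi

# e.g. xi_star = robbins_monro(lambda xi: f_hat(xi, 0.1, theta_hat, J_inv, contour), xi0=1.0)
```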
###### Example 4 To highlight our proposal’s flexibility, we consider $n$ iid data points sampled from $\\{1,2,\ldots,K\\}$ with unknown probabilities $\Theta=(\Theta_{1},\ldots,\Theta_{K})$. The frequency table is a sufficient statistic, having a multinomial distribution with parameters $(n,K,\Theta)$. Here we use a Dirichlet variational family $\mathsf{Q}_{x^{n}}^{\xi}$, centered at the maximum likelihood estimator with precision $n\xi$. Figure 1(d) shows the approximate IM contour based on $K=3$ and counts $X=(8,10,7)$. The exact IM contour is virtually impossible to get, because naive Monte Carlo is slow. The discrete data gives it the unusual shape like in Figure 1(a) but in several dimensions. Here we get a smooth approximation in a matter of seconds. ## 5 Conclusion In a similar spirit to the variational approximations that are now widely used in Bayesian statistics, and building on recent ideas presented in Jiang et al., (2023), we develop here a strategy to approximate the possibilistic IM’s contour function—or at least its $\alpha$-level sets for a specified $\alpha$—using ordinary Monte Carlo sampling and stochastic approximation. A few examples are presented to highlight the potential of our proposal. A number of important and practically relevant extensions to the proposed method can and will be explored. First, it is important to be able to handle cases where $\mathscr{Q}$ is indexed by a multivariate $\xi\in\Xi$. No conceptual change is necessary to accommodate this, but the root-finding representation of the solution needs to be re-expressed as an optimization. Second, statistical inference problems often involve nuisance parameters and eliminating these efficiently requires care. The IM framework facilitates this marginalization, and our conjecture is that the proposed variational approximation strategy can be directly applied to approximate the corresponding “marginal IM contour.” ### 5.0.1 Acknowledgements RM’s research is supported by the U.S. National Science Foundation, SES–2051225. ### 5.0.2 The authors have no competing interests to declare that are relevant to the content of this article. ## References * Balch et al., (2019) Balch, M. S., Martin, R., and Ferson, S. (2019). Satellite conjunction analysis and the false confidence theorem. Proc. Royal Soc. A, 475(2227):2018.0565. * Basu, (1964) Basu, D. (1964). Recovery of ancillary information. Sankhyā Ser. A, 26:3–16. * Blei et al., (2017) Blei, D. M., Kucukelbir, A., and McAuliffe, J. D. (2017). Variational inference: a review for statisticians. J. Amer. Statist. Assoc., 112(518):859–877. * Couso et al., (2001) Couso, I., Montes, S., and Gil, P. (2001). The necessity of the strong $\alpha$-cuts of a fuzzy set. Internat. J. Uncertain. Fuzziness Knowledge-Based Systems, 9(2):249–262. * Denœux, (2006) Denœux, T. (2006). Constructing belief functions from sample data using multinomial confidence regions. Internat. J. of Approx. Reason., 42(3):228–252. * Denœux, (2014) Denœux, T. (2014). Likelihood-based belief function: justification and some extensions to low-quality data. Internat. J. Approx. Reason., 55(7):1535–1547. * Destercke and Dubois, (2014) Destercke, S. and Dubois, D. (2014). Special cases. In Introduction to Imprecise Probabilities, Wiley Ser. Probab. Stat., pages 79–92. Wiley, Chichester. * Dubois et al., (2004) Dubois, D., Foulloy, L., Mauris, G., and Prade, H. (2004). Probability-possibility transformations, triangular fuzzy sets, and probabilistic inequalities. Reliab. Comput., 10(4):273–297. 
* Efron, (1998) Efron, B. (1998). R. A. Fisher in the 21st century. Statist. Sci., 13(2):95–122. * Ghosh et al., (2006) Ghosh, J. K., Delampady, M., and Samanta, T. (2006). An Introduction to Bayesian Analysis. Springer, New York. * Hose, (2022) Hose, D. (2022). Possibilistic Reasoning with Imprecise Probabilities: Statistical Inference and Dynamic Filtering. PhD thesis, University of Stuttgart. * Jiang et al., (2023) Jiang, Y., Liu, C., and Zhang, H. (2023). Finite sample valid inference via calibrated bootstrap. Under review. * Kushner and Yin, (2003) Kushner, H. J. and Yin, G. G. (2003). Stochastic Approximation and Recursive Algorithms and Applications. Springer-Verlag, New York, second edition. * Martin, (2019) Martin, R. (2019). False confidence, non-additive beliefs, and valid statistical inference. Internat. J. Approx. Reason., 113:39–73. * Martin, (2021) Martin, R. (2021). An imprecise-probabilistic characterization of frequentist statistical inference. arXiv:2112.10904. * (16) Martin, R. (2022a). Valid and efficient imprecise-probabilistic inference with partial priors, I. First results. arXiv:2203.06703. * (17) Martin, R. (2022b). Valid and efficient imprecise-probabilistic inference with partial priors, II. General framework. arXiv:2211.14567. * Martin, (2023) Martin, R. (2023). Fiducial inference viewed through a possibility-theoretic inferential model lens. In Miranda, E., Montes, I., Quaeghebeur, E., and Vantaggi, B., editors, Proceedings of the Thirteenth International Symposium on Imprecise Probability: Theories and Applications, volume 215 of Proceedings of Machine Learning Research, pages 299–310. PMLR. * Martin and Ghosh, (2008) Martin, R. and Ghosh, J. K. (2008). Stochastic approximation and Newton’s estimate of a mixing distribution. Statist. Sci., 23(3):365–382. * Martin and Liu, (2013) Martin, R. and Liu, C. (2013). Inferential models: a framework for prior-free posterior probabilistic inference. J. Amer. Statist. Assoc., 108(501):301–313. * Martin and Liu, (2015) Martin, R. and Liu, C. (2015). Inferential Models, volume 147 of Monographs on Statistics and Applied Probability. CRC Press, Boca Raton, FL. * Martin and Williams, (2024) Martin, R. and Williams, J. P. (2024). Large-sample theory for inferential models: a possibilistic Bernstein–von Mises theorem. arXiv:2404.15843. * Robbins and Monro, (1951) Robbins, H. and Monro, S. (1951). A stochastic approximation method. Ann. Math. Statistics, 22:400–407. * Shafer, (1982) Shafer, G. (1982). Belief functions and parametric models. J. Roy. Statist. Soc. Ser. B, 44(3):322–352. With discussion. * Smith, (1995) Smith, A. (1995). A conversation with Dennis Lindley. Statist. Sci., 10(3):305–319. * Syring and Martin, (2019) Syring, N. and Martin, R. (2019). Calibrating general posterior credible regions. Biometrika, 106(2):479–486. * Syring and Martin, (2021) Syring, N. and Martin, R. (2021). Stochastic optimization for numerical evaluation of imprecise probabilities. In Cano, A., De Bock, J., Miranda, E., and Moral, S., editors, Proceedings of the Twelveth International Symposium on Imprecise Probability: Theories and Applications, volume 147 of Proceedings of Machine Learning Research, pages 289–298. PMLR. * Wasserman, (1990) Wasserman, L. A. (1990). Belief functions and statistical inference. Canad. J. Statist., 18(3):183–196.
# Multi-Scale Spatio-Temporal Graph Convolutional Network for Facial Expression Spotting Yicheng Deng, Hideaki Hayashi, Hajime Nagahara Osaka University, Japan This work was partially supported by Innovation Platform for Society 5.0 from Japan Ministry of Education, Culture, Sports, Science and Technology, and JSPS KAKENHI Grant Number JP21H03511. ###### Abstract Facial expression spotting is a significant but challenging task in facial expression analysis. The accuracy of expression spotting is affected not only by irrelevant facial movements but also by the difficulty of perceiving subtle motions in micro-expressions. In this paper, we propose a Multi-Scale Spatio- Temporal Graph Convolutional Network (SpoT-GCN) for facial expression spotting. To extract more robust motion features, we track both short- and long-term motion of facial muscles in compact sliding windows whose window length adapts to the temporal receptive field of the network. This strategy, termed the receptive field adaptive sliding window strategy, effectively magnifies the motion features while alleviating the problem of severe head movement. The subtle motion features are then converted to a facial graph representation, whose spatio-temporal graph patterns are learned by a graph convolutional network. This network learns both local and global features from multiple scales of facial graph structures using our proposed facial local graph pooling (FLGP). Furthermore, we introduce supervised contrastive learning to enhance the discriminative capability of our model for difficult- to-classify frames. The experimental results on the SAMM-LV and CAS(ME)2 datasets demonstrate that our method achieves state-of-the-art performance, particularly in micro-expression spotting. Ablation studies further verify the effectiveness of our proposed modules. ## I INTRODUCTION Facial expressions are a typical form of nonverbal communication used to convey human emotions. When people undergo emotional changes, voluntary or involuntary facial muscle movements create various expressions. These expressions act as simple and direct social signals, allowing others to understand their emotions and facilitating human communication. In general, facial expressions can be divided into two categories: macro- expressions and micro-expressions. Macro-expressions usually last for 0.5 to 4.0 seconds [7], and they are easy to perceive due to their occurrence on a large facial area and high intensity [5]. Macro-expression analysis is important in many practical applications such as sociable robots [11], mental health [18], and virtual reality [34]. In contrast, micro-expressions generally last for less than 0.5 seconds [40], and they are hard to perceive due to their locality and low intensity [1]. Due to their involuntary nature, they are crucial in situations where people may want to suppress their emotions or attempt to deceive others, such as lie detection [8], medical care [9], and national security [31]. Therefore, both macro- and micro-expression analysis play important roles in understanding human emotions and behaviors. Facial expression spotting is the preliminary step in facial expression analysis, aiming to locate the onset and offset frames of macro- and micro- expression intervals in long videos, as shown in Fig. 1. The onset frame represents when an expression starts, and the offset frame represents when it ends. 
However, the difficulty in perceiving micro-expressions and the presence of irrelevant motions in long videos, such as head movements and eye blinking, make this task highly challenging. In recent years, many researchers have devoted a lot of effort to developing effective algorithms for facial expression spotting. Early works employed traditional methods to extract hand-crafted features, analyze feature differences, and spot expression clips using threshold strategies [12, 14, 46, 49]. Lately, with the development of deep learning, several learning-based methods have been proposed for expression spotting [35, 36, 43, 45, 44], but the spotting accuracy, especially in micro-expression spotting, still needs improvement. Figure 1: Illustration of macro- and micro-expression spotting. The main problems arise from their choice of feature extraction strategy and the oversimplification of the network. First, some methods utilize a large sliding window strategy to spot potential expression proposals[14, 13], where the accuracy is severely affected by head movement. Alternatively, some other methods calculate optical flows between adjacent frames as motion features[45, 20], but they cannot effectively reveal subtle motions that exist in micro- expressions. Therefore, there is an urgent need to find a strategy that can magnify motion information while alleviating the influence of head movement. Second, some methods compute optical flows in specific regions of interest (ROIs) related to the action unit to alleviate the influence of irrelevant motions and then utilize a network to learn the extracted motion features[20, 44]. However, their proposed network is overly simplistic, lacking a comprehensive consideration of spatio-temporal relationships and multi-scale feature learning, which limits the representational capacity of their models. To address these issues, we propose a multi-scale spatio-temporal graph convolutional network, termed SpoT-GCN, for macro- and micro-expression spotting. Specifically, we design a receptive field adaptive sliding window strategy, where the temporal window size corresponds to the receptive field of the network, to compute and combine short- and long-term optical flows as input for frame-level apex or boundary (onset or offset) probability estimation, amplifying the motion features while avoiding significant head movement problems. Then, we adopt a graph convolutional network to capture the spatial relationships and temporal variations among different facial parts across frames, where a facial local graph pooling (FLGP) strategy is proposed to extract multi-scale facial graph-structured features, enhancing the model’s understanding ability from local to global. We also notice that distinguishing between certain macro-expressions and micro-expressions near the boundary is difficult. Additionally, some normal frames might be misclassified as micro-expression frames due to the noises that exist in the extracted optical flows. To address this issue, we introduce supervised contrastive learning into our model to learn finer discriminative feature representation for better distinguishing different types of frames in long videos. Our main contributions are as follows: * • We propose a novel graph convolutional network (GCN) to comprehensively capture the spatial relationships and temporal variations among different facial parts across frames, in which a graph pooling strategy suitable for facial structure is proposed for multi-scale feature learning. 
* • We design a receptive field adaptive sliding window strategy to compute short- and long-term optical flows for frame-level apex or boundary probability estimation, which not only magnifies the motion information but also avoids large head movement problems. * • We introduce supervised contrastive loss to our model for discriminative feature representation learning. To the best of our knowledge, our work is the first to study contrastive learning for facial expression spotting, achieving the recognition of boundaries between different types of expressions. Figure 2: Extracted ROIs and constructed facial graph structure are denoted in yellow, while the nose tip region for face alignment is denoted in green. Figure 3: Overview of the proposed framework. (a) The data pre-processing module partitions the input video into overlapping temporal windows using the receptive field adaptive sliding window strategy and extracts facial graph- structured optical flows; (b) the feature learning module employs the SpoT-GCN which takes optical flow features as input for frame-level apex or boundary probability estimation; (c) the post-processing module aggregates the probability maps from all frames and generates expression proposals. ## II RELATED WORKS ### II-A Facial expression spotting Current facial expression spotting methods can be divided into two categories: traditional methods and deep learning methods. Early works employed appearance-based feature extraction techniques, such as local binary patterns [30] and histogram of oriented gradients [6], and combined them with machine learning algorithms for feature difference analysis, then utilizing threshold strategies for expression spotting. Subsequently, using the optical flow algorithm to extract motion features became mainstream. He et al.[14] employed the method of main directional maximal difference analysis [37] to utilize the maximal difference in magnitude along the main direction of optical flow features to detect facial movements. He [46] proposed computing optical flows for more accurate face alignment and performed the spotting process using a sliding window strategy. With the development of deep learning, several researchers proposed various neural networks for feature learning. Zhang et al. [48] utilized convolutional neural networks (CNNs) to extract features from video clips and spotted the apex frames from long videos using a feature matrix processing method. Liong et al. [24] introduced an optical flow-based three-stream CNN and a pseudo- labeling technique to facilitate the learning process. Yu et al. [45] introduced a two-stream CNN network to extract features from raw videos and optical flow maps. Then they added location suppression modules to the network to reduce redundant proposals. However, the spotting accuracy may be affected by irrelevant motions when processing entire images without any masking. Yang et al. [41] utilized facial action unit information and concatenated various types of neural networks for feature learning. Leng et al. [20] extracted 12 ROIs and adopted the main directional mean optical flow algorithm [26] to compute optical flows between adjacent frames. Then they utilized one- dimensional CNNs to learn temporal variations and predicted the probability that each frame belongs to an apex or boundary frame. Based on [20], Yin et al. [44] learned spatial relations by adding a GCN to embed action units (AUs) label information into the extracted optical flows. 
While current optical flow-based methods have achieved a significant improvement in MaE spotting, the performance in ME spotting remains considerably lower. This is because their extracted optical flow features cannot reveal subtle motions that exist in micro-expressions. Our receptive field adaptive sliding window strategy can effectively magnify these subtle motions, thereby improving ME spotting performance. ### II-B Graph Convolutional Network Kipf et al. [17] proposed GCNs in 2017, presenting an effective convolutional operation that captures relationships between nodes, thus enabling deep learning on graph-structured data. In recent years, researchers have begun to apply GCNs to human emotion analysis, considering the human face as a graph. GCNs are beneficial for facial expression analysis, as the issue of irrelevant facial muscle movements can be alleviated by extracting only several ROIs instead of processing entire human face images. Liu et al. [27] were the first to utilize GCNs for action unit detection. They treated each AU-related region as a graph node and employed a GCN to learn the AU relations. Jin et al. [15] presented a double dynamic relationships GCN for facial expression recognition (FER). Liu et al. [25] proposed to recognize facial expressions by combing the advantages of CNN for extracting features and GCN for modeling complex graph patterns to capture the underlying relationship between expressions, thus improving the accuracy and efficiency of FER. Xie et al. [38] explored the application of GCNs in micro-expression recognition and introduced a GCN for AU relation learning. Kumar et al.[19] presented a two-stream relational edge- node graph attention network to improve the accruracy of micro-expression recognition. Yin et al. [44] were the first to apply GCNs to macro- and micro- spotting tasks. However, they simply used GCNs to learn spatial information, then they flattened the facial graph-structured data and fed it into CNNs to learn temporal information, which fails to comprehensively capture the spatio- temporal dependencies among different facial parts. To solve this problem, we propose SpoT-GCN, which considers the spatial relationships of multi-scale facial graph structures as well as the temporal variations of various facial parts at different scales, significantly enhancing the model’s representational capacity. ### II-C Contrastive learning In recent years, contrastive learning has demonstrated significant effectiveness in the field of unsupervised representation learning. The goal of contrastive learning is to learn meaningful representations by maximizing the similarity between positive pairs and minimizing it between negative pairs. Chen et al. [4] introduced unsupervised contrastive learning and generated positive pairs using data augmentation. This approach yields meaningful visual representations that can subsequently be applied in downstream tasks like image recognition and image segmentation. Then, Khosla et al. [16] proposed the supervised contrastive (SupCon) loss, showcasing its potential to enhance supervised tasks by introducing class information into the contrastive loss. Supervised contrastive learning efficiently enlarges the domain discrepancy, thereby improving discriminative feature representation extraction. Our method aims to establish the contrast between macro- and micro-expression frames, as well as between micro-expression frames and normal frames. 
This approach allows us to learn more discriminative feature representations and consequently reduce the misclassification rate.

## III METHODOLOGY

Given a long video as input, our goal is to spot all potential macro- and micro-expression intervals within the video, locating the onset and offset frames as well as determining the expression type for each expression proposal. As illustrated in Fig. 3, our framework comprises three modules: a data pre-processing module, a feature learning module, and a post-processing module.

### III-A Data pre-processing

Figure 4: Network structure of our SpoT-GCN and the scale change between different facial graph structures through FLGP.

Suppose we have a raw video $V=(v_{i})_{i=1}^{N}$ as input, where $N$ represents the total number of frames in the video. First, given a window length $w$, we pad the beginning of the video with $\lfloor\frac{w}{2}\rfloor$ repetitions of the first frame and pad the end of the video with $\lfloor\frac{w}{2}\rfloor$ repetitions of the last frame. Then, different from previous methods that treat the receptive field and temporal sliding window size separately, we employ a receptive field adaptive sliding window strategy with a window length $w$ and a stride of $1$ to partition the video into multiple overlapping clips $C=(c_{i})_{i=1}^{N}$, where $w$ adapts to the temporal receptive field of the network and $c_{i}$ corresponds to the clip whose full temporal information will be utilized for predicting the $i$-th frame $v_{i}$.

For each clip $c_{i}$, we employ MobileFaceNet [3] to detect 68 facial key points in the first frame $c_{i}^{1}$. These key points are then used for cropping the human face, extracting the nose region for face alignment, and extracting ROIs. Specifically, we detect the facial bounding box in the first frame. Then, for each subsequent frame $c_{i}^{s},s=2,3,\ldots,w$, we initialize the facial bounding box using that of the first frame. Afterward, since the nose tip area remains stationary during expressions, we compute the optical flow of the nose tip region, shown in Fig. 2, as the global head movement and use it to adjust the facial bounding box for $c_{i}^{s}$. Specifically, we utilize the Farneback algorithm [10] to compute the average optical flow of the nose tip region $o_{i}^{s,\mathrm{nose}}\in\mathbb{R}^{2}$ between $c_{i}^{1}$ and $c_{i}^{s}$:

$o^{s,\mathrm{nose}}_{i}=\frac{1}{m^{\mathrm{nose}}_{i}\times n^{\mathrm{nose}}_{i}}\sum_{(x,y)\in M^{\mathrm{nose}}_{i}}\mathrm{OF}(c_{i}^{1},c_{i}^{s})(x,y),$ (1)

where $m^{\mathrm{nose}}_{i}$ and $n^{\mathrm{nose}}_{i}$ represent the height and width of the nose tip region $M^{\mathrm{nose}}_{i}$, and $\mathrm{OF}(\cdot,\cdot)(x,y)$ denotes the optical flow computed between the two frames, evaluated at pixel $(x,y)$. Since processing entire images could affect prediction accuracy due to irrelevant motion (e.g., eye blinks) and background information, we selectively extract $R=10$ ROIs that are most representative of facial expressions. The extracted ROIs are shown in Fig. 2. We compute the optical flow for the chosen $R$ ROIs between $c_{i}^{1}$ and $c_{i}^{s}$ to obtain optical flow features $o_{i}^{s}=[o_{i}^{s,r}]_{r=1}^{R}\in\mathbb{R}^{R\times 2}$. For each ROI, the optical flow computation is similar to (1). Then we construct $o_{i}\in\mathbb{R}^{w\times R\times 2}$ for the $i$-th clip $c_{i}$ by concatenating the optical flows $[o_{i}^{1},o_{i}^{2},\ldots,o_{i}^{w}]$, where $o_{i}^{1}=\bm{0}$.
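As a rough illustration of the steps above, the following sketch pads the video, forms the overlapping clips, and computes the mean nose-tip flow of (1). It is a minimal sketch, not our actual implementation: it assumes OpenCV and NumPy, grayscale frames, and hypothetical pixel coordinates for the nose-tip region, and the function names are ours.

```python
import numpy as np
import cv2

def make_clips(frames, w=17):
    """Pad with repeated boundary frames and return N overlapping clips (stride 1)."""
    pad = w // 2
    padded = [frames[0]] * pad + list(frames) + [frames[-1]] * pad
    return [padded[i:i + w] for i in range(len(frames))]

def mean_nose_flow(first, frame, nose_box):
    """Average Farneback optical flow over the nose-tip ROI, cf. Eq. (1)."""
    x0, y0, x1, y1 = nose_box  # hypothetical pixel bounds of the nose-tip region
    flow = cv2.calcOpticalFlowFarneback(first, frame, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)  # (H, W, 2) flow field
    roi = flow[y0:y1, x0:x1]
    return roi.reshape(-1, 2).mean(axis=0)  # global head-movement estimate (dx, dy)
```

The per-ROI features $o_{i}^{s,r}$ would be obtained analogously by averaging the same flow field over each of the $R$ ROIs.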
As a result of the data pre-processing, we obtain the optical flow features $O=(o_{i})_{i=1}^{N}\in\mathbb{R}^{N\times w\times R\times 2}$ for the entire input video.

### III-B Multi-scale spatio-temporal GCN for feature learning

After obtaining the optical flow features, we employ the proposed SpoT-GCN for feature learning. The network structure is shown in Fig. 4. We first use spatio-temporal GCNs (ST-GCNs) to capture spatio-temporal relationships among different ROIs across temporal frames. Just as standard three-dimensional (3D) CNNs can be seen as learning kernels to discover meaningful and distinguishable latent patterns in videos, ST-GCNs acquire knowledge about embedded nodes based on spatio-temporal neighbors and graph relations. In our method, an ST-GCN takes the graph node optical flow features $O$ and the adjacency matrix $A$ as input. To this end, we build the initial adjacency matrix $A\in\mathbb{R}^{R\times R}$ empirically based on the designed facial graph structure shown in Fig. 2, where the weights of all edges are set to 1, as we only use the edges to represent the facial spatial structure. In practice, we stack $G$ ST-GCN layers. For the $g$-th GCN layer, its operation can be expressed as:

$H_{g}=\sigma(AH_{g-1}W_{g}),$ (2)

where $\sigma$ is the activation function, $H_{g-1}$ is the output of the $(g-1)$-th ST-GCN layer, and $W_{g}$ are the learnable weights of the $g$-th ST-GCN layer.

Multi-scale learning is significant and has shown powerful performance in CNN-based image processing tasks [21, 28] because it enables the model to extract richer and more diverse feature representations at various scales. However, applying pooling operations, which are generally used for downsampling images, to graph-structured data presents challenges because such data is non-Euclidean. To address this issue, inspired by Xu et al. [39], we introduce FLGP, specifically designed for extracting multi-scale facial graph features. In practice, we design three scales of facial structures. The designed scales and the scale change achieved through FLGP are illustrated in Fig. 4. Note that we only use FLGP to downsample the spatial scale while maintaining the temporal dimension, relying on the network layers with learnable weights to fully learn the temporal dynamics. During each FLGP operation, the facial graph is downsampled by aggregating features of several nodes using max pooling after each ST-GCN layer. Finally, we aggregate the global spatial features into a single node. Since we do not need to learn spatial relationships anymore after downsampling the scale of the facial graph to a single node, we employ temporal convolutional networks (TCNs) to capture the remaining high-level temporal variations. In practice, we stack $C$ TCN layers, and the output of the $c$-th layer can be expressed as:

$H_{c}=\sigma(H_{c-1}W_{c}),$ (3)

where $\sigma$ is the activation function, $H_{c-1}$ is the output of the $(c-1)$-th TCN layer, and $W_{c}$ are the learnable weights of the $c$-th TCN layer. Finally, the single remaining node fully aggregates the spatio-temporal relationships in a sliding window, which is then fed into a fully-connected layer to output the probability map $p_{i}=\\{p_{i}^{\mathrm{onset}},p_{i}^{\mathrm{apex}},p_{i}^{\mathrm{offset}},p_{i}^{\mathrm{exp}},p_{i}^{\mathrm{norm}}\\}$ for the frame $v_{i}$. This map contains the probabilities of $v_{i}$ being an onset frame, apex frame, offset frame, expression frame, or normal frame.
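A minimal sketch of one ST-GCN layer as in (2) and of the FLGP node-grouping max-pool is given below. It is PyTorch-style pseudocode under our own naming; the node groupings are illustrative and do not reproduce our exact three-scale configuration.

```python
import torch
import torch.nn as nn

class STGCNLayer(nn.Module):
    """One layer of Eq. (2): H_g = sigma(A H_{g-1} W_g), applied per time step."""
    def __init__(self, in_ch, out_ch, adjacency):
        super().__init__()
        self.register_buffer("A", adjacency)          # (R, R) facial graph adjacency
        self.W = nn.Linear(in_ch, out_ch)
        self.act = nn.ReLU()

    def forward(self, H):                             # H: (batch, time, R, in_ch)
        H = torch.einsum("ij,btjc->btic", self.A, H)  # aggregate neighbouring ROIs
        return self.act(self.W(H))

def flgp(H, groups):
    """Facial local graph pooling: max-pool node features within predefined ROI groups."""
    # groups: list of lists of node indices, e.g. [[0, 1], [2, 3, 4], ...] (illustrative)
    pooled = [H[:, :, g, :].max(dim=2).values for g in groups]
    return torch.stack(pooled, dim=2)                 # (batch, time, len(groups), channels)
```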
In addition, each component in $p_{i}$ includes two probabilities, for micro-expression spotting and macro-expression spotting, respectively. Specifically, $p_{i}^{\mathrm{onset}}=\\{p_{i}^{\mathrm{mi},\mathrm{onset}},p_{i}^{\mathrm{ma},\mathrm{onset}}\\}$, $p_{i}^{\mathrm{apex}}=\\{p_{i}^{\mathrm{mi},\mathrm{apex}},p_{i}^{\mathrm{ma},\mathrm{apex}}\\}$, $p_{i}^{\mathrm{offset}}=\\{p_{i}^{\mathrm{mi},\mathrm{offset}},p_{i}^{\mathrm{ma},\mathrm{offset}}\\}$, $p_{i}^{\mathrm{exp}}=\\{p_{i}^{\mathrm{mi},\mathrm{exp}},p_{i}^{\mathrm{ma},\mathrm{exp}}\\}$, $p_{i}^{\mathrm{norm}}=\\{p_{i}^{\mathrm{mi},\mathrm{norm}},p_{i}^{\mathrm{ma},\mathrm{norm}}\\}$. We split the optimization tasks into two binary classification tasks and a three-class classification task for different types of frames, following the optimization method outlined in [20]. We employ the focal loss [22] to optimize our model, which can be expressed as:

$\mathcal{L}_{\mathrm{cls}}=-\sum_{i}y_{i}\alpha(1-p_{i})^{\gamma}\log(p_{i}),$ (4)

where $y_{i}$ is the ground-truth label, and $\alpha$ and $\gamma$ are hyperparameters.

### III-C Supervised contrastive learning

Until now, our focus has been on minimizing the divergence between the predicted class probabilities and the ground-truth class labels, which might neglect the distributional differences among different classes. Some macro-expressions and micro-expressions near the boundary are hard to distinguish in terms of duration and intensity. For example, the labeling of macro- and micro-expressions follows a rule where macro-expressions have a duration greater than 0.5 seconds, and micro-expressions have a duration smaller than 0.5 seconds. This rule creates difficulty in distinguishing frames of expressions whose duration is close to 0.5 seconds. Similar issues also exist when distinguishing between micro-expression frames and normal frames. This is due to the fact that some ground-truth micro-expressions exhibit very low intensity, and the noise present in the optical flow features can lead to the misclassification of certain normal frames as micro-expression frames. To address this issue, we introduce supervised contrastive learning [16] to enhance the discriminative feature learning for classifying different types of frames. Specifically, we utilize the output of the final TCN layer as the feature representation for each frame. Then we introduce a supervised contrastive loss to minimize the distance between feature representations of the same class while simultaneously pushing apart feature representations of different classes. We use the frame type label $\widetilde{y}_{i}$ for the $i$-th sample to provide supervision for the supervised contrastive loss. This means that each frame is labeled as a macro-expression frame, micro-expression frame, or normal frame. Let $I$ denote a set of samples in a batch; the loss function can be expressed as:

$\displaystyle\mathcal{L}_{\mathrm{con}}=\sum_{i\in I}\frac{-1}{|Q(i)|}\sum_{q\in Q(i)}\log\frac{\exp(z_{i}\cdot z_{q}/\tau)}{\sum_{e\in E(i)}\exp(z_{i}\cdot z_{e}/\tau)},$ (5)

where $E(i)\coloneqq I\setminus\\{i\\}$, $Q(i)\coloneqq\\{q\in E(i)\mid\widetilde{y}_{q}=\widetilde{y}_{i}\\}$ represents the set of samples in the batch that have the same label as the $i$-th sample, $\tau\in\mathbb{R}^{+}$ is a scalar temperature parameter, and $z_{i}$ is the intermediate representation of sample $i$ extracted from the network.
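A minimal sketch of (5) is shown below. It assumes a PyTorch batch of frame embeddings and integer frame-type labels, and it averages rather than sums over anchors, which only rescales the loss; the function name is ours.

```python
import torch
import torch.nn.functional as F

def supcon_loss(z, labels, tau=0.5):
    """Supervised contrastive loss of Eq. (5).
    z: (B, D) frame embeddings, labels: (B,) frame-type labels (MaE / ME / normal)."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / tau                                   # z_i . z_q / tau for all pairs
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))         # E(i) excludes i itself
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask   # Q(i)
    num_pos = pos.sum(dim=1)
    has_pos = num_pos > 0                                   # anchors with at least one positive
    per_anchor = -(log_prob.masked_fill(~pos, 0.0).sum(dim=1))[has_pos] / num_pos[has_pos]
    return per_anchor.mean()                                # mean (rather than sum) over anchors
```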
The overall loss function for the optimization of our model can be formulated as follows:

$\mathcal{L}=\mathcal{L}_{\mathrm{cls}}+\lambda\mathcal{L}_{\mathrm{con}},$ (6)

where $\lambda$ is a weight parameter to balance between classification and contrastive learning.

### III-D Post-processing

After obtaining the series of output probabilities $P=(p_{i})_{i=1}^{N}$, we proceed with macro-expression spotting and micro-expression spotting separately, following [20]. Below, we describe the micro-expression spotting process; the macro-expression spotting process is similar. We first obtain all possible micro-expression apex frames $U^{\mathrm{mi},\mathrm{apex}}$ with the rule $p_{l}^{\mathrm{mi},\mathrm{apex}}>\theta_{\mathrm{apex}}$, where $\theta_{\mathrm{apex}}$ is a threshold. For each possible apex frame $u^{\mathrm{mi},\mathrm{apex}}_{l}\in U^{\mathrm{mi},\mathrm{apex}}$, we select the onset frame with the highest onset probability from the left side of the apex frame within the range of $[l-\frac{k^{\mathrm{mi}}}{2},l-\frac{j^{\mathrm{mi}}}{2}]$, and we select the offset frame with the highest offset probability from the right side of the apex frame within the range of $[l+\frac{j^{\mathrm{mi}}}{2},l+\frac{k^{\mathrm{mi}}}{2}]$, where $k^{\mathrm{mi}}$ and $j^{\mathrm{mi}}$ represent the average duration and minimum duration of a micro-expression, respectively. As a result, we obtain one micro-expression proposal $\phi_{l}$, which contains the onset frame, offset frame, and expression type. Subsequently, we assign a score $s_{l}=p_{b}^{\mathrm{mi},\mathrm{onset}}\times p_{l}^{\mathrm{mi},\mathrm{apex}}\times p_{d}^{\mathrm{mi},\mathrm{offset}}$ to the micro-expression proposal $\phi_{l}$, where $b$ and $d$ represent the frame indices of the onset frame and offset frame selected by the rule mentioned above, respectively. After obtaining all possible expression proposals, we utilize Non-Maximum Suppression to filter out redundant proposals. Specifically, if the overlap rate of two proposals is higher than $\theta_{\mathrm{overlap}}$, we compare the assigned scores and discard the proposal with the lower score, thereby obtaining the final spotting results.

TABLE I: Results of the ablation study on the effectiveness of the proposed modules.

| Method | SAMM-LV MaE | SAMM-LV ME | SAMM-LV Overall | CAS(ME)2 MaE | CAS(ME)2 ME | CAS(ME)2 Overall |
|---|---|---|---|---|---|---|
| Baseline | 0.3762 | 0.2222 | 0.3392 | 0.3891 | 0.1607 | 0.3561 |
| +SLO | 0.4279 | 0.3455 | 0.3990 | 0.3973 | 0.2270 | 0.3729 |
| +SLO+STGCN | 0.4258 | 0.3520 | 0.4017 | 0.4150 | 0.1846 | 0.3808 |
| +SLO+STGCN+FLGP | 0.4364 | 0.3771 | 0.4173 | 0.4095 | 0.2556 | 0.3865 |
| +SLO+STGCN+FLGP+SupCon | 0.4631 | 0.4035 | 0.4454 | 0.4340 | 0.2637 | 0.4154 |

TABLE II: Results of the ablation study on different temporal window sizes (receptive fields).

| Window Size / Receptive Field | MaE | ME | F1-Score |
|---|---|---|---|
| 5 | 0.3627 | 0.2131 | 0.3279 |
| 9 | 0.3926 | 0.3529 | 0.3818 |
| 11 | 0.4051 | 0.3467 | 0.3892 |
| 15 | 0.4337 | 0.3602 | 0.4128 |
| 17 | 0.4631 | 0.4035 | 0.4454 |
| 19 | 0.4419 | 0.3873 | 0.4264 |
| 21 | 0.4202 | 0.3712 | 0.4054 |
| 25 | 0.4060 | 0.3721 | 0.3955 |
| 29 | 0.3973 | 0.3610 | 0.3857 |

TABLE III: Results of the ablation study on the choice of hyperparameter.
| $\lambda$ | SAMM-LV | CAS(ME)2 | Overall |
|---|---|---|---|
| 0.0 | 0.4173 | 0.3865 | 0.4039 |
| 0.005 | 0.4303 | 0.3973 | 0.4160 |
| 0.01 | 0.4337 | 0.4057 | 0.4218 |
| 0.05 | 0.4454 | 0.4154 | 0.4328 |
| 0.08 | 0.4289 | 0.4167 | 0.4238 |
| 0.10 | 0.4194 | 0.4113 | 0.4160 |

### III-E Training details

We use the AdamW optimizer [29] to optimize our model, setting the learning rate to 0.01, $\beta_{1}$ to 0.5, and $\beta_{2}$ to 0.9. The window length $w$ used for partitioning the videos is set to 17, which is equal to the temporal receptive field of our network. The temperature parameter $\tau$ in (5) is set to 0.5. We train our model for 100 epochs with a batch size of 512.

## IV EXPERIMENTS

### IV-A Dataset

We evaluate our method following the protocol of MEGC2021 on two benchmark datasets: SAMM-LV [42] and CAS(ME)2 [33]. The SAMM-LV dataset comprises 147 raw long videos containing 343 macro-expression clips and 159 micro-expression clips. The CAS(ME)2 dataset includes 87 raw long videos with 300 macro-expression clips and 57 micro-expression clips. The frame rate of the SAMM-LV dataset is 200fps, while the frame rate of the CAS(ME)2 dataset is 30fps. To align the frame rates of both datasets, we subsample every 7th frame from the SAMM-LV dataset to achieve a frame rate close to 30fps. A leave-one-subject-out cross-validation strategy is utilized in our experiments.

### IV-B Evaluation metric

We use the evaluation metrics outlined in the MEGC2021 protocol. A true positive (TP) expression proposal in a video is defined based on the intersection between the proposal and the ground-truth expression clip. Specifically, given a ground-truth expression clip and its expression type, we compare it with all expression proposals with the same expression type. An expression proposal $W_{\mathrm{Proposal}}$ is considered TP when it satisfies the following condition:

$\displaystyle\frac{W_{\mathrm{Proposal}}\cap W_{\mathrm{GroundTruth}}}{W_{\mathrm{Proposal}}\cup W_{\mathrm{GroundTruth}}}\geq\theta_{\mathrm{IoU}},$ (7)

where $\theta_{\mathrm{IoU}}$ is set to 0.5 and $W_{\mathrm{GroundTruth}}$ represents the ground-truth expression interval (from the onset frame to the offset frame). Otherwise, the proposed expression proposal is considered a false positive (FP). All ground-truth expression clips that do not match any proposal are considered false negatives (FN). According to the protocol, each ground-truth expression clip corresponds to at most one TP. We calculate the precision rate, recall rate, and F1-score to evaluate the performance of our model.

### IV-C Ablation studies

Effectiveness of proposed modules. We first validate our proposed modules, and Table I shows the experimental results. We report the F1-score for macro-expression (MaE) spotting, micro-expression (ME) spotting, and overall performance. Baseline in Table I refers to the method proposed in [20]. The acronyms SLO, STGCN, FLGP, and SupCon correspond to specific modifications in our method. Specifically, SLO represents computing short- and long-term optical flows with our receptive field adaptive sliding window strategy instead of computing optical flows between adjacent frames, STGCN involves employing spatio-temporal GCNs for feature learning instead of flattening the graph-structured data and learning the features using CNNs, FLGP introduces facial local graph pooling into our ST-GCN network for multi-scale feature learning, and SupCon introduces the supervised contrastive loss for discriminative feature learning.
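For reference, the TP criterion in (7) that underlies all F1-scores reported below can be sketched as follows. This is a simplified illustration that omits the one-proposal-per-ground-truth bookkeeping required by the protocol; the function names are ours.

```python
def interval_iou(p, g):
    """IoU between a proposal p and a ground-truth clip g, both (onset, offset) frame indices."""
    inter = max(0, min(p[1], g[1]) - max(p[0], g[0]) + 1)
    union = (p[1] - p[0] + 1) + (g[1] - g[0] + 1) - inter
    return inter / union

def is_true_positive(proposal, ground_truths, theta_iou=0.5):
    """A proposal counts as TP if its IoU with some same-type ground-truth clip is >= 0.5, cf. Eq. (7)."""
    return any(interval_iou(proposal, g) >= theta_iou for g in ground_truths)
```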
The results demonstrate the effectiveness of our proposed modules. Notably, the receptive field adaptive sliding window strategy enhances the extraction of motion features, particularly in magnifying subtle motions that exist in micro-expressions. This enhancement leads to a significant overall performance improvement of 17.6%/4.7% on the SAMM-LV and CAS(ME)2 datasets. The utilization of ST-GCN and its combination with facial local graph pooling for multi-scale feature learning results in a further improvement of 4.6%/3.6% on the SAMM-LV and CAS(ME)2 datasets compared to using one-dimensional CNNs. This demonstrates the superior representational capabilities of our proposed model. Finally, the introduction of the supervised contrastive loss enables our model to better recognize the boundaries between different types of expressions, resulting in an improvement of 6.7%/7.5% on the SAMM-LV and CAS(ME)2 datasets.

Figure 5: Visualization analysis of supervised contrastive learning. (a) and (b) show the PCA distribution of certain macro- and micro-expression frames: (a) without supervised contrastive learning and (b) with supervised contrastive learning. (c) and (d) depict the PCA distribution of certain micro-expression frames and normal frames: (c) without supervised contrastive learning and (d) with supervised contrastive learning.

Figure 6: Some visualizations of optical flows of certain micro-expression frames computed by three strategies. The data comes from the vertical component of optical flows computed at the left mouth corner when subject 11 from the SAMM-LV dataset is performing a micro-expression. (a) optical flows computed between adjacent frames; (b) optical flows computed with our receptive field adaptive sliding window strategy; (c) optical flows computed with a large sliding window strategy.

In addition to the quantitative results, Fig. 5 and Fig. 6 show some qualitative analyses of our proposed receptive field adaptive sliding window strategy and supervised contrastive learning. Fig. 5 illustrates the visualization analysis of introducing supervised contrastive learning into our model. During inference, we randomly sampled some frames and employed principal component analysis (PCA) [2] to examine the distribution of various expression classes. Subsequently, we labeled each frame with its ground-truth expression label. In Fig. 5 (a) and (b), we compare the PCA distribution of specific macro-expression frames and micro-expression frames with and without the use of supervised contrastive learning. In Fig. 5 (c) and (d), we compare the PCA distribution of specific normal frames and micro-expression frames with and without the use of supervised contrastive learning. The results indicate that when we do not introduce supervised contrastive learning into our model, the distributions of certain frames from different classes might become mixed, resulting in misclassification. However, after introducing supervised contrastive learning to our model, the domain discrepancy increases, leading to improved accuracy in expression spotting. Fig. 6 presents visualizations of optical flows calculated using three different strategies. When computing optical flows between adjacent frames, the motions that exist in micro-expressions are so subtle that the noise in the optical flows might overshadow these delicate movements, impacting the quality of the data and making it difficult to reveal these subtle expressions.
On the other hand, using a large sliding window strategy to compute optical flows can introduce significant influence from head movements, making it difficult to analyze expressions accurately. The utilization of our receptive field adaptive sliding window strategy helps alleviate these problems: it strikes a balance between magnifying subtle expression motions and minimizing the influence of head movements.

TABLE IV: Comparison with the state-of-the-art methods on CAS(ME)2 and SAMM-LV in terms of F1-score.

| Category | Method | SAMM-LV MaE | SAMM-LV ME | SAMM-LV Overall | CAS(ME)2 MaE | CAS(ME)2 ME | CAS(ME)2 Overall |
|---|---|---|---|---|---|---|---|
| Traditional methods | MDMD [14] | 0.0629 | 0.0364 | 0.0445 | 0.1196 | 0.0082 | 0.0376 |
| | Optical Strain [12] | - | - | - | 0.1436 | 0.0098 | 0.0448 |
| | Zhang et al. [47] | 0.0725 | 0.1331 | 0.0999 | 0.2131 | 0.0547 | 0.1403 |
| | He [46] | 0.4149 | 0.2162 | 0.3638 | 0.3782 | 0.1965 | 0.3436 |
| | Zhao et al. [49] | - | - | 0.3863 | - | - | 0.4030 |
| Deep-learning methods | Verburg [35] | - | 0.0821 | - | - | - | - |
| | LBCNN [32] | - | - | 0.0813 | - | - | 0.0595 |
| | MESNet [36] | - | 0.0880 | - | - | 0.0360 | - |
| | SOFTNet [24] | 0.2169 | 0.1520 | 0.1881 | 0.2410 | 0.1173 | 0.2022 |
| | 3D-CNN [43] | 0.1595 | 0.0466 | 0.1084 | 0.2145 | 0.0714 | 0.1675 |
| | Concat-CNN [41] | 0.3553 | 0.1155 | 0.2736 | 0.2505 | 0.0153 | 0.2019 |
| | LSSNet [45] | 0.2810 | 0.1310 | 0.2380 | 0.3770 | 0.0420 | 0.3250 |
| | MTSN [23] | 0.3459 | 0.0878 | 0.2867 | 0.4104 | 0.0808 | 0.3620 |
| | ABPN [20] | 0.3349 | 0.1689 | 0.2908 | 0.3357 | 0.1590 | 0.3117 |
| | AUW-GCN [44] | 0.4293 | 0.1984 | 0.3728 | 0.4235 | 0.1538 | 0.3834 |
| | Ours | 0.4631 | 0.4035 | 0.4454 | 0.4340 | 0.2637 | 0.4154 |

TABLE V: Detailed spotting results of the proposed method on CAS(ME)2 and SAMM-LV.

| Metric | SAMM-LV MaE | SAMM-LV ME | SAMM-LV Overall | CAS(ME)2 MaE | CAS(ME)2 ME | CAS(ME)2 Overall |
|---|---|---|---|---|---|---|
| Total | 343 | 159 | 502 | 300 | 57 | 357 |
| TP | 188 | 69 | 257 | 161 | 12 | 173 |
| FP | 281 | 114 | 395 | 281 | 22 | 303 |
| FN | 155 | 90 | 245 | 139 | 45 | 184 |
| Precision | 0.4009 | 0.3770 | 0.3942 | 0.3643 | 0.3529 | 0.3634 |
| Recall | 0.5481 | 0.4340 | 0.5120 | 0.5367 | 0.2105 | 0.4678 |
| F1-Score | 0.4631 | 0.4035 | 0.4454 | 0.4340 | 0.2637 | 0.4154 |

Temporal window size. We further explore how much temporal information we need to predict one frame. In practice, we test different temporal window sizes (i.e., the receptive field of our network) on the SAMM-LV dataset, and the experimental results are shown in Table II. The results show that the performance is best when the window size is 17. This is because the temporal boundary for distinguishing macro- and micro-expressions is 0.5 seconds, which corresponds to 15 frames when the frame rate is set to 30fps. When the receptive field reaches 17, the temporal window is large enough to perceive a complete micro-expression and to magnify the motion information that exists in micro-expressions. Therefore, it enables the model to capture the temporal variations needed to distinguish between general macro-expressions and micro-expressions. When the receptive field becomes larger, the increasing number of frames may lead to information redundancy, and increasingly serious head movement problems may also impact the accuracy of expression spotting.

Hyperparameter $\lambda$. The weight parameter $\lambda$ in (6) is set to balance classification and contrastive learning. We conducted experiments with various $\lambda$ values on the SAMM-LV and CAS(ME)2 datasets, and the corresponding results are presented in Table III.
We observed that the optimal value for $\lambda$ varies across different datasets. However, the best overall performance is achieved when $\lambda$ is set to 0.05. Increasing $\lambda$ beyond this value starts to impact standard classification, leading to a decrease in spotting accuracy.

### IV-D Comparison with state-of-the-art methods

We compare our method with the state-of-the-art methods on the SAMM-LV and CAS(ME)2 datasets, and the results are shown in Table IV. For the overall performance, our method achieves F1-scores of 0.4454 on the SAMM-LV dataset and 0.4154 on the CAS(ME)2 dataset, which outperforms other state-of-the-art methods by 15.30% and 3.08%, respectively. For macro-expression spotting, our method achieves an improvement of 7.8%/2.5% on the SAMM-LV and CAS(ME)2 datasets, respectively, compared to other methods. It is important to emphasize our method's remarkable effectiveness in micro-expression spotting. The results demonstrate a substantial enhancement, with an 86.63% improvement on the SAMM-LV dataset and a 34.20% improvement on the CAS(ME)2 dataset in comparison to other state-of-the-art methods. The results show our method's ability to capture the subtle motions that exist in micro-expressions and to alleviate the impact of irrelevant motions.

### IV-E Detailed discussion

Table V shows the detailed results on the SAMM-LV and CAS(ME)2 datasets. It can be seen that our method achieves a recall rate of approximately 0.5 in MaE spotting and ME spotting on the SAMM-LV dataset, as well as in MaE spotting on the CAS(ME)2 dataset. The failure to achieve a higher recall rate for ME spotting on the CAS(ME)2 dataset may be due to several factors. One primary factor could be the limited amount of data available, as the CAS(ME)2 dataset contains only 57 ME clips. Additionally, some of these clips may involve eye blinking, which we consider irrelevant motion. Addressing these issues, including the generation of more diverse micro-expression data, will be a focal point of our future research.

## V CONCLUSIONS AND FUTURE WORKS

In this paper, we presented a novel framework for macro- and micro-expression spotting. We proposed a receptive field adaptive sliding window strategy to compute short- and long-term optical flows to magnify the motion information present in facial expressions. Furthermore, we proposed SpoT-GCN to fully capture the spatial and temporal relationships that exist in the optical flow features, where the introduction of FLGP enables the network to learn multi-scale features. In addition, we introduced a supervised contrastive loss for more discriminative feature representation learning. Comprehensive experiments were conducted on the SAMM-LV and CAS(ME)2 datasets to verify the effectiveness of our proposed method. In the future, as mentioned in Section IV-E, we hope to introduce a method for generating diverse MEs in order to enrich the ME dataset. Moreover, we aim to develop a deep learning-driven technique for motion extraction and create an end-to-end framework for facial expression spotting.

## References

* [1] B. Bhushan. Study of facial micro-expressions in psychology. Understanding Facial Expressions in Communication: Cross-cultural and Multidisciplinary Perspectives, pages 265–286, 2015.
* [2] R. Bro and A. K. Smilde. Principal component analysis. Analytical Methods, 6(9):2812–2831, 2014.
* [3] C. Chen. PyTorch Face Landmark: A fast and accurate facial landmark detector. https://github.com/cunjian/pytorch_face_landmark, 2021.
* [4] T. Chen, S. Kornblith, M. Norouzi, and G. Hinton. A simple framework for contrastive learning of visual representations. In Proceedings of the International Conference on Machine Learning, pages 1597–1607. PMLR, 2020. * [5] C. A. Corneanu, M. O. Simón, J. F. Cohn, and S. E. Guerrero. Survey on RGB, 3D, thermal, and multimodal approaches for facial expression recognition: History, trends, and affect-related applications. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(8):1548–1568, 2016. * [6] A. K. Davison, M. H. Yap, and C. Lansley. Micro-facial movement detection using individualised baselines and histogram-based descriptors. In Proceedings of the International Conference on Systems, Man, and Cybernetics, pages 1864–1869. IEEE, 2015. * [7] P. Ekman. Emotions revealed: Recognizing faces and feelings to improve communication and emotional life. Holt Paperback, 128(8):140–140, 2003. * [8] P. Ekman. Telling lies: Clues to deceit in the marketplace, politics, and marriage (revised edition). WW Norton & Company, 2009. * [9] J. Endres and A. Laidlaw. Micro-expression recognition training in medical students: a pilot study. BMC Medical Education, 9(1):1–6, 2009. * [10] G. Farnebäck. Two-frame motion estimation based on polynomial expansion. In Proceedings of the Scandinavian Conference on Image Analysis, pages 363–370. Springer, 2003. * [11] T. Fukuda, J. Taguri, F. Arai, M. Nakashima, D. Tachibana, and Y. Hasegawa. Facial expression of robot face for human-robot mutual communication. In Proceedings of the International Conference on Robotics and Automation, volume 1, pages 46–51. IEEE, 2002. * [12] Y. Gan, S. Liong, D. Zheng, S. Li, and C. Bin. Optical strain based macro-and micro-expression sequence spotting in long video. In Proceedings of the International Conference on Automatic Face and Gesture Recognition. IEEE, 2020. * [13] Y. Guo, B. Li, X. Ben, Y. Ren, J. Zhang, R. Yan, and Y. Li. A magnitude and angle combined optical flow feature for microexpression spotting. IEEE MultiMedia, 28(2):29–39, 2021. * [14] Y. He, S.-J. Wang, J. Li, and M. H. Yap. Spotting macro-and micro-expression intervals in long video sequences. In Proceedings of the International Conference on Automatic Face and Gesture Recognition, pages 742–748. IEEE, 2020. * [15] X. Jin, Z. Lai, and Z. Jin. Learning dynamic relationships for facial expression recognition based on graph convolutional network. IEEE Transactions on Image Processing, 30:7143–7155, 2021. * [16] P. Khosla, P. Teterwak, C. Wang, A. Sarna, Y. Tian, P. Isola, A. Maschinot, C. Liu, and D. Krishnan. Supervised contrastive learning. In Proceedings of the Advances in Neural Information Processing Systems, volume 33, pages 18661–18673, 2020. * [17] T. N. Kipf and M. Welling. Semi-supervised classification with graph convolutional networks. In Proceedings of the International Conference on Learning Representations, 2017. * [18] B. A. Kopper and D. L. Epperson. The experience and expression of anger: Relationships with gender, gender role socialization, depression, and mental health functioning. Journal of Counseling Psychology, 43(2):158, 1996. * [19] A. J. R. Kumar and B. Bhanu. Relational edge-node graph attention network for classification of micro-expressions. In Proceedings of the Conference on Computer Vision and Pattern Recognition, pages 5818–5827. IEEE/CVF, 2023. * [20] W. Leng, S. Zhao, Y. Zhang, S. Liu, X. Mao, H. Wang, T. Xu, and E. Chen. Abpn: Apex and boundary perception network for micro-and macro-expression spotting. 
In Proceedings of the International Conference on Multimedia, pages 7160–7164. ACM, 2022. * [21] T.-Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie. Feature pyramid networks for object detection. In Proceedings of the Conference on Computer Vision and Pattern Recognition, pages 2117–2125. IEEE/CVF, 2017. * [22] T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár. Focal loss for dense object detection. In Proceedings of the International Conference on Computer Vision, pages 2980–2988. IEEE/CVF, 2017. * [23] G. B. Liong, S.-T. Liong, J. See, and C.-S. Chan. MTSN: A multi-temporal stream network for spotting facial macro-and micro-expression with hard and soft pseudo-labels. In Proceedings of the Workshop on Facial Micro-Expression: Advanced Techniques for Multi-Modal Facial Expression Analysis, pages 3–10. ACM, 2022. * [24] G.-B. Liong, J. See, and L.-K. Wong. Shallow optical flow three-stream cnn for macro-and micro-expression spotting from long videos. In Proceedings of the International Conference on Image Processing, pages 2643–2647. IEEE, 2021. * [25] T. Liu, J. Li, J. Wu, B. Du, J. Chang, and Y. Liu. Facial expression recognition on the high aggregation subgraphs. IEEE Transactions on Image Processing, 2023. * [26] Y.-J. Liu, J.-K. Zhang, W.-J. Yan, S.-J. Wang, G. Zhao, and X. Fu. A main directional mean optical flow feature for spontaneous micro-expression recognition. IEEE Transactions on Affective Computing, 7(4):299–310, 2015. * [27] Z. Liu, J. Dong, C. Zhang, L. Wang, and J. Dang. Relation modeling with graph convolutional networks for facial action unit detection. In Proceedings of the International Conference on Multimedia Modeling, pages 489–501. Springer, 2020. * [28] Z. Liu, Y. Lin, Y. Cao, H. Hu, Y. Wei, Z. Zhang, S. Lin, and B. Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the International Conference on Computer Vision, pages 10012–10022. IEEE/CVF, 2021. * [29] I. Loshchilov and F. Hutter. Decoupled weight decay regularization. In Proceedings of the International Conference on Learning Representations, 2019. * [30] A. Moilanen, G. Zhao, and M. Pietikäinen. Spotting rapid facial movements from videos using appearance-based feature difference analysis. In Proceedings of the International Conference on Pattern Recognition, pages 1722–1727. IEEE, 2014. * [31] M. O’sullivan, M. G. Frank, C. M. Hurley, and J. Tiwana. Police lie detection accuracy: The effect of lie scenario. Law and Human Behavior, 33(6):530, 2009. * [32] H. Pan, L. Xie, and Z. Wang. Local bilinear convolutional neural network for spotting macro-and micro-expression intervals in long video sequences. In Proceedings of the International Conference on Automatic Face and Gesture Recognition, pages 749–753. IEEE, 2020. * [33] F. Qu, S.-J. Wang, W.-J. Yan, H. Li, S. Wu, and X. Fu. CAS(ME)2: A database for spontaneous macro-expression and micro-expression spotting and recognition. IEEE Transactions on Affective Computing, 9(4):424–436, 2017. * [34] J. Thies, M. Zollhöfer, M. Stamminger, C. Theobalt, and M. Nießner. FaceVR: Real-time gaze-aware facial reenactment in virtual reality. ACM Transactions on Graphics, 37(2):1–15, 2018. * [35] M. Verburg and V. Menkovski. Micro-expression detection in long videos using optical flow and recurrent neural networks. In Proceedings of the International Conference on Automatic Face and Gesture Recognition, pages 1–6. IEEE, 2019. * [36] S.-J. Wang, Y. He, J. Li, and X. Fu. 
MESNet: A convolutional neural network for spotting multi-scale micro-expression intervals in long videos. IEEE Transactions on Image Processing, 30:3956–3969, 2021. * [37] S.-J. Wang, S. Wu, X. Qian, J. Li, and X. Fu. A main directional maximal difference analysis for spotting facial movements from long-term videos. Neurocomputing, 230:382–389, 2017. * [38] H.-X. Xie, L. Lo, H.-H. Shuai, and W.-H. Cheng. Au-assisted graph attention convolutional network for micro-expression recognition. In Proceedings of the International Conference on Multimedia, pages 2871–2880, 2020. * [39] T. Xu and W. Takano. Graph stacked hourglass networks for 3D human pose estimation. In Proceedings of the Conference on Computer Vision and Pattern Recognition, pages 16105–16114. IEEE/CVF, 2021. * [40] W.-J. Yan, Q. Wu, J. Liang, Y.-H. Chen, and X. Fu. How fast are the leaked facial expressions: The duration of micro-expressions. Journal of Nonverbal Behavior, 37:217–230, 2013. * [41] B. Yang, J. Wu, Z. Zhou, M. Komiya, K. Kishimoto, J. Xu, K. Nonaka, T. Horiuchi, S. Komorita, G. Hattori, et al. Facial action unit-based deep learning framework for spotting macro-and micro-expressions in long video sequences. In Proceedings of the International Conference on Multimedia, pages 4794–4798. ACM, 2021. * [42] C. H. Yap, C. Kendrick, and M. H. Yap. Samm long videos: A spontaneous facial micro-and macro-expressions dataset. In Proceedings of the International Conference on Automatic Face and Gesture Recognition, pages 771–776. IEEE, 2020. * [43] C. H. Yap, M. H. Yap, A. Davison, C. Kendrick, J. Li, S.-J. Wang, and R. Cunningham. 3D-CNN for facial micro-and macro-expression spotting on long video sequences using temporal oriented reference frame. In Proceedings of the International Conference on Multimedia, pages 7016–7020. ACM, 2022. * [44] S. Yin, S. Wu, T. Xu, S. Liu, S. Zhao, and E. Chen. AU-aware graph convolutional network for macroand micro-expression spotting. In Proceedings of the International Conference on Multimedia and Expo, pages 228–233. IEEE, 2023. * [45] W.-W. Yu, J. Jiang, and Y.-J. Li. LSSNet: A two-stream convolutional neural network for spotting macro-and micro-expression in long videos. In Proceedings of the International Conference on Multimedia, pages 4745–4749. ACM, 2021. * [46] H. Yuhong. Research on micro-expression spotting method based on optical flow features. In Proceedings of the International Conference on Multimedia, pages 4803–4807. ACM, 2021. * [47] L.-W. Zhang, J. Li, S.-J. Wang, X.-H. Duan, W.-J. Yan, H.-Y. Xie, and S.-C. Huang. Spatio-temporal fusion for macro-and micro-expression spotting in long video sequences. In Proceedings of the International Conference on Automatic Face and Gesture Recognition, pages 734–741. IEEE, 2020. * [48] Z. Zhang, T. Chen, H. Meng, G. Liu, and X. Fu. SMEConvNet: A convolutional neural network for spotting spontaneous facial micro-expression from long videos. IEEE Access, 6:71143–71151, 2018. * [49] Y. Zhao, X. Tong, Z. Zhu, J. Sheng, L. Dai, L. Xu, X. Xia, Y. Jiang, and J. Li. Rethinking optical flow methods for micro-expression spotting. In Proceedings of the International Conference on Multimedia, pages 7175–7179. ACM, 2022.
# An active inference model of car following: Advantages and applications

Ran Wei, Anthony D. McDonald (Corresponding Author), Alfredo Garcia, Gustav Markkula, Johan Engström*, Matthew O'Kelly*

*Advisors

Abstract

Driver process models play a central role in the testing, verification, and development of automated and autonomous vehicle technologies. Prior models developed from control theory and physics-based rules are limited in automated vehicle applications due to their restricted behavioral repertoire. Data-driven machine learning models are more capable than rule-based models but are limited by the need for large training datasets and their lack of interpretability, i.e., an understandable link between input data and output behaviors. We propose a novel car following modeling approach using active inference, which has comparable behavioral flexibility to data-driven models while maintaining interpretability. We assessed the proposed model, the Active Inference Driving Agent (AIDA), through a benchmark analysis against the rule-based Intelligent Driver Model and two neural network Behavior Cloning models. The models were trained and tested on a real-world driving dataset using a consistent process. The testing results showed that the AIDA predicted driving controls significantly better than the rule-based Intelligent Driver Model and had similar accuracy to the data-driven neural network models in three out of four evaluations. Subsequent interpretability analyses illustrated that the AIDA's learned distributions were consistent with driver behavior theory and that visualizations of the distributions could be used to directly comprehend the model's decision making process and correct model errors attributable to limited training data. The results indicate that the AIDA is a promising alternative to black-box data-driven models and suggest a need for further research focused on modeling driving style and model training with more diverse datasets.

Keywords: Driver behavior modeling, Active Inference, Behavior Cloning, Intelligent Driver Model, Car Following, Learning from Demonstration

## 1 Introduction

The rapid development of automated and connected vehicle technologies has created a corresponding demand for models of driver behavior that can be used to calibrate design parameters [1, 2], evaluate technologies [3, 4], and refine real-time decision making [5]. To be effective in these tasks, driver models must be flexible, generalizable, and interpretable. Model flexibility is the ability of the model to mimic nuanced social behavior of human drivers [6]. Generalizability is the ability of the model to extend to new environments with minimal modeler intervention. Interpretability refers to both a clear connection between model mechanics and predicted behavior and a grounding in human psychology [7]. These elements facilitate model inspection and diagnostics, which are essential for interpretable models [8]. Car following is an important driving sub-task as it represents a large portion of current driving time and crashes involving automated vehicles [9, 10]. Moreover, it requires a complex expression of social behaviors through physical vehicle positioning [6], e.g., speeding up to prevent a vehicle from merging. Therefore, it is important to develop flexible, generalizable, and interpretable car following models for automated vehicles and future transportation systems.
Existing car following models can be partitioned into rule-based models and data-driven models [11, 12]. Rule-based models generate acceleration behavior based on a function specified by the modeler. Typically, this function is grounded in known observations or driver behavior theory [11]. For example, the Intelligent Driver Model (IDM) predicts driver acceleration based on deviations from a desired speed and distance headway [13]. While rule-based models have a clear connection between model mechanics and predicted behavior, they are limited in their flexibility and generalizability. Because the rules in rule-based models are designed to replicate driving behavior in specific contexts and depict driver characteristics with small parameter sets, they are limited in the behavioral repertoire and in generalizing to scenarios outside of those governed by rules beyond their initial rule set. For example, research has shown that rule-based models designed for car following do not generalize to emergency scenarios and crashes [14]. Despite these limitations, rule-based models are still widely used for automated vehicle analyses [15] and thus offer a valid benchmark for new models. In contrast to rule-based models, data-driven models learn a function mapping observations or features to acceleration behaviors using an algorithm. Recent works have used neural networks [16], hybrid neural network algorithms with physics constraints [17], reinforcement learning [18], and adversarial imitation learning [19] to model car following behavior. These approaches have shown considerable flexibility in replicating human behavior, however, data- driven models still struggle to reproduce well-known traffic phenomena such as stop-and-go oscillation and their generalizability is constrained by the chosen machine learning technique [16, 20]. Furthermore, the complexity of existing data-driven models prohibits interpretability both in the connection between input and output and in their grounding in human psychology. Despite these shortcomings, data-driven models are more generalizable to complex scenarios which are difficult for manual model specification. One important class of data-driven models is Behavior Cloning (BC) known for their simplicity and general effectiveness [21, 22, 23]. Neural network-based BC models have been widely adopted for developing and evaluating automated vehicle algorithms and are a common benchmark for evaluating novel data-driven models [24, 25]. The relative strengths of rule-based and data-driven approaches suggest that there is a role for model structure (to aid in interpretability) — especially structure grounded in psychological theory [7] — and learning from data (to aid in flexibility) in car following model development. The incorporation of these two concepts requires a shift to contemporary theories of human cognition. One relevant theory is active inference [26, 27] — a framework developed from Bayesian principles of cognition [28, 29]. The central ideas of active inference are that 1) humans have internal probabilistic generative models of the environment and that 2) humans leverage their model of the environment to make inferences about action courses that reduce surprise in terms of both distance from their desired states of the environment and uncertainty [26, 27]. Importantly, these principles have been translated into a quantitative framework for modeling human behavior and cognition [27, 30]. 
The quantitative framework includes an explicit representation of agent belief dynamics to facilitate agent decision making and action selection in response to observed perceptual signals. Due to this structure, the model is fundamentally interpretable (i.e., actions can be traced back to beliefs and observations at a given time). On the other hand, the increased complexity and probabilistic nature of the model compared to rule-based frameworks also increase its flexibility and potentially its generalizability. Recently, the active inference framework has been extended to driving to depict driving behavior during emergency scenarios with some success [31, 32]; however, the application to broader scenarios has been limited. Our goal in this article is to introduce the Active Inference Driving Agent (AIDA), evaluate its flexibility and generalizability relative to rule-based and data-driven benchmarks, and illustrate the interpretability of the model and the resulting insights it provides into car following behavior.

## 2 Materials and Methods

In this section, we introduce a formulation of the benchmarks — IDM and Behavior Cloning — then describe our AIDA formulation. We then describe the dataset used for model fitting and the model comparison approach. To simplify notation, we adopt a unifying view of car following models as longitudinal driving control policies which map input signals observed by drivers to a control signal, i.e., the instantaneous longitudinal acceleration. We denote the driver observations (or features in machine learning terminology) at discrete time step $t$ by $o_{t}$ and the control signal by $a_{t}$. Using this nomenclature, the most generic class of driver control policies can be described as a probabilistic mapping from the entire history of inputs and controls, denoted by $h_{t}=\\{o_{1:t},a_{1:t-1}\\}$, to the next control signal, i.e., $\pi(a_{t}|h_{t})$. However, the control policy may only depend on the most recent observation as $\pi(a_{t}|o_{t})$. The definition of the control policy is the most significant element that differentiates the IDM, Behavior Cloning, and AIDA. These differences are illustrated in the computation graphs in Figure 1 and further described in the subsequent sections.

Figure 1: Computation graphs for (a) IDM, (b) AIDA, and (c) neural network BC models. $o=$ instantaneous observation, $a=$ control action, $h=$ complete interaction history, $\tilde{d}=$ desired distance headway, $b=$ instantaneous belief, $\mathcal{G}=$ expected free energy, $\text{NN}=$ neural network.

### 2.1 Intelligent Driver Model

The IDM [33] implements a control policy based on drivers' instantaneous observation of their own vehicle's speed $v$, relative speed to the lead vehicle $\Delta v$, and distance headway to the lead vehicle $d$, i.e., $\pi(a_{t}|o_{t}=\\{v_{t},\Delta v_{t},d_{t}\\})$.
At each time step, the IDM computes a longitudinal acceleration to regulate the driver’s vehicle towards a desired speed $\tilde{v}$ and desired distance headway $\tilde{d}$ using the following control rule: $\displaystyle a_{t}=a_{max}\left[1-\left(\frac{v_{t}}{\tilde{v}}\right)^{4}-\left(\frac{\tilde{d}}{d_{t}}\right)^{2}\right]$ (1) where the desired distance headway is defined as: $\displaystyle\tilde{d}=d_{0}+v_{t}\tau-\frac{v_{t}\Delta v_{t}}{2\sqrt{a_{max}b}}$ (2) The IDM has the following parameters: $a_{max}$ the maximum acceleration rate which can be implemented by the driver, $d_{0}$ the minimum allowable distance headway, $\tau$ the desired headway time, and $b$ the maximum deceleration rate. While these parameters can be set manually by human designers, they usually depend on the road condition and vary with individual driver characteristics, e.g., the desired velocity and minimum distance headway. Thus, various methods have been proposed to calibrate model parameters from traffic data [34, 35]. A significant limitation of the IDM is that it cannot express certain types of behavior as a result of the control rule defined in (1) and (2). For example, it cannot express behavior due to uncertainty about the lead vehicle behavior and surrounding traffic and is limited to modeling behavior in non-conflict scenarios [14]. Incorporation of such behavior requires significant intervention from the model designers in adapting the control rule, e.g, modifying (1) and (2) to depend on additional inputs or “memory" mechanisms [13]. ### 2.2 Behavior Cloning BC refers to methods that train neural networks to learn policies from datasets of human car following behavior. The dataset, denoted with $\mathcal{D}$, is usually organized in the form of observation-action trajectories, i.e., $\mathcal{D}=\\{o_{1:T}^{(i)},a_{1:T}^{(i)}\\}_{i=1}^{N}$, where $N$ is the total number of trajectories and $T$ is the length of each trajectory. The neural network parameterized policies depend on either the entire history $h_{t}$ or the most recent observation $o_{t}$. Let us denote the policy parameters with $\theta$, BC trains policies to maximize the expected log likelihood of the dataset trajectories: $\displaystyle\max_{\theta}\mathcal{L}(\theta)=\mathbb{E}_{o_{1:t},a_{1:t}\sim\mathcal{D}}\left[\sum_{t=1}^{T}\log\pi_{\theta}(a_{t}|h_{t})\right]$ (3) BC is simple to implement and more computationally efficient than comparative data-driven machine learning methods like reinforcement learning and online imitation learning. BC also does not require a high fidelity traffic simulation environment for training, which is necessary for reinforcement learning and online imitation learning. In contrast to rule-based policies, BC policies are more flexible and can express a much larger class of behaviors. However, BC as a representative offline learning method has known disadvantages of being sensitive to the quantity and quality of training data and input features. The covariate-shift between the training dataset and the testing environment and neural network models’ difficulty of extrapolating learned mechanisms to unseen inputs often cause BC models to overfit to the training dataset while producing poor control behavior during closed-loop testing (defined in section 2.8.2) [36, 20]. Furthermore, several studies have found that BC can be highly sensitive to input features [19, 37, 16]. 
Specifically, when a driver’s previous control actions are used as input features to the trained policy, it is likely that the policy merely repeats those controls actions in closed-loop testing. This has been interpreted as a form of learning spurious correlations or causal confusion in machine learning, since driver controls at adjacent time steps are usually so similar that predicting previous controls can quickly minimize training error [37]. Because BC does not impose any structure on the policy, examining the failure modes of BC models is as challenging as examining any other black-box neural network models. Despite these shortcomings, BC, or variations of it, is a widely studied approach in developing automated vehicle algorithms and building simulated agents and environments for training them [25, 24]. It can produce high quality simulated behavior in practice when the training dataset is large and diverse, appropriate features are selected, and the neural network model is large and expressive enough [22, 16, 23] and thus it represents a valid data- driven modeling benchmark. ### 2.3 Active Inference Driving Agent An active inference agent is defined by its internal generative model, which we implemented as a Partially Observable Markov Decision Process (POMDP). A POMDP describes a dynamic process in which the state of the environment $s_{t}\in\mathcal{S}$ evolves with driver actions, $a_{t}\in\mathcal{A}$, according to a probability distribution, $P(s_{t+1}|s_{t},a_{t})$, and generates observation signal, $o_{t+1}\in\mathcal{O}$, according to, $P(o_{t+1}|s_{t+1})$. In this work, we assume the observation signals are multivariate continuous variables and the states are discrete to represent probabilistic categorizations of the observation space (i.e., categorical perception [38]). At every time step, the active inference agent first makes inference about the hidden state of the environment upon receiving observations using Bayes’ rule: $\displaystyle b_{t}(s_{t})=\frac{P(o_{t}|s_{t})P(s_{t}|b_{t-1},a_{t-1})}{\sum_{s_{t}}P(o_{t}|s_{t})P(s_{t}|b_{t-1},a_{t-1})}$ (4) where $b_{t}(s_{t})=P(s_{t}|h_{t})$ denotes the agent’s belief about the environment state given the observation-action history $h_{t}$ and $P(s_{t}|b_{t-1},a_{t-1})=\sum_{s_{t-1}}P(s_{t}|s_{t-1},a_{t-1})b(s_{t-1})$ is the prior predictive distribution based on the previous belief. The active inference agent then selects control actions to minimize a criterion known as the (cumulative) expected free energy (EFE) [39]: $\displaystyle\mathcal{G}^{*}(b_{t})=\min_{\pi}\mathbb{E}\left[\sum_{t}^{t+H}EFE(b_{t},a_{t})+\log\pi(a_{t}|b_{t})\right]$ (5) where $H\leq\infty$ is a finite planning horizon. The EFE is defined as: $\displaystyle EFE(b_{t},a_{t})\triangleq\mathbb{E}[D_{KL}\left(b_{t+1}||\tilde{P}\right)]+\mathbb{E}[\mathcal{H}(o_{t+1})]$ (6) where $\tilde{P}:=\tilde{P}(s_{t+1})$ defines the agent’s preferred state distribution, $D_{KL}(\cdot||\cdot)$ denotes the Kullback-Leibler divergence — measuring the discrepancy between the current belief and the preferred state distribution — and $\mathcal{H}(\cdot)$ denotes Shannon entropy — measuring uncertainty about observations. These terms represent goal-seeking and information-seeking (epistemic) behavior respectively [40]. 
The first expectation in the EFE is taken with respect to: $\displaystyle\begin{split}P(o_{t+1}|b_{t},a_{t})&=\sum_{s_{t+1}}P(o_{t+1}|s_{t+1})P(s_{t+1}|b_{t},a_{t})\end{split}$ (7) and the second expectation in the EFE is taken with respect to $P(s_{t+1}|b_{t},a_{t})$. Let $\mathcal{G}^{*}(b_{t},a_{t})$ be defined as: $\displaystyle\mathcal{G}^{*}(b_{t},a_{t}):=EFE(b_{t},a_{t})+\log\pi(a_{t}|b_{t})+\int P(o_{t+1}|b_{t},a_{t})\mathcal{G}^{*}(b_{t+1})d_{o_{t+1}}$ (8) Then the optimal policy has a closed-form expression [41]: $\displaystyle\pi(a_{t}|b_{t})=\frac{e^{-\mathcal{G}^{*}(b_{t},a_{t})}}{\sum_{\tilde{a}\in\mathcal{A}}e^{-\mathcal{G}^{*}(b_{t},\tilde{a})}}$ (9) Active inference has two important differences from the traditional notion of POMDP in operations research and reinforcement learning. First, both the generative model and the control objective are internal to the agent, meaning they can differ in substantial ways from the true environment generative model or a canonical notion of desired behavior, e.g., a "good driver" should always be centered in the lane. This has important implications as many human driving behaviors can be explained as inference in subjective or sub-optimal models [42]. Second, active inference makes an explicit distinction between pragmatic and epistemic behavior in its policy objective according to the first and second terms in (6). This distinction supports adaptive behavior in unknown and uncertain environments [40, 43], e.g., traffic environments. ### 2.4 Dataset We performed our analysis of the IDM, BC, and AIDA using the INTERACTION dataset [44], a publicly available driving dataset recorded using drones on fixed road segments in the USA, Germany, and China. The dataset provides a lanelet2 format map [45] and a set of time-indexed trajectories of the positions, velocities, and headings of each vehicle in the scene in the map’s coordinate system at a sampling frequency of 10 Hz, and the vehicle’s length and width for each road segment. The dataset contains a variety of traffic behaviors, including car following, free-flow traffic, and merges. Due to our emphasis on car following behavior, we selected a subset of the data to include car following data from a two-way, seven-lane highway segment in China with a total distance of 175 m. We focused on vehicles in the middle two west-bound lanes shown in Figure 2. We further filtered the remaining vehicles according to two criteria: 1) there was a lead vehicle with a maximum distance headway of 60 m, and 2) the ego vehicle was not performing a merge or lane change. We identified merging and lane change behavior using an automated logistic regression-based approach and validated the classifications with a manual review of a subset of trajectories. We also removed all trajectories with length shorter than 5 seconds, leaving a total of 1,254 trajectories with an average length of 14 seconds. Figure 2: Map of the roadway included in data collection. The westbound lanes (blue) were used for training and the eastbound lanes (orange) were used for testing. #### 2.4.1 Feature Computation The input features to the IDM are defined in (1) and (2). For BC and the AIDA, we used $d$ and $\Delta v$ but excluded $v$ to prevent learning spurious correlations to ego speed or acceleration from past time steps reported in prior studies [37, 22, 16]. 
Furthermore, we included an additional feature $\tau^{-1}$ in BC and the AIDA, defined as the rate of change of the visual angle of the lead vehicle from the ego driver’s seat position divided by the angle itself. $\tau^{-1}$ can be considered a perceptual-control analog of inverse time-to-collision, a feature commonly used in driver modeling [19, 46, 47], with the difference that it incorporates the width of the lead vehicle into the feature computation and uses quantities that can actually be observed by the driver. This is consistent with recent findings on the impact of optical expansion of the lead vehicle’s image on driver longitudinal control behavior [48]. In addition, the inclusion of this feature puts the information contained in the inputs to BC and the AIDA on a similar level to that of the IDM, as the IDM implicitly accounts for time-to-collision in its desired distance headway computation in (2). We computed all features in the Frenet frame (i.e., lane-centric coordinates [49]) by first transforming vehicle positions, velocities, and headings using the current lane center line as the reference path and then computing the features from the transformed positions and velocities. We obtained the drivers’ instantaneous longitudinal control inputs (i.e., accelerations) from the dataset by differentiating the Frenet frame longitudinal velocities. For BC and the AIDA, we discretized the continuous control inputs into discrete actions using a Gaussian mixture model with 15 Gaussian components, with the mean and variance parameters chosen using the Bayesian Information Criterion [50].

### 2.5 Model implementation

In this section, we describe our approach for parameterizing the IDM, BC, and the AIDA.

IDM. Following [51], we parameterized the IDM by treating the IDM policy as a conditional Gaussian distribution: $\pi(a_{t}|o_{t}=\\{v_{t},\Delta v_{t},d_{t}\\})=\mathcal{N}(a_{t}|\mu_{t},\sigma^{2})$ with mean action $\mu_{t}$ and variance $\sigma^{2}$. The mean action $\mu_{t}$ is computed from the IDM rule defined in (1) and (2) by making the desired speed $\tilde{v}$, minimum distance headway $d_{0}$, desired headway time $\tau$, maximum acceleration rate $a_{max}$, and maximum deceleration rate $b$ adjustable parameters. The action variance $\sigma^{2}$ is assumed to be independent of the input features and is also estimated from data.

BC. We implemented the BC model with two types of neural network policies: standard Multi-Layer Perceptron (MLP) networks and recurrent neural networks (RNN), following [19]. The MLP network takes as input the observation vector (normalized by training set statistics) and outputs a probability distribution over the discrete control actions. The RNN addresses the possibility that driver behavior may be influenced by the full observation history rather than just the most recent observation. For the RNN, we combined a Gated Recurrent Unit (GRU) network [52] with an MLP network, where the GRU network compresses the observation history into a fixed-length vector, which is then transformed into the action distribution by the MLP network.

AIDA. We modeled the discrete state transition probabilities $P(s_{t+1}|s_{t},a_{t})$ and the desired state distribution $\tilde{P}(s_{t})$ of the AIDA using categorical distributions parameterized by their logits (sketched below). We parameterized the observation distributions $P(o_{t}|s_{t})$ using Normalizing Flow, a flexible class of neural network-based density estimators [53, 54].
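A minimal sketch of the logit parameterization of the discrete transition and preference distributions might look as follows. The class and attribute names are ours, the dimensions follow Appendix A.2, and the snippet is not taken from the released implementation [58].

```python
import torch
import torch.nn as nn

class DiscreteDynamics(nn.Module):
    """Logit-parameterized categorical transition and preference distributions."""

    def __init__(self, n_states=20, n_actions=15):
        super().__init__()
        # unconstrained logits; softmax maps them to valid probabilities
        self.trans_logits = nn.Parameter(torch.zeros(n_actions, n_states, n_states))
        self.pref_logits = nn.Parameter(torch.zeros(n_states))

    def transition(self):
        # P(s' | s, a), shape (A, S, S), each row normalized over s'
        return torch.softmax(self.trans_logits, dim=-1)

    def preference(self):
        # desired state distribution, shape (S,)
        return torch.softmax(self.pref_logits, dim=-1)
```

The row-wise softmax keeps every conditional distribution properly normalized while leaving the underlying parameters unconstrained for gradient-based training.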
The Normalizing Flow parameterization provides the AIDA with adequate flexibility in modeling complex and nonlinear observation sequences and in associating observed actions with agent beliefs. Normalizing Flow uses invertible neural networks to transform simple distributions, e.g., Gaussian distributions, into complex and correlated distributions while maintaining the tractability of likelihood evaluation and sampling. In this work, we used a single, shared Inverse Autoregressive Flow [55] to transform a set of conditional Gaussians with mean vector $\mu(s_{t})$ and covariance matrix $\Sigma(s_{t})$. We modeled a distribution over the agent’s planning horizon using a Poisson rate parameter and used the QMDP method [56, 57] as a closed-form approximation of the cumulative expected free energy in (8). We approximately computed the entropy of the state-conditioned observation distributions required in the EFE calculation using the entropy of the Normalizing Flow base distributions. In subsequent sections, we refer to the transition and observation parameters as $\theta_{1}$, the desired state distribution and planning horizon parameters as $\theta_{2}$, and the combined parameters as $\theta=\\{\theta_{1},\theta_{2}\\}$. We provide additional implementation details in Appendix A. Our software implementation is publicly available at [58].

### 2.6 Parameter Estimation

We estimated the parameters of the IDM, BC, and AIDA by maximizing the expected log likelihood of driver control inputs from the dataset under the corresponding control policy, i.e., (3). For the AIDA, this procedure differs slightly from the IDM and BC in that it requires a nested step. Between each parameter update in the nested procedure, we first computed the sequence of beliefs given the observation-action history using (4) and the optimal belief-action policy using (9). We then evaluated the log likelihood as a function of the computed beliefs:

$\displaystyle\max_{\theta}\mathbb{E}_{o_{1:T},a_{1:T}\sim\mathcal{D}}\left[\sum_{t=1}^{T-1}\log\pi_{\theta}(a_{t}|b_{t,\theta_{1}})\right]\quad\text{{\small s.t.}}\quad\pi_{\theta}(a_{t}|b_{t})=\frac{e^{-\mathcal{G}_{t,\theta}^{*}(b_{t},a_{t})}}{\sum_{\tilde{a}_{t}\in\mathcal{A}}e^{-\mathcal{G}_{t,\theta}^{*}(b_{t},\tilde{a}_{t})}}$ (10)

While (10) allows us to learn task-relevant beliefs in active inference agents as it depends on both $\theta_{1}$ and $\theta_{2}$, the parameters are fundamentally unidentifiable, since there are potentially infinitely many sets of $\theta$ with the same likelihood [59, 60, 61]. This is because, for example, the estimation algorithm cannot differentiate between drivers who desire small distance headway and drivers who believe the distance headway will increase to desired levels in subsequent time steps. A possible consequence is learning an environment model that deviates significantly from reality, which leads to frequent crashes or inaction because the agent cannot recognize the actual environment state [62]. In order to constrain the hypothesis space and avoid configurations of $\theta$ that are incompatible with real-world constraints, we designed a data-driven prior distribution $P(\theta)$ encoding likely configurations of $\theta$.
Specifically, the prior is defined as $P(\theta)=P(\theta_{1})P(\theta_{2}|\theta_{1})$, where:

$\displaystyle P(\theta_{1})\propto\exp\left(\lambda\mathbb{E}_{o_{1:T},a_{1:T}\sim\mathcal{D}}\left[\sum_{t=1}^{T}\log P_{\theta_{1}}(o_{t}|h_{t-1},a_{t-1})\right]\right)$ (11)

with hyperparameter $\lambda$ controlling how much the prior distribution prefers model accuracy, measured by the expected log likelihood of observations. We let $P(\theta_{2}|\theta_{1})$ be a uniform distribution. In our experiments, we only computed the Maximum A Posteriori (MAP) estimate of the Bayesian model by converting the prior into the following loss function added to the objective in (10):

$\displaystyle\mathcal{L}(\theta_{1})=\lambda\mathbb{E}_{o_{1:T},a_{1:T}\sim\mathcal{D}}\left[\sum_{t=1}^{T}\log P_{\theta_{1}}(o_{t}|h_{t-1},a_{t-1})\right]$ (12)

To prevent learning unreasonably large observation variances as a result of the observation entropy term in (6), another symptom previously reported in [62], we applied a penalty on the $l^{2}$ norm of the observation covariance parameters. Using these prior loss functions, the AIDA MAP estimate can be written as:

$\displaystyle\theta^{\text{MAP}}=\arg\max_{\theta}\mathbb{E}_{o_{1:T},a_{1:T}\sim\mathcal{D}}\left[\sum_{t=1}^{T}\left(\log\pi_{\theta}(a_{t}|b_{t,\theta_{1}})+\lambda_{1}\log P_{\theta_{1}}(o_{t}|h_{t-1},a_{t-1})\right)\right]-\lambda_{2}\sum_{s}||\Sigma_{\theta_{1}}(s)||^{2}$ (13)

### 2.7 Model selection

We trained each model with 15 random seeds controlling model parameter initialization and dataset mini-batch iteration orders. To select the hyperparameters for the AIDA, we first set $\lambda_{2}=0.1$, since this was sufficient to mitigate overly large covariances. We then trained the model for $\lambda_{1}\in\\{0.2,1,4\\}$ and selected $\lambda_{1}=1$, as it best traded off environment model accuracy and agent behavior predictive performance (with criteria described in the next section).

### 2.8 Model Evaluation and Comparison

We evaluated and compared our models’ ability to generate behavior similar to the human drivers in the dataset using both open-loop offline predictions and closed-loop online simulations. In both cases, we evaluated the models on two different held-out testing datasets. The first dataset includes vehicles from the same lanes as the training dataset. This dataset tests whether the models can generalize to unseen vehicles in the same traffic condition. We obtained this dataset by dividing all selected trajectories in the westbound lanes using a 7-3 train-test ratio. The second dataset includes vehicles from the top two eastbound lanes in Figure 2. This dataset tests whether the models can generalize to unseen vehicles in novel traffic conditions, since the traffic in the eastbound lanes has on average higher speed and lower density. We refer to these two datasets as same-lane and new-lane, respectively. We randomly selected 100 trajectories with a minimum length of 10 seconds from the same-lane dataset and 75 trajectories with a minimum length of 5 seconds from the new-lane dataset for testing.

#### 2.8.1 Offline Evaluation

The goal of the offline evaluation was to assess each model’s ability to predict a driver’s next action based on the observation-action history recorded in the held-out testing dataset. This task evaluates the models’ ability to serve as short-horizon predictors of other vehicles’ behavior in an on-board trajectory planner [63].
We measured a model’s predictive accuracy using the Mean Absolute Error (MAE) of the predicted control inputs (unit=$m/s^{2}$) on the held-out testing datasets. For the IDM, we calculated the predicted control inputs by sampling from the Gaussian policy. For BC and the AIDA, we first sampled a discrete action from the action distribution predicted by the models and then sampled from the corresponding Gaussian component in the Gaussian mixture model used to perform action discretization.

#### 2.8.2 Online Evaluation

Rather than predicting instantaneous actions, the online evaluation assessed the models’ ability to generate trajectories similar to those of human drivers, such that they can be used as simulated agents in automated vehicle training and testing environments [24]. This is fundamentally different from offline prediction because the models need to choose actions based on the observation-action history generated by their own actions rather than the history stored in the fixed, offline dataset. This can introduce significant covariate shift [20], sometimes resulting in situations outside of the model’s training data, which can lead to poor action selection. We built a single-agent simulator where the ego vehicle’s longitudinal acceleration is controlled by the trained models and its lateral acceleration is controlled by a feedback controller for lane-centering. The lead vehicle simply plays back the trajectory recorded in the dataset. Other vehicles do not have any effect on the ego vehicle, given that our observation space does not contain features related to other vehicles. Following [25], we measured the similarity between the generated trajectories and the true trajectories using the following metrics:

1. Average deviation error (ADE; unit=$m$): deviation of the Frenet frame longitudinal position from the dataset, averaged over all time steps in the trajectory.

2. Lead vehicle collision rate (LVCR; unit=$\%$): percentage of testing trajectories containing collision events with the lead vehicle. A collision is defined as an overlap between the ego and lead vehicles’ bounding boxes.

#### 2.8.3 Statistical Evaluation

Following the recommendations in [64, 65] for evaluating learned control policies, we represented the central tendency of a model’s offline prediction and online control performance using the interquartile mean (IQM) of the offline MAEs and online ADEs. Note, however, that for collision rate we computed the regular mean instead of the IQM to account for the collision rate’s lower bound of 0. The IQMs are computed by 1) ranking all tested trajectories by their respective performance metrics and 2) computing the mean of the performance metrics ranked in the middle 50%. To compare the central performance difference between the AIDA and baseline models, we performed two-sided Welch’s t-tests with a 5 percent rejection level on the MAE-IQM and ADE-IQM values computed from different random seeds, under the assumption that the performance distributions of two models may have different variances [64, 65].

## 3 Results and Discussion

### 3.1 Offline Performance Comparison

Figure 3 shows the offline evaluation results for each model with the model type on the x-axis and the IQMs of acceleration prediction MAEs averaged across the testing dataset on the y-axis. The color of the points in the figure represents the testing condition, and each point corresponds to a random seed’s result. The points are randomly distributed around each x-axis label for clarity.
Dispersion on the y-axis indicates the model’s sensitivity to initial training conditions. The plot illustrates that the AIDA had the lowest MAE-IQM in the same-lane tests, followed by BC-RNN, BC-MLP, and IDM. The corresponding pairwise Welch’s t-test results in Table 1 show that these differences are significant. In the new-lane tests, both the AIDA and neural network BC models significantly outperformed the IDM. The AIDA’s performance had higher variance than that of the BC models; however, the difference in central tendency was not significant. These results show that in the current car following setting, the AIDA and BC generalized better to the new-lane scenario than the IDM, most likely because the IDM rules were unable to adapt to traffic speed and density different from the training dataset. The stronger predictive performance of the AIDA and BC-RNN on the same-lane data can be attributed to the fact that driver acceleration actions depend on the full history of past observations rather than just the most recent observation, which can be modeled by the recurrent structure of the AIDA and BC-RNN. The figure also shows that for the same-lane tests, the AIDA had more variance across the random seeds compared to other models, suggesting that it is the most sensitive to local optima in the training process.

Figure 3: Offline evaluation MAE-IQM. Each point corresponds to a random seed used to initialize model training and its color corresponds to the testing condition of either same-lane or new-lane.

Table 1: Two-sided Welch’s t-test results of offline MAE-IQM against baseline models. Asterisks indicate statistical significance with $\alpha=0.05$.

Baseline | Comparison | t(df=14) | p-value
---|---|---|---
IDM | same-lane | t=37.58 | p<0.001*
BC-MLP | same-lane | t=32.38 | p<0.001*
BC-RNN | same-lane | t=17.31 | p<0.001*
IDM | new-lane | t=33.21 | p<0.001*
BC-MLP | new-lane | t=0.35 | p=0.73
BC-RNN | new-lane | t=-0.12 | p=0.90

### 3.2 Online Performance Comparison

Figure 4 shows the IQM of each model’s ADEs from dataset trajectories in the online evaluations using the same format as the offline evaluation results. In the same-lane testing condition, all models had ADE-IQM values between 1.8 m and 2.8 m, which is less than the length of a standard sedan ($\approx$ 4.8 m; [66]). Among all models, BC-MLP achieved the lowest ADE values for both the same-lane and new-lane conditions, followed by the AIDA, IDM, and BC-RNN. Furthermore, both the AIDA and BC models achieved lower ADE-IQM in the new-lane setting compared to the same-lane setting; however, the IDM achieved higher ADE-IQM in the new-lane setting. The Welch’s t-test results in Table 2 show that the AIDA’s online test performance is significantly different from that of all baseline models in both the same-lane and new-lane settings (p $\leq$ 0.01). These findings confirm that the AIDA and BC models generalized better to the new-lane setting than the IDM and suggest that the AIDA’s average online trajectory-matching ability is significantly better than that of the IDM and BC-RNN, although BC-MLP is significantly better than the AIDA.

Figure 4: Online evaluation ADE-IQM. Each point corresponds to a random seed used to initialize model training and its color corresponds to the testing condition of either same-lane or new-lane.

Table 2: Two-sided Welch’s t-test results of online ADE-IQM against baseline models. Asterisks indicate statistical significance with $\alpha=0.05$.
Baseline | Comparison | t(df=14) | p-value
---|---|---|---
IDM | same-lane | t=3.05 | p<0.01*
BC-MLP | same-lane | t=-5.46 | p<0.001*
BC-RNN | same-lane | t=8.73 | p<0.001*
IDM | new-lane | t=58.18 | p<0.001*
BC-MLP | new-lane | t=-3.77 | p<0.001*
BC-RNN | new-lane | t=6.87 | p<0.001*

Figure 5 shows the lead vehicle collision rates for each random seed and model using the same format as Figure 4. The figure illustrates that in the same-lane condition, the random seeds for BC-MLP, BC-RNN, and the AIDA had more collisions than the IDM (0% collision rate across all seeds). In particular, BC-RNN and the AIDA had substantial differences across random seeds compared to the other models. However, the minimum collision rates for BC-MLP, BC-RNN, and the AIDA were consistent (less than or equal to 1%). In the new-lane condition, the collision rate was 0% for all four models. The higher collision rates in the same-lane data are likely due to the traffic density and complexity, which were higher in the same-lane condition compared to the new-lane condition.

Figure 5: Lead vehicle collision rate in online evaluation. Each point corresponds to a random seed used to initialize model training and its color corresponds to the testing condition of either same-lane or new-lane.

### 3.3 AIDA Interpretability Analysis

The previous sections suggest that the AIDA can capture driver car following behavior significantly better than the IDM and comparably to data-driven BC models. However, these findings have not yet addressed the interpretability of the AIDA. While there is no established metric for model interpretability, Räukur et al. [8] recommend assessments based on the ease of comprehending the connection between model input and output and of tracing model predictive errors to internal model dynamics. Given that the AIDA’s decisions are emitted from a two-step process, i.e., (1) forming beliefs about the environment and (2) selecting control actions that minimize free energy, the model’s success at the car following task depends on the two sub-processes both independently and jointly. We examined the AIDA’s learned input-output mechanism by visualizing its independent components (i.e., the observation, transition, and preference distributions) and verified them against expectations guided by driving theory [67, 43, 68]. We then examined the joint belief-action process by replaying the AIDA beliefs and diagnosing its predictions of recorded human drivers in the offline setting and its own decisions in the online setting.

#### 3.3.1 Independent Component Interpretability

Initial insights into the model input and output connections can be gained by visualizing the AIDA components, specifically its policy (Figure 6(b)), observation distribution (Figure 6(c)), and preference distribution (Figure 6(d)). These figures show 200 random samples from each state of the AIDA’s state-conditioned observation distribution, $P(o|s)$, plotted on each pair of observation modalities. Color is used to highlight relevant quantities of interest. We further used samples drawn from the INTERACTION dataset, plotted in Figure 6(a) and colored by the recorded accelerations, to facilitate interpreting the AIDA samples.

Figure 6: Visualizations of the dataset and AIDA model components. In panel (a), we plotted observations sampled from the dataset.
In panels (b), (c), and (d), we sampled 200 points from the AIDA’s state-conditioned observation distributions and plotted the sampled points for each pair of observation features. The points in each panel are colored by: (a) accelerations from the dataset, (b) the AIDA’s predicted accelerations upon observing the sampled signals from a uniform prior belief, (c) state assignments, and (d) log probabilities of the preference distribution.

Figure 6(b) illustrates the observation samples by the model’s chosen control actions. The top chart shows the samples using distance headway ($d$; x-axis) by relative velocity to the lead vehicle ($\Delta v$; y-axis), the middle chart shows distance headway by $\tau^{-1}$, and the bottom chart shows relative velocity by $\tau^{-1}$. The shape of the sampled points matches the contour of the empirical dataset (Figure 6(a)), particularly in the middle and bottom visualizations, which suggests that the learned observation model aligns with the recorded observations in the dataset. Darker green and red colors correspond to larger acceleration and deceleration magnitudes, respectively, and light yellow corresponds to near-zero control inputs. The color gradient at different regions in Figure 6(b) is consistent with that of the empirical dataset shown in Figure 6(a). This shows that the model learned an observation-to-action mapping similar to that of the empirical dataset. The mapping can be interpreted as the tendency to choose negative accelerations when the relative speed and $\tau^{-1}$ are negative and the distance headway is small, and positive accelerations in the opposite case. Furthermore, the sensitivity of the red and green color gradients with respect to distance headway shows that the model tends to accelerate whenever there is positive relative velocity, regardless of the distance headway. However, it tends to apply smaller decelerations at large distance headways for the same level of relative speed.

Figure 6(c) shows the observation samples colored by their associated discrete states. The juxtaposition of color clusters in the top panel shows that the AIDA learned to categorize observations by relative speed and distance headway, and that its categorization of relative speed is more fine-grained at small distance headways and spans a larger range of values. The middle and bottom panels show that its categorization of relative speed is highly correlated with $\tau^{-1}$, as the ordering of colors along the y-axis is approximately the same as in the top panel. They also show that the AIDA’s categorization of high-$\tau^{-1}$-magnitude states (blue and cyan clusters) has a larger span than that of low-$\tau^{-1}$-magnitude states. These patterns further establish that the AIDA has learned a representation of the environment consistent with the dataset. At the same time, it can be interpreted as a form of satisficing in that the model represents low-urgency, large-distance-headway states with less granularity [69].

Figure 6(d) shows the observation samples colored by the log of their preference probability, $\tilde{P}(o)=\sum_{s}\tilde{P}(s)P(o|s)$, where higher preference probability (i.e., desirability) corresponds to brighter colors (e.g., yellow) and lower desirability corresponds to darker colors (e.g., purple).
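For reference, the preference probability used for this coloring is a simple mixture over the discrete states; a minimal sketch (with our own function and variable names, assuming the per-state log densities are available) is:

```python
import numpy as np
from scipy.special import logsumexp

def log_preference(log_obs_dens, log_pref_state):
    """log P~(o) = logsumexp_s [ log P~(s) + log P(o|s) ].

    log_obs_dens:   (S,) array, log P(o|s) for one observation o under each state
    log_pref_state: (S,) array, log of the desired state distribution P~(s)
    """
    return logsumexp(log_obs_dens + log_pref_state)
```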
The figure shows that the highest preference probability corresponds to observations of zero $\tau^{-1}$, zero relative velocity, and a distance headway of 18 m (see the center region of the middle chart, and the yellow circle at the left-center of the top chart). This aligns with the task-difficulty homeostasis hypothesis that drivers prefer states in which the crash risk is manageable [67] and not increasing. It is also consistent with the observed driver behavior in Figure 6(a), where drivers tend to maintain low accelerations (light yellow points) within the same regions. Overall, these results show a clear mapping between the AIDA’s perceptual (Figure 6(c)) and control (Figure 6(d) and 6(b)) behavior that is both consistent with the observed data and straightforwardly illustrated using samples from the fitted model distributions. This mapping facilitates predictions of the AIDA’s reaction to observations without querying the model, which is an important dimension of interpretability in real-world model verification [8].

#### 3.3.2 Joint Model Interpretability

While the previous analysis illustrates the interpretability of individual model components, the interaction between components introduces additional challenges for overall model interpretability. To address this, we analyzed two same-lane scenarios where the AIDA made sub-optimal decisions in the model testing phase: one from the offline evaluations, where the AIDA’s predictions had the largest MAE, and one from the online evaluations, where the AIDA generated a rear-end collision with the lead vehicle. We first visualized the AIDA’s beliefs (computed by (4)) and policies (computed by (9)) as the model generated actions and then used those visualizations to demonstrate how the transparent input-output mechanism in the AIDA can be used to mitigate the sub-optimal decisions.

The chosen offline evaluation trajectory is visualized in Figure 7. The left-column charts show the three observation features over time. The right-column charts show the ground truth action probabilities (top), the action probabilities predicted by the AIDA (middle), and the environment state probabilities $P(s|h)$ inferred by the AIDA (bottom), all over time. In the right-middle and right-bottom charts, the action and belief state indices are sorted by the mean acceleration and $\tau^{-1}$ value of each state to facilitate alignment with the left and top-right charts. We labeled the actions by their corresponding means but not the belief states, because the latter represent multi-dimensional observation categorizations (see Figure 6(c)). The bottom-right chart shows that the inferred belief patterns closely followed the observed relative speed and $\tau^{-1}$ in the left-middle and left-bottom charts with high precision, i.e., with probabilities close to 1. The predicted action probabilities in the right-middle chart followed the trend of the ground truth actions; however, they exhibited substantially higher uncertainty at most time steps and multi-modality at $t=1\text{ s}$ and $t=12\text{ s}$, where one of the predicted modes coincided with the true actions. Given that the inferred beliefs were precise, the uncertain and multi-modal action predictions were likely caused by inter-driver variability in the dataset, where drivers experienced similar belief states but selected different actions.
Alternatively, this uncertainty may be caused by drivers holding highly different beliefs after experiencing similar observations, in which case a simple policy would be sufficient to predict their actions. In either case, the error in the AIDA’s predictions can be attributed to inconsistency between the belief trajectories and the action predictions.

Figure 7: Visualizations of a same-lane offline evaluation trajectory where the AIDA had the highest prediction MAE. The charts in the left column show distance headway, relative speed, and $\tau^{-1}$ signals observed by the model over time. The binary heat maps in the right column show the ground truth action probabilities (top), action probabilities predicted by the AIDA (middle), and the corresponding belief states (bottom) over time (x-axis), where darker colors correspond to higher probabilities. The belief state and action indices are sorted by the mean $\tau^{-1}$ and acceleration value of each state, respectively.

The chosen online evaluation trajectory, which resulted in a rear-end collision with the lead vehicle, is shown in Figure 8, plotted using the same format as Figure 7. The duration of the crash event is highlighted by the red square in the bottom-left chart, where the sign of the $\tau^{-1}$ values inverted instantly when the overlap between the ego and lead vehicle bounding boxes first occurred and when it eventually ended. The AIDA initially made the correct and precise decision to brake; however, its predictions for high-magnitude actions became substantially less precise prior to the collision ($t>1\text{ s}$; see the right-middle chart). This led to the model failing to stop fully before colliding with the lead vehicle. The belief pattern shows that the AIDA tracked the initial decreasing values of relative speed and $\tau^{-1}$ but did not further respond to the increasing magnitude of $\tau^{-1}$ 3 seconds prior to the crash (starting at $t=1.6\text{ s}$). These findings show that the model exhibited the correct behavior of being "shocked" by out-of-sample near-crash observations; however, the learned categorical belief representation was not able to extrapolate beyond the data from the crash-free INTERACTION dataset.

Figure 8: Visualizations of a same-lane online evaluation trajectory where the AIDA generated a rear-end collision with the lead vehicle. This figure shares the same format as Figure 7. The red square in the bottom-left chart represents the duration of the rear-end crash event where the vehicle controlled by the AIDA had an overlapping bounding box with the lead vehicle.

The analysis of the near-crash AIDA beliefs suggests that editing the AIDA’s learned environment dynamics model (i.e., the transition and observation distributions) to properly recognize near-crash observation signals could likely avoid the current crash. To demonstrate the utility of being able to make precise model-editing decisions based on the interpretability analysis, we tested a modification of the AIDA that replaces its learned dynamics model with a physics-based dynamics model assuming constant lead vehicle velocity in the model predictions. Although the physics-based dynamics model does not capture the stochasticity in the lead vehicle behavior, it is sufficient for mitigating the current crash given its ability to accurately predict near-crash observations.
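A minimal sketch of one deterministic prediction step under this assumption, mirroring (18) and the observation mapping in Appendix A.3, is shown below. The function and variable names are ours, and $\delta t=0.1\text{ s}$ matches the 10 Hz sampling of the dataset.

```python
def physics_step(d, dv, a, dt=0.1):
    """Constant-lead-velocity propagation of (d, dv), as in (18).

    d:  distance headway (m)
    dv: relative velocity (m/s)
    a:  ego longitudinal acceleration (m/s^2)
    """
    d_next = d - dt * dv        # headway update
    dv_next = dv - dt * a       # relative-velocity update from the ego acceleration only
    tau_inv = dv_next / d_next  # observation mapping tau^{-1} ~ dv / d
    return d_next, dv_next, tau_inv
```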
We evaluated this new model in the same online testing scenarios as the AIDA, where the control actions were generated from a model-predictive controller (MPC [70]) using the AIDA’s preference distribution as the reward function (for implementation details see Appendix A.3). The AIDA-MPC mitigated all crashes when deployed in the same scenarios as the AIDA, as our analysis predicted. However, it generated substantially more high-ADE trajectories than the AIDA, most likely due to its lack of representation of lead vehicle stochasticity.

The analyses in this section show that the decision-making structure in the AIDA enables modelers to reason about the training dataset’s effect on the learned model behavior. To the best of our knowledge, this analysis is not possible with neural network BC models using existing interpretability tools. We also showed how this understanding can be used to edit parts of the model to achieve desired safety criteria.

## 4 General Discussion

In this article, we introduced and evaluated a novel active inference model of driver car following behavior (AIDA). The proposed AIDA significantly outperformed the IDM and neural network BC models in offline predictions in the same-lane condition and outperformed the IDM while performing similarly to BC models in the new-lane condition. Additionally, the AIDA achieved significantly lower average deviation error than the IDM and BC-RNN in the online control settings. However, the results showed that the AIDA was sensitive to initial training conditions, which resulted in higher rates of lead vehicle collisions in the same-lane condition compared to the IDM and BC-MLP. While BC had comparable or better performance than the AIDA in action prediction and control, the AIDA is substantially more interpretable than BC models. In contrast to approximate explanatory methods for BC neural networks, we showed that the AIDA’s decision-making process can be directly accessed by sampling and visualizing the AIDA distributions. Further, we illustrated how the AIDA’s joint belief and action trajectories could be used to understand model errors and correct them. This level of understanding and diagnostic analysis is central to real-world model inspection and verification, which are essential components of interpretability [71, 8].

These results partially confirm our hypothesis that balancing the relative strengths of rule-based and data-driven models, specifically using the active inference framework, results in better predictions of driver behavior and a more nuanced understanding of driver cognitive dynamics during car following. In contrast to fixed rule-based models like the IDM, the AIDA can incorporate additional "rules" in its state and policy priors while maintaining the flexibility provided by its probabilistic representation. In contrast to purely data-driven models, learning in the AIDA is constrained by its probability distributions and structure. This balance preserves interpretability but still allows the model to be flexible to new data. Our findings here suggest that this flexibility comes at the cost of sensitivity to local optima in the training process, as evidenced by the collision rates across random seeds in the online evaluations. Further, our findings suggest that the AIDA, like other data-driven approaches, may be limited by the scope of the data used in training (e.g., the crash limitation illustrated in Figure 8).
Our findings here extend prior applications of active inference theory in driving and driver models and illustrate the value of rule-based modeling. Engström and colleagues [43] presented active inference as a general theory of driving behavior with qualitative illustrations, highlighting the need to separate pragmatic (risk) and epistemic (uncertainty) behavior and relaxing the requirement of a strictly accurate environment model among human drivers. Portions of this theory have been enacted in other driver models, including [72, 31, 17]. The model in [17] includes the concept of balancing rule-based and data-driven models, but the focus is primarily on physical concepts rather than the psychological concepts in the AIDA. The model presented by Pekkanen et al. [72] includes an attention mechanism driven by the uncertainty of desired actions. The desired actions were computed using the IDM, and action uncertainty was obtained by propagating state uncertainty computed from a Bayesian filter. The most notable differences between Pekkanen et al.’s model and the AIDA are that their model assumes an accurate environment model and uses the IDM to generate behavior. Our results show that an integrated perception-action system is important to the AIDA’s trajectory-matching performance. However, we did not investigate epistemic behavior in the model due to the simplicity of the car following task. The AIDA proposed here also extends our prior work [31] to model fine-grained longitudinal control, validate that model against established benchmarks, and provide a more detailed interpretability analysis.

In addition to the contributions to driver modeling, this work extends research on human perception and control modeling. Our simultaneous estimation of human preferences, understanding of environment dynamics (i.e., transition probabilities), and perceptual uncertainties (i.e., observation probabilities), together with our use of data from a complex driving environment, differentiates this work from [73, 59, 74]. Our findings here suggest that the AIDA can be extended to complex environments successfully, although it is sensitive to training data and model parameterization. Our use of a data-driven prior distribution, i.e., (11), to prevent estimating transition and observation parameters that are highly inconsistent with actual traffic dynamics and to reduce unidentifiability is also novel and differentiates this work from [57] and [75]. Our visualizations of model preferences and beliefs in Figure 6(d) and Figures 7-8 show that the proposed data-driven prior leads to preference and dynamics estimates consistent with the observed data and driver behavior theories.

Our work is limited by the following aspects. First, we have assumed three driver observation modalities: distance headway, relative speed, and $\tau^{-1}$ with respect to the lead vehicle. However, human drivers are known to monitor other surrounding vehicles while driving [6] and to have broader visual sampling [76]. Second, our parameterization of discrete states has limited the expressivity of the model and prevented inductive biases such as the smoothness of physical dynamics from being encoded. The limited dataset coverage, e.g., the lack of crashes, prevented the learned dynamics from generalizing to some out-of-distribution scenarios. The combination of model and data insufficiency made it difficult to recognize near-crash states and resulted in substantially more lead vehicle crashes than BC-MLP and the IDM.
Third, since the INTERACTION dataset was collected on a highway, it likely contains considerable heterogeneity in driving behavior. This is shown in the uncertain and multi-modal predictions in Figure 7, as the model had to explain drivers who took different actions upon observing similar signals. While we anticipate that incorporating additional observation modalities, increasing the state space dimension, and applying the model to alternative driving scenarios would be straightforward under the current model formulation, doing so would impose additional requirements on dataset quality and diversity. We thus recommend that future work consider general methods for incorporating domain knowledge into more expressive generative models to combat dataset limitations and for modeling heterogeneity in naturalistic driver behavior. The results here suggest that these extensions may alleviate many of the current model limitations.

## 5 Conclusions

We proposed a novel active inference model of driver behavior (AIDA). Using car following data, we showed that the AIDA significantly outperformed the rule-based IDM on all metrics and performed comparably with the data-driven neural network benchmarks. Using an interpretability analysis, we showed that the structure of the AIDA provides greater transparency of its input-output mechanics than the neural network models. Future work should focus on training with data from more diverse driving environments and examining model extensions that can capture heterogeneity across drivers.

## 6 Acknowledgements

Support for this research was provided in part by the U.S. Department of Transportation (DOT), University Transportation Centers Program to the Safety through Disruption University Transportation Center (451453-19C36) and the UK Engineering and Physical Sciences Research Council (EPSRC; EP/S005056/1). Thanks to advisers J. Engstrom and M. O’Kelly from Waymo, who helped set the technical direction, identified relevant published research, and advised on the scope and structuring of this publication, independent of the support this research received from USDOT.

## Appendix A Appendix

### A.1 BC Implementation

For BC-MLP, we used a two-layer MLP network with ReLU activation and 40 hidden units in each layer. For BC-RNN, we used a two-layer MLP network on top of a single-layer GRU network with ReLU activation and 30 hidden units in each layer. The GRU layer only takes in past observations, not past actions. We found that a larger number of hidden units in the BC-RNN model led to significant overfitting. Both BC-MLP and BC-RNN receive 3 input observations and output probability distributions over 15 discrete actions.

### A.2 AIDA Implementation

The AIDA implementation follows the value iteration network and QMDP network approaches [75, 57] to enable end-to-end training in PyTorch [77]. We used a state dimension of 20, an action dimension of 15, and a maximum planning horizon of 30 steps (3 seconds). The Normalizing Flow network consisted of a Gaussian mixture base distribution and a two-layer MLP network with ReLU activation and 30 hidden units in each layer. For each mini-batch of observation-action sequences, we first computed the log likelihood of the observations at all time steps and used (4) to compute the belief sequences. We then computed the cumulative EFE in (8) and the resulting optimal policy in (9) for each inferred belief using the QMDP approximation method [56].
We evaluated the dataset action likelihood using a weighted average of optimal policies over different horizons:

$\displaystyle\pi(a|b)=\sum_{H}\pi(a|b,H)P(H)$ (14)

where $P(H)$ is a truncated Poisson distribution up to the maximum planning horizon. The QMDP method assumes the belief-action value can be approximated as a weighted average of the state-action values:

$\displaystyle\mathcal{G}^{*}(b_{t},a_{t})=\sum_{s_{t}}b_{t}(s_{t})\mathcal{G}^{*}(s_{t},a_{t})$ (15)

where

$\displaystyle\mathcal{G}^{*}(s_{t},a_{t})=EFE(s_{t},a_{t})+\log\pi(a_{t}|b_{t})+\sum_{s_{t+1}}P(s_{t+1}|s_{t},a_{t})\mathcal{G}^{*}(s_{t+1})$ (16)

$\displaystyle EFE(s_{t},a_{t})=D_{KL}(P(s_{t+1}|s_{t},a_{t})||\tilde{P}(s_{t+1}))+\mathbb{E}_{P(s_{t+1}|s_{t},a_{t})}[\mathcal{H}(P(o_{t+1}|s_{t+1}))]$ (17)

and $\forall s\in\mathcal{S},\mathcal{G}^{*}(s_{t+H+1})=0$. The combination of the QMDP approximation and computing the observation entropy in (17) using the Gaussian base distributions reduced the model’s ability to evaluate state uncertainty. However, given the low state uncertainty shown in Figure 7 and Figure 8 (i.e., the nearly deterministic belief states in the lower-right charts), these approximations do not significantly impact the current results while providing the benefit of computational tractability. Another difference between our implementation and the common active inference presentation is that we performed exact Bayesian state inference (i.e., (4)) instead of approximate variational inference (e.g., in [27]). This does not impact the current results since both methods arrive at the same solution in the discrete state setting.

### A.3 AIDA-MPC Implementation

In the AIDA-MPC model, we replaced the learned discrete environment dynamics model with a physics-based dynamics model with deterministic state transition and observation functions. The physics-based model had the same three observation modalities: $d,\Delta v,\tau^{-1}$. We defined the state space as $\\{d,\Delta v\\}$. Assuming constant lead vehicle velocity, the state transition function is the following linear function:

$\displaystyle\begin{bmatrix}d^{\prime}\\\ \Delta v^{\prime}\end{bmatrix}=\begin{bmatrix}1&-\delta t\\\ 0&1\end{bmatrix}\begin{bmatrix}d\\\ \Delta v\end{bmatrix}+\begin{bmatrix}0\\\ -\delta t\end{bmatrix}a$ (18)

The state-to-observation mappings for $d$ and $\Delta v$ are identity functions. The observation $\tau^{-1}$ is computed as $\tau^{-1}\approx\Delta v/d$. Given this dynamics model, we used a Cross-Entropy Method (CEM [70]) model-predictive controller to generate actions, treating the AIDA’s log preference probability over observations as the reward function, i.e., $R(o)=\log\sum_{s}P(o|s)\tilde{P}(s)$. At each time step, the CEM controller is initialized with a Gaussian distribution over finite-horizon action sequences. It then iteratively refines the distribution by sampling $N$ action sequences from the distribution, simulating the action sequences forward using the dynamics model, and refitting the Gaussian distribution to the top $K$ samples. Finally, it selects the first step of the mean action sequence of the final Gaussian distribution as the action output. We used a CEM planning horizon of 6 time steps (0.6 seconds), sampled $N=50$ action sequences, selected the top $K=5$ sequences, and refined the distribution for 20 iterations.

### A.4 Parameter Counts

The number of parameters in each model is listed in Table 3.

Table 3: Parameter count of all models.
| IDM | BC-MLP | BC-RNN | AIDA
---|---|---|---|---
Count | 6 | 4125 | 6465 | 7670

### A.5 AIDA vs. AIDA-MPC

Figures 9(a) and 9(b) show the AIDA’s and AIDA-MPC’s lead vehicle collision rate and ADE for each tested trajectory, respectively, where each point corresponds to the result of a trajectory. The shadows in Figure 9(b) represent the density of each model’s ADEs, where a wider shadow represents higher density.

Figure 9: Online same-lane evaluation results of AIDA and AIDA-MPC. Each point represents a trajectory in the test set. The AIDA-MPC replaces the AIDA’s dynamics model with a physics-based dynamics model and plans by treating the AIDA’s preference distribution as a reward function using model-predictive control. (a) Lead vehicle collision rate of each trajectory. (b) ADE of each trajectory. Wider shadows represent higher density of the ADE values.

## References

* Scheel et al. [2022] Scheel, O., L. Bergamini, M. Wolczyk, B. Osiński, and P. Ondruska, Urban driver: Learning to drive from real-world demonstrations using policy gradients. In _Conference on Robot Learning_ , PMLR, 2022, pp. 718–728. * Scanlon et al. [2021] Scanlon, J. M., K. D. Kusano, T. Daniel, C. Alderson, A. Ogle, and T. Victor, Waymo simulated driving behavior in reconstructed fatal crashes within an autonomous vehicle operating domain. _Accident Analysis & Prevention_, Vol. 163, 2021, p. 106454. * Bärgman et al. [2017] Bärgman, J., C.-N. Boda, and M. Dozza, Counterfactual simulations applied to SHRP2 crashes: The effect of driver behavior models on safety benefit estimations of intelligent safety systems. _Accident Analysis & Prevention_, Vol. 102, 2017, pp. 165–180. * Roesener et al. [2017] Roesener, C., J. Hiller, H. Weber, and L. Eckstein, How safe is automated driving? Human driver models for safety performance assessment. In _2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC)_ , IEEE, 2017, pp. 1–7. * Rhinehart et al. [2019] Rhinehart, N., R. McAllister, K. Kitani, and S. Levine, Precog: Prediction conditioned on goals in visual multi-agent settings. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , 2019, pp. 2821–2830. * Brown [2017] Brown, B., The social life of autonomous cars. _Computer_ , Vol. 50, No. 2, 2017, pp. 92–96. * Markkula et al. [2022] Markkula, G., Y.-S. Lin, A. R. Srinivasan, J. Billington, M. Leonetti, A. H. Kalantari, Y. Yang, Y. M. Lee, R. Madigan, and N. Merat, Explaining human interactions on the road requires large-scale integration of psychological theory, 2022. * Räukur et al. [2022] Räukur, T., A. Ho, S. Casper, and D. Hadfield-Menell, Toward transparent ai: A survey on interpreting the inner structures of deep neural networks. _arXiv preprint arXiv:2207.13243_ , 2022. * Alambeigi et al. [2020] Alambeigi, H., A. D. McDonald, and S. R. Tankasala, _Crash Themes in Automated Vehicles: A Topic Modeling Analysis of the California Department of Motor Vehicles Automated Vehicle Crash Database_ , 2020. * Novakazi et al. [2021] Novakazi, F., M. Johansson, H. Strömberg, and M. Karlsson, Levels of what? Investigating drivers’ understanding of different levels of automation in vehicles. _Journal of Cognitive Engineering and Decision Making_ , Vol. 15, No. 2-3, 2021, pp. 116–132. * Saifuzzaman and Zheng [2014] Saifuzzaman, M. and Z. Zheng, Incorporating human-factors in car-following models: a review of recent developments and research needs. _Transportation research part C: emerging technologies_ , Vol. 48, 2014, pp. 379–403.
* McDonald et al. [2019] McDonald, A. D., H. Alambeigi, J. Engström, G. Markkula, T. Vogelpohl, J. Dunne, and N. Yuma, Toward computational simulations of behavior during automated driving takeovers: a review of the empirical and modeling literatures. _Human factors_ , Vol. 61, No. 4, 2019, pp. 642–688. * Kesting et al. [2009] Kesting, A., M. Treiber, and D. Helbing, Agents for traffic simulation. _Multi-agent systems: Simulation and applications_ , Vol. 5, 2009. * Hamdar and Mahmassani [2008] Hamdar, S. H. and H. S. Mahmassani, From existing accident-free car-following models to colliding vehicles: exploration and assessment. _Transportation research record_ , Vol. 2088, No. 1, 2008, pp. 45–56. * Talebpour and Mahmassani [2016] Talebpour, A. and H. S. Mahmassani, Influence of connected and autonomous vehicles on traffic flow stability and throughput. _Transportation Research Part C: Emerging Technologies_ , Vol. 71, 2016, pp. 143–163. * Zhou et al. [2017] Zhou, M., X. Qu, and X. Li, A recurrent neural network based microscopic car following model to predict traffic oscillation. _Transportation research part C: emerging technologies_ , Vol. 84, 2017, pp. 245–264. * Mo et al. [2021] Mo, Z., R. Shi, and X. Di, A physics-informed deep learning paradigm for car-following models. _Transportation research part C: emerging technologies_ , Vol. 130, 2021, p. 103240. * Zhu et al. [2018] Zhu, M., X. Wang, and Y. Wang, Human-like autonomous car-following model with deep reinforcement learning. _Transportation research part C: emerging technologies_ , Vol. 97, 2018, pp. 348–368. * Bhattacharyya et al. [2020] Bhattacharyya, R., B. Wulfe, D. Phillips, A. Kuefler, J. Morton, R. Senanayake, and M. Kochenderfer, Modeling human driving behavior through generative adversarial imitation learning. _arXiv preprint arXiv:2006.06412_ , 2020. * Spencer et al. [2021] Spencer, J., S. Choudhury, A. Venkatraman, B. Ziebart, and J. A. Bagnell, Feedback in imitation learning: The three regimes of covariate shift. _arXiv preprint arXiv:2102.02872_ , 2021. * Pomerleau [1988] Pomerleau, D. A., Alvinn: An autonomous land vehicle in a neural network. _Advances in neural information processing systems_ , Vol. 1, 1988. * Codevilla et al. [2019] Codevilla, F., E. Santana, A. M. López, and A. Gaidon, Exploring the limitations of behavior cloning for autonomous driving. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , 2019, pp. 9329–9338. * Kumar et al. [2021] Kumar, A., J. Hong, A. Singh, and S. Levine, Should I Run Offline Reinforcement Learning or Behavioral Cloning? In _International Conference on Learning Representations_ , 2021. * Igl et al. [2022] Igl, M., D. Kim, A. Kuefler, P. Mougin, P. Shah, K. Shiarlis, D. Anguelov, M. Palatucci, B. White, and S. Whiteson, Symphony: Learning Realistic and Diverse Agents for Autonomous Driving Simulation. _arXiv preprint arXiv:2205.03195_ , 2022. * Suo et al. [2021] Suo, S., S. Regalado, S. Casas, and R. Urtasun, Trafficsim: Learning to simulate realistic multi-agent behaviors. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2021, pp. 10400–10409. * Friston [2010] Friston, K., The free-energy principle: a unified brain theory? _Nature reviews neuroscience_ , Vol. 11, No. 2, 2010, pp. 127–138. * Friston et al. [2017] Friston, K., T. FitzGerald, F. Rigoli, P. Schwartenbeck, and G. Pezzulo, Active inference: a process theory. _Neural computation_ , Vol. 29, No. 1, 2017, pp. 1–49. * Doya et al. [2007] Doya, K., S. 
# Decreasing the mean subtree order by adding $k$ edges Stijn Cambie Supported by Internal Funds of KU Leuven (PDM fellowship PDMT1/22/005), UK Research and Innovation Future Leaders Fellowship MR/S016325/1 and the Institute for Basic Science (IBS-R029-C4), <EMAIL_ADDRESS>Department of Computer Science, KU Leuven Campus Kulak, 8500 Kortrijk, Belgium. Guantao Chen Partially supported by NSF grants DMS-1855716 and DMS-2154331<EMAIL_ADDRESS>Dept. of Mathematics and Statistics, Georgia State University, Atlanta, GA 30303 Yanli Hao Corresponding author. Partially supported by the GSU Provost’s Dissertation Fellowship<EMAIL_ADDRESS>School of Mathematics, Georgia Institute of Technology, Atlanta, GA 30332 Nizamettin Tokar<EMAIL_ADDRESS>Department of Mathematics, Usak University, Usak, Turkey 64200 ###### Abstract The mean subtree order of a given graph $G$, denoted $\mu(G)$, is the average number of vertices in a subtree of $G$. Let $G$ be a connected graph. Chin, Gordon, MacPhee, and Vincent [J. Graph Theory, 89(4): 413-438, 2018] conjectured that if $H$ is a proper spanning supergraph of $G$, then $\mu(H)>\mu(G)$. Cameron and Mol [J. Graph Theory, 96(3): 403-413, 2021] disproved this conjecture by showing that there are infinitely many pairs of graphs $H$ and $G$ with $H\supset G$, $V(H)=V(G)$ and $|E(H)|=|E(G)|+1$ such that $\mu(H)<\mu(G)$. They also conjectured that for every positive integer $k$, there exists a pair of graphs $G$ and $H$ with $H\supset G$, $V(H)=V(G)$ and $|E(H)|=|E(G)|+k$ such that $\mu(H)<\mu(G)$. Furthermore, they proposed that $\mu(K_{m}+nK_{1})<\mu(K_{m,n})$ provided $n\gg m$. In this note, we confirm these two conjectures. Keywords: Mean subtree order; Subtree ## 1 Introduction Graphs in this paper are simple unless otherwise specified. Let $G$ be a graph with vertex set $V(G)$ and edge set $E(G)$. The order of $G$, denoted by $|G|$, is the number of vertices in $G$, that is, $|G|=|V(G)|$. The complement of $G$, denoted by $\overline{G}$, is the graph on the same vertex set as $G$ such that two distinct vertices of $\overline{G}$ are adjacent if and only if they are not adjacent in $G$. For an edge subset $F\subseteq E(\overline{G})$, denote by $G+F$ the graph obtained from $G$ by adding the edges of $F$. For a vertex subset $U\subseteq V(G)$, denote by $G-U$ the graph obtained from $G$ by deleting the vertices of $U$ and all edges incident with them. For any two graphs $G_{1},G_{2}$ with $V(G_{1})\cap V(G_{2})=\emptyset$, denote by $G_{1}+G_{2}$ the graph obtained from $G_{1},G_{2}$ by adding an edge between any two vertices $v_{1}\in V(G_{1})$ and $v_{2}\in V(G_{2})$. A tree is a graph in which every pair of distinct vertices is connected by exactly one path. A subtree of a graph $G$ is a subgraph of $G$ that is a tree. By convention, the empty graph is not regarded as a subtree of any graph. The mean subtree order of $G$, denoted $\mu(G)$, is the average order of a subtree of $G$. Jamison [5, 6] initiated the study of the mean subtree order in the 1980s, considering only the case that $G$ is a tree. In [5], he proved that $\mu(T)\geq\frac{n+2}{3}$ for any tree $T$ of order $n$, with this minimum achieved if and only if $T$ is a path, and that $\mu(T)$ can be very close to its order $n$. Jamison’s work on the mean order of subtrees of a tree has received considerable attention [4, 8, 9, 10, 11].
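Since $\mu$ is defined by a finite average, the definition and Jamison's path bound are easy to check by brute force on small examples. The sketch below is ours, not part of the original note; it assumes the networkx library and only handles trees, where every subtree is the subgraph induced by a connected vertex subset.

```python
from itertools import combinations
import networkx as nx

def mean_subtree_order(T):
    """Exact mu(T) for a tree T by enumerating connected vertex subsets;
    for a tree, these are exactly its subtrees."""
    nodes = list(T.nodes)
    total = count = 0
    for r in range(1, len(nodes) + 1):
        for subset in combinations(nodes, r):
            if nx.is_connected(T.subgraph(subset)):
                total += r
                count += 1
    return total / count

# Jamison's lower bound (n + 2) / 3 is attained exactly by the path P_n.
for n in range(2, 9):
    assert abs(mean_subtree_order(nx.path_graph(n)) - (n + 2) / 3) < 1e-9
```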
At the 2019 Spring Sectional AMS meeting in Auburn, Jamison presented a survey that provided an overview of the current state of open questions concerning the mean subtree order of a tree, some of which have been resolved [1, 7]. Figure 1: Adding the edge between $a$ and $b$ decreases the mean subtree order Recently, Chin, Gordon, MacPhee, and Vincent [3] initiated the study of subtrees of graphs in general. They believed that the parameter $\mu$ is monotonic with respect to the inclusion relationship of subgraphs. More specifically, they [3, Conjecture 7.4] conjectured that for any simple connected graph $G$, adding any edge to $G$ will increase the mean subtree order. Clearly, the truth of this conjecture would imply that $\mu(K_{n})$ is the maximum among all connected simple graphs of order $n$, but it remains unknown whether $\mu(K_{n})$ is indeed the maximum. Cameron and Mol [2] constructed some counterexamples to this conjecture by a computer search. Moreover, they found that the graph depicted in Figure 1 is the smallest counterexample to this conjecture, and that there are infinitely many graphs $G$ with $xy\in E(\overline{G})$ such that $\mu(G+xy)<\mu(G)$. In their paper, Cameron and Mol [2] initially focused on the case of adding a single edge, but they also made the following conjecture regarding adding several edges. ###### Conjecture 1.1. For every positive integer $k$, there are two connected graphs $G$ and $H$ with $G\subset H$, $V(G)=V(H)$ and $|E(H)\backslash E(G)|=k$ such that $\mu(H)<\mu(G)$. We will confirm Conjecture 1.1 by proving the following theorem, which will be presented in Section 2. ###### Theorem 1.2. For every positive integer $k$, there exist infinitely many pairs of connected graphs $G$ and $H$ with $G\subset H$, $V(G)=V(H)$ and $|E(H)\backslash E(G)|=k$ such that $\mu(H)<\mu(G)$. In the same paper, Cameron and Mol [2] also proposed the following conjecture. ###### Conjecture 1.3. Let $m,n$ be two positive integers. If $n\gg m$, then we have $\mu(K_{m}+nK_{1})<\mu(K_{m,n})$. We can derive Conjecture 1.1 from Conjecture 1.3, whose proof is presented in Section 3, by observing that for $m=2k$ the binomial coefficient ${m\choose 2}=k(2k-1)$ is divisible by $k$. Starting from $K_{m,n}$, we may therefore add the ${m\choose 2}$ edges of $K_{m}$ in $2k-1$ steps of $k$ edges each; since the mean subtree order has decreased once all of these edges are added, it must have decreased in some intermediate step. ## 2 Theorem 1.2 Let $G$ be a graph of order $n$, and let $\mathcal{T}_{G}$ be the family of subtrees of $G$. By definition, we have $\mu(G)=(\sum_{T\in\mathcal{T}_{G}}|T|)/|\mathcal{T}_{G}|$. The density of $G$ is defined by $\sigma(G)=\mu(G)/n$. More generally, for any subfamily $\mathcal{T}\subseteq\mathcal{T}_{G}$, we define $\mu(\mathcal{T})=(\sum_{T\in\mathcal{T}}|T|)/|\mathcal{T}|$ and $\sigma(\mathcal{T})=\mu(\mathcal{T})/n$. Clearly, $1\leq\mu(G)\leq n$ and $0<\sigma(G)\leq 1$. ### 2.1 The Construction Fix a positive integer $k$. For some integer $m$, let $\\{s_{n}\\}_{n\geq m}$ be a sequence of non-negative integers satisfying: (1) $2s_{n}\leq n-k-1$ for all $n\geq m$; (2) $s_{n}=o(n)$, i.e., $\lim_{n\to\infty}s_{n}/n=0$; and (3) $2^{s_{n}}\geq n^{2}$ for all $n\geq m$. Notice that many such sequences exist. Take, for instance, the sequence $\\{\lceil 2\log_{2}(n)\rceil\\}_{n\geq m}$, as in [2], where $m$ is the least positive integer such that $m-2\lceil 2\log_{2}(m)\rceil\geq k+1$. In the remainder of this paper, we fix $P$ to be a path $v_{1}v_{2}\cdots v_{n-2s_{n}}$ of order $n-2s_{n}$. Clearly, $|P|\geq k+1$.
Furthermore, let $P^{*}:=P-\\{v_{1},\dots,v_{k-1}\\}=v_{k}\cdots v_{n-2s_{n}}$. Figure 2: $G_{n}$ Let $G_{n}$ be the graph obtained from the path $P$ by joining $s_{n}$ leaves to each of the two endpoints $v_{1}$ and $w:=v_{n-2s_{n}}$ of $P$ (see Figure 2). Let $G_{n,k}:=G_{n}+\\{v_{1}w,v_{2}w,\dots,v_{k}w\\}$, that is, $G_{n,k}$ is the graph obtained from $G_{n}$ by adding $k$ new edges $e_{1}:=v_{1}w,e_{2}:=v_{2}w,\ldots,e_{k}:=v_{k}w$ (see Figure 3). Figure 3: $G_{n,k}$ Let $\mathcal{T}_{n,k}$ be the family of subtrees of $G_{n,k}$ containing the vertex set $\\{v_{1},v_{k},w\\}$ but not containing the path $P^{*}=v_{k}\cdots w$. It is worth noting that $\mathcal{T}_{n,1}$ is the family of subtrees of $G_{n,1}$ containing edge $v_{1}w$. Note that the graphs $G_{n}$ and $G_{n,1}$ defined above are actually the graphs $T_{n}$ and $G_{n}$ constructed by Cameron and Mol in [2], respectively. From the proof of Theorem 3.1 in [2], we obtain the following two results regarding the density of $G_{n},G_{n,1},\mathcal{T}_{n,1}$. ###### Lemma 2.1. $\lim\limits_{n\to\infty}\sigma(G_{n})=1$. ###### Lemma 2.2. $\lim\limits_{n\rightarrow\infty}\sigma(G_{n,1})=\lim\limits_{n\rightarrow\infty}\sigma(\mathcal{T}_{n,1})=\frac{2}{3}$. The following two technical results concerning the density of $\mathcal{T}_{n,k}$ are crucial in the proof of Theorem 1.2. The proofs of these results will be presented in Subsubsection 2.1.1 and Subsubsection 2.1.2, respectively. ###### Lemma 2.3. For any fixed positive integer $k$, $\lim\limits_{n\rightarrow\infty}\sigma(\mathcal{T}_{n,k})=\lim\limits_{n\rightarrow\infty}\sigma(\mathcal{T}_{n-k+1,1}).$ ###### Lemma 2.4. For any fixed positive integer $k$, $\lim\limits_{n\rightarrow\infty}\sigma(\mathcal{T}_{n,k})=\lim\limits_{n\rightarrow\infty}\sigma(G_{n,k}).$ The combination of Lemma 2.2, Lemma 2.3 and Lemma 2.4 immediately yields the following result. ###### Corollary 2.5. For any fixed positive integer $k$, $\lim\limits_{n\rightarrow\infty}\sigma(G_{n,k})=\frac{2}{3}$. Combining Lemma 2.1 and Corollary 2.5, we have that $\lim\limits_{n\rightarrow\infty}\sigma(G_{n,k})=\frac{2}{3}<1=\lim\limits_{n\rightarrow\infty}\sigma(G_{n})$ for any fixed positive integer $k$. By definition, we gain that $\sigma(G_{n,k})=\mu(G_{n,k})/|G_{n,k}|$ and $\sigma(G_{n})=\mu(G_{n})/|G_{n}|$. Since $|G_{n,k}|=|G_{n}|$, it follows that $\mu(G_{n,k})<\mu(G_{n})$ for $n$ sufficiently large, which in turn gives Theorem 1.2. The following result presented in [2, page 408, line -2] will be used in our proof. ###### Lemma 2.6. $|\mathcal{T}_{n,1}|=2^{2s_{n}}\cdot\binom{n-2s_{n}}{2}$. #### 2.1.1 Proof of Lemma 2.3 Let $H$ be the subgraph of $G_{n,k}$ induced by vertex set $\\{v_{1},\dots,v_{k},w\\}$ (see Figure 4). Furthermore, set $n_{1}=n-k+1$, and let $G_{n_{1}}^{+}$ be the graph obtained from $G_{n,k}$ by contracting vertex set $\\{v_{1},\dots,v_{k}\\}$ into vertex $v_{1}$ and removing any resulting loops and multiple edges (see Figure 5). Clearly, $G_{n_{1}}^{+}$ is isomorphic to $G_{n_{1},1}$. Figure 4: $H$ Figure 5: $G_{n_{1}}^{+}$ Let $T\in\mathcal{T}_{n,k}$, that is, $T$ is a subtree of $G_{n,k}$ containing the vertex set $\\{v_{1},v_{k},w\\}$ but not containing the path $P^{*}=v_{k}\cdots w$. Let $T_{1}$ be the subgraph of $H$ induced by $E(H)\cap E(T)$. Since $T$ does not contain the path $P^{*}$, we have that $T_{1}$ is connected, and so it is a subtree of $H$. 
Let $T_{2}$ be the graph obtained from $T$ by contracting vertex set $\\{v_{1},\dots,v_{k}\\}$ into the vertex $v_{1}$ and removing any resulting loops and multiple edges. Since $T_{1}$ is connected and contains vertex set $\\{v_{1},v_{k},w\\}$, it follows that $T_{2}$ is a subtree of $G_{n_{1}}^{+}$ containing edge $v_{1}w$. So, each $T\in\mathcal{T}_{n,k}$ corresponds to a unique pair $(T_{1},T_{2})$ of trees, where $T_{1}$ is a subtree of $H$ containing vertex set $\\{v_{1},v_{k},w\\}$, and $T_{2}\in\mathcal{T}_{n_{1},1}$. We also notice that $|T|=|T_{1}|+|T_{2}|-2$, where the $-2$ arises due to the fact that $T_{1}$ and $T_{2}$ share exactly two vertices $v_{1}$ and $w$. Let $\mathcal{T}_{H}^{\prime}\subseteq\mathcal{T}_{H}$ be the family of subtrees of $H$ containing vertex set $\\{v_{1},v_{k},w\\}$. By the corresponding relationship above, we have $|\mathcal{T}_{n,k}|=|\mathcal{T}_{H}^{\prime}|\cdot|\mathcal{T}_{n_{1},1}|$. Hence, we obtain that $\displaystyle\mu(\mathcal{T}_{n,k})$ $\displaystyle=$ $\displaystyle\,\frac{\sum\limits_{T\in\mathcal{T}_{n,k}}|T|}{|\mathcal{T}_{n,k}|}=\frac{\sum\limits_{T_{1}\in\mathcal{T}_{H}^{\prime}}\sum\limits_{T_{2}\in\mathcal{T}_{n_{1},1}}\left(|T_{1}|+|T_{2}|-2\right)}{|\mathcal{T}_{H}^{\prime}|\cdot|\mathcal{T}_{n_{1},1}|}$ $\displaystyle=$ $\displaystyle\,\frac{|\mathcal{T}_{H}^{\prime}|\cdot\sum\limits_{T_{2}\in\mathcal{T}_{n_{1},1}}|T_{2}|+|\mathcal{T}_{n_{1},1}|\cdot\sum\limits_{T_{1}\in\mathcal{T}_{H}^{\prime}}{|T_{1}|}-2|\mathcal{T}_{n_{1},1}|\cdot|\mathcal{T}_{H}^{\prime}|}{|\mathcal{T}_{H}^{\prime}|\cdot|\mathcal{T}_{n_{1},1}|}$ $\displaystyle=$ $\displaystyle\,\mu(\mathcal{T}_{n_{1},1})+\mu(\mathcal{T}_{H}^{\prime})-2.$ Dividing through by $n$, we further gain that $\sigma(\mathcal{T}_{n,k})=\frac{n_{1}}{n}\cdot\sigma(T_{n_{1},1})+\frac{k+1}{n}\cdot\sigma(\mathcal{T}_{H}^{\prime})-\frac{2}{n}.$ Since $\sigma(\mathcal{T}_{H}^{\prime})$ is always bounded by 1, it follows that $\lim\limits_{n\rightarrow\infty}\frac{k+1}{n}\cdot\sigma(\mathcal{T}_{H}^{\prime})=0$. Combining this with $\lim\limits_{n\rightarrow\infty}\frac{n_{1}}{n}=1$ and $\lim\limits_{n\rightarrow\infty}\frac{2}{n}=0$, we get $\lim\limits_{n\rightarrow\infty}\sigma(\mathcal{T}_{n,k})=\lim\limits_{n\rightarrow\infty}\sigma(\mathcal{T}_{n_{1},1})=\frac{2}{3}$ (by Lemma 2.2), which completes the proof of Lemma 2.3. ∎ #### 2.1.2 Proof of Lemma 2.4 Let $\overline{\mathcal{T}}_{n,k}:=\mathcal{T}_{G_{n,k}}\backslash\mathcal{T}_{n,k}$. If $\lim\limits_{n\rightarrow\infty}|\overline{\mathcal{T}}_{n,k}|/|\mathcal{T}_{n,k}|=0$, then $\lim\limits_{n\rightarrow\infty}\frac{|\overline{\mathcal{T}}_{n,k}|}{|\mathcal{T}_{n,k}|+|\overline{\mathcal{T}}_{n,k}|}=0$ because $\frac{|\overline{\mathcal{T}}_{n,k}|}{|\mathcal{T}_{n,k}|+|\overline{\mathcal{T}}_{n,k}|}\leq|\overline{\mathcal{T}}_{n,k}|/|\mathcal{T}_{n,k}|$, and so $\lim\limits_{n\rightarrow\infty}\frac{|\mathcal{T}_{n,k}|}{|\mathcal{T}_{n,k}|+|\overline{\mathcal{T}}_{n,k}|}=1$. 
Hence, $\displaystyle\lim\limits_{n\rightarrow\infty}\sigma(G_{n,k})$ $\displaystyle=$ $\displaystyle\lim\limits_{n\rightarrow\infty}\frac{\mu(G_{n,k})}{n}=\lim\limits_{n\rightarrow\infty}\frac{1}{n}\cdot\left(\frac{\sum\limits_{T\in\mathcal{T}_{n,k}}|T|}{|\mathcal{T}_{n,k}|+|\overline{\mathcal{T}}_{n,k}|}+\frac{\sum\limits_{T\in\overline{\mathcal{T}}_{n,k}}|T|}{|\mathcal{T}_{n,k}|+|\overline{\mathcal{T}}_{n,k}|}\right)$ $\displaystyle=$ $\displaystyle\lim\limits_{n\rightarrow\infty}\left(\sigma({\mathcal{T}_{n,k})}\cdot\frac{|\mathcal{T}_{n,k}|}{|\mathcal{T}_{n,k}|+|\overline{\mathcal{T}}_{n,k}|}+\sigma(\overline{\mathcal{T}}_{n,k})\cdot\frac{|\overline{\mathcal{T}}_{n,k}|}{|\mathcal{T}_{n,k}|+|\overline{\mathcal{T}}_{n,k}|}\right)=\lim\limits_{n\rightarrow\infty}\sigma({\mathcal{T}_{n,k}}).$ Thus, to complete the proof, it suffices to show that $\lim\limits_{n\rightarrow\infty}|\overline{\mathcal{T}}_{n,k}|/|\mathcal{T}_{n,k}|=0$. We now define the following two subfamilies of $\mathcal{T}_{G_{n,k}}$. * • $\mathcal{B}_{1}=\\{T\in\mathcal{T}_{G_{n,k}}\ :\ v_{1}\notin V(T)\mbox{ or }w\notin V(T)\\}$; and * • $\mathcal{B}_{2}=\\{T\in\mathcal{T}_{G_{n,k}}\ :\ T\cap P^{*}\mbox{ is a path, and $T$ contains $w$}\\}$. Recall that $\mathcal{T}_{n,k}$ is the family of subtrees of $G_{n,k}$ containing vertex set $\\{v_{1},v_{k},w\\}$ and not containing the path $P^{*}=v_{k}\cdots w$. For any $T\in\overline{\mathcal{T}}_{n,k}$, by definition, we have the following scenarios: $v_{1}\notin V(T)$, and so $T\in\mathcal{B}_{1}$ in this case; $w\notin V(T)$, and so $T\in\mathcal{B}_{1}$ in this case; $v_{k}\notin V(T)$ and $w\in V(T)$, then $T\cap P^{*}$ is a path, and so $T\in\mathcal{B}_{2}$ in this case; $P^{*}\subseteq T$, and so $T\in\mathcal{B}_{2}$ in this case. Consequently, $\overline{\mathcal{T}}_{n,k}\subseteq\mathcal{B}_{1}\cup\mathcal{B}_{2}$, which in turn gives that $|\overline{\mathcal{T}}_{n,k}|\leq|\mathcal{B}_{1}|+|\mathcal{B}_{2}|.$ (1) Let $S_{v_{1}}$ denote the star centered at $v_{1}$ with the $s_{n}$ leaves attached to it and $S_{w}$ denote the star centered at $w$ with the $s_{n}$ leaves attached to it. Then $G_{n,k}$ is the union of four subgraphs $S_{v_{1}}$, $S_{w}$, $H$, and $P^{*}$. * • Considering the subtrees of $S_{v_{1}}$ with at least two vertices and the subtrees of $S_{v_{1}}$ with a single vertex, we get $|\mathcal{T}_{S_{v_{1}}}|=(2^{s_{n}}-1)+(s_{n}+1)=2^{s_{n}}+s_{n}=2^{s_{n}}+o(2^{s_{n}})$. * • Considering the subtrees of $S_{w}$ with at least two vertices and the subtrees of $S_{w}$ with a single vertex, we get $|\mathcal{T}_{S_{w}}|=(2^{s_{n}}-1)+(s_{n}+1)=2^{s_{n}}+s_{n}=2^{s_{n}}+o(2^{s_{n}})$. * • Considering the subpaths of $P^{*}$ with at least two vertices and the subpaths of $P^{*}$ with a single vertex, we get $|\mathcal{T}_{P^{*}}|=\binom{|P^{*}|}{2}+|P^{*}|=\binom{|P^{*}|+1}{2}=\binom{n-2s_{n}-k+2}{2}\leq\frac{n^{2}}{2}$. * • The number of subpaths of $P^{*}$ containing $w$ is bounded above by $|P^{*}|=n-2s_{n}-k+1\leq n$. 
Since $s_{n}=o(n)$, we have the following two inequalities $\displaystyle|\mathcal{B}_{1}|$ $\displaystyle\leq$ $\displaystyle(s_{n}+|\mathcal{T}_{H}|\cdot|\mathcal{T}_{P^{*}}|\cdot|\mathcal{T}_{S_{w}}|)+(s_{n}+|\mathcal{T}_{H}|\cdot|\mathcal{T}_{P^{*}}|\cdot|\mathcal{T}_{S_{v_{1}}}|)$ $\displaystyle\leq$ $\displaystyle 2\left[s_{n}+|\mathcal{T}_{H}|\cdot\left(2^{s_{n}}+o(2^{s_{n}})\right)\cdot\frac{n^{2}}{2}\right]=|\mathcal{T}_{H}|\cdot\left(2^{s_{n}}\cdot n^{2}+o(2^{s_{n}}\cdot n^{2})\right)$ $\displaystyle|\mathcal{B}_{2}|$ $\displaystyle\leq$ $\displaystyle|\mathcal{T}_{S_{v_{1}}}|\cdot|\mathcal{T}_{S_{w}}|\cdot|P^{*}|\cdot|\mathcal{T}_{H}|=\left(2^{2s_{n}}\cdot n+o(2^{2s_{n}}\cdot n)\right)\cdot|\mathcal{T}_{H}|.$ Recall that $n_{1}=n-k+1$. Applying Lemma 2.6, we have $\displaystyle|\mathcal{T}_{n,k}|$ $\displaystyle=$ $\displaystyle|\mathcal{T}_{H}^{\prime}|\cdot|\mathcal{T}_{n_{1},1}|=|\mathcal{T}_{H}^{\prime}|\cdot 2^{2s_{n}}\binom{n_{1}-2s_{n}}{2}=|\mathcal{T}_{H}^{\prime}|\cdot 2^{2s_{n}}\cdot\left(\frac{n^{2}}{2}-o(n^{2})\right).$ Recall that $2^{s_{n}}\geq n^{2}$. Since $|\mathcal{T}_{H}|$ is bounded by a function of $k$ because $|H|=k+1$, we have the following two inequalities. $\displaystyle\lim_{n\to\infty}\frac{|\mathcal{B}_{1}|}{|\mathcal{T}_{n,k}|}$ $\displaystyle=$ $\displaystyle\lim_{n\to\infty}\frac{|\mathcal{T}_{H}|\cdot 2^{s_{n}}\cdot n^{2}}{|\mathcal{T}_{H}^{\prime}|\cdot 2^{2s_{n}}\cdot\frac{n^{2}}{2}}=\lim_{n\to\infty}\frac{2|\mathcal{T}_{H}|}{|\mathcal{T}_{H}^{\prime}|\cdot 2^{s_{n}}}=0$ and $\displaystyle\lim_{n\to\infty}\frac{|\mathcal{B}_{2}|}{|\mathcal{T}_{n,k}|}$ $\displaystyle=$ $\displaystyle\lim_{n\to\infty}\frac{2^{2{s_{n}}}\cdot n\cdot|\mathcal{T}_{H}|}{|\mathcal{T}_{H}^{\prime}|\cdot 2^{2s_{n}}\cdot\frac{n^{2}}{2}}=\lim_{n\to\infty}\frac{2\cdot|\mathcal{T}_{H}|}{|\mathcal{T}_{H}^{\prime}|\cdot n}=0.$ Hence, we conclude that $\lim\limits_{n\rightarrow\infty}\frac{|\mathcal{B}_{1}|+|\mathcal{B}_{2}|}{|\mathcal{T}_{n,k}|}=0$ Combining this with (1), we have that $\lim\limits_{n\rightarrow\infty}|\overline{\mathcal{T}}_{n,k}|/|\mathcal{T}_{n,k}|=0$, which completes the proof of Lemma 2.4. ∎ ### 2.2 An Alternative Construction The graphs we constructed in order to prove Theorem 1.2, and the sets of $k$ edges that were added to them, are certainly not the only examples that could be used to prove Theorem 1.2. For example, the $k$-edge set $\\{v_{1}w,v_{2}w,\dots,v_{k}w\\}$ can be replaced by the $k$-edge set $\\{v_{1}v_{n-2s_{n}},v_{2}v_{n-2s_{n}-1}$, $\ldots,v_{k}v_{n-2s_{n}-k+1}\\}$. Fix a positive integer $k$ and let $n$ be an integer much larger than $k$. We follow the notation given in Section 2. Recall that $G_{n}$ is obtained from a path $P:=v_{1}v_{2}\cdots v_{n-2s_{n}}$ by attaching two stars centered at $v_{1}$ and $v_{n-2s_{n}}$, and $\lim\limits_{n\to\infty}\sigma(G_{n})=1$. Let $E_{k}:=\\{v_{i_{1}}v_{j_{1}},v_{i_{2}}v_{j_{2}},\ldots,v_{i_{k}}v_{j_{k}}\\}$ be a set of $k$ edges in $\overline{G_{n}}$ such that $1\leq i_{1}<j_{1}\leq i_{2}<j_{2}\leq\dots\leq i_{k}<j_{k}\leq n-2s_{n}$. Let $H_{n,k}=G_{n}+E_{k}$. For convenience, we assume that $j_{\ell}-i_{\ell}$ have the same value, say $p$, for $\ell\in\\{1,\dots,k\\}$. A simple calculation shows that for each path $Q$ of order $q$, we have $\mu(Q)=(q+2)/3$ (See Jamison [5]), and so $\lim\limits_{q\to\infty}\sigma(Q)=1/3$. For any non-empty subset $F\subseteq E_{k}$, we define $\mathcal{T}_{F}=\\{T\in\mathcal{T}_{H_{n,k}}:E(T)\cap E_{k}=F\\}$. 
For any edge $v_{i_{\ell}}v_{j_{\ell}}\in F$, let $e_{\ell}=v_{i_{\ell}}v_{j_{\ell}}$ and $P_{\ell}=v_{i_{\ell}}v_{i_{\ell}+1}\cdots v_{j_{\ell}}$. Note that every tree $T\in\mathcal{T}_{F}$ is a union of a subtree of $H_{n,k}-\cup_{e_{\ell}\in F}(V(P_{\ell})\backslash\\{v_{i_{\ell}},v_{j_{\ell}}\\})$ containing $F$ and $\cup_{e_{\ell}\in F}(E(P_{\ell})-E(P_{\ell}^{*}))$ for some path $P_{\ell}^{*}\subseteq P_{\ell}$ containing at least one edge. Since $|E(P_{\ell})|=p$, the line graph of $P_{\ell}$ is a path of order $p$. Consequently, the mean of $|E(P_{\ell}^{*})|$ over subpaths of $P_{\ell}$ is $(p+2)/3$. Hence, the mean of $|E(P_{\ell})-E(P_{\ell}^{*})|$ over all subpaths $P_{\ell}^{*}$ of $P_{\ell}$ is $p-(p+2)/3=2(p-1)/3$ for each $e_{\ell}\in F$. Let $s=|F|$. Since every subtree $T\in\mathcal{T}_{F}$ has at most $n-s(p-1)$ vertices outside $\cup_{e_{\ell}\in F}(P_{\ell}-v_{i_{\ell}}-v_{j_{\ell}})$, we get the following inequality. $\mu(\mathcal{T}_{F})\leq n-s(p-1)+s\cdot\frac{2(p-1)}{3}\leq n-\frac{s(p-1)}{3}.$ By taking $p$ as a linear value of $n$, say $p=\alpha n$ ($\alpha<\frac{1}{k}$), we get $\sigma(\mathcal{T}_{F})\leq 1-s\alpha/3+s/3n<\sigma(G_{n})$ since we assume that $n$ is much larger than $k$. Since $\mathcal{T}_{H_{n,k}}=\bigcup_{F\subseteq E_{k}}\mathcal{T}_{F}$, we have $\sigma(H_{n,k})<\sigma(G_{n})$, and so $\mu(H_{n,k})<\mu(G_{n})$. ###### Remark 1. The above construction gives an example where we can delete $k$ edges in order in such a way that the mean subtree order increases in every step. ## 3 Proof of Conjecture 1.3 To simplify notation, we let $G:=K_{m}+nK_{1}$, where $V(G)=V(K_{m,n})$. Denote by $A$ and $B$ the two color classes of $K_{m,n}$ with $|A|=m$ and $|B|=n$, respectively. For each tree $T\subseteq G$, we have $E(T)\cap E(K_{m})=\emptyset$ or $E(T)\cap E(K_{m})\neq\emptyset$. This implies that the family of subtrees of $G$ consists of the subtrees of $K_{m,n}$ and the subtrees sharing at least one edge with $K_{m}$. For each tree $T\subseteq G$, let $A(T)=V(T)\cap A$ and $B(T)=V(T)\cap B$. Then, $|T|=|A(T)|+|B(T)|$. Furthermore, let $B_{2}(T)$ and $B_{\geq 2}(T)$ be the sets of vertices $v\in B(T)$ such that $d_{T}(v)=2$ and $d_{T}(v)\geq 2$, respectively. Clearly, $B_{2}(T)\subseteq B_{\geq 2}(T)\subseteq B(T)$. We define a subtree $T\in\mathcal{T}_{G}$ to be a b-stem if $B_{\geq 2}(T)=B(T)$, which means that $d_{T}(v)\geq 2$ for any $v\in B(T)$. Let $T$ be a b-stem and assume that $T$ contains $f$ edges in $K_{m}$. Counting the number of edges in $T$, we obtain $|E(T)|=f+\sum_{v\in B(T)}d_{T}(v)$. Since $T$ is a tree, we have $|E(T)|=|T|-1=|A(T)|+|B(T)|-1$. Therefore, we gain $|B(T)|=|A(T)|-1-\left(f+\sum_{v\in B(T)}(d_{T}(v)-2)\right).$ (2) Since $T$ is a b-stem, we have $\sum_{v\in B(T)}(d_{T}(v)-2)\geq 0$, which implies that $|B(T)|\leq|A(T)|-1\leq m-1$. Thus, $|T|=2|A(T)|-\left(1+f+\sum_{v\in B(T)}(d_{T}(v)-2)\right)\leq 2|A(T)|-1$. It follows that a b-stem $T\in\mathcal{T}_{G}$ is the max b-stem, i.e., the b-stem with the maximum order among all b-stems in $\mathcal{T}_{G}$, if and only if $A(T)=A$, $E(T)\cap E(K_{m})=\emptyset$, and $B_{2}(T)=B_{\geq 2}(T)$. This is equivalent to saying that $T$ is a max b-stem if and only if $|A(T)|=m$ and $|B(T)|=m-1$. The b-stem of a tree $T\subset G$ is the subgraph induced by $A(T)\cup B_{\geq 2}(T)$, and it is a subtree in $\mathcal{T}_{G}$. It is worth noting that the b-stem of every subtree $T\subset G$ exists, except for the case when $T$ is a tree with only one vertex belonging to $B$. 
Conversely, given a b-stem $T_{0}$, a tree $T\subset G$ contains $T_{0}$ as its b-stem if and only if $T_{0}\subseteq T$, $A(T)=A(T_{0})$, and $B(T)\backslash B(T_{0})$ is a set of vertices with degree 1 in $T$. Equivalently, $T$ can be obtained from $T_{0}$ by adding vertices in $B(T)\backslash B(T_{0})$ as leaves. So, there are exactly $(|A(T_{0})|+1)^{n-|B(T_{0})|}$ trees containing $T_{0}$ as their b-stem. For two non-negative integers $a,b$, where $a\geq b+1\geq 1$, let $\mathcal{T}_{G}(a,b)$ (resp. $\mathcal{T}_{K_{m,n}}(a,b)$) be the family of subtrees in $\mathcal{T}_{G}$ (resp. $\mathcal{T}_{K_{m,n}}$) whose b-stems $T_{0}$ satisfy $|A(T_{0})|=a$ and $|B(T_{0})|=b$. For any $A_{0}\subseteq A$ and $B_{0}\subseteq B$, let $f_{G}(A_{0},B_{0})$ (resp. $f_{K_{m,n}}(A_{0},B_{0})$) denote the number of b-stems $T_{0}$ spanned by $A_{0}\cup B_{0}$; that is, $A(T_{0})=A_{0}$ and $B_{\geq 2}(T_{0})=B_{0}$. Clearly, $f_{G}(A_{0},B_{0})$ and $f_{K_{m,n}}(A_{0},B_{0})$ depend only on $|A_{0}|$ and $|B_{0}|$, so we can denote them by $f_{G}(|A_{0}|,|B_{0}|)$ and $f_{K_{m,n}}(|A_{0}|,|B_{0}|)$, respectively. By counting, we have $|\mathcal{T}_{G}(a,b)|=\binom{m}{a}\cdot\binom{n}{b}\cdot f_{G}(a,b)\cdot(a+1)^{n-b}$ and $|\mathcal{T}_{K_{m,n}}(a,b)|=\binom{m}{a}\cdot\binom{n}{b}\cdot f_{K_{m,n}}(a,b)\cdot(a+1)^{n-b}$, due to the fact that there are $\binom{m}{a}$ ways to pick an $a$-set in $A$ and $\binom{n}{b}$ ways to pick a $b$-set in $B$. Since $a\leq m$ and $b\leq m-1$, there exist positive numbers $c_{1}$ and $c_{2}$ that depend only on $m$, such that $c_{1}n^{b}(a+1)^{n-b}\leq|\mathcal{T}_{G}(a,b)|\leq c_{2}n^{b}(a+1)^{n-b}$ (3) Note that if $(a,b)\neq(m,m-1)$, then we have $b\leq m-2$. Applying inequality (3), we get $|\cup_{(a,b)\neq(m,m-1)}\mathcal{T}_{G}(a,b)|\leq c_{3}|\mathcal{T}_{G}(m,m-1)|/n$ for some constant $c_{3}>0$ depending only on $m$. Given a b-stem $T_{0}$ with $|A(T_{0})|=a$ and $|B(T_{0})|=b$, let $T$ be a tree chosen uniformly at random from $\mathcal{T}_{G}$ (resp. $\mathcal{T}_{K_{m,n}}$) that contains $T_{0}$ as its b-stem. Then, the probability of a vertex $v\in B\backslash B(T_{0})$ in $T$ is $\frac{a}{a+1}$. This shows that the mean order of trees containing $T_{0}$ as their b-stem is $(n-b)\frac{a}{a+1}+a+b$, denoted by $\mu(a,b)$. Note that $\sum_{T\in\mathcal{T}_{G}(a,b)}|T|=\mu(a,b)\cdot|\mathcal{T}_{G}(a,b)|$ and $\sum_{T\in\mathcal{T}_{K_{m,n}}(a,b)}|T|=\mu(a,b)\cdot|\mathcal{T}_{K_{m,n}}(a,b)|$. Assume that $T_{0}$ has $f$ edges in $K_{m}$, and set $c=\sum_{v\in B(T_{0})}(d_{T_{0}}(v)-2)$. Using (2), we have $b=a-(1+f+c)$. Hence, $\mu(a,b)=\frac{(n+2+a)\cdot a}{a+1}-\frac{1+f+c}{a+1}$, which reaches its maximum value when $a=m$ and $f=c=0$, i.e., when $T_{0}$ is a max b-stem. We then have: $\displaystyle\mu(G)$ $\displaystyle=$ $\displaystyle\frac{\mu(m,m-1)|\mathcal{T}_{G}(m,m-1)|+\sum_{(a,b)\neq(m,m-1)}\mu(a,b)|\mathcal{T}_{G}(a,b)|+n}{|\mathcal{T}_{G}(m,m-1)|+\sum_{(a,b)\neq(m,m-1)}|\mathcal{T}_{G}(a,b)|+n},$ $\displaystyle\mu(K_{m,n})$ $\displaystyle=$ $\displaystyle\frac{\mu(m,m-1)|\mathcal{T}_{K_{m,n}}(m,m-1)|+\sum_{(a,b)\neq(m,m-1)}\mu(a,b)|\mathcal{T}_{K_{m,n}}(a,b)|+n}{|\mathcal{T}_{K_{m,n}}(m,m-1)|+\sum_{(a,b)\neq(m,m-1)}|\mathcal{T}_{K_{m,n}}(a,b)|+n},$ where $n$ denotes the number of subtrees with a single vertex in $B$. 
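For convenience, the closed form for $\mu(a,b)$ used above follows from (2) by a routine rearrangement (this check is ours and adds nothing to the argument): $\displaystyle\mu(a,b)=(n-b)\frac{a}{a+1}+a+b=\frac{na}{a+1}+a+\frac{b}{a+1}=\frac{a(n+a+2)-(1+f+c)}{a+1}=\frac{(n+2+a)\cdot a}{a+1}-\frac{1+f+c}{a+1},$ where we substituted $b=a-1-(f+c)$ with $c=\sum_{v\in B(T_{0})}(d_{T_{0}}(v)-2)$.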
Note that $|\mathcal{T}_{G}(a,b)|\geq|\mathcal{T}_{K_{m,n}}(a,b)|$, with equality holding if and only if $b=a-1$, and so in particular when $(a,b)=(m,m-1).$ We have derived above that $0<\mu(a,b)<\mu(m,m-1)$ when $(a,b)\neq(m,m-1).$ Using the inequality $|\cup_{(a,b)\neq(m,m-1)}\mathcal{T}_{G}(a,b)|\leq c_{3}|\mathcal{T}_{G}(m,m-1)|/n$, we conclude that $\mu(G)>\frac{n}{n+c_{3}}\mu(m,m-1)>\max_{(a,b)\neq(m,m-1)}\mu(a,b)$ for $n$ sufficiently large (for fixed $m$). Since $\mu(G)$ is the average of the same terms as $\mu(K_{m,n})$, together with some additional terms of the form $\mu(a,b)$, which are smaller than $\mu(G)$, we conclude that $\mu(G)<\mu(K_{m,n})$. This completes the proof. ∎ ## Acknowledgments We would like to express our sincere gratitude to the anonymous referees for their valuable comments and suggestions that improved this manuscript. ## References * [1] Stijn Cambie, Stephan Wagner, and Hua Wang. On the maximum mean subtree order of trees. European J. Combin., 97:103388, 2021. * [2] Ben Cameron and Lucas Mol. On the mean subtree order of graphs under edge addition. J. Graph Theory, 96(3):403–413, 2021. * [3] Alex J. Chin, Gary Gordon, Kellie J. MacPhee, and Charles Vincent. Subtrees of graphs. J. Graph Theory, 89(4):413–438, 2018. * [4] John Haslegrave. Extremal results on average subtree density of series-reduced trees. J. Combin. Theory Ser. B, 107:26–41, 2014. * [5] Robert E. Jamison. On the average number of nodes in a subtree of a tree. J. Combin. Theory Ser. B, 35(3):207–223, 1983. * [6] Robert E. Jamison. Monotonicity of the mean order of subtrees. J. Combin. Theory Ser. B, 37(1):70–78, 1984. * [7] Zuwen Luo, Kexiang Xu, Stephan Wagner, and Hua Wang. On the mean subtree order of trees under edge contraction. J. Graph Theory, 102(3):535–551, 2023. * [8] Lucas Mol and Ortrud R. Oellermann. Maximizing the mean subtree order. J. Graph Theory, 91(4):326–352, 2019. * [9] Andrew Vince and Hua Wang. The average order of a subtree of a tree. J. Combin. Theory Ser. B, 100(2):161–170, 2010. * [10] Stephan Wagner and Hua Wang. Indistinguishable trees and graphs. Graphs Combin., 30(6):1593–1605, 2014. * [11] Stephan Wagner and Hua Wang. On the local and global means of subtree orders. J. Graph Theory, 81(2):154–166, 2016.
# Are All Edges Necessary? A Unified Framework for Graph Purification Zishan Gu∗ Jintang Li∗&Liang Chen∗ ∗Sun Yat-sen University <EMAIL_ADDRESS><EMAIL_ADDRESS> ###### Abstract Graph Neural Networks (GNNs), deep learning models working on graph-structured data, have achieved advanced performance in many works. However, it has been shown repeatedly that not all edges in a graph are necessary for the training of machine learning models. In other words, some of the connections between nodes may bring redundant or even misleading information to downstream tasks. In this paper, we provide a method to drop edges in order to purify the graph data from a new perspective. Specifically, it is a framework to purify graphs with the least loss of information, under which the core problems are how to evaluate the edges and how to delete the relatively redundant edges with the least loss of information. To address the above two problems, we propose several measurements for the evaluation and different judges and filters for the edge deletion. We also introduce a residual-iteration strategy and a surrogate model for measurements requiring unknown information. The experimental results show that our proposed KL-divergence-based measurement, combined with a constraint that maintains the connectivity of the graph and an iterative deletion scheme, finds the most edges while keeping the performance of GNNs. Moreover, further experiments show that this method also achieves the best defense performance against adversarial attacks. ## 1 Introduction Graph-structured data is a ubiquitous type of non-Euclidean data that is pervasive across many real-life applications, such as social networks and recommendation systems Chen et al. (2019, 2018). However, we may all have the experience of friending unfamiliar people on Facebook (www.facebook.com). Boshmaf et al. (2011) use 102 bots to send friend requests to random users with public accounts, the results of which indicate that almost 20% of users are willing to friend total strangers, and approximately 60% of users would friend strangers with only one common friend, which may all lead to links that we probably shouldn’t consider when we study the resulting graphs. In the meantime, studies have shown that GNNs are highly sensitive to well-designed adversarial attacks. A wide range of techniques has been proposed in this area and achieved significant performance Cai et al. (2005); Goodfellow et al. (2015); Xu et al. (2019). For instance, Zügner and Günnemann (2019) study the discreteness of graph structure data and further propose a poisoning attack method applying meta learning. Xu et al. (2019) present a novel gradient-based method to inject edges from an optimization perspective, also displaying an impressive ability to fool the GNNs. So, in consideration of all the redundant edges as well as the deliberately injected edges, we further raise the following questions: Can we find out these edges before the training process? Can GNNs keep their performance without all those deleted edges on clean graphs? Can we enhance the robustness of GNNs against adversarial attacks by deleting these edges? In this work, we focus on the information and effects that edges bring to the graph and propose a Unified Graph Purification (UGP) framework to preprocess the data before the training of GNNs.
Under the proposed UGP framework, the core problems are divided into evaluating each of the edges by intrinsic features of the data and deleting the redundant or misleading edges with the least loss of information. We use the powerful Graph Convolutional Network (GCN) proposed by Kipf and Welling (2017) as the estimating model, which, as a matter of fact, can be replaced by any other more accurate GNNs, and examine the performance of several measurements for edges implementing the UGP framework. We also introduce a residual-iteration (RI) strategy for measurements requiring unknown information like labels and thus needing a pre- trained model to predict such information. Experimental results show that the proposed framework applying the selected measurements can indeed delete edges and simultaneously keep the performance of GNNs. Further experiments also display the ability of these methods to enhance the robustness of GNNs against adversarial attacks. Our main contributions exploring this area are summarized as follows: * • We introduce a unified graph purification framework for detecting and purifying redundant and injected edges in a graph. * • We introduce a residual-iteration strategy to measurements requiring information unknown to the preprocess model and thus need a surrogate model to predict such information. * • We conduct experiments to show that the proposed framework can not only find out the redundant edges on clean graphs but also recognize the adversarial edges generated by malicious attackers and thus enhance the robustness of GNNs. ## 2 Related Work With great attention dropped on the powerful GCNs, more and more researchers start to consider about leaving out certain edges of graph structure data and thus heading for more robust models. After Velickovic et al. (2018) first discuss applying dropout on edge attentions in GAT, Rong et al. (2020) develop this idea into dropping edges randomly during training and present the formulation of DropEdge. When it comes to the inference stage, Feng et al. (2020) propose an adaptive inference mechanism, which drops out all the edges of a single node and only trusts its own features to formulate a counterfactual inference. The successful training of these models is telling us that, indeed, a huge number of edges are not necessary for the training and the inference stage of GNNs. However, these models dropout edges either in a completely random way or just delete all the edges, which, obviously, is missing a evaluation and selection process. Moreover, a line of recent studies has demonstrated that GNN-based models suffer from vulnerabilities to adversarial perturbations due to strongly relying on the graph structure and local information Chen et al. (2020). By injecting several poisoning data (i.e., adversarial edges) into the graph, attackers can easily fool the GNNs and cause significant degradation in node classification performance Zügner et al. (2018); Zügner and Günnemann (2019). The model trained on the perturbed graph suffers from the influence of noisy or adversarial data, thus restricting their application in real-world scenarios. Knowing the fact that GNNs are highly vulnerable against adversarial attacks especially with multiple edges injection, there is an urgent need to design practical methods to purify the graph and improve the robustness of GNNs. In this vein, Wu et al. (2019) propose to discover adversarial edges via Jaccard similarity scores of the end nodes’ features. Zhang et al. 
(2019) study the statistical differences between unperturbed graphs and perturbed graphs, and further propose to detect adversarial edges by calculating the Kullback-Leibler (KL) divergences Joyce (2011) between the softmax probabilities of node and its neighbors. Entezari et al. (2020) explore the characteristic of the high-rank spectrum of perturbed graphs and vaccinate the GNNs with low-rank approximations. These pre-processing techniques have indeed enhanced the robustness of GNNs against adversarial attacks, however, these methods are insufficient in the face of stronger attacks since they can only explore a relatively small amount of adversarial edges, which means there are still several redundant edges being left out. More importantly, there lacks a unified framework to summarize all those methods and guide the development of new measurements. ## 3 Notations and Preliminaries ### 3.1 Notations In this paper, we mainly study the task of semi-supervised node classification in an undirected, unweighted graph, and leave the discussion of other tasks for future exploration. We follow the widely used notation and represent a graph with N nodes as $G=(V,E)$, where $V=\\{v_{1},...,v_{N}\\}$ is a finite set of vertices (nodes) and $E=\\{e_{1},...,e_{K}\\}$ is a finite set of links (edges). We use a matrix $X\in\mathrm{R}^{N\times D}$ to denote the $D$ dimensional node features and an adjacency matrix $A\in\\{0,1\\}^{N\times N}$ to represent the connections of node pairs in graph $G$, where $A_{u,v}=1$ means node $u$ and $v$ is connected while $A_{u,v}=0$ otherwise. In addition, for the node classification task, we use $Y\in\\{0,1\\}^{N\times C}$ to denote a set of class labels where $C$ is the number of classes and $Y_{i}$ denotes the ground-truth label of node $v_{i}$. ### 3.2 Graph Purification Graph purification was first mentioned by Jin et al. (2020), in their empirical study as defending inserted poisoning attacks by purifying the graph data before training the GNNs. We further develop this concept by emphasizing the effect of purification methods on clean graphs. In fact, under the real circumstances, the defenders are supposed to have absolutely no idea if a dataset is under attack or not, and it is not a common thing that the graph is perturbed. So, the top priority of purification methods should be keeping the models’ performance on clean data, which means the purification of edges should cost the least loss of information. And then, in this premise, find out as many redundant edges (on clean graphs) and poisons (on perturbed graphs) as possible. Figure 1: Unified Graph Purification Framework. Scorer evaluates the edges and present the score distributions to Judge. Judge selects out the redundant edges according to the scores. Filter further checks these edges with certain schemes and leaves out the edges that may lead to loss of too much information without them. Figure 2: The Residual-Iteration Strategy. The adjacent matrix of this iteration is sent to the next purification process as the residual and calculate scores to update the next adjacent matrix. In this way, the information lost through the last iteration can be somehow revisited in this iteration. ## 4 Unified Graph Purification Framework In this section, we propose a unified framework of graph purification as depicted in Fig. 1. In Fig. 1, the Scorer is the measurement used to evaluate the edges. 
If the measurement only needs the features of nodes or the adjacency matrix of the graph (like Jaccard), then we can just calculate it directly and present it to Judge. However, when it comes to measurements requiring labels or other information that the defender has no access to, we then need an additional surrogate model to predict that information for us (like KL-divergence). In fact, the surrogate model can be set as any GNN, as long as it provides the information that the Scorer needs. What’s more, since GNNs are usually vulnerable to adversarial attacks, which means the prediction cannot be trusted at first, we introduce a residual-iteration strategy (depicted in Fig. 2) for measurements requiring pre-trained surrogate models to solve this issue. Specifically, during the initial iterations, we delete the edges in a relatively conservative way. Then, as the model’s performance improves little by little through the iterations, we gradually delete more and more edges utilizing the more reliable information provided by the enhanced models. We stop the iteration when there are too few edges left to delete or the performance on the validation set is not getting any better for a while, or just stop it after a given number of iterations. As for the residual part, we borrow the idea of ResNet He et al. (2016) and engage it in our framework. The residual block in ResNet is a shortcut connection that feeds the output of earlier layers directly into the current layer, which helps solve the degradation issue. It has also been shown to perform refinement during the training process Liao and Poggio (2016); Greff et al. (2016); Jastrzebski et al. (2017), which is exactly what we need, especially in the first few iterations. However, simply concatenating two matrices together doesn’t really fit in our framework. So, in the spirit of residual, we instead use the predicted information resulting from the current iteration together with the adjacency matrix from the earlier iteration to calculate scores, and then update the current adjacency matrix. Ideally, this is supposed to perform a self-correction role through iterations, and the stronger the attacks are, the better the framework would perform with a residual operation. After Scorer gives out scores for the edges, we present the distribution of these scores to Judge, which will find out the redundant edges accordingly. Next, in order to delete edges with the least loss of information, Filter gives these edges one more check and excludes from deletion those edges whose removal would damage the original topological structure too much. ### 4.1 Scorer To implement our framework, we summarize and further develop five preprocessing measurements as Scorer, two of which need a pre-trained surrogate model through iterations. Most of these methods are based on two generally accepted empirical observations: a) Attackers usually tend to insert edges over removing them and b) Attackers usually favor adding edges between dissimilar nodes. Note that we construct the score of each edge by the extent of its positive contribution to the whole graph given by the selected measurements. In other words, an edge is always more likely to be deleted with a lower score. (a) Jaccard (b) Cosine (c) SVD (d) KLD (e) Label Entropy (f) Feature Entropy Figure 3: Box plots of edges’ score distributions for unperturbed graphs (green dots) and perturbed graphs (red dots, $\tau$=15%) on the Cora-ml dataset.
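To make the pipeline concrete, the following Python sketch shows one possible organization of the Scorer–Judge–Filter loop with the residual-iteration strategy. All names and signatures here are ours for illustration, not an API from the paper, and the handling of the residual adjacency matrix is one possible reading of the description above.

```python
import numpy as np

def ugp_purify(A, X, scorer, judge, filt, n_iter=1, surrogate=None):
    # A: (N, N) symmetric 0/1 adjacency matrix, X: (N, D) node features.
    # scorer(A, X, pred) -> {(u, v): score}, lower scores mark more suspicious edges.
    # judge(scores)      -> list of candidate edges to delete.
    # filt(A, cand, scores) -> edges that are finally deleted.
    # surrogate: optional model with fit/predict_proba, needed by KLD or label entropy.
    A = A.copy()
    A_res = A.copy()  # residual: the adjacency matrix kept from the previous iteration
    for _ in range(n_iter):
        pred = None
        if surrogate is not None:
            surrogate.fit(A, X)                   # retrain on the current purified graph
            pred = surrogate.predict_proba(A, X)  # predictions become more reliable each round
        # residual step: score the edges of the earlier adjacency with the current predictions
        scores = scorer(A_res, X, pred)
        deleted = filt(A, judge(scores), scores)
        A_res = A.copy()
        for u, v in deleted:
            A[u, v] = A[v, u] = 0
    return A
```

A measurement that needs no predicted labels, such as Jaccard, would simply run this loop with `n_iter=1` and no surrogate.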
#### 4.1.1 Jaccard Jaccard similarity has been adopted to discover adversarial edges Wu et al. (2019). The key insight is that adversarially manipulated graphs differ from normal graphs statistically, especially in node similarity. Given a binary node feature matrix $X\in\\{0,1\\}^{N\times D}$, Jaccard similarity measures the overlap of features between node $u$ and the neighboring node $v$; the corresponding score is calculated as follows: $\displaystyle S_{u,v}=\frac{M_{1,1}}{M_{0,1}+M_{1,0}+M_{1,1}},\quad M_{i,j}=\sum_{k=1}^{D}(X_{u,k}=i\ \mbox{and}\ X_{v,k}=j),\ i,j\in\\{0,1\\}$ (1) Since the Jaccard similarity score $S_{u,v}$ lies in the range of $[0,1]$, a larger value of this metric indicates the corresponding nodes are more similar. #### 4.1.2 Cosine Since the Jaccard similarity score is restricted to binary inputs, it is more reasonable to use Cosine similarity to handle continuous data. Given a node feature matrix $X\in\mathrm{R}^{N\times D}$, Cosine similarity measures the feature similarity between node $u$ and the neighboring node $v$ via the inner product, and the corresponding score is calculated as follows: $\displaystyle S_{u,v}=\frac{X_{u}\cdot X_{v}}{\|X_{u}\|\|X_{v}\|}$ (2) Note that a larger value of the Cosine similarity score indicates that two nodes are more similar. #### 4.1.3 SVD Entezari et al. (2020) argue that the perturbed graph tends to be high-rank in the spectrum and leverage Singular Value Decomposition (SVD) and low-rank approximation of the graph to enhance its robustness. We extend this insight in our work by calculating the SVD score of each approximated edge: $\displaystyle S_{u,v}=\hat{A}_{u,v},\mbox{where}\ \hat{A}=U\Sigma V^{T}$ (3) The SVD score $S_{u,v}$ measures the connectivity of the edge $(u,v)$, which lies in the range of $[0,1]$, and a larger value of this metric indicates the corresponding edge (connection) has a higher probability to exist. #### 4.1.4 Neighborhood Entropy Entropy detection is based on the similar idea that attack methods tend to connect nodes with dissimilar features or different labels, and thus increase the entropy of features or labels of central nodes and their neighborhood. The entropy of a node is calculated as follows: $\displaystyle\mbox{NE}(u)=-\sum_{l=1}^{L}P_{l}(u)\log\left(P_{l}(u)\right),\quad\mbox{where}\ P(u)=\frac{p(u)}{\sum_{l=1}^{L}p_{l}(u)},$ (4) where $L$ equals $D$ for feature entropy and equals $C$ for label entropy, and $p(u)$ for the two entropies is calculated by $\displaystyle p_{feature}(u)=\sum_{v\in\mathcal{N}(u)\cup u}\frac{X_{v}}{\sqrt{|\mathcal{N}(u)|+1}},\quad p_{label}(u)=\sum_{v\in\mathcal{N}(u)\cup u}\frac{Y_{v}}{\sqrt{|\mathcal{N}(u)|+1}},$ (5) where $\mathcal{N}(u)$ denotes the set of neighbors of node $u$. In fact, it is more of a nodewise measurement and we develop it into an evaluation for edges. Specifically, the score of an edge is the variation of entropy it brings to the nodes on both ends. In other words, the score for the edge $e$ between node $u$ and node $v$ is the difference between $NE(u)+NE(v)$ with and without edge $e$. We combine these two measurements by first normalizing them and then adding them together, with the accuracy on the validation set or training set as a weight. #### 4.1.5 KLD KL-divergence (KLD) is originally a measurement based on the idea that attacks like Nettack Zügner et al.
(2018) tend to create a discrepancy between the first-order proximity information of a node and that of its neighbors Zhang et al. (2019). KLD evaluates each edge by calculating the KL divergence between the softmax probabilities of the nodes on both ends, estimated by a surrogate model: $\displaystyle S_{u,v}=-KL(\hat{Y}_{u}\|\hat{Y}_{v})-KL(\hat{Y}_{v}\|\hat{Y}_{u}),\quad KL(\hat{Y}_{u}\|\hat{Y}_{v})=\sum_{k=1}^{C}\hat{Y}_{u,k}\log(\frac{\hat{Y}_{u,k}}{\hat{Y}_{v,k}})$ (6) where $\hat{Y}$ is the prediction of the surrogate model or the feature matrix. Similar to what we did with label entropy, we further develop KLD by combining the features with it as an adjustment, which makes this measurement not entirely dependent on the performance of the surrogate model and thus more effective when the prediction is not trustworthy. Specifically, we also calculate the KLD between $X_{u}$ and $X_{v}$ and add it to the KLD between the probability vectors as the final score. ### 4.2 Judge We propose two kinds of Judge to find out the redundant edges according to the scores. P-Judge P stands for percentage, which means it selects out a certain percentage of the remaining edges. Supposedly, P-Judge endows the whole process with better control over the number of edges getting deleted, and thus fits measurements that fluctuate a lot and need multiple iterations. T-Judge T stands for threshold, which means it selects out edges whose scores are lower than a given threshold. We believe T-Judge should be combined with measurements that are relatively stable and more trustworthy, or it may delete too many useful edges. ### 4.3 Filter We propose two kinds of Filter to make sure the methods purify graphs with the least loss of information. S-Filter S stands for singleton, which means this Filter makes sure there are no single nodes that get completely disconnected from the whole graph. In fact, if a node gets singled out, there will be no neighborhood information to aggregate. Though this does cut out any perturbations and result in enhanced robustness against adversarial attacks, it also leaves out the useful information from neighbors and sacrifices the performance on clean graphs. C-Filter C stands for connectivity, which means this Filter makes sure the purification methods do not break the connectivity of the graph. In this way, C-Filter keeps the information of a graph under purification from a rather different perspective: as long as the nodes are still connected, no matter how many nodes are in between, there will still be chances to aggregate their features together and utilize them to train the GNNs or predict the labels. After all, just as Granovetter (1973) points out, “weak ties” can sometimes be the gamechanger. So we believe it is important to keep the connectivity of the graph for certain applications. Specifically, to maintain connectivity with respect to the scores, C-Filter assigns a weight to each edge of the original unweighted graph, where all edges initially weigh 1. Then, the weight of each selected edge is set to its score plus one, so that all selected edges weigh more than one and are ordered by their scores. Finally, we apply Prim’s algorithm Prim (1957) on the weighted graph to find a minimum spanning tree (MST), which gives the least cost to maintain connectivity considering the scores. Leaving out the edges in the MST, C-Filter then deletes the rest of the selected edges.
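As an illustration of how the individual pieces above might look in code, here is a numpy sketch of two Scorers (Eqs. (1) and (6)), a P-Judge and an S-Filter. The function names, the candidate fraction and the numerical guard are our own choices for illustration, not the paper's implementation.

```python
import numpy as np

def jaccard_scores(A, X, pred=None):
    """Eq. (1): Jaccard overlap of binary features for every existing edge."""
    scores = {}
    for u, v in zip(*np.nonzero(np.triu(A, k=1))):
        both = np.sum((X[u] == 1) & (X[v] == 1))      # M_{1,1}
        either = np.sum((X[u] == 1) | (X[v] == 1))    # M_{0,1} + M_{1,0} + M_{1,1}
        scores[(u, v)] = both / either if either > 0 else 0.0
    return scores

def kld_scores(A, X, pred, eps=1e-12):
    """Eq. (6): symmetrised negative KL divergence of the surrogate's softmax outputs."""
    P = np.clip(pred, eps, 1.0)
    scores = {}
    for u, v in zip(*np.nonzero(np.triu(A, k=1))):
        kl_uv = np.sum(P[u] * np.log(P[u] / P[v]))
        kl_vu = np.sum(P[v] * np.log(P[v] / P[u]))
        scores[(u, v)] = -kl_uv - kl_vu
    return scores

def p_judge(scores, frac=0.05):
    """P-Judge: mark the `frac` lowest-scoring edges as deletion candidates."""
    ranked = sorted(scores, key=scores.get)
    return ranked[: int(frac * len(ranked))]

def s_filter(A, candidates, scores):
    """S-Filter: only delete a candidate edge if neither endpoint would become isolated."""
    deg = A.sum(axis=1).astype(int)
    deleted = []
    for u, v in candidates:
        if deg[u] > 1 and deg[v] > 1:
            deleted.append((u, v))
            deg[u] -= 1
            deg[v] -= 1
    return deleted
```

These plug directly into the loop sketched earlier, e.g. `scores = jaccard_scores(A, X); deleted = s_filter(A, p_judge(scores), scores)`; a C-Filter would instead protect the edges of a minimum spanning tree built from the score-derived weights, along the lines described above.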
## 5 Experiments and Discussions ### 5.1 Datasets and Setup In this paper, we adopt three commonly used datasets as benchmarks: Citeseer, Cora and Cora-ml Sen et al. (2008). For each dataset, we randomly select 20% of the nodes to constitute the training set (10% of which is set to be the validation set) and treat the rest of the nodes as the test set. Table 1. shows an overview of the datasets. As for the attack methods, we adopt a number of structure attack methods including state-of-the-arts to study the defense performance of these purification measurements: DICE Cai et al. (2005), FGSM Goodfellow et al. (2015), PGD Xu et al. (2019), Metattack Zügner and Günnemann (2019). Dataset | #Nodes | #Edges | Density ---|---|---|--- Citeseer | 2,110 | 3,668 | 0.082% Cora | 2,485 | 5,069 | 0.082% Cora-ml | 2,810 | 7,981 | 0.101% Table 1: Dataset statistics. Joining previous works’ practice, we only consider the largest connected component of the graph for each dataset. ### 5.2 Results and Discussions Given the experimental setup presented in the earlier section, we utilize the results of designed experiments to answer the following research questions: ##### RQ1 Can the implemented methods under the UGP framework generally enhance the performance of GNNs? ##### RQ2 How does each of the module in the UGP framework help? Dataset | Non | Cosine | Jaccard | SVD | Entropy | I-Entropy | RI-Entropy | KLD | I-KLD ---|---|---|---|---|---|---|---|---|--- Citeseer | 71.15$\pm$0.17 | 71.09$\pm$0.13 | 71.09$\pm$0.13 | 70.02$\pm$0.37 | 70.97$\pm$0.31 | 71.02$\pm$0.34 | 70.68$\pm$0.25 | 71.03$\pm$0.34 | 69.91$\pm$0.21 Cora | 83.35$\pm$0.39 | 83.30$\pm$0.34 | 83.30$\pm$0.34 | 78.47$\pm$0.42 | 83.45$\pm$0.26 | 83.70$\pm$0.42 | 83.65$\pm$0.13 | 82.9$\pm$0.09 | 81.94$\pm$0.30 Cora-ml | 86.39$\pm$0.17 | 85.50$\pm$0.26 | 85.54$\pm$0.22 | 83.72$\pm$0.22 | 85.77$\pm$0.19 | 85.59$\pm$0.14 | 85.54$\pm$0.25 | 85.54$\pm$0.25 | 85.59$\pm$0.23 Table 2: Accuracy (%) of different purification methods on clean graphs. RI- Entropy means implementing Entropy with residual-iteration strategy, while I-KLD and I-Entropy mean the implementations do iterations without residual operation. | Non | Jaccard-TS | Jaccard-PS | Jaccard-TC | Jaccard-PC | KLD-TS | KLD-PS | KLD-TC | KLD-PC ---|---|---|---|---|---|---|---|---|--- Clean | 86.17$\pm$0.17 | 85.43$\pm$0.09 | 84.85$\pm$0.29 | 86.12$\pm$0.17 | 85.92$\pm$0.11 | 86.09$\pm$0.21 | 84.79$\pm$0.36 | 86.25$\pm$0.18 | 85.3$\pm$0.35 1% | 85.44$\pm$0.25 | 84.92$\pm$0.18 | 84.46$\pm$0.23 | 85.25$\pm$0.3 | 85.29$\pm$0.15 | 85.7$\pm$0.12 | 84.55$\pm$0.21 | 86.09$\pm$0.21 | 85.33$\pm$0.26 5% | 79.56$\pm$0.23 | 79.84$\pm$0.4 | 82.0$\pm$0.11 | 79.55$\pm$0.36 | 79.71$\pm$0.37 | 84.65$\pm$0.2 | 84.38$\pm$0.17 | 84.14$\pm$0.11 | 83.92$\pm$0.28 10% | 74.0$\pm$0.28 | 75.55$\pm$0.42 | 78.69$\pm$0.31 | 74.13$\pm$0.28 | 74.12$\pm$0.34 | 83.76$\pm$0.18 | 83.99$\pm$0.26 | 83.15$\pm$0.13 | 82.96$\pm$0.16 15% | 69.6$\pm$0.53 | 71.19$\pm$0.48 | 75.84$\pm$0.33 | 70.24$\pm$0.57 | 70.43$\pm$0.56 | 83.17$\pm$0.25 | 83.6$\pm$0.16 | 81.81$\pm$0.23 | 82.22$\pm$0.2 20% | 64.2$\pm$0.91 | 66.12$\pm$0.42 | 71.74$\pm$0.45 | 65.07$\pm$0.95 | 65.18$\pm$0.77 | 81.09$\pm$0.15 | 79.69$\pm$0.2 | 80.12$\pm$0.31 | 78.78$\pm$0.09 25% | 56.21$\pm$0.4 | 58.72$\pm$0.44 | 67.19$\pm$0.54 | 57.83$\pm$0.48 | 57.79$\pm$0.85 | 73.88$\pm$0.47 | 69.95$\pm$0.43 | 73.42$\pm$0.42 | 69.64$\pm$0.57 Table 3: Accuracy (%) of different methods on perturbed graphs generated by Metattack. 
“T” and “P” stand for T-Judge and P-Judge, while “S” and “C” stand for S-Filter and C-Filter. For instance, Jaccard-TC denotes the method that uses Jaccard as the Scorer measurement, selects edges whose scores exceed a certain threshold, and spares the edges whose removal would damage the connectivity of the graph.

### 5.3 RQ1: Enhanced Performance

To study and compare the performance of the different proposed measurements, we train a GCN at the end of the pipeline to evaluate the final purification performance and report the accuracy on the purified clean graphs in Table 2. We also adopt several attack methods to poison the input graph and evaluate the methods' performance against adversarial attacks. We only report the plots for graphs perturbed by Metattack and FGSM with different perturbation rates in Fig. 4, and leave the other figures to the supplementary material.

Figure 4: Performance under Metattack and FGSM with different perturbation rates. Panels: (a) Cora (Metattack), (b) Cora-ml (Metattack), (c) Cora (FGSM), (d) Cora-ml (FGSM).

It is observed that almost every measurement implemented in the UGP framework can enhance the robustness of the final GCN model; only SVD shows significantly poorer performance on clean graphs (a decrease of over 2%) and thus loses too much information. For this reason we do not consider SVD a qualified graph purification method, despite its relatively high robustness against adversarial attacks. On the other hand, the best robustness is mostly achieved by our proposed method I-KLD, especially for strong attacks like Metattack with high perturbation rates. This is because the iteration strategy in I-KLD helps it find more adversarial edges over successive iterations compared with methods like Jaccard and Cosine. Moreover, KLD is able to use the full proximity information of each node instead of only the hard prediction with the highest probability, which gives this measurement a higher tolerance to poor surrogate-model performance compared with Entropy.

### 5.4 RQ2: Modules in the UGP Framework

Figure 5: Trends of test-set accuracy over iterations for measurements with the iteration strategy, under Metattack with a perturbation rate of 0.25. Panels: (a) Cora, (b) Citeseer.

To show the effectiveness of the selected measurements, we present the score distributions for edges calculated by each measurement on Cora-ml in Fig. 3. As for the effect of our proposed residual-iteration strategy, we display the trends of test-set accuracy over iterations for RI-Entropy, I-Entropy and I-KLD in Fig. 5. We also conduct experiments to demonstrate the effect of each proposed Judge and Filter by combining Jaccard and KLD with different combinations of the modules, and display the results in Table 3. In Fig. 3, a clear difference between the scores of unperturbed and perturbed edges ($\tau$=15%) can be observed, especially for relatively strong attacks like Metattack. Deleting such edges with relatively low scores is therefore expected to improve the performance of GNNs under adversarial attacks, which is exactly what we observe in Fig. 4. And, naturally, the more clearly the green points are separated from the red ones, the better the measurement.
The residual-iteration strategy also works as expected: the accuracy generally increases with every iteration, and the methods that apply the residual strategy outperform those without it. Moreover, as observed from Table 3, Jaccard-TC and KLD-TC achieve the best performance among the methods using the same measurement, and Jaccard-PS achieves the highest robustness among the Jaccard methods. This is because T-Judge and C-Filter play a similar role in preserving information on clean graphs and on perturbed graphs with low perturbation rates, while P-Judge and S-Filter give the methods the ability to delete more edges and thus yield high robustness against strong attacks. As for the KLD methods with T-Judge outperforming those with P-Judge, this is because we set the threshold empirically for this particular dataset. However, the distribution of KLD scores is not the same across datasets, and in real-world circumstances we normally do not have access to such information. We therefore do not recommend T-Judge for measurements like KLD.

## 6 Conclusion and Future Work

In this work, we further clarify the concept of graph purification and present the UGP framework, a novel framework for preprocessing graph-structured data before GNN training starts. We implement the UGP framework with several measurements and conduct experiments on three datasets with several attack algorithms, showing that graph purification can indeed enhance the robustness of GNNs against adversarial attacks without sacrificing performance on clean graphs, and that our proposed I-KLD performs best overall. We expect this research to offer a new perspective on preprocessing graph-structured data and on building more robust GNNs. One potential direction for future work is to use machine-learning models instead of purely statistical measurements to evaluate the edges. Moreover, the fact that GNNs can maintain their performance without that many edges may also indicate that current models cannot make use of all the information provided in a graph, which encourages us to look for other ways to utilize the deleted edges and improve the overall performance of GNNs.

## References

* Boshmaf et al. [2011] Yazan Boshmaf, Ildar Muslukhov, Konstantin Beznosov, and Matei Ripeanu. The socialbot network: When bots socialize for fame and money. In ACSAC ’11, pages 93–102. Association for Computing Machinery, 2011.
* Cai et al. [2005] Deng Cai, Zheng Shao, Xiaofei He, Xifeng Yan, and Jiawei Han. Mining hidden community in heterogeneous social networks. In Proceedings of the 3rd international workshop on Link discovery, pages 58–65. ACM, 2005.
* Chen et al. [2018] Liang Chen, Yang Liu, Zibin Zheng, and Philip Yu. Heterogeneous neural attentive factorization machine for rating prediction. In CIKM, pages 833–842. ACM, 2018.
* Chen et al. [2019] Liang Chen, Yang Liu, Xiangnan He, Lianli Gao, and Zibin Zheng. Matching user with item set: Collaborative bundle recommendation with deep attention network. In IJCAI, pages 2095–2101, 2019.
* Chen et al. [2020] Liang Chen, Jintang Li, Jiaying Peng, Tao Xie, Zengxu Cao, Kun Xu, Xiangnan He, and Zibin Zheng. A survey of adversarial learning on graph. arXiv preprint arXiv:2003.05730, 2020.
* Entezari et al. [2020] Negin Entezari, Saba A. Al-Sayouri, Amirali Darvishzadeh, and Evangelos E. Papalexakis. All you need is low (rank): Defending against adversarial attacks on graphs.
In WSDM, pages 169–177, 2020.
* Feng et al. [2020] Fuli Feng, Weiran Huang, Xin Xin, Xiangnan He, and Tat-Seng Chua. Should graph convolution trust neighbors? A simple causal inference method. CoRR, abs/2010.11797, 2020.
* Goodfellow et al. [2015] Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. In ICLR, 2015.
* Granovetter [1973] M. S. Granovetter. The Strength of Weak Ties. The American Journal of Sociology, 78(6):1360–1380, 1973.
* Greff et al. [2016] Klaus Greff, Rupesh Kumar Srivastava, and Jürgen Schmidhuber. Highway and residual networks learn unrolled iterative estimation. CoRR, abs/1612.07771, 2016.
* He et al. [2016] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In CVPR 2016, pages 770–778. IEEE Computer Society, 2016.
* Jastrzebski et al. [2017] Stanislaw Jastrzebski, Devansh Arpit, Nicolas Ballas, Vikas Verma, Tong Che, and Yoshua Bengio. Residual connections encourage iterative inference. CoRR, abs/1710.04773, 2017.
* Jin et al. [2020] Wei Jin, Yaxin Li, Han Xu, Yiqi Wang, and Jiliang Tang. Adversarial attacks and defenses on graphs: A review and empirical study. CoRR, abs/2003.00653, 2020.
* Joyce [2011] James M. Joyce. Kullback-Leibler Divergence, pages 720–722. Springer Berlin Heidelberg, Berlin, Heidelberg, 2011.
* Kipf and Welling [2017] Thomas N. Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. In ICLR, 2017.
* Liao and Poggio [2016] Qianli Liao and Tomaso A. Poggio. Bridging the gaps between residual learning, recurrent neural networks and visual cortex. CoRR, abs/1604.03640, 2016.
* Prim [1957] R. C. Prim. Shortest connection networks and some generalizations. The Bell System Technical Journal, 36(6):1389–1401, 1957.
* Rong et al. [2020] Yu Rong, Wenbing Huang, Tingyang Xu, and Junzhou Huang. Dropedge: Towards deep graph convolutional networks on node classification. In ICLR. OpenReview.net, 2020.
* Sen et al. [2008] Prithviraj Sen, Galileo Namata, Mustafa Bilgic, Lise Getoor, Brian Galligher, and Tina Eliassi-Rad. Collective classification in network data. AI Magazine, 29(3):93–93, 2008.
* Velickovic et al. [2018] Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. Graph attention networks. In ICLR, 2018.
* Wu et al. [2019] Huijun Wu, Chen Wang, Yuriy Tyshetskiy, Andrew Docherty, Kai Lu, and Liming Zhu. Adversarial examples for graph data: Deep insights into attack and defense. In IJCAI, pages 4816–4823, 2019.
* Xu et al. [2019] Kaidi Xu, Hongge Chen, Sijia Liu, Pin-Yu Chen, Tsui-Wei Weng, Mingyi Hong, and Xue Lin. Topology attack and defense for graph neural networks: An optimization perspective. In IJCAI, pages 3961–3967, 2019.
* Zhang et al. [2019] Yingxue Zhang, S. Khan, and Mark Coates. Comparing and detecting adversarial attacks for graph deep learning. 2019.
* Zügner and Günnemann [2019] Daniel Zügner and Stephan Günnemann. Adversarial attacks on graph neural networks via meta learning. In ICLR, 2019.
* Zügner et al. [2018] Daniel Zügner, Amir Akbarnejad, and Stephan Günnemann. Adversarial attacks on neural networks for graph data. In KDD ’18, pages 2847–2856, 2018.
# A Non-stiff Summation-By-Parts Finite Difference Method for the Wave Equation in Second Order Form: Characteristic Boundary Conditions and Nonlinear Interfaces

Jeremy E. Kozdon (Department of Applied Mathematics, Naval Postgraduate School, 833 Dyer Road, Monterey, CA 93943–5216; email: <EMAIL_ADDRESS>), Brittany A. Erickson (Department of Computer and Information Science & Department of Earth Sciences, University of Oregon, 1477 E. 13th Ave., Eugene, OR 97403–1202; email: <EMAIL_ADDRESS>), and Tobias Harvey (Department of Computer and Information Science, University of Oregon, 1477 E. 13th Ave., Eugene, OR 97403–1202; email: <EMAIL_ADDRESS>)

(June 1, 2021)

Funding and disclaimer: J.E.K. was supported by National Science Foundation Award EAR-1547596; B.A.E. was supported by National Science Foundation Awards EAR-1547603 and EAR-1916992; T.H. was supported by National Science Foundation Award EAR-1916992. The views expressed in this document are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government. Approved for public release; distribution unlimited.

###### Abstract

Curvilinear, multiblock summation-by-parts finite difference methods with the simultaneous approximation term method provide a stable and accurate method for solving the wave equation in second order form. That said, the standard method can become arbitrarily stiff when characteristic boundary conditions and nonlinear interface conditions are used. Here we propose a new technique that avoids this stiffness by using characteristic variables to “upwind” the boundary and interface treatment. This is done through the introduction of an additional block boundary displacement variable. Using a unified energy, which expresses both the standard as well as the characteristic boundary and interface treatment, we show that the resulting scheme has semidiscrete energy stability for the anisotropic wave equation. The theoretical stability results are confirmed with numerical experiments that also demonstrate the accuracy and robustness of the proposed scheme. The numerical results also show that the characteristic scheme has a time step restriction based on standard wave propagation considerations and not on the boundary closure.

## 1 Introduction

Due to their superior dispersion properties, high-order methods are ideally suited for wave-dominated partial differential equations (PDEs) [11]. That said, unless great care is taken in the treatment of boundary conditions, interface couplings, and variable coefficients, high-order methods are often less robust than their low-order counterparts. An important tool in robust high-order methods is utilization of the summation-by-parts (SBP) property [12, 13]. SBP is the discrete analogue of integration by parts and allows the discrete stability analysis to mimic the continuous well-posedness analysis [19]. When combined with multiblock domain decompositions and curvilinear coordinates, SBP finite difference methods can be used to stably and accurately model complex geometries and variable material parameters. SBP finite difference methods use standard central difference stencils in the interior of a domain and transition to one-sided stencils at boundaries and interfaces in a manner that maintains the SBP property.
An important feature of SBP finite difference methods is the built-in norm matrix, which is similar to the mass matrix in finite element methods. A variety of SBP finite difference operators have been developed, with the most relevant to this work being the first and second derivative operators on unstaggered grids [12, 13, 26, 17, 14]. With SBP finite difference methods it is possible to either enforce boundary conditions strongly [20, 21] or weakly [3, 5]; weak enforcement of boundary conditions with SBP methods is often called the simultaneous approximation term (SAT) method and is the approach taken here.

We are primarily interested in the wave equation in second-order form, that is, a displacement formulation of the wave equation as opposed to velocity-stress or velocity-strain. Our motivation for this is to address our ultimate goal of advancing simulations of the earthquake cycle, where interseismic loading (decade-long tectonic loading) is coupled to dynamic rupture (earthquake rupture taking place over seconds to minutes); the importance of this coupling has been recently highlighted in, for example, Erickson et al. [7]. In the interseismic phase, a quasidynamic formulation is often used that neglects inertial effects, e.g., acceleration, resulting in an elliptic PDE for the displacement. In the coseismic rupture phase inertial effects should be included, and the resulting equation is a hyperbolic wave equation. In order to avoid having to transition between displacements and velocity-stress (or velocity-strain) it is desirable to use a displacement-based formulation for the coseismic phase.

Virta and Mattsson [28], building on Mattsson et al. [15, 16], developed an SBP-SAT finite difference scheme for the second-order wave equation with variable coefficients on curved geometries. Duru et al. [6] extended this scheme for use with nonlinear friction laws which govern the sliding of fault interfaces in earthquake problems; nonlinear friction laws relate the interface traction to the sliding velocity. However, as noted in Duru et al. [6], the modified scheme that incorporates the nonlinear friction law results in a numerically stiff system of ordinary differential equations that prevents the use of, for instance, explicit Runge-Kutta time stepping methods; in Duru et al. [6] a custom second-order accurate time stepping method is used. Similar numerical stiffness is also seen in the velocity-stress formulation of the wave equation for earthquake problems, though this can be circumvented by rewriting the nonlinear friction law in terms of the characteristic variables [10].

The heart of the difference between the traction-velocity and the characteristic interface formulations can be seen by considering a simple linear boundary condition. In one spatial dimension, if the nonlinear interface is reduced to a boundary and linearized, the following boundary condition results: $\displaystyle\partial_{1}u=-\alpha\dot{u}.$ (1) Here $u$ is the particle displacement, $\partial_{1}u$ denotes the derivative in space, and $\dot{u}$ the derivative in time; the traction on the boundary is proportional to $\partial_{1}u$ and the sliding velocity is the negative of the boundary particle velocity. The coefficient $\alpha\geq 0$ comes from the linearization of the nonlinear friction law around a reference velocity.
In an earthquake rupture simulation, the effective value of $\alpha$ can range over many orders of magnitude; for a fuller discussion of friction laws used in earthquake modeling see, for example, Rice [22], Scholz [25], Rice et al. [23]. When $\alpha$ is large, the boundary condition essentially reduces to enforcing a Dirichlet-type boundary condition through Neumann boundary treatment. Since the scheme proposed by Virta and Mattsson [28] and Duru et al. [6] has a parameter that scales linearly with $\alpha$, it is in the limit of large $\alpha$ that the stiffness is seen; see Figure 1. An alternative formulation is to use the characteristic variables. When this is done the boundary condition becomes the reflection of the outgoing characteristic wave: $\displaystyle\dot{u}-\partial_{1}u=R(\dot{u}+\partial_{1}u),~{}R=\frac{1-\alpha}{1+\alpha},$ (2) where for simplicity we are neglecting the material parameters. Since $\alpha\geq 0$ the reflection coefficient is bounded: $-1\leq R\leq 1$. When used in the SBP-SAT discretization of the first order wave equation, the characteristic boundary condition leads to a parameter that scales linearly with $R$, which avoids the stiffness seen with the traction-velocity approach [10].

The main contribution of this work is the use of a characteristic formulation of boundary and interface conditions within a displacement-based scheme, namely merging the ideas of Virta and Mattsson [28] and Kozdon et al. [10]. The key idea of the work is to track the evolution of the boundary and interface displacements, which allows the use of a non-stiff characteristic formulation. The benefit of our approach versus the previous approach [28, 6] is shown in Figure 1, where the spectrum of a one-dimensional operator is shown for various values of $\alpha$ (or equivalently $R$); a fuller discussion of this figure is in Section 6.1. Additionally, the figure shows the maximum magnitude real part of the spectra for a sweep of $\alpha$ values. As can be seen, as $\alpha\rightarrow\infty$ (or equivalently as $R\rightarrow-1$) the non-characteristic formulation results in a large magnitude, negative real eigenvalue.

Figure 1: Comparison of the eigenvalue spectra for the proposed characteristic and non-characteristic [28] treatment of boundary conditions for various values of the reflection coefficient $R$. (a) Maximum magnitude real component of the eigenvalue spectrum, which controls stiffness, versus the reflection coefficient $R$ and $\alpha$. (b) Full spectrum comparison with $R=0.99$ (or $\alpha=1/199$). (c) Full spectrum comparison with $R=0$ (or $\alpha=1$). (d) Full spectrum comparison with $R=-0.99$ ($\alpha=199$); the far left eigenvalue of the non-characteristic method, near $h\lambda\approx-562$, is shifted off the plotted scale. In all cases the domain is $[0,1]$ with grid spacing $1/50$ and SBP interior accuracy of $2p=4$. The characteristic method is indicated by red $\times$ markers and the non-characteristic method by blue $+$ markers.
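As a small worked check of the mapping between $\alpha$ and $R$ in (2) (the same correspondence annotated along the upper axis of Figure 1a), the following illustrative Julia snippet confirms the limiting behavior discussed above; it is ours and not part of the paper's released code.

```julia
# Reflection coefficient of Eq. (2) as a function of the linearized friction
# coefficient α ≥ 0 (illustrative check only).
R(α) = (1 - α) / (1 + α)

@assert R(0) == 1                               # traction-free limit: full reflection
@assert R(1) == 0                               # matched boundary: non-reflecting
@assert isapprox(R(3), -1/2)                    # values annotated in Figure 1a
@assert isapprox(R(1/3), 1/2)
@assert isapprox(R(1e12), -1; atol = 1e-10)     # α → ∞ approaches R = -1 (Dirichlet-like)
@assert all(-1 <= R(α) <= 1 for α in 10.0 .^ (-6:0.5:6))   # -1 ≤ R ≤ 1 whenever α ≥ 0
```

The point is that the non-characteristic treatment sees the unbounded coefficient $\alpha$, while the characteristic treatment only ever sees the bounded coefficient $R$.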
The remainder of the paper is structured as follows: Section 2 describes the model wave equation and the continuous energy analysis. Section 3 discusses the decomposition of the domain into computational blocks and introduces the coordinate transforms. In Section 4 we review important results for SBP finite difference methods and our notation. The proposed discretization is developed in Section 5 along with the semidiscrete energy analysis. Numerical experiments to confirm the stability and accuracy properties of the scheme are given in Section 6 and some concluding remarks are given in Section 7. In order to communicate the core ideas of the paper, most of the proofs, analysis details, and construction details of the SBP operators are given in the appendix. All numerical results can be generated using the codes available at https://github.com/jkozdon/sbp_waveprop_characteristic.

## 2 Model Problem

Let $\Omega\subset{\mathbb{R}}^{d}$ be a bounded domain with boundary $\partial\Omega$. The boundary is split into two distinct parts: a Dirichlet boundary $\partial\Omega_{D}$ and a characteristic boundary $\partial\Omega_{C}$. Additionally, let $\Gamma_{I}\subset{\mathbb{R}}^{d-1}$ be a set of interfaces in the domain. Unless otherwise noted, summation over repeated subscripts is implied, e.g., $u_{i}v_{i}=\sum_{i=1}^{d}u_{i}v_{i}$, $u_{ii}=\sum_{i=1}^{d}u_{ii}$, and $u_{i}C_{ij}v_{j}=\sum_{i=1}^{d}\sum_{j=1}^{d}u_{i}C_{ij}v_{j}$. As a model problem we consider the second-order, anisotropic wave equation for the scalar displacement $u$: $\displaystyle\rho\ddot{u}=\partial_{i}C_{ij}\partial_{j}u,$ $\displaystyle~{}~{}\bm{x}\in\Omega,~{}t\in[0,T],$ (3a) $\displaystyle u=g_{D},$ $\displaystyle~{}~{}\bm{x}\in\partial\Omega_{D},~{}t\in[0,T],$ (3b) $\displaystyle Z\dot{u}+\tau=R(Z\dot{u}-\tau)+g_{C},$ $\displaystyle~{}~{}\bm{x}\in\partial\Omega_{C},~{}t\in[0,T],$ (3c) $\displaystyle\begin{cases}\tau^{-}=-\tau^{+},\\\ \tau^{\pm}=F(V^{\pm})\end{cases}$ $\displaystyle~{}~{}\bm{x}\in\Gamma_{I},~{}t\in[0,T].$ (3d) Here, the density $\rho>0$ and the components of the stiffness matrix $C_{ij}$ are taken to be spatially varying. Additionally, the stiffness matrix is assumed to be symmetric positive definite: $C_{ij}=C_{ji}$ and $v_{i}C_{ij}v_{j}\geq 0$ with equality only when $v_{i}=0$ for all $i$. At interfaces and boundaries the traction $\tau$ is defined as $\tau=n_{i}C_{ij}\partial_{j}u,$ (4) where the vector $n_{i}$ is the unit normal, which is taken to be outward pointing on boundaries. On $\partial\Omega_{C}$ the reflection coefficient satisfies $-1\leq R\leq 1$, where the shear impedance is defined as $Z^{2}=\rho n_{i}C_{ij}n_{j}$. On the interface $\Gamma_{I}$, (3d) specifies force balance and a friction law, respectively. The normal vector is defined so that $n_{i}^{-}$ points away from the minus side and $n_{i}^{+}$ points away from the plus side with $n_{i}^{+}=-n_{i}^{-}$. The superscripts on the material parameters denote which side of the interface the material parameters are evaluated on. We define the jump in $\dot{u}$ across the interface by $V^{\pm}=\dot{u}^{\mp}-\dot{u}^{\pm}.$ (5) The nonlinear function $F(V)$ is the frictional strength of the interface and is assumed to satisfy $VF(V)\geq 0$. Force balance and $V^{+}=-V^{-}$ imply that $F(V^{+})=-F(V^{-})$.
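As a concrete example of an admissible frictional strength (and the one used again in the numerical experiments of Section 6.2), $F(V)=\alpha\operatorname{arcsinh}(V)$ with $\alpha>0$ satisfies $VF(V)\geq 0$ and is strictly increasing. The following illustrative Julia check is ours, not part of the paper's code; the value of $\alpha$ and the sampled range of $V$ are arbitrary choices.

```julia
# Illustrative check that F(V) = α·asinh(V), α > 0, is an admissible
# frictional strength: V·F(V) ≥ 0 and F is strictly increasing.
α = 0.5                        # assumed value, for illustration only
F(V)  = α * asinh(V)
dF(V) = α / sqrt(1 + V^2)      # closed-form derivative of α·asinh(V)

Vs = range(-10, 10; length = 1001)
@assert all(V * F(V) >= 0 for V in Vs)   # dissipativity used in the energy estimate
@assert all(dF(V) > 0 for V in Vs)       # monotonicity, assumed later when the
                                         # interface condition is inverted
```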
Characteristic variables $w$ and $q$, which are associated with locally propagating waves in the direction $\pm n$, respectively, can be defined as $\displaystyle w$ $\displaystyle=Z\dot{u}-\tau,$ (6a) $\displaystyle q$ $\displaystyle=Z\dot{u}+\tau.$ (6b) The velocity $\dot{u}$ and traction $\tau$ can also be recovered from the characteristic variables, $\displaystyle\dot{u}$ $\displaystyle=\frac{q+w}{2Z},$ (7a) $\displaystyle\tau$ $\displaystyle=\frac{q-w}{2}.$ (7b) The characteristic boundary condition (3c) can now be rewritten as $q=Rw+g_{C}.$ (3c’) Though not needed for the continuous analysis, a critical step in the discretization below is to rewrite interface condition (3d) in terms of the characteristic variables. Namely, we define the interface condition as $q^{\pm}=\mathcal{Q}^{\pm}\left(w^{\pm},w^{\mp}\right).$ (3d’) The function $\mathcal{Q}^{\pm}$ is defined consistently with the underlying function $F(V)$ by requiring that $\displaystyle\dot{u}^{\pm}$ $\displaystyle=\frac{\mathcal{Q}^{\pm}+w^{\pm}}{2Z^{\pm}},$ (8a) $\displaystyle\tau^{\pm}$ $\displaystyle=\frac{\mathcal{Q}^{\pm}-w^{\pm}}{2}=F\left(V^{\pm}\right).$ (8b) For general $F$ the function $\mathcal{Q}^{\pm}$ cannot be stated in closed form, but can be guaranteed to exist by the implicit function theorem as long as $F^{\prime}(V)>0$ [10, Proposition 1]. Details on how the problem of finding $\mathcal{Q}^{\pm}$ can be reduced to a single variable root finding problem are found in Appendix E.

To guide the development of the numerical scheme, we now develop an energy estimate for governing equation (3a). We define a seminorm $E(u)$ and then show that $\dot{E}(u(\cdot,t))\leq 0$ when $g_{D}=g_{C}=0$ for all $t>0$; with non-zero boundary data, energy growth due to the boundary conditions must be allowed. We define the energy $E(u)$ for a scalar-valued function $u(x,t)$ as $E=\frac{1}{2}\int_{\Omega}\left(\rho\dot{u}^{2}+\left(\partial_{i}u\right)C_{ij}\left(\partial_{j}u\right)\right).$ (9) This is a valid seminorm of $u$, namely $E\geq 0$ for all $u$, because the stiffness matrix is symmetric positive definite. With this definition of energy it is possible to prove the following lemma; see Appendix B.

###### Lemma 1

Governing equations (3) with energy (9) satisfy $\dot{E}\leq 0$ if $g_{D}=g_{C}=0$.

## 3 Domain Decomposition

Let ${\mathcal{B}(\Omega)}$ be a partitioning of $\Omega\subset{\mathbb{R}}^{d}$ into $N_{b}$ non-overlapping, curved blocks (quadrilaterals when $d=2$ and hexahedra when $d=3$). For each $B\in{\mathcal{B}(\Omega)}$ there is a diffeomorphic mapping $\bm{x}^{B}$ between $B$ and the reference block $\hat{B}=[0,1]^{d}$ such that $\bm{x}^{B}(\bm{\xi})\in B$ for all $\bm{\xi}\in\hat{B}$. We use the notation $\hat{\partial}_{i}$ to denote the partial derivative with respect to $\xi_{i}$. The Jacobian determinant is denoted as $J^{B}$. For example, with $d=2$, $J^{B}=\left(\hat{\partial}_{1}x_{1}^{B}\right)\left(\hat{\partial}_{2}x_{2}^{B}\right)-\left(\hat{\partial}_{1}x_{2}^{B}\right)\left(\hat{\partial}_{2}x_{1}^{B}\right).$ (10) Note that typically the metric terms are computed by first computing $\hat{\partial}_{l}x_{i}^{B}$ and then metric identities are employed to calculate $\partial_{j}\xi_{m}^{B}$; see, for example, Kopriva [9]. Each block $B\in{\mathcal{B}(\Omega)}$ has $2d$ faces, and we let $\partial B^{f}$ for $f=1,2,\dots,2d$ be the faces in physical space and $\partial\hat{B}^{f}$ be the faces in the reference space.
We assume that each face $B^{f}$ corresponds to either a Dirichlet boundary, characteristic boundary, nonlinear interface, or a purely computational interface (i.e., an artificial interface introduced in the partitioning of $\Omega$ into curved blocks). We let $n_{i}^{B^{f}}$ denote the outward pointing normal to face $f$ of block $B$ in physical space and $\hat{n}_{i}^{B^{f}}\equiv\hat{n}_{i}^{f}$ denote the same outward pointing normal in the reference space. Note that only one component of $\hat{n}_{i}^{f}$ is non-zero so that the Kronecker delta $\delta_{ij}$ provides a face numbering convention $\hat{n}_{i}^{f}={(-1)}^{f}\delta_{\left\lceil\frac{f}{2}\right\rceil i}$. The relationship between $n_{i}$ and $\hat{n}_{i}$ is $S_{J}^{B^{f}}n^{B^{f}}_{i}=J^{B}\left(\partial_{i}\xi_{k}^{B}\right)\hat{n}_{k}^{f},$ (11) where the surface Jacobian $S_{J}^{B^{f}}$ is the normalization factor so that $n_{i}^{B^{f}}$ is a unit vector. Given the face numbering convention and properties of the reference unit normal $\hat{n}_{i}^{f}$, the surface Jacobian with $d=2$ is ${\left(S_{J}^{B^{f}}\right)}^{2}={\left(J^{B}\right)}^{2}\left(\partial_{i}\xi_{\left\lceil\frac{f}{2}\right\rceil}^{B}\right)\left(\partial_{i}\xi_{\left\lceil\frac{f}{2}\right\rceil}^{B}\right)~{}~{}\text{(no summation over $f$)}.$ (12) Before writing down the transformed governing equations, it is useful to define a few quantities. For each $B\in{\mathcal{B}(\Omega)}$ we define the transformed density and stiffness matrix as $\displaystyle\hat{\rho}$ $\displaystyle=J\rho,$ (13a) $\displaystyle\hat{C}_{ij}$ $\displaystyle=J\left(\partial_{l}\xi_{i}\right)C_{lm}\left(\partial_{m}\xi_{j}\right);$ (13b) in this equation, and those that follow, unless needed the subscript $B$ denoting the block number is suppressed. Similarly, on face $\partial B^{f}$ the shear impedance and traction are defined as $\displaystyle{\left(\hat{Z}^{f}\right)}^{2}$ $\displaystyle=\hat{\rho}\hat{n}^{f}_{i}\hat{C}_{ij}\hat{n}^{f}_{j}={\left(S_{J}^{f}Z^{f}\right)}^{2},$ (14a) $\displaystyle\hat{\tau}^{f}$ $\displaystyle=\hat{n}^{f}_{i}\hat{C}_{ij}\hat{\partial}_{j}u=S_{J}^{f}\tau^{f};$ (14b) unless needed for clarity, the superscript $B^{f}$ is reduced to $f$. With these, for each $B\in{\mathcal{B}(\Omega)}$ governing equations (3) become $\hat{\rho}\ddot{u}=\hat{\partial}_{i}\hat{C}_{ij}\hat{\partial}_{j}u,~{}~{}\bm{\xi}\in[0,1]^{d},~{}t\in[0,T].$ (15a) For each face $\partial B^{f}$ the boundary or interface condition is $\displaystyle u=g_{D},$ $\displaystyle~{}\text{if }\partial B^{f}\cap\partial\Omega_{D}\neq\emptyset,$ (15b) $\displaystyle\hat{Z}^{f}\dot{u}+\hat{\tau}^{f}=R(\hat{Z}^{f}\dot{u}-\hat{\tau}^{f})+S_{J}^{f}g_{C},$ $\displaystyle~{}\text{if }\partial B^{f}\cap\partial\Omega_{C}\neq\emptyset,$ (15c) $\displaystyle\begin{cases}\hat{\tau}^{f^{-}}=-\hat{\tau}^{f^{+}}\\\ \hat{\tau}^{f^{\pm}}=S_{J}F(V^{f^{\pm}}),\\\ \end{cases}$ $\displaystyle~{}\text{if }\partial B^{f}\cap\Gamma_{I}\neq\emptyset,$ (15d) $\displaystyle\begin{cases}\hat{\tau}^{f^{-}}=-\hat{\tau}^{f^{+}},\\\ \dot{u}^{f^{-}}=\dot{u}^{f^{+}},\end{cases}$ $\displaystyle~{}\text{otherwise},$ (15e) where $V^{f^{\pm}}=\dot{u}^{f^{\mp}}-\dot{u}^{f^{\pm}}.$ Here the notation $f^{\pm}$ denotes the two sides of the interface with $f^{-}$ denoting the interior value and $f^{+}$ denoting the exterior (neighboring block) value.
Namely, let face $\partial B^{f}$ of block $B\in{\mathcal{B}(\Omega)}$ be connected to block $C\in{\mathcal{B}(\Omega)}$ along face $\partial C^{f^{\prime}}$; then $\partial B^{f^{-}}=\partial B^{f}$ and $\partial B^{f^{+}}=\partial C^{f^{\prime}}$. By definition $S_{J}^{f^{+}}=S_{J}^{f^{-}}$ and $\hat{n}^{f^{+}}_{i}=-\hat{n}^{f^{-}}_{i}$. Interface conditions (15e) are not present in the original governing equations (3), and are added to account for continuity of the solution across locked (purely computational) block interfaces. As with the original system, it is useful to introduce the characteristic variables $\displaystyle\hat{w}$ $\displaystyle=\hat{Z}\dot{u}-\hat{\tau},$ (16a) $\displaystyle\hat{q}$ $\displaystyle=\hat{Z}\dot{u}+\hat{\tau},$ (16b) and as before the velocity and traction can be easily recovered, $\displaystyle\dot{u}$ $\displaystyle=\frac{\hat{q}+\hat{w}}{2\hat{Z}},$ (17a) $\displaystyle\hat{\tau}$ $\displaystyle=\frac{\hat{q}-\hat{w}}{2}.$ (17b) With this, the characteristic boundary condition (15c) can be written as $\hat{q}=R\hat{w}+S_{J}^{f}g_{C}.$ (15c’) Similarly, the nonlinear interface condition (15d) and locked interface condition (15e) can be combined as $\hat{q}^{\pm}=\hat{\mathcal{Q}}^{\pm}\left(\hat{w}^{\pm},\hat{w}^{\mp}\right).$ (15d’) For the nonlinear interface condition (15d) the form of $\hat{\mathcal{Q}}^{\pm}$ is defined in the same manner as discussed following (3d’). In the case of the locked interface (15e) $\hat{\mathcal{Q}}^{\pm}$ can be stated explicitly: $\hat{\mathcal{Q}}^{\pm}\left(\hat{w}^{\pm},\hat{w}^{\mp}\right)=\frac{2\hat{Z}^{\pm}\hat{w}^{\mp}+(\hat{Z}^{\pm}-\hat{Z}^{\mp})\hat{w}^{\pm}}{\hat{Z}^{+}+\hat{Z}^{-}};$ (18) as can be seen, in the limiting case of $\hat{Z}^{+}=\hat{Z}^{-}$ this is just transmission of the characteristic variable across the interface: $\hat{\mathcal{Q}}^{\pm}\left(\hat{w}^{\pm},\hat{w}^{\mp}\right)=\hat{w}^{\mp}$. For the transformed system (15), the energy in block $B\in{\mathcal{B}(\Omega)}$ is $E^{B}=\frac{1}{2}\int_{\hat{B}}\left(\hat{\rho}\dot{u}^{2}+\left(\hat{\partial}_{i}u\right)\hat{C}_{ij}\left(\hat{\partial}_{j}u\right)\right).$ (19) It is straightforward to show that the energy (9) satisfies $E=\sum_{B\in{\mathcal{B}(\Omega)}}E^{B},$ (20) and the transformed governing equations satisfy the energy estimate of Lemma 1.

## 4 Summation-By-Parts Operators

To approximate the spatial derivatives, summation-by-parts (SBP) finite difference operators are used. We begin with the introduction of the one-dimensional operators and then generalize the operators to multiple dimensions using tensor products.

### 4.1 One Dimensional SBP Operators

Let the domain $0\leq\xi\leq 1$ be discretized with $N+1$ equally spaced grid points. The grid of points is represented as ${\bm{\xi}}$ with spacing $h=1/N$ and points located at ${\left\\{{\bm{\xi}}\right\\}}_{k}=kh$ for $k=0,1,\dots,N$. Let ${\bm{u}}$ be the projection of $u$ onto the computational grid. We define the operator ${\bm{e}}_{k}$ to be the grid basis function, that is, the vector which is $1$ at grid point $k$ and zero at all other grid points. Importantly, ${\bm{e}}_{k}^{T}$ selects the value of a grid function ${\bm{u}}$ at the point $k$, namely ${\bm{e}}_{k}^{T}{\bm{u}}={\left\\{{\bm{u}}\right\\}}_{k}$.
Let the first and $C$-weighted second derivatives of $u$ be approximated as $\displaystyle{\left.\hat{\partial}_{1}u\right|}_{\xi_{1}=kh}\approx{\left\\{{\bm{{D}}}_{1}{\bm{u}}\right\\}}_{k},$ (21a) $\displaystyle{\left.\hat{\partial}_{1}C\hat{\partial}_{1}u\right|}_{\xi_{1}=kh}\approx{\left\\{{\bm{{D}}}_{11}^{(C)}{\bm{u}}\right\\}}_{k}.$ (21b) The derivative approximations ${\bm{{D}}}_{1}$ and ${\bm{{D}}}^{(C)}_{11}$ are called SBP if they satisfy the following definitions.

###### Definition 1 (SBP First Derivative)

The operator ${\bm{{D}}}_{1}$ is called an SBP approximation if it can be decomposed as ${\bm{{H}}}_{1}{\bm{{D}}}_{1}={\bm{{Q}}}_{1}$ with ${\bm{{H}}}_{1}$ being a symmetric positive definite matrix and ${\bm{u}}^{T}\left({\bm{{Q}}}_{1}+{\bm{{Q}}}_{1}^{T}\right){\bm{v}}={\bm{u}}^{T}{\bm{e}}_{N}{\bm{e}}_{N}^{T}{\bm{v}}-{\bm{u}}^{T}{\bm{e}}_{0}{\bm{e}}_{0}^{T}{\bm{v}}={\left\\{{\bm{u}}\right\\}}_{N}{\left\\{{\bm{v}}\right\\}}_{N}-{\left\\{{\bm{u}}\right\\}}_{0}{\left\\{{\bm{v}}\right\\}}_{0},$ (22) for all vectors ${\bm{u}}$ and ${\bm{v}}$.

###### Definition 2 (SBP Second Derivative)

The operator ${\bm{{D}}}_{11}^{(C)}$ is called an SBP approximation if it can be decomposed as ${\bm{{H}}}_{1}{\bm{{D}}}_{11}^{(C)}=-{\bm{{A}}}_{11}^{(C)}+{\left\\{{\bm{{C}}}\right\\}}_{N}{\bm{e}}_{N}{\bm{b}}_{N}^{T}-{\left\\{{\bm{{C}}}\right\\}}_{0}{\bm{e}}_{0}{\bm{b}}_{0}^{T},$ (23) where ${\bm{{A}}}_{11}^{(C)}$ is a symmetric positive semidefinite matrix and ${\bm{b}}_{N}^{T}{\bm{u}}$ and ${\bm{b}}_{0}^{T}{\bm{u}}$ are accurate approximations of the first derivative of $u$ at the boundary points ${\left\\{{\bm{\xi}}\right\\}}_{N}$ and ${\left\\{{\bm{\xi}}\right\\}}_{0}$, respectively.

In addition, the derivative operators are assumed to be compatible operators, namely that ${\bm{{H}}}_{1}$ is the same for both the first and second derivative operators and the weighting matrix ${\bm{{H}}}_{1}$ is diagonal.

###### Remark 1

It is not assumed that the boundary derivative operators ${\bm{b}}_{0}^{T}$ and ${\bm{b}}_{N}^{T}$ are the first and last rows of ${\bm{{D}}}_{1}$, namely ${\bm{b}}_{0}^{T}\neq{\bm{e}}_{0}^{T}{\bm{{D}}}_{1}$ and ${\bm{b}}_{N}^{T}\neq{\bm{e}}_{N}^{T}{\bm{{D}}}_{1}$. That is, we do not assume that the operators are fully-compatible SBP operators [18].
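To make Definition 1 concrete, the following minimal Julia sketch (ours, not taken from the paper's repository) constructs the standard second-order accurate diagonal-norm SBP first derivative operator and numerically checks the SBP property (22) together with exactness on linear functions. The grid size and tolerances are illustrative choices.

```julia
using LinearAlgebra

# Standard 2p = 2 diagonal-norm SBP first-derivative operator on [0,1]
# (illustrative sketch only).
N = 8
h = 1 / N
H₁ = Diagonal(h * [0.5; ones(N - 1); 0.5])       # diagonal norm (quadrature) matrix
Q₁ = zeros(N + 1, N + 1)                         # D₁ = H₁⁻¹ Q₁
Q₁[1, 1:2]       .= [-0.5, 0.5]                  # one-sided boundary closures
Q₁[N + 1, N:N+1] .= [-0.5, 0.5]
for k in 2:N                                     # central interior stencil
    Q₁[k, k - 1] = -0.5
    Q₁[k, k + 1] =  0.5
end
D₁ = H₁ \ Q₁

# SBP property (22): Q₁ + Q₁ᵀ = e_N e_Nᵀ - e_0 e_0ᵀ
B = zeros(N + 1, N + 1); B[1, 1] = -1.0; B[end, end] = 1.0
@assert maximum(abs.(Q₁ + Q₁' - B)) < 1e-14

# D₁ differentiates linear functions exactly, including in the boundary rows
ξ = collect(0:N) .* h
@assert maximum(abs.(D₁ * ξ .- 1.0)) < 1e-12
```

Higher-order operators follow the same pattern, with wider interior stencils and larger boundary closures that still satisfy (22).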
The reason that operators that satisfy Definitions 1 and 2 are called SBP is that the following identities $\displaystyle{\bm{u}}^{T}{\bm{{H}}}{\bm{{D}}}_{1}{\bm{v}}$ $\displaystyle={\left\\{{\bm{u}}\right\\}}_{N}{\left\\{{\bm{v}}\right\\}}_{N}-{\left\\{{\bm{u}}\right\\}}_{0}{\left\\{{\bm{v}}\right\\}}_{0}-{\bm{u}}^{T}{\bm{{D}}}_{1}^{T}{\bm{{H}}}{\bm{v}},$ (24a) $\displaystyle{\bm{u}}^{T}{\bm{{H}}}{\bm{{D}}}_{11}^{(C)}{\bm{v}}$ $\displaystyle={\left\\{{\bm{{C}}}\right\\}}_{N}{\left\\{{\bm{u}}\right\\}}_{N}{\bm{b}}_{N}^{T}{\bm{v}}-{\left\\{{\bm{{C}}}\right\\}}_{0}{\left\\{{\bm{u}}\right\\}}_{0}{\bm{b}}_{0}^{T}{\bm{v}}-{\bm{u}}^{T}{\bm{{A}}}_{11}^{(C)}{\bm{v}},$ (24b) discretely mimic the continuous integration by parts identities $\displaystyle\int_{0}^{1}u\hat{\partial}_{1}v$ $\displaystyle={(uv)|}_{0}^{1}-\int_{0}^{1}(\hat{\partial}_{1}u)v,$ (25a) $\displaystyle\int_{0}^{1}u\hat{\partial}_{1}C\hat{\partial}_{1}v$ $\displaystyle={(Cu\hat{\partial}_{1}v)|}_{0}^{1}-\int_{0}^{1}(\hat{\partial}_{1}u)C\hat{\partial}_{1}v.$ (25b) It is useful to note that ${\bm{{H}}}_{1}$ and ${\bm{{A}}}^{(C)}_{11}$ lead to quadrature approximations of the following integrals [8]: $\displaystyle\int_{0}^{1}uv$ $\displaystyle\approx{\bm{u}}^{T}{\bm{{H}}}_{1}{\bm{v}},$ (26a) $\displaystyle\int_{0}^{1}(\hat{\partial}_{1}u)C\hat{\partial}_{1}v$ $\displaystyle\approx{\bm{u}}^{T}{\bm{{A}}}_{11}^{(C)}{\bm{v}}.$ (26b)

### 4.2 Multidimensional SBP operators

Multidimensional SBP operators can be constructed via tensor products. In particular, the one-dimensional operators are applied along the grid lines. To approximate governing equations (15), derivative approximations are needed of the form: $\hat{\partial}_{i}C\hat{\partial}_{j}u\approx{\bm{\tilde{D}}}_{ij}^{(C)}{\bm{\tilde{u}}}.$ (27) The variable coefficients $C$ present in the approximation make it cumbersome to define the form of ${\bm{\tilde{D}}}_{ij}^{(C)}{\bm{\tilde{u}}}$, so here we outline some of the important discrete properties of the operator; Appendix A presents the tensor product construction of the operators in two spatial dimensions from which the higher dimensional extensions can be generalized. We define multidimensional SBP operators on the domain $[0,1]^{d}$. A regular, Cartesian grid is used to discretize the domain with $N_{i}+1$ grid points in each direction and grid spacing $h_{i}=1/N_{i}$. The solution is represented as a vector with the leading dimension being the fastest index, i.e., column-major order. So in two dimensions the grid function of $u(\xi_{1},\xi_{2})$ is the vector ${\bm{\tilde{u}}}=\begin{bmatrix}{\left\\{{\bm{\tilde{u}}}\right\\}}_{00}&{\left\\{{\bm{\tilde{u}}}\right\\}}_{10}&\dots&{\left\\{{\bm{\tilde{u}}}\right\\}}_{N_{1}N_{2}}\end{bmatrix}^{T},$ (28) where ${\left\\{{\bm{\tilde{u}}}\right\\}}_{ij}\approx u(ih_{1},jh_{2})$.
Let ${\bm{\tilde{H}}}$ be the tensor product volume norm matrix, ${\bm{\tilde{H}}}={\bm{{H}}}_{1}\otimes\cdots\otimes{\bm{{H}}}_{d},$ (29) which can be thought of as an approximation of the inner product $\int_{\hat{B}}vu\approx{\bm{\tilde{v}}}^{T}{\bm{\tilde{H}}}{\bm{\tilde{u}}}.$ (30) The tensor product derivative operators have the following SBP structure ${\bm{\tilde{H}}}{\bm{\tilde{D}}}_{ij}^{(C)}=-{\bm{\tilde{A}}}_{ij}^{(C)}+\sum_{f=2i-1}^{2i}\hat{n}^{f}_{i}{\left({\bm{{\bar{L}}}}^{f}\right)}^{T}{\bm{{H}}}^{f}{\bm{{C}}}^{f}{\bm{{\bar{B}}}}^{f}_{j},$ (31) where the two terms in the multidimensional SBP decomposition can be thought of as approximations of the following volume and surface integrals: $\displaystyle\int_{\hat{B}}(\hat{\partial}_{i}v)C(\hat{\partial}_{j}u)$ $\displaystyle\approx{\bm{\tilde{v}}}^{T}{\bm{\tilde{A}}}_{ij}^{(C)}{\bm{\tilde{u}}},$ (32a) $\displaystyle\int_{\partial\hat{B}^{f}}v\hat{n}_{i}^{f}C(\hat{\partial}_{j}u)$ $\displaystyle\approx\hat{n}_{i}^{f}{\bm{\tilde{v}}}^{T}{\left({\bm{{\bar{L}}}}^{f}\right)}^{T}{\bm{{H}}}^{f}{\bm{{C}}}^{f}{\bm{{\bar{B}}}}^{f}_{j}{\bm{\tilde{u}}}.$ (32b) If $C_{ij}$ defines a symmetric positive definite, spatially varying coefficient matrix then the matrix ${\bm{\tilde{A}}}_{ij}^{(C_{ij})}$ (summation implied over $i$ and $j$) is symmetric positive semidefinite; see Appendix A. The matrix ${\bm{{\bar{L}}}}^{f}$ selects the points from the volume vector along face $f$ of the reference block. The matrix ${\bm{{\bar{B}}}}^{f}_{j}$ computes the derivative approximation in the direction $\xi_{j}$ and evaluates it along face $f$. When $i=j$ in (31) then $f\in(2j-1,2j)$ and ${\bm{{\bar{B}}}}^{f}_{j}$ is based on the boundary derivatives from the one-dimensional second derivative SBP operator. When $i\neq j$ in (31) then $f\notin(2j-1,2j)$ and ${\bm{{\bar{B}}}}^{f}_{j}$ is based on the first derivative SBP operator. The diagonal matrix ${\bm{{H}}}^{f}$ is the tensor product surface norm matrix, which approximates $\int_{\partial\hat{B}^{f}}vu\approx{\bm{\tilde{v}}}^{T}{\left({\bm{{\bar{L}}}}^{f}\right)}^{T}{\bm{{H}}}^{f}{\bm{{\bar{L}}}}^{f}{\bm{\tilde{u}}},$ (33) and the diagonal matrix ${\bm{{C}}}^{f}$ is the variable coefficient evaluated at the points of face $f$. Since the reference unit normal $\hat{n}_{i}^{f}={(-1)}^{f}$ on faces $f\in(2i-1,2i)$ and $\hat{n}_{i}^{f}=0$ for $f\notin(2i-1,2i)$, the summation in SBP decomposition (31) can be extended to be a summation over all faces, ${\bm{\tilde{H}}}{\bm{\tilde{D}}}_{ij}^{(C)}=-{\bm{\tilde{A}}}_{ij}^{(C)}+\sum_{f=1}^{2d}\hat{n}^{f}_{i}{\left({\bm{{\bar{L}}}}^{f}\right)}^{T}{\bm{{H}}}^{f}{\bm{{C}}}^{f}{\bm{{\bar{B}}}}^{f}_{j};$ (31’) this new form will be used to simplify the statement of the discretization of the wave equation below. As noted above, here we have only outlined our basic notation and more details about the construction of the operators are given in Appendix A.
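As a small illustration of the tensor-product structure, the following Julia sketch (ours; the operator size and test integrand are arbitrary choices) forms a $d=2$ volume norm matrix as in (29) from the one-dimensional diagonal norm and verifies the quadrature property (30) on a bilinear integrand, for which the second-order trapezoid-type norm is exact.

```julia
using LinearAlgebra

# d = 2 tensor-product volume norm matrix (illustrative sketch; the same
# second-order 1D norm is used in both directions, so the Kronecker ordering
# convention does not affect the result).
N  = 4
h  = 1 / N
H1 = Diagonal(h * [0.5; ones(N - 1); 0.5])   # 1D diagonal norm (trapezoid weights)
Hv = kron(H1, H1)                            # volume norm on the reference square

ξ = (0:N) .* h
# Column-major grid vectors (leading index fastest) of u(ξ1,ξ2) = ξ1, v(ξ1,ξ2) = ξ2
u_vec = vec([ξ1 for ξ1 in ξ, ξ2 in ξ])
v_vec = vec([ξ2 for ξ1 in ξ, ξ2 in ξ])

# Quadrature property (30): ∫_{[0,1]²} v·u = 1/4 is reproduced exactly for this
# bilinear integrand, since the trapezoid-type norm is exact for linear functions.
@assert isapprox(v_vec' * Hv * u_vec, 1/4; atol = 1e-14)
```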
## 5 Multi-Block Discretization With the above defined SBP notation, a single block discretization of (15a) with weak enforcement of boundary conditions can be written as $\begin{split}{\bm{\tilde{\rho}}}\ddot{{\bm{\tilde{u}}}}=&\;{\bm{\tilde{D}}}_{ij}^{(\hat{C}_{ij})}{\bm{\tilde{u}}}+\sum_{f=1}^{2d}{\bm{\tilde{H}}}^{-1}{\left({\bm{{\bar{L}}}}^{f}\right)}^{T}{\bm{{H}}}^{f}\left({\bm{\hat{\tau}}}^{*f}-{\bm{{\hat{N}}}}^{f}_{i}{\bm{{\hat{C}}}}_{ij}^{f}{\bm{{\bar{B}}}}^{f}_{j}{\bm{\tilde{u}}}\right)\\\ &-\sum_{f=1}^{2d}{\bm{\tilde{H}}}^{-1}{\left({\bm{{\bar{B}}}}^{f}_{j}\right)}^{T}{\bm{{\hat{N}}}}^{f}_{i}{\bm{{\hat{C}}}}_{ij}^{f}{\bm{{H}}}^{f}\left({\bm{u}}^{*f}-{\bm{{\bar{L}}}}^{f}{\bm{\tilde{u}}}\right)\end{split}$ (34) which after multiplying by ${\bm{\tilde{H}}}$ and applying the multidimensional SBP property (31’) gives a form which is more convenient for analysis: $\begin{split}{\bm{\tilde{\rho}}}{\bm{\tilde{H}}}\ddot{{\bm{\tilde{u}}}}=&\;-{\bm{\tilde{A}}}_{ij}^{(\hat{C}_{ij})}{\bm{\tilde{u}}}+\sum_{f=1}^{2d}{\left({\bm{{\bar{L}}}}^{f}\right)}^{T}{\bm{{H}}}^{f}{\bm{\hat{\tau}}}^{*f}\\\ &-\sum_{f=1}^{2d}{\left({\bm{{\bar{B}}}}^{f}_{j}\right)}^{T}{\bm{{\hat{N}}}}^{f}_{i}{\bm{{\hat{C}}}}_{ij}^{f}{\bm{{H}}}^{f}\left({\bm{u}}^{*f}-{\bm{u}}^{f}\right).\end{split}$ (35) Here we have defined ${\bm{u}}^{f}={\bm{{\bar{L}}}}^{f}{\bm{\tilde{u}}},$ (36) and ${\bm{\tilde{\rho}}}$ is a diagonal matrix of density $\rho$ evaluated at the grid points. The vectors ${\bm{\hat{\tau}}}^{*f}$ and ${\bm{u}}^{*f}$, which we call the numerical fluxes, are used to enforce the boundary and interface conditions weakly; the exact form of these vectors will depend on the specific boundary or interface condition and is discussed in detail below. We define the energy in the domain as $\mathcal{E}=\sum_{B\in{\mathcal{B}(\Omega)}}\mathcal{E}^{B},$ (37) where the energy in block $B$ is $\begin{split}\mathcal{E}^{B}=&\;\frac{1}{2}\dot{{\bm{\tilde{u}}}}^{T}{\bm{\tilde{H}}}{\bm{\tilde{\rho}}}\dot{{\bm{\tilde{u}}}}+\frac{1}{2}{\bm{\tilde{u}}}^{T}{\bm{\tilde{A}}}_{ij}^{(\hat{C}_{ij})}{\bm{\tilde{u}}}\\\ +&\frac{1}{2}\sum_{f=1}^{2d}\left({\left({\bm{\hat{\tau}}}^{f}\right)}^{T}{\bm{{X}}}^{f}{\bm{{H}}}^{f}{\left({\bm{\hat{\tau}}}^{f}\right)}-{\left({\bm{\hat{T}}}^{f}\right)}^{T}{\bm{{X}}}^{f}{\bm{{H}}}^{f}{\left({\bm{\hat{T}}}^{f}\right)}\right).\end{split}$ (38) Here we have defined the matrix ${\bm{{X}}}^{f}={\left({\bm{{\hat{N}}}}^{f}_{i}{\bm{{\hat{C}}}}_{ij}^{f}{\bm{{\hat{N}}}}^{f}_{j}{\bm{{\Gamma}}}^{f}\right)}^{-1}$ (39) where ${\bm{{\Gamma}}}^{f}$ is a penalty parameter which must be sufficient large; a lower bound for ${\bm{{\Gamma}}}^{f}$ is given by (86a). Additionally we define the block and interface face tractions $\displaystyle{\bm{\hat{T}}}^{f}$ $\displaystyle={\bm{{\hat{N}}}}^{f}_{i}{\bm{{\hat{C}}}}_{ij}^{f}{\bm{{\bar{B}}}}^{f}_{j}{\bm{\tilde{u}}},$ (40a) $\displaystyle{\bm{\hat{\tau}}}^{f}$ $\displaystyle={\bm{\hat{T}}}^{f}+{\left({\bm{{X}}}^{f}\right)}^{-1}\left({\bm{u}}^{*f}-{\bm{{\bar{L}}}}^{f}{\bm{\tilde{u}}}\right).$ (40b) Essentially, discrete energy (38) is a direct discretization of continuous energy (19) with an additional penalty on the faces for the mismatch between two alternative approximations of the traction $\hat{\tau}^{f}$ (14b) on the faces. The discrete energy satisifies the following lemma; see Appendix C. ###### Lemma 2 Energy (38) is a seminorm of the solution if ${\bm{{\Gamma}}}^{f}$ is positive and sufficiently large. 
###### Remark 2

In the proof of Lemma 2 the updated borrowing lemma from Almquist and Dunham [1] is employed to determine the penalty parameter ${\bm{{\Gamma}}}^{f}$. Though not shown, in one spatial dimension a slightly better parameter can be determined using the borrowing lemma from Virta and Mattsson [28]; in multiple dimensions Almquist and Dunham [1] yields better results.

The stability of the scheme will be shown by proving that for each boundary type, the global energy is non-increasing in time when the boundary data is set to zero. Namely, we will show that $\dot{\mathcal{E}}=\sum_{B\in{\mathcal{B}(\Omega)}}\dot{\mathcal{E}}^{B}\leq 0.$ (41) For a single block, taking the time derivative of block energy (38) and using discretization (35) gives $\displaystyle\dot{\mathcal{E}}^{B}=$ $\displaystyle\;\sum_{f=1}^{2d}\dot{\mathcal{E}}^{f},$ (42a) $\displaystyle\dot{\mathcal{E}}^{f}=$ $\displaystyle\;{\left({\bm{\hat{\tau}}}^{f}\right)}^{T}{\bm{{X}}}^{f}{\bm{{H}}}^{f}{\left(\dot{{\bm{\hat{\tau}}}}^{f}\right)}-{\left({\bm{\hat{T}}}\right)}^{T}{\bm{{X}}}^{f}{\bm{{H}}}^{f}{\left(\dot{{\bm{\hat{T}}}}\right)}$ (42b) $\displaystyle+{\left({\bm{\hat{\tau}}}^{*f}\right)}^{T}{\bm{{H}}}^{f}\dot{{\bm{u}}}^{f}-{\left({\bm{u}}^{*f}-{\bm{u}}^{f}\right)}^{T}{\bm{{H}}}^{f}\dot{{\bm{\hat{T}}}}.$ Using the definition of ${\bm{\hat{\tau}}}^{f}$ (40b) the rate of change of face energy simplifies to $\dot{\mathcal{E}}^{f}={\left({\bm{\hat{\tau}}}^{*f}\right)}^{T}{\bm{{H}}}^{f}\dot{{\bm{u}}}^{f}+{\left({\bm{\hat{\tau}}}^{f}\right)}^{T}{\bm{{H}}}^{f}{\left(\dot{{\bm{u}}}^{*f}-\dot{{\bm{u}}}^{f}\right)}.$ (43) Discrete face energy rate (43) is of the same form as the continuous counterpart (73), namely a boundary integral of the particle velocity times the traction at the boundary. Stability is now reduced to showing that if a face $f$ is on a physical boundary then $\dot{\mathcal{E}}^{f}\leq 0$, and if on a block interface then $\dot{\mathcal{E}}^{f^{-}}+\dot{\mathcal{E}}^{f^{+}}\leq 0$.

In the remainder of this section, numerical fluxes are given for characteristic boundary conditions as well as the characteristic treatment of locked and nonlinear interfaces. In all of these cases the basic idea is to specify an equation for $\dot{u}^{*}$, i.e., the time derivative of the numerical flux, so that $u^{*}$ becomes an additional block face variable that must be integrated in time. It is shown that in all cases this leads to energy dissipation across the block face and thus stability. In Appendix D the typical SBP-SAT numerical fluxes for Dirichlet, Neumann, and characteristic boundary conditions as well as locked interfaces are given, e.g., those from Virta and Mattsson [28] with the improved Dirichlet penalty parameter of Almquist and Dunham [1]. The standard approach specifies $u^{*}$ directly, i.e., it does not require an additional block face variable to be integrated in time.

### 5.1 Characteristic Boundary Conditions

When block face $f$ corresponds to a characteristic boundary (15c’) the basic idea is to mimic the upwinding procedure of the first order formulation. Namely, we seek to modify the incoming characteristic variable $q$ while leaving the outgoing characteristic variable $w$ unmodified. The challenge here is that the velocity is not a prognostic variable in our formulation. To get around this, we introduce an equation for $\dot{{\bm{u}}}^{*f}$ which describes the time evolution of the numerical flux, and this is used to enforce the boundary condition.
Namely, we choose values of ${\bm{\hat{\tau}}}^{*f}$ and $\dot{{\bm{u}}}^{*f}$ which preserves the outgoing characteristic variable while also satisfying the boundary condition: $\displaystyle{\bm{{\hat{Z}}}}^{f}\dot{{\bm{u}}}^{*f}-{\bm{\hat{\tau}}}^{*f}$ $\displaystyle={\bm{{\hat{Z}}}}^{f}\dot{{\bm{u}}}^{f}-{\bm{\hat{\tau}}}^{f},$ (44a) $\displaystyle{\bm{{\hat{Z}}}}^{f}\dot{{\bm{u}}}^{*f}+{\bm{\hat{\tau}}}^{*f}$ $\displaystyle={\bm{{R}}}^{f}\left({\bm{{\hat{Z}}}}^{f}\dot{{\bm{u}}}^{*f}-{\bm{\hat{\tau}}}^{*f}\right)+{\bm{{S}}}_{J}^{f}{\bm{g}}_{C}^{f};$ Solving these equations for the numerical fluxes then gives $\displaystyle\dot{{\bm{u}}}^{*f}$ $\displaystyle=\frac{{\bm{{I}}}+{\bm{{R}}}^{f}}{2}\left(\dot{{\bm{u}}}^{f}-{\left({\bm{{\hat{Z}}}}^{f}\right)}^{-1}{\bm{\hat{\tau}}}^{f}\right)+\frac{1}{2}{\left({\bm{{\hat{Z}}}}^{f}\right)}^{-1}{\bm{{S}}}_{J}^{f}{\bm{g}}_{C}^{f},$ (45a) $\displaystyle{\bm{\hat{\tau}}}^{*f}$ $\displaystyle=-\frac{{\bm{{I}}}-{\bm{{R}}}^{f}}{2}\left({\bm{{\hat{Z}}}}^{f}\dot{{\bm{u}}}^{f}-{\bm{\hat{\tau}}}^{f}\right)+\frac{1}{2}{\bm{{S}}}_{J}^{f}{\bm{g}}_{C}^{f}.$ (45b) Note that this formulation requires that ${\bm{u}}^{*f}$ be stored along the face and integrated in time. Using the characteristic boundary treatment (45) in the face energy rate of change (43) then gives $\begin{split}\dot{\mathcal{E}}^{f}=&\;{\left(-\frac{{\bm{{I}}}-{\bm{{R}}}^{f}}{2}\left({\bm{{\hat{Z}}}}^{f}\dot{{\bm{u}}}^{f}-{\bm{\hat{\tau}}}^{f}\right)+\frac{1}{2}{\bm{{S}}}_{J}^{f}{\bm{g}}_{C}^{f}\right)}^{T}{\bm{{H}}}^{f}\dot{{\bm{u}}}^{f}\\\ &+{\left({\bm{\hat{\tau}}}^{f}\right)}^{T}{\bm{{H}}}^{f}{\left(\frac{{\bm{{I}}}+{\bm{{R}}}^{f}}{2}\left(\dot{{\bm{u}}}^{f}-{\left({\bm{{\hat{Z}}}}^{f}\right)}^{-1}{\bm{\hat{\tau}}}^{f}\right)+\frac{1}{2}{\left({\bm{{\hat{Z}}}}^{f}\right)}^{-1}{\bm{{S}}}_{J}^{f}{\bm{g}}_{C}^{f}-\dot{{\bm{u}}}^{f}\right)}\\\ =&\;-{\left(\dot{{\bm{u}}}^{f}\right)}^{T}\frac{{\bm{{I}}}-{\bm{{R}}}^{f}}{2}{\bm{{\hat{Z}}}}^{f}{\bm{{H}}}^{f}\dot{{\bm{u}}}^{f}-{\left({\bm{\hat{\tau}}}^{f}\right)}^{T}\frac{{\bm{{I}}}+{\bm{{R}}}^{f}}{2}{\left({\bm{{\hat{Z}}}}^{f}\right)}^{-1}{\bm{{H}}}^{f}{\bm{\hat{\tau}}}^{f}\\\ &+\frac{1}{2}{\left({\bm{{S}}}_{J}^{f}{\bm{g}}_{C}^{f}\right)}^{T}{\bm{{H}}}^{f}\left(\dot{{\bm{u}}}^{f}+{\left({\bm{{\hat{Z}}}}^{f}\right)}^{-1}{\bm{\hat{\tau}}}^{f}\right)\\\ \leq&\;\frac{1}{2}{\left({\bm{{S}}}_{J}^{f}{\bm{g}}_{C}^{f}\right)}^{T}{\bm{{H}}}^{f}\left(\dot{{\bm{u}}}^{f}+{\left({\bm{{\hat{Z}}}}^{f}\right)}^{-1}{\bm{\hat{\tau}}}^{f}\right),\end{split}$ (46) where we have used that the reflection coefficient satisfies $-1\leq R\leq 1$. Letting $g_{C}=0$ it follows that $\dot{\mathcal{E}}_{f}\leq 0$ and the boundary treatment is energy stable. ### 5.2 Characteristic Interface For characteristic interfaces, locked or nonlinear, the aim is to define the numerical fluxes to satisfy the interface condition in a way that preserves the characteristic variables propagating into the interface. As noted in Section 3, the nonlinear and locked interface conditions can be enforced using the function $\hat{\mathcal{Q}}^{f^{\pm}}$ (15d’). 
Thus we define ${\bm{\hat{\tau}}}^{*f^{\pm}}$ and $\dot{{\bm{u}}}^{*f^{\pm}}$ so that they satisfy $\displaystyle{\bm{\hat{w}}}^{f^{\pm}}$ $\displaystyle={\bm{{\hat{Z}}}}^{f^{\pm}}\dot{{\bm{u}}}^{f^{\pm}}-{\bm{\hat{\tau}}}^{f^{\pm}}={\bm{{\hat{Z}}}}^{f^{\pm}}\dot{{\bm{u}}}^{*f^{\pm}}-{\bm{\hat{\tau}}}^{*f^{\pm}},$ (47a) $\displaystyle{\bm{\hat{q}}}^{*f^{\pm}}$ $\displaystyle=\hat{\mathcal{Q}}^{f^{\pm}}\left({\bm{\hat{w}}}^{f^{\pm}},{\bm{\hat{w}}}^{f^{\mp}}\right)={\bm{{\hat{Z}}}}^{f^{\pm}}\dot{{\bm{u}}}^{*f^{\pm}}+{\bm{\hat{\tau}}}^{*f^{\pm}}.$ (47b) Solving for the numerical fluxes then gives $\displaystyle\dot{{\bm{u}}}^{*f^{\pm}}$ $\displaystyle=\frac{1}{2}{\left({\bm{{\hat{Z}}}}^{f^{\pm}}\right)}^{-1}\left({\bm{\hat{q}}}^{*f^{\pm}}+{\bm{\hat{w}}}^{f^{\pm}}\right),$ (48a) $\displaystyle{\bm{\hat{\tau}}}^{*f^{\pm}}$ $\displaystyle=\frac{1}{2}\left({\bm{\hat{q}}}^{*f^{\pm}}-{\bm{\hat{w}}}^{f^{\pm}}\right).$ (48b) Since ${\bm{\hat{\tau}}}^{*f^{\pm}}$ and $\dot{{\bm{u}}}^{*f^{\pm}}$ satisfy the interface conditions, it follows that for a locked interface: $\displaystyle{\bm{\hat{\tau}}}^{*f^{-}}=-{\bm{\hat{\tau}}}^{*f^{+}},$ (49a) $\displaystyle\dot{{\bm{u}}}^{*f^{-}}=\dot{{\bm{u}}}^{*f^{+}},$ (49b) and for the nonlinear interface: $\displaystyle{\bm{\hat{\tau}}}^{*f^{-}}=-{\bm{\hat{\tau}}}^{*f^{+}},$ (50a) $\displaystyle{\bm{\hat{\tau}}}^{*f^{\pm}}={\bm{{S}}}_{J}^{f}F\left({\bm{V}}^{*f^{\pm}}\right),$ (50b) where ${\bm{V}}^{*f^{\pm}}=\dot{{\bm{u}}}^{*f^{\mp}}-\dot{{\bm{u}}}^{*f^{\pm}}$. Since it is required that $VF(V)\geq 0$, for both the locked and nonlinear interface treatment it follows that ${\left({\bm{\hat{\tau}}}^{*f^{\pm}}\right)}^{T}{\bm{V}}^{*f^{\pm}}\geq 0;$ (51) in the locked interface case ${\bm{V}}^{*f^{\pm}}={\bm{0}}$. In order to analyze the interface treatment, it is useful to define the grid based characteristic variable $\begin{split}{\bm{\hat{q}}}^{f^{\pm}}={\bm{{\hat{Z}}}}^{f^{\pm}}\dot{{\bm{u}}}^{f^{\pm}}+{\bm{\hat{\tau}}}^{f^{\pm}},\end{split}$ (52) so that we can write $\displaystyle\dot{{\bm{u}}}^{f^{\pm}}$ $\displaystyle=\frac{1}{2}{\left({\bm{{\hat{Z}}}}^{f^{\pm}}\right)}^{-1}\left({\bm{\hat{q}}}^{f^{\pm}}+{\bm{w}}^{f^{\pm}}\right),$ (53a) $\displaystyle{\bm{\hat{\tau}}}^{f^{\pm}}$ $\displaystyle=\frac{1}{2}\left({\bm{\hat{q}}}^{f^{\pm}}-{\bm{w}}^{f^{\pm}}\right);$ (53b) identical expressions can be written for the numerical fluxes ${\bm{\hat{q}}}^{*f^{\pm}}$, $\dot{{\bm{u}}}^{*f^{\pm}}$, and ${\bm{\hat{\tau}}}^{*f^{\pm}}$. 
Using these in the face energy rate of change (43) gives $\begin{split}\dot{\mathcal{E}}^{f^{\pm}}=&\;{\left({\bm{\hat{\tau}}}^{*f^{\pm}}\right)}^{T}{\bm{{H}}}^{f}\left(\dot{{\bm{u}}}^{*f^{\pm}}-\dot{{\bm{u}}}^{*f^{\pm}}+\dot{{\bm{u}}}^{f^{\pm}}\right)+{\left({\bm{\hat{\tau}}}^{f^{\pm}}\right)}^{T}{\bm{{H}}}^{f}{\left(\dot{{\bm{u}}}^{*f^{\pm}}-\dot{{\bm{u}}}^{f^{\pm}}\right)}\\\ =&\;{\left({\bm{\hat{\tau}}}^{*f^{\pm}}\right)}^{T}{\bm{{H}}}^{f}\dot{{\bm{u}}}^{*f^{\pm}}-{\left({\bm{\hat{\tau}}}^{*f^{\pm}}-{\bm{\hat{\tau}}}^{f^{\pm}}\right)}^{T}{\bm{{H}}}^{f}\left(\dot{{\bm{u}}}^{*f^{\pm}}-\dot{{\bm{u}}}^{f^{\pm}}\right)\\\ =&\;{\left({\bm{\hat{\tau}}}^{*f^{\pm}}\right)}^{T}{\bm{{H}}}^{f}\dot{{\bm{u}}}^{*f^{\pm}}-\frac{1}{4}{\left({\bm{\hat{q}}}^{*f^{\pm}}-{\bm{\hat{q}}}^{f^{\pm}}\right)}^{T}{\left({\bm{{\hat{Z}}}}^{f^{\pm}}\right)}^{-1}{\bm{{H}}}^{f}\left({\bm{\hat{q}}}^{*f^{\pm}}-{\bm{\hat{q}}}^{f^{\pm}}\right).\end{split}$ (54) Adding the two sides of an interface together yields $\begin{split}\dot{\mathcal{E}}^{f^{-}}+\dot{\mathcal{E}}^{f^{+}}=&\;{\left({\bm{\hat{\tau}}}^{*f^{-}}\right)}^{T}{\bm{{H}}}^{f}\dot{{\bm{u}}}^{*f^{-}}+{\left({\bm{\hat{\tau}}}^{*f^{+}}\right)}^{T}{\bm{{H}}}^{f}\dot{{\bm{u}}}^{*f^{+}}\\\ &-\frac{1}{4}{\left({\bm{\hat{q}}}^{*f^{-}}-{\bm{\hat{q}}}^{f^{-}}\right)}^{T}{\left({\bm{{\hat{Z}}}}^{f^{-}}\right)}^{-1}{\bm{{H}}}^{f}\left({\bm{\hat{q}}}^{*f^{-}}-{\bm{\hat{q}}}^{f^{-}}\right)\\\ &-\frac{1}{4}{\left({\bm{\hat{q}}}^{*f^{+}}-{\bm{\hat{q}}}^{f^{+}}\right)}^{T}{\left({\bm{{\hat{Z}}}}^{f^{+}}\right)}^{-1}{\bm{{H}}}^{f}\left({\bm{\hat{q}}}^{*f^{+}}-{\bm{\hat{q}}}^{f^{+}}\right)\\\ =&\;-{\left({\bm{\hat{\tau}}}^{*f^{-}}\right)}^{T}{\bm{{H}}}^{f}{\bm{V}}^{*f^{-}}\\\ &-\frac{1}{4}{\left({\bm{\hat{q}}}^{*f^{-}}-{\bm{\hat{q}}}^{f^{-}}\right)}^{T}{\left({\bm{{\hat{Z}}}}^{f^{-}}\right)}^{-1}{\bm{{H}}}^{f}\left({\bm{\hat{q}}}^{*f^{-}}-{\bm{\hat{q}}}^{f^{-}}\right)\\\ &-\frac{1}{4}{\left({\bm{\hat{q}}}^{*f^{+}}-{\bm{\hat{q}}}^{f^{+}}\right)}^{T}{\left({\bm{{\hat{Z}}}}^{f^{+}}\right)}^{-1}{\bm{{H}}}^{f}\left({\bm{\hat{q}}}^{*f^{+}}-{\bm{\hat{q}}}^{f^{+}}\right).\end{split}$ (55) Here we have used that ${\bm{\hat{\tau}}}^{*f^{+}}=-{\bm{\hat{\tau}}}^{*f^{-}}$. Energy stability results since this face energy rate of change is non-positive due to the positivity result of (51) and the fact that the second two terms are in quadratic form. ## 6 Numerical Experiments We confirm theoretical stability and examine accuracy with numerical experiments in one and two spatial dimensions. When needed, the error is measured using the discrete L2 norm $\|\Delta{\bm{\tilde{u}}}\|_{H}=\sqrt{\sum_{b=1}^{N_{b}}{\left(\Delta{\bm{\tilde{u}}}^{B}\right)}^{T}{\bm{\tilde{J}}}^{B}{\bm{\tilde{H}}}^{B}\Delta{\bm{\tilde{u}}}^{B}},$ (56) where $\Delta{\bm{\tilde{u}}}$ is the difference between the numerical and analytic solution evaluated at the grid points. In all cases the penalty parameter is chosen to be at the stability limit, i.e., the equality condition of (86a). Throughout we refer to the SBP operators as $2p$ where $p$ is the boundary accuracy and $2p$ is the interior accuracy; unless otherwise noted, the SBP orders used are $2p=2,4,6$. SBP methods for the wave equation in second order form typically see a global convergence rate of $\min(2p,p+2)$, i.e., two orders of accuracy greater than the boundary accuracy except in the case of $2p=2$. For first derivatives we use the operators from Strand [27]111The free parameter $x_{1}=0.70127127127127$ is used for $2p=6$. 
and for second derivatives the variable coefficient operators from Mattsson [14]. The Julia programming language [2, v1.6.0] was used for all simulations with the codes available at https://github.com/jkozdon/sbp_waveprop_characteristic.

### 6.1 One Dimensional Linear Boundary Condition: Stiffness and Accuracy

We begin by comparing the proposed characteristic and standard non-characteristic boundary treatment in one spatial dimension. A single block is used for the domain $\Omega=[0,1]$ and the material properties are taken to be $\rho=C_{11}=1$. Both the right and left boundaries are characteristic with a reflection coefficient $R\in[-1,1]$.

(a) $R=0.99$ (or $\alpha=1/199$). (b) $R=0$ (or $\alpha=1$). (c) $R=-0.99$ (or $\alpha=199$).

Figure 2: L2 convergence comparison of the characteristic ($+$ markers) and non-characteristic ($\times$ markers) treatment of boundary conditions with various values of the reflection coefficient $R$. The red, blue, and green curves correspond to SBP interior orders $2$, $4$, and $6$, respectively.

The accuracy of the scheme is assessed using the initial condition $\displaystyle u_{0}(x)$ $\displaystyle=\sin(2\pi x)^{6},$ (57a) $\displaystyle\dot{u}_{0}(x)$ $\displaystyle=0,$ (57b) which for times $t\in[0,1]$ has the analytic solution $\begin{split}u(x,t)&=\frac{\bar{u}_{0}(x-t)+\bar{u}_{0}(x+t)+R\left(\bar{u}_{0}(2-x-t)+\bar{u}_{0}(-x+t)\right)}{2},\\\ \bar{u}_{0}(x)&=\begin{cases}u_{0}(x),&0\leq x\leq 1,\\\ 0,&\text{otherwise.}\end{cases}\end{split}$ (58)

The L2 convergence can be seen in Figure 2 at time $t=0.9$ using $R=0.99$, $0$, and $-0.99$. The spatial resolutions used in the test are $N=17\times 2^{r}$ with $r=0,1,2,3,4,5$ and time integration is performed using matrix exponentiation. As can be seen, the characteristic method converges at a rate similar to the non-characteristic method for this test problem. For the characteristic method with $2p=6$ the overall error constant is higher, though this can be improved by increasing the penalty parameter (not shown) at the cost of increased stiffness.

Figure 1 shows the eigenvalue spectra for the same values of $R$ used in Figure 2 with resolution $N=50$. As discussed in the introduction, though the schemes have similar convergence properties, the eigenvalue spectra are different. Importantly, as $R\rightarrow-1$ the non-characteristic method has a single eigenvalue with a real part that tends to $-\infty$. In this limit, the large magnitude eigenvalue prevents the use of, for instance, explicit Runge-Kutta time stepping; when $R=-1$ the boundary condition reduces to Dirichlet and the standard Dirichlet boundary treatment could be used. Though the characteristic method does have a worse time step restriction for $R\in[0,1]$, the scheme never results in an eigenvalue that grows arbitrarily and even in the worst case, $R=1$, has a spectrum that is appropriate for explicit Runge-Kutta methods.
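As a small illustration, the exact solution (58) used in this convergence test can be evaluated directly; the Julia sketch below (with illustrative names) assumes the zero-extended initial data $\bar{u}_{0}$ from (57a).

```julia
# Sketch: analytic solution (58) for the 1D test, valid for t ∈ [0, 1].
ubar0(x) = (0 <= x <= 1) ? sin(2π * x)^6 : 0.0   # zero-extended (57a)

u_exact(x, t, R) =
    (ubar0(x - t) + ubar0(x + t) + R * (ubar0(2 - x - t) + ubar0(-x + t))) / 2
```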
### 6.2 Two Dimensional Nonlinear Interface: Stiffness, Accuracy, and Robustness

Figure 3: Two-dimensional domain used for numerical results in Section 6.2. The thick green line is the interface between the two subdomains $\Omega_{1}$ and $\Omega_{2}$. The thin black lines show the finite difference block interfaces.

We now consider the two-dimensional square domain $\Omega=[-2,2]^{2}$. Inside of $\Omega$ we define the unit circle $\Gamma_{I}=\\{(x_{1},x_{2})|x_{1}^{2}+x_{2}^{2}=1\\}$ to partition the domain into a closed unit disk $\Omega_{1}=\\{(x_{1},x_{2})|x_{1}^{2}+x_{2}^{2}\leq 1\\}$ and the remainder $\Omega_{2}=\text{cl}(\Omega\setminus\Omega_{1})$. The interface $\Gamma_{I}$ is governed by the nonlinear condition $\displaystyle\tau^{\pm}=\alpha\operatorname{arcsinh}\left(V^{\pm}\right)+g_{\tau}^{\pm},$ (59) where $\alpha>0$ and $g_{\tau}^{\pm}$ is a time and space dependent forcing function; around $V=0$ with $g_{\tau}^{\pm}=0$ the linearization of the interface condition is $\tau^{\pm}=\alpha V$. The right and left boundaries of $\Omega$ are taken to be Dirichlet, the top and bottom boundaries Neumann; the Dirichlet and Neumann boundary conditions are enforced using the standard approach described in Appendix D. As shown in Figure 3, the domain is decomposed into 56 finite difference blocks and the locked interface conditions, i.e., the artificial computational interfaces, are enforced using the characteristic approach described in Section 5.2. Given the unstructured connectivity of the blocks it is necessary to use the same $(N+1)\times(N+1)$ grid of points in each block; we refer to $N$ as the block size. For all the test problems in this section, time stepping is performed using the low-storage, fourth order Runge-Kutta scheme of Carpenter and Kennedy [4, (5,4) $2N$-Storage RK scheme, solution $3$].

In order to assess the stiffness and accuracy of the scheme in two spatial dimensions we use the method of manufactured solutions (MMS) [24]. In particular, we assume an analytic solution and compute the necessary boundary, interface, and volume data. The manufactured solution is taken to be $u(x_{1},x_{2},t)=\begin{cases}\sin(t)\frac{e}{1+e}\left(2-e^{-r^{2}}\right)r\sin(\theta),&\quad(x_{1},x_{2})\in\Omega_{1}\\\ \sin(t)\left({(r-1)}^{2}\cos(\theta)+(r-1)\sin(\theta)\right),&\quad(x_{1},x_{2})\in\Omega_{2},\end{cases}$ (60) where $r^{2}=x_{1}^{2}+x_{2}^{2}$ and $\theta=\operatorname{atan2}(x_{2},x_{1})$. The boundary, interface, and forcing data are found by using assumed solution (60) in governing equations (3). In order to avoid order reduction with time dependent data, we found it necessary to define the Dirichlet boundary data by integrating $\dot{g}_{D}$ using the Runge-Kutta method. Solution (60) satisfies force balance along $\Gamma_{I}$, i.e., continuity of traction $\tau$, and the interface data $g_{\tau}^{\pm}$ is used to enforce the assumed solution. In the MMS test the material properties are $\rho=1$ and $C_{ij}=\delta_{ij}$, with $\delta_{ij}$ being the Kronecker delta; after the mesh warping the effective material parameters $\hat{C}_{ij}$ are spatially varying.
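For reference, a small Julia sketch of the manufactured solution (60); the function and variable names are illustrative, and the polar coordinates are computed from the physical coordinates as in the text.

```julia
# Sketch: manufactured solution (60).  The two branches correspond to the
# closed unit disk Ω₁ and its complement Ω₂.
function u_mms(x1, x2, t)
    r = sqrt(x1^2 + x2^2)
    θ = atan(x2, x1)                     # two-argument arctangent, atan2
    if r <= 1
        return sin(t) * (exp(1) / (1 + exp(1))) * (2 - exp(-r^2)) * r * sin(θ)
    else
        return sin(t) * ((r - 1)^2 * cos(θ) + (r - 1) * sin(θ))
    end
end
```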
| characteristic | non-characteristic ---|---|--- $\alpha$ | $\gamma$ | $\|\Delta{\bm{\tilde{u}}}\|_{H}$ with $\gamma$ | (with $2\gamma$) | $\gamma$ | $\|\Delta{\bm{\tilde{u}}}\|_{H}$ with $\gamma$ | (with $2\gamma$) $1$ | $1/2$ | $1.393\,209\,499\,389\,692\,1\text{\times}{10}^{-9}$ | ($1.637\,216\,047\,770\,288\,8\text{\times}{10}^{10}$) | $1/2\phantom{{}^{1}}$ | $1.232\,508\,736\,634\,718\text{\times}{10}^{-9}$ | ($1.637\,216\,047\,770\,288\,8\text{\times}{10}^{10}$) $4$ | $1/2$ | $1.388\,308\,278\,706\,828\,1\text{\times}{10}^{-9}$ | ($1.637\,216\,047\,770\,288\,8\text{\times}{10}^{10}$) | $1/2\phantom{{}^{1}}$ | $1.242\,230\,995\,069\,987\text{\times}{10}^{-9}$ | ($1.637\,216\,047\,770\,288\,8\text{\times}{10}^{10}$) $16$ | $1/2$ | $1.387\,058\,219\,374\,294\,8\text{\times}{10}^{-9}$ | ($1.637\,216\,047\,770\,288\,8\text{\times}{10}^{10}$) | $1/2^{3}$ | $1.281\,850\,798\,312\,621\text{\times}{10}^{-9}$ | ($1.901\,218\,297\,378\,094\,4\text{\times}{10}^{-2}$) $64$ | $1/2$ | $1.388\,263\,468\,181\,041\,0\text{\times}{10}^{-9}$ | ($1.637\,216\,047\,770\,288\,8\text{\times}{10}^{10}$) | $1/2^{5}$ | $1.339\,934\,930\,623\,836\text{\times}{10}^{-9}$ | ($2.513\,286\,868\,290\,430\,0\text{\times}{10}^{-2}$) $128$ | $1/2$ | $1.388\,653\,685\,523\,628\,6\text{\times}{10}^{-9}$ | ($1.637\,216\,047\,770\,288\,8\text{\times}{10}^{10}$) | $1/2^{6}$ | $1.357\,500\,106\,402\,760\text{\times}{10}^{-9}$ | ($2.621\,685\,979\,253\,801\,0\text{\times}{10}^{-2}$) Table 1: Stable Courant $\gamma$ for the characteristic and non-characteristic methods for increasing values of $\alpha$ using the SBP operator with interior accuracy $2p=6$. Shown also are the L2 errors for the stable Courant number $\gamma$ and the unstable Courant number $2\gamma$. To compare the stiffness of the standard and characteristic nonlinear interface treatment we vary the nonlinear interface parameter $\alpha$ and decrease the time step size until the simulation is stable for a fixed block size $N=48$. For a non-stiff method, the time step size should be on the order of the effective grid spacing for all $\alpha>0$. In particular, we define the time step size to be $\Delta t=\gamma\bar{h},$ (61) where $\gamma$ is the Courant number and a non-stiff scheme should have $\gamma\sim 1$; since the material properties are taken to be unity the wave speed in this problem is $1$. The effective grid spacing $\bar{h}$ is defined as $\bar{h}=\min(\bar{h}_{1},\bar{h}_{2}),~{}\bar{h}_{r}=\frac{1}{N}\sqrt{{\left(\hat{\partial}_{r}x_{1}\right)}^{2}+{\left(\hat{\partial}_{r}x_{2}\right)}^{2}}.$ (62) Table 1 gives the Courant number $\gamma$ required for stability of the two methods with various values of $\alpha$ using SBP interior order $2p=6$. Here the value of $\gamma$ was repeatedly halved until the error in the simulation at time $t=0.1$ no longer decreased dramatically. As can be seen the characteristic method requires a similar time step for all values of the parameter $\alpha$ whereas the non-characteristic method requires a reduced time step as $\alpha$ increases. Though not shown, results with SBP interior orders $2p=2$ and $2p=4$ are similar; for $2p=2$ the characteristic method can use a Courant of $\gamma=1$ for all values of $\alpha$ as can the non- characteristic method with $\alpha=1$. 
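The time-step rule (61)–(62) is straightforward to implement; the Julia sketch below assumes the metric terms $\hat{\partial}_{r}x_{1}$ and $\hat{\partial}_{r}x_{2}$ have already been evaluated and are passed as representative scalars per block, which is a simplification of what the actual code does.

```julia
# Sketch: effective grid spacing (62) and time step (61) for one block.
# d1x1, d1x2 (resp. d2x1, d2x2) are representative values of the metric
# derivatives in the ξ₁ (resp. ξ₂) direction; N is the block size and γ the
# Courant number.
function block_time_step(γ, N, d1x1, d1x2, d2x1, d2x2)
    h1 = sqrt(d1x1^2 + d1x2^2) / N
    h2 = sqrt(d2x1^2 + d2x2^2) / N
    return γ * min(h1, h2)        # Δt = γ h̄ with h̄ = min(h̄₁, h̄₂)
end
```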
Figure 4: Convergence results for MMS solution (60) using SBP interior orders $2p=2,4,6$ with the characteristic nonlinear interface treatment. The value of $\bar{h}_{0}\approx 0.019$ corresponds to block size $N=17$.

To investigate the convergence of the two-dimensional, characteristic method we now run the same MMS solution (60) to time $t_{f}=1$ using $\alpha=128$ with different levels of refinement and a fixed Courant number $\gamma=1/2$. Figure 4 shows the convergence of the scheme using mesh levels $N=17\times 2^{r}$ where $r=0,1,2,3$. As can be seen, the convergence order is similar to the one-dimensional case.

(a) material parameter $c_{11}$ (b) material parameter $c_{12}$ (c) material parameter $c_{22}$ (d) displacement $u$ at $t=1$ (e) displacement $u$ at $t=2$ (f) displacement $u$ at $t=3$

Figure 5: Variable material parameters $C_{ij}$ and displacement field $u$ snapshots with block size $N=136$ with the mesh shown in Figure 3. The colormap for the displacement field is saturated to show features at later times and the green curve indicates the location of the nonlinear interface.

As a final test, we explore the self-convergence and energy dissipation properties of the characteristic method with variable material properties and no body or boundary data. The same two-dimensional spatial domain is used, but now the material parameters are taken to be $\displaystyle\rho$ $\displaystyle=1,$ (63a) $\displaystyle C_{11}$ $\displaystyle=\cos(\theta)^{2}+\frac{1}{2}\sin(\theta)^{2},$ (63b) $\displaystyle C_{12}$ $\displaystyle=-\frac{1}{2}\cos(\theta)\sin(\theta),$ (63c) $\displaystyle C_{22}$ $\displaystyle=\sin(\theta)^{2}+\frac{1}{2}\cos(\theta)^{2},$ (63d) where the angle $\theta=\frac{\pi}{4}\left(2-x_{1}\right)\left(2-x_{2}\right)$; colormaps of the material parameters are shown in Figure 5. The Courant number $\gamma=1/2$ is used for all the simulations and the material parameters lead to a maximum wave speed of $1$, i.e., maximum eigenvalue of the matrix defined by $C_{ij}/\rho$. The initial displacement is taken to be the product of off-center Gaussians $u_{0}=\exp\left(-\frac{{\left(x_{1}-\mu_{1}\right)}^{2}}{2\sigma_{1}}-\frac{{\left(x_{2}-\mu_{2}\right)}^{2}}{2\sigma_{2}}\right),$ (64) where $\mu_{1}=0.1$, $\mu_{2}=0.2$, $\sigma_{1}=0.0025$, and $\sigma_{2}=0.005$, and the initial velocity is $\dot{u}_{0}=0$. A nonlinear parameter $\alpha=1$ is used in order to highlight the effect of the nonlinear interface condition; larger values of $\alpha$ lead to a more continuous solution across the interface since the sliding velocity $V$ will be lower. Snapshots of the displacement field at various times are shown in Figure 5 for the block size $N=136=17\times 8$ and SBP interior order $2p=6$. As can be seen in the figure, there is a discontinuity in the displacement across the interface as well as reflected waves. For the self-convergence study we run the simulation until time $t=3$ using $N_{r}=17\times 2^{r}$ with $r=1,2,3$.
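A Julia sketch of the material model (63) and the Gaussian initial data (64) used above (names illustrative):

```julia
# Sketch: variable material parameters (63) and initial displacement (64).
function material_params(x1, x2)
    θ = (π / 4) * (2 - x1) * (2 - x2)
    C11 = cos(θ)^2 + sin(θ)^2 / 2
    C12 = -cos(θ) * sin(θ) / 2
    C22 = sin(θ)^2 + cos(θ)^2 / 2
    return C11, C12, C22          # ρ = 1 throughout
end

u0(x1, x2; μ1=0.1, μ2=0.2, σ1=0.0025, σ2=0.005) =
    exp(-(x1 - μ1)^2 / (2σ1) - (x2 - μ2)^2 / (2σ2))
```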
The error is estimated by taking the difference between neighboring resolutions, and the rate is estimated by $\text{rate}=\log_{2}(\|\Delta_{1}\|_{H_{1}})-\log_{2}(\|\Delta_{2}\|_{H_{2}}),$ (65) where $\Delta_{r}$ is the difference between the solutions using $N_{r}$ and $N_{r+1}$ and $H_{r}$ indicates that the norm is taken with respect to the metrics defined by $N_{r}$. With this, we get an estimated convergence rate for this problem of $4.4$ using the SBP operators with interior accuracy $2p=6$.

Using the same material properties and initial condition, Figure 6 shows the dissipated energy when $\Gamma_{I}$ is taken to be a locked interface and a nonlinear interface with $\alpha=1$; energy is measured using the discrete energy norm (37). In both cases the energy decreases in time as the theory predicts. In the case of the locked interface the dissipation is purely numerical, and as the results show the dissipation decreases as the resolution increases. In the case of the nonlinear interface the amount of energy dissipated is larger since the continuous formulation supports energy dissipation on interface $\Gamma_{I}$.

(a) Locked interface (b) Nonlinear interface

Figure 6: Normalized dissipated energy $\frac{\mathcal{E}(0)-\mathcal{E}(t)}{\mathcal{E}(0)}$ for a locked and a nonlinear interface $\Gamma_{I}$ with resolutions $N=34$, $68$, and $136$; the energy is computed discretely using (37) and positive values indicate dissipation.

## 7 Concluding Remarks

We have developed a characteristic based method for handling boundary and interface conditions with SBP finite difference methods for the second order wave equation. The key idea of the method is the introduction of an additional unknown on the block boundaries which evolves in time and acts as local Dirichlet data for the block. The rate of change of the boundary unknown is defined in an upwind fashion that modifies the incoming characteristic variable, which is similar to the technique previously used to remove stiffness for the wave equation in first order form with nonlinear interfaces [10]. The main benefit of the scheme is that, when compared with the standard approach [28, 6], the scheme is non-stiff for all characteristic boundary conditions and a class of nonlinear interface conditions that can be written in characteristic form. One benefit of this approach is that it enables the use of a wider class of time stepping methods for earthquake rupture problems with nonlinear interfaces. The energy method was used to show that the proposed scheme was stable. Numerical experiments showed that the proposed scheme was non-stiff, confirmed the stability results, and also demonstrated the accuracy of the scheme.

## Appendix A Definition of Two-Dimensional SBP Operators

As an example of how to construct multidimensional SBP operators, we consider the two-dimensional SBP finite difference operators. We describe the operators on the reference block $\hat{B}=[0,1]\times[0,1]$, where faces $1$ and $2$ are the right and left faces with faces $3$ and $4$ being the top and bottom faces, respectively. For simplicity we let the domain $\hat{B}$ be discretized with an $(N+1)\times(N+1)$ grid of points with the grid nodes located at ${\left\\{{\bm{\xi}}\right\\}}_{kl}=(kh,lh)$ for $0\leq k,l\leq N$ with $h=1/N$.
The projection of $u$ onto the grid is denoted ${\bm{\tilde{u}}}$, where ${\left\\{{\bm{\tilde{u}}}\right\\}}_{kl}\approx u(kh,lh)$ and is stored as a vector with $\xi_{1}$ being the fastest index; see (28). With this, the volume norm matrix can be written as ${\bm{\tilde{H}}}={\bm{{H}}}\otimes{\bm{{H}}}.$ (66)

We define the face restriction operators as ${\bm{{\bar{L}}}}^{1}={\bm{{I}}}\otimes{\bm{e}}_{0}^{T},\qquad{\bm{{\bar{L}}}}^{2}={\bm{{I}}}\otimes{\bm{e}}_{N}^{T},\qquad{\bm{{\bar{L}}}}^{3}={\bm{e}}_{0}^{T}\otimes{\bm{{I}}},\qquad{\bm{{\bar{L}}}}^{4}={\bm{e}}_{N}^{T}\otimes{\bm{{I}}},$ (67) where ${\bm{{I}}}$ is the $(N+1)\times(N+1)$ identity matrix. More generally the restriction to a single grid line in the $\xi_{1}$ and $\xi_{2}$ directions, respectively, are ${\bm{{\bar{L}}}}^{l:}={\bm{e}}_{l}^{T}\otimes{\bm{{I}}},\qquad{\bm{{\bar{L}}}}^{:l}={\bm{{I}}}\otimes{\bm{e}}_{l}^{T}.$ (68)

In order to construct ${\bm{\tilde{A}}}_{ii}^{(C)}$, no summation over $i$, we must construct individual one-dimensional second derivative matrices for each grid line with varying coefficients $C$ and place them in the correct block; expanding a single second derivative matrix with the tensor product and the identity matrix only works in the constant coefficient case. To do this it is useful to define ${\bm{\tilde{C}}}$ as the projection of $C$ onto the grid, and denote the coefficients along the individual grid lines as ${\bm{{C}}}^{:l}=\text{diag}(C^{0l},\ldots,C^{Nl}),\qquad{\bm{{C}}}^{k:}=\text{diag}(C^{k0},\ldots,C^{kN}).$ (69) The second derivative operators can then be defined as the sum of the operators along each grid line $\displaystyle{\bm{\tilde{A}}}_{11}^{(C)}$ $\displaystyle=({\bm{{H}}}\otimes{\bm{{I}}})\left[\sum_{l=0}^{N}\left({\bm{{\bar{L}}}}^{:l}\right)^{T}{\bm{{A}}}_{11}^{\left(C^{:l}\right)}{\bm{{\bar{L}}}}^{:l}\right],$ (70a) $\displaystyle{\bm{\tilde{A}}}_{22}^{(C)}$ $\displaystyle=({\bm{{I}}}\otimes{\bm{{H}}})\left[\sum_{k=0}^{N}\left({\bm{{\bar{L}}}}^{k:}\right)^{T}{\bm{{A}}}_{11}^{\left(C^{k:}\right)}{\bm{{\bar{L}}}}^{k:}\right],$ (70b) and the mixed derivative operators through a tensor product $\displaystyle{\bm{\tilde{A}}}_{12}^{(C)}$ $\displaystyle=({\bm{{I}}}\otimes{\bm{{Q}}}^{T}){\bm{\tilde{C}}}({\bm{{Q}}}\otimes{\bm{{I}}}),$ (70c) $\displaystyle{\bm{\tilde{A}}}_{21}^{(C)}$ $\displaystyle=({\bm{{Q}}}^{T}\otimes{\bm{{I}}}){\bm{\tilde{C}}}({\bm{{I}}}\otimes{\bm{{Q}}}).$ (70d)

The boundary derivatives parallel to a face are given with the one-dimensional first derivative operators ${\bm{{D}}}_{1}$, $\displaystyle{\bm{{\bar{B}}}}^{1}_{2}$ $\displaystyle={\bm{e}}_{0}^{T}{\bm{{D}}}_{1}\otimes{\bm{{I}}},$ (71a) $\displaystyle{\bm{{\bar{B}}}}^{2}_{2}$ $\displaystyle={\bm{e}}_{N}^{T}{\bm{{D}}}_{1}\otimes{\bm{{I}}},$ (71b) $\displaystyle{\bm{\tilde{B}}}^{3}_{1}$ $\displaystyle={\bm{{I}}}\otimes{\bm{e}}_{0}^{T}{\bm{{D}}}_{1},$ (71c) $\displaystyle{\bm{\tilde{B}}}^{4}_{1}$ $\displaystyle={\bm{{I}}}\otimes{\bm{e}}_{N}^{T}{\bm{{D}}}_{1},$ (71d) and those perpendicular to the boundary using the boundary first derivative operators ${\bm{b}}_{0}$ and ${\bm{b}}_{N}$ from the second derivative operator: $\displaystyle{\bm{{\bar{B}}}}^{1}_{1}$ $\displaystyle={\bm{{I}}}\otimes{\bm{b}}_{0}^{T},$ (72a) $\displaystyle{\bm{{\bar{B}}}}^{2}_{1}$ $\displaystyle={\bm{{I}}}\otimes{\bm{b}}_{N}^{T},$ (72b) $\displaystyle{\bm{\tilde{B}}}^{3}_{2}$ $\displaystyle={\bm{b}}_{0}^{T}\otimes{\bm{{I}}},$ (72c) $\displaystyle{\bm{\tilde{B}}}^{4}_{2}$ $\displaystyle={\bm{b}}_{N}^{T}\otimes{\bm{{I}}}.$
(72d) ## Appendix B Proof of Lemma 1 Taking the time derivative of energy (9) and substituting in the governing equation (3a) gives $\begin{split}\dot{E}&=\int_{\Omega}\left(\dot{u}\partial_{i}C_{ij}\partial_{j}u+\left(\partial_{i}u\right)C_{ij}\left(\partial_{j}\dot{u}\right)\right)=\int_{\partial\Omega}\dot{u}\tau+\int_{\Gamma_{I}}\left(\dot{u}^{-}\tau^{-}+\dot{u}^{+}\tau^{+}\right),\end{split}$ (73) where the last equality follows from the divergence theorem and applying the definition of traction (4). Starting with the boundary integral, we apply Dirichlet (3b) and characteristic (3c’) boundary conditions and simplify to get $\int_{\partial\Omega}\dot{u}\tau=\int_{\partial\Omega_{D}}g_{D}\tau+\int_{\partial\Omega_{C}}\left(\frac{\left(R^{2}-1\right)w^{2}+\left(2Rw+g_{C}\right)g_{C}}{4Z}\right).$ (74) With zero boundary data, $g_{D}=g_{C}=0$, the boundary integral becomes $\int_{\partial\Omega}\dot{u}\tau=\int_{\partial\Omega_{C}}\frac{\left(R^{2}-1\right)w^{2}}{4Z}\leq 0,$ (75) where the inequality follows from the restriction that $-1\leq R\leq 1$. Thus boundary conditions (3b) and (3c’) leads to a non-increasing energy. Now considering the interface integral using that $\tau^{+}=-\tau^{-}$ and applying interface condition (3d’) gives $\int_{\Gamma_{I}}\left(\dot{u}^{-}\tau^{-}+\dot{u}^{+}\tau^{+}\right)=\int_{\Gamma_{I}}\left(\dot{u}^{-}-\dot{u}^{+}\right)\tau^{-}=-\int_{\Gamma_{I}}V^{-}F(V^{-})\leq 0.$ (76) Thus, the interface leads to a non-increasing energy since $VF(V)\geq 0$ and the lemma follows. ## Appendix C Proof of Lemma 2 To show that energy (38) is positive we need the following definition from Mattsson [14, Definition 2.4]: ${\bm{\tilde{A}}}_{ij}^{(c)}={\bm{\tilde{D}}}_{i}^{T}{\bm{\tilde{C}}}{\bm{\tilde{H}}}{\bm{\tilde{D}}}_{j}+{\bm{\tilde{R}}}_{ij}^{(c)}.$ (77) The remainder matrix ${\bm{\tilde{R}}}_{ij}^{(c)}$ is symmetric positive semidefinite if the coefficient $c$ is always positive; the remainder matrix is zero when $i\neq j$. The remainder matrix can be further decomposed using a result from [1, Lemma 1] as ${\bm{\tilde{R}}}_{ii}^{(c)}={\bm{\tilde{S}}}_{ii}^{(c)}+\sum_{f=2i-1}^{2i}\zeta^{f}{\left({\bm{{\bar{\Delta}}}}^{f}_{i}\right)}^{T}{\bm{{H}}}^{f}{\bm{{C}}}^{f,\min}{\bm{{\bar{\Delta}}}}^{f}_{i}\quad\text{(no sum over $i$)}.$ (78) Here the matrix ${\bm{\tilde{S}}}_{ii}^{(c)}$ (no sum over $i$) is a positive semidefinite and the matrix ${\bm{{\bar{\Delta}}}}_{i}^{f}={\bm{{\bar{B}}}}_{i}^{f}-{\bm{{\bar{D}}}}_{i}^{f}$ is the difference between the boundary derivative matrix from ${\bm{\tilde{D}}}_{ii}$ (no summation over $i$) and the first derivative matrix ${\bm{\tilde{D}}}_{i}$ at the boundary. Each element of the diagonal matrix ${\bm{{C}}}^{f,min}$ is the minimum value of $c$ in the $m_{b}$ points orthogonal to the boundary where $m_{b}$ depends on the order of accuracy of the SBP operator. The positive constant $\zeta^{f}=h^{f}_{\bot}\bar{\zeta}$ where $h^{f}_{\bot}$ is the grid spacing orthogonal to the face and $\bar{\zeta}$ is a constant which depends on the SBP operator. The $(m_{b},\bar{\zeta})$ values used for the operators in this paper are given in Table 2; see Almquist and Dunham [1, Table 1]. SBP interior order $2p$ | $\bar{\theta}$ | $\bar{\zeta}$ | $m_{b}$ ---|---|---|--- $2$ | $1/2$ | $1.0$ | $2$ $4$ | $17/48$ | $0.5776$ | $4$ $6$ | $13649/43200$ | $0.3697$ | $7$ Table 2: Borrowing parameters and SBP norm ${\bm{{H}}}$ matrix corner value for used operators [1, Table 1]. 
Let $\mathbbm{k}$ be a multi-index denoting a given grid point so that ${\left\\{{\bm{v}}\right\\}}_{\mathbbm{k}}$ denotes the value of the grid function ${\bm{v}}$ at grid point $\mathbbm{k}$. Similarly, ${\left\\{{\bm{\tilde{H}}}\right\\}}_{\mathbbm{k}\mathbbm{k}}$ denotes the value of a diagonal matrix ${\bm{\tilde{H}}}$ associated with $\mathbbm{k}$ and $\sum_{\mathbbm{k}=(0,\dots,0)}^{(N,\dots,N)}$ be the sum over all grid points. With this, we can then show the following inequality $\begin{split}{\bm{\tilde{v}}}_{i}^{T}{\bm{\tilde{C}}}_{ij}{\bm{\tilde{H}}}{\bm{\tilde{v}}}_{j}&=\sum_{\mathbbm{k}=(0,\dots,0)}^{(N,\dots,N)}{\left\\{{\bm{\tilde{v}}}_{i}\right\\}}_{\mathbbm{k}}{\left\\{{\bm{\tilde{C}}}_{ij}\right\\}}_{\mathbbm{k}\mathbbm{k}}{\left\\{{\bm{\tilde{H}}}\right\\}}_{\mathbbm{k}\mathbbm{k}}{\left\\{{\bm{\tilde{v}}}_{j}\right\\}}_{\mathbbm{k}}\\\ &=\frac{1}{d}\sum_{m=1}^{d}\sum_{\mathbbm{k}=(0,\dots,0)}^{(N,\dots,N)}{\left\\{{\bm{\tilde{v}}}_{i}\right\\}}_{\mathbbm{k}}{\left\\{{\bm{\tilde{C}}}_{ij}\right\\}}_{\mathbbm{k}\mathbbm{k}}{\left\\{{\bm{\tilde{H}}}\right\\}}_{\mathbbm{k}\mathbbm{k}}{\left\\{{\bm{\tilde{v}}}_{j}\right\\}}_{\mathbbm{k}}\\\ &\geq\frac{1}{d}\sum_{m=1}^{d}\sum_{f=2m-1}^{2m}\sum_{\mathbbm{k}\in f^{\mathbbm{k}}}{\left\\{{\bm{\tilde{v}}}_{i}\right\\}}_{\mathbbm{k}}{\left\\{{\bm{\tilde{C}}}_{ij}\right\\}}_{\mathbbm{k}\mathbbm{k}}{\left\\{{\bm{\tilde{H}}}\right\\}}_{\mathbbm{k}\mathbbm{k}\mathbbm{k}}{\left\\{{\bm{\tilde{v}}}_{j}\right\\}}_{\mathbbm{k}}\\\ &=\frac{1}{d}\sum_{f=1}^{2d}\theta^{f}{\left({\bm{{\bar{L}}}}^{f}{\bm{\tilde{v}}}_{i}^{f}\right)}^{T}{\bm{{C}}}_{ij}^{f}{\bm{{H}}}^{f}{\bm{{\bar{L}}}}^{f}{\bm{\tilde{v}}}_{j}^{f}.\end{split}$ (79) Here $\theta^{f}$ is the value of the ${\left\\{{\bm{H}}\right\\}}_{00}$ where ${\bm{H}}$ is the norm matrix orthogonal to the face. This can also be written as $\theta^{f}=h^{f}_{\bot}\bar{\theta}$ where $\bar{\theta}$ depends on the SBP operator; see Table 2. The set $f^{\mathbbm{k}}$ is the set of grid points along face $f$. We have also used positive definiteness of the matrix defined by $C_{ij}$. This inequality gives a lower bound for the volume solution in terms of fields on the face; the factor $1/d$ is needed to account for the multiple counting of points on the faces corners (and edges for $d=3$) of the blocks. We now turn to considering the discrete block energy (38). The first term satisfies $\frac{1}{2}\dot{{\bm{\tilde{u}}}}^{T}{\bm{\tilde{H}}}{\bm{\tilde{\rho}}}\dot{{\bm{\tilde{u}}}}\geq 0,$ (80) because it is in quadratic form and ${\bm{\tilde{H}}}$ and ${\bm{\tilde{\rho}}}$ are diagonal, positive matrices. The remaining terms will be shown to combine in a manner that is also positive semidefinite. 
Using relations (77), (78), and (79) we have that $\begin{split}{\bm{\tilde{u}}}^{T}{\bm{\tilde{A}}}_{ij}^{(C_{ij})}{\bm{\tilde{u}}}=\;&{\bm{\tilde{u}}}^{T}{\bm{\tilde{D}}}_{i}^{T}{\bm{\tilde{C}}}_{ij}{\bm{\tilde{H}}}{\bm{\tilde{D}}}_{j}{\bm{\tilde{u}}}\\\ &+\sum_{k=1}^{d}\left({\bm{\tilde{u}}}^{T}{\bm{\tilde{S}}}_{kk}^{(C_{kk})}{\bm{\tilde{u}}}+\sum_{f=2k-1}^{2k}\zeta^{f}{\left({\bm{{\bar{\Delta}}}}^{f}_{k}{\bm{\tilde{u}}}\right)}^{T}{\bm{{H}}}^{f}{\bm{{C}}}_{kk}^{f,\min}{\bm{{\bar{\Delta}}}}^{f}_{k}{\bm{\tilde{u}}}\right)\\\ \geq\;&\sum_{f=1}^{2d}\frac{\theta^{f}}{d}{\left({\bm{{\bar{D}}}}_{i}^{f}{\bm{\tilde{u}}}\right)}^{T}{\bm{{C}}}_{ij}^{f}{\bm{{H}}}^{f}{\bm{{\bar{D}}}}_{j}^{f}{\bm{\tilde{u}}}\\\ &+\sum_{k=1}^{d}\left(\sum_{f=2k-1}^{2k}\zeta^{f}{\left({\bm{{\bar{\Delta}}}}^{f}_{k}{\bm{\tilde{u}}}\right)}^{T}{\bm{{H}}}^{f}{\bm{{C}}}_{kk}^{f,\min}{\bm{{\bar{\Delta}}}}^{f}_{k}{\bm{\tilde{u}}}\right).\end{split}$ (81) We now considering the face term of the discrete block energy (38). Defining ${\bm{\delta}}^{f}_{u}={\bm{u}}^{*f}-{\bm{u}}^{f}$ and using the definition of ${\bm{\hat{\tau}}}^{f}$ and ${\bm{\hat{T}}}^{f}$ in (40) gives $\begin{split}{\left({\bm{\hat{\tau}}}^{f}\right)}^{T}{\bm{{X}}}^{f}{\bm{{H}}}^{f}{\left({\bm{\hat{\tau}}}^{f}\right)}-{\left({\bm{\hat{T}}}\right)}^{T}{\bm{{X}}}^{f}{\bm{{H}}}^{f}{\left({\bm{\hat{T}}}\right)}&=2{\left({\bm{\hat{T}}}\right)}^{T}{\bm{{H}}}^{f}{\bm{\delta}}^{f}_{u}+{\left({\bm{\delta}}^{f}_{u}\right)}^{T}{\left({\bm{{X}}}^{f}\right)}^{-1}{\bm{{H}}}^{f}{\bm{\delta}}^{f}_{u}.\end{split}$ (82) It is useful to note that ${\bm{\hat{T}}}$ can be rewritten using ${\bm{{\bar{\Delta}}}}^{f}_{k}$ as ${\bm{\hat{T}}}^{f}={\bm{{\hat{N}}}}^{f}_{i}{\bm{{C}}}_{ij}^{f}{\bm{{\bar{B}}}}^{f}_{j}{\bm{\tilde{u}}}={\bm{{\hat{N}}}}^{f}_{i}{\bm{{C}}}_{ij}^{f}{\bm{{\bar{D}}}}^{f}_{j}{\bm{\tilde{u}}}+{\bm{{\hat{N}}}}^{f}_{k}{\bm{{C}}}_{kk}^{f}{\bm{{\bar{\Delta}}}}^{f}_{k}{\bm{\tilde{u}}},\quad k=\left\lceil\frac{f}{2}\right\rceil;$ (83) this follow because only when $f\in(2j,2j-1)$ is ${\bm{{\bar{B}}}}^{f}_{j}\neq{\bm{{\bar{D}}}}^{f}_{j}$. Using this then gives $\begin{split}&{\left({\bm{\hat{\tau}}}^{f}\right)}^{T}{\bm{{X}}}^{f}{\bm{{H}}}^{f}{\left({\bm{\hat{\tau}}}^{f}\right)}-{\left({\bm{\hat{T}}}\right)}^{T}{\bm{{X}}}^{f}{\bm{{H}}}^{f}{\left({\bm{\hat{T}}}\right)}\\\ &\quad=2{\left({\bm{{\bar{D}}}}^{f}_{j}{\bm{\tilde{u}}}\right)}^{T}{\bm{{H}}}^{f}{\bm{{\hat{N}}}}_{i}^{f}{\bm{{C}}}_{ij}^{f}{\bm{\delta}}^{f}_{u}+{\left({\bm{\delta}}^{f}_{u}\right)}^{T}{\bm{{\hat{N}}}}^{f}_{i}{\bm{{C}}}_{ij}^{f}{\bm{{\hat{N}}}}^{f}_{j}{\bm{{\Gamma}}}^{f}{\bm{{H}}}^{f}{\bm{\delta}}^{f}_{u}\\\ &\qquad+2{\left({\bm{{\bar{\Delta}}}}^{f}_{k}{\bm{\tilde{u}}}\right)}^{T}{\bm{{H}}}^{f}{\bm{{\hat{N}}}}_{k}^{f}{\bm{{C}}}_{kk}^{f}{\bm{\delta}}^{f}_{u},\quad k=\left\lceil\frac{f}{2}\right\rceil.\end{split}$ (84) Here we have also used the definition of ${\bm{{X}}}^{f}$ in (39). 
To consider the remaining terms of block energy (38), we use (81) and (84) to write $\begin{split}&{\bm{\tilde{u}}}^{T}{\bm{\tilde{A}}}_{ij}^{(C_{ij})}{\bm{\tilde{u}}}+\sum_{f=1}^{2d}\left({\left({\bm{\hat{\tau}}}^{f}\right)}^{T}{\bm{{X}}}^{f}{\bm{{H}}}^{f}{\left({\bm{\hat{\tau}}}^{f}\right)}-{\left({\bm{\hat{T}}}\right)}^{T}{\bm{{X}}}^{f}{\bm{{H}}}^{f}{\left({\bm{\hat{T}}}\right)}\right)\\\ &\quad\geq\sum_{f=1}^{2d}\left(\frac{\theta^{f}}{d}{\left({\bm{{\bar{D}}}}_{i}^{f}{\bm{\tilde{u}}}\right)}^{T}{\bm{{C}}}_{ij}^{f}{\bm{{H}}}^{f}{\bm{{\bar{D}}}}_{j}^{f}{\bm{\tilde{u}}}+2{\left({\bm{{\bar{D}}}}^{f}_{j}{\bm{\tilde{u}}}\right)}^{T}{\bm{{C}}}_{ij}^{f}{\bm{{H}}}^{f}{\bm{{\hat{N}}}}_{i}^{f}{\bm{\hat{\delta}}}_{u}^{f}\right)\\\ &\quad\quad+\sum_{k=1}^{d}\sum_{f=2k-1}^{2k}\left(\zeta^{f}{\left({\bm{{\bar{\Delta}}}}^{f}_{k}{\bm{\tilde{u}}}\right)}^{T}{\bm{{C}}}_{kk}^{f,\min}{\bm{{H}}}^{f}{\bm{{\bar{\Delta}}}}^{f}_{k}{\bm{\tilde{u}}}+2{\left({\bm{{\bar{\Delta}}}}^{f}_{k}{\bm{\tilde{u}}}\right)}^{T}{\bm{{C}}}_{kk}^{f}{\bm{{H}}}^{f}{\bm{{\hat{N}}}}_{k}^{f}{\bm{\hat{\delta}}}^{f}_{u}\right)\\\ &\quad\quad+\sum_{f=1}^{2d}{\left({\bm{\hat{\delta}}}^{f}_{u}\right)}^{T}{\bm{{\hat{N}}}}^{f}_{i}{\bm{{C}}}_{ij}^{f}{\bm{{\Gamma}}}^{f}{\bm{{H}}}^{f}{\bm{{\hat{N}}}}^{f}_{j}{\bm{\hat{\delta}}}^{f}_{u}.\end{split}$ (85) If we choose $\displaystyle{\bm{{\Gamma}}}^{f}$ $\displaystyle\geq\frac{d}{\theta^{f}}{\bm{{I}}}+\frac{1}{\zeta^{f}}{\bm{{P}}}^{f},$ (86a) $\displaystyle{\bm{{P}}}^{f}$ $\displaystyle={\bm{{C}}}_{kk}^{f}{\left({\bm{{C}}}_{kk}^{f,\min}\right)}^{-1},~{}k=\left\lceil\frac{f}{2}\right\rceil,$ (86b) then we have that $\begin{split}&\sum_{f=1}^{2d}{\left({\bm{\hat{\delta}}}^{f}_{u}\right)}^{T}{\bm{{\hat{N}}}}^{f}_{i}{\bm{{C}}}_{ij}^{f}{\bm{{\Gamma}}}^{f}{\bm{{H}}}^{f}{\bm{{\hat{N}}}}^{f}_{j}{\bm{\hat{\delta}}}^{f}_{u}\\\ &\quad\geq\sum_{f=1}^{2d}\frac{d}{\theta^{f}}{\left({\bm{\hat{\delta}}}^{f}_{u}\right)}^{T}{\bm{{\hat{N}}}}^{f}_{i}{\bm{{C}}}_{ij}^{f}{\bm{{H}}}^{f}{\bm{{\hat{N}}}}^{f}_{j}{\bm{\hat{\delta}}}^{f}_{u}+\sum_{f=1}^{2d}\frac{1}{\zeta^{f}}{\left({\bm{\hat{\delta}}}^{f}_{u}\right)}^{T}{\bm{{\hat{N}}}}^{f}_{i}{\bm{{C}}}_{ij}^{f}{\bm{{P}}}^{f}{\bm{{H}}}^{f}{\bm{{\hat{N}}}}^{f}_{j}{\bm{\hat{\delta}}}^{f}_{u}\\\ &\quad=\sum_{f=1}^{2d}\frac{d}{\theta^{f}}{\left({\bm{\hat{\delta}}}^{f}_{u}\right)}^{T}{\bm{{\hat{N}}}}^{f}_{i}{\bm{{C}}}_{ij}^{f}{\bm{{H}}}^{f}{\bm{{\hat{N}}}}^{f}_{j}{\bm{\hat{\delta}}}^{f}_{u}+\sum_{k=1}^{d}\sum_{f=2k-1}^{2k}\frac{1}{\zeta^{f}}{\left({\bm{\hat{\delta}}}^{f}_{u}\right)}^{T}{\bm{{C}}}_{kk}^{f}{\bm{{P}}}^{f}{\bm{{H}}}^{f}{\bm{\hat{\delta}}}^{f}_{u}\\\ &\quad=\sum_{f=1}^{2d}\frac{d}{\theta^{f}}{\left({\bm{\hat{\delta}}}^{f}_{u}\right)}^{T}{\bm{{\hat{N}}}}^{f}_{i}{\bm{{C}}}_{ij}^{f}{\bm{{H}}}^{f}{\bm{{\hat{N}}}}^{f}_{j}{\bm{\hat{\delta}}}^{f}_{u}+\sum_{k=1}^{d}\sum_{f=2k-1}^{2k}\frac{1}{\zeta^{f}}{\left({\bm{{P}}}^{f}{\bm{\hat{\delta}}}^{f}_{u}\right)}^{T}{\bm{{C}}}_{kk}^{f,\min}{\bm{{H}}}^{f}{\bm{{P}}}^{f}{\bm{\hat{\delta}}}^{f}_{u},\end{split}$ (87) where we have used that ${\bm{{\hat{N}}}}^{f}_{i}{\bm{{C}}}_{ij}^{f}{\bm{{\hat{N}}}}^{f}_{j}={\bm{{C}}}_{kk}^{f}$ with $k=\left\lceil\frac{f}{2}\right\rceil$ (no summation over $k$). Though a similar transformation could be used on the first summation it is not needed and complicates the analysis that follows. 
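In practice the penalty is set by (86) (Section 6 uses the equality condition of (86a)). The Julia sketch below is a pointwise, scalar version, where $\bar{\theta}$ and $\bar{\zeta}$ are the borrowing parameters of Table 2, $h_{\bot}$ is the grid spacing orthogonal to the face, and Ckk, Ckkmin stand for the face value of the coefficient and its minimum over the $m_{b}$ points orthogonal to the boundary; reducing these matrices to scalars per node is an illustrative simplification, not the reference implementation.

```julia
# Sketch: pointwise penalty parameter at the equality condition of (86a).
# d is the spatial dimension; θf = h⊥ θ̄ and ζf = h⊥ ζ̄ as in Appendix C.
function penalty_at_stability_limit(d, hperp, θbar, ζbar, Ckk, Ckkmin)
    θf = hperp * θbar
    ζf = hperp * ζbar
    P  = Ckk / Ckkmin            # pointwise analogue of (86b)
    return d / θf + P / ζf       # Γ at the stability limit
end
```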
Returning to (85) then gives with (87) $\begin{split}&{\bm{\tilde{u}}}^{T}{\bm{\tilde{A}}}_{ij}^{(C_{ij})}{\bm{\tilde{u}}}+\sum_{f=1}^{2d}\left({\left({\bm{\hat{\tau}}}^{f}\right)}^{T}{\bm{{X}}}^{f}{\bm{{H}}}^{f}{\left({\bm{\hat{\tau}}}^{f}\right)}-{\left({\bm{\hat{T}}}\right)}^{T}{\bm{{X}}}^{f}{\bm{{H}}}^{f}{\left({\bm{\hat{T}}}\right)}\right)\\\ &\quad\geq\sum_{f=1}^{2d}\left(\frac{\theta^{f}}{d}{\left({\bm{{\bar{D}}}}_{i}^{f}{\bm{\tilde{u}}}\right)}^{T}{\bm{{C}}}_{ij}^{f}{\bm{{H}}}^{f}{\bm{{\bar{D}}}}_{j}^{f}{\bm{\tilde{u}}}+2{\left({\bm{{\bar{D}}}}^{f}_{j}{\bm{\tilde{u}}}\right)}^{T}{\bm{{C}}}_{ij}^{f}{\bm{{H}}}^{f}{\bm{{\hat{N}}}}_{i}^{f}{\bm{\hat{\delta}}}_{u}^{f}\right)\\\ &\qquad+\sum_{f=1}^{2d}\frac{d}{\theta^{f}}{\left({\bm{\hat{\delta}}}^{f}_{u}\right)}^{T}{\bm{{\hat{N}}}}^{f}_{i}{\bm{{C}}}_{ij}^{f}{\bm{{H}}}^{f}{\bm{{\hat{N}}}}^{f}_{j}{\bm{\hat{\delta}}}^{f}_{u}\\\ &\quad\quad+\sum_{k=1}^{d}\sum_{f=2k-1}^{2k}\left(\zeta^{f}{\left({\bm{{\bar{\Delta}}}}^{f}_{k}{\bm{\tilde{u}}}\right)}^{T}{\bm{{C}}}_{kk}^{f,\min}{\bm{{H}}}^{f}{\bm{{\bar{\Delta}}}}^{f}_{k}{\bm{\tilde{u}}}+2{\left({\bm{{\bar{\Delta}}}}^{f}_{k}{\bm{\tilde{u}}}\right)}^{T}{\bm{{C}}}_{kk}^{f}{\bm{{H}}}^{f}{\bm{{\hat{N}}}}_{k}^{f}{\bm{\hat{\delta}}}^{f}_{u}\right)\\\ &\quad\quad+\sum_{k=1}^{d}\sum_{f=2k-1}^{2k}\frac{1}{\zeta^{f}}{\left({\bm{{P}}}^{f}{\bm{\hat{\delta}}}^{f}_{u}\right)}^{T}{\bm{{C}}}_{kk}^{f,\min}{\bm{{H}}}^{f}{\bm{{P}}}^{f}{\bm{\hat{\delta}}}^{f}_{u}\\\ &\quad=\sum_{f=1}^{2d}\frac{\theta^{f}}{d}{\left({\bm{{\bar{D}}}}_{i}^{f}{\bm{\tilde{u}}}+\frac{d}{\theta^{f}}{\bm{{\hat{N}}}}_{i}^{f}{\bm{\hat{\delta}}}_{u}^{f}\right)}^{T}{\bm{{C}}}_{ij}^{f}{\bm{{H}}}^{f}{\left({\bm{{\bar{D}}}}_{j}^{f}{\bm{\tilde{u}}}+\frac{d}{\theta^{f}}{\bm{{\hat{N}}}}_{j}^{f}{\bm{\hat{\delta}}}_{u}^{f}\right)}\\\ &\quad\quad+\sum_{k=1}^{d}\sum_{f=2k-1}^{2k}\zeta^{f}{\left({\bm{{\bar{\Delta}}}}^{f}_{k}{\bm{\tilde{u}}}+\frac{1}{\zeta^{f}}{\bm{{\hat{N}}}}^{f}_{k}{\bm{{P}}}^{f}{\bm{\hat{\delta}}}_{u}^{f}\right)}^{T}{\bm{{C}}}_{kk}^{f,\min}{\bm{{H}}}^{f}{\left({\bm{{\bar{\Delta}}}}^{f}_{k}{\bm{\tilde{u}}}+\frac{1}{\zeta^{f}}{\bm{{\hat{N}}}}^{f}_{k}{\bm{{P}}}^{f}{\bm{\hat{\delta}}}_{u}^{f}\right)},\end{split}$ (88) where we have used that ${\bm{{C}}}_{kk}^{f}={\bm{{\hat{N}}}}^{f}_{k}{\bm{{C}}}_{kk}^{f}{\bm{{\hat{N}}}}_{k}^{t}$ (no summation over $k$) and ${\bm{{C}}}_{kk}^{f,\min}{\bm{{P}}}^{f}={\bm{{C}}}_{kk}^{f}$ (no summation over $k$). Since this expression is in quadratic form, it is non-negative and the when combine with (80) shows that the block energy (38) is non-negative which completes the proof. ## Appendix D Standard Dirichlet, Neumann, Characteristic, Locked, and Nonlinear Interface Treatment The standard approach for SBP-SAT for Dirichlet (3b), and characteristic boundaries (3c) as well as locked and nonlinear interfaces from Virta and Mattsson [28] and Duru et al. [6] are presented in the notation of this paper; Neumann boundary treatment is the same as the characteristic boundary treat with $R=1$. 
### D.1 Dirichlet Boundary Conditions

When block face $f$ is on a Dirichlet boundary (15b) then the numerical fluxes are chosen to be $\displaystyle{\bm{u}}^{*f}$ $\displaystyle={\bm{g}}_{D},$ (89a) $\displaystyle{\bm{\hat{\tau}}}^{*f}$ $\displaystyle={\bm{\hat{\tau}}}^{f}.$ (89b) Using these numerical fluxes, the face energy rate of change (43) is $\begin{split}\dot{\mathcal{E}}^{f}=&\;{\left({\bm{\hat{\tau}}}^{f}\right)}^{T}{\bm{{H}}}^{f}\dot{{\bm{u}}}^{f}+{\left({\bm{\hat{\tau}}}^{f}\right)}^{T}{\bm{{H}}}^{f}{\left(\dot{{\bm{g}}}^{f}_{D}-\dot{{\bm{u}}}^{f}\right)}={\left({\bm{\hat{\tau}}}^{f}\right)}^{T}{\bm{{H}}}^{f}\dot{{\bm{g}}}^{f}_{D},\end{split}$ (90) which with $g_{D}=0$ gives $\dot{\mathcal{E}}^{f}=0$ and does not lead to energy growth.

### D.2 Characteristic (and Neumann) Boundary Conditions

In order to define the standard treatment of characteristic boundary conditions (3c), it is useful to solve (3c) for $\tau$: $\tau=-\alpha\dot{u}+\beta g_{C},$ (91) with $\alpha=Z(1-R)/(R+1)\geq 0$ and $\beta=1/(R+1)$. We note again that the Neumann boundary condition is attained when $R=1$, in which case $\alpha=0$ and $\beta=1/2$. With this, if block face $f$ is on a characteristic boundary then the numerical fluxes are chosen to be $\displaystyle{\bm{u}}^{*f}$ $\displaystyle={\bm{u}}^{f},$ (92a) $\displaystyle{\bm{\hat{\tau}}}^{*f}$ $\displaystyle={\bm{{S}}}_{J}^{f}\left(-{\bm{{\alpha}}}^{f}\dot{{\bm{u}}}^{f}+{\bm{{\beta}}}^{f}{\bm{g}}_{C}\right)$ (92b) where the parameters ${\bm{{\alpha}}}$ and ${\bm{{\beta}}}$ are diagonal matrices of $\alpha$ and $\beta$ evaluated at each point on face $f$. Using these numerical fluxes in (43) gives $\begin{split}\dot{\mathcal{E}}^{f}=&\;-{\left(\dot{{\bm{u}}}^{f}\right)}^{T}{\bm{{\alpha}}}{\bm{{S}}}_{J}^{f}{\bm{{H}}}^{f}\dot{{\bm{u}}}^{f}+{\left({\bm{g}}^{f}_{C}\right)}^{T}{\bm{{\beta}}}{\bm{{S}}}_{J}^{f}{\bm{{H}}}^{f}\dot{{\bm{u}}}^{f}.\end{split}$ (93) With $g_{C}=0$ we then have that $\dot{\mathcal{E}}^{f}\leq 0$ and there is no energy growth due to the characteristic boundary treatment; equality is obtained in the Neumann case.

### D.3 Locked Interface

For locked interfaces (e.g., interfaces between purely computational blocks that have been introduced either to mesh a material interface or as part of the mesh generation), continuity of displacement and traction must be enforced. That is, across the interface it is required that $\begin{split}u^{-}&=u^{+},\\\ n_{i}^{-}C_{ij}^{-}\partial_{j}u^{-}&=-n_{i}^{+}C_{ij}^{+}\partial_{j}u^{+}.\end{split}$ (94) Here the superscript $\pm$ denotes the value on either side of the interface, with the unit normal $\bm{n}^{\pm}$ taken to be outward to each side of the interface, i.e., $\bm{n}^{-}=-\bm{n}^{+}$. The standard approach to enforcing this is to choose the numerical flux to be the average of the values on the two sides of the interface, $\begin{split}{\bm{u}}^{*f^{-}}&=\frac{1}{2}\left({\bm{u}}^{f^{-}}+{\bm{u}}^{f^{+}}\right),\\\ {\bm{\hat{\tau}}}^{*f^{-}}&=\frac{1}{2}\left({\bm{\hat{\tau}}}^{f^{-}}-{\bm{\hat{\tau}}}^{f^{+}}\right);\end{split}$ (95) the minus sign in ${\bm{\hat{\tau}}}^{*f^{-}}$ is due to the unit normals being equal and opposite. Here the two blocks connected across the interface are $B^{\pm}$ through faces $f^{\pm}$.
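For comparison with the characteristic treatment of Section 5.2, the standard locked-interface fluxes (95) amount to simple averaging; a minimal per-node Julia sketch (illustrative names, scalar quantities) follows.

```julia
# Sketch: standard (non-characteristic) locked-interface fluxes (95) at one
# node, evaluated for the minus side.  um, taum and up, taup are the grid
# displacement and traction on the two sides; the sign on the traction
# average reflects the opposite outward unit normals.
function standard_locked_fluxes(um, taum, up, taup)
    ustar_m   = (um + up) / 2
    taustar_m = (taum - taup) / 2
    return ustar_m, taustar_m
end
```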
The face energy rate of change (43) for locked interfaces is then $\begin{split}\dot{\mathcal{E}}^{f^{\pm}}=&\;\frac{1}{2}{\left({\bm{\hat{\tau}}}^{f^{\pm}}-{\bm{\hat{\tau}}}^{f^{\mp}}\right)}^{T}{\bm{{H}}}^{f}\dot{{\bm{u}}}^{f^{\pm}}+\frac{1}{2}{\left({\bm{\hat{\tau}}}^{f^{\pm}}\right)}^{T}{\bm{{H}}}^{f}{\left(\dot{{\bm{u}}}^{f^{\mp}}-\dot{{\bm{u}}}^{f^{\pm}}\right)}\\\ =&\;-\frac{1}{2}{\left({\bm{\hat{\tau}}}^{f^{\mp}}\right)}^{T}{\bm{{H}}}^{f}\dot{{\bm{u}}}^{f^{\pm}}+\frac{1}{2}{\left({\bm{\hat{\tau}}}^{f^{\pm}}\right)}^{T}{\bm{{H}}}^{f}\dot{{\bm{u}}}^{f^{\mp}}.\end{split}$ (96) Adding the two sides of the interface together gives $\dot{\mathcal{E}}^{f}=\dot{\mathcal{E}}^{f^{+}}+\dot{\mathcal{E}}^{f^{-}}=0,$ (97) and energy stability results. ### D.4 Nonlinear Interface Condition The approach Duru et al. [6] for nonlinear interfaces is to define the sliding velocity $V^{\pm f}$ directly from the particle velocities on the grid and then the traction $\tau^{f}$ is defined directly from the friction law so the numerical fluxes are $\begin{split}{\bm{u}}^{*f^{\pm}}&={\bm{u}}^{f^{\pm}},\\\ {\bm{\hat{\tau}}}^{*f^{\pm}}&={\bm{{S}}}_{J}^{f}F\left({\bm{V}}^{f^{\pm}}\right),~{}{\bm{V}}^{f^{\pm}}=\left(\dot{{\bm{u}}}^{f^{\mp}}-\dot{{\bm{u}}}^{f^{\pm}}\right).\end{split}$ (98) The face energy rate of change (43) for a nonlinear interface is then $\begin{split}\dot{\mathcal{E}}^{f^{\pm}}=&\;{\left(F\left({\bm{V}}^{f^{\pm}}\right)\right)}^{T}{\bm{{S}}}_{J}^{f}{\bm{{H}}}^{f}\dot{{\bm{u}}}^{f^{\pm}}.\end{split}$ (99) Adding the two sides of the interface together gives $\begin{split}\dot{\mathcal{E}}^{f}=\dot{\mathcal{E}}^{f^{+}}+\dot{\mathcal{E}}^{f^{-}}&={\left(F\left({\bm{V}}^{f^{+}}\right)\right)}^{T}{\bm{{S}}}_{J}^{f}{\bm{{H}}}^{f}\dot{{\bm{u}}}^{f^{+}}+{\left(F\left({\bm{V}}^{f^{-}}\right)\right)}^{T}{\bm{{S}}}_{J}^{f}{\bm{{H}}}^{f}\dot{{\bm{u}}}^{f^{-}}\\\ &={\left(F\left({\bm{V}}^{f^{+}}\right)\right)}^{T}{\bm{{S}}}_{J}^{f}{\bm{{H}}}^{f}\dot{{\bm{u}}}^{f^{+}}-{\left(F\left({\bm{V}}^{f^{+}}\right)\right)}^{T}{\bm{{S}}}_{J}^{f}{\bm{{H}}}^{f}\dot{{\bm{u}}}^{f^{-}}\\\ &=-{\left(F\left({\bm{V}}^{f^{+}}\right)\right)}^{T}{\bm{{S}}}_{J}^{f}{\bm{{H}}}^{f}{\bm{V}}^{f^{+}}\\\ &\leq 0,\end{split}$ (100) where we have used that ${\bm{V}}^{f^{-}}=-{\bm{V}}^{f^{+}}$ and the fact that $VF(V)\geq 0$. ## Appendix E Friction Law Root Finding Problem In general, evaluating $\mathcal{Q}^{\pm}$ for a nonlinear friction law $\tau^{\pm}=F\left(V^{\pm}\right)$ requires solving a nonlinear root finding problem. In particular, using the characteristic variables $w^{\pm}$ a root finding problem for $V^{\pm}$ is solved after which $\mathcal{Q}^{\pm}$ can be determined. Recall that force balance, $\tau^{-}=-\tau^{+}$, and the fact that $V^{-}=-V^{+}$ implies that $\tau^{-}=-F\left(V^{+}\right)$. 
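As a concrete illustration of the standard nonlinear treatment (98), the sketch below evaluates the flux directly from the grid velocities, using for illustration the $\operatorname{arcsinh}$ law of Section 6.2 without forcing data as the friction law $F$; this direct dependence on the current grid velocities is the source of the stiffness discussed earlier, whereas the characteristic treatment instead requires the root-finding problem formalized in the remainder of this appendix. Names are illustrative.

```julia
# Sketch: standard nonlinear interface flux (98) at one node on the minus side.
# vm, vp are the grid velocities on the two sides; SJ is the surface Jacobian;
# F is the friction law, here F(V) = α asinh(V) as in Section 6.2.
F(V; α = 1.0) = α * asinh(V)

function standard_nonlinear_fluxes(um, vm, vp, SJ)
    V     = vp - vm            # sliding velocity seen from the minus side
    τstar = SJ * F(V)          # traction flux set directly by the friction law
    return um, τstar           # u* is the same-side grid value, per (98)
end
```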
Using this we can compute the $Z^{\pm}$ weighted-average $\frac{Z^{-}\tau^{+}-Z^{+}\tau^{-}}{Z^{+}+Z^{-}}=F\left(V^{+}\right).$ (101) Expressing $\tau^{\pm}$ in terms of $\mathcal{Q}^{\pm}$ and $w^{\pm}$, see (8b), then gives $\frac{Z^{-}\mathcal{Q}^{+}-Z^{-}w^{+}-Z^{+}\mathcal{Q}^{-}+Z^{+}w^{-}}{2(Z^{+}+Z^{-})}=F\left(V^{+}\right).$ (102) The sliding velocity $V^{+}$ can be written in terms of the characteristic variables using (8a): $V^{+}=\dot{u}^{-}-\dot{u}^{+}=\frac{\mathcal{Q}^{-}+w^{-}}{2Z^{-}}-\frac{\mathcal{Q}^{+}+w^{+}}{2Z^{+}}=\frac{Z^{+}\mathcal{Q}^{-}+Z^{+}w^{-}-Z^{-}\mathcal{Q}^{+}-Z^{-}w^{+}}{2Z^{-}Z^{+}}.$ (103) Using this, we can rewrite (102) as $\frac{Z^{+}Z^{-}}{(Z^{+}+Z^{-})}V^{+}+\frac{Z^{+}w^{-}-Z^{-}w^{+}}{(Z^{+}+Z^{-})}=F\left(V^{+}\right).$ (104) This expression can be more compactly written by defining $\displaystyle\tau^{+}_{l}=\frac{Z^{+}w^{-}-Z^{-}w^{+}}{(Z^{+}+Z^{-})},$ (105) which depends only on the characteristic variables propagating into the interface and is the traction that would result if the interface were a locked interface; seen by using (18) in (8b). We can now write the final form of the root finding problem as $\eta V^{+}+\tau_{l}^{+}=F\left(V^{+}\right),$ (106) where $\eta=Z^{+}Z^{-}/(Z^{+}+Z^{-})$ is known as the radiation damping coefficient. Once this nonlinear system is solved for $V^{+}$ all other quantities can be determined using (8). When numerically solving (106) it is useful to realize that $\operatorname{sgn}\left(V^{+}\right)=\operatorname{sgn}\left(\tau_{l}^{+}\right)$ and that the root can be bracketed: $\left|V^{+}\right|\in\left[0,F^{-1}\left(\tau_{l}^{+}\right)\right]$. ## References * Almquist and Dunham [2020] Almquist, M., Dunham, E.M.: Non-stiff boundary and interface penalties for narrow-stencil finite difference approximations of the laplacian on curvilinear multiblock grids. Journal of Computational Physics 408, 109,294 (2020). DOI 10.1016/j.jcp.2020.109294 * Bezanson et al. [2017] Bezanson, J., Edelman, A., Karpinski, S., Shah, V.B.: Julia: A fresh approach to numerical computing. SIAM review 59(1), 65–98 (2017). DOI 10.1137/141000671 * Carpenter et al. [1994] Carpenter, M.H., Gottlieb, D., Abarbanel, S.: Time-stable boundary conditions for finite-difference schemes solving hyperbolic systems: Methodology and application to high-order compact schemes. Journal of Computational Physics 111(2), 220–236 (1994). DOI 10.1006/jcph.1994.1057 * Carpenter and Kennedy [1994] Carpenter, M.H., Kennedy, C.A.: Fourth-order 2N-storage Runge-Kutta schemes. Tech. Rep. NASA TM-109112, National Aeronautics and Space Administration, Langley Research Center, Hampton, VA (1994) * Carpenter et al. [1999] Carpenter, M.H., Nordström, J., Gottlieb, D.: A stable and conservative interface treatment of arbitrary spatial accuracy. Journal of Computational Physics 148(2), 341–365 (1999). DOI 10.1006/jcph.1998.6114 * Duru et al. [2019] Duru, K., Allison, K.L., Rivet, M., Dunham, E.M.: Dynamic rupture and earthquake sequence simulations using the wave equation in second-order form. Geophysical Journal International 219(2), 796–815 (2019). DOI 10.1093/gji/ggz319 * Erickson et al. 
[2020] Erickson, B.A., Jiang, J., Barall, M., Lapusta, N., Dunham, E.M., Harris, R., Abrahams, L.S., Allison, K.L., Ampuero, J.P., Barbot, S., Cattania, C., Elbanna, A., Fialko, Y., Idini, B., Kozdon, J.E., Lambert, V., Liu, Y., Luo, Y., Ma, X., Mckay, M.B., Segall, P., Shi, P., van den Ende, M., Wei, M.: The community code verification exercise for simulating sequences of earthquakes and aseismic slip (seas). Seismological Research Letters 91, 874–890 (2020). DOI 10.1785/0220190248 * Hicken and Zingg [2013] Hicken, J.E., Zingg, D.W.: Summation-by-parts operators and high-order quadrature. Journal of Computational and Applied Mathematics 237(1), 111–125 (2013). DOI 10.1016/j.cam.2012.07.015 * Kopriva [2006] Kopriva, D.A.: Metric identities and the discontinuous spectral element method on curvilinear meshes. Journal of Scientific Computing 26(3), 301–327 (2006). DOI 10.1007/s10915-005-9070-8 * Kozdon et al. [2012] Kozdon, J.E., Dunham, E.M., Nordström, J.: Interaction of waves with frictional interfaces using summation-by-parts difference operators: Weak enforcement of nonlinear boundary conditions. Journal of Scientific Computing 50(2), 341–367 (2012). DOI 10.1007/s10915-011-9485-3 * Kreiss and Oliger [1972] Kreiss, H., Oliger, J.: Comparison of accurate methods for the integration of hyperbolic equations. Tellus 24(3), 199–215 (1972). DOI 10.1111/j.2153-3490.1972.tb01547.x * Kreiss and Scherer [1974] Kreiss, H., Scherer, G.: Finite element and finite difference methods for hyperbolic partial differential equations. In: Mathematical aspects of finite elements in partial differential equations; Proceedings of the Symposium, pp. 195–212. Madison, WI (1974). DOI 10.1016/b978-0-12-208350-1.50012-1 * Kreiss and Scherer [1977] Kreiss, H., Scherer, G.: On the existence of energy estimates for difference approximations for hyperbolic systems. Tech. rep., Department of Scientific Computing, Uppsala University (1977) * Mattsson [2012] Mattsson, K.: Summation by parts operators for finite difference approximations of second-derivatives with variable coefficients. Journal of Scientific Computing 51(3), 650–682 (2012). DOI 10.1007/s10915-011-9525-z * Mattsson et al. [2008] Mattsson, K., Ham, F., Iaccarino, G.: Stable and accurate wave-propagation in discontinuous media. Journal of Computational Physics 227(19), 8753–8767 (2008). DOI 10.1016/j.jcp.2008.06.023 * Mattsson et al. [2009] Mattsson, K., Ham, F., Iaccarino, G.: Stable boundary treatment for the wave equation on second-order form. Journal of Scientific Computing 41(3), 366–383 (2009). DOI 10.1007/s10915-009-9305-1 * Mattsson and Nordström [2004] Mattsson, K., Nordström, J.: Summation by parts operators for finite difference approximations of second derivatives. Journal of Computational Physics 199(2), 503–540 (2004). DOI 10.1016/j.jcp.2004.03.001 * Mattsson and Parisi [2010] Mattsson, K., Parisi, F.: Stable and accurate second-order formulation of the shifted wave equation. Communications in Computational Physics 7(1), 103 (2010). DOI 10.4208/cicp.2009.08.135 * Nordström [2017] Nordström, J.: A roadmap to well posed and stable problems in computational physics. Journal of Scientific Computing 71(1), 365–385 (2017). DOI 10.1007/s10915-016-0303-9 * Olsson [1995a] Olsson, P.: Summation by parts, projections, and stability. I. Mathematics of Computation 64(211), 1035–1065 (1995a). DOI 10.2307/2153482 * Olsson [1995b] Olsson, P.: Summation by parts, projections, and stability. II. Mathematics of Computation 64(212), 1473–1493 (1995b). 
DOI 10.2307/2153366 * Rice [1983] Rice, J.R.: Constitutive relations for fault slip and earthquake instabilities. In: Instabilities in continuous media, pp. 443–475 (1983). DOI 10.1115/1.3167042 * Rice et al. [2001] Rice, J.R., Lapusta, N., Ranjith, K.: Rate and state dependent friction and the stability of sliding between elastically deformable solids. J. Mech. Phys. Solids 49(9), 1865–1898 (2001). DOI 10.1016/S0022-5096(01)00042-4 * Roache [1998] Roache, P.: Verification and validation in computational science and engineering. 1 edn. Hermosa Publishers, Albuquerque, NM (1998) * Scholz [1998] Scholz, C.H.: Earthquakes and friction laws. Nature 391(6662), 37–42 (1998). DOI 10.1038/34097 * Strand [1994a] Strand, B.: Summation by parts for finite difference approximations for d/dx. Journal of Computational Physics 110(1), 47–67 (1994a). DOI 10.1006/jcph.1994.1005 * Strand [1994b] Strand, B.: Summation by parts for finite difference approximations for $d/dx$. Journal of Computational Physics 110(1), 47–67 (1994b). DOI 10.1006/jcph.1994.1005 * Virta and Mattsson [2014] Virta, K., Mattsson, K.: Acoustic wave propagation in complicated geometries and heterogeneous media. Journal of Scientific Computing 61(1), 90–118 (2014). DOI 10.1007/s10915-014-9817-1
# BARCOR: Towards A Unified Framework for Conversational Recommendation Systems Ting-Chun Wang Shang-Yu Su Yun-Nung Chen National Taiwan University, Taipei, Taiwan <EMAIL_ADDRESS><EMAIL_ADDRESS> ###### Abstract Recommendation systems focus on helping users find items of interest in the situations of information overload, where users’ preferences are typically estimated by the past observed behaviors. In contrast, conversational recommendation systems (CRS) aim to understand users’ preferences via interactions in conversation flows. CRS is a complex problem that consists of two main tasks: (1) recommendation and (2) response generation. Previous work often tried to solve the problem in a modular manner, where recommenders and response generators are separate neural models. Such modular architectures often come with a complicated and unintuitive connection between the modules, leading to inefficient learning and other issues. In this work, we propose a unified framework based on BART for conversational recommendation, which tackles two tasks in a single model. Furthermore, we also design and collect a lightweight knowledge graph for CRS in the movie domain. The experimental results show that the proposed methods achieve the state-of-the-art performance in terms of both automatic and human evaluation. 111The data and source code will be released once accepted. ## 1 Introduction Though recommendation systems have gained tremendous success in various domains and many aspects of our lives, they have potential limitations. Practically, recommending is often a one-shot, reactive, uni-directional process. Users passively receive recommended information from the systems in certain pre-defined situations. It assumes that a user has clear, immediate requests when interacting with the system; however, such recommending may not be accurate since user demand would change over time and vacillate. Sometimes users are indecisive; to this end, traditional recommendation systems lack proactive guidance. Conversational Recommendation Systems (CRS) became an emerging research topic, focusing on exploring users’ preferences through natural language interaction. Generally speaking, CRSs support goal-oriented, multi-turn dialogues, which proactively acquire precise user demand by interactions. Thereby, CRS is a complex system consisting of a recommendation module and a dialogue module, which make suitable recommendations and generate proper responses respectively. In terms of modeling, CRS requires seamless integration between the recommendation module and the dialogue module. The systems need to understand user preferences by preceding dialogue context and recommend suitable items. To recommend items to users in the natural language form, the generated responses need to contain relevant items while being fluent and grammatically correct. Previous work has proposed different approaches for integrating the two major modules, for instance, building belief trackers over semi-structured user queries Sun and Zhang (2018); Zhang et al. (2020) and switching decoders for component selection Li et al. (2018). Furthermore, as practical goal- oriented dialogue systems, CRSs usually utilize Knowledge Graphs (KG) for introducing external knowledge and system scalability. Choosing a suitable KG, leveraging the information of entities, and interacting with the two main components of CRS for high-quality recommendation is undoubtedly another challenging problem. Recent work Zhou et al. 
(2020) proposed to incorporate two special KGs for enhancing data representations of both components and fuse the two semantic spaces by associating two different KGs. Specifically, they incorporate ConceptNet Speer et al. (2017) for word-level information and DBpedia Lehmann et al. (2015) for item information. ConceptNet provides word information such as synonyms and antonyms of certain words, which helps understand dialogue context. At the same time, DBpedia has structural information of entities, providing rich attributes and direct relations between items. However, these public large-scale knowledge graphs were not designed for CRSs hence may not be suitable. Though prior methods have achieved some improvement in performance, there are some potential limitations. Most of them build recommender and response generator separately with complicated and unintuitive connection between the modules, which may cause inefficient learning and unclear knowledge transfer between the modules. For example, the work mentioned above Zhou et al. (2020) requires training multiple graph convolution networks for KG embeddings, mutual information maximization to bridge the embedding spaces. In this case, the practical usage and scalability of the system design are a concern to some extent. To this end, we propose a unified framework for the conversational recommendation, which tackles two tasks in a single model. The framework is built on top of pretrained BART Lewis et al. (2020) and finetuned on the recommendation and response generation tasks. We proposed to use the bidirectional encoder of BART as the recommender and the auto-regresive decoder as the response generator, so-called BARCOR (Bidirectional Auto- Regressive COnversational Recommender). Moreover, we design and collect a lightweight knowledge graph for CRS in the movie domain. With the essentially- connected model structure of BART, we do not need to worry about designing a connection between the recommender and the response generator. To sum up, the contributions can be summarized as 3-fold: * • This paper proposes a general framework conversational recommendation based on BART, which tackles two tasks in a single model. * • This work designs and collects a lightweight knowledge graph for CRS in the movie domain. * • The benchmark experiments demonstrate the effectiveness of the proposed framework. Figure 1: The proposed framework is composed of three components: (1) knowledge graphs for providing external knowledge, (2) a bidirectional encoder as the recommender, and (3) an auto-regressive decoder as the response generator. ## 2 Related Work As a specific type of goal-oriented dialogue systems, Conversational Recommendation Systems (CRS) have also moved towards the use of neural networks Li et al. (2018). Christakopoulou et al. (2018) uses recurrent neural network-based models to recommend videos to users; Zhang et al. (2016) explores the use of knowledge bases in recommendation tasks. Sun et al. (2018) proposes an embedding-based approach to learn semantic representations of entities and paths in a KG to characterize user preferences towards items. Wang et al. (2019) improves the performance of the recommenders by learning the embeddings for entities in the KG using the TransR algorithm Lin et al. (2015) and refining and discriminating the node embeddings by using attention over the neighbour nodes of a given node. Wang et al. (2018) and Li et al. 
(2020) focus on solving the task of goal-oriented conversation recommendation for cold-start users. Li et al. (2020) generates new venues for recommendation using graph convolution networks (GCNs) and encodes the dialogue contents using hierarchical recurrent encoder-decoder (HRED) Sordoni et al. (2015) and thereby recommend locations to users. Li et al. (2018) released the ReDial dataset wherein users are recommended movies based on the conversation they have with the recommendation agents. KBRD Chen et al. (2019) extends the work of Li et al. (2018) by incorporating a KG and proposing a graph-based recommender for movie recommendations. They have also shown that dialogue and recommendation in CRSs are complementary tasks and benefit one another. To better understand user’s preferences, KGSF Zhou et al. (2020) introduces a word-oriented KG to facilitate node representation learning. Recently, to generate natural and informative responses with accurate recommendations, Lu et al. (2021) incorporates movie reviews, and Zhang et al. (2021) proposes supervision signals for the semantic fusion of words and entities. ## 3 Dataset The ReDial Li et al. (2018) dataset is widely adopted for the conversational recommendation task. This dataset is constructed through Amazon Mechanical Turk (AMT) and comprises multi-turn conversations centered around movie recommendations in seeker-recommender pairs. It contains 10,006 conversations consisting of 182,150 utterances related to 51,699 movies. To generate training data, previous work Zhou et al. (2020) viewed all items mentioned by recommenders as recommendations. However, this processing measure causes issues, clearly stated in Zhang et al. (2021). First, repetitive items are likely to guide a model to simply recommend items once appeared in dialogues. Secondly, the evaluation dataset is biased to repetitive recommendations, failing to present recommendation quality faithfully. To address the issues, we only consider items as recommendations only if they aren’t mentioned before. Since the recommendation module takes over the item recommendation task, the dialogue module could focus on capturing sentence semantics to produce fluent conversations. Thus, we mask the recommended items in the target response with a special token, [MOVIE]. It also serves as a placeholder for items retrieved by the recommender module in generated responses during the inference phase. Table 1 shows training examples from this process. | Accepted | Context | Response | Target movie ---|---|---|---|--- (a) | | S: Hi, I am looking for a movie like Super Troopers. | Yes [MOVIE] is funny. | Police Academy R: You should watch Police Academy. S: Is that a great one? I have never seen it. (b) | | R: Hello, what kind of movies do you like? | | Happy Death Day S: I am looking for a movie recommendation. | Oh, you like scary movies? S: When I was younger, I really enjoyed the | I recently watched [MOVIE]. A Nightmare on Elm Street. | Table 1: Examples in the processed ReDial dataset. In the column of context, "S" and "R" represent a movie seeker and a recommender respectively. Recommended items in responses are masked by [MOVIE]. Example (a) isn’t accepted to the processed dataset since "Police Academy" is a repetitive item, which is presented in the context. ## 4 Preliminaries In this section, we first introduce the problem formulation and then detail the collected knowledge graph. 
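Before turning to the formulation, the preprocessing just described in Section 3 can be sketched as follows; the field names `speaker`, `text`, and `movies` are hypothetical, and the sketch only illustrates the filtering of repetitive items and the [MOVIE] masking, not the actual data pipeline.

```julia
# Sketch of the Section 3 preprocessing (illustrative only): turn a dialogue
# into (context, new-item, masked-response) triplets, dropping repetitive
# recommendations and masking recommended titles with the [MOVIE] token.
function build_triplets(utterances)
    triplets = []
    seen = Set{String}()                       # movies mentioned in the context
    for j in 2:length(utterances)
        u = utterances[j]
        context = [v.text for v in utterances[1:j-1]]
        if u.speaker == :recommender
            new_items = [m for m in u.movies if !(m in seen)]
            response = u.text
            for m in new_items
                response = replace(response, m => "[MOVIE]")
            end
            push!(triplets, (context, new_items, response))
        end
        union!(seen, u.movies)                 # anything mentioned so far counts as seen
    end
    return triplets
end
```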
## 4 Preliminaries

In this section, we first introduce the problem formulation and then detail the collected knowledge graph.

### 4.1 Problem Formulation

For the dataset, $\{u_{i}\}_{i=1}^{n}$ denotes a conversation, where $u_{i}$ is the utterance at the $i$-th turn and $n$ is the number of utterances in the conversation history. We process a conversation into multiple data triplets $(X,\mathcal{I},y)$. At the $j$-th turn, $X_{j}=\{u_{i}\}^{j-1}_{i=1}$ denotes the conversation context, $\mathcal{I}_{j}$ is the set of ground-truth items presented in $u_{j}$ for the recommendation task, and $y_{j}=u_{j}$ denotes the target response for the generation task. Note that no entry in $\mathcal{I}_{j}$ appears in the context $X_{j}$, as stated in the previous section, and $\mathcal{I}_{j}$ can be an empty set when there is no need for recommendations. For the knowledge graph, $\mathcal{G}=\{(e_{h},r,e_{t})\mid e_{h},e_{t}\in\mathcal{E},r\in\mathcal{R}\}$ denotes the KG, where $(e_{h},r,e_{t})$ means that the head entity $e_{h}$ and the tail entity $e_{t}$ are related by the relation $r$. The entity set $\mathcal{E}$ consists of a movie item set $\mathcal{I}$ and a set of descriptive entities that are film properties. The set of ground-truth items $\mathcal{I}_{j}$ is a subset of $\mathcal{I}$. Conversational recommendation is essentially the combination of two tasks: item retrieval and natural language generation. They are formulated as two functions, $f(X,\mathcal{G})$ and $g(X,\mathcal{I}_{\text{pred}})$. $f(X,\mathcal{G})$ gives novel recommendations $\mathcal{I}_{\text{pred}}$ based on the context $X$ and the KG $\mathcal{G}$, and $g(X,\mathcal{I}_{\text{pred}})$ generates natural responses based on the context and the recommended items.

### 4.2 CORG (COnversational Recommender Graphs)

In previous work, a wide variety of external knowledge sources have been incorporated to facilitate recommendations. However, the KGs adopted in previous work Zhou et al. (2020); Chen et al. (2019); Sarkar et al. (2020) are open-domain KGs, e.g., DBpedia and ConceptNet, which may introduce too many irrelevant entities and obscure high-order connectivity, as stated in Zhang et al. (2021). Although some datasets, such as MindReader Brams et al. (2020), are intended for movie recommendations, their coverage of the movies in the ReDial dataset is low, as shown in Table 2. To mitigate these issues, we construct a knowledge graph called CORG (COnversational Recommender Graphs), which contains 5 types of node entities and 5 types of relations.

#### Data Source

We collect information about movies from Wikidata (https://www.wikidata.org/wiki/Wikidata:Main_Page), which is a collaboratively edited multilingual knowledge graph hosted by the Wikimedia Foundation (https://wikimediafoundation.org/). It contains movie-related information and identifiers of other databases for additional information, such as synopses or reviews.

#### Information Collection

Nodes in CORG comprise two kinds of entities: _movie items_ and _descriptive entities_. Movie items are all mentioned movies in ReDial, and descriptive entities are associative properties of those movies. We use "movie name" and "release year" as keys to query Wikidata to collect movie properties, including movie genres, cast members, directors, and production companies. In this way, we get the entire set of nodes in CORG, whose statistics are shown in Table 4. Among 6,924 mentioned movies in ReDial, CORG covers 6,905 movies (99.7%).
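As one illustration of such a query, the SPARQL sketch below looks up a movie on Wikidata by name and release year and retrieves the four property types listed above. The property identifiers (P136 genre, P161 cast member, P57 director, P272 production company, P577 publication date) are standard Wikidata properties, but the authors' exact collection pipeline is not published, so this is only an assumption about how the step could be implemented.

```python
import requests

WDQS = "https://query.wikidata.org/sparql"  # public Wikidata Query Service endpoint

def movie_properties(title: str, year: int):
    """Return (property, value label) pairs for a movie matched by name and release year."""
    query = f"""
    SELECT ?prop ?valueLabel WHERE {{
      ?film rdfs:label "{title}"@en ;
            wdt:P577 ?date .
      FILTER(YEAR(?date) = {year})
      VALUES ?prop {{ wdt:P136 wdt:P161 wdt:P57 wdt:P272 }}
      ?film ?prop ?value .
      SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en". }}
    }}"""
    resp = requests.get(WDQS, params={"query": query, "format": "json"})
    resp.raise_for_status()
    return [(b["prop"]["value"], b["valueLabel"]["value"])
            for b in resp.json()["results"]["bindings"]]

# Example: movie_properties("Police Academy", 1984)
```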
#### Data Processing

Assuming seekers are only interested in protagonists, we select the top-10 main cast members. Besides, since movie genres in Wikidata are hierarchically arranged (e.g., superhero film is a subclass of action and adventure films), we recursively build edges between the nodes of genres and those of their parent genres. The edge statistics are shown in Table 4.

| Knowledge Graph | # Movies | # Entities | Designed for ReDial | Movie Coverage for ReDial |
|---|---|---|---|---|
| MindReader Brams et al. (2020) | 4,941 | 18,707 | No | 44.6% |
| DBpedia (KGSF) Zhou et al. (2020) | 6,111 | 64,361 | No | 88.2% |
| TMDKG Zhang et al. (2021) | 6,692 | 15,822 | Yes | 96.2% |
| CORG | 6,905 | 23,164 | Yes | 99.7% |

Table 2: Characteristics of CORG and existing knowledge graphs. Although TMDKG has high movie coverage, its source code is not publicly available.

Figure 2: A sample subgraph of CORG. CORG has 5 types of node entities and 5 types of relations; the statistics of types and relations are shown in Table 4.

## 5 BARCOR

We propose to use the bidirectional encoder of BART Lewis et al. (2020) as the recommender and the auto-regressive decoder as the response generator, so-called BARCOR (Bidirectional Auto-Regressive COnversational Recommender). BARCOR is a unified framework for conversational recommendation, which tackles the two tasks in a single model. The proposed framework is composed of three main components: (1) a knowledge graph encoder to provide external knowledge, (2) a bidirectional encoder for recommendation, and (3) an auto-regressive decoder for response generation. In this section, we will go through the design of each component in the pipeline.

### 5.1 Graph Encoder

We follow Zhou et al. (2020), adopting a Relational Graph Convolutional Network (R-GCN) Schlichtkrull et al. (2017) to learn entity representations in CORG. Formally, the hidden state of an entity $i$ at the ($l+1$)-th layer is formulated as: $\mathbf{h}_{i}^{(l+1)}=\sigma(\sum_{r\in\mathcal{R}}\sum_{j\in\mathcal{E}_{i}^{r}}\dfrac{1}{|\mathcal{E}_{i}^{r}|}\mathbf{W}_{r}^{(l)}\mathbf{h}_{j}^{(l)}+\mathbf{W}^{(l)}\mathbf{h}_{i}^{(l)}),$ where $\mathbf{h}_{i}^{(l)}\in\mathbb{R}^{d_{E}}$ is the hidden state of the entity $i$ at the $l$-th layer, $d_{E}$ is the dimension of the hidden state, and $\mathbf{h}_{i}^{(0)}$ is also referred to as the entity embedding $\mathbf{e}_{i}$. $\mathcal{E}_{i}^{r}$ is the set of neighboring entities of the entity $i$ related by $r$, and its cardinality serves as a normalization constant. $\mathbf{W}_{r}^{(l)}$ denotes a learnable relation-specific transformation matrix for the hidden states of neighboring entities under a relation $r$, and $\mathbf{W}^{(l)}$ is a learnable matrix for transforming hidden states at the $l$-th layer. We treat the hidden states of the last layer as the representations of entities in CORG, denoted by $\mathbf{H}\in\mathbb{R}^{|\mathcal{E}|\times d_{E}}$. These representations constitute the search space of recommendation candidates for item retrieval. In addition to the recommendation task, we include a node classification task to facilitate graph representation learning. Given an entity representation $\mathbf{h}$ and a multi-layer perceptron (MLP), we obtain a node type prediction $\mathbf{p}_{\text{node}}\in\mathbb{R}^{N_{T}}$, where $N_{T}$ is the number of node types: $\mathbf{p}_{\text{node}}=\mathrm{Softmax}(\mathrm{MLP}(\mathbf{h})).$ (1) Then, we compute a cross-entropy loss $L_{\text{node}}$ between the prediction from Equation (1) and the ground-truth node types to optimize the graph encoder.

### 5.2 BART as Conversational Recommender

BART is a Transformer-based Vaswani et al.
(2017) sequence-to-sequence model, which can be seen as generalizing BERT (a bidirectional encoder) and GPT (an autoregressive decoder). In the design of BART, the decoder performs cross-attention from each of its layers over the final hidden state of the encoder to be aware of input sequences. This operation seamlessly integrates the recommendation and dialogue modules into a unified conversational recommender. BARCOR features four advantages over the graph-based recommenders in previous works. First, a unified framework inherently fuses the semantics between the encoder and the decoder and is less sensitive to the design of the model architecture and to hyper-parameter selection. In contrast, other works propose complex attentive interactions between modules, which are not robust from a production-system perspective: slight parameter changes can impact performance. Moreover, BART has proven effective in various downstream tasks, such as neural machine translation and question answering. Second, BART takes users' utterances as input without further processing. In contrast, the graph-based recommender in Zhou et al. (2020) demands manual annotations for movies and words in the input texts to build a user preference, which is impractical in a realistic scenario. Third, the learned knowledge from pretrained models provides rich sentence semantics. Finally, BART supports an end-to-end training scheme for both the recommendation and generation tasks. Conversely, other works tend to design separate modules for the two tasks and optimize each module sequentially.

#### Bidirectional Recommender

Given a conversation context $X$, the BART encoder transforms $X$ into $\mathbf{c}$, the hidden state of the final self-attentive layer. Then, $\mathbf{c}$ is viewed as a sentence representation of $X$ and also a search key for retrieving recommendation candidates. To derive the probability over the candidates, we apply the inner product to compute the similarity between $\mathbf{c}$ and the entity representations $\mathbf{H}$ from the graph encoder, $\displaystyle\mathbf{p}_{\text{rec}}=\mathrm{Softmax}(\mathbf{c}\mathbf{H}^{\intercal}),$ (2) $\displaystyle\mathbf{p}_{\text{rec-infer}}=\mathrm{Softmax}(\mathbf{c}\mathbf{H}_{I}^{\intercal}),$ (3) where $\mathbf{p}_{\text{rec}}\in\mathbb{R}^{|\mathcal{E}|}$ denotes the recommendation prediction. To learn the parameters of BARCOR, we employ a cross-entropy loss $L_{\text{rec}}$ between the prediction from Equation (2) and the labels of the ground-truth entities. Note that the search space of recommendation candidates is $\mathbf{H}$, which means both _movie items_ and _descriptive entities_ are likely to be retrieved.
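To make the wiring of the graph encoder (Section 5.1) and this bidirectional recommender concrete, here is a hedged PyTorch sketch using PyTorch Geometric's RGCNConv and the HuggingFace BartModel encoder. Pooling the encoder output at the last non-padded token, sharing a 768-dimensional space between sentence and entity representations, and the two-layer R-GCN depth are our illustrative assumptions, not details stated in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch_geometric.nn import RGCNConv
from transformers import BartModel

class SketchRecommender(nn.Module):
    """Hedged sketch of the recommender path (Sections 5.1-5.2), not the official code."""

    def __init__(self, num_entities, num_relations, num_node_types, dim=768):
        super().__init__()
        self.bart = BartModel.from_pretrained("facebook/bart-base")
        self.entity_emb = nn.Embedding(num_entities, dim)   # optionally initialized from BART
        self.rgcn1 = RGCNConv(dim, dim, num_relations)
        self.rgcn2 = RGCNConv(dim, dim, num_relations)
        self.node_clf = nn.Linear(dim, num_node_types)      # node-type head behind Eq. (1)

    def entity_reps(self, edge_index, edge_type):
        h = F.relu(self.rgcn1(self.entity_emb.weight, edge_index, edge_type))
        return self.rgcn2(h, edge_index, edge_type)         # H: (|E|, dim)

    def forward(self, input_ids, attention_mask, edge_index, edge_type):
        H = self.entity_reps(edge_index, edge_type)
        enc = self.bart.get_encoder()(input_ids=input_ids, attention_mask=attention_mask)
        # sentence key c: hidden state of the last non-padded token (assumes right padding)
        last = attention_mask.sum(dim=1).long() - 1
        c = enc.last_hidden_state[torch.arange(input_ids.size(0)), last]   # (B, dim)
        rec_logits = c @ H.t()          # scores behind Eq. (2); restrict to movie rows H_I at inference (Eq. 3)
        node_logits = self.node_clf(H)  # scores behind Eq. (1)
        return rec_logits, node_logits

# L_rec and L_node are then standard cross-entropy losses over these logits.
```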
#### Data Augmentation

Since the sentence-level semantics extracted by the BART encoder are naturally inconsistent with the entity-level semantics from the graph encoder, in addition to optimizing BARCOR with $L_{\text{rec}}$ we propose to (1) augment the training set with descriptive entities and (2) strategically initialize the graph encoder's embeddings to facilitate the fusion of heterogeneous semantics. First, during training, we construct data using the names of descriptive entities as the conversation context, such as "George Clooney," and the entities themselves as the recommended items. These data allow the representations of descriptive entities to be optimized directly by $L_{\text{rec}}$ instead of indirectly through their one-hop neighboring movie items. Besides, BARCOR becomes more aware of their names in the conversation context and of their neighboring movie items. Second, we initialize the entity embeddings $\{\mathbf{e}_{i}\}_{i=1}^{|\mathcal{E}|}$ with the sentence representations of their names produced by the pretrained BART encoder. Thus, the initial semantic gap between the two types of representations becomes smaller, making them presumably easier to fuse. During the inference phase, however, the search space is reduced to the item set $\mathcal{I}$. The recommendation prediction is computed through Equation (3), where $\mathbf{H}_{I}$ is the matrix consisting only of movie item representations.

#### Auto-Regressive Response Generator

We retain the original operations of the BART decoder, which is conditioned on an input sequence and its sentence representation (i.e., the final hidden state of the BART encoder) to generate a response auto-regressively. We follow Radford and Narasimhan (2018) to compute the generative probability and optimize the decoder with a negative log-likelihood loss $L_{\text{gen}}$. During training, we mask the target responses of the augmented dataset to preserve authentic conversation flows.

#### End-to-End Training

We optimize BARCOR by simultaneously performing the recommendation and generation tasks, in contrast to previous works that demand sequential optimization of two separate components. That is, we jointly minimize the objective as follows: $L=L_{\text{rec}}+\alpha L_{\text{gen}}+\beta L_{\text{node}},$ where $\alpha$ and $\beta$ are hyper-parameters determined by cross-validation.

## 6 Experiments

### 6.1 Experiment Setup

#### Baselines

We compare BARCOR with the following baseline methods for the recommendation and response generation tasks on the processed ReDial dataset discussed in Section 3.

* • KBRD Chen et al. (2019) employs DBpedia to enhance the semantics of contextual items or entities for the construction of user preferences. The dialogue module is based on the Transformer, where KG information is incorporated as a word bias during generation.
* • KGSF Zhou et al. (2020) uses mutual information maximization (MIM) Viola and Wells (1995) to fuse the information of entity-oriented and word-oriented KGs (i.e., DBpedia and ConceptNet). A user preference is constructed from the fused representations of items and words. The dialogue module is based on the Transformer, consisting of a standard encoder and a KG-enhanced decoder.

#### Automatic Evaluation

For the recommendation task, we adopt _Recall@k_ (R@k, k=1, 5, 10, 50), which indicates whether the top-k recommended items contain the ground-truth recommendations. Since users may be frustrated by too many recommendations within a response, Recall@1 and Recall@5 more faithfully reflect the recommendation performance. For the generation task, we follow Zhou et al. (2020) and use _Distinct n-gram_ (Dist-n, n=2, 3, 4), which measures the diversity of sentences. Since CRSs interact with humans through natural language, we introduce two metrics to capture the effectiveness of recommendations. _Item-F1_ measures whether a CRS accurately provides recommendations compared to the ground-truth responses. _Average Item Number (AIN)_ denotes the average number of recommended items within a sentence and reflects the informativeness of generated responses.
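For reference, minimal implementations of the two automatic metrics are sketched below. The exact normalization of Dist-n is not fully specified in the paper (the values reported in Table 3 exceed 100, suggesting a different scaling, such as unique n-grams per sentence), so the sketch uses the common unique-over-total definition; Recall@k is likewise shown in its standard per-example form.

```python
def recall_at_k(ranked_items, gold_items, k):
    """Fraction of ground-truth items that appear among the top-k ranked candidates."""
    top_k = set(ranked_items[:k])
    return sum(item in top_k for item in gold_items) / max(len(gold_items), 1)

def distinct_n(sentences, n):
    """Ratio of unique n-grams to total n-grams over a set of generated sentences."""
    ngrams, total = set(), 0
    for sentence in sentences:
        tokens = sentence.split()
        for i in range(len(tokens) - n + 1):
            ngrams.add(tuple(tokens[i:i + n]))
            total += 1
    return len(ngrams) / max(total, 1)
```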
#### Human Evaluation

In line with the CRS goal of providing successful recommendations, we invite 11 professional annotators to judge response quality. Given 40 multi-turn conversations from the testing set, the annotators evaluate the quality in terms of three aspects: (1) _Fluency_, (2) _Relevancy_, and (3) _Informativeness_, with each score ranging from 0 to 2.

### 6.2 Result Analysis

|  | Model | R@1 | R@5 | R@10 | R@50 | Dist-2 | Dist-3 | Dist-4 | Item-F1 | AIN | Fluen. | Relev. | Informat. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| (a) | KBRD | 1.46 | 7.23 | 12.65 | 30.26 | 14.32 | 27.27 | 39.57 | 58.80 | 36.63 | 1.62 | 1.08 | 1.01 |
| (b) | KGSF | 1.41 | 7.66 | 13.47 | 32.17 | 19.49 | 35.36 | 49.19 | 62.61 | 41.00 | 1.56 | 0.98 | 0.66 |
| (c) | BARCOR | 2.53 | 9.98 | 16.17 | 34.95 | 58.90 | 88.75 | 102.52 | 71.71 | 53.00 | 1.86 | 1.76 | 1.57 |
| (e) | (c) - Node Loss | 2.32 | 9.01 | 15.61 | 34.3 | 41.12 | 61.15 | 73.60 | 71.08 | 45.22 | - | - | - |
| (f) | (c) - Data Aug. | 2.23 | 9.22 | 14.62 | 34.16 | 31.91 | 45.05 | 53.57 | 55.13 | 44.64 | - | - | - |
| (g) | (c) - Node Init. | 1.95 | 8.68 | 14.67 | 33.86 | 22.32 | 35.33 | 45.19 | 68.21 | 44.30 | - | - | - |
| (h) | (c) - CORG | 2.29 | 9.15 | 15.32 | 33.34 | 30.50 | 43.11 | 50.80 | 70.00 | 48.37 | - | - | - |

Table 3: Results on the recommendation task (R@1-R@50), the response generation task (Dist-2 to Dist-4, Item-F1, AIN), and the human evaluation, where "Fluen.", "Relev.", and "Informat." denote fluency, relevancy, and informativeness, respectively. The best results are in bold.

Table 3 summarizes the performance of different methods on the ReDial dataset, including automatic evaluation for the recommendation and response generation tasks and human evaluation.

#### Item Recommendation

As we can see, KGSF outperforms KBRD because KGSF incorporates a word-oriented KG to enrich entity representations, highlighting the importance of words in context for representation learning. With learned knowledge from pretrained models, BARCOR achieves 2.53% in R@1, 9.98% in R@5, 16.17% in R@10, and 34.95% in R@50, outperforming KGSF by 79% and 30% (relative) in R@1 and R@5, respectively. This demonstrates a tight fusion of semantics between sentences in the context and entities in the KG. Also, context and knowledge provide richer entity information compared to the word-oriented KG adopted by KGSF.

#### Response Generation

In the automatic evaluation, the proposed BARCOR outperforms all baseline methods by a large margin in terms of Dist-n. Compared to KGSF, it improves Dist-2, Dist-3, and Dist-4 by 39.41, 53.39, and 53.33 points, respectively, which demonstrates that the proposed method effectively generates diverse sentences. Besides, BARCOR achieves 71.71% in Item-F1 and 53% in AIN. This suggests that BARCOR interprets user intentions and then precisely generates responses containing recommendations. In the human evaluation, BARCOR performs best among all methods on the three metrics. BARCOR has especially high scores in Relevancy and Informativeness, indicating that the generated responses are both accurately aligned with user intentions and rich in recommended items and related information. This corroborates our interpretation of the Item-F1 and AIN scores in the automatic evaluation. The above results demonstrate the effectiveness of our method, which fuses entity representations from the KG with sentence representations to generate fluent, relevant, and informative utterances. We also provide a qualitative analysis in Appendix B.

#### Training Stability

Figure 3: Recommendation performance of BARCOR and the baselines on the validation set at different training epochs.
Figure 3 shows the performance curves of Recall@5 (R@5) and the recommendation loss on the validation set for different methods. We select R@5 as the evaluation metric since it is neither too strict nor too tolerant for accurate recommendations. It can be observed that BARCOR is optimized more stably and achieves better performance than the other competitive baseline methods. Within the first four epochs, both KBRD and KGSF quickly reach an optimal state where the models attain their highest R@5 with the lowest recommendation loss. However, as training progresses, they begin to overfit the training data, leading to a decline in R@5 and a rise in the recommendation loss. The instability may be attributed to the insufficient semantics in the conversation context and to the number of trainable parameters. To construct a user representation, the baselines aggregate information from annotated entities, including movies and their associative properties, in the conversation context. Although KGSF incorporates a word-oriented KG and a semantic fusion technique, the combinations of words and entities are still limited to the training set and the KGs. Therefore, some informative words or entities and their variants are lost if not present in the corpus. In contrast, BARCOR directly encodes the entire context to build a user representation, ensuring that every word is considered and increasing semantic richness. Learned knowledge from pretrained models also prevents BARCOR from becoming overly biased toward the training set. Moreover, we note that the number of trainable parameters of BARCOR's recommendation module (39 million) is less than half of that of KGSF's (106 million) and KBRD's (91 million) recommenders. More details about the models are presented in Table 5 in the Appendix. Optimizing fewer parameters with inputs of richer semantics, BARCOR consistently outperforms these baselines on all recommendation metrics. The results demonstrate the effectiveness and optimization stability of the proposed unified framework for modeling CRS.

### 6.3 Ablation Study

Figure 4: Ablation study: Recommendation performance on the validation set at different training epochs.

To understand the contribution of each component to the recommendation and generation tasks, we conduct an ablation study with four variants of BARCOR: (1) BARCOR (w/o Node Loss): removing the cross-entropy loss of the node classification task presented in Section 5.1, (2) BARCOR (w/o Data Aug.): removing the training-set augmentation mentioned in Section 5.2, (3) BARCOR (w/o Node Init.): replacing node embeddings from the pretrained BART encoder with randomly initialized weights, as mentioned in Section 5.2, and (4) BARCOR (w/o CORG): excluding CORG by removing relations among nodes. Since the recommendation and dialogue modules share the same sentence representation of the context, techniques designed for representation enrichment are mutually beneficial for both tasks. As shown in Table 3 (rows (e)-(h)), all techniques help improve the final performance in terms of all metrics. Besides, the node embedding initialization of the graph encoder and the proposed CORG appear to be more critical. First, we observe that R@1, R@5, and Dist-n decrease when the node embeddings are randomly initialized. Also, the validation performance curves in Figure 4 reveal the issue of overfitting, as discussed in Section 6.2. We attribute this to the increased optimization difficulty brought by the incorporation of the graph encoder.
The number of its trainable parameters is 27 million, accounting for 68% of the total trainable parameters in the recommendation module. Randomly initialized embeddings easily fit the seen data but are difficult to fuse with the sentence semantics from BARCOR's encoder. The results reinforce our claim discussed in Section 6.2. Although random initialization leads to a decline in performance, BARCOR (w/o Node Init.) still outperforms the strong baselines on all evaluation metrics. Second, as shown in row (h), BARCOR (w/o CORG) surprisingly achieves results competitive with BARCOR in R@1, R@5, and R@10 and outperforms KGSF, which uses two KGs. That is, BARCOR (w/o CORG) leverages only the relations of entities and words in the dialogue history, yet recommends more accurately than the KG-enhanced strong baselines. This implies that the implicit relations of entities within the context have not yet been exploited to the fullest. In conclusion, the sentence-level semantics derived from BARCOR's encoder provide richer information than the entity representations encoded by the R-GCN and are sufficient for accurate recommendations. Besides, the trade-off between KG-based information enrichment and the optimization difficulty of a graph encoder needs careful consideration. In our work, we propose incorporating a supervision signal from the node classification task, training-set augmentation, and node embeddings initialized by the pretrained BART encoder to reduce this difficulty. We hope these results inspire future research.

## 7 Conclusion

In this paper, we proposed a novel unified framework for conversational recommendation, BARCOR. BARCOR jointly tackles the recommendation and generation tasks with a shared sentence representation of the conversation history. It serves as a search key for item retrieval and provides rich fused semantics of sentences and entities for the decoder to generate responses. Moreover, we enrich the information of entities by constructing a high-quality KG, namely CORG, and incorporating a graph encoder exploiting structural knowledge. The experimental results demonstrate that BARCOR achieves better performance on recommendation accuracy and response quality than all competitive baselines and generates informative responses with great fluency and relevancy.

## References

* Brams et al. (2020) Anders H. Brams, Anders L. Jakobsen, Theis E. Jendal, Matteo Lissandrini, Peter Dolog, and Katja Hose. 2020. MindReader. _Proceedings of the 29th ACM International Conference on Information & Knowledge Management_. * Chen et al. (2019) Qibin Chen, Junyang Lin, Yichang Zhang, Ming Ding, Yukuo Cen, Hongxia Yang, and Jie Tang. 2019. Towards knowledge-based recommender dialog system. _arXiv preprint arXiv:1908.05391_. * Christakopoulou et al. (2018) Konstantina Christakopoulou, Alex Beutel, Rui Li, Sagar Jain, and Ed H Chi. 2018. Q&R: A two-stage approach toward interactive recommendation. In _Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining_, pages 139–148. * Fey and Lenssen (2019) Matthias Fey and Jan Eric Lenssen. 2019. Fast graph representation learning with PyTorch Geometric. * Lehmann et al. (2015) Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N Mendes, Sebastian Hellmann, Mohamed Morsey, Patrick Van Kleef, Sören Auer, et al. 2015. DBpedia – a large-scale, multilingual knowledge base extracted from Wikipedia. _Semantic web_, 6(2):167–195. * Lewis et al.
(2020) Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_. * Li et al. (2018) Raymond Li, Samira Kahou, Hannes Schulz, Vincent Michalski, Laurent Charlin, and Chris Pal. 2018. Towards deep conversational recommendations. * Li et al. (2020) Shijun Li, Wenqiang Lei, Qingyun Wu, Xiangnan He, Peng Jiang, and Tat-Seng Chua. 2020. Seamlessly unifying attributes and items: Conversational recommendation for cold-start users. _arXiv preprint arXiv:2005.12979_. * Lin et al. (2015) Yankai Lin, Zhiyuan Liu, Maosong Sun, Yang Liu, and Xuan Zhu. 2015. Learning entity and relation embeddings for knowledge graph completion. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 29. * Lu et al. (2021) Yu Lu, Junwei Bao, Yan Song, Zichen Ma, Shuguang Cui, Youzheng Wu, and Xiaodong He. 2021. RevCore: Review-augmented conversational recommendation. In _Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021_ , pages 1161–1173, Online. Association for Computational Linguistics. * Radford and Narasimhan (2018) Alec Radford and Karthik Narasimhan. 2018. Improving language understanding by generative pre-training. * Sarkar et al. (2020) Rajdeep Sarkar, Koustava Goswami, Mihael Arcan, and John McCrae. 2020. "suggest me a movie for tonight": Leveraging knowledge graphs for conversational recommendation. * Schlichtkrull et al. (2017) Michael Schlichtkrull, Thomas N. Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. 2017. Modeling relational data with graph convolutional networks. * Sordoni et al. (2015) Alessandro Sordoni, Yoshua Bengio, Hossein Vahabi, Christina Lioma, Jakob Grue Simonsen, and Jian-Yun Nie. 2015. A hierarchical recurrent encoder-decoder for generative context-aware query suggestion. In _Proceedings of the 24th ACM International on Conference on Information and Knowledge Management_ , pages 553–562. * Speer et al. (2017) Robyn Speer, Joshua Chin, and Catherine Havasi. 2017. Conceptnet 5.5: An open multilingual graph of general knowledge. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 31. * Sun and Zhang (2018) Yueming Sun and Yi Zhang. 2018. Conversational recommender system. In _The 41st international acm sigir conference on research & development in information retrieval_, pages 235–244. * Sun et al. (2018) Zhu Sun, Jie Yang, Jie Zhang, Alessandro Bozzon, Long-Kai Huang, and Chi Xu. 2018\. Recurrent knowledge graph embedding for effective recommendation. In _Proceedings of the 12th ACM Conference on Recommender Systems_ , pages 297–305. * Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. * Viola and Wells (1995) P. Viola and W.M. Wells. 1995. Alignment by maximization of mutual information. In _Proceedings of IEEE International Conference on Computer Vision_ , pages 16–23. * Wang et al. (2018) Hongwei Wang, Fuzheng Zhang, Jialin Wang, Miao Zhao, Wenjie Li, Xing Xie, and Minyi Guo. 2018. Ripplenet: Propagating user preferences on the knowledge graph for recommender systems. In _Proceedings of the 27th ACM International Conference on Information and Knowledge Management_ , pages 417–426. 
* Wang et al. (2019) Xiang Wang, Xiangnan He, Yixin Cao, Meng Liu, and Tat-Seng Chua. 2019. KGAT: Knowledge graph attention network for recommendation. In _Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining_, pages 950–958. * Zhang et al. (2016) Fuzheng Zhang, Nicholas Jing Yuan, Defu Lian, Xing Xie, and Wei-Ying Ma. 2016. Collaborative knowledge base embedding for recommender systems. In _Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining_, pages 353–362. * Zhang et al. (2021) Tong Zhang, Yong Liu, Peixiang Zhong, Chen Zhang, Hao Wang, and Chunyan Miao. 2021. KECRS: Towards knowledge-enriched conversational recommendation system. * Zhang et al. (2020) Xiaoying Zhang, Hong Xie, Hang Li, and John C.S. Lui. 2020. Conversational contextual bandit: Algorithm and application. _Proceedings of The Web Conference 2020_. * Zhou et al. (2020) Kun Zhou, Wayne Xin Zhao, Shuqing Bian, Yuanhang Zhou, Ji-Rong Wen, and Jingsong Yu. 2020. Improving conversational recommender systems via knowledge graph based semantic fusion. In _Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining_, pages 1006–1014.

| Measure | Value |
|---|---|
| # Node | 23,164 |
| # Movie | 6,924 |
| # Genre | 313 |
| # Cast Member | 11,017 |
| # Director | 3,587 |
| # Production Company | 1,323 |
| # Edge | 87,212 |
| # Movie-Genre | 19,292 |
| # Movie-Cast Member | 53,109 |
| # Movie-Director | 7,155 |
| # Movie-Production Company | 7,407 |
| # Genre-Genre | 249 |

Table 4: Graph statistics of the constructed CORG.

## Appendix A Implementation Details

In all the experiments, we use mini-batch AdamW with a learning rate of $0.000\,03$ as the optimizer and a batch size of 64 examples on a single Nvidia Tesla V100. The whole training takes $22$ epochs without early stopping. The entire implementation is based on PyTorch, PyTorch Geometric Fey and Lenssen (2019), and the HuggingFace Transformers package (https://huggingface.co/transformers/). We finetune the $11$-th attention layer of the BART encoder and the $10$-th and $11$-th attention layers of the BART decoder for the CRS task. The detailed numbers of trainable parameters are listed in Table 5.
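A minimal sketch of this selective finetuning setup is given below. Since an 11-th encoder layer is referenced, a 12-layer model (facebook/bart-large) is assumed; whether layers are counted from 0 or 1 is not stated in the paper, so the exact indices are likewise an assumption.

```python
from transformers import BartModel

# Sketch only: freeze all of BART, then unfreeze the selected attention layers.
model = BartModel.from_pretrained("facebook/bart-large")  # 12 encoder / 12 decoder layers assumed

for p in model.parameters():                      # freeze everything first
    p.requires_grad = False
for p in model.encoder.layers[11].parameters():   # encoder attention layer "11" (indexing assumed 0-based)
    p.requires_grad = True
for layer in model.decoder.layers[10:12]:         # decoder attention layers "10" and "11"
    for p in layer.parameters():
        p.requires_grad = True

n_trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {n_trainable:,}")
```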
| Model | Rec. | Gen. | # Total |
|---|---|---|---|
| KBRD | 85.9% | 14.1% | 105,601,166 |
| KGSF | 81.6% | 18.4% | 129,899,342 |
| BARCOR | 53.8% | 46.2% | 72,593,777 |

Table 5: Model statistics. "Rec." and "Gen." represent the recommendation and response generation modules, respectively. These two columns present the ratio of the trainable parameters in the two modules to the total number of trainable parameters, which is shown in the column "# Total." The statistics of KBRD (https://github.com/THUDM/KBRD) and KGSF (https://github.com/Lancelot39/KGSF) are derived from the implementations publicly released by their authors on GitHub.

## Appendix B Qualitative Analysis

(a)
Seeker: Hello, I really need a good movie for tonight. I have a date.
Recommender: Ok.
Seeker: Maybe a romantic comedy like How to Lose a Guy in 10 Days (2003).
KBRD: I haven't seen that one yet.
KGSF: Love Actually is a good one.
BARCOR: 50 First Dates (2004) is a good romantic comedy with Adam Sandler.
Human: How about 10 Things I Hate About You (1999)?

(b)
Seeker: Hello! How are you?
Recommender: Hey there. I'm doing well. Can I recommend a movie for you?
Seeker: I'm looking for a great old movie. Any suggestions?
KBRD: Sure! Have you seen Black Panther (2018)?
KGSF: I am doing great.
BARCOR: Have you seen Gone with the Wind (1939)?
Human: Hmmmm, such as Breakfast at Tiffany's (1961)? I loved that movie.

(c)
Recommender: Looking for a good movie?
Seeker: I have a movie night tonight. My friends are coming. I need good mystery movies.
Recommender: Well, Wind River (2017) is a good mystery.
Seeker: I have seen that one.
KBRD: It is really good.
KGSF: Shutter Island (2010) is a good one.
BARCOR: Gone Girl (2014) is another good one. If you're looking for something a little more graphic, it is one of my favorite movies.
Human: How about Memento (2000)?

Table 6: Examples of generated responses from different models. Movie names are in bold.

In this section, we present several conversations generated by different models in Table 6. Compared to the other baseline methods, BARCOR understands the user intention, provides a relevant recommendation, and generates informative responses related to the recommended item. In example (a), when the seeker asks for a romantic comedy and mentions "_How to Lose a Guy in 10 Days (2003)_", BARCOR recommends another romantic comedy, "_50 First Dates (2004)_". Besides, it also expresses an attitude toward the recommended item and makes the response more informative by saying that it "_is a good romantic comedy with Adam Sandler_." In example (b), BARCOR grasps the idea of great old movies and recommends "_Gone with the Wind (1939)_", an epic historical romance film. Conversely, KBRD simply recommends a well-known modern movie, which fails to meet the user demand. In example (c), when asked for a mystery movie like "_Wind River (2017)_", the human recommender and KGSF merely give recommendations without personal insight. However, BARCOR not only recommends another mystery movie, "_Gone Girl (2014)_", but also explains the motivation behind the recommendation by saying that "_If you're looking for something a little more graphic, it is one of my favorite movies_."
# First Detection of an Over-Massive Black Hole Galaxy: UHZ1 – Evidence for Heavy Black Hole Seeds From Direct Collapse?

Priyamvada Natarajan Department of Astronomy, Yale University, New Haven, CT 06511, USA Department of Physics, Yale University, New Haven, CT 06520, USA Black Hole Initiative, Harvard University, 20 Garden Street, Cambridge, MA 02138, USA Fabio Pacucci Center for Astrophysics $|$ Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA Black Hole Initiative, Harvard University, 20 Garden Street, Cambridge, MA 02138, USA Angelo Ricarte Center for Astrophysics $|$ Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA Black Hole Initiative, Harvard University, 20 Garden Street, Cambridge, MA 02138, USA Ákos Bogdán Center for Astrophysics $|$ Harvard & Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA Andy D. Goulding Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544, USA Nico Cappelluti Department of Physics, University of Miami, Coral Gables, FL 33124, USA

###### Abstract

The recent Chandra-JWST discovery of a quasar in the $z\sim 10.3$ galaxy UHZ1 reveals that accreting supermassive black holes (SMBHs) were already in place $\sim 450$ million years after the Big Bang (Bogdan et al., 2023). The Chandra X-ray source detected in UHZ1 is a Compton-thick quasar with a bolometric luminosity of $L_{\rm bol}\sim 5\times 10^{45}\ \rm{erg\ s^{-1}}$, which corresponds to an estimated BH mass of $\sim 4\times 10^{7}\ \rm{M_{\odot}}$ assuming accretion at the Eddington rate. JWST photometry yields a stellar mass estimate for UHZ1 comparable to the BH mass. These characteristics are in excellent agreement with prior theoretical predictions for a unique class of transient, high-redshift objects, Outsize Black Hole Galaxies (OBGs; Natarajan et al., 2017), which harbor a heavy initial black hole seed that likely formed from the direct collapse of gas. Based on the excellent agreement between the observed multi-wavelength properties of UHZ1 and the theoretical model template predictions, we suggest that UHZ1 is the first detected OBG candidate, subject to spectroscopic confirmation of its redshift. Our assertion rests on multiple lines of concordant evidence between model predictions and the following observed properties of UHZ1: (i) its X-ray detection and the estimated ratio of the X-ray flux to the IR flux, which is consistent with theoretical expectations for a heavy initial BH seed; (ii) its high inferred redshift of $z\sim 10.3$, as predicted for the transient OBG stage ($9<z<12$); (iii) the amplitude and shape of the detected JWST Spectral Energy Distribution (SED) between $1-5$ microns, which is in very good agreement with simulated template SEDs for OBGs; and (iv) the extended JWST morphology of UHZ1, which is suggestive of a recent merger, also a predicted property for the formation of transient OBGs. Therefore, as the first OBG candidate, UHZ1 provides compelling evidence for the formation of heavy initial seeds from direct collapse in the early Universe.

Keywords: Early universe (435) – Galaxy formation (595) – Supermassive black holes (1663) – X-ray active galactic nuclei (2035) – Theoretical models (2107)

Journal: ApJ Letters

## 1 Introduction

The origin of the first black holes in the Universe remains an open question in astrophysics. It has attracted significant theoretical attention in the past two decades as observational data to address it have been scarce.
The remnants of the first stars are expected to produce a population of light black hole (BH) seeds at early cosmic epochs. Whether this is the only channel that physics permits for the creation of initial BH seeds has been vigorously debated. In particular, accounting for actively growing SMBHs with masses of $\sim 10^{9}\,M_{\odot}$ in detected $z\gtrsim 6$ luminous quasars from light initial seeds has proven to be challenging given the time available for assembling their inferred high masses (Fan et al., 2006; Willott et al., 2007; Jiang et al., 2008; Mortlock et al., 2011; Natarajan, 2011; Pacucci & Loeb, 2022; Cappelluti et al., 2022). Multiple heavy initial BH seed formation scenarios have been proposed to account for the formation of the early high-redshift quasar populations (Natarajan, 2011; Volonteri, 2012). One particular class of models that addresses this timing challenge in the context of the standard cosmological framework, while coupling galaxy and black hole formation and evolution, is the formation of heavy initial black hole seeds from the direct collapse of pre-galactic gas disks in the early Universe (Bromm & Loeb, 2003; Begelman et al., 2006; Lodato & Natarajan, 2006, 2007). These heavy seeds that form as direct collapse black holes (DCBHs) are expected to have initial masses of $\sim{10^{4}{-}10^{5}}\,M_{\odot}$ (Lodato & Natarajan, 2007; Ferrara et al., 2014). Given that BHs are expected to grow via mergers and accretion over cosmic time (Haehnelt et al., 1998; Pacucci & Loeb, 2020), information about their initial seeding is expected to be erased. Therefore, directly accessing the highest redshift population offers the best prospects (Ricarte & Natarajan, 2018; Pacucci & Loeb, 2022) to constrain BH seeding models. As we show in this letter, the detection of an accreting BH in UHZ1 offers a unique opportunity to interrogate and constrain seeding scenarios and the very early growth history of BHs. The viable DCBH formation sites in the Cold Dark Matter-dominated cosmogony are pristine atomic cooling halos, satellites bound to the first star-forming galaxies. In these satellite subhalos, gas cooling and fragmentation, and hence star formation, are suppressed as the more efficient molecular hydrogen coolant gets rapidly dissociated due to irradiation by the Lyman-Werner photons from the parent star-forming halo (Agarwal et al., 2013). Meanwhile, since gas in the DCBH subhalo likely has non-zero angular momentum, a pre-galactic gas disk is expected to form and to go globally unstable, eventually leading to the formation of a central BH with mass ranging between $\sim{10^{4}-10^{5}}\,M_{\odot}$ (Lodato & Natarajan, 2006, 2007). While the final stages of the formation of the DCBH are not entirely understood, it is speculated that a massive “quasi-star” type object might form, embedded with a growing BH in its core (see for instance scenarios proposed by Begelman et al., 2008; Volonteri & Begelman, 2010; Sakurai et al., 2020). The satellite DCBH subhalo is then predicted to rapidly merge within $\sim$ 1-5 Myr with the parent star-forming halo to produce a new, transient class of high-redshift objects, Outsize Black hole Galaxies (OBGs) (Agarwal et al., 2013; Natarajan et al., 2017). Simulations of this scenario reveal that the luminosity and multi-wavelength SED of the resulting OBG are a combination of the accretion luminosity of the growing heavy DCBH seed and the luminosity of the stellar component in the merged system.
A defining characteristic of OBGs is that the mass of the growing heavy DCBH seed is expected to be comparable to or even, in some instances, in excess of the mass in stars, in stark contrast to what is found locally (Natarajan, 2011; Agarwal et al., 2013; Pacucci et al., 2017a; Visbal & Haiman, 2018), wherein SMBHs weigh $\sim 0.1\%$ of the mass of their host galaxy’s stellar component (Ferrarese & Merritt, 2000; Tremaine et al., 2002). Tracking the fate of heavy and light seeds in semi-analytic models that include the growth of BH populations by gas accretion and mergers over cosmic time, it has been demonstrated that the presence of SMBH populations with masses over $10^{6}\,M_{\odot}$ at epochs earlier than $z\gtrsim 9$ is to be expected (Ricarte & Natarajan, 2018). Interestingly, however, the presence of such objects depends on both the BH seeding mechanism and how rapidly they can accrete in this highly unconstrained epoch. This higher redshift window of $z\gtrsim 9$, offering a glimpse into the initial seeding epoch, has largely been empirically inaccessible until the recent deployment of JWST. Data flowing currently from the JWST is rapidly reshaping our understanding of early galaxy formation with the reported detection of large numbers of faint, distant galaxies at $z>9$, many of which may harbor central BHs. Early studies are hinting at a higher-than-expected abundance of galaxies in the early Universe (Castellano et al., 2022, 2023; Harikane et al., 2022; Naidu et al., 2022; Atek et al., 2023a; Adams et al., 2023; Leung et al., 2023). In addition, there have also been reports of the detection of high-redshift accreting BHs (Bogdan et al., 2023; Larson et al., 2023; Maiolino et al., 2023; Juodžbalis et al., 2023). JWST’s reach has also been greatly augmented by the exploitation of the magnification afforded by nature’s telescopes – cluster lenses – that bring into view even fainter and more distant background sources. Data from multiple JWST programs, looking through the Frontier Fields cluster lens Abell 2744 at $z=0.308$, has revealed an enhanced galaxy density at $z\gtrsim 10$ (Castellano et al., 2022, 2023; Atek et al., 2023b). In this field, accurate photometric redshifts have been determined for the $z\sim 9-15$ galaxies by fitting SEDs to the JWST NIRCam data. A further advantage of using Abell 2744 as nature’s magnifying glass is its extremely well-calibrated lensing mass model (see, for instance, the six independently derived lensing mass models for this Frontier Fields cluster that are publicly available). Utilizing the achromatic nature of gravitational lensing, Bogdan et al. (2023) innovatively deployed Chandra to also study the Abell 2744 field at X-ray wavelengths, to detect magnified faint background galaxies and their accreting central BHs. X-rays can penetrate the substantial columns of dust and gas expected at high redshifts and can, therefore, feasibly uncover accretion onto BHs at the earliest epochs. Looking for X-ray emission from high-redshift accreting BHs behind Abell 2744, in JWST-detected galaxies, Bogdan et al. (2023) report the $(4.2-4.4)\sigma$ detection of an X-ray emitting source in the $z\sim 10.3$ galaxy UHZ1. The JWST image of UHZ1 appears to be extended, with a photometric redshift determined using three independent codes that place it at $z\approx 10.3^{+0.6}_{-1.3}$ (Castellano et al., 2023), with no confounding lower redshift solution.
They report a best-fit column density of $N_{\rm H}\approx 8^{+\infty}_{-7}\times 10^{24}\,{\rm cm}^{-2}$ and a corresponding intrinsic $2-10$ keV luminosity of $L_{\rm X,int}\approx 9\times 10^{45}\ \rm{erg\ s^{-1}}$ after correcting for the $\mu=3.81$ lensing magnification factor at the location of UHZ1. Taken together, these suggest the presence of an obscured, likely Compton-thick, accreting BH in UHZ1. Details of the detection, uncertainties, and the estimate of its significance can be found in Bogdan et al. (2023). Given the properties of the unique simultaneous Chandra and JWST detection of UHZ1, we make the case that UHZ1 is the first detected OBG candidate. We demonstrate this by comparing the multi-wavelength UHZ1 data with the theoretical OBG model template predictions previously reported in Natarajan et al. (2017). We note that its X-ray detection is what uniquely sets UHZ1 apart from all the other recent JWST detections of high-redshift accreting BHs. The outline of this Letter is as follows: in Section 2, we briefly outline our current understanding of BH seeding models, followed by a summary of the formation and predicted properties of OBGs in Section 3. Collating the observed properties of UHZ1 in Section 4, we present the comparison of UHZ1 with multi-wavelength model predictions of growing early BH seeds in Section 5 that peg it as an OBG. We conclude by discussing the implications of the first detection of an OBG for a deeper understanding of BH seeding and the assembly history of the first black holes in Section 6.

Figure 1: Schematic diagram of the potential assembly history of the OBG candidate UHZ1. The direct collapse of primordial gas disks resulting in the production of heavy initial black hole seeds has been demonstrated to occur feasibly in the setting shown here: satellite DCBH halos that are bound to parent star-forming halos (see Figure 1 in Natarajan et al., 2017). Star formation in the satellite atomic cooling DCBH halo is expected to be suppressed due to the dissociation of molecular hydrogen by the Lyman-Werner radiation produced by the parent galaxy. As a consequence, the OBG is a merger remnant that contains an accreting BH with $M_{\rm bh}\geq M_{*}$.

## 2 BH seeding models and their feasibility

A range of theoretical seeding prescriptions operating as early as $z\sim 20-25$, classified broadly as “light” and “heavy” seeding models, have been proposed as starting points to account for the formation of the observed SMBHs. Light seeds are believed to be the remnants of the first generation of stars, the so-called Population III stars, that result in the production of initial BH seeds with $10-100\,M_{\odot}$ (Madau & Rees, 2001). However, the precise mass range for light seeds is highly uncertain due to insufficient knowledge of the initial mass function of the first stars and hence their remnants (Hirano & Bromm, 2017). There is considerable latitude in when BH seeding commences; at present, it is considered feasible over a wide redshift range, $z\sim 30-15$. Heavy seed models, on the other hand, propose the formation of $10^{4}\,-\,10^{5}\,M_{\odot}$ seeds in several possible ways. First, heavy seeds could result from the direct collapse of pre-galactic gas disks (Loeb & Rasio, 1994; Volonteri & Rees, 2005; Lodato & Natarajan, 2006; Begelman et al., 2006; Lodato & Natarajan, 2007), leading on to growth transiting through the OBG stage (Agarwal et al., 2013).
A second pathway involves rapid, amplified early growth of originally light seeds that may end up in conducive cosmic environments, such as gas-rich, dense nuclear star clusters (Alexander & Natarajan, 2014). Additionally, rapid mergers of light remnants in early nuclear star clusters, as proposed by Devecchi & Volonteri (2009), as well as the runaway collapse of nuclear star clusters, as proposed by Davies et al. (2011), could also lead to the formation of heavy seeds at high redshifts. In addition to these more conventional theoretical seeding models, primordial black holes (PBHs, Hawking 1971) that form in the infant Universe have also been explored as potential candidates to account for the origin of initial seeds for SMBHs in the very early Universe (see Cappelluti et al. 2022 and references therein). In this paper, we focus on the OBG stage that results from the formation of DCBH seeds. Regarding the feasibility of these theoretical seeding scenarios, light seeds are an inevitable consequence of stellar evolution, which is a well-understood and amply empirically tested theory in the late Universe. The heavy seed scenario at present relies heavily on theoretical models and the self-consistent tracking of optimal sites in cosmological simulations, which suggest that the requisite physical conditions for the DCBH seed formation and growth scenarios are available in the early Universe (Agarwal et al., 2013; Whalen et al., 2020). Currently, due to computational resolution and volume limitations, we cannot physically track the formation of heavy seeds (or, for that matter, even light seeds) and follow their assembly in cosmological simulations. Most simulations adopt ad-hoc seeding prescriptions that commonly involve the planting of $\sim 10^{6}\,M_{\odot}$ seeds in early halos. One recent exception is the ASTRID simulation (with a box size of 250 $h^{-1}$ Mpc per side), where the minimal planted BH seed mass is $\sim\,3\times 10^{4}\,M_{\odot}$, which spans the DCBH channel mass range (Ni et al., 2022). However, multiple recent works comparing simulation suites, including Habouzit et al. (2021) and Natarajan et al. (2021), report that the accretion and AGN feedback prescriptions adopted in most if not all current simulations do not reproduce the observed BH populations at $z\geq 3$.

## 3 DCBHs & Predicted Properties of OBGs

The growth history of light and heavy seeds at high redshifts has multiple phases that permit discriminating between them (Volonteri et al., 2007; Natarajan, 2011; Agarwal et al., 2013; Natarajan et al., 2017; Pacucci et al., 2017b). This implies not only a completely different assembly history but also a very different relationship between the properties of the stellar population and the central BH for heavy seeds early on, compared to what is seen in the nearby Universe at late times (Ferrarese & Merritt, 2000; Tremaine et al., 2002). The pathway and physical processes involved in forming OBGs are summarized here briefly; more details can be found in Natarajan et al. (2017). A set of specific cosmological conditions, available in the cold dark matter Universe, is required for the formation of heavy seeds (Lodato & Natarajan, 2007); their formation sites are preferentially atomic hydrogen-cooled satellite sub-halos proximate to the first generation of star-forming galaxies. From cosmological simulations, in these optimal DCBH formation sites, the first episode of star formation is delayed due to Lyman-Werner irradiation from the external parent galaxy.
In these subhalos, gas-rich proto-galactic disks are predicted to form that, instead of cooling and fragmenting, become Jeans unstable, leading to a rapid runaway inflow of gas to the center and resulting in the formation of a DCBH. In simulations, it is found that within 1 Myr or so, these satellite subhalos rapidly merge with the parent star-forming galaxy. The merger product, an OBG, would then harbor a growing central heavy BH seed procured from the satellite and the stellar population contributed by the parent galaxy. A schematic outline of the formation process of an OBG is shown in Figure 1. Post-merger, the stars and the BH would continue to grow self-consistently as the same gas reservoir that feeds the BH also forms stars. Given that $M_{*}\,\sim\,M_{\rm bh}$ for an OBG, this results in a strikingly different BH-to-host galaxy stellar mass ratio than observed in the local Universe, where the mass of the central BH is $\sim 0.1\%$ of the stellar mass. Heavy seeds and their host galaxies are expected to transition through this OBG stage at early times before feedback-regulated efficient stellar assembly takes over, eventually leading to the flipping of the mass ratio between $M_{*}$ and $M_{\rm bh}$ at later cosmic times. During the OBG stage, the total observed flux is computed in models as arising from the sum of accretion onto the heavy seed and the recently merged stellar population, which imprints distinct, detectable signatures in the SED shape – both slope and amplitude – in JWST bands and at X-ray wavelengths. The unique property of OBGs is their simultaneous IR and X-ray detection, as outlined in Table 1. Extending and expanding the tracking of early BH seed growth presented in Pacucci et al. (2015a) and Pacucci et al. (2016, 2017c) to include the OBG stage, Natarajan et al. (2017) constructed a library of template models by varying the following key parameters while tracking the evolution of the merged remnant: (i) the metallicity of the gas and stellar population; (ii) the accretion mode - the standard radiatively efficient thin disk mode (Eddington limited, as it is feedback regulated) and the radiatively inefficient slim disk (super-Eddington accretion is permitted, as the accretion is entirely gas-supply limited); and (iii) the age of the stellar population. The range of initial conditions for these cases, including the gas fraction, is adopted from cosmological simulations, wherein both DCBH formation and light seed formation sites are selected. Details are reported in Agarwal et al. (2013). The library of model templates for OBGs is generated by simultaneously following the evolution of growing BH seeds (heavy and light) and the accompanying stellar populations with a range of ages and metallicities. The early growth of the BH seed is computed using 1-dimensional radiation-hydrodynamical models (Pacucci & Ferrara, 2015; Pacucci et al., 2015a, 2016, 2017c). These models simulate spherical accretion onto the high-redshift seed BH, calculating the emitted luminosity self-consistently from the mass accretion rate. The post-processing spectral analysis is done using CLOUDY (Ferland et al., 2013). For our models, the gas fraction in viable DCBH formation sites is adopted from simulations, and the star formation history of the parent halo is also built up following cosmological hydrodynamical simulations (Agarwal et al., 2014) that include physically motivated, self-consistent prescriptions for star formation, metal pollution, and supernovae feedback.
The evolution of the accompanying stellar component is also tracked simultaneously and combined with that of the growing BH. Two limiting cases for growth by accretion are implemented to produce a library of models (Pacucci et al., 2015b): standard accretion, which adopts the standard $\alpha$-disk model that is geometrically thin and optically thick, and hence radiatively efficient, with accretion capped at the Eddington rate; and slim disk accretion, which is characterized by a geometrically thick disk that is radiatively inefficient, where radiation pressure is less efficient at quenching gas inflow due to radiation trapping, permitting super-Eddington accretion rates. The spectral shape of the output luminosity from accretion onto the BH is determined largely by the geometric properties of the accretion disk. Eddington-limited accretion is the hallmark of thin-disk accretion, during which the output luminosity $L_{\rm acc}\propto\dot{M}_{\rm acc}$, where $\dot{M}_{\rm acc}$ is the mass accretion rate. In this highly radiatively efficient regime, the luminosity is feedback limited. Meanwhile, as slim accretion disks result in super-Eddington accretion rates, in this instance the output $L_{\rm acc}\propto[\ln{\dot{M}_{\rm acc}}]$. In this radiatively inefficient regime, gas accretion is expected to be supply limited. These two distinct geometries result in dramatically different fluxes mapped out in our models. The combined contribution of fluxes from the stellar component and both light and heavy accreting seeds is computed in the generated synthetic models. The stellar population is modeled using two possibilities: a younger, lower metallicity population $(5\times 10^{-4}\,Z_{\odot})$ and an older, higher metallicity population $(5\times 10^{-2}\,Z_{\odot})$, both modeled with a Kroupa IMF. Distinct seeding signatures are seen in the emergent spectrum. The resulting properties for the full parameter space, comprising the two seeding scenarios, two accretion models, and two distinct assumptions for the metallicity of the stellar population, are presented in detail in Natarajan et al. (2017). Robust selection criteria for OBGs powered by initially heavy seeds were also derived, including a pre-selection to eliminate blue sources, followed by color-color cuts ($[{F}090W-{F}200W]>0$; $-0.3<[{F}200W-{F}444W]<0.3$) and the ratio of X-ray flux to rest-frame optical flux ($F_{\rm X}/F_{444W}\gg 1$). These cuts sift out OBGs from other bright, high- and low-redshift contaminants in the infrared. OBGs were predicted to have faint but detectable magnitudes of ${M_{\rm AB}<25}$ and to be unambiguously detectable by the NIRCam (Near-Infrared Camera) on JWST. Fainter growing light seed remnants with lower birth masses were found to have significantly fainter predicted AB magnitudes of ${M_{\rm AB}<31}$ by $z\sim 10$.

Figure 2: The OBG model match for UHZ1: the model spectrum overplotted here is obtained by growing an initially heavy seed of $\sim 10^{4}\,M_{\odot}$ to a final mass of $10^{7}\,M_{\odot}$, as estimated for UHZ1, and combining it with a young stellar population (age of 350 Myr) of low metallicity ($10^{-3}\,Z_{\odot}$) with a column density of $\sim 3\times 10^{24}\,{\rm cm}^{-2}$. The observed near-IR JWST SED points for UHZ1, taken from Castellano et al. (2023), are shown in blue. We note that the overplotted SED template from our library, generated with the parameters noted above, is similar to that observed for UHZ1.
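As a simple numerical illustration of the Eddington-limited growth regime used for these templates (and of the timing argument discussed in Section 5), the following hedged Python sketch evolves a seed mass assuming a constant Eddington ratio, a constant radiative efficiency of 0.1, and a continuous duty cycle. It is a back-of-the-envelope estimate, not the 1-dimensional radiation-hydrodynamical calculation used to build the model library.

```python
import numpy as np

# Eddington-limited exponential growth: M(t) = M_seed * exp[f_edd * (1 - eps)/eps * t / t_edd],
# with the Eddington timescale t_edd = sigma_T * c / (4 * pi * G * m_p) ~ 450 Myr.
T_EDD_MYR = 450.0

def mass_after(m_seed, t_myr, f_edd=1.0, eps=0.1):
    """Black hole mass (in the same units as m_seed) after t_myr of continuous accretion."""
    return m_seed * np.exp(f_edd * (1.0 - eps) / eps * t_myr / T_EDD_MYR)

# A 1e5 Msun heavy seed accreting at the Eddington rate for ~300 Myr reaches a few times 1e7 Msun,
# while a 100 Msun light seed at the same rate falls short by roughly three orders of magnitude.
for m_seed in (1e2, 1e5):
    print(f"seed {m_seed:.0e} Msun -> {mass_after(m_seed, 300.0):.1e} Msun after 300 Myr")
```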
## 4 Observed properties of UHZ1

We use the following observed properties of UHZ1 to find a template match from our library of early seed growth models first presented in Natarajan et al. (2017). As reported by Castellano et al. (2022, 2023), UHZ1, magnified by the foreground cluster Abell 2744, has an intrinsic brightness that renders it detectable at a magnitude of $M_{\rm AB}\sim 27$, with a nearly flat SED in the observed JWST bands spanning $1-5$ microns. Fitting the SED with a Salpeter IMF, Castellano et al. (2023) infer a stellar mass for UHZ1 of $\sim 4\times 10^{7}\,M_{\odot}$. An independent fit performed by Atek et al. (2023b) also reports a stellar mass of $\sim 7\times 10^{7}\,M_{\odot}$, making the two estimates broadly consistent with each other. The measured bolometric luminosity from Chandra is $L_{\rm X}\sim 5\times 10^{45}\ \rm{erg\ s^{-1}}$, yielding a BH mass of $\sim 4\times 10^{7}\ \rm{M_{\odot}}$ assuming accretion at the Eddington rate (Bogdan et al., 2023). In addition to these multi-wavelength data, the composite JWST image of UHZ1 appears extended. We note that in their SED fit Castellano et al. (2023) were unaware of the presence of an accreting BH. The X-ray detection of UHZ1, combined with the JWST flux measurements, motivates and justifies our exploration of an OBG model match to UHZ1. We also note the following uncertainties in interpreting the data for UHZ1. First, while the photometric redshift is robustly determined, spectroscopic confirmation is awaited. As the column density is weakly constrained, the mass estimate for the BH in UHZ1 from the X-ray data is also not tightly constrained. Meanwhile, the stellar mass estimate from SED fitting to the JWST data also represents a lower limit, as current templates do not take into account any underlying older stellar population that may be contained in sources like UHZ1.

Figure 3: Selection criteria for OBGs: the predicted location of OBGs in color-color space, with the location of UHZ1 marked with a black star.

## 5 Comparison of UHZ1 with OBG model templates

We note that the current JWST SED fitting for UHZ1 adopted by Castellano et al. (2023) assumes that the observed UV/optical emission derives solely from the stellar component, modeled with a Salpeter IMF. This fit was done before the detection of the accreting BH, which Bogdan et al. (2023) report to be heavily obscured. With the subsequent knowledge and derived properties of the SMBH hosted in UHZ1 from the X-ray data, we explore whether the rest-frame UV/optical SED and corresponding X-ray flux detected are compatible with our theoretical model templates of OBGs (Natarajan et al., 2017). We first explore the mass accretion history of the BH in UHZ1, starting from initially light and heavy seeds at $z>20$ to reach the final inferred BH mass of $\sim 10^{7}\,M_{\odot}$. An initially light seed with a birth mass of $10-100\,M_{\odot}$ would need to steadily accrete at well above the Eddington rate (over 2 $\times$ the Eddington rate) for roughly 300 million years, while a heavy DCBH seed with an initial birth mass of $10^{4-5}\,M_{\odot}$ could reach the final mass of the BH powering UHZ1 while accreting at just the Eddington rate throughout (a back-of-the-envelope check of these growth requirements is sketched below). Next, we look at template model matches from our library to the multi-wavelength SED of UHZ1 for the two possible seeding scenarios, the two assumed accretion models, and the possibilities for the age and metallicity of the stellar population that bracket the entire permitted parameter space.
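The growth requirements stated above follow from a simple e-folding estimate. The sketch below assumes a radiative efficiency of $\epsilon=0.1$ and the $\approx 300$ Myr window between seed formation at $z>20$ and $z=10.3$ quoted above; it is an order-of-magnitude illustration, not part of the template modeling itself.

```python
import numpy as np

# Eddington e-folding (Salpeter) timescale: t_efold = eps/(1-eps) * sigma_T c/(4 pi G m_p),
# which is ~50 Myr for an assumed, standard radiative efficiency eps = 0.1.
SIGMA_T = 6.652e-25          # Thomson cross section [cm^2]
C_LIGHT = 2.998e10           # speed of light [cm/s]
G_NEWT  = 6.674e-8           # gravitational constant [cgs]
M_PROT  = 1.673e-24          # proton mass [g]
SEC_PER_MYR = 3.156e13

def required_eddington_ratio(m_seed, m_final, dt_myr, eps=0.1):
    """Constant Eddington ratio needed to grow m_seed to m_final in dt_myr."""
    t_edd_myr = SIGMA_T * C_LIGHT / (4.0 * np.pi * G_NEWT * M_PROT) / SEC_PER_MYR  # ~450 Myr
    t_efold_myr = eps / (1.0 - eps) * t_edd_myr                                    # ~50 Myr
    return np.log(m_final / m_seed) * t_efold_myr / dt_myr

# final mass ~4e7 Msun (from the X-ray estimate quoted above), reached ~300 Myr after seeding
for m_seed in (10.0, 100.0, 1e4, 1e5):
    f_edd = required_eddington_ratio(m_seed, 4e7, 300.0)
    print(f"seed {m_seed:>9.0f} Msun -> 4e7 Msun requires f_Edd ~ {f_edd:.2f}")
# light seeds (10-100 Msun) require f_Edd ~ 2-2.5, i.e. sustained super-Eddington accretion,
# whereas a 1e5 Msun heavy seed reaches the final mass at f_Edd ~ 1.
```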
Our template library includes outputs for the time slice with the largest detectable X-ray flux. For a light initial seed accreting at super-Eddington rates, captured via our slim disk accretion model, we do not find a template match that simultaneously satisfies the JWST and Chandra data for UHZ1. The template that comes somewhat close is shown in Figure 2 (grey spectrum), where it is seen that the predicted JWST flux is approximately two orders of magnitude lower than observed, corresponding to a significantly fainter source, predicted to have $M_{\rm AB}\leq 31$, at the sensitivity limit of JWST. Additionally, UHZ1 would have been undetected in X-rays, in contradiction with what is seen. Therefore, the simultaneous X-ray and JWST detection of UHZ1, given the final BH mass of $\sim 10^{7}\,M_{\odot}$ that needs to be in place by $z=10.3$, strongly disfavors a light seed origin for UHZ1. Meanwhile, for a heavy seed origin model for UHZ1, we do find a template match from our library that simultaneously satisfies the JWST and Chandra data for both the high and low metallicity values assumed for the stellar population, over a range of permitted stellar ages, as there is a trade-off between these two attributes. For models where growth proceeds via standard Eddington-limited accretion with a column density of $N_{\rm H}\sim 3\times 10^{24}\,\mathrm{cm^{-2}}$: (i) the predicted hard X-ray flux is consistent with the measured value and compatible with the inferred column density, as shown in Figure 4; (ii) the flux ratio $F_{444}/F_{\rm X}\sim 1$, as expected for OBGs, as shown in Figure 3; and (iii) UHZ1 satisfies all the color-color selection criteria for an OBG, also seen in Figure 3. The predicted JWST SED shape and amplitude from this template, with an age of 350 Myr for the recently merged stellar population and a low metallicity of $\sim 10^{-3}\,Z_{\odot}$, match the data very well, as shown in Figure 2 (red spectrum). In contrast, we note that for templates with a heavy seed that subsequently grows via slim disk accretion, at potentially super-Eddington rates limited by the available gas supply, the OBG stage is too short-lived. While such a source could potentially be detectable with JWST for the higher metallicity case (for the stellar population in the host galaxy), due to the lowered X-ray flux from the extremely strong obscuration it would be undetected even with the deepest current X-ray exposures. Moreover, with an extremely short predicted lifetime in the OBG stage of $\sim$ 5-10 Myr, these sources would rapidly transition toward $M_{*}>M_{\rm bh}$. Once again, our Chandra X-ray detection of UHZ1 rules out this family of templates.

Figure 4: The template match for the multi-wavelength SED from our library of OBG models, on which we overplot the measured IR fluxes from JWST and the X-ray flux from Chandra for UHZ1, shows that they are consistent and well matched. The template that provides this optimal match has the following properties: a heavy initial seed; the OBG accreting at the Eddington limit; an age of 350 Myr for the recently merged stellar population with a low metallicity of $\sim 10^{-3}\,Z_{\odot}$; and a column density of $3\times 10^{24}\,\mathrm{cm^{-2}}$, shown in Figure 2 (in red).
| Metallicity of stellar component | Thin disk: Chandra | Thin disk: JWST | Slim disk: Chandra | Slim disk: JWST |
| --- | --- | --- | --- | --- |
| Low Z | ✓ | ✓ | X | ✓ |
| High Z | ✓ | ✓ | X | ✓ |

Table 1: Synopsis of multi-wavelength detectability for an initially heavy seed of $10^{4}M_{\odot}$ from our model template library of predicted SEDs given the current sensitivity limits of JWST and the deepest available current Chandra data for a source with the observed properties of UHZ1 at $z\sim 10$.

| Metallicity of stellar component | Thin disk: Chandra | Thin disk: JWST | Slim disk: Chandra | Slim disk: JWST |
| --- | --- | --- | --- | --- |
| Low Z | X | at the limit | X | X |
| High Z | X | at the limit | X | X |

Table 2: Synopsis of multi-wavelength detectability for an initially light seed from our model template library of predicted SEDs given the current sensitivity limits of JWST and the deepest available current Chandra data for a source with the observed properties of UHZ1 at $z\sim 10$.

## 6 Conclusions & Implications for early BH seeding

Despite the theoretical uncertainties in our current understanding of early BH seeding, the detection of even a single high-redshift X-ray quasar, UHZ1, at $z\approx 10.3$ has significant implications, as it provides new empirical information on the properties of initial BH seeds and on the coupling of accreting early BHs and their host galaxies. With this unique multi-wavelength observational dataset for UHZ1, we present compelling evidence that it represents the first detection of an OBG, the class of high-redshift galaxies that are seeded with heavy initial BHs, as predicted by Natarajan et al. (2017). As we show, all OBG selection criteria are satisfied by UHZ1 with an initial heavy seed mass ranging from $10^{4}-10^{5}\,M_{\odot}$, and its growth history is compatible with standard Eddington-limited accretion. The best model match from our OBG template library for UHZ1 is provided by a heavy initial seed with a low metallicity for the stellar component and an age of $\sim$ 300 Myr. There is a trade-off between the age and metallicity of the stellar population in OBGs. This degeneracy is well documented more generally for stellar populations at other cosmic epochs as well. We therefore claim that UHZ1 offers compelling empirical evidence for the existence of heavy seeds in the early Universe. Studies of the BH seeding epoch have thus far been restricted to theoretical explorations. Forming BH seeds ab initio in cosmological simulations and tracking their growth history is extremely challenging. Most simulation suites adopt simple seeding prescriptions that typically assign BH seed masses. Detecting the first OBG candidate, UHZ1, offers empirical guidance to inform our heavy seeding theoretical models. As JWST progressively brings the $z>10$ Universe into view, detecting individual extreme accreting BHs at the earliest epochs will soon bring more powerful insights. In this work, we are explicitly not making a case for extrapolation of the growth of UHZ1 down to $z=6-7$. We do not claim it is a likely progenitor of the luminous, optically detected SDSS quasars. We are cautious about this as simulations, the MASSIVEBLACK suite in particular, have shown that the most massive BH at $z\sim 10$ does not necessarily remain and grow to be the most massive BH by $z=6$ (Di Matteo et al., 2017, 2023).
We emphasize that the details of the environment play an important role in shaping the accretion and, therefore, the growth history of BHs. Neither our 1-D hydrodynamical BH growth tracking simulations, whose results informed our model templates, nor large cosmological boxes at present adequately capture gas flows near BHs. Conversely, in our current analysis, we are agnostic to the Eddington ratio distributions inferred at lower redshifts for observed X-ray AGN. We also refrain from making any number density estimates for OBGs based on the detection of a single source, as models and simulations (Wise et al., 2019; Regan et al., 2020; Whalen et al., 2020) indicate that DCBHs are rare and hence expected to account for only a small fraction of the most luminous quasars detected at $z\sim 6-7$. Estimates of the predicted abundance of light and heavy seeds are currently highly uncertain. Lack of information on the occupation fraction of BH seeds and an incomplete census of X-rays from early accretors due to obscured accretion preclude such calculations. However, a rough back-of-the-envelope estimate, given the currently observed JWST fields (the UNCOVER, GLASS, and CEERS surveys) and the detection of UHZ1 as the single OBG candidate, suggests agreement with the expected number densities at $z\sim 10$ derived from the number density of DCBH sites estimated from simulations in Natarajan et al. (2017), $\sim 10^{-6}-10^{-7}\ \mathrm{Mpc^{-3}}$. For accreting high-redshift BHs detected only by JWST so far, which are not necessarily OBGs and remain undetected in the deepest X-ray data in hand, one theoretical number density estimate has been attempted using semi-analytic models, which we caution are also unable to capture the properties of the environment. Trinca et al. (2023) discuss the expected number density of $z>10$ AGN in JWST fields with footprints and depths similar to those of the UNCOVER/GLASS surveys that probe the Abell 2744 field, and arrive at the following expectations: for a CEERS-like survey, $\sim$ 8 to 21 AGN at $7\leq z\leq 10$; for JADES-Deep, about 12 to 63 AGN at $7\leq z\leq 10$ and 5 to 32 AGN at $z\geq 10$. As we note in this work, only a small fraction of these high-redshift AGN will be OBGs and be detected in X-rays as well. Our simple growth models certainly have limits that translate directly into the range of possibilities that we have mapped out to create our model template library. We have made simplifying assumptions, as this is the best we can do at present: no current cosmological simulations can form BH seeds ab initio and track their growth self-consistently, taking the overall environment into account. As JWST detects more $z>9$ accreting BHs in the coming cycles, we plan to analyze those sources, investigate possible X-ray counterparts with Chandra, and develop a deeper understanding of OBGs and heavy seeding physics.

## Acknowledgments

PN acknowledges support from the Gordon and Betty Moore Foundation and the John Templeton Foundation that fund the Black Hole Initiative (BHI) at Harvard University, where she serves as one of the PIs. F.P. acknowledges support from a Clay Fellowship administered by the Smithsonian Astrophysical Observatory. F.P. and A.R. acknowledge support from the BHI. A.B. acknowledges support from the Smithsonian Institution and the Chandra Project through NASA contract NAS8-03060. A.D.G. acknowledges support from NSF/AAG grant 1007094.

## References

* Adams et al. (2023) Adams, N. J., Conselice, C. J., Ferreira, L., et al. 2023, MNRAS, 518, 4755 * Agarwal et al.
(2014) Agarwal, B., Dalla Vecchia, C., Johnson, J. L., Khochfar, S., & Paardekooper, J.-P. 2014, MNRAS, 443, 648 * Agarwal et al. (2013) Agarwal, B., Davis, A. J., Khochfar, S., Natarajan, P., & Dunlop, J. S. 2013, MNRAS, 432, 3438 * Alexander & Natarajan (2014) Alexander, T., & Natarajan, P. 2014, Science, 345, 1330 * Atek et al. (2023a) Atek, H., Shuntov, M., Furtak, L. J., et al. 2023a, MNRAS, 519, 1201 * Atek et al. (2023b) Atek, H., Chemerynska, I., Wang, B., et al. 2023b, arXiv e-prints, arXiv:2305.01793 * Begelman et al. (2008) Begelman, M. C., Rossi, E. M., & Armitage, P. J. 2008, MNRAS, 387, 1649 * Begelman et al. (2006) Begelman, M. C., Volonteri, M., & Rees, M. J. 2006, MNRAS, 370, 289 * Bogdan et al. (2023) Bogdan, A., Goulding, A., Natarajan, P., et al. 2023, arXiv e-prints, arXiv:2305.15458 * Bromm & Loeb (2003) Bromm, V., & Loeb, A. 2003, ApJ, 596, 34 * Cappelluti et al. (2022) Cappelluti, N., Hasinger, G., & Natarajan, P. 2022, ApJ, 926, 205 * Castellano et al. (2022) Castellano, M., Fontana, A., Treu, T., et al. 2022, ApJ, 938, L15 * Castellano et al. (2023) —. 2023, ApJ, 948, L14 * Davies et al. (2011) Davies, M. B., Miller, M. C., & Bellovary, J. M. 2011, ApJ, 740, L42 * Devecchi & Volonteri (2009) Devecchi, B., & Volonteri, M. 2009, ApJ, 694, 302 * Di Matteo et al. (2023) Di Matteo, T., Angles-Alcazar, D., & Shankar, F. 2023, arXiv e-prints, arXiv:2304.11541 * Di Matteo et al. (2017) Di Matteo, T., Croft, R. A. C., Feng, Y., Waters, D., & Wilkins, S. 2017, MNRAS, 467, 4243 * Fan et al. (2006) Fan, X., Strauss, M. A., Becker, R. H., et al. 2006, AJ, 132, 117 * Ferland et al. (2013) Ferland, G. J., Porter, R. L., van Hoof, P. A. M., et al. 2013, Rev. Mexicana Astron. Astrofis., 49, 137 * Ferrara et al. (2014) Ferrara, A., Salvadori, S., Yue, B., & Schleicher, D. 2014, MNRAS, 443, 2410 * Ferrarese & Merritt (2000) Ferrarese, L., & Merritt, D. 2000, ApJ, 539, L9 * Habouzit et al. (2021) Habouzit, M., Li, Y., Somerville, R. S., et al. 2021, MNRAS, 503, 1940 * Haehnelt et al. (1998) Haehnelt, M. G., Natarajan, P., & Rees, M. J. 1998, MNRAS, 300, 817 * Harikane et al. (2022) Harikane, Y., Ouchi, M., Oguri, M., et al. 2022, arXiv e-prints, arXiv:2208.01612 * Hawking (1971) Hawking, S. 1971, MNRAS, 152, 75 * Hirano & Bromm (2017) Hirano, S., & Bromm, V. 2017, MNRAS, 470, 898 * Jiang et al. (2008) Jiang, L., Fan, X., Annis, J., et al. 2008, AJ, 135, 1057 * Juodžbalis et al. (2023) Juodžbalis, I., Conselice, C. J., Singh, M., et al. 2023, arXiv e-prints, arXiv:2307.07535 * Larson et al. (2023) Larson, R. L., Finkelstein, S. L., Kocevski, D. D., et al. 2023, arXiv e-prints, arXiv:2303.08918 * Leung et al. (2023) Leung, G. C. K., Bagley, M. B., Finkelstein, S. L., et al. 2023, arXiv e-prints, arXiv:2306.06244 * Lodato & Natarajan (2006) Lodato, G., & Natarajan, P. 2006, MNRAS, 371, 1813 * Lodato & Natarajan (2007) —. 2007, MNRAS, 377, L64 * Loeb & Rasio (1994) Loeb, A., & Rasio, F. A. 1994, ApJ, 432, 52 * Madau & Rees (2001) Madau, P., & Rees, M. J. 2001, ApJ, 551, L27 * Maiolino et al. (2023) Maiolino, R., Scholtz, J., Witstok, J., et al. 2023, arXiv e-prints, arXiv:2305.12492 * Mortlock et al. (2011) Mortlock, D. J., Warren, S. J., Venemans, B. P., et al. 2011, Nature, 474, 616 * Naidu et al. (2022) Naidu, R. P., Oesch, P. A., van Dokkum, P., et al. 2022, ApJ, 940, L14 * Natarajan (2011) Natarajan, P. 2011, Bulletin of the Astronomical Society of India, 39, 145 * Natarajan et al. (2017) Natarajan, P., Pacucci, F., Ferrara, A., et al. 2017, ApJ, 838, 117 * Natarajan et al. 
(2021) Natarajan, P., Tang, K. S., McGibbon, R., et al. 2021, arXiv e-prints, arXiv:2103.13932 * Ni et al. (2022) Ni, Y., Di Matteo, T., Bird, S., et al. 2022, MNRAS, 513, 670 * Pacucci & Ferrara (2015) Pacucci, F., & Ferrara, A. 2015, MNRAS, 448, 104 * Pacucci et al. (2016) Pacucci, F., Ferrara, A., Grazian, A., et al. 2016, MNRAS, 459, 1432 * Pacucci et al. (2015a) Pacucci, F., Ferrara, A., Volonteri, M., & Dubus, G. 2015a, MNRAS, 454, 3771 * Pacucci & Loeb (2020) Pacucci, F., & Loeb, A. 2020, ApJ, 895, 95 * Pacucci & Loeb (2022) —. 2022, MNRAS, 509, 1885 * Pacucci et al. (2017a) Pacucci, F., Natarajan, P., & Ferrara, A. 2017a, ApJ, 835, L36 * Pacucci et al. (2017b) Pacucci, F., Natarajan, P., Volonteri, M., Cappelluti, N., & Urry, C. M. 2017b, ApJ, 850, L42 * Pacucci et al. (2017c) Pacucci, F., Pallottini, A., Ferrara, A., & Gallerani, S. 2017c, MNRAS, 468, L77 * Pacucci et al. (2015b) Pacucci, F., Volonteri, M., & Ferrara, A. 2015b, MNRAS, 452, 1922 * Regan et al. (2020) Regan, J. A., Wise, J. H., Woods, T. E., et al. 2020, The Open Journal of Astrophysics, 3, 15 * Ricarte & Natarajan (2018) Ricarte, A., & Natarajan, P. 2018, MNRAS, 481, 3278 * Sakurai et al. (2020) Sakurai, Y., Haiman, Z., & Inayoshi, K. 2020, MNRAS, 499, 5960 * Tremaine et al. (2002) Tremaine, S., Gebhardt, K., Bender, R., et al. 2002, ApJ, 574, 740 * Trinca et al. (2023) Trinca, A., Schneider, R., Maiolino, R., et al. 2023, MNRAS, 519, 4753 * Visbal & Haiman (2018) Visbal, E., & Haiman, Z. 2018, ApJ, 865, L9 * Volonteri (2012) Volonteri, M. 2012, Science, 337, 544 * Volonteri & Begelman (2010) Volonteri, M., & Begelman, M. C. 2010, MNRAS, 409, 1022 * Volonteri et al. (2007) Volonteri, M., Lodato, G., & Natarajan, P. 2007, Monthly Notices of the Royal Astronomical Society, 383, 1079 * Volonteri & Rees (2005) Volonteri, M., & Rees, M. J. 2005, ApJ, 633, 624 * Whalen et al. (2020) Whalen, D. J., Surace, M., Bernhardt, C., et al. 2020, ApJ, 897, L16 * Willott et al. (2007) Willott, C. J., Delorme, P., Omont, A., et al. 2007, AJ, 134, 2435 * Wise et al. (2019) Wise, J. H., Regan, J. A., O’Shea, B. W., et al. 2019, Nature, 566, 85
# Measurement of the $\Lambda$ hyperon lifetime

ALICE Collaboration (see Appendix A for the list of collaboration members)

A new, more precise measurement of the $\Lambda$ hyperon lifetime is performed using a large data sample of Pb–Pb collisions at $\sqrt{s_{\rm NN}}$ $=$ 5.02 TeV with ALICE. The $\Lambda$ and $\overline{\Lambda}$ hyperons are reconstructed at midrapidity using their two-body weak decay channels $\Lambda\rightarrow\mathrm{p}+\pi^{-}$ and $\overline{\Lambda}\rightarrow\overline{\mathrm{p}}+\pi^{+}$. The measured value of the $\Lambda$ lifetime is $\tau_{\Lambda}=[261.07\pm 0.37\ (\rm stat.)\pm 0.72\ (\rm syst.)]\ \rm ps$. The relative difference between the lifetime of $\Lambda$ and $\overline{\Lambda}$, which represents an important test of CPT invariance in the strangeness sector, is also measured. The obtained value $(\tau_{\Lambda}-\tau_{\overline{\Lambda}})/\tau_{\Lambda}=0.0013\pm 0.0028\ (\mathrm{stat.})\pm 0.0021\ (\mathrm{syst.})$ is consistent with zero within the uncertainties. Both measurements of the $\Lambda$ hyperon lifetime and of the relative difference between $\tau_{\Lambda}$ and $\tau_{\overline{\Lambda}}$ are in agreement with the corresponding world averages of the Particle Data Group and about a factor of three more precise.

## 1 Introduction

The $\Lambda$ is the lightest hyperon, with strangeness $S=-1$, isospin $I=0$, and quark content $\mathrm{uds}$. Its lifetime has been measured in past experiments starting from 1963 using its weak decay channels $\Lambda\rightarrow\mathrm{p}+\pi^{-}$ and $\overline{\Lambda}\rightarrow\overline{\mathrm{p}}+\pi^{+}$. The world average reported in the Review of Particle Physics of the Particle Data Group (PDG) [1] is $\tau_{\Lambda}=263.2\pm 2.0$ ps. This is the result of averaging the measurements performed in 1973 by Poulard et al. [2] and in 1975 by Clayton et al. [3], using $\Lambda$ produced in interactions of low-energy charged kaon beams with a fixed target, and the measurement of Zech et al. [4] in 1977 using a neutral hyperon beam. These results are based on data samples containing a maximum of fifty-three thousand events. The relative difference between the lifetimes of $\Lambda$ and $\overline{\Lambda}$ reported in the PDG is $(\tau_{\Lambda}-\tau_{\overline{\Lambda}})/\tau_{\Lambda}=-0.001\pm 0.009$, resulting from the average of two measurements, one performed in 1967 by Badier et al. [5] and another one in 1996 by Barnes et al. [6], using $\Lambda$ and $\overline{\Lambda}$ produced in low-energy $\mathrm{p}+\mathrm{\overline{p}}\rightarrow\Lambda+\overline{\Lambda}$ reactions. The excellent tracking and particle-identification capabilities of ALICE over a broad momentum range and the large amount of data collected during Run 2 of the LHC are exploited to improve the current precision on the measurement of the $\Lambda$ lifetime and on the relative difference between the lifetimes of $\Lambda$ and $\overline{\Lambda}$. The latter provides a fundamental test of CPT invariance in the strangeness sector. This measurement is also a fundamental reference for the studies of the properties of hypernuclear states created in heavy-ion collisions and for future precision studies of other hyperon properties. The analysis presented here is performed in Pb–Pb collisions at a center-of-mass energy per nucleon pair of $\sqrt{s_{\mathrm{NN}}}$ $=$ 5.02 TeV, using the same data sample employed for the measurements of the (anti)hypertriton lifetime and $\Lambda$ separation energy [7].
The latter measurements are fundamental to infer the internal structure of this hypernucleus as well as the properties of the hyperon–nucleon interaction in the low-density limit, as described in [8].

## 2 Experimental apparatus

ALICE is one of the four large experiments at the LHC and it is dedicated to the study of heavy-ion collisions at ultrarelativistic energies. A detailed description of the ALICE apparatus and its performance can be found in Refs. [9] and [10]. In the following, only the subdetector systems used for the analysis presented in this paper are described. Trajectories of charged particles are reconstructed in the ALICE central barrel with the Inner Tracking System (ITS) [11] and the Time Projection Chamber (TPC) [12]. These are located within a large solenoidal magnet, providing a highly homogeneous magnetic field of 0.5 T parallel to the beam axis. The ITS consists of six cylindrical layers of silicon detectors, concentric and coaxial to the beam pipe, with a total pseudorapidity coverage $|\eta|<0.9$ with respect to the nominal interaction point. Three different technologies are used for this detector: the two innermost layers consist of silicon pixel detectors (SPD), the two central layers of silicon drift detectors (SDD), and the two outermost layers of double-sided silicon strip detectors (SSD). This detector is used in the determination of primary and secondary vertices, and in the track reconstruction. The TPC is the largest detector in the ALICE central barrel, with a pseudorapidity coverage $|\eta|<0.9$. It is used for charged-particle track reconstruction, momentum measurement, and particle identification (PID) via the measurement of the specific energy loss ($\mathrm{d}E/\mathrm{d}x$) of particles in the TPC gas. This detector provides up to 159 spatial points per track for charged-particle reconstruction. The resolution in the measurement of the distance-of-closest approach of primary tracks to the primary collision vertex, projected on the transverse plane, ranges from about 200 $\mu$m at 0.2 GeV$/\textit{c}$ to about 10 $\mu$m at 10 GeV$/\textit{c}$ [10]. The transverse-momentum ($p_{\mathrm{T}}$) resolution ranges from about 1$\%$ at 1 GeV$/\textit{c}$ to about 10$\%$ at 50 GeV$/\textit{c}$ in Pb–Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ $=$ 5.02 TeV [13]. The $\textrm{d}E/\textrm{d}x$ resolution depends on the event multiplicity and is about 5–6.5$\%$ for minimum-ionizing particles crossing the full volume of the TPC [10]. The PID is complemented by the Time-Of-Flight (TOF) system [14]. This detector is made of Multi-gap Resistive Plate Chambers (MRPC) and is located at a radial distance of 3.7 m from the nominal interaction point. The TOF detector measures the arrival time of particles relative to the event collision time provided by the TOF detector itself or by the T0 detectors, two arrays of Cherenkov counters located at forward and backward rapidities [15]. The TOF detector is used in this analysis for pile-up rejection, mostly from out-of-bunch collisions, by requiring that at least one of the $\Lambda$ ($\overline{\Lambda}$) charged decay-daughter tracks has an associated hit in the TOF detector. Collision events are triggered by two plastic scintillator arrays, V0A and V0C [16], located on both sides of the interaction point, covering the pseudorapidity regions 2.8 $<\eta<$ 5.1 and $-3.7<\eta<-1.7$, respectively. Each array consists of four concentric rings, each ring comprising eight cells with the same azimuthal coverage.
The V0A and V0C scintillators are used to determine the collision centrality from the measured signals produced by charged particles [17, 18]. The centrality is defined in terms of percentiles of the total hadronic cross section.

## 3 Data analysis

### 3.1 Event selection

The data used for this analysis were collected in 2018 during the LHC Pb–Pb run at $\sqrt{s_{\mathrm{NN}}}$ $=$ 5.02 TeV. A minimum bias (MB) event trigger and two centrality triggers were used. The MB trigger, fully efficient in the centrality interval 0–90$\%$, requires coincident signals in the V0 detectors, synchronous with the bunch crossing time defined by the LHC clock. The two centrality triggers, fully efficient in the centrality classes 0–10$\%$ and 30–50$\%$, are based on the signal amplitude measured by the V0 scintillators, which is proportional to the charged-particle multiplicity of the event. The analysis is performed in four centrality classes: 0–10$\%$, 10–30$\%$, 30–50$\%$, and 50–90$\%$. The events in the centrality classes 0–10$\%$ and 30–50$\%$ are selected using both the MB and centrality triggers, while the MB event trigger alone is used for the other centrality classes. In order to keep the conditions of the detectors as uniform as possible and reject background collisions, the coordinate of the primary vertex along the beam axis is required to be within 10 cm from the nominal interaction point. Events with multiple vertices identified with the SPD are tagged as pile-up and removed from the analysis [10]. In addition, events with pile-up occurring during the drift time of the TPC are rejected based on the correlation between the number of SDD and SSD clusters and the total number of clusters in the TPC, as described in Ref. [19]. To further suppress the pile-up contribution, mostly from out-of-bunch collisions, the $\Lambda$ daughter tracks are required to have an associated hit in the TOF detector. This requirement is applied only for the centrality classes 30–50$\%$ and 50–90$\%$. For the most central events, the matching of daughter tracks with a TOF hit does not have a significant impact on the fraction of $\Lambda$ ($\overline{\Lambda}$) from events with out-of-bunch pile-up, which is found to be between 0.02$\%$ and 0.07$\%$. The total number of events selected for each centrality class is reported in Table 1.

Table 1: Number of events used for the analysis.

| Centrality | Number of events ($\times 10^{6}$) |
| --- | --- |
| 0–10$\%$ | 70.97 |
| 10–30$\%$ | 18.43 |
| 30–50$\%$ | 61.43 |
| 50–90$\%$ | 36.86 |

### 3.2 Selection of $\Lambda$ candidates

The two-body decay channels $\Lambda\rightarrow\mathrm{p}+\pi^{-}$ and $\overline{\Lambda}\rightarrow\overline{\mathrm{p}}+\pi^{+}$ are used in this measurement. These have a branching ratio BR of $(63.9\pm 0.5)\%$ [1]. The $\Lambda$ ($\overline{\Lambda}$) candidates are reconstructed using the standard ALICE weak decay finder. This algorithm searches for weak decay topologies, called $\mathrm{V}^{0}$, by reconstructing oppositely-charged particle tracks originating from a displaced vertex as described in Refs. [20, 21]. In the case of a decay vertex located inside the ITS volume, at least one hit in any of the ITS layers is used in the reconstruction of the charged tracks originating from the $\mathrm{V}^{0}$ decay.
The reconstructed tracks, selected in the pseudorapidity region $|\eta|<0.8$, are required to fulfil a set of quality criteria such as having a number of TPC crossed rows larger than 80, a number of TPC clusters used for the $\textrm{d}E/\textrm{d}x$ calculation larger than 60 to ensure a good $\textrm{d}E/\textrm{d}x$ resolution, a ratio of TPC crossed rows to findable clusters larger than 70$\%$, and a good track fit $\chi^{2}/N^{\rm TPC}_{\rm cls}<2.5$, where $N^{\rm TPC}_{\rm cls}$ is the number of TPC clusters. To reduce the combinatorial background, a set of topological selections is applied, i.e. the distance of closest approach (DCA) between the $\mathrm{V}^{0}$ daughter tracks is required to be less than 1 cm, the DCA between the $\mathrm{V}^{0}$ and the primary collision vertex less than 0.5 cm, the radial distance between primary and secondary vertices larger than 3 cm, and $\mathrm{cos}(\theta_{\rm p})>0.995$, where $\theta_{\rm p}$ is the angle between the vector connecting the primary and secondary vertices and the total $\mathrm{V}^{0}$ momentum ($\vec{p}_{\mathrm{V}^{0}}=\vec{p}_{\rm p}+\vec{p}_{\pi}$). The selection criteria applied for this measurement are similar to those already used in previous measurements [21, 22, 23]. The particle identification is based on the energy loss per unit of track length measured by the TPC. Protons and pions are identified by requiring that their measured $\textrm{d}E/\textrm{d}x$ is within 3$\sigma_{\mathrm{d}E/\mathrm{d}x}$ from the expected average calculated using the Bethe–Bloch parametrization, where $\sigma_{\mathrm{d}E/\mathrm{d}x}$ is the $\textrm{d}E/\textrm{d}x$ resolution. Pion and proton candidates are selected in the transverse-momentum intervals $0.2<p^{\pi}_{\rm T}<2\ \mathrm{GeV}/c$ and $0.2<p^{\rm p}_{\rm T}<10\ \mathrm{GeV}/c$, respectively.

### 3.3 Signal extraction

The $\Lambda$ and $\overline{\Lambda}$ lifetimes are extracted from a fit to their proper decay length distributions using the exponential function $\mathrm{exp}\left(-L_{\rm proper}/\langle L_{\rm proper}\rangle\right)$. The proper decay length is calculated for every $\Lambda$ ($\overline{\Lambda}$) as

$L_{\rm proper}=L_{\rm lab}/(\beta\gamma)=M_{\Lambda}\frac{L_{\rm lab}}{p},$ (1)

where $L_{\rm lab}$ is the decay length measured in the laboratory system as the distance between primary and secondary vertices, $M_{\Lambda}$ is the $\Lambda$ mass taken from the PDG ($M_{\Lambda}=1115.683\ \mathrm{MeV}/c^{2}$) [1], and $p$ is the total momentum of the $\Lambda$ ($\overline{\Lambda}$) measured at the decay point. The number of signal counts in each $L_{\rm proper}$ interval is obtained using the following procedure, which is illustrated in Fig. 1:

1. Location of the peak region: the invariant mass of the decay daughters is calculated and the region around the maximum of the invariant-mass distribution is fitted using a Gaussian function. The peak region is defined as $[M_{0}-8\sigma,M_{0}+10\sigma]$, where $M_{0}$ is the mean and $\sigma$ is the standard deviation of the Gaussian fit. The choice of such a wide signal region is motivated by the fact that the peak has two long tails and is slightly asymmetric, especially for low values of $L_{\rm proper}$. This asymmetry is an effect of residual imperfections in tracking and energy loss corrections, which affect candidates with invariant masses above and below the expected mass differently.
2. Fit of the background: the background in the sidebands of the peak is fitted using a continuous function to extrapolate the expected background inside the peak region. Since the background shape changes with $L_{\rm proper}$, a third-order polynomial function is used to fit the background at low $L_{\rm proper}$, while the sum of a power-law and a linear function is used at high $L_{\rm proper}$ values. These functions have the minimum number of parameters that guarantees a data-to-fit ratio consistent with unity within statistical uncertainties in the sidebands.
3. Signal extraction: the signal is extracted in each $L_{\rm proper}$ interval by subtracting the estimated background from the invariant mass distribution and counting the entries inside the peak region.

Figure 1: Invariant mass spectra of p$\pi$ pairs measured in central collisions (0–10$\%$) at low (left) and large (right) $L_{\rm proper}$. The green area indicates the peak region.

The total number of $\Lambda$ and $\overline{\Lambda}$ raw counts within the peak region is reported in Table 2 for each centrality interval.

Table 2: Raw counts of $\Lambda$ and $\overline{\Lambda}$ for different centralities.

| Centrality | $\Lambda$ ($\times 10^{6}$) | $\overline{\Lambda}$ ($\times 10^{6}$) |
| --- | --- | --- |
| 0–10$\%$ | 312.6 | 296.4 |
| 10–30$\%$ | 49.7 | 47.2 |
| 30–50$\%$ | 41.2 | 35.9 |
| 50–90$\%$ | 4.2 | 3.6 |

### 3.4 Efficiency and secondary $\Lambda$ corrections

The raw $L_{\rm proper}$ spectrum of $\Lambda$ ($\overline{\Lambda}$) is corrected for the reconstruction efficiency, the feed-down contribution from higher-mass baryons, and secondary $\Lambda$ ($\overline{\Lambda}$) originating from interactions of particles with the detector material as

$\left[\frac{\mathrm{d}N_{\Lambda}}{\mathrm{d}L_{\rm proper}}\right]_{\rm corr}=\frac{1}{\epsilon(L_{\rm proper})}\times f_{\rm prim}(L_{\rm proper})\times\left[\frac{\mathrm{d}N_{\Lambda}}{\mathrm{d}L_{\rm proper}}\right]_{\rm raw},$ (2)

where $\epsilon(L_{\rm proper})$ is the efficiency of primary $\Lambda$ ($\overline{\Lambda}$) and $f_{\rm prim}(L_{\rm proper})$ is the fraction of primary $\Lambda$ ($\overline{\Lambda}$). The dominant feed-down contributions are given by the weak decays of $\Xi^{\pm}$, $\Xi^{0}$, and $\Omega^{\pm}$ [1]:

* $\Xi^{0}(\overline{\Xi}^{0})\rightarrow\Lambda(\overline{\Lambda})+\pi^{0}$, BR $=$ $(99.524\pm 0.012)\%$,
* $\Xi^{\pm}\rightarrow\Lambda+\pi^{\pm}$, BR $=$ $(99.887\pm 0.035)\%$,
* $\Omega^{\pm}\rightarrow\Lambda+K^{\pm}$, BR $=$ $(67.8\pm 0.7)\%$,
* $\Omega^{\pm}\rightarrow\Xi^{0}+\pi^{\pm}\ \mathrm{and}\ \Omega^{\pm}\rightarrow\Xi^{\pm}+\pi^{0}$, BR $=$ $(32.2\pm 0.8)\%$.

These corrections are calculated using Monte Carlo (MC) simulations. Collision events between lead ions are simulated using the HIJING event generator [24] and the passage of particles through the experimental apparatus is simulated using GEANT3 [25] as the transport code. Considering that the $p_{\rm T}$ distributions of particles and their relative abundances in MC simulations are different from data, centrality and $p_{\rm T}$-dependent corrections are applied in MC simulations using weights. These are defined, for different centrality classes, as the ratio of the $p_{\rm T}$ spectrum measured by ALICE and the $p_{\rm T}$ spectrum generated by HIJING.
The ALICE measurements of the $p_{\rm T}$ spectra of $\Lambda$, $\Xi^{\pm}$, and $\Omega^{\pm}$ [21, 26] in Pb–Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ $=$ 2.76 TeV, scaled by the ratio of the proton $p_{\rm T}$ spectra measured at $\sqrt{s_{\mathrm{NN}}}$ $=$ 5.02 TeV [27] and $\sqrt{s_{\mathrm{NN}}}$ $=$ 2.76 TeV [28], are used to calculate the weights for different centralities. Based on isospin symmetry, the $p_{\rm T}$ spectra of $\Xi^{0}$ are assumed to be identical to those of $\Xi^{\pm}$. Centrality dependent factors, given by the ratios $(\Xi/\Lambda)_{\rm data}/(\Xi/\Lambda)_{\rm MC}$ and $(\Omega/\Lambda)_{\rm data}/(\Omega/\Lambda)_{\rm MC}$, are also included to reproduce the centrality dependence of the particle ratios observed in data. The efficiency is calculated as the ratio between reconstructed and generated primary $\Lambda$ in the simulation $\epsilon(L_{\rm proper})=\frac{\left[~\frac{\mathrm{d}N_{\Lambda}}{\mathrm{d}L_{\rm proper}}\right]_{\rm rec}}{\left[~\frac{\mathrm{d}N_{\Lambda}}{\mathrm{d}L_{\rm proper}}\right]_{\rm gen}}.$ (3) The efficiencies of $\Lambda$ and $\overline{\Lambda}$ for central Pb–Pb collisions (0–10$\%$) as a function of $L_{\rm proper}$ are shown in Fig. 2 (left). The correction for secondary $\Lambda$ ($\overline{\Lambda}$) from material and weak decays is applied by scaling the raw $L_{\rm proper}$ spectrum by the fraction of primary $\Lambda$ ($\overline{\Lambda}$) given by $f_{\rm prim}(L_{\rm proper})=1-\frac{\mathrm{d}N_{\Lambda_{\rm sec}}/\mathrm{d}L_{\rm proper}}{\mathrm{d}N_{\Lambda_{\rm all}}/\mathrm{d}L_{\rm proper}}.$ (4) The fraction of secondary $\Lambda$ and $\overline{\Lambda}$ for central Pb–Pb (0–10$\%$) is shown as a function of $L_{\rm proper}$ in the right panel of Fig. 2. The individual contributions from material and weak decays are shown in addition to the total fraction for illustration. The observed trend of the fraction of secondary $\Lambda$ ($\overline{\Lambda}$) from weak decays with $L_{\rm proper}$ is due to an interplay between the efficiency and decay time of the $\Lambda$ ($\overline{\Lambda}$) mother particle. For secondary $\Lambda$ ($\overline{\Lambda}$) from material, it is due to a combined effect of efficiency and the radial distance at which the secondary $\Lambda$ ($\overline{\Lambda}$) is produced. Figure 2: Reconstruction efficiency of primary $\Lambda$ ($\overline{\Lambda}$) (left) and fraction of secondary $\Lambda$ ($\overline{\Lambda}$) (right) in central Pb–Pb collisions (0–10$\%$). ## 4 Systematic uncertainties The dominant sources of systematic uncertainties on the $\Lambda$ and $\overline{\Lambda}$ lifetime measurements are related to the track and $\mathrm{V}^{0}$ selections, signal extraction, efficiency and feed-down corrections. These are summarized in Table 3. The methods used to estimate the systematic uncertainties from these sources are illustrated in the following. In addition, effects of the material budget uncertainty, the uncertainty on the hadronic interaction of $\Lambda$, $\overline{\Lambda}$, and their decay daughters, and potential effects of residual pile-up —which are found to be negligible —are also discussed. Table 3: Summary of the systematic uncertainties on the $\Lambda$ and $\overline{\Lambda}$ lifetime measurements. All values are in ps. 
| Source | $\Lambda$ | $\overline{\Lambda}$ | $\Lambda$ + $\overline{\Lambda}$ |
| --- | --- | --- | --- |
| Track and $\mathrm{V}^{0}$ selections | 0.55 | 0.69 | 0.65 |
| Signal extraction | 0.03 | 0.03 | 0.02 |
| Efficiency and feed-down corrections | 0.30 | 0.33 | 0.30 |
| Total | 0.63 | 0.77 | 0.72 |

### 4.1 Systematic uncertainty from track and $\mathrm{V}^{0}$ selection

The systematic uncertainty due to the track and $\mathrm{V}^{0}$ selection criteria is estimated by repeating the full analysis chain using one hundred different analysis settings, where the single-track, topological, and particle-identification selection criteria are varied such that they produce a maximum variation of $\pm 10\%$ in the raw signal yield, similarly to the approach used in Refs. [21, 22, 23]. The systematic uncertainty from the track and $\mathrm{V}^{0}$ selection is calculated by fitting the distribution of lifetime values obtained from the different selection criteria using a Gaussian function and taking the $\sigma$ of the Gaussian fit as the uncertainty. The obtained uncertainties are 0.55 ps for $\tau_{\Lambda}$, 0.69 ps for $\tau_{\overline{\Lambda}}$, and 0.65 ps for $\tau_{\Lambda+\overline{\Lambda}}$. This contribution is the dominant source of systematic uncertainty.

### 4.2 Signal extraction uncertainty

The systematic uncertainty from the signal extraction includes two contributions: the choice of the background fit range and the integration range used for the raw yield extraction. For both contributions, one hundred different intervals are randomly generated, with a uniform probability distribution between two extremes, and the signal extraction procedure is repeated for each of these intervals. The limits used are specified in Table 4.

Table 4: Invariant-mass intervals used for the signal extraction systematic uncertainty.

| | Left extreme | Right extreme |
| --- | --- | --- |
| Background fit range | $[M_{0}-14\sigma,M_{0}-11\sigma]$ | $[M_{0}+20\sigma,M_{0}+32\sigma]$ |
| Signal integration range | $[M_{0}-13\sigma,M_{0}-7\sigma]$ | $[M_{0}+5\sigma,M_{0}+11\sigma]$ |

The standard deviation of the distribution of raw yields in each $L_{\rm proper}$ interval is taken as the systematic uncertainty. These two contributions are independent and added in quadrature. The stability of the results was tested against different choices of the fit functions used to model the background. In particular, linear and second-order polynomials were also tried at low $L_{\rm proper}$, while an exponential (power-law) for the left and second-order polynomial (exponential) for the right side of the background were also used at high $L_{\rm proper}$. The use of alternative fit functions to model the background resulted in negligible changes in the extracted yield. For this reason, this contribution to the signal extraction uncertainty is considered negligible. The uncertainty on the lifetime is calculated by replacing the statistical uncertainties on the corrected $L_{\rm proper}$ spectrum with the signal extraction uncertainties, which are bin-by-bin uncorrelated, and taking the uncertainty from the exponential fit as the systematic uncertainty. This contribution is found to be $(\Delta\tau)^{\rm syst}_{\rm signal}=0.03$ ps for both $\tau_{\Lambda}$ and $\tau_{\overline{\Lambda}}$, and 0.02 ps for $\tau_{\Lambda+\overline{\Lambda}}$.
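The signal-extraction variation can be illustrated with a small toy study that varies only the signal integration window between the Table 4 extremes; a flat background estimated from fixed sidebands stands in for the polynomial and power-law fits of the real analysis, and all numbers below are synthetic rather than ALICE data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the p-pi invariant-mass distribution in one L_proper interval:
# a Gaussian peak at M0 with width SIGMA on top of a flat background (synthetic values).
M0, SIGMA = 1.1157, 0.0015   # GeV/c^2, roughly Lambda-like, for illustration only
mass = np.concatenate([
    rng.normal(M0, SIGMA, 20_000),                          # signal candidates
    rng.uniform(M0 - 20 * SIGMA, M0 + 40 * SIGMA, 30_000),  # combinatorial background
])

def raw_yield(lo_nsig, hi_nsig):
    """Sideband-subtracted count in [M0 - lo_nsig*SIGMA, M0 + hi_nsig*SIGMA]."""
    lo, hi = M0 - lo_nsig * SIGMA, M0 + hi_nsig * SIGMA
    in_peak = np.count_nonzero((mass > lo) & (mass < hi))
    # flat-background density per GeV/c^2, estimated from sidebands far from the peak
    sb = ((mass > M0 - 20 * SIGMA) & (mass < M0 - 14 * SIGMA)) | \
         ((mass > M0 + 20 * SIGMA) & (mass < M0 + 32 * SIGMA))
    bkg_density = np.count_nonzero(sb) / (18 * SIGMA)
    return in_peak - bkg_density * (hi - lo)

# vary the signal integration window uniformly between the Table 4 extremes
yields = [raw_yield(rng.uniform(7, 13), rng.uniform(5, 11)) for _ in range(100)]
print(f"relative spread of the raw yield: {np.std(yields) / np.mean(yields):.4f}")
# this per-interval spread plays the role of the bin-by-bin uncorrelated uncertainty
# that replaces the statistical errors when the L_proper fit is repeated
```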
### 4.3 Systematic uncertainty from efficiency and feed-down corrections

The systematic uncertainty related to the efficiency and feed-down corrections is estimated by varying: (i) the $p_{\rm T}$-dependent weights used to adjust the input $p_{\rm T}$ distributions of $\Lambda$, $\Xi$, and $\Omega$ in the simulations; (ii) the $\Lambda$, $\Xi$, and $\Omega$ lifetimes by the PDG uncertainties [1]; and (iii) the $\Lambda/\Xi$ and $\Lambda/\Omega$ ratios by the measured uncertainties. The $\Omega$ is found to give a negligible contribution to the systematic uncertainties. To vary the lifetimes implemented in the simulations, which are taken from the PDG [1], $L_{\rm proper}$-dependent weights are used. These are obtained as the ratio between the $L_{\rm proper}$ spectrum with a modified lifetime and the default spectrum. To estimate the total contribution of these sources, a set of five hundred different efficiency and feed-down corrections is generated. Each of them is obtained using a different set of weights where the $p_{\rm T}$ spectra of $\Lambda$, $\Xi$, and $\Omega$, their lifetimes, and the particle ratios are varied by a fraction of their uncertainty. Such a fraction is extracted randomly from a Gaussian distribution centered at zero and with a width equal to one. When modifying the $p_{\rm T}$ spectra measured in data to recalculate the weights, the $p_{\rm T}$-correlated and $p_{\rm T}$-uncorrelated uncertainties are treated differently:

1. $p_{\rm T}$-correlated uncertainties: all data points of the $p_{\rm T}$ spectrum are shifted coherently upward and downward by a fraction of their systematic uncertainty in each $p_{\rm T}$ interval.
2. $p_{\rm T}$-uncorrelated uncertainties: the data points are moved independently by a fraction of their uncorrelated uncertainty in each $p_{\rm T}$ interval.

These five hundred different efficiencies and fractions of secondary $\Lambda$ ($\overline{\Lambda}$) are then used to correct the raw $L_{\rm proper}$ spectrum measured in data. The lifetime is extracted for each corrected spectrum and the standard deviation of the distribution of lifetimes is taken as an estimate of the systematic uncertainty from efficiency and feed-down corrections. The obtained uncertainties are 0.30 ps for $\tau_{\Lambda}$, 0.33 ps for $\tau_{\overline{\Lambda}}$, and 0.30 ps for $\tau_{\Lambda+\overline{\Lambda}}$.

### 4.4 Inelastic interaction with the detector materials

The default efficiency is based on the GEANT3 transport package. The effect of (anti)matter absorption was studied by comparing the default efficiency with that obtained using a MC production based on GEANT4 [29], which contains slightly different parametrizations of the inelastic cross sections of (anti)matter particles. The GEANT3 and GEANT4-based efficiencies are consistent within uncertainties. The $\Lambda$ and $\overline{\Lambda}$ lifetimes are found to be consistent within uncertainties, and therefore, no systematic uncertainty is assigned due to this effect.

### 4.5 Material budget

The ALICE detector material is known with a precision of 4.5$\%$ [10]. The effect of the limited knowledge of the material budget, which could affect the efficiency and the fraction of secondary $\Lambda$ ($\overline{\Lambda}$) from material and its dependence on $L_{\rm proper}$, is studied by comparing the efficiency and corrections for secondary $\Lambda$ ($\overline{\Lambda}$) using MC productions with the material density increased and decreased by 4.5$\%$.
The difference in the mean lifetime is found to not be statistically significant and therefore this contribution is neglected. ### 4.6 Pile-up effects Simultaneous collisions with displaced vertices (pile-up) could, in principle, create a bias in the measurement of the $\Lambda$ ($\overline{\Lambda}$) decay length due to the wrong $\mathrm{V}^{0}$–vertex association. The tight selection on the $\mathrm{cos}(\theta_{\rm p})$ allows the matching between a reconstructed $\mathrm{V}^{0}$ and the wrong vertex only for close vertices. This happens with very low probability and is found to give a negligible bias in the decay length measurement. To further cross-check potential pile-up effects, the analysis is repeated removing all pile-up rejections. The $\Lambda$ ($\overline{\Lambda}$) lifetimes, in this case, are found to be consistent with the value using default pile-up rejection within the statistical uncertainties. It is concluded that the pile-up effects combined with rather strong topological selections used in this analysis give a negligible effect on the $\Lambda$ ($\overline{\Lambda}$) lifetime. ## 5 Results The $L_{\rm proper}$ spectra of $\Lambda$ and $\overline{\Lambda}$ measured in each centrality interval are corrected for the corresponding efficiency, feed- down from higher mass baryons, and the fraction of secondary $\Lambda$ ($\overline{\Lambda}$) from the material. The use of centrality triggers in the data used in this analysis leads to a non-uniform centrality distribution. In order to restore the correct relative contribution from different centralities, the $L_{\rm proper}$ spectra measured in the centrality intervals 10–30$\%$, 30–50$\%$, and 50–90$\%$ are scaled by a factor $k_{i}=\frac{N_{\rm events}^{0-10\%}/w_{0-10\%}}{N_{\rm events}^{i}/w_{i}}w_{i},$ (5) where $N_{\rm events}^{i}$ and $w_{i}$ are the number of events and the centrality bin width of the $i$-th centrality interval and $N_{\rm events}^{0-10\%}$ and $w_{0-10\%}=10$ are those related to the reference centrality interval 0–10$\%$. The obtained spectrum is normalized to unity and fitted with an exponential function in the $L_{\rm proper}$ interval [3,30] cm to extract the mean lifetime. The fit results for $\Lambda$, $\overline{\Lambda}$ and their sum are shown in Fig. 3, which also contains the data-to-fit ratios in the bottom panels. The data-to-fit ratio in each interval is obtained as the ratio between the interval content and the weighted average of the fit function within the interval, with weight given by the exponential function. In this figure, only statistical uncertainties in each $L_{\rm proper}$ interval and on the mean lifetime are shown. The systematic uncertainties are calculated using the procedure described in Sec. 4 and are reported only for the final result of the lifetime in the 0–90$\%$ centrality class. The fit is stable when changing fit range ($L^{\rm min}_{\rm proper}$ $=$ 1,2,3,4,5,…, 10 cm) and binning (width $=$ 0.5,1,2 cm) leading to results that are consistent within the statistical uncertainties. Figure 3: $L_{\rm proper}$ spectra of $\Lambda$ (left), $\overline{\Lambda}$ (middle) and their sum (right), and exponential fits for the lifetime extractions. Only statistical uncertainties are shown for each data point and for the mean lifetime extracted from the exponential fit. 
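The lifetime extraction itself reduces to an exponential fit plus a unit conversion, since the mean proper decay length is $\langle L_{\rm proper}\rangle=c\tau$. The following is a minimal, self-contained sketch using a toy exponential spectrum in place of the corrected ALICE data (scipy is assumed to be available); it illustrates the procedure and is not the analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

C_CM_PER_PS = 0.0299792458   # speed of light in cm/ps

# Toy corrected L_proper spectrum: exponential with mean c*tau ~ 7.9 cm (tau ~ 263 ps),
# binned in 1 cm intervals over the fit range [3, 30] cm used in the text.
rng = np.random.default_rng(1)
ctau_true = 263.0 * C_CM_PER_PS
lprop = rng.exponential(ctau_true, 1_000_000)
edges = np.arange(3.0, 31.0, 1.0)
counts, _ = np.histogram(lprop, bins=edges)
centers = 0.5 * (edges[:-1] + edges[1:])

def expo(x, norm, mean_l):
    return norm * np.exp(-x / mean_l)

popt, pcov = curve_fit(expo, centers, counts, p0=(counts[0], 8.0), sigma=np.sqrt(counts))
tau_ps = popt[1] / C_CM_PER_PS
tau_err = np.sqrt(pcov[1, 1]) / C_CM_PER_PS
print(f"tau = {tau_ps:.2f} +/- {tau_err:.2f} ps")   # recovers the ~263 ps input value
```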
The measured lifetimes of $\Lambda$ and $\overline{\Lambda}$ with statistical and systematic uncertainties, are $\displaystyle\tau_{\Lambda}=[261.20\pm 0.49(\rm stat.)\pm 0.63(\rm syst.)]\ \rm ps,$ $\displaystyle\tau_{\overline{\Lambda}}=[260.86\pm 0.55(\rm stat.)\pm 0.77(\rm syst.)]\ \rm ps,$ $\displaystyle\tau_{\Lambda+\overline{\Lambda}}=[261.07\pm 0.37(\rm stat.)\pm 0.72(\rm syst.)]\ \rm ps.$ The lifetimes extracted in different centrality intervals are consistent within their statistical uncertainties, as shown in Fig. 4. As a cross-check, the lifetime is also calculated as the weighted average of the results in different centrality intervals, with weights given by the inverse of the statistical uncertainties squared. The result is fully consistent with that extracted from the $L_{\rm proper}$ distribution obtained using Eq. 5. Figure 4: $\Lambda$ lifetime measured in different centrality intervals. Only statistical uncertainties are shown. The present measurement is compared with previous results in Fig. 5. The STAR measurement is taken from Ref. [30]. For this comparison, statistical and systematic uncertainties are added in quadrature. Figure 5: ALICE measurement of the $\Lambda$ lifetime in comparison with previous measurements [2, 3, 4, 30] and the current world average taken from the PDG [1]. Statistical and systematic uncertainties are added in quadrature. Assuming CPT invariance, the lifetimes of the $\Lambda$ and $\overline{\Lambda}$ are expected to be consistent within uncertainties. To test CPT invariance, the relative difference $(\tau_{\Lambda}-\tau_{\overline{\Lambda}})/\tau_{\Lambda}$ is measured. Statistical uncertainties on $\tau_{\Lambda}$ and $\tau_{\overline{\Lambda}}$ and the systematic uncertainties originating from the signal extraction are uncorrelated and propagated independently. On the other hand, the systematic uncertainties on the efficiency and feed-down corrections as well as those on the track and $\mathrm{V}^{0}$ selections of $\Lambda$ and $\overline{\Lambda}$ are partially correlated. The systematic uncertainty on the relative difference $(\tau_{\Lambda}-\tau_{\overline{\Lambda}})/\tau_{\Lambda}$ from the former contribution is considered as half of the interval with the following extremes $\left[\frac{\tau_{\overline{\Lambda}}}{\tau_{\Lambda}}\right]_{\rm upper}=\frac{\tau_{\overline{\Lambda}}+\Delta\tau_{\overline{\Lambda}}(\rm corrections)}{\tau_{\Lambda}+\Delta\tau_{\Lambda}(\rm corrections)},\ $ (6) $\left[\frac{\tau_{\overline{\Lambda}}}{\tau_{\Lambda}}\right]_{\rm lower}=\frac{\tau_{\overline{\Lambda}}-\Delta\tau_{\overline{\Lambda}}(\rm corrections)}{\tau_{\Lambda}-\Delta\tau_{\Lambda}(\rm corrections)},$ (7) and is found to be $1.1\times 10^{-4}$. To take into account the correlation between the uncertainties from the track and $\mathrm{V}^{0}$ selections, the relative difference $(\tau_{\Lambda}-\tau_{\overline{\Lambda}})/\tau_{\Lambda}$ is calculated for each of the different analysis settings used. The uncertainty is given by the standard deviation of the distribution of $(\tau_{\Lambda}-\tau_{\overline{\Lambda}})/\tau_{\Lambda}$ values and is found to be 0.0021, which is the largest contribution to the total systematic uncertainty. 
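The half-interval propagation of Eqs. (6)–(7) can be reproduced numerically. Using the rounded central values and the correction-related systematic uncertainties quoted above as inputs (an assumption, since the analysis uses the unrounded values), one obtains a value consistent with the quoted $1.1\times 10^{-4}$:

```python
# Half-interval propagation of the partially correlated efficiency/feed-down
# uncertainties into (tau_L - tau_Lbar)/tau_L, following Eqs. (6)-(7).
tau_l,    dtau_l    = 261.20, 0.30   # ps; correction-related systematic from Sec. 4.3
tau_lbar, dtau_lbar = 260.86, 0.33   # ps

ratio_up  = (tau_lbar + dtau_lbar) / (tau_l + dtau_l)
ratio_low = (tau_lbar - dtau_lbar) / (tau_l - dtau_l)
syst = 0.5 * abs(ratio_up - ratio_low)
print(f"correction-related systematic on the relative difference: {syst:.1e}")
# prints ~1.2e-04 with these rounded inputs, consistent with the quoted 1.1e-04
```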
The measured value of the relative difference between the $\Lambda$ and $\overline{\Lambda}$ lifetimes is $(\tau_{\Lambda}-\tau_{\overline{\Lambda}})/\tau_{\Lambda}=0.0013\pm 0.0028(\rm stat.)\pm 0.0021(\rm syst.),$ (8) while that reported in the PDG is $(\tau_{\Lambda}-\tau_{\overline{\Lambda}})/\tau_{\Lambda}=-0.001\pm 0.009$ [1]. The present measurement is consistent with zero with an overall improvement of the absolute precision with respect to the PDG by approximately a factor of three. ## 6 Summary Unprecedentedly precise measurements of the $\Lambda$ ($\overline{\Lambda}$) lifetime and of the relative difference between the lifetimes of $\Lambda$ and $\overline{\Lambda}$ are presented. The latter represents an important test of the CPT symmetry in the strangeness sector. The confidence range of the $\Lambda$ ($\overline{\Lambda}$) lifetime is reduced by approximately a factor of three with respect to the PDG average. ## Acknowledgements The ALICE Collaboration would like to thank all its engineers and technicians for their invaluable contributions to the construction of the experiment and the CERN accelerator teams for the outstanding performance of the LHC complex. The ALICE Collaboration gratefully acknowledges the resources and support provided by all Grid centres and the Worldwide LHC Computing Grid (WLCG) collaboration. The ALICE Collaboration acknowledges the following funding agencies for their support in building and running the ALICE detector: A. I. Alikhanyan National Science Laboratory (Yerevan Physics Institute) Foundation (ANSL), State Committee of Science and World Federation of Scientists (WFS), Armenia; Austrian Academy of Sciences, Austrian Science Fund (FWF): [M 2467-N36] and Nationalstiftung für Forschung, Technologie und Entwicklung, Austria; Ministry of Communications and High Technologies, National Nuclear Research Center, Azerbaijan; Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), Financiadora de Estudos e Projetos (Finep), Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP) and Universidade Federal do Rio Grande do Sul (UFRGS), Brazil; Bulgarian Ministry of Education and Science, within the National Roadmap for Research Infrastructures 2020-2027 (object CERN), Bulgaria; Ministry of Education of China (MOEC) , Ministry of Science & Technology of China (MSTC) and National Natural Science Foundation of China (NSFC), China; Ministry of Science and Education and Croatian Science Foundation, Croatia; Centro de Aplicaciones Tecnológicas y Desarrollo Nuclear (CEADEN), Cubaenergía, Cuba; Ministry of Education, Youth and Sports of the Czech Republic, Czech Republic; The Danish Council for Independent Research | Natural Sciences, the VILLUM FONDEN and Danish National Research Foundation (DNRF), Denmark; Helsinki Institute of Physics (HIP), Finland; Commissariat à l’Energie Atomique (CEA) and Institut National de Physique Nucléaire et de Physique des Particules (IN2P3) and Centre National de la Recherche Scientifique (CNRS), France; Bundesministerium für Bildung und Forschung (BMBF) and GSI Helmholtzzentrum für Schwerionenforschung GmbH, Germany; General Secretariat for Research and Technology, Ministry of Education, Research and Religions, Greece; National Research, Development and Innovation Office, Hungary; Department of Atomic Energy Government of India (DAE), Department of Science and Technology, Government of India (DST), University Grants Commission, Government of India (UGC) and Council of Scientific and Industrial Research (CSIR), 
India; National Research and Innovation Agency - BRIN, Indonesia; Istituto Nazionale di Fisica Nucleare (INFN), Italy; Japanese Ministry of Education, Culture, Sports, Science and Technology (MEXT) and Japan Society for the Promotion of Science (JSPS) KAKENHI, Japan; Consejo Nacional de Ciencia (CONACYT) y Tecnología, through Fondo de Cooperación Internacional en Ciencia y Tecnología (FONCICYT) and Dirección General de Asuntos del Personal Academico (DGAPA), Mexico; Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO), Netherlands; The Research Council of Norway, Norway; Commission on Science and Technology for Sustainable Development in the South (COMSATS), Pakistan; Pontificia Universidad Católica del Perú, Peru; Ministry of Education and Science, National Science Centre and WUT ID-UB, Poland; Korea Institute of Science and Technology Information and National Research Foundation of Korea (NRF), Republic of Korea; Ministry of Education and Scientific Research, Institute of Atomic Physics, Ministry of Research and Innovation and Institute of Atomic Physics and University Politehnica of Bucharest, Romania; Ministry of Education, Science, Research and Sport of the Slovak Republic, Slovakia; National Research Foundation of South Africa, South Africa; Swedish Research Council (VR) and Knut & Alice Wallenberg Foundation (KAW), Sweden; European Organization for Nuclear Research, Switzerland; Suranaree University of Technology (SUT), National Science and Technology Development Agency (NSTDA), Thailand Science Research and Innovation (TSRI) and National Science, Research and Innovation Fund (NSRF), Thailand; Turkish Energy, Nuclear and Mineral Research Agency (TENMAK), Turkey; National Academy of Sciences of Ukraine, Ukraine; Science and Technology Facilities Council (STFC), United Kingdom; National Science Foundation of the United States of America (NSF) and United States Department of Energy, Office of Nuclear Physics (DOE NP), United States of America. In addition, individual groups or members have received support from: European Research Council, Strong 2020 - Horizon 2020, Marie Skłodowska Curie (grant nos. 950692, 824093, 896850), European Union; Academy of Finland (Center of Excellence in Quark Matter) (grant nos. 346327, 346328), Finland; Programa de Apoyos para la Superación del Personal Académico, UNAM, Mexico; ## References * [1] Particle Data Group Collaboration, R. L. Workman et al., “Review of Particle Physics”, Prog. Theor. Exp. Phys. 2022 (2022) 083C01. * [2] G. Poulard, A. Givernaud, and A. Borg, “New measurement of the $\Lambda$ lifetime”, Phys. Lett. B 46 (1973) 135–137. * [3] E. F. Clayton et al., “High-statistics determination of the $\Lambda$ mean lifetime”, Nucl. Phys. B 95 (1975) 130–134. * [4] G. Zech et al., “A Measurement of the Lifetimes of $\Xi^{0}$ and $\Lambda$ Hyperons”, Nucl. Phys. B 124 (1977) 413–425. * [5] J. Badier et al., “Reactions pp $\rightarrow$ $\Lambda\Lambda$ at 2.5 GeV/c”, Phys. Lett. B 25 (1967) 152–155. * [6] P. D. Barnes et al., “Observables in high statistics measurements of the reaction $\rm\overline{p}p\rightarrow\overline{\Lambda}\Lambda$”, Phys. Rev. C 54 (Oct, 1996) 1877–1886. * [7] ALICE Collaboration, S. Acharya et al., “Measurement of the lifetime and $\Lambda$ separation energy of ${}^{3}_{\Lambda}\mathrm{H}$”, Phys. Rev. Lett. 131 (2023) 102302, arXiv:2209.07360 [nucl-ex]. * [8] ALICE Collaboration, “The ALICE experiment – A journey through QCD”, arXiv:2211.04384 [nucl-ex]. * [9] ALICE Collaboration, K. 
Aamodt et al., “The ALICE experiment at the CERN LHC”, JINST 3 (2008) S08002. * [10] ALICE Collaboration, B. Abelev et al., “Performance of the ALICE Experiment at the CERN LHC”, Int. J. Mod. Phys. A29 (2014) 1430044, arXiv:1402.4476 [nucl-ex]. * [11] ALICE Collaboration, K. Aamodt et al., “Alignment of the ALICE Inner Tracking System with cosmic-ray tracks”, JINST 5 (2010) P03003, arXiv:1001.0502 [physics.ins-det]. * [12] J. Alme et al., “The ALICE TPC, a large 3-dimensional tracking device with fast readout for ultra-high multiplicity events”, Nucl. Instrum. Meth. A622 (2010) 316–367, arXiv:1001.1950 [physics.ins-det]. * [13] ALICE Collaboration, S. Acharya et al., “Transverse momentum spectra and nuclear modification factors of charged particles in pp, $\mathrm{p-Pb}$ and $\mathrm{Pb-Pb}$ collisions at the LHC”, JHEP 11 (2018) 013, arXiv:1802.09145 [nucl-ex]. * [14] A. Akindinov et al., “Performance of the ALICE Time-Of-Flight detector at the LHC”, Eur. Phys. J. Plus 128 (2013) 44. * [15] ALICE Collaboration, J. Adam et al., “Determination of the event collision time with the ALICE detector at the LHC”, Eur. Phys. J. Plus 132 (2017) 99, arXiv:1610.03055 [physics.ins-det]. * [16] ALICE Collaboration, E. Abbas et al., “Performance of the ALICE VZERO system”, JINST 8 (2013) P10016, arXiv:1306.3130 [nucl-ex]. * [17] ALICE Collaboration, B. Abelev et al., “Centrality determination of $\mathrm{Pb-Pb}$ collisions at $\sqrt{s_{\mathrm{NN}}}=2.76\ \mathrm{TeV}$ with ALICE”, Phys. Rev. C88 (2013) 044909, arXiv:1301.4361 [nucl-ex]. * [18] ALICE Collaboration, S. Acharya et al., “Centrality determination in heavy ion collisions”, $\mathrm{ALICE}$-$\mathrm{PUBLIC}$-$\mathrm{2018}$-$\mathrm{011}$ (2018) . https://cds.cern.ch/record/2636623. * [19] ALICE Collaboration, S. Acharya et al., “$\mathrm{J}/\psi$ elliptic and triangular flow in $\mathrm{Pb-Pb}$ collisions at $\sqrt{s_{\mathrm{NN}}}=5.02\ \mathrm{TeV}$”, JHEP 10 (2020) 141, arXiv:2005.14518 [nucl-ex]. * [20] ALICE Collaboration, K. Aamodt et al., “Strange particle production in proton-proton collisions at $\sqrt{s}=0.9$ ${\rm TeV}$ with $\mathrm{ALICE}$ at the $\mathrm{LHC}$”, Eur. Phys. J. C 71 (2011) 1594, arXiv:1012.3257 [hep-ex]. * [21] ALICE Collaboration, B. Abelev et al., “$\mathrm{K}$${}_{S}^{0}$ and $\Lambda$ production in $\mathrm{Pb-Pb}$ collisions at $\sqrt{s_{\mathrm{NN}}}=2.76\ \mathrm{TeV}$”, Phys. Rev. Lett. 111 (2013) 222301, arXiv:1307.5530 [nucl-ex]. * [22] ALICE Collaboration, S. Acharya et al., “Multiplicity dependence of (multi-)strange hadron production in proton-proton collisions at $\sqrt{s}$ = 13 TeV”, Eur. Phys. J. C 80 (2020) 167, arXiv:1908.01861 [nucl-ex]. * [23] ALICE Collaboration, J. Adam et al., “Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions”, Nature Phys. 13 (2017) 535–539, arXiv:1606.07424 [nucl-ex]. * [24] X.-N. Wang and M. Gyulassy, “$\mathrm{HIJING}$: A monte carlo model for multiple jet production in $\mathrm{pp}$, $\mathrm{pA}$, and $\mathrm{AA}$ collisions”, Phys. Rev. D 44 (1991) 3501–3516. * [25] R. Brunand et al., “GEANT Detector Description and Simulation Tool, Program Library Long Write-up”,. https://doi.org/10.17181/CERN.MUHF.DMJ1. * [26] ALICE Collaboration, B. Abelev et al., “Multi-strange baryon production at mid-rapidity in $\mathrm{Pb-Pb}$ collisions at $\sqrt{s_{\mathrm{NN}}}=2.76\ \mathrm{TeV}$”, Phys. Lett. B 728 (2014) 216–227, arXiv:1307.5543 [nucl-ex]. [Erratum: Phys.Lett.B 734, 409–410 (2014)]. * [27] ALICE Collaboration, S. 
Acharya et al., “Production of charged pions, kaons, and (anti-)protons in $\mathrm{Pb-Pb}$ and inelastic $pp$ collisions at $\sqrt{{s}_{NN}}=5.02$ TeV”, Phys. Rev. C 101 (2020) 044907, arXiv:1910.07678 [nucl-ex]. * [28] ALICE Collaboration, B. Abelev et al., “Centrality dependence of $\pi$, $k$, and $p$ production in $\mathrm{Pb-Pb}$ collisions at $\sqrt{{s}_{NN}}=2.76$ TeV”, Phys. Rev. C 88 (2013) 044910, arXiv:1303.0737 [hep-ex]. * [29] S. Agostinelli et al., “$\mathrm{GEANT4}$ – a simulation toolkit”, Nucl. Instrum. Meth. A 506 (2003) 250–303. * [30] STAR Collaboration, B. I. Abelev et al., “Observation of an Antimatter Hypernucleus”, Science 328 (2010) 58–62, arXiv:1003.2030 [nucl-ex]. ## Appendix A The ALICE Collaboration S. Acharya 125, D. Adamová 86, A. Adler69, G. Aglieri Rinella 32, M. Agnello 29, N. Agrawal 50, Z. Ahammed 132, S. Ahmad 15, S.U. Ahn 70, I. Ahuja 37, A. Akindinov 140, M. Al-Turany 97, D. Aleksandrov 140, B. Alessandro 55, H.M. Alfanda 6, R. Alfaro Molina 66, B. Ali 15, A. Alici 25, N. Alizadehvandchali 114, A. Alkin 32, J. Alme 20, G. Alocco 51, T. Alt 63, I. Altsybeev 140, M.N. Anaam 6, C. Andrei 45, A. Andronic 135, V. Anguelov 94, F. Antinori 53, P. Antonioli 50, N. Apadula 74, L. Aphecetche 103, H. Appelshäuser 63, C. Arata 73, S. Arcelli 25, M. Aresti 51, R. Arnaldi 55, J.G.M.C.A. Arneiro 110, I.C. Arsene 19, M. Arslandok 137, A. Augustinus 32, R. Averbeck 97, M.D. Azmi 15, A. Badalà 52, J. Bae 104, Y.W. Baek 40, X. Bai 118, R. Bailhache 63, Y. Bailung 47, A. Balbino 29, A. Baldisseri 128, B. Balis 2, D. Banerjee 4, Z. Banoo 91, R. Barbera 26, F. Barile 31, L. Barioglio 95, M. Barlou78, G.G. Barnaföldi 136, L.S. Barnby 85, V. Barret 125, L. Barreto 110, C. Bartels 117, K. Barth 32, E. Bartsch 63, N. Bastid 125, S. Basu 75, G. Batigne 103, D. Battistini 95, B. Batyunya 141, D. Bauri46, J.L. Bazo Alba 101, I.G. Bearden 83, C. Beattie 137, P. Becht 97, D. Behera 47, I. Belikov 127, A.D.C. Bell Hechavarria 135, F. Bellini 25, R. Bellwied 114, S. Belokurova 140, V. Belyaev 140, G. Bencedi 136, S. Beole 24, A. Bercuci 45, Y. Berdnikov 140, A. Berdnikova 94, L. Bergmann 94, M.G. Besoiu 62, L. Betev 32, P.P. Bhaduri 132, A. Bhasin 91, M.A. Bhat 4, B. Bhattacharjee 41, L. Bianchi 24, N. Bianchi 48, J. Bielčík 35, J. Bielčíková 86, J. Biernat 107, A.P. Bigot 127, A. Bilandzic 95, G. Biro 136, S. Biswas 4, N. Bize 103, J.T. Blair 108, D. Blau 140, M.B. Blidaru 97, N. Bluhme38, C. Blume 63, G. Boca 21,54, F. Bock 87, T. Bodova 20, A. Bogdanov140, S. Boi 22, J. Bok 57, L. Boldizsár 136, M. Bombara 37, P.M. Bond 32, G. Bonomi 131,54, H. Borel 128, A. Borissov 140, A.G. Borquez Carcamo 94, H. Bossi 137, E. Botta 24, Y.E.M. Bouziani 63, L. Bratrud 63, P. Braun- Munzinger 97, M. Bregant 110, M. Broz 35, G.E. Bruno 96,31, M.D. Buckland 23, D. Budnikov 140, H. Buesching 63, S. Bufalino 29, P. Buhler 102, Z. Buthelezi 67,121, A. Bylinkin 20, S.A. Bysiak107, M. Cai 6, H. Caines 137, A. Caliva 97, E. Calvo Villar 101, J.M.M. Camacho 109, P. Camerini 23, F.D.M. Canedo 110, M. Carabas 124, A.A. Carballo 32, F. Carnesecchi 32, R. Caron 126, L.A.D. Carvalho 110, J. Castillo Castellanos 128, F. Catalano 24, C. Ceballos Sanchez 141, I. Chakaberia 74, P. Chakraborty 46, S. Chandra 132, S. Chapeland 32, M. Chartier 117, S. Chattopadhyay 132, S. Chattopadhyay 99, T.G. Chavez 44, T. Cheng 97,6, C. Cheshkov 126, B. Cheynis 126, V. Chibante Barroso 32, D.D. Chinellato 111, E.S. Chizzali II,95, J. Cho 57, S. Cho 57, P. Chochula 32, P. Christakoglou 84, C.H. Christensen 83, P. 
Christiansen 75, T. Chujo 123, M. Ciacco 29, C. Cicalo 51, F. Cindolo 50, M.R. Ciupek97, G. ClaiIII,50, F. Colamaria 49, J.S. Colburn100, D. Colella 96,31, M. Colocci 25, G. Conesa Balbastre 73, Z. Conesa del Valle 72, G. Contin 23, J.G. Contreras 35, M.L. Coquet 128, T.M. CormierI,87, P. Cortese 130,55, M.R. Cosentino 112, F. Costa 32, S. Costanza 21,54, C. Cot 72, J. Crkovská 94, P. Crochet 125, R. Cruz- Torres 74, P. Cui 6, A. Dainese 53, M.C. Danisch 94, A. Danu 62, P. Das 80, P. Das 4, S. Das 4, A.R. Dash 135, S. Dash 46, R.M.H. David44, A. De Caro 28, G. de Cataldo 49, J. de Cuveland38, A. De Falco 22, D. De Gruttola 28, N. De Marco 55, C. De Martin 23, S. De Pasquale 28, R. Deb131, S. Deb 47, R.J. Debski 2, K.R. Deja133, R. Del Grande 95, L. Dello Stritto 28, W. Deng 6, P. Dhankher 18, D. Di Bari 31, A. Di Mauro 32, R.A. Diaz 141,7, T. Dietel 113, Y. Ding 6, R. Divià 32, D.U. Dixit 18, Ø. Djuvsland20, U. Dmitrieva 140, A. Dobrin 62, B. Dönigus 63, J.M. Dubinski 133, A. Dubla 97, S. Dudi 90, P. Dupieux 125, M. Durkac106, N. Dzalaiova12, T.M. Eder 135, R.J. Ehlers 74, V.N. Eikeland20, F. Eisenhut 63, D. Elia 49, B. Erazmus 103, F. Ercolessi 25, F. Erhardt 89, M.R. Ersdal20, B. Espagnon 72, G. Eulisse 32, D. Evans 100, S. Evdokimov 140, L. Fabbietti 95, M. Faggin 27, J. Faivre 73, F. Fan 6, W. Fan 74, A. Fantoni 48, M. Fasel 87, P. Fecchio29, A. Feliciello 55, G. Feofilov 140, A. Fernández Téllez 44, L. Ferrandi 110, M.B. Ferrer 32, A. Ferrero 128, C. Ferrero 55, A. Ferretti 24, V.J.G. Feuillard 94, V. Filova 35, D. Finogeev 140, F.M. Fionda 51, F. Flor 114, A.N. Flores 108, S. Foertsch 67, I. Fokin 94, S. Fokin 140, E. Fragiacomo 56, E. Frajna 136, U. Fuchs 32, N. Funicello 28, C. Furget 73, A. Furs 140, T. Fusayasu 98, J.J. Gaardhøje 83, M. Gagliardi 24, A.M. Gago 101, C.D. Galvan 109, D.R. Gangadharan 114, P. Ganoti 78, C. Garabatos 97, J.R.A. Garcia 44, E. Garcia-Solis 9, C. Gargiulo 32, K. Garner135, P. Gasik 97, A. Gautam 116, M.B. Gay Ducati 65, M. Germain 103, A. Ghimouz123, C. Ghosh132, M. Giacalone 50,25, P. Giubellino 97,55, P. Giubilato 27, A.M.C. Glaenzer 128, P. Glässel 94, E. Glimos 120, D.J.Q. Goh76, V. Gonzalez 134, M. Gorgon 2, S. Gotovac33, V. Grabski 66, L.K. Graczykowski 133, E. Grecka 86, A. Grelli 58, C. Grigoras 32, V. Grigoriev 140, S. Grigoryan 141,1, F. Grosa 32, J.F. Grosse-Oetringhaus 32, R. Grosso 97, D. Grund 35, G.G. Guardiano 111, R. Guernane 73, M. Guilbaud 103, K. Gulbrandsen 83, T. Gundem 63, T. Gunji 122, W. Guo 6, A. Gupta 91, R. Gupta 91, R. Gupta 47, S.P. Guzman 44, K. Gwizdziel 133, L. Gyulai 136, M.K. Habib97, C. Hadjidakis 72, F.U. Haider 91, H. Hamagaki 76, A. Hamdi 74, M. Hamid6, Y. Han 138, R. Hannigan 108, M.R. Haque 133, J.W. Harris 137, A. Harton 9, H. Hassan 87, D. Hatzifotiadou 50, P. Hauer 42, L.B. Havener 137, S.T. Heckel 95, E. Hellbär 97, H. Helstrup 34, M. Hemmer 63, T. Herman 35, G. Herrera Corral 8, F. Herrmann135, S. Herrmann 126, K.F. Hetland 34, B. Heybeck 63, H. Hillemanns 32, B. Hippolyte 127, F.W. Hoffmann 69, B. Hofman 58, B. Hohlweger 84, G.H. Hong 138, M. Horst 95, A. Horzyk 2, Y. Hou 6, P. Hristov 32, C. Hughes 120, P. Huhn63, L.M. Huhta 115, C.V. Hulse 72, T.J. Humanic 88, A. Hutson 114, D. Hutter 38, J.P. Iddon 117, R. Ilkaev140, H. Ilyas 13, M. Inaba 123, G.M. Innocenti 32, M. Ippolitov 140, A. Isakov 86, T. Isidori 116, M.S. Islam 99, M. Ivanov 97, M. Ivanov12, V. Ivanov 140, M. Jablonski 2, B. Jacak 74, N. Jacazio 32, P.M. Jacobs 74, S. Jadlovska106, J. Jadlovsky106, S. Jaelani 82, L. Jaffe38, C. Jahnke 111, M.J. 
Jakubowska 133, M.A. Janik 133, T. Janson69, M. Jercic89, S. Jia 10, A.A.P. Jimenez 64, F. Jonas 87, J.M. Jowett 32,97, J. Jung 63, M. Jung 63, A. Junique 32, A. Jusko 100, M.J. Kabus 32,133, J. Kaewjai105, P. Kalinak 59, A.S. Kalteyer 97, A. Kalweit 32, V. Kaplin 140, A. Karasu Uysal 71, D. Karatovic 89, O. Karavichev 140, T. Karavicheva 140, P. Karczmarczyk 133, E. Karpechev 140, U. Kebschull 69, R. Keidel 139, D.L.D. Keijdener58, M. Keil 32, B. Ketzer 42, S.S. Khade 47, A.M. Khan 6, S. Khan 15, A. Khanzadeev 140, Y. Kharlov 140, A. Khatun 116,15, A. Khuntia 107, M.B. Kidson113, B. Kileng 34, B. Kim 104, C. Kim 16, D.J. Kim 115, E.J. Kim 68, J. Kim 138, J.S. Kim 40, J. Kim 68, M. Kim 18,94, S. Kim 17, T. Kim 138, K. Kimura 92, S. Kirsch 63, I. Kisel 38, S. Kiselev 140, A. Kisiel 133, J.P. Kitowski 2, J.L. Klay 5, J. Klein 32, S. Klein 74, C. Klein-Bösing 135, M. Kleiner 63, T. Klemenz 95, A. Kluge 32, A.G. Knospe 114, C. Kobdaj 105, T. Kollegger97, A. Kondratyev 141, N. Kondratyeva 140, E. Kondratyuk 140, J. Konig 63, S.A. Konigstorfer 95, P.J. Konopka 32, G. Kornakov 133, S.D. Koryciak 2, A. Kotliarov 86, V. Kovalenko 140, M. Kowalski 107, V. Kozhuharov 36, I. Králik 59, A. Kravčáková 37, L. Krcal 32,38, L. Kreis97, M. Krivda 100,59, F. Krizek 86, K. Krizkova Gajdosova 32, M. Kroesen 94, M. Krüger 63, D.M. Krupova 35, E. Kryshen 140, V. Kučera 32, C. Kuhn 127, P.G. Kuijer 84, T. Kumaoka123, D. Kumar132, L. Kumar 90, N. Kumar90, S. Kumar 31, S. Kundu 32, P. Kurashvili 79, A. Kurepin 140, A.B. Kurepin 140, A. Kuryakin 140, S. Kushpil 86, J. Kvapil 100, M.J. Kweon 57, J.Y. Kwon 57, Y. Kwon 138, S.L. La Pointe 38, P. La Rocca 26, A. Lakrathok105, M. Lamanna 32, R. Langoy 119, P. Larionov 32, E. Laudi 32, L. Lautner 32,95, R. Lavicka 102, T. Lazareva 140, R. Lea 131,54, H. Lee 104, G. Legras 135, J. Lehrbach 38, T.M. Lelek2, R.C. Lemmon 85, I. León Monzón 109, M.M. Lesch 95, E.D. Lesser 18, P. Lévai 136, X. Li10, X.L. Li6, J. Lien 119, R. Lietava 100, I. Likmeta 114, B. Lim 24, S.H. Lim 16, V. Lindenstruth 38, A. Lindner45, C. Lippmann 97, A. Liu 18, D.H. Liu 6, J. Liu 117, I.M. Lofnes 20, C. Loizides 87, S. Lokos 107, J. Lomker 58, P. Loncar 33, J.A. Lopez 94, X. Lopez 125, E. López Torres 7, P. Lu 97,118, J.R. Luhder 135, M. Lunardon 27, G. Luparello 56, Y.G. Ma 39, A. Maevskaya140, M. Mager 32, A. Maire 127, M.V. Makariev 36, M. Malaev 140, G. Malfattore 25, N.M. Malik 91, Q.W. Malik19, S.K. Malik 91, L. Malinina VI,141, D. Mal’Kevich 140, D. Mallick 80, N. Mallick 47, G. Mandaglio 30,52, S.K. Mandal 79, V. Manko 140, F. Manso 125, V. Manzari 49, Y. Mao 6, G.V. Margagliotti 23, A. Margotti 50, A. Marín 97, C. Markert 108, P. Martinengo 32, J.L. Martinez114, M.I. Martínez 44, G. Martínez García 103, S. Masciocchi 97, M. Masera 24, A. Masoni 51, L. Massacrier 72, A. Mastroserio 129,49, O. Matonoha 75, P.F.T. Matuoka110, A. Matyja 107, C. Mayer 107, A.L. Mazuecos 32, F. Mazzaschi 24, M. Mazzilli 32, J.E. Mdhluli 121, A.F. Mechler63, Y. Melikyan 43,140, A. Menchaca-Rocha 66, E. Meninno 102,28, A.S. Menon 114, M. Meres 12, S. Mhlanga113,67, Y. Miake123, L. Micheletti 55, L.C. Migliorin126, D.L. Mihaylov 95, K. Mikhaylov 141,140, A.N. Mishra 136, D. Miśkowiec 97, A. Modak 4, A.P. Mohanty 58, B. Mohanty80, M. Mohisin Khan IV,15, M.A. Molander 43, Z. Moravcova 83, C. Mordasini 95, D.A. Moreira De Godoy 135, I. Morozov 140, A. Morsch 32, T. Mrnjavac 32, V. Muccifora 48, S. Muhuri 132, J.D. Mulligan 74, A. Mulliri22, M.G. Munhoz 110, R.H. Munzer 63, H. Murakami 122, S. Murray 113, L. Musa 32, J. 
Musinsky 59, J.W. Myrcha 133, B. Naik 121, A.I. Nambrath 18, B.K. Nandi 46, R. Nania 50, E. Nappi 49, A.F. Nassirpour 17,75, A. Nath 94, C. Nattrass 120, M.N. Naydenov 36, A. Neagu19, A. Negru124, L. Nellen 64, S.V. Nesbo34, G. Neskovic 38, D. Nesterov 140, B.S. Nielsen 83, E.G. Nielsen 83, S. Nikolaev 140, S. Nikulin 140, V. Nikulin 140, F. Noferini 50, S. Noh 11, P. Nomokonov 141, J. Norman 117, N. Novitzky 123, P. Nowakowski 133, A. Nyanin 140, J. Nystrand 20, M. Ogino 76, A. Ohlson 75, V.A. Okorokov 140, J. Oleniacz 133, A.C. Oliveira Da Silva 120, M.H. Oliver 137, A. Onnerstad 115, C. Oppedisano 55, A. Ortiz Velasquez 64, J. Otwinowski 107, M. Oya92, K. Oyama 76, Y. Pachmayer 94, S. Padhan 46, D. Pagano 131,54, G. Paić 64, A. Palasciano 49, S. Panebianco 128, H. Park 123, H. Park 104, J. Park 57, J.E. Parkkila 32, R.N. Patra91, B. Paul 22, H. Pei 6, T. Peitzmann 58, X. Peng 6, M. Pennisi 24, L.G. Pereira 65, D. Peresunko 140, G.M. Perez 7, S. Perrin 128, Y. Pestov140, V. Petráček 35, V. Petrov 140, M. Petrovici 45, R.P. Pezzi 103,65, S. Piano 56, M. Pikna 12, P. Pillot 103, O. Pinazza 50,32, L. Pinsky114, C. Pinto 95, S. Pisano 48, M. Płoskoń 74, M. Planinic89, F. Pliquett63, M.G. Poghosyan 87, B. Polichtchouk 140, S. Politano 29, N. Poljak 89, A. Pop 45, S. Porteboeuf- Houssais 125, V. Pozdniakov 141, I.Y. Pozos 44, K.K. Pradhan 47, S.K. Prasad 4, S. Prasad 47, R. Preghenella 50, F. Prino 55, C.A. Pruneau 134, I. Pshenichnov 140, M. Puccio 32, S. Pucillo 24, Z. Pugelova106, S. Qiu 84, L. Quaglia 24, R.E. Quishpe114, S. Ragoni 14, A. Rakotozafindrabe 128, L. Ramello 130,55, F. Rami 127, S.A.R. Ramirez 44, T.A. Rancien73, M. Rasa 26, S.S. Räsänen 43, R. Rath 50, M.P. Rauch 20, I. Ravasenga 84, K.F. Read 87,120, C. Reckziegel 112, A.R. Redelbach 38, K. Redlich V,79, C.A. Reetz 97, A. Rehman20, F. Reidt 32, H.A. Reme-Ness 34, Z. Rescakova37, K. Reygers 94, A. Riabov 140, V. Riabov 140, R. Ricci 28, M. Richter 19, A.A. Riedel 95, W. Riegler 32, C. Ristea 62, M. Rodríguez Cahuantzi 44, K. Røed 19, R. Rogalev 140, E. Rogochaya 141, T.S. Rogoschinski 63, D. Rohr 32, D. Röhrich 20, P.F. Rojas44, S. Rojas Torres 35, P.S. Rokita 133, G. Romanenko 141, F. Ronchetti 48, A. Rosano 30,52, E.D. Rosas64, K. Roslon 133, A. Rossi 53, A. Roy 47, S. Roy 46, N. Rubini 25, O.V. Rueda 114, D. Ruggiano 133, R. Rui 23, B. Rumyantsev141, P.G. Russek 2, R. Russo 84, A. Rustamov 81, E. Ryabinkin 140, Y. Ryabov 140, A. Rybicki 107, H. Rytkonen 115, W. Rzesa 133, O.A.M. Saarimaki 43, R. Sadek 103, S. Sadhu 31, S. Sadovsky 140, J. Saetre 20, K. Šafařík 35, S.K. Saha 4, S. Saha 80, B. Sahoo 46, B. Sahoo 47, R. Sahoo 47, S. Sahoo60, D. Sahu 47, P.K. Sahu 60, J. Saini 132, K. Sajdakova37, S. Sakai 123, M.P. Salvan 97, S. Sambyal 91, I. Sanna 32,95, T.B. Saramela110, D. Sarkar 134, N. Sarkar132, P. Sarma 41, V. Sarritzu 22, V.M. Sarti 95, M.H.P. Sas 137, J. Schambach 87, H.S. Scheid 63, C. Schiaua 45, R. Schicker 94, A. Schmah94, C. Schmidt 97, H.R. Schmidt93, M.O. Schmidt 32, M. Schmidt93, N.V. Schmidt 87, A.R. Schmier 120, R. Schotter 127, A. Schröter 38, J. Schukraft 32, K. Schwarz97, K. Schweda 97, G. Scioli 25, E. Scomparin 55, J.E. Seger 14, Y. Sekiguchi122, D. Sekihata 122, I. Selyuzhenkov 97,140, S. Senyukov 127, J.J. Seo 57, D. Serebryakov 140, L. Šerkšnytė 95, A. Sevcenco 62, T.J. Shaba 67, A. Shabetai 103, R. Shahoyan32, A. Shangaraev 140, A. Sharma90, B. Sharma 91, D. Sharma 46, H. Sharma 107, M. Sharma 91, S. Sharma 76, S. Sharma 91, U. Sharma 91, A. Shatat 72, O. Sheibani114, K. Shigaki 92, M. 
Shimomura77, J. Shin11, S. Shirinkin 140, Q. Shou 39, Y. Sibiriak 140, S. Siddhanta 51, T. Siemiarczuk 79, T.F. Silva 110, D. Silvermyr 75, T. Simantathammakul105, R. Simeonov 36, B. Singh91, B. Singh 95, R. Singh 80, R. Singh 91, R. Singh 47, S. Singh 15, V.K. Singh 132, V. Singhal 132, T. Sinha 99, B. Sitar 12, M. Sitta 130,55, T.B. Skaali19, G. Skorodumovs 94, M. Slupecki 43, N. Smirnov 137, R.J.M. Snellings 58, E.H. Solheim 19, J. Song 114, A. Songmoolnak105, F. Soramel 27, A.B. Soto-hernandez 88, R. Spijkers 84, I. Sputowska 107, J. Staa 75, J. Stachel 94, I. Stan 62, P.J. Steffanic 120, S.F. Stiefelmaier 94, D. Stocco 103, I. Storehaug 19, P. Stratmann 135, S. Strazzi 25, C.P. Stylianidis84, A.A.P. Suaide 110, C. Suire 72, M. Sukhanov 140, M. Suljic 32, R. Sultanov 140, V. Sumberia 91, S. Sumowidagdo 82, S. Swain60, I. Szarka 12, M. Szymkowski 133, S.F. Taghavi 95, G. Taillepied 97, J. Takahashi 111, G.J. Tambave 20, S. Tang 125,6, Z. Tang 118, J.D. Tapia Takaki 116, N. Tapus124, L.A. Tarasovicova 135, M.G. Tarzila 45, G.F. Tassielli 31, A. Tauro 32, G. Tejeda Muñoz 44, A. Telesca 32, L. Terlizzi 24, C. Terrevoli 114, S. Thakur 4, D. Thomas 108, A. Tikhonov 140, A.R. Timmins 114, M. Tkacik106, T. Tkacik 106, A. Toia 63, R. Tokumoto92, N. Topilskaya 140, M. Toppi 48, F. Torales- Acosta18, T. Tork 72, A.G. Torres Ramos 31, A. Trifiró 30,52, A.S. Triolo 32,30,52, S. Tripathy 50, T. Tripathy 46, S. Trogolo 32, V. Trubnikov 3, W.H. Trzaska 115, T.P. Trzcinski 133, A. Tumkin 140, R. Turrisi 53, T.S. Tveter 19, K. Ullaland 20, B. Ulukutlu 95, A. Uras 126, M. Urioni 54,131, G.L. Usai 22, M. Vala37, N. Valle 21, L.V.R. van Doremalen58, M. van Leeuwen 84, C.A. van Veen 94, R.J.G. van Weelden 84, P. Vande Vyvre 32, D. Varga 136, Z. Varga 136, M. Vasileiou 78, A. Vasiliev 140, O. Vázquez Doce 48, V. Vechernin 140, E. Vercellin 24, S. Vergara Limón44, L. Vermunt 97, R. Vértesi 136, M. Verweij 58, L. Vickovic33, Z. Vilakazi121, O. Villalobos Baillie 100, A. Villani 23, G. Vino 49, A. Vinogradov 140, T. Virgili 28, M.M.O. Virta 115, V. Vislavicius75, A. Vodopyanov 141, B. Volkel 32, M.A. Völkl 94, K. Voloshin140, S.A. Voloshin 134, G. Volpe 31, B. von Haller 32, I. Vorobyev 95, N. Vozniuk 140, J. Vrláková 37, C. Wang 39, D. Wang39, Y. Wang 39, A. Wegrzynek 32, F.T. Weiglhofer38, S.C. Wenzel 32, J.P. Wessels 135, S.L. Weyhmiller 137, J. Wiechula 63, J. Wikne 19, G. Wilk 79, J. Wilkinson 97, G.A. Willems 135, B. Windelband 94, M. Winn 128, J.R. Wright 108, W. Wu39, Y. Wu 118, R. Xu 6, A. Yadav 42, A.K. Yadav 132, S. Yalcin 71, Y. Yamaguchi 92, S. Yang20, S. Yano 92, Z. Yin 6, I.-K. Yoo 16, J.H. Yoon 57, S. Yuan20, A. Yuncu 94, V. Zaccolo 23, C. Zampolli 32, F. Zanone 94, N. Zardoshti 32, A. Zarochentsev 140, P. Závada 61, N. Zaviyalov140, M. Zhalov 140, B. Zhang 6, L. Zhang 39, S. Zhang 39, X. Zhang 6, Y. Zhang118, Z. Zhang 6, M. Zhao 10, V. Zherebchevskii 140, Y. Zhi10, D. Zhou 6, Y. Zhou 83, J. Zhu 97,6, Y. Zhu6, S.C. Zugravel 55, N. Zurlo 131,54 ## Affiliation Notes I Deceased II Also at: Max-Planck-Institut für Physik, Munich, Germany III Also at: Italian National Agency for New Technologies, Energy and Sustainable Economic Development (ENEA), Bologna, Italy IV Also at: Department of Applied Physics, Aligarh Muslim University, Aligarh, India V Also at: Institute of Theoretical Physics, University of Wroclaw, Poland VI Also at: An institution covered by a cooperation agreement with CERN ## Collaboration Institutes 1 A.I. 
Alikhanyan National Science Laboratory (Yerevan Physics Institute) Foundation, Yerevan, Armenia 2 AGH University of Science and Technology, Cracow, Poland 3 Bogolyubov Institute for Theoretical Physics, National Academy of Sciences of Ukraine, Kiev, Ukraine 4 Bose Institute, Department of Physics and Centre for Astroparticle Physics and Space Science (CAPSS), Kolkata, India 5 California Polytechnic State University, San Luis Obispo, California, United States 6 Central China Normal University, Wuhan, China 7 Centro de Aplicaciones Tecnológicas y Desarrollo Nuclear (CEADEN), Havana, Cuba 8 Centro de Investigación y de Estudios Avanzados (CINVESTAV), Mexico City and Mérida, Mexico 9 Chicago State University, Chicago, Illinois, United States 10 China Institute of Atomic Energy, Beijing, China 11 Chungbuk National University, Cheongju, Republic of Korea 12 Comenius University Bratislava, Faculty of Mathematics, Physics and Informatics, Bratislava, Slovak Republic 13 COMSATS University Islamabad, Islamabad, Pakistan 14 Creighton University, Omaha, Nebraska, United States 15 Department of Physics, Aligarh Muslim University, Aligarh, India 16 Department of Physics, Pusan National University, Pusan, Republic of Korea 17 Department of Physics, Sejong University, Seoul, Republic of Korea 18 Department of Physics, University of California, Berkeley, California, United States 19 Department of Physics, University of Oslo, Oslo, Norway 20 Department of Physics and Technology, University of Bergen, Bergen, Norway 21 Dipartimento di Fisica, Università di Pavia, Pavia, Italy 22 Dipartimento di Fisica dell’Università and Sezione INFN, Cagliari, Italy 23 Dipartimento di Fisica dell’Università and Sezione INFN, Trieste, Italy 24 Dipartimento di Fisica dell’Università and Sezione INFN, Turin, Italy 25 Dipartimento di Fisica e Astronomia dell’Università and Sezione INFN, Bologna, Italy 26 Dipartimento di Fisica e Astronomia dell’Università and Sezione INFN, Catania, Italy 27 Dipartimento di Fisica e Astronomia dell’Università and Sezione INFN, Padova, Italy 28 Dipartimento di Fisica ‘E.R. Caianiello’ dell’Università and Gruppo Collegato INFN, Salerno, Italy 29 Dipartimento DISAT del Politecnico and Sezione INFN, Turin, Italy 30 Dipartimento di Scienze MIFT, Università di Messina, Messina, Italy 31 Dipartimento Interateneo di Fisica ‘M. Merlin’ and Sezione INFN, Bari, Italy 32 European Organization for Nuclear Research (CERN), Geneva, Switzerland 33 Faculty of Electrical Engineering, Mechanical Engineering and Naval Architecture, University of Split, Split, Croatia 34 Faculty of Engineering and Science, Western Norway University of Applied Sciences, Bergen, Norway 35 Faculty of Nuclear Sciences and Physical Engineering, Czech Technical University in Prague, Prague, Czech Republic 36 Faculty of Physics, Sofia University, Sofia, Bulgaria 37 Faculty of Science, P.J. 
Šafárik University, Košice, Slovak Republic 38 Frankfurt Institute for Advanced Studies, Johann Wolfgang Goethe- Universität Frankfurt, Frankfurt, Germany 39 Fudan University, Shanghai, China 40 Gangneung-Wonju National University, Gangneung, Republic of Korea 41 Gauhati University, Department of Physics, Guwahati, India 42 Helmholtz-Institut für Strahlen- und Kernphysik, Rheinische Friedrich- Wilhelms-Universität Bonn, Bonn, Germany 43 Helsinki Institute of Physics (HIP), Helsinki, Finland 44 High Energy Physics Group, Universidad Autónoma de Puebla, Puebla, Mexico 45 Horia Hulubei National Institute of Physics and Nuclear Engineering, Bucharest, Romania 46 Indian Institute of Technology Bombay (IIT), Mumbai, India 47 Indian Institute of Technology Indore, Indore, India 48 INFN, Laboratori Nazionali di Frascati, Frascati, Italy 49 INFN, Sezione di Bari, Bari, Italy 50 INFN, Sezione di Bologna, Bologna, Italy 51 INFN, Sezione di Cagliari, Cagliari, Italy 52 INFN, Sezione di Catania, Catania, Italy 53 INFN, Sezione di Padova, Padova, Italy 54 INFN, Sezione di Pavia, Pavia, Italy 55 INFN, Sezione di Torino, Turin, Italy 56 INFN, Sezione di Trieste, Trieste, Italy 57 Inha University, Incheon, Republic of Korea 58 Institute for Gravitational and Subatomic Physics (GRASP), Utrecht University/Nikhef, Utrecht, Netherlands 59 Institute of Experimental Physics, Slovak Academy of Sciences, Košice, Slovak Republic 60 Institute of Physics, Homi Bhabha National Institute, Bhubaneswar, India 61 Institute of Physics of the Czech Academy of Sciences, Prague, Czech Republic 62 Institute of Space Science (ISS), Bucharest, Romania 63 Institut für Kernphysik, Johann Wolfgang Goethe-Universität Frankfurt, Frankfurt, Germany 64 Instituto de Ciencias Nucleares, Universidad Nacional Autónoma de México, Mexico City, Mexico 65 Instituto de Física, Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, Brazil 66 Instituto de Física, Universidad Nacional Autónoma de México, Mexico City, Mexico 67 iThemba LABS, National Research Foundation, Somerset West, South Africa 68 Jeonbuk National University, Jeonju, Republic of Korea 69 Johann-Wolfgang-Goethe Universität Frankfurt Institut für Informatik, Fachbereich Informatik und Mathematik, Frankfurt, Germany 70 Korea Institute of Science and Technology Information, Daejeon, Republic of Korea 71 KTO Karatay University, Konya, Turkey 72 Laboratoire de Physique des 2 Infinis, Irène Joliot-Curie, Orsay, France 73 Laboratoire de Physique Subatomique et de Cosmologie, Université Grenoble- Alpes, CNRS-IN2P3, Grenoble, France 74 Lawrence Berkeley National Laboratory, Berkeley, California, United States 75 Lund University Department of Physics, Division of Particle Physics, Lund, Sweden 76 Nagasaki Institute of Applied Science, Nagasaki, Japan 77 Nara Women’s University (NWU), Nara, Japan 78 National and Kapodistrian University of Athens, School of Science, Department of Physics , Athens, Greece 79 National Centre for Nuclear Research, Warsaw, Poland 80 National Institute of Science Education and Research, Homi Bhabha National Institute, Jatni, India 81 National Nuclear Research Center, Baku, Azerbaijan 82 National Research and Innovation Agency - BRIN, Jakarta, Indonesia 83 Niels Bohr Institute, University of Copenhagen, Copenhagen, Denmark 84 Nikhef, National institute for subatomic physics, Amsterdam, Netherlands 85 Nuclear Physics Group, STFC Daresbury Laboratory, Daresbury, United Kingdom 86 Nuclear Physics Institute of the Czech Academy of Sciences, Husinec-Řež, 
Czech Republic 87 Oak Ridge National Laboratory, Oak Ridge, Tennessee, United States 88 Ohio State University, Columbus, Ohio, United States 89 Physics department, Faculty of science, University of Zagreb, Zagreb, Croatia 90 Physics Department, Panjab University, Chandigarh, India 91 Physics Department, University of Jammu, Jammu, India 92 Physics Program and International Institute for Sustainability with Knotted Chiral Meta Matter (SKCM2), Hiroshima University, Hiroshima, Japan 93 Physikalisches Institut, Eberhard-Karls-Universität Tübingen, Tübingen, Germany 94 Physikalisches Institut, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany 95 Physik Department, Technische Universität München, Munich, Germany 96 Politecnico di Bari and Sezione INFN, Bari, Italy 97 Research Division and ExtreMe Matter Institute EMMI, GSI Helmholtzzentrum für Schwerionenforschung GmbH, Darmstadt, Germany 98 Saga University, Saga, Japan 99 Saha Institute of Nuclear Physics, Homi Bhabha National Institute, Kolkata, India 100 School of Physics and Astronomy, University of Birmingham, Birmingham, United Kingdom 101 Sección Física, Departamento de Ciencias, Pontificia Universidad Católica del Perú, Lima, Peru 102 Stefan Meyer Institut für Subatomare Physik (SMI), Vienna, Austria 103 SUBATECH, IMT Atlantique, Nantes Université, CNRS-IN2P3, Nantes, France 104 Sungkyunkwan University, Suwon City, Republic of Korea 105 Suranaree University of Technology, Nakhon Ratchasima, Thailand 106 Technical University of Košice, Košice, Slovak Republic 107 The Henryk Niewodniczanski Institute of Nuclear Physics, Polish Academy of Sciences, Cracow, Poland 108 The University of Texas at Austin, Austin, Texas, United States 109 Universidad Autónoma de Sinaloa, Culiacán, Mexico 110 Universidade de São Paulo (USP), São Paulo, Brazil 111 Universidade Estadual de Campinas (UNICAMP), Campinas, Brazil 112 Universidade Federal do ABC, Santo Andre, Brazil 113 University of Cape Town, Cape Town, South Africa 114 University of Houston, Houston, Texas, United States 115 University of Jyväskylä, Jyväskylä, Finland 116 University of Kansas, Lawrence, Kansas, United States 117 University of Liverpool, Liverpool, United Kingdom 118 University of Science and Technology of China, Hefei, China 119 University of South-Eastern Norway, Kongsberg, Norway 120 University of Tennessee, Knoxville, Tennessee, United States 121 University of the Witwatersrand, Johannesburg, South Africa 122 University of Tokyo, Tokyo, Japan 123 University of Tsukuba, Tsukuba, Japan 124 University Politehnica of Bucharest, Bucharest, Romania 125 Université Clermont Auvergne, CNRS/IN2P3, LPC, Clermont-Ferrand, France 126 Université de Lyon, CNRS/IN2P3, Institut de Physique des 2 Infinis de Lyon, Lyon, France 127 Université de Strasbourg, CNRS, IPHC UMR 7178, F-67000 Strasbourg, France, Strasbourg, France 128 Université Paris-Saclay Centre d’Etudes de Saclay (CEA), IRFU, Départment de Physique Nucléaire (DPhN), Saclay, France 129 Università degli Studi di Foggia, Foggia, Italy 130 Università del Piemonte Orientale, Vercelli, Italy 131 Università di Brescia, Brescia, Italy 132 Variable Energy Cyclotron Centre, Homi Bhabha National Institute, Kolkata, India 133 Warsaw University of Technology, Warsaw, Poland 134 Wayne State University, Detroit, Michigan, United States 135 Westfälische Wilhelms-Universität Münster, Institut für Kernphysik, Münster, Germany 136 Wigner Research Centre for Physics, Budapest, Hungary 137 Yale University, New Haven, Connecticut, United States 138 
Yonsei University, Seoul, Republic of Korea 139 Zentrum für Technologie und Transfer (ZTT), Worms, Germany 140 Affiliated with an institute covered by a cooperation agreement with CERN 141 Affiliated with an international laboratory covered by a cooperation agreement with CERN.
# Exploring the Transferability of a Foundation Model for Fundus Images: Application to Hypertensive Retinopathy

Julio Silva-Rodriguez 1, Jihed Chelbi 2, Waziha Kabir 2, Hadi Chakor 2, Jose Dolz 1, Ismail Ben Ayed 1, Riadh Kobbi 2

1 ETS Montreal, Quebec, Canada (<EMAIL_ADDRESS>)  2 DIAGNOS Inc., Quebec, Canada

###### Abstract

Using deep learning models pre-trained on Imagenet is the traditional solution for medical image classification to deal with data scarcity. Nevertheless, relevant literature supports that this strategy may offer limited gains due to the high dissimilarity between domains. Currently, the paradigm of adapting domain-specialized foundation models is proving to be a promising alternative. However, how to perform such knowledge transfer, and the benefits and limitations it presents, are still under study. The CGI-HRDC challenge for Hypertensive Retinopathy diagnosis on fundus images introduces an appealing opportunity to evaluate the transferability of a recently released vision-language foundation model of the retina, FLAIR [42]. In this work, we explore the potential of using FLAIR features as a starting point for fundus image classification, and we compare its performance with that of Imagenet initialization under two popular transfer learning methods: Linear Probing (LP) and Fine-Tuning (FT). Our empirical observations suggest that in no case does the traditional strategy provide performance gains. In contrast, direct transferability from the FLAIR model yields gains of $\sim 2.5\%$. When fine-tuning the whole network, the performance gap increases up to $\sim 4\%$. In this case, we show that avoiding feature deterioration via LP initialization of the classifier allows the best re-use of the rich pre-trained features.  Although direct transferability using LP still offers limited performance, we believe that foundation models such as FLAIR will drive the evolution of deep-learning-based fundus image analysis.

###### Keywords: Foundation Models, Transfer Learning, Hypertensive Retinopathy.

## 1 Introduction

A foundation model for image understanding is a generic deep learning model pre-trained on a large dataset, serving as a base for developing specialized vision models through fine-tuning on task-specific data. Recently, foundation models trained on natural images have gained popularity thanks to the impressive resource-efficient transferability capabilities they present. Successful examples include models pre-trained on ImageNet, vision-language pre-training such as CLIP [39] or ALIGN [20], or models for image segmentation such as SAM [23]. Despite their promising results in the natural image context, these models have shown limited performance when transferred to expert fields such as medical image analysis [47, 8, 10]. Although the limited benefit of transfer learning from large pre-trained models in the presence of a large domain gap is not new [40], these observations have encouraged the recent development of foundation models specialized in concrete medical domains (see Figure 1). As a result, a paradigm shift is occurring in this field. The use of specialized foundation models promises to improve the efficiency of the resources needed to create task-specific solutions, in both samples and computational power. Some successful models have been developed for radiology [47], histology [33], fundus images [42], volumetric segmentation [30, 43], and 2D image segmentation [4].
However, the potential of the pre-train and adapt paradigm remains largely unexplored in many medical imaging domains. This motivates the realization of empirical studies to analyze the benefits of such models in comparison with the more traditional paradigms. Figure 1: Standard vs. Foundation Model Paradigms. Deep learning solutions on medical image analysis are traditionally built upon models pre-trained on ImageNet to alleviate the need for large datasets. Nevertheless, the benefits of transfer learning might be limited when a substantial domain gap from source to target exists [40]. Foundation models on specific domains, such as FLAIR [42] for fundus image analysis, which is pre-trained on heterogeneous data sources and tasks, offer better resource-efficient transferability to new tasks. The CGI-HRDC Challenge for Hypertensive Retinopathy diagnosis through fundus images constitutes an ideal setting to study the potential of foundation models. The analysis of hypertensive retinopathy is burdened by the necessary manual inspection of fundus images from experienced ophthalmologists. Therefore, it is paramount to provide ophthalmologists with an accurate computer system that facilitates the analysis of the course of the disease. Moreover, the scarcity of available data sources with hypertensive cases further challenges the development of task-specific deep learning models. Thus, the objective of this work is to study the limitations and potential of a recently released foundation model for fundus image analysis, FLAIR [42], and compare its transferability capabilities for Hypertensive Retinopathy detection, in comparison with standard solutions using models pre-trained on Imagenet. ## 2 Related Works ### 2.1 Transfer learning on fundus images Deep learning has achieved remarkable performance on a wide variety of fundus image analysis tasks, and offers a potential solution for large-scale screening and early detection of ophthalmologic conditions [2, 3]. Among others, outstanding applications include diabetic retinopathy grading [7, 32], cataract diagnosis [19], lupus detection [31], or multi-disease classification [41, 21]. Nevertheless, training such models from scratch demands substantial datasets and extensive computational resources [11]. In the medical domain, specifically in fundus image analysis, achieving the prerequisite of large datasets is often unattainable, and the norm involves working with small, task-specific datasets. Consequently, transfer learning from natural images has emerged as the primary approach for medical image classification [40]. However, empirical studies have revealed that transfer learning may yield limited performance improvements in specific medical image classification scenarios [40, 35], in which a large inter-domain gap exists [1]. These limitations have motivated the use of pre-trained models for further transferability to downstream tasks. For example, self-supervised [44] or task-specific pre-training [32] using public datasets have shown promising improvements for diabetic retinopathy grading. However, it is important to note that task-specific models are prone to produce too specific inductive biases on specific features, resulting in poor generalization when transferred to other less-related tasks [42]. 
In this context, vision-language pre-training has emerged as a promising solution for grouping heterogeneous data sources and tasks for pre-training, aligned through text supervision, and thus capturing generic features and representations in large foundation models. This strategy has shown promising transferability performance in the medical context for radiology [47], histology [33], and recently in fundus images [42].

### 2.2 FLAIR

The foundation model FLAIR [42] (A Foundation LAnguage Image model of the Retina), available at https://github.com/jusiro/FLAIR, is a recently released pre-trained model for universal disease detection on fundus images through text supervision, which has shown remarkable transferability to downstream tasks even on unseen diseases.

FLAIR pre-training datasets. The foundation model was built using an assembly dataset from $37$ publicly available sources, which include up to 286,916 fundus images from heterogeneous tasks, consisting of $96$ different categories. These tasks include diabetic retinopathy grading: EYEPACS (https://www.kaggle.com/c/diabetic-retinopathy-detection), IDRID [38], JICHI [45], PARAGUAY [5], SYSU [29], OIA-DDR [28] and BRSET [34]; Glaucoma detection: LAG [27], PAPILA [24], CHAKSU [26] and AIROGS [46]; lesion segmentation: DR1-2 [37], SYSU [29], OIA-DDR [28] and HEI-MED [12]; image description: EYENET [18], ODIR-5K (https://odir2019.grand-challenge.org/), and STARE [16, 17]; and the detection of other diseases: RFMid [36], 1000x39 [6], BRSET [34] and FUND-OCT1 [14, 13]. From the last group, it is worth mentioning that nearly $400$ samples from two different datasets contained hypertensive retinopathy findings, which constitutes less than $0.2\%$ of the entire assembly dataset.

Model architecture. The FLAIR model consists of a vision encoder, ResNet-50 [15], and a text encoder, with the architecture of BioClinicalBert (https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT), which take as input a fundus image and a text prompt describing its content, respectively. The produced individual modality embeddings are projected into an l2-normalized multimodal space.

Optimization criteria. The foundation model is pre-trained using a contrastive vision-language alignment approach, aiming to create a multimodal feature representation in which images and expert knowledge descriptors of the same category are similar while maximizing differences between unrelated samples. This three-dimensional alignment, encompassing image, text, and categories, results in a more comprehensive and richer representation through text semantics, able to inter-correlate different conditions (e.g. diabetic retinopathy and microaneurysms) by efficiently leveraging expert domain knowledge.

### 2.3 Transferability

In the context of foundation models, transferability refers to the process of using or adapting the features learned in large pre-trained models to downstream tasks and related domains. In this work, we focus on transferability in the medium data regime, where a few hundred training examples are available, and we explore only adaptation through the vision encoder. Two popular transfer learning methods are Linear Probing (LP) and Fine-Tuning (FT). The former involves direct transferability of the features by adjusting only the linear classifier. For the latter, all the parameters of the model are re-trained on the target dataset.
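To make the distinction concrete, the following minimal PyTorch sketch shows which parameters are trainable in each setting. It is purely illustrative: a torchvision ResNet-50 with ImageNet weights stands in for the pre-trained vision encoder (the FLAIR weights would instead be loaded from its repository), and the helper name is ours, not code from FLAIR or the challenge.

```python
import torch.nn as nn
from torchvision.models import resnet50

def build_classifier(mode: str, num_classes: int = 1) -> nn.Module:
    """Illustrative LP vs. FT setup. An ImageNet-initialised ResNet-50 stands in
    for the pre-trained vision encoder; FLAIR weights would be loaded instead."""
    backbone = resnet50(weights="IMAGENET1K_V1")
    feat_dim = backbone.fc.in_features       # 2048-d global features
    backbone.fc = nn.Identity()              # expose the feature vector
    head = nn.Linear(feat_dim, num_classes)  # task-specific linear classifier

    if mode == "LP":
        # Linear Probing: the encoder is frozen, only the linear head is trained.
        for p in backbone.parameters():
            p.requires_grad = False
    elif mode != "FT":
        # Fine-Tuning ("FT") leaves every parameter trainable.
        raise ValueError(f"unknown mode: {mode}")
    return nn.Sequential(backbone, head)
```

Under LP only the head receives gradient updates, whereas under FT the optimizer is given every parameter of the model.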
Fine-tuning all layers of a network can modify the pre-trained features by adapting/improving them to the downstream task, while linear probing, on the other hand, only relies on the frozen features without any further adjustments. ## 3 Method: Transfer Learning from FLAIR model In this work, we aim to explore the potential and limitations of transferring a general-purpose foundation model of the retina for the challenging task of Hypertensive Retinopathy. In particular, we focus on adapting the image encoder from the recently published FLAIR [42] model. Pre-processing. The fundus images are processed accordingly to the foundation model pre-training. Concretely, the samples are resized to $800\times 800$ pixels, and the intensity is scaled between $[0,1]$. Linear Probe (LP) adaptation. For LP adaptation, a classification head is trained over the features extracted from the pre-trained FLAIR model. Two feature representations are considered for LP adaptation: the vision encoder representation (LP (vision)), and the multimodal vision-language projection (LP (proj)). Fine-Tuning (FT). In this setting, a classification head is initialized with random weights, which uses as input the vision encoder features, and the whole network is retrained on the target task. Concretely, the encoder and classifier are trained to minimize the binary cross-entropy between reference and predicted sigmoid scores via stochastic gradient descent. Linear Probe and Fine-Tuning (LP+FT). Last, we follow a recently popularized two-step strategy. First, the classifier is trained with the backbone frozen as in LP, and then the whole network is regularly fine-tuned to the objective task [25, 22]. ## 4 Experiments ### 4.1 Dataset The CGI-HRDC dataset comprises two different tasks: Task 1 involves hypertension classification, determining whether the patient has hypertension, while Task 2 focuses on Hypertensive Retinopathy detection, aiming to identify signs of Hypertensive Retinopathy in the target fundus image. For each task, the development dataset includes 712 samples for training. In addition, the Challenge includes 288 cases for testing for each task, which remain unavailable during the development stage. The samples consist of macula- centered fundus images, each with dimensions of $800\times 800$ pixels. ### 4.2 Implementation details The pre-trained FLAIR vision encoder is transferred to the different tasks related to hypertensive retinopathy diagnosis using the strategies indicated in Section 3. For LP adaptation, We follow the same solver as in CLIP [39], and we applied class weights to account for class imbalances. For full backbone fine-tuning, ADAM is used as an optimizer with an initial learning rate of $1e-4$, and training is carried out using mini-batches of $4$ images, during $20$ epochs. To account for class imbalance, a re-sampling strategy of the minority class is carried out. Data augmentation is applied for each iteration using random horizontal flips, rotations of $[-5,5]$ degrees, zoom scaling in the range $[0.9,1.1]$, and color jitter. Also, the convergence is tracked on the internal validation set, and the best model in this subset is saved as the final solution for evaluation. For each stage of LP+FT method, we follow the same aforementioned implementation details. The adaptation code was part of the official FLAIR repository, publicly accessible at: https://github.com/jusiro/FLAIR. 
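As a concrete illustration of the fine-tuning stage described above, the sketch below wires together the stated settings ($800\times 800$ inputs scaled to $[0,1]$, ADAM with learning rate $1e-4$, mini-batches of 4, 20 epochs, binary cross-entropy on sigmoid scores, and the listed augmentations). It is a simplified approximation, not the official adaptation code from the repository above; in particular, the colour-jitter strength and the use of validation loss for checkpoint selection are our assumptions, and minority-class re-sampling is assumed to happen in the data loader.

```python
import copy
import torch
import torch.nn as nn
from torchvision import transforms

# Augmentations matching the listed settings; jitter strength is an assumption.
# train_tf would be attached to the training dataset.
train_tf = transforms.Compose([
    transforms.Resize((800, 800)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomAffine(degrees=5, scale=(0.9, 1.1)),  # +-5 deg rotation, 0.9-1.1 zoom
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),  # scales intensities to [0, 1]
])

def fine_tune(model, train_loader, val_loader, epochs=20, lr=1e-4, device="cuda"):
    """FT stage sketch: BCE on sigmoid scores, ADAM, keep the checkpoint with the
    lowest internal-validation loss (a stand-in for the tracked metric)."""
    model = model.to(device)
    criterion = nn.BCEWithLogitsLoss()  # numerically stable BCE on sigmoid scores
    optimizer = torch.optim.Adam(
        (p for p in model.parameters() if p.requires_grad), lr=lr)
    best_state, best_val = copy.deepcopy(model.state_dict()), float("inf")

    for _ in range(epochs):
        model.train()
        for images, labels in train_loader:  # batch_size=4, minority class re-sampled
            images, labels = images.to(device), labels.float().to(device)
            optimizer.zero_grad()
            loss = criterion(model(images).squeeze(1), labels)
            loss.backward()
            optimizer.step()

        # Track the best checkpoint on the internal validation split.
        model.eval()
        val_loss, n = 0.0, 0
        with torch.no_grad():
            for images, labels in val_loader:
                images, labels = images.to(device), labels.float().to(device)
                val_loss += criterion(model(images).squeeze(1), labels).item() * len(labels)
                n += len(labels)
        if val_loss / n < best_val:
            best_val, best_state = val_loss / n, copy.deepcopy(model.state_dict())

    model.load_state_dict(best_state)
    return model
```

For LP+FT, the same loop is run after first fitting the linear classifier on frozen features (the LP setting sketched earlier) and then unfreezing the backbone.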
### 4.3 Baselines

To evaluate the benefits of using a domain-specific foundation model for transferring feature representations, we use the ResNet-50 [15] (the same vision backbone used in FLAIR) with weights pre-trained on ImageNet [9] for natural image classification. In particular, the different transfer learning strategies set for FLAIR are applied to this model for adaptation to the challenge tasks. The hyperparameters and implementation details of these baselines were the same as for the foundation model adaptation. Hereafter, we refer to this weight initialization as Imagenet.

### 4.4 Evaluation protocol and metrics

During the method development stage, a $5$-fold cross-validation partition is performed on the CGI-HRDC development dataset to evaluate the different proposed methods. In each fold iteration, $20\%$ of the training samples of each class are randomly retrieved for evaluation, while $70\%$ is used for training and $10\%$ for internal validation. The evaluation metrics used are the Kappa, F1 score, and specificity, which are averaged into a global score. All metrics are averaged fold-wise during the cross-validation stage.
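For reference, such a global score can be computed as in the following sketch, assuming binary labels and standard scikit-learn definitions; the exact challenge formulas (e.g. whether the Kappa is weighted) are not restated in the text, so unweighted Cohen's kappa is assumed.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix, f1_score

def challenge_score(y_true, y_pred):
    """Average of Kappa, F1 and specificity for one binary task (one fold)."""
    kappa = cohen_kappa_score(y_true, y_pred)
    f1 = f1_score(y_true, y_pred)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    specificity = tn / (tn + fp)
    return np.mean([kappa, f1, specificity])

# Fold-wise scores are then averaged over the 5 cross-validation folds.
```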
## 5 Results

### 5.1 Development dataset results

The cross-validation results obtained on the training subset using the different strategies for adapting the FLAIR model and the corresponding baselines for hypertensive classification (Task 1) and Hypertensive Retinopathy detection (Task 2) are presented in Table 1 and Table 2, respectively.

Table 1: Cross-Validation results for Task 1: Hypertensive classification. LP: Linear Probe; FT: Fine-Tuning; proj: projection. † marks the method submitted for the testing phase.

| Method | Kappa | F1 | Specificity | Avg. |
|---|---|---|---|---|
| Imagenet - LP | 0.324(0.039) | 0.666(0.019) | 0.651(0.035) | 0.547 |
| Imagenet - FT | 0.335(0.112) | 0.659(0.078) | 0.682(0.019) | 0.558 |
| Imagenet - LP+FT | 0.389(0.074) | 0.711(0.023) | 0.637(0.113) | 0.579 |
| FLAIR - LP (proj) | 0.240(0.037) | 0.593(0.017) | 0.685(0.051) | 0.506 |
| FLAIR - LP (vision)† | 0.358(0.066) | 0.680(0.033) | 0.676(0.035) | 0.571 |
| FLAIR - FT | 0.366(0.110) | 0.697(0.039) | 0.640(0.121) | 0.567 |
| FLAIR - LP+FT | 0.420(0.043) | 0.703(0.026) | 0.730(0.058) | 0.617 |

Table 2: Cross-Validation results for Task 2: Hypertensive Retinopathy classification. LP: Linear Probe; FT: Fine-Tuning; proj: projection. † marks the method submitted for the testing phase.

| Method | Kappa | F1 | Specificity | Avg. |
|---|---|---|---|---|
| Imagenet - LP | 0.404(0.068) | 0.652(0.040) | 0.740(0.040) | 0.598 |
| Imagenet - FT | 0.623(0.049) | 0.770(0.030) | 0.874(0.049) | 0.755 |
| Imagenet - LP+FT | 0.636(0.103) | 0.781(0.061) | 0.869(0.049) | 0.762 |
| FLAIR - LP (proj) | 0.258(0.089) | 0.533(0.068) | 0.759(0.045) | 0.516 |
| FLAIR - LP (vision)† | 0.439(0.052) | 0.670(0.033) | 0.764(0.034) | 0.624 |
| FLAIR - FT | 0.622(0.027) | 0.772(0.017) | 0.862(0.062) | 0.752 |
| FLAIR - LP+FT | 0.695(0.060) | 0.816(0.034) | 0.893(0.062) | 0.801 |

The obtained results reveal the benefit of using foundation models pre-trained on medical domains.

Linear Probe (LP) adaptation. Direct transferability (i.e. LP) of FLAIR features improves the score by $\sim+2.5\%$ compared to Imagenet features on both tasks. It is worth mentioning that, in the case of FLAIR, using the features of the multimodal projection results in a significant performance drop. Although this feature representation is commonly used for the transferability of vision-language pre-trained models in other works (e.g. CLIP [39], MedCLIP [47]), our empirical results show that it might produce suboptimal solutions. This may be caused by the specific patterns of Hypertensive Retinopathy and the low prevalence of this condition in the FLAIR pre-training dataset ($<0.2\%$). Thus, tuning the vision encoder for this task seems necessary in this case.

Fine-Tuning (FT). After fine-tuning, the obtained performance increases notably for Task 2, while modest improvements are observed for Task 1. In this case, minor differences between Imagenet and FLAIR initialization can be observed. Interestingly, for Task 1, plain LP outperforms FT of the whole network. As is widely known, full FT is an aggressive adaptation strategy that might distort pre-trained features [25].

Linear Probe and Fine-Tuning (LP+FT). When the classifier is initialized via LP, the benefits of a domain-specific foundation model become apparent. This solution prevents the distortion of pre-trained features, and the performance consistently improves by $\sim+4\%$ compared to using Imagenet representations. Although the benefits of LP+FT have been previously reported for regular fine-tuning [22] and out-of-distribution inference [25], our empirical results suggest that the quality of the initialization features and classifier for the target domain also plays an important role in this setting.

Performance discrepancies between tasks. The results obtained in Task 1 are consistently worse than those observed in Task 2. This is likely due to the difficulty of the target task: while hypertension is a systemic condition of the patient, which may leave only scarce visible traces in a particular eye, Hypertensive Retinopathy implies the presence of disease signs in the retina of the target fundus image.

### 5.2 CGI-HRDC hidden test results

After the development stage, we decided to use the Linear Probe adaptation with the FLAIR vision encoder features (i.e. FLAIR - LP (vision) in Tables 1 and 2) as our solution for the CGI-HRDC challenge. Although this was not the best method on the cross-validation set, the motivation behind this decision was to test the direct transferability of the foundation model in a real use case. Thus, a classifier for each task was trained on top of the frozen vision encoder of FLAIR using the whole challenge development dataset. Under this setting, a global average score of $0.500$ (3rd on the official test Leaderboard) and $0.545$ (2nd on the official test Leaderboard) was obtained for Task 1 and Task 2, respectively. It is worth mentioning that the proposed method experiences a consistent drop of $\sim-8\%$ with respect to the cross-validation stage, which might be caused by disparities in class balance or the presence of harder samples in the hidden test subset.

## 6 Conclusions

In this work, we have explored the transferability of a foundation model for fundus images, FLAIR [42], to tasks related to Hypertensive Retinopathy detection, in the context of the CGI-HRDC challenge. The FLAIR model, although pre-trained through contrastive vision-language alignment on a wide variety of fundus conditions, contains less than $0.2\%$ of training samples with pathologies related to hypertension. Still, the learned feature representations show promising capability for direct transferability on such a challenging task, with gains of $\sim+4\%$ compared to pre-training on Imagenet.
Nevertheless, the modest results obtained using Linear Probing in comparison with other methods participating in the challenge highlight the current limitations of direct transferability for reaching state-of-the-art performance on medium-sized datasets. Thus, we have also explored fine-tuning the whole model for adaptation. In all cases, using the model pre-trained on Imagenet - which is the de facto solution for transfer learning in medical image analysis - has shown no advantage compared to using FLAIR. In particular, preventing feature distortion of the foundation model through Linear Probing initialization showed promising benefits for both tasks. We believe that developing foundation models on medical domains and enhancing the adaptation of their rich feature representations to downstream tasks is an appealing future direction for medical image analysis and, more specifically, for the characterization of fundus images.

## Acknowledgments

The work of J. Silva-Rodríguez was partially funded by the Fonds de recherche du Québec (FRQ) under the Postdoctoral Merit Scholarship for Foreign Students (PBEEE).

## References

* [1] Azizpour, H., Razavian, A.S., Sullivan, J., Maki, A., Carlsson, S.: Factors of transferability for a generic convnet representation. In: CVPR Workshop: DeepVision (6 2014)
* [2] Balyen, L., Peto, T.: Promising artificial intelligence–machine learning–deep learning algorithms in ophthalmology. Asia-Pacific Journal of Ophthalmology 8, 264–272 (2019)
* [3] Bellemo, V., Lim, Z.W., Lim, G., Nguyen, Q.D., Xie, Y., Yip, M.Y., Hamzah, H., Ho, J., Lee, X.Q., Hsu, W., Lee, M.L., Musonda, L., Chandran, M., Chipalo-Mutati, G., Muma, M., Tan, G.S., Sivaprasad, S., Menon, G., Wong, T.Y., Ting, D.S.: Artificial intelligence using deep learning to screen for referable and vision-threatening diabetic retinopathy in Africa: a clinical validation study. The Lancet Digital Health 1, e35–e44 (2019)
* [4] Butoi, V.I., Ortiz, J.J.G., Ma, T., Sabuncu, M.R., Guttag, J., Dalca, A.V.: Universeg: Universal medical image segmentation. In: ArXiv Preprint (4 2023), http://arxiv.org/abs/2304.06131
* [5] Castillo Benítez, V.E., Castro Matto, I., Mello Román, J.C., Vázquez Noguera, J.L., García-Torres, M., Ayala, J., Pinto-Roa, D.P., Gardel-Sotomayor, P.E., Facon, J., Grillo, S.A.: Dataset from fundus images for the study of diabetic retinopathy. Data in Brief 36, 107068 (2021)
* [6] Cen, L.P., Ji, J., Lin, J.W., Ju, S.T., Lin, H.J., Li, T.P., Wang, Y., Yang, J.F., Liu, Y.F., Tan, S., Tan, L., Li, D., Wang, Y., Zheng, D., Xiong, Y., Wu, H., Jiang, J., Wu, Z., Huang, D., Shi, T., Chen, B., Yang, J., Zhang, X., Luo, L., Huang, C., Zhang, G., Huang, Y., Ng, T.K., Chen, H., Chen, W., Pang, C.P., Zhang, M.: Automatic detection of 39 fundus diseases and conditions in retinal photographs using deep neural networks. Nature Communications 12, 4828 (12 2021)
* [7] Chandrasekaran, R., Loganathan, B.: Retinopathy grading with deep learning and wavelet hyper-analytic activations. The Visual Computer p. 2741–2756 (2023)
* [8] Cheng, D., Qin, Z., Jiang, Z., Zhang, S., Lao, Q., Li, K.: Sam on medical images: A comprehensive study on three prompt modes. In: ArXiv Preprint (2023)
* [9] Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., Fei-Fei, L.: Imagenet: A large-scale hierarchical image database. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR). pp.
1–8 (2009) * [10] Deng, R., Cui, C., Liu, Q., Yao, T., Remedios, L.W., Bao, S., Landman, B.A., Wheless, L.E., Coburn, L.A., Wilson, K.T., Wang, Y., Zhao, S., Fogo, A.B., Yang, H., Tang, Y., Huo, Y.: Segment anything model (sam) for digital pathology: Assess zero-shot segmentation on whole slide imaging. In: ArXiv Preprint (2023) * [11] Erhan, D., Manzagol, P.A., Bengio, Y., Bengio, S., Vincent, P.: The difficulty of training deep architectures and the effect of unsupervised pre-training. In: Proceedings of the International Conference on Artificial Intelligence and Statistics (PMLR). pp. 153–160 (2009) * [12] Giancardo, L., Meriaudeau, F., Karnowski, T.P., Li, Y., Garg, S., Tobin, K.W., Chaum, E.: Exudate-based diabetic macular edema detection in fundus images using publicly available datasets. Medical Image Analysis 16, 216–226 (1 2012) * [13] Hassan, T., Akram, M.U., Masood, M.F., Yasin, U.: Deep structure tensor graph search framework for automated extraction and characterization of retinal layers and fluid pathology in retinal sd-oct scans. Computers in Biology and Medicine 105, 112–124 (2 2019) * [14] Hassan, T., Akram, M.U., Werghi, N., Nazir, M.N.: Rag-fw: A hybrid convolutional framework for the automated extraction of retinal lesions and lesion-influenced grading of human retinal pathology. IEEE Journal of Biomedical and Health Informatics 25(1), 108–120 (2021) * [15] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR). pp. 1–12 (12 2016) * [16] Hoover, A.: Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response. IEEE Transactions on Medical Imaging 19, 203–210 (2000) * [17] Hoover, A., Goldbaum, M.: Locating the optic nerve in a retinal image using the fuzzy convergence of the blood vessels. IEEE Transactions on Medical Imaging 22, 951–958 (8 2003) * [18] Huang, J.H., Yang, C.H.H., Liu, F., Tian, M., Liu, Y.C., Wu, T.W., Lin, I.H., Wang, K., Morikawa, H., Chang, H., Tegner, J., Worring, M.: Deepopht: medical report generation for retinal images via deep models and visual explanation. In: Proceedings of the Winter Conference on Applications of Computer Vision (WACV). pp. 2442–2452 (2021) * [19] Imran, A., Li, J., Pei, Y., Akhtar, F., Mahmood, T., Zhang, L.: Fundus image-based cataract classification using a hybrid convolutional and recurrent neural network. The Visual Computer (2020) * [20] Jia, C., Yang, Y., Xia, Y., Chen, Y.T., Parekh, Z., Pham, H., Le, Q., Sung, Y.H., Li, Z., Duerig, T.: Scaling up visual and vision-language representation learning with noisy text supervision. In: International conference on machine learning. pp. 4904–4916 (2021) * [21] Jin, K., Huang, X., Zhou, J., Li, Y., Yan, Y., Sun, Y., Zhang, Q., Wang, Y., Ye, J.: Fives: A fundus image dataset for artificial intelligence based vessel segmentation. Scientific Data 9, 475 (12 2022) * [22] Kanavati, F., Tsuneki, M.: Partial transfusion: on the expressive influence of trainable batch norm parameters for transfer learning. In: MIDL (2021) * [23] Kirillov, A., Mintun, E., Ravi, N., Mao, H., Rolland, C., Gustafson, L., Xiao, T., Whitehead, S., Berg, A.C., Lo, W.Y., Dollár, P., Girshick, R.: Segment anything. 
In: ArXiv Preprint (2023) * [24] Kovalyk, O., Morales-Sánchez, J., Verdú-Monedero, R., Sellés-Navarro, I., Palazón-Cabanes, A., Sancho-Gómez, J.L.: Papila: Dataset with fundus images and clinical data of both eyes of the same patient for glaucoma assessment. Scientific Data 9, 291 (12 2022) * [25] Kumar, A., Raghunathan, A., Jones, R.M., Ma, T., Liang, P.: Fine-tuning can distort pretrained features and underperform out-of-distribution. In: International Conference on Learning Representations (ICLR) (2022) * [26] Kumar, J.R., Seelamantula, C.S., Gagan, J.H., Kamath, Y.S., Kuzhuppilly, N.I., Vivekanand, U., Gupta, P., Patil, S.: Chaksu: A glaucoma specific fundus image database. Scientific Data 10 (2023) * [27] Li, L., Xu, M., Wang, X., Jiang, L., Liu, H.: Attention based glaucoma detection: A large-scale database and cnn model. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR). pp. 1–10 (2019) * [28] Li, T., Gao, Y., Wang, K., Guo, S., Liu, H., Kang, H.: Diagnostic assessment of deep learning algorithms for diabetic retinopathy screening. Information Sciences 501, 511 – 522 (2019) * [29] Lin, L., Li, M., Huang, Y., Cheng, P., Xia, H., Wang, K., Yuan, J., Tang, X.: The sustech-sysu dataset for automated exudate detection and diabetic retinopathy grading. Scientific Data 7 (12 2020) * [30] Liu, J., Zhang, Y., Chen, J.N., Xiao, J., Lu, Y., Landman, B.A., Yuan, Y., Yuille, A., Tang, Y., Zhou, Z.: Clip-driven universal model for organ segmentation and tumor detection. In: ArXiv Preprint (1 2023), http://arxiv.org/abs/2301.00785 * [31] Liu, R., Wang, T., Li, H., Zhang, P., Li, J., Yang, X., Shen, D., Sheng, B.: Tmm-nets: Transferred multi- to mono-modal generation for lupus retinopathy diagnosis. IEEE Trans Med Imaging 42, 1083–1094 (2023) * [32] Liu, R., Wang, X., Wu, Q., Dai, L., Fang, X., Yan, T., Son, J., Tang, S., Li, J., Gao, Z., Galdran, A., Poorneshwaran, J.M., Liu, H., Wang, J., Chen, Y., Porwal, P., Tan, G.S.W., Yang, X., Dai, C., Song, H., Chen, M., Li, H., Jia, W., Shen, D., Sheng, B., Zhang, P.: Deepdrid: Diabetic retinopathy—grading and image quality estimation challenge. Patterns 3 (2022) * [33] Lu, M.Y., Chen, B., Zhang, A., Williamson, D.F., Chen, R.J., Ding, T., Le, L.P., Chuang, Y.S., Mahmood, F.: Visual language pretrained multiple instance zero-shot transfer for histopathology images. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR) (10 2023) * [34] Nakayama, L.F., Goncalves, M., Zago Ribeiro, L., Santos, H., Ferraz, D., Malerbi, F., Celi, L.A., Regatieri, C.: A brazilian multilabel ophthalmological dataset (brset). In: PhysioNet (2023) * [35] Neyshabur, B., Sedghi, H., Zhang, C.: What is being transferred in transfer learning? In: Advances in Neural Information Processing Systems (NeurIPS) (8 2020) * [36] Pachade, S., Porwal, P., Thulkar, D., Kokare, M., Deshmukh, G., Sahasrabuddhe, V., Giancardo, L., Quellec, G., Mériaudeau, F.: Retinal fundus multi-disease image dataset (rfmid): A dataset for multi-disease detection research. Data 6, 1–14 (2 2021) * [37] Pires, R., Jelinek, H.F., Wainer, J., Valle, E., Rocha, A.: Advancing bag-of-visual-words representations for lesion classification in retinal images. 
PLoS ONE 9 (2014) * [38] Porwal, P., Pachade, S., Kokare, M., Deshmukh, G., Son, J., Bae, W., Liu, L., Wang, J., Liu, X., Gao, L., Wu, T.B., Xiao, J., Wang, F., Yin, B., Wang, Y., Danala, G., He, L., Choi, Y.H., Lee, Y.C., Jung, S.H., Li, Z., Sui, X., Wu, J., Li, X., Zhou, T., Toth, J., Baran, A., Kori, A., Chennamsetty, S.S., Safwan, M., Alex, V., Lyu, X., Cheng, L., Chu, Q., Li, P., Ji, X., Zhang, S., Shen, Y., Dai, L., Saha, O., Sathish, R., Melo, T., Araújo, T., Harangi, B., Sheng, B., Fang, R., Sheet, D., Hajdu, A., Zheng, Y., Mendonça, A.M., Zhang, S., Campilho, A., Zheng, B., Shen, D., Giancardo, L., Quellec, G., Mériaudeau, F.: Idrid: Diabetic retinopathy – segmentation and grading challenge. Medical Image Analysis 59, 101561 (1 2020) * [39] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., Krueger, G., Sutskever, I.: Learning transferable visual models from natural language supervision. In: ArXiv Preprint (2021) * [40] Raghu, M., Zhang, C., Kleinberg, J., Bengio, S.: Transfusion: Understanding transfer learning for medical imaging. In: Advances in neural information processing systems (NeurIPS) (2019) * [41] Salam, A.A., Mahadevappa, M., Das, A., Nair, M.S.: Rdd-net: retinal disease diagnosis network: a computer-aided diagnosis technique using graph learning and feature descriptors. The Visual Computer (2022) * [42] Silva-Rodriguez, J., Chakor, H., Riadh, K., Dolz, J., Ayed, I.B.: A foundation language-image model of the retina (flair): Encoding expert knowledge in text supervision. ArXiv Preprint (2023) * [43] Silva-Rodriguez, J., Dolz, J., Ayed, I.B.: Towards foundation models and few-shot parameter-efficient fine-tuning for volumetric organ segmentation. MICCAI Workshop on foundation models for general medical AI (MedAGI) (2023) * [44] Srinivasan, V., Strodthoff, N., Ma, J., Binder, A., Müller, K.R., Samek, W.: To pretrain or not? a systematic analysis of the benefits of pretraining in diabetic retinopathy. PLoS ONE 17 (10 2022) * [45] Takahashi, H., Tampo, H., Arai, Y., Inoue, Y., Kawashima, H.: Applying artificial intelligence to disease staging: Deep learning for improved staging of diabetic retinopathy. PLoS ONE 12 (6 2017) * [46] de Vente, C., Vermeer, K.A., Jaccard, N., Wang, H., Sun, H., Khader, F., Truhn, D., Aimyshev, T., Zhanibekuly, Y., Le, T.D., Galdran, A., Gonzalez Ballester, M.A., Carneiro, G., G, D.R., S, H.P., Puthussery, D., Liu, H., Yang, Z., Kondo, S., Kasai, S., Wang, E., Durvasula, A., Heras, J., Zapata, M.A., Araujo, T., Aresta, G., Bogunovic, H., Arikan, M., Lee, Y.C., Cho, H.B., Choi, Y.H., Qayyum, A., Razzak, I., van Ginneken, B., Lemij, H.G., Sanchez, C.I.: Airogs: Artificial intelligence for robust glaucoma screening challenge. ArXiv preprint (2023) * [47] Wang, Z., Wu, Z., Agarwal, D., Sun, J.: Medclip: Contrastive learning from unpaired medical images and text. In: Empirical Methods in Natural Language Processing (EMNLP) (10 2022)
# _”With Great Power Comes Great Responsibility!”_: Student and Instructor Perspectives on the influence of LLMs on Undergraduate Engineering Education
Ishika Joshi (IIIT Delhi, New Delhi, India), Ritvik Budhiraja (IIIT Delhi, New Delhi, India), Pranav Deepak Tanna (BITS Pilani, Pilani, India), Lovenya Jain (BITS Pilani, Pilani, India), Mihika Deshpande (BITS Pilani, Pilani, India), Arjun Srivastava (BITS Pilani, Pilani, India), Srinivas Rallapalli (BITS Pilani, Pilani, India), Harshal D Akolekar (IIT Jodhpur, Jodhpur, India), Jagat Sesh Challa (BITS Pilani, Pilani, India) and Dhruv Kumar (IIIT Delhi, New Delhi, India)
###### Abstract.
The rise in popularity of Large Language Models (LLMs) has prompted discussions in academic circles, with students exploring LLM-based tools for coursework inquiries and instructors exploring them for teaching and research. Even though a lot of work is underway to create LLM-based tools tailored for students and instructors, there is a lack of comprehensive user studies that capture the perspectives of students and instructors regarding LLMs. This paper addresses this gap by conducting surveys and interviews within undergraduate engineering universities in India. Using 1306 survey responses from students, 112 student interviews, and 27 instructor interviews around the academic usage of ChatGPT (a popular LLM), this paper offers insights into the current usage patterns, perceived benefits, threats, and challenges, as well as recommendations for enhancing the adoption of LLMs among students and instructors. These insights are further utilized to discuss the practical implications of LLMs in undergraduate engineering education and beyond.
Keywords: ChatGPT, Large Language Models, Education, User Study
CCS Concepts: Human-centered computing → User studies; Applied computing → Education; Computing methodologies → Artificial intelligence
## 1\. Introduction
Large Language Models (LLMs) (Floridi and Chiriatti, 2020) like GPT-3.5 (cha, [n. d.]) & GPT-4 (jag, [n. d.]) and Llama 2 (lla, [n. d.]) have gained immense popularity in recent times due to their remarkable proficiency in understanding and generating human-like language. These models have been trained on large quantities of data available on the internet and have billions of parameters. Their exceptional natural language processing capabilities make them highly valuable for chatbots, virtual assistants, and content creation. These LLMs are poised to have a substantial influence across various domains, including but not limited to healthcare (Reddy, 2023), education (Milano, 2023), legal services (Noonan, 2023), software development (MacNeil et al., 2023) and finance (Deng et al., 2023). Human-Computer Interaction (HCI) scholarship has explored various applications of LLMs.
These include the use of LLMs for mental healthcare interventions (Jo et al., 2023), content and code generation for research and education (Jiang et al., 2022; Liu et al., 2023b; Petridis et al., 2023; Lee et al., 2022; McNutt et al., 2023), prompt design (Zamfirescu-Pereira et al., 2023; Liu and Chilton, 2022; Dang et al., 2023), and creating interactive applications and games (Wang et al., 2023a; Ashby et al., 2023). Some studies have explored their role in influencing opinions (Jakesch et al., 2023), task management (Arakawa et al., 2023), long-form writing (Mirowski et al., 2023), and the design of devices for users of augmentative and alternative communication (Valencia et al., 2023). Since the public release of ChatGPT, it has sparked extensive discussions within the academic community concerning its appropriate usage (Kissinger et al., 2023; Huang, 2023; Chantiri, 2023; noa, 2023b). There is a growing body of research testing the integration of LLM-based tools like ChatGPT into various educational contexts (Denny et al., 2023; Finnie-Ansley et al., 2023; Reeves et al., 2023; MacNeil et al., 2023). However, there remains a notable dearth of studies addressing the perceptions of students and instructors regarding ChatGPT or LLMs in general within academic environments. We seek to bridge this gap in understanding by investigating these perspectives. This paper conducts an analysis of the influence of Large Language Models (LLMs) in undergraduate engineering education in India. We have selected ChatGPT (cha, [n. d.]), a widely used LLM-based chatbot developed by OpenAI, as our representative system. ChatGPT is constructed upon the GPT-3.5 (cha, [n. d.]) and GPT-4 (jag, [n. d.]) LLM models, proprietary to OpenAI. Our research employed a mixed-methods approach (mix, [n. d.]) to comprehensively examine the perceptions of ChatGPT. We aimed to understand its use cases, benefits, associated risks, and gather suggestions for enhancement. To achieve this, we conducted user studies involving both students and instructors in three esteemed engineering universities in India. Our data collection methods included online surveys and one-to-one interviews. The data collection process led to a substantial dataset, comprising 1306 survey responses from students and 112 individual student interviews. Additionally, we conducted 27 interviews with instructors. These interviews and surveys covered a diverse range of engineering disciplines, including but not limited to computer science, electrical engineering, electronics engineering, mechanical engineering, civil engineering, chemical engineering, and a few others, including bioengineering, pharmaceutical engineering, and human-centric design. This comprehensive approach enabled us to gain insights into ChatGPT’s usage across various engineering domains. This paper addresses several key research questions to comprehensively investigate the role and impact of ChatGPT in the context of undergraduate engineering education in Indian universities:
* • RQ1: What are the usage patterns of ChatGPT among students and instructors and their perceived benefits?
* • RQ2: What are the challenges faced by students and instructors in utilizing ChatGPT in education contexts?
* • RQ3: How do instructors perceive the influence of ChatGPT on undergraduate education?
* • RQ4: In what ways can ChatGPT be leveraged to assist instructors and students in enhancing student learning and growth?
To the best of our knowledge, our research is the first comprehensive user study covering student and instructor perspectives on the impact of LLMs on undergraduate engineering education. Our research findings indicate that students are leveraging ChatGPT for a wide array of purposes, including acquiring rapid information, enhancing their understanding of subjects, summarizing content, and, in some cases, employing it directly to solve coursework assignments. Our investigation also details instructors’ apprehensions regarding ChatGPT which is seen to serve as both a valuable tool and a potential academic concern. We draw recommendations from our findings that can be employed in the design and development of educational and academic tools built upon LLMs in the future. The rest of the paper is structured as follows: §2 provides an overview of existing research at the intersection of Large Language Models (LLMs), Human- Computer Interaction (HCI), and Education. §3 provides the relevant details of our chosen methodology for conducting this study. §4, presents the evaluation and analysis of the data gathered during the research. §5 delves into a comprehensive exploration of the various implications arising from the research findings. The paper concludes with §6. ## 2\. Related Work In this section, we present a literature review on various topics that include (i) LLMs x HCI, (ii) AI in Education, (iii) LLMs in Education, (iv) Student and Instructor Perspectives on LLMs. They are described as follows: ### 2.1. LLMs and HCI. Prompt Engineering. There is a growing body of research which examines the influence of prompt quality on the responses generated by large language models such as GPT-3. Liu et al. (Liu and Chilton, 2022) focus on optimizing text prompts for text-to-image generative models, offering design guidelines for better visual outputs. Zamfirescu-Pereira et al. (Zamfirescu-Pereira et al., 2023) explore whether non-experts can effectively use LLMs with a chatbot design tool, finding that participants struggled with prompt design. Wang et al. (Wang et al., 2023a) adapt large language models for mobile interfaces, achieving competitive performance without specialized training. Wu et al. (Wu et al., 2022) introduce ”Chaining” LLM steps, enhancing task outcomes and collaboration. Dang et al. (Dang et al., 2023) show that the users strategically combine diegetic prompts (part of the narrative) and non- diegetic prompts (not a part of the narrative) to guide LLMs in their writing process. Jiang et al. (Jiang et al., 2022) show although LLM-based code synthesis tool offers a promising experience to the study participants, they face challenges in understanding the model’s syntax and capabilities. Liu et al. (Liu et al., 2023b) address the issue of guiding non-expert programmers in generating code by providing natural language prompts to LLMs using ”grounded abstraction matching” which translates the code back into a systematic and predictable naturalistic utterance, thus, improving users’ understanding and code generation in data analysis. LLM-based Interactive Applications. Jo et al. (Jo et al., 2023) investigate the use of an LLM-based chatbot ”CareCall” for aiding socially isolated individuals in public health interventions. Wang et al. (Wang et al., 2023b) introduce ”PopBlends,” a system suggesting conceptual blends for pop culture reference images using traditional knowledge extraction and large language models. Arakawa et al. 
(Arakawa et al., 2023) propose using generative models to boost task engagement and reduce distractions among workers with their system, CatAlyst, which offers context-aware interventions. All of the above- mentioned studies showed that the users of these applications had an overall positive experience. At the same time, these studies identified various challenges that need to be addressed for the widespread adoption of these LLM- based applications. Jakesch et al. (Jakesch et al., 2023) explore the impact of an LLM-powered writing assistant on users’ opinions and writings. The study found that the assistant significantly influenced both the content of participants’ writing and their subsequent attitudes, highlighting the need for careful monitoring and engineering of opinions embedded in LLMs. McNutt et al. (McNutt et al., 2023) explored code assistants in computational notebooks, identifying challenges such as disambiguation in tasks like data visualization, the need for domain-specific tools, and the importance of polite assistants. LLMs and Datasets. Hämäläinen et al (Hämäläinen et al., 2023) explored the potential of LLMs in generating synthetic user research data for Human- Computer Interaction (HCI) research. Their research found that synthetic responses could be quite useful for piloting experiments in HCI but it could jeopardize the reliability of crowdsourced self-report data, if misused. LLMs and Creativity. Mirowski et al (Mirowski et al., 2023) introduce Dramatron, a hierarchical language model system designed to generate comprehensive scripts including titles, characters, story elements, and dialogue. Dramatron was able to enhance the coherence of long-form creative writing such as scripts and screenplays. Petridis et al (Petridis et al., 2023) introduce AngleKindling, an LLM-based interactive tool to assist journalists in brainstorming various news angles from documents like press releases. AngleKindling was found to be significantly more helpful and less mentally demanding than previous tools. Lee et al (Lee et al., 2022) highlight the potential of using curated interaction datasets to gain insights into LLMs’ generative capabilities, particularly in creative and argumentative writing assistance. Jones et al. (Jones et al., 2023) explored how artists interacted with algorithmically generated performance instructions and concluded that true collaboration with algorithms is impossible due to their limitations in reciprocity, understanding, and consideration for the human body. Ashby et al. (Ashby et al., 2023) introduce a framework for procedural content generation (PCG) (pro, [n. d.]) in role-playing games (RPGs) (rol, [n. d.]) that prioritizes player-centric generative processes. The proposed approach has the potential to enhance player experiences and promote co- creative narratives between humans and AI systems in gaming. Chung et al. (Chung et al., 2022) introduced TaleBrush, a generative story ideation tool that enables writers to control and make sense of AI-generated stories by using line sketching interactions with a GPT-based language model. LLMs and Accessibility. Valencia et al (Valencia et al., 2023) explored the potential of LLMs in supporting the users of augmentative and alternative communication (AAC) devices. LLM-generated suggestions were found to potentially save time and reduce user effort during communication. However, users emphasized the importance of aligning the LLM-generated phrases with their communication style and preferences. ### 2.2. 
AI in Education. Supporting Students using AI. Wang et al. (Wang et al., 2021) focus on designing natural and prolonged conversations with community-facing conversational agents, focusing on an educational context. The research showed that the student perception of the virtual teaching assistant Jill Watson (JW) changes over time and depends on linguistic elements like verbosity, readability, sentiment, diversity, and adaptability. Winker et al. (Winkler et al., 2020) address the problem of lack of interaction between instructors and learners in online classes having large audiences. The results showed that Sara, an AI conversational agent was able to significantly improve audience learning in programming tasks, thus, enhancing online learning experiences. Ruan et al. (Ruan et al., 2019) introduce a conversational agent for teaching factual knowledge which was able to significantly improve recognition and recall of correct answers, despite being more time-consuming. It shows the potential benefits of educational chatbot systems for non-traditional learning settings. Weitekamp et al. (Weitekamp et al., 2020) propose a Simulated Learners technique based method to expedite the creation of Intelligent Tutoring Systems (ITSs) (int, [n. d.]). The method could enhance model completeness and reduce authoring time as compared to prior approaches. Supporting Teachers using AI. Jensen et al. (Jensen et al., 2020) propose an automated approach for providing detailed and actionable automated feedback to teachers to improve their discourse using speech recognition and machine learning methods. Aslan et al. (Aslan et al., 2019) proposed a real-time, multimodal Student Engagement Analytics Technology to aid teachers in providing just-in-time personalized support to students who risk disengagement. Results indicated that the technology positively contributed to teacher’s class practices and student engagement. ### 2.3. LLMs in Education. LLM (Floridi and Chiriatti, 2020) tools offer diverse benefits to undergraduate students. Some of the early studies (Becker et al., 2023; Denny et al., 2023; Malinka et al., 2023; Daun and Brings, 2023) examined various challenges and opportunities associated with the utilization of AI code generation tools including OpenAI Codex (Ope, [n. d.]), DeepMind AlphaCode (alp, [n. d.]), and Amazon CodeWhisperer (cod, [n. d.]). These studies discussed that these LLM tools can be useful in computer science and related disciplines for a variety of purposes such as (1) generating a variety of code solutions for the students to verify their work and improving their code quality during practice (2) generating high-quality learning material such as new programming exercises (Sarsa et al., 2022), code explanations (Leinonen et al., 2023a; Sarsa et al., 2022; Wermelinger, 2023; MacNeil et al., 2023) and illustrative examples for the instructors to save their precious time and enhance student learning (3) assisting the instructors and students in providing simple explanations to technical concepts, providing starter code for students to get started, and enhancing programming error messages (Leinonen et al., 2023b) to overcome debugging barriers. At the same time, these LLM tools pose a number of ethical issues such as over-indulgence, plagiarism, carbon footprint of training LLMs, bias related to gender, race, emotion, class, structure of names etc, and security. 
Similar studies in this domain (Finnie-Ansley et al., 2022; Wermelinger, 2023; Savelka et al., 2023; Reeves et al., 2023; Finnie-Ansley et al., 2023; Savelka et al., 2023; Ouh et al., 2023; Cipriano and Alves, 2023) demonstrate that LLMs can solve a significant portion of programming questions effectively, influenced by task complexity and prompt quality. While they generate accurate solutions, students must still cultivate skills like algorithmic thinking, program comprehension, debugging, and communication. The above-mentioned research studies are primarily focused on assisting students in programming-based learning. Additionally, they do not provide a comprehensive perspective of instructors across different engineering disciplines such as electrical, electronics, mechanical, civil, etc. Moreover, the above studies, along with some other work (Sallam, 2023; Rahman and Watanobe, 2023; Ahmad et al., 2023; Williamson et al., 2023; Abd-alrazaq et al., 2023; Moore et al., 2023), do not take into account the actual usage of LLMs by students as well as their perspectives and opinions. Our work aims to fill in these gaps.
### 2.4. Student and Instructor Perspective on LLMs.
Since LLMs have gained prominence only in recent years, a limited number of user case studies have explored the viewpoints of students and instructors (Yilmaz and Karaoglan Yilmaz, 2023; Shoufan, 2023; Smolansky et al., 2023) on LLMs. Yilmaz et al. (Yilmaz and Karaoglan Yilmaz, 2023) conduct a study concentrating on the student perspective of using ChatGPT to solve programming assignments within an object-oriented programming course at a Turkish state university. Smolansky et al. (Smolansky et al., 2023) used online surveys to analyze both student and instructor perspectives concerning the influence of generative AI on online assessments in higher education universities in the US and Australia. These studies possess several limitations: some studies only cover programming assignments within a specific course and also lack an instructor perspective, while others focus on essay-type and coding-based evaluations in an online setting. Skjuve et al. (Skjuve et al., 2023) conducted a questionnaire study with ChatGPT users to understand their good and bad experiences with ChatGPT, but did not focus specifically on the academic context. In contrast, our research offers a comprehensive examination of the impact of LLMs on undergraduate engineering education as a whole without confining itself to a particular course or assessment type. Furthermore, in addition to surveys, we incorporate interviews with students and instructors, affording us deeper and more nuanced insights compared to existing research.
## 3\. Methodology
### 3.1. Research Design
We adopted a mixed-methods (mix, [n. d.]) research approach for this study with an exploratory design (noa, 2023a). Using a mixed-methods approach enabled us to leverage qualitative and quantitative methodologies, resulting in a thorough and nuanced understanding of our research problem. We selected three universities (referred to as University A, University B, and University C in visualizations) in India for our data collection. All three universities focus primarily on higher education and research in engineering and sciences. We focused our user studies on ChatGPT (cha, [n. d.]) as it was the most widely used LLM during the time this study was conducted (Hu and Hu, 2023). The survey received 1306 responses.
We carried out interviews with 112 undergraduate engineering students spread across different academic years at the selected universities. Interviews with 27 undergraduate engineering instructors (by instructor, we refer to assistant professors, associate professors, and professors who teach engineering courses as well as conduct scientific research) were also carried out at the same universities. The students and instructors were recruited through a combination of purposive and convenience sampling (Andrade, 2021). The interviews were conducted either online or in person, and the audio recordings of these interviews were further analysed. The interviewees provided both written and verbal consent for the interview and the recordings. A small fraction of students and instructors did not give their consent for recordings. In such cases, our research team made notes during the interview, which were later utilized for analysis. All the research materials and protocols for this study were reviewed and approved by our university’s Institutional Review Board (IRB).
Figure 1. Distribution of the 1306 student survey participants in terms of (a) university, (b) engineering stream
### 3.2. Survey Design
A survey was created using Google Forms and circulated through the mailing lists of all three universities by authorized personnel. The email explained the purpose of the study and requested students to respond to the form. The survey covered various facets of LLMs, keeping ChatGPT as the focus, within an academic context through 10 questions, including its frequency of use (addressed in two questions), its popular use cases (covered by two questions), and participants’ overarching perspectives on the benefits and drawbacks, and thoughts regarding ChatGPT’s utility as a tool in engineering education (encompassing six questions). We addressed these themes to help us establish a foundation of typical student perceptions, attitudes, difficulties, and anticipations related to ChatGPT. The survey consisted of single-choice (5 questions), Likert scale (1 question), and multiple-choice (3 questions) items, in addition to two text-based response fields, enabling us to gather both qualitative and quantitative insights. It took around 3-5 minutes to fill out the form. Respondents were also asked to mention their university name and their academic year. The survey introduction also explained the respondents’ contribution to the study and assured them of the anonymity of their responses, as names or other identifiers were not collected. The survey responses were analyzed and subsequently used in the study to obtain insights and shape the design of the interview questions. Figure 1 shows the distribution of survey participants in terms of university and engineering streams.
### 3.3. Interviews
The interviews were conducted with two sets of stakeholders - students and instructors from all three universities, whose details are explained as follows:
Figure 2. Distribution of 112 student interviews in terms of (a) university, (b) engineering stream
#### 3.3.1. Student Interviews
112 interviews were conducted to comprehensively understand students’ experiences using ChatGPT in their academic workflows. This involved exploring their motivations, routines, advantages, obstacles, attitudes, perceptions, and biases related to Large Language Models (LLMs), focusing on ChatGPT.
Our sample included students across all four academic years of undergraduate engineering in Indian universities, based on their availability and our access to them. The selection criteria for recruiting participants involved being enrolled in an undergraduate engineering program and having used ChatGPT for academic workflows, to ensure that the insights gathered come from true experiences. Each interview lasted around 10 minutes on average. Online interviews were conducted and recorded over Google Meet by the research team. In-person interviews were also recorded using Google Meet. Figure 2 shows the distribution of students in terms of university and engineering stream. There were 86 male participants and 26 female participants in the student interviews, which reflects the poor representation of women in the STEM field in India. Interview Design. The interviews were semi-structured, designed based on the established research questions, and further refined by incorporating popular insights derived from the initial survey results. The interview consisted of six themes spread over six primary questions: awareness and familiarity with ChatGPT, usage patterns, use cases in academic contexts, problems experienced in using ChatGPT, potential harm to learning, and perceptions and attitudes towards the incorporation of AI in educational settings. Participants were prompted to share how ChatGPT aided their learning and workflows, highlight challenges faced and strategies used, and talk about their fundamental beliefs and opinions, including whether they saw ChatGPT as a harmful or a helpful learning tool.
### 3.4. Instructor Interviews
Interviews were conducted with 27 instructors. These interviews revolved around understanding the experiences of instructors using ChatGPT, their use cases, usage patterns, difficulties faced, perceptions, and attitudes about LLMs in an academic context. The average length of an interview was 20 minutes. The instructors were selected while ensuring that they were regular faculty for undergraduate engineering courses in Indian universities, doing both teaching and research. Nine of the interviews were conducted in person. Figure 3 shows the distribution of instructors in terms of university and engineering streams. There were 22 male instructors and 5 female instructors. Also, 3 instructors were Professors, 5 were Associate Professors, and 19 were Assistant Professors. This was reflective of the representation of these roles in these colleges. The female representation in STEM education is very low, which made it difficult to access more female instructors.
Figure 3. Distribution of the 27 instructor interviews in terms of (a) university (institute), (b) engineering stream
Interview Design. The interviews were semi-structured and designed based on the established research questions. The interview consisted of six themes spread over six primary questions: awareness and familiarity with ChatGPT, usage patterns, personal use cases in academic contexts along with use cases for their curriculum design, personal problems experienced in using ChatGPT, anticipated harms to student learning, and perceptions and attitudes towards the incorporation of AI in educational settings. Participants were prompted to share insights on how ChatGPT impacted their workflows, discuss its pros and cons for student learning and academics, explore effective integration in academic contexts, and talk about their fundamental beliefs and opinions.
### 3.5. Data Analysis
A total of 139 interviews (27 instructor interviews and 112 student interviews) were transcribed by the research team. All transcriptions in the Hindi language were translated into English to maintain language consistency across transcripts. This was followed by a Thematic Analysis approach (Braun and Clarke, 2006). Multiple rounds of coding of the transcripts were conducted to extract emerging themes. Following this, the codes were collected on a collaborative platform, FigJam (https://www.figma.com/figjam/). The first two authors carried out a three-level analysis of the codes for the student interviews by categorizing them into different buckets until saturation was achieved. The final categorizations, as explained in the findings section, were: (1) Existing Usage Patterns and Benefits of ChatGPT in academia, (2) Challenges pertaining to ChatGPT usage, and (3) Opportunities and Recommendations for improvement. A similar process was carried out for the instructor interviews, with the insights being segregated into: (1) Instructor Awareness about ChatGPT and its uses, (2) Instructor Perceptions of Student Learning through ChatGPT, (3) Influences on Teaching Methodologies, and (4) Instructor Recommendations.
### 3.6. Ethical Considerations
In conducting this study, we diligently addressed various ethical considerations to maintain transparency and protect the privacy and well-being of our participants. All research materials and procedures were subjected to a detailed review and approval process by our university’s Institutional Review Board (IRB). In the surveys, the participants were informed about the voluntary nature of the survey, the anonymity of their responses, and the purpose of the study. Before engaging in the interviews, every participant was asked to express their consent through a consent form, which explicitly outlined the study’s objectives, the voluntary nature of their involvement, and the assurance of anonymity and confidentiality. Additionally, we obtained explicit written and verbal consent from participants. All the data collected was anonymized and stored on Google Drive, with access limited to some members of the research team. Our research team comprises members with expertise in HCI, interaction design, AI, and engineering education, with all authors physically located in India.
### 3.7. Limitations
The study has been structured to exclude any elements specific to individual universities. However, due to resource and time constraints, it was only feasible to include students from a limited number of universities. This limitation has the potential to introduce undisclosed biases related to the academic capabilities and approaches of the universities included, despite having access to a diverse and extensive participant pool.
## 4\. Evaluation
### 4.1. Student Perspective on ChatGPT - Quantitative Evaluation
Figure 4. Illustrating (a) how often students use ChatGPT, (b) for how long students have been using ChatGPT, and (c) how familiar students are with ChatGPT
Figure 5. Perception of ChatGPT based on 1306 student survey responses regarding its (a) usage, (b) advantages, and (c) challenges
The survey attracted 1306 responses across various engineering disciplines from undergraduate students across multiple universities, as underscored in the methodology section.
We received responses from all engineering streams across the chosen universities including, Computer Science Engineering and its allied disciplines, Electrical / Electronics Engineering, Mechanical Engineering, Civil Engineering, Chemical Engineering, and others (Bioengineering, Pharmaceutical Engineering, Human-Centric Design, etc.). In India, an Engineering program is usually 4 years long. In our responses, we got 33.9% responses from the freshmen year, 27.5% from the sophomore year, 29.3% from the junior year, and 9.3% from the senior year. Since the study was conducted during a period of transition to College Graduates for senior year students, there was less engagement reported from them, resulting in fewer responses. The results of our study are depicted in Figure 4. In the study responses, 83% reported to be very familiar with ChatGPT. Meanwhile, 16% students were somewhat familiar and 1% students reported to be not familiar. This reflects its high usage and adoption among undergraduate engineering students. 32% users claimed to be daily users of ChatGPT, 32% students use it weekly, 30% use it occasionally and 5% of the survey population rarely used ChatGPT. A small percentage (1%) claimed to have never used it. Additionally, the survey’s findings indicated that most of the respondents (80%) had been using ChatGPT for a significantly long period (greater than 3 months), highlighting the vast retention rate of the service among undergraduate engineering students. When asked about the most common use cases of AI, the maximum number of respondents (82.5%) found it useful for gathering information as indicated by Figure 5a. Some other highly ranked use cases were summarizing content (74.7%), assistance in coding (66%), assistance in assignments (64%), assistance in generating content (64.3%), assistance in essay writing (56.5%) and just testing out the capabilities of the AI (48.5%). When asked about the extent of usefulness of ChatGPT in regular coursework, 41.2% students chose level 4 on a Likert scale from 1 - 5, level 1 being not useful at all and level 5 being highly useful. This is indicated by Figure 5b. When asked about the potential benefits of integrating LLMs like ChatGPT in academic workflows, 66.2% respondents found it to be beneficial for immediate feedback, as indicated in Figure 6c. 65.5% respondents found it to be a source to access more information, 60.4% found it to enable personalized learning, 53.6% thought of it as an on-demand tutor, and 41.5% found it to be an engaging and interactive learning platform. This has been indicated in Figure 5b. (a) (b) (c) Figure 6. (a) Overall perception of students towards ChatGPT as a learning and education tool (b) How often do students run into problems using ChatGPT? (c) To what extent do students believe ChatGPT can aid in their coursework-related queries? However, almost half the participants (48.4%) reported to have sometimes faced problems while using ChatGPT, and 31.1% reported to have often experienced problems using ChatGPT, as indicated in Figure 6b. Maximum number of participants (69.4%) ranked incorrect and unreliable results as the most common problem. 67.6% of participants found the absence of a way to verify the information provided by ChatGPT as a common problem, as indicated in Figure 5c. While 36.1% of participants expressed a lack of trust towards the AI. 11.2% also found it to not be conversational enough. It’s worth noting that 4.2% of the participants reported not having any problems with the AI. 
In all, 58.7% described their overall perception of ChatGPT as a learning and education tool to be positive with 15.5% reporting a highly positive perception, as indicated by Figure 6a. 20.4% respondents had a neutral perception while 4.8% had a negative perception. 0.5% respondents also had a highly negative perception of ChatGPT as a learning tool. ### 4.2. Student Perspective on ChatGPT - Qualitative Evaluation #### 4.2.1. Transitioning Workflows - The Advantages of Leveraging ChatGPT Students have used ChatGPT for their academic coursework and other use cases in relation to their academic requirements. Our interviews found that a large percentage of participants seek general assistance from ChatGPT, such as deriving lists of important topics for a course and seeking explanations for various topics, with the advantage of generating as many examples as required. Students have also used ChatGPT for problem solving, such as coding-related help, debugging sections of code, learning new programming languages, creating edge cases and tests for coding-related problems, and learning topics such as Data Structures and Algorithms. Other problem-solving topics include generating detailed literature reviews, paraphrasing, solving numerical-based questions and generating practice questions. While the majority of the students use ChatGPT to deal with problems that they are facing while coming up with solutions to problems, a fair share of students have also admitted to using ChatGPT directly to solve their assignments for them. > ”I’ll say it has helped me being efficient because generally when we read > books and all, […] we have to read chapters that are 50 - 60 pages. So to > shorten them down to, maybe 10 pages and that helps me study efficiently.” > -[P63] Most students favored ChatGPT for quick information retrieval, knowledge enhancement, and summarizing data. Many utilized its content-generative capabilities for information seeking. Some preferred extracting keywords from research papers with ChatGPT rather than reading the entire document, then requesting brief explanations of the extracted keywords. The majority of the students reported that getting summarised content through ChatGPT is a way to save their time and effort. Students found that using ChatGPT streamlined their tasks, eliminating the need to interrupt their workflow to use external resources or conduct online searches. Many students suggested that ChatGPT is capable of writing high-quality theoretical and verbose content, and hence is useful for assignments that require polished, effective use of language. Generating descriptive essays, creative writing tasks, and reports are some popular language-related uses of ChatGPT among students. Additionally, students frequently rely on ChatGPT to obtain concise information for assistive content like slide decks and presentations. Beyond assignments and information, ChatGPT is also utilized for crafting emails and resumes, and obtaining structural content guidelines. Participants have also used ChatGPT to manage schedules, create timetables, and manage coursework in their day-to- day lives. Some also use it to manage internship task management and design project roadmaps or milestones. > ”I use it to collect certain information like, quick information, if I don’t > Google it. 
But if I […] want to know details about a very intricate question, and I want to know the details of it, then I’ll just ask ChatGPT.” -[P23]
Doubt solving was a major use case for students, as it avoids the dependence on professors, senior students, and fellow batch mates for catering to certain doubts. In addition to gaining academic independence when it comes to asking questions and clarifying concepts, students also mentioned that they use ChatGPT in parallel to their existing classes, and in some cases use ChatGPT to learn alongside an ongoing class. The majority of the students reported that the conversational nature and interactivity of ChatGPT allow for an easier onboarding experience and make it a good learning tool. Following this, a large percentage of students reported using ChatGPT as a tutor, and utilizing the tool for tasks such as asking questions that are not a part of any existing internet resource, getting a step-by-step explanation for concepts and solutions, and generating practice questions outside the typical textbook question banks. Students mentioned that they are using ChatGPT as an assistant in self-study, and the tool is a very useful starting point for further manual search, aiding their initial ideation. Some of their professors even endorsed its use in class. A fraction of the students also mentioned that outside their usual coursework, they also used ChatGPT to enhance their communication skills, such as improving their vocabulary and fluency in English.
> ”It has helped me understand new things. Sometimes when I’m stuck at some question or anything it helps to think of it from a new angle or newest perspective ChatGPT comes to rescue.” -[P30]
#### 4.2.2. Challenges in the Utilization of ChatGPT
A majority of the students faced ethical dilemmas while using the tool for their academic needs. Students were very aware of the fact that it is quite easy to use ChatGPT for unethical purposes within academia. Some students mentioned that using ChatGPT felt like an unethical shortcut, and this made them prefer hard work and traditional methods of acquiring information and getting answers over ChatGPT. Some students also reported feeling guilty using ChatGPT. Students reported that they feel ChatGPT is a good assistant to their workflow, but it inevitably makes humans lazier and more lethargic. They believed that in the longer run, ChatGPT will have negative effects on the capabilities of the human brain, and hence humans face a long-term risk of losing their efficiency. Many students believed that due to the effortless nature of gaining answers and information through ChatGPT, they are learning very ineffectively. They mentioned that using ChatGPT was good for surface-level information, but not for in-depth knowledge. One participant also mentioned that they believe using ChatGPT is letting go of the traditional and rote learning methods of gaining knowledge, which is a negative effect of ChatGPT. Nonetheless, students admitted that despite the long-term risks, the tool is very helpful in the short term.
> ”Sometimes I feel as though, I am cheating by learning through it. I mean sometimes, you know, it just feels wrong when it gets too easy?” -[P3]
Trust and reliability-based challenges were among the most frequently mentioned when it came to students using ChatGPT. The majority of the students mentioned how unreliable ChatGPT is in generating responses, and this unreliability has put many students on guard.
Some students mentioned how ChatGPT sometimes gives correct answers for specific questions, and sometimes it does not. Students reported varied experiences with ChatGPT responses. It’s responses often lacked consistency. Some mentioned that requesting response regeneration resulted in entirely different answers, while others faced persistent repetition of the same response. Paraphrasing also posed issues, with students observing that ChatGPT often changed the original text’s meaning and deviated from the original prompt. Moreover, many students mentioned that ChatGPT is incapable of judging whether what it is reporting is right or wrong, and this lack of verification makes it challenging to rely on ChatGPT for learning. > ”I think while paraphrasing, something ChatGPT, completely changed the > meaning of those scientific texts. So if I would not have come across and > proof read it, I don’t think I would have come across the policies and it > would have completely proved my Observations wrong.” -[P62] The extent of help one can seek from ChatGPT is limited due to its accuracy limitations. Additionally, many students mentioned that even if they told ChatGPT where it was wrong, it had an inclination to repeat its mistakes. When it comes to problem-solving, our interviews reported that a majority of students feel that ChatGPT cannot intuitively solve new problems, it generates incorrect code, it is incapable of solving complex numerical-based questions and it lacks in-depth information for specific programs like Mechanical Engineering and Electrical and Electronic Engineering. It also provides outdated information on recent events and is sometimes biased in its views, and portrays information in a very confident manner, even if it is incorrect. > ”sometimes the tool tries to give an answers way too confidently about > things that it is not really aware about. […] I found that Political > affiliations are a bit questionable. I believe a tool like that should be > new impartial and should try to give in fact instead of providing opinion.” > -[P13] Apart from trust and the quality of answers generated, a large percentage of the students also faced general usability-based challenges. A fraction of the students mentioned that making an account and periodically signing in to the service becomes a hindrance to their workflow. Students also mentioned that the chatbot nature of ChatGPT does not feel very humanistic, and feels repetitive in its nature of conversation which makes it difficult to get comfortable with using ChatGPT on a day-to-day basis. Many students mentioned that even if it is good at generating language-based answers, it is not creative enough to meet their creative requirements. A recurring theme throughout the participants was that ChatGPT does not work well on larger, more complex queries. This was followed by many students reporting that in order to cater to their complex queries, they have to break down their prompt into multiple, simpler prompts in order to get the desired answer from the tool. Writing better prompts was a challenge faced by many, and the fact that they needed to manually verify every response drew many students away from the tool. Students were also observed mentioning that the limited database and the restricted usage on the types of questions ChatGPT can answer, and how even slightly controversial topics are avoided by the tool was a pain point. Moreover, many students reported that they wished there was multi-modal access for academic purposes. 
> ”You have to double-check everything. You do it carefully, line by line, and > make sure you exactly understand what it’s doing. And most of the time, 80% > of the time, you’ll have to edit it a lot to get it to work.” -[P10] #### 4.2.3. Opportunities to Improve - Student Perceptions and Recommendations Across all participants, the majority of participants agreed that ChatGPT has proven to be a helpful tool in their coursework and academic requirements. Many students referred to ChatGPT as an indispensable tool and believed it has had a big impact on education. Students mentioned that there needs to be a balanced usage of the tool to effectively aid their academics, and the tool can be a revolution in education if used appropriately. Some students compared ChatGPT to the internet revolution, and drew parallels between Google Search and ChatGPT, mentioning that ChatGPT is the new Google search and it is here to stay. Some students also expressed their worries when it came to jobs after their undergraduate education, and addressed the opinions that ChatGPT can end up taking jobs. Some students did not believe that ChatGPT would take away their jobs, while others reported having a strong opinion that ChatGPT might wipe away mundane jobs. > ”It’s a wonderful tool and I think we shouldn’t overuse it. We should > definitely use it to kind of support our needs or complement them, but we > shouldn’t rely on it too much.” -[P17] The majority of students compared ChatGPT with the traditional methods that were in use pre-ChatGPT. The majority of the students mentioned that they feel ChatGPT has the potential to decrease human creativity, while the opposite emotion was also brought up stating that the human mind is much more efficient compared to ChatGPT. Some students felt that the reliance on the human mind has decreased since the introduction of the tool. On the other hand, a few students reported that they felt ChatGPT was not useful enough, and they did not feel any major contributions made by ChatGPT in their academic workflows. > ”I guess it would obviously change the world in a way that we will rely less > on humans, and humans (will rely) more on ChatGPT.” -[P19] Many students commented on their usage habits with ChatGPT. Some students reported that it took them some time to get used to the tool and understand how to use it efficiently. Many believed that structuring prompts in the right way was essential to use the tool effectively, and it requires a certain level of expertise to be used correctly. Students mentioned that over time they have learnt when and when not to use ChatGPT. Students reported making clear distinctions of when ChatGPT can be trusted with it’s answers. The majority of students believe that ChatGPT is time-saving and convenient to use for academic use cases. Many students mentioned that they feel the help they get from ChatGPT is personalized, and that allows them to get help when other traditional agents of guidance, like instructors, are unavailable. Some students felt that ChatGPT was on a thin line when it came to being good or bad due to it being a trade-off between efficiency and laziness. > ”So when I’m talking about learning new things and concepts, I know that I > can trust it, but if I ask it to solve somethng for me, I know it can’t be > that reliable. So I know exactly when I have to use it.” -[P30] Throughout our interviews, students put out their opinions openly and gave suggestions. 
Many students reported that they believe ChatGPT should hold itself accountable for the answers it gives and should admit to not having an answer instead of giving a vague response. In terms of learning to use the tool, students believed that there should be in-built tutorials to help them realize the potential of the tool, rather than having to explore it on their own. The majority of the students mentioned that proper training and guidance should be provided, to some extent, on how to use the tool effectively. Students also wanted the tool to be more interactive, with features such as cross-questioning. Regarding academic recommendations, some students mentioned that they would want a ChatGPT model that can deal with research work better, including training the model on research papers, having books and their solutions integrated into the tool, and being able to accept equations, formulas, and diagrams better. Many students also highlighted that they would prefer if the open-access version of ChatGPT (GPT 3.5) supported multiple forms of input, including but not limited to images, mathematical calculations, graphs, and more. Students also wanted the interface to be more customizable so that they could tweak the model to their requirements before they started giving prompts.

> ”I think if ChatGPT was such that when I regenerate my response, then ChatGPT could ask itself, […] ask me questions. What exactly am I looking for? And then give it’s response. That would be great.” -[P15]

### 4.3. Instructor Perspective on ChatGPT

#### 4.3.1. Instructor Awareness about ChatGPT and its uses

Instructors learned about ChatGPT through word-of-mouth, news articles, interviews, and online buzz, sparking their interest in trying it out. Some instructors initially viewed it as a potentially over-hyped technology and questioned its practical utility. One of the instructors compared ChatGPT to the early days of Google, suggesting that its initial results might not be perfect, as was the case with Google. On the other hand, several instructors were impressed and enthusiastic about ChatGPT’s capabilities, especially in terms of information retrieval, summarization, and problem-solving. Instructors both praised ChatGPT for its answers and highlighted its inaccuracies and limitations, particularly for solving numerical problems.

> ”I came across ChatGPT because of the hype it generated on the internet. It was crazy, so like everyone else, I also tried my hand at it.” -[T9]

Coming to student usage, many instructors admitted that they are not fully aware of how students are using ChatGPT in their courses. Some of them did express an intention to initiate discussions with students to gain a better understanding of their experience with ChatGPT. Some instructors also mentioned that the take-home assignments in their courses are such that they did not think ChatGPT would be able to solve them. On the other hand, some instructors were aware of students trying to use ChatGPT in open-internet exams and assignments. Instructors worried about students blindly relying on ChatGPT without understanding the meaningfulness of its responses, and some even caught students doing so.

> ”Even in the labs, we observed students blindly copying code for tasks related to communication systems, which may be fine. But the problem is that they don’t really understand what they’re copying. They just want to get the work done. It undermines the learning of the students.” -[T16]
#### 4.3.2. Instructor Perceptions on Student Learning through ChatGPT

Instructors shared that ChatGPT can complement students who are already proficient in a subject, aiding them in learning more about that subject. However, its reliability may not suffice for critical assignments, so students need a background in the topic to identify inaccuracies. Some instructors also believe that ChatGPT can support focused and efficient learning by providing concise and specific information, saving valuable time that might otherwise be spent searching through numerous sources. Instructors also thought that ChatGPT could simplify complex concepts, making them accessible to students with varying levels of subject proficiency and helping bridge language barriers. Some instructors also mentioned that the ChatGPT platform offers a distraction-free and safe learning environment, devoid of advertisements, and provides controlled access to reliable information, which is particularly beneficial for young learners. The interactive and user-friendly interface of ChatGPT was seen as an enabler of iterative learning and reinforcement: students can receive assistance, identify mistakes, and improve their understanding through back-and-forth interactions. Additionally, ChatGPT supports experimentation and exploration of what-if scenarios on a particular topic for deeper comprehension. Instructors also mentioned that students can leverage ChatGPT to generate practice questions on any topic, enabling them to create their own exercises based on course material and promoting a deeper understanding of concepts. In this context, an instructor highlighted that ChatGPT may sometimes provide incorrect answers to a question; this discrepancy can be used as an opportunity to debate with peers on the correctness of the answer, ultimately contributing to enhanced learning and a deeper comprehension of course material.

> ”In classroom teaching, we have limited time, so we can’t delve into every topic in great detail. At the same time, we don’t expect students to go through every topic in depth. […] The advantage of ChatGPT is that it doesn’t overload you with a lot of information; it provides very focused information on a specific topic. This can be very satisfying for a student, and they can feel confident that they’ve understood that topic […] ChatGPT provides a more streamlined experience compared to Google Search.” - [T4]

Instructors viewed ChatGPT as a valuable tool to improve students’ writing skills, especially for non-native English speakers like Indian students. It provides suggestions for grammar, structure, and presentation in various types of written communication. Additionally, it offers guidance on research paper writing and can generate templates. Exposure to ChatGPT-generated high-quality written material can also enhance spoken communication, as one instructor noted. Instructors saw ChatGPT as a resource to help students express mathematical ideas more clearly by generating structured outlines for theorem proofs, aiding the development of effective mathematical proof-writing skills. Many instructors mentioned that ChatGPT can assist students in generating basic code for assignments or projects, serving as a starting point to save time and effort. However, they recommended that ChatGPT should be used for coding assistance only from the sophomore year onward, so that students first grasp fundamental programming concepts.
> ”In my courses, we often have to prove theorems and establish various facts. […] When I provide ChatGPT with an outline of the proof, it can generate a well-structured and elegantly written proof […] the way ChatGPT formulates the proofs is often superior to what an average second-year undergraduate student could produce. This skill of effectively communicating solutions or proofs is something that everyone should have. Even if you know the answer or solution, being able to convey it clearly and elegantly is essential.” - [T13]

Instructors emphasized that students should understand that ChatGPT is a tool to aid their learning, not a replacement for their intellectual capabilities. It is important for students to use ChatGPT responsibly and not rely on it excessively. For example, ChatGPT could be used for small or trivial parts of assignments or when students need quick assistance, but not as a complete substitute for their own learning efforts.

> ”We’ve become so dependent on technology that we’ve stopped using that part of our brain. I suspect that if this dependence on technology goes unchecked, it may lead to a situation where human intelligence is compromised. […] It might reach a point where students say, ”Why should I study now? I can do it later with ChatGPT.” This attitude could continue and eventually lead to students trying to use it for unfair advantages in tests or exams. So, any technology should be used in moderation and enjoyed responsibly.” -[T3]

Instructors expressed concern that the availability of tools like ChatGPT may reduce students’ inclination to seek help from their peers or teachers when they encounter challenges or questions, since they may now resort to ChatGPT instead. This may make students less social with their peers and instructors. Hence, it is important to strike a balance between utilizing technology for assistance and maintaining social interaction and collaborative learning. One instructor highlighted that if everyone relies on the same AI tool, such as ChatGPT, for generating content or answers, it could lead to homogeneity in thinking and content production.

> ”It’s gradually reducing in that aspect. In the past, when students didn’t know something, they would ask their peers or teachers for the solution or the underlying concept. But these interactions seem to be missing these days, and it’s kind of un-socializing the students.” -[T6]

#### 4.3.3. Influences on Teaching and Assessment Methodologies

Instructors mentioned that they can use ChatGPT to enhance their teaching by finding alternative explanations for challenging concepts and generating additional examples and practice questions. This approach can not only aid in conveying information more effectively but can also provide students with practical applications of the material. ChatGPT’s generative ability assists in creating diverse, up-to-date lecture materials and well-structured assessments, increasing student engagement and understanding. Instructors also explained that ChatGPT can help them generate relevant content within seconds, thus optimizing their teaching efficiency.
A computer science instructor suggested that instructors could utilize ChatGPT to create interactive simulators for teaching a system or algorithm, providing students with practical, hands-on experiences that help them better understand complex systems. A few instructors mentioned that ChatGPT can also assist in course design by suggesting the most effective sequence of topics and relevant course material to improve learning outcomes and engagement for students. This saves time in researching and designing course content and also ensures that the course remains up-to-date and relevant. Instructors also emphasized the need to embrace technology and innovation. They encouraged other instructors to leverage tools like ChatGPT to enhance the learning experience and recommended becoming familiar with the limitations of online resources and strategies to overcome them.

> ”A couple of months ago, I was designing a new course. […] My senior colleagues suggested that I should explore other universities’ portals to see if they are offering similar courses for their students. This took up a significant amount of my time, and often, I found that the information I came across was not very relevant. ChatGPT, being more optimized, could save a lot of time in designing course content. For example, once I’ve finalized a course topic and its content, there might be elements that I’m missing, that could add value and are already taught in similar courses at other universities. If ChatGPT can highlight such relevant information, it would enhance the quality and content of our teaching while also saving time.” -[T16]

However, one instructor was of the opinion that using ChatGPT to prepare teaching material is cheating and should be avoided completely. They believed that teaching material should reflect their genuine understanding and interpretation of the subject matter, valued building content from the ground up and citing sources transparently, and emphasized the role of their own scholarship in the teaching process.

> ”Every line I write should reflect something I genuinely believe in, something that has become a part of my understanding. […] If I’m not allowed to copy from textbooks, how can I justify copying from ChatGPT? I must think about what the textbooks are conveying, interpret the material, and create my own slides. It should be a product of my scholarship. If I use ChatGPT, it would essentially be cheating. Furthermore, there’s a concern about the reliability of sources. […] As a teacher, I should build my content from the ground up, citing all my sources in my slides.” - [T2]

Instructors mentioned that, if ChatGPT becomes more pervasive, they could move towards higher-order learning and problem-solving while asking students to use ChatGPT for more routine, lower-level tasks. For instance, in a software engineering course, instructors could focus more on advanced concepts and delegate routine tasks, like writing code modules, to ChatGPT. This would allow students to engage in higher-level thinking and problem-solving. However, the participants also acknowledged the challenges of developing such a higher level of understanding without going through the foundational learning processes, for instance, deciding what to remove from the existing content and curriculum to accommodate new higher-level concepts. Additionally, adapting learning outcomes and revising course content assumes a universal reliance on ChatGPT and similar tools, which may not always hold true.
> ”It appears that we need to move towards a higher order of learning, focusing on complex problems and solutions, while using ChatGPT for lower-level, more routine tasks. However, it’s still not clear to me how we can achieve a higher level of complexity without going through the foundational learning process. Can we skip this process entirely? My intuition tells me that we cannot develop a deep understanding without going through the process of grasping the fundamentals. It’s like trying to become a proficient coder who assembles modules created by ChatGPT without having a solid understanding of the underlying concepts.” - [T11]

Many instructors acknowledged the need to reconsider traditional assessment methods in light of ChatGPT’s presence. Instructors recognized that in-class closed-book exams are likely to retain their significance in undergraduate assessment due to the controlled and supervised testing environment, which helps maintain the integrity of assessments and makes them less susceptible to the influence of AI tools like ChatGPT. They also acknowledged the importance of supplementary assessment components, such as take-home assignments and projects, in fostering deeper engagement with course material. At the same time, many instructors found open-book or open-internet assessments problematic due to the potential for students to rely on ChatGPT. Some instructors have already removed open-book assessments, while others suggested contextualizing questions to prevent ChatGPT from offering ready-made answers. Assignments that require real observations, personal experiences, and synthesis of information could be designed to promote deeper understanding and discourage reliance on AI. Instructors may also adjust the complexity of problems to make them less amenable to ChatGPT-generated solutions.

> ”Earlier, I used to assign term papers, but now I’ve shifted the focus. I’ve added more weightage to assignments that require observation and practical involvement. Students need to write about their observations and even take real photos as part of their assignments. They can’t simply rely on internet sources for everything. […] This way, I can tell if they’ve genuinely engaged with the subject matter. I removed the term paper requirement because I know they can easily write it using ChatGPT. Secondly, in design courses, projects, and assignments, most of them are not very binary; they involve observation and presentation, such as creating a video. I try to make them as non-objective as possible, so AI may not be as useful in that context. This doesn’t mean I make the assignments overly challenging, but analyzing existing products or designs around them can provide valuable insights.” -[T19]

Instructors also found open-book assessments more challenging for detecting plagiarism, due to the difficulty of distinguishing between original student work and ChatGPT-influenced work. This raises concerns about verifying actual student learning during grading, and students may also alter AI-generated answers to appear original. Instructors suggested the need for discussion and brainstorming among peers to address these challenges and explore methods to detect AI tool usage while maintaining fairness and integrity in evaluation.
> ”It’s difficult as an instructor to figure out answers written using ChatGPT. I mean, somebody is writing an answer, and I have a moral and academic responsibility of evaluating their answer and giving them credit if it is original and not giving credit if it is not original. But that is very difficult. I mean, in a non-regulated setting, I don’t know if you got it, it’s not a very hard task to modify a Chat answer to make it look like it’s an original. […] So I feel that now when I give homework marks to someone, that 10 out of 10 may not actually mean that the student knows the stuff. I’m not able to verify the learning of the student.” -[T10]

#### 4.3.4. Instructor Recommendations

Instructors suggested that ChatGPT should provide confidence scores with its responses to help users assess the reliability of the information. Some instructors also proposed that ChatGPT should provide references or citations for the specific sources it has used to generate a particular response, ideally in a journal-style in-line citation format. These features would make ChatGPT more accountable and enable users to make informed decisions when consuming ChatGPT-generated content. A few instructors also suggested that users should be encouraged to cite ChatGPT as a source in their research articles, increasing awareness about AI-generated information. Instructors envisioned ChatGPT evolving to provide richer, interactive education with multimedia, including images, graphics, animations, and spoken interactions, offering a dynamic, engaging experience beyond text-based interactions. Some instructors recognized the challenges of inputting mathematical queries into ChatGPT and recommended improving the interface for easier input of mathematical formulas and equations, which would make ChatGPT more convenient for users seeking solutions to mathematical problems.

> ”I think there should be some accountability […] because the problem that I can see is that if ChatGPT is so powerful, and in the future, if people see it as a replacement for Google, then…it can also generate biased information. Who will be accountable for that? When you do Google search, Google itself is not providing any information; it can point to different blogs or something, and you know whether those people are reliable or not. Even some media house can be biased, some can be unbiased, so you can look at different media houses, then you can build your understanding about, like, what is what. But if everything is a black box, and then you’re just getting some information, and then that is a bit problematic, so…” - [T12]

Instructors noted that ChatGPT could serve as a foundation for various new tools and applications. Just as Google provided a window to the world, ChatGPT is seen as a foundational technology upon which new tools can be built. These tools could be domain-specific and tailored to the needs of researchers, students, developers, data scientists, and more. For example, a student-focused version of ChatGPT would not simply provide answers but act as a tutor, giving hints, directions, and clarification. This approach aims to strike a balance between saving students time on redundant tasks and promoting active learning by not providing complete solutions right away.
> ”Currently, there is a single ChatGPT and it’s based on a universal model. In the long run, probably what we will see, and this might not be good for the company as such, is a lot of chatbots tied to different objectives. There might be a chatbot that is just for helping people with assignments. There might be a chatbot that is designed for instructors to improve their questions.” - [T7]

Instructors expressed concerns about potential bias in the training data of ChatGPT, especially with regard to its applicability in regions that may not have been adequately represented in the data. They suggested that fairness, bias, and transparency should be thoroughly reviewed in the training process. Addressing bias, ensuring representation from diverse sources, and providing transparency and explainability in AI systems are crucial steps to ensure their responsible and ethical use in various contexts, including regions outside the primary training data sources.

> ”Working in a developing country, […] it is my apprehension that there may not be sufficient fairness in ChatGPT’s training. They may not have used a sufficient amount of data from the region where perhaps they could not reach out sufficiently enough. Therefore, it has been trained on data from the US, and that may not directly apply to the real-life cases in our setting. So, my suggestion would be that fairness, in their training, that should be reviewed. What data they have collected, where they have collected from, whether sufficient attention has been paid to fairness and bias, transparency, it is called explainability.” -[T9]

## 5\. Discussion

### 5.1. Expanding the Horizons of LLMs in Education: Building on Existing Capabilities

Our findings reveal numerous existing use cases and advantages of ChatGPT, as reported by both students and instructors. We identify opportunities to build on these current benefits and further strengthen the positive impact of Large Language Models (LLMs) in education. These opportunities and recommendations can be leveraged by designers, engineers, and instructors working in the space of LLMs for education. Below, we provide a categorization of these benefits and their potential implications.

#### 5.1.1. LLMs as facilitators of learning and skill building

Our findings highlight that both students and instructors acknowledge the significant role ChatGPT has played in supporting learning and the conceptualization process for students. Reported learning and comprehension practices included generating examples and alternative explanations for easier understanding, simplifying complicated concepts taught in courses, experimenting with and exploring new topics, generating practice questions, and getting feedback on adopted approaches. ChatGPT also emerged as an agent of academic independence, since, as our findings show, using it can minimize dependence on instructors and peers and on their availability. ChatGPT can provide students with a quick and always-available source for information-seeking and doubt-solving. Instances of ChatGPT being used alongside in-class learning and for beyond-classroom learning were commonly mentioned. Instructors also talked about the distraction-free and interactive nature of ChatGPT, which enables a safe, engaging, yet focused learning environment for students. Using ChatGPT to enhance language skills, such as proficiency in English, is also a significant use case in contexts like India, where English is not a native language but most course materials are in English. Exposure to high-quality written material generated by ChatGPT can also contribute to improved spoken communication in the English language.
These use cases open up opportunities to build technology-enabled learning and teaching tools on top of LLMs. We observe that ChatGPT is commonly used for course-specific learning and doubt-solving. AI-powered learning tools and plugins can be built using LLM APIs and tailored to specific student needs. Prior literature has extensively explored the use of chatbots to educate students (Yeh et al., 2022; Lee et al., 2020; Mukherjee et al., 2023; Gibellini et al., 2023). The improved and unprecedented capabilities of recent LLMs allow them to adapt better to specific domains and requirements. Tutoring applications built on these LLMs can be specialized to serve as personal tutors for different subjects. These tools can enable iterative learning and reinforcement of skills relevant to students’ coursework as well as essential interpersonal skills. Prior literature has explored various ways of incorporating interactive quizzes in learning environments, as they serve as a quick way of reinforcing learned concepts (Gibellini et al., 2023; Lu et al., 2023). Such self-assessment quizzes, based on the topics searched and queried, can be generated by LLM-based learning tools or plugins and prompted to the learner amidst their usage. Users can also be prompted to self-report whether they understood the queried theme correctly. Based on these inputs, they can be provided with more personalized examples and variations of responses to match their learning requirements and pace. Recommendations of topics or questions similar to queried themes can also be presented to the learner to enhance their knowledge and spark curiosity. More so, multimodal outputs, including relevant images or videos, can be incorporated into solutions provided by the LLMs to make concepts easier to understand. Better training on regional languages can aid adoption among students coming from educational mediums other than English. Additionally, LLM-based tools can be designed to aid instructors in providing more detailed and personalized feedback to students. Such tools can include features that let instructors supply evaluation criteria and feedback rubrics to the LLM, and the resulting feedback can consist of detailed explanations of students’ mistakes and ways to improve. LLM-based tools can also help students practice and rehearse for in-person vivas, presentations, interviews, and similar situations with personalized feedback. Students can also practice language skills by simulating conversations with LLM-powered teaching bots. Students can feed personal goals into such applications and get feedback specific to those goals. The conversational nature of LLMs allows students to correct their responses, which can promote interactive learning. The interaction can also take place through a Voice User Interface, with speech inputs and outputs enhancing its conversational nature (Kim et al., 2020; Reddy et al., 2021). Massive open online courses (MOOCs) are also widely used by students for online learning and for accessing courses that might not be available to them offline. These MOOCs can use LLM-powered teaching assistants that solve subject-specific doubts. They can be trained to understand video content so that students can ask doubts about the video content, seek one-on-one help quickly, and increase retention (Zheng, 2015).
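To make the self-assessment quiz idea above concrete, the following is a minimal sketch rather than a tool from our study: it shows how an LLM-based learning plugin might request multiple-choice questions on a topic a student has just queried. The function `generate_quiz` and the `ask_llm` callable are hypothetical stand-ins for whatever chat-style LLM API a tool builder adopts.

```python
# Minimal sketch of an LLM-backed self-assessment quiz generator.
# `ask_llm` is a hypothetical callable wrapping any chat-style LLM API;
# it takes a prompt string and returns the model's text response.
import json
from typing import Callable, Dict, List


def generate_quiz(topic: str, num_questions: int,
                  ask_llm: Callable[[str], str]) -> List[Dict]:
    """Ask the LLM for multiple-choice questions on a queried topic."""
    prompt = (
        f"Create {num_questions} multiple-choice questions that test an "
        f"undergraduate's understanding of: {topic}.\n"
        "Return ONLY a JSON list; each item must have the keys "
        "'question', 'options' (a list of 4 strings), and 'answer'."
    )
    raw = ask_llm(prompt)
    try:
        quiz = json.loads(raw)
    except json.JSONDecodeError:
        # LLM output is not guaranteed to be valid JSON; fail gracefully.
        quiz = []
    return quiz


# Example wiring, e.g. after a student finishes a query on a topic:
#   quiz = generate_quiz("binary search trees", 3, ask_llm=my_llm_client)
#   for item in quiz:
#       show_question_to_student(item)   # hypothetical UI hook
```

A plugin built along these lines could trigger the quiz only after a student self-reports confidence in a topic, keeping the check-in lightweight rather than intrusive.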
#### 5.1.2. LLMs as an Aid to Academic Tasks

Our findings highlight the numerous instances of students using ChatGPT to complete their assignments. Some instances involved using ChatGPT as an ideation tool and as a knowledge bank to refer to for assignments. The speedy assistance offered by ChatGPT was a major factor in students relying on it for academic tasks, and students mentioned that its near-perfect summarization and content generation really helped save time and effort. The instructors mentioned that ChatGPT can make their workflows more efficient, as it can also be used for creating diverse, up-to-date lecture materials, well-structured assessments, interesting conceptual simulations, and examples that provide a more interactive and efficient learning experience for students. Moreover, ChatGPT has also been used as a task-management tool by students for generating schedules and timetables and creating course roadmaps. Prior literature has studied task-management processes and how technology can ease them (Kamsin et al., 2012; Toxtli et al., 2018). LLMs can help change the way task management is done owing to their adaptive conversational abilities. Task-management applications for instructors and students can be created in which users feed in their tasks, goals, and personal preferences; a collaborative process between the user and the LLM can then help efficiently create workflows, while the LLM provides additional tips and strategies for managing tasks. More so, LLM-powered curriculum planners can be developed. Previous work has studied and recommended curriculum practices for teachers to devise effective curricula (Lin et al., 2021; Liu et al., 2023a). LLMs fine-tuned on such guidelines and education literature can be used as assistants for instructors while designing curricula, and instructors can query such tools for specific recommendations and suggestions to make course activities more interesting. Recent work has also explored the possibility of LLMs mimicking user personas (Hämäläinen et al., 2023). Such functionality can be leveraged to test course activities on LLM simulations to predict how students would engage with an activity and whether it meets the intended goals of the course. The content generation capabilities of LLMs can be better leveraged by creating specialized content-authoring tools on top of LLMs; such tools can offer features that let users change the tone and nature of the generated content and ideas through a visual interface instead of textual prompts.

### 5.2. Navigating Challenges through a Learning-Focused Design

Through our findings, we observe different types of challenges, concerns, and perceptions that LLMs pose for students and instructors. Building on these, we focus on identifying such effects and exploring how future developments can utilize our contributions to develop improved solutions for academia.

#### 5.2.1. The Dilemma of Academic Usage of LLMs

Our findings suggest that both students and instructors face an ethical dilemma when it comes to utilizing the various use cases that LLMs offer, compared to the more traditional forms of academic learning practices. Introducing open-access LLMs like ChatGPT raises concerns regarding the use of such tools to facilitate unjust and unethical practices, such as students committing plagiarism in their assignments, reports, exams, and other forms of academic evaluation. Moreover, many students expressed similar sentiments, and instructors worried that using LLMs to copy information might be perceived as a less immoral practice than copying from peers.
Students also mentioned their preference for hard work and how they feel they learn more when they use traditional methods of acquiring information. Furthermore, the inherent accuracy and user-trust issues with LLMs like ChatGPT act as a roadblock to the smooth integration of such models, and our findings suggest that these were the most common challenges. Lastly, instructors suspected that using ChatGPT-like tools can reduce the development of cognitive skills like critical and systemic thinking, problem-solving, and creativity among students, as they might over-rely on it for direct and easy answer-seeking. Instructors also expressed concerns about ChatGPT resulting in homogeneity in student thinking and approaches. The usage of ChatGPT-like tools was also feared to reduce students’ social interactions for discussing and collaborating with peers. We acknowledge the potential harm LLMs could impose on student learning, as they ease the process of knowledge creation and could be used unethically. We emphasize the need to prioritize Responsible Learning both in the design of LLMs for educational use cases and in the curricula developed for undergraduate students. Students need to be taught about the responsible use of AI, and a transformation like that of LLM-enabled learning will also require reforms in educational structures (Dignum, 2021). AI Ethics education should be made a core part of the undergraduate curriculum to educate students about the capabilities of AI and foster a responsible attitude toward its use. Studies have explored the use of gamification to help individuals identify ethical concerns in the design of AI technologies (Ballard et al., 2019; Elsayed-Ali et al., 2023); such interactive elements can give students hands-on experience in identifying the ethical use of AI. Designers and developers of LLM-based educational tools should also be educated about practices that can be adopted to create learning-friendly interfaces. Various studies on Responsible AI have emphasized the importance of participatory design for building responsible AI applications (Rakova et al., 2021; Schiff et al., 2020; Polak et al., 2022). Focus group studies can be carried out with instructors and students to understand how they would envision a safe LLM-powered educational tool. These studies can help create mental models of how users will make use of LLM-powered educational tools and inform their design. More so, it remains essential to re-imagine how students are evaluated and tested in undergraduate education. With the introduction of LLMs, the need for holistic evaluations that test and foster students’ individual creativity becomes important. Panel discussions among instructors can help facilitate the design of curricula in ways that incorporate LLMs as assistant tools for students’ learning. Students can be evaluated on the creative process of solving assignments and asked to provide multi-modal deliverables like videos, digital photo essays (noa, 2023c), and digital process books. Digital tools and applications for students can be built to support the creation of such deliverables. More so, LLM tools for education can be designed in ways that help students build concepts constructively. Explanations and generations can be broken down through in-context learning (Kossen et al., 2023) or prompt engineering that directs the LLM to break answers into steps while prompting the student to interpret and apply the explained concepts before proceeding to the next step.
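As a minimal illustration of such step-wise scaffolding, the sketch below shows how a hypothetical tutoring front-end could prompt an LLM to reveal only the next step of a solution and wait for the student's own restatement before continuing; `next_step_prompt`, `tutoring_loop`, and `ask_llm` are illustrative names rather than components of an existing system.

```python
# Minimal sketch of a step-wise tutoring prompt: the model is asked to reveal
# only the next step of a solution and to wait for the student's own
# restatement before continuing, instead of giving the full answer at once.
from typing import Callable, List


def next_step_prompt(problem: str, steps_so_far: List[str]) -> str:
    """Compose a prompt that withholds the full solution."""
    covered = "\n".join(steps_so_far) if steps_so_far else "(none yet)"
    return (
        "You are a tutor helping an undergraduate student.\n"
        f"Problem: {problem}\n"
        f"Steps already discussed:\n{covered}\n"
        "Explain ONLY the next single step, then ask the student to restate "
        "it in their own words. Do not reveal later steps or the final answer."
    )


def tutoring_loop(problem: str, ask_llm: Callable[[str], str]) -> None:
    """Alternate between one LLM-generated step and the student's restatement."""
    steps: List[str] = []
    while True:
        step = ask_llm(next_step_prompt(problem, steps))
        print(step)
        reply = input("Restate the step in your own words (or type 'stop'): ")
        if reply.strip().lower() == "stop":
            break
        steps.append(step)
```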
Moreover, explainability features in LLM interfaces can help students understand and follow the reasoning behind the generated outputs, which can enable greater trust in the responses (Zhang and Lim, 2022; Wang et al., 2019). Confidence scores or markers can also help students judge the authenticity of generated answers.

### 5.3. Interaction with LLMs

Our findings highlight some challenges our participants faced while interacting with ChatGPT. These challenges are common across most LLMs, as the interaction modalities remain similar. Concerns regarding making an account and having to log in regularly came up as a hindrance in the information-seeking process, and concerns about the inability to prompt correctly to get desirable results were commonly reported. More so, concerns about robot-like answer generation also came up: many found that conversing with ChatGPT lacked humanness at times, due to factors like the repetitive nature of its generations. LLM technology is developing at a fast pace, and we acknowledge that various technical challenges mentioned by our participants might eventually be addressed. Nonetheless, there is a need to provide students and instructors with resources to learn strategies for prompting LLMs, as that is the primary way to collaborate with them. Prompt books (e.g., https://dallery.gallery/the-dalle-2-prompt-book/) have come up as a way of training users in prompt engineering for generative AI models. These prompt books can be made multimodal by incorporating gamification or conversational elements. Videos and simulations can also serve as training materials for prompting methods. Inbuilt plugins and features can also suggest alternate prompts to users or offer tips for prompting effectively. MOOCs (Massive Open Online Courses) or social media channels can be leveraged to share more such techniques and strategies. The information-seeking process can be made simpler and more direct by developing plugin extensions that summon LLM-based chatbots across platforms in the form of popups for instant information and query resolution. Users should be given accessible features to provide feedback on the generated outputs, which can be utilized to personalize the nature of generations to users’ preferences.

## 6\. Conclusion

This paper presents an analysis of students’ and instructors’ perspectives on ChatGPT usage within undergraduate engineering programs in Indian universities. Our research incorporates data from 1306 student surveys, 112 student interviews, and 27 instructor interviews across three Indian universities. Our study highlights the potential of ChatGPT-like LLMs to reform the academic practices of both teaching and learning. However, certain regulatory measures need to be put in place to protect students from harming their learning and development. Our findings and recommendations can be used to inform the design of future educational technologies built to assist students and instructors. We also call for dialogues and conversations about reforming traditional teaching methods to adapt educational environments to technological advancements, enabling learning to be enhanced and augmented by these innovations.
## References * (1) * alp ([n. d.]) [n. d.]. AlphaCode. https://alphacode.deepmind.com/. Accessed: 2023-08-18. * cod ([n. d.]) [n. d.]. Amazon CodeWhisperer. https://aws.amazon.com/codewhisperer/. Accessed: 2023-08-18. * cha ([n. d.]) [n. d.]. Chat GPT. https://chat.openai.com/. Accessed: 2023-08-18. * jag ([n. d.]) [n. d.]. GPT-4. https://openai.com/research/gpt-4. Accessed: 2023-09-14. * int ([n. d.]) [n. d.]. Intelligent Tutoring System. https://en.wikipedia.org/wiki/Intelligent_tutoring_system. Accessed: 2023-09-14. * lla ([n. d.]) [n. d.]. Llama2. https://ai.meta.com/llama/. Accessed: 2023-09-14. * mix ([n. d.]) [n. d.]. Mixed Methods Approach. https://en.wikipedia.org/wiki/Multimethodology. Accessed: 2023-08-18. * Ope ([n. d.]) [n. d.]. OpenAI Codex. https://openai.com/blog/openai-codex. Accessed: 2023-08-18. * pro ([n. d.]) [n. d.]. Procedural Generation. https://en.wikipedia.org/wiki/Procedural_generation. Accessed: 2023-09-14. * rol ([n. d.]) [n. d.]. Role Playing Games. https://en.wikipedia.org/wiki/Role-playing_game. Accessed: 2023-09-14. * noa (2023a) 2023a. Exploratory research. https://en.wikipedia.org/w/index.php?title=Exploratory_research&oldid=1165338844 Page Version ID: 1165338844. * noa (2023b) 2023b. People got hooked to ChatGPT, such reactions were given when the site was down - Gearrice. https://www.gearrice.com/update/people-got-hooked-to-chatgpt-such-reactions-were-given-when-the-site-was-down/ Section: Tech World. * noa (2023c) 2023c. Photo-essay. https://en.wikipedia.org/w/index.php?title=Photo-essay&oldid=1151531077 Page Version ID: 1151531077. * Abd-alrazaq et al. (2023) Alaa Abd-alrazaq, Rawan AlSaad, Dari Alhuwail, Arfan Ahmed, Padraig Mark Healy, Syed Latifi, Sarah Aziz, Rafat Damseh, Sadam Alabed Alrazak, and Javaid Sheikh. 2023. Large Language Models in Medical Education: Opportunities, Challenges, and Future Directions. _JMIR Med Educ_ 9 (1 Jun 2023), e48291. https://doi.org/10.2196/48291 * Ahmad et al. (2023) Norita Ahmad, San Murugesan, and Nir Kshetri. 2023. Generative Artificial Intelligence and the Education Sector. _Computer_ 56, 6 (2023), 72–76. https://doi.org/10.1109/MC.2023.3263576 * Andrade (2021) Chittaranjan Andrade. 2021. The Inconvenient Truth About Convenience and Purposive Samples. _Indian Journal of Psychological Medicine_ 43, 1 (2021), 86–88. * Arakawa et al. (2023) Riku Arakawa, Hiromu Yakura, and Masataka Goto. 2023. CatAlyst: Domain-Extensible Intervention for Preventing Task Procrastination Using Large Generative Models. In _Conference on Human Factors in Computing Systems - Proceedings_. Association for Computing Machinery, 19. https://doi.org/10.1145/3544548.3581133 arXiv:2302.05678 * Ashby et al. (2023) Trevor Ashby, Braden K. Webb, Gregory Knapp, Jackson Searle, and Nancy Fulda. 2023. Personalized Quest and Dialogue Generation in Role-Playing Games: A Knowledge Graph- and Language Model-based Approach. In _Conference on Human Factors in Computing Systems - Proceedings_. Association for Computing Machinery, 20. https://doi.org/10.1145/3544548.3581441 * Aslan et al. (2019) Sinem Aslan, Nese Alyuz, Cagri Tanriover, Sinem E. Mete, Eda Okur, Sidney K. D’Mello, and Asli Arslan Esme. 2019. Investigating the Impact of a Real-time, Multimodal Student Engagement Analytics Technology in Authentic Classrooms. In _Conference on Human Factors in Computing Systems - Proceedings_. Association for Computing Machinery. https://doi.org/10.1145/3290605.3300534 * Ballard et al. (2019) Stephanie Ballard, Karen M. Chappell, and Kristen Kennedy. 2019. 
Judgment Call the Game: Using Value Sensitive Design and Design Fiction to Surface Ethical Concerns Related to Technology. In _Proceedings of the 2019 on Designing Interactive Systems Conference_ _(DIS ’19)_. Association for Computing Machinery, New York, NY, USA, 421–433. https://doi.org/10.1145/3322276.3323697 * Becker et al. (2023) Brett A. Becker, Paul Denny, James Finnie-Ansley, Andrew Luxton-Reilly, James Prather, and Eddie Antonio Santos. 2023. Programming Is Hard - Or at Least It Used to Be: Educational Opportunities and Challenges of AI Code Generation. In _Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 1_ (Toronto ON, Canada) _(SIGCSE 2023)_. Association for Computing Machinery, New York, NY, USA, 500–506. https://doi.org/10.1145/3545945.3569759 * Braun and Clarke (2006) Virginia Braun and Victoria Clarke. 2006. Using thematic analysis in psychology. _Qualitative Research in Psychology_ 3 (Jan. 2006), 77–101. https://doi.org/10.1191/1478088706qp063oa * Chantiri (2023) Emily Chantiri. 2023. ‘I used ChatGPT to answer interview questions’. https://ia.acs.org.au/article/2023/-i-used-chatgpt-to-answer-interview-questions-.html * Chung et al. (2022) John Joon Young Chung, Wooseok Kim, Kang Min Yoo, Hwaran Lee, Eytan Adar, and Minsuk Chang. 2022. TaleBrush: Sketching Stories with Generative Pretrained Language Models. In _Conference on Human Factors in Computing Systems - Proceedings_. Association for Computing Machinery, 19. https://doi.org/10.1145/3491102.3501819 * Cipriano and Alves (2023) Bruno Pereira Cipriano and Pedro Alves. 2023. GPT-3 vs Object Oriented Programming Assignments: An Experience Report. In _Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1_ (Turku, Finland) _(ITiCSE 2023)_. Association for Computing Machinery, New York, NY, USA, 61–67. https://doi.org/10.1145/3587102.3588814 * Dang et al. (2023) Hai Dang, Sven Goller, Florian Lehmann, and Daniel Buschek. 2023. Choice Over Control: How Users Write with Large Language Models using Diegetic and Non-Diegetic Prompting. In _Conference on Human Factors in Computing Systems - Proceedings_. Association for Computing Machinery, 17. https://doi.org/10.1145/3544548.3580969 arXiv:2303.03199 * Daun and Brings (2023) Marian Daun and Jennifer Brings. 2023. How ChatGPT Will Change Software Engineering Education. In _Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1_ (Turku, Finland) _(ITiCSE 2023)_. Association for Computing Machinery, New York, NY, USA, 110–116. https://doi.org/10.1145/3587102.3588815 * Deng et al. (2023) Xiang Deng, Vasilisa Bashlovkina, Feng Han, Simon Baumgartner, and Michael Bendersky. 2023. What Do LLMs Know about Financial Markets? A Case Study on Reddit Market Sentiment Analysis. In _Companion Proceedings of the ACM Web Conference 2023_ (Austin, TX, USA) _(WWW ’23 Companion)_. Association for Computing Machinery, New York, NY, USA, 107–110. https://doi.org/10.1145/3543873.3587324 * Denny et al. (2023) Paul Denny, Viraj Kumar, and Nasser Giacaman. 2023. Conversing with Copilot: Exploring Prompt Engineering for Solving CS1 Problems Using Natural Language. In _Proceedings of the 54th ACM Technical Symposium on Computer Science Education V. 1_ (Toronto ON, Canada) _(SIGCSE 2023)_. Association for Computing Machinery, New York, NY, USA, 1136–1142. https://doi.org/10.1145/3545945.3569823 * Dignum (2021) Virginia Dignum. 2021. The role and challenges of education for responsible AI. 
_London Review of Education_ 19, 1 (2021). https://doi.org/10.14324/LRE.19.1.01 * Elsayed-Ali et al. (2023) Salma Elsayed-Ali, Sara E Berger, Vagner Figueredo De Santana, and Juana Catalina Becerra Sandoval. 2023. Responsible & Inclusive Cards: An Online Card Tool to Promote Critical Reflection in Technology Industry Work Practices. In _Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems_ _(CHI ’23)_. Association for Computing Machinery, New York, NY, USA, 1–14. https://doi.org/10.1145/3544548.3580771 * Finnie-Ansley et al. (2022) James Finnie-Ansley, Paul Denny, Brett A. Becker, Andrew Luxton-Reilly, and James Prather. 2022. The Robots Are Coming: Exploring the Implications of OpenAI Codex on Introductory Programming. In _Proceedings of the 24th Australasian Computing Education Conference_ (Virtual Event, Australia) _(ACE ’22)_. Association for Computing Machinery, New York, NY, USA, 10–19. https://doi.org/10.1145/3511861.3511863 * Finnie-Ansley et al. (2023) James Finnie-Ansley, Paul Denny, Andrew Luxton-Reilly, Eddie Antonio Santos, James Prather, and Brett A. Becker. 2023. My AI Wants to Know If This Will Be on the Exam: Testing OpenAI’s Codex on CS2 Programming Exercises. In _Proceedings of the 25th Australasian Computing Education Conference_ (Melbourne, VIC, Australia) _(ACE ’23)_. Association for Computing Machinery, New York, NY, USA, 97–104. https://doi.org/10.1145/3576123.3576134 * Floridi and Chiriatti (2020) Luciano Floridi and Massimo Chiriatti. 2020. GPT-3: Its nature, scope, limits, and consequences. _Minds and Machines_ 30 (2020), 681–694. * Gibellini et al. (2023) Giorgia Gibellini, Valeria Fabretti, and Gianluca Schiavo. 2023. AI Education from the Educator’s Perspective: Best Practices for an Inclusive AI Curriculum for Middle School. In _Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems_ _(CHI EA ’23)_. Association for Computing Machinery, New York, NY, USA, 1–6. https://doi.org/10.1145/3544549.3585747 * Hämäläinen et al. (2023) Perttu Hämäläinen, Mikke Tavast, and Anton Kunnari. 2023. Evaluating Large Language Models in Generating Synthetic HCI Research Data: a Case Study. In _Conference on Human Factors in Computing Systems - Proceedings_. Association for Computing Machinery, 19. https://doi.org/10.1145/3544548.3580688 * Hu and Hu (2023) Krystal Hu and Krystal Hu. 2023. ChatGPT sets record for fastest-growing user base - analyst note. _Reuters_ (Feb. 2023). https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/ * Huang (2023) Kalley Huang. 2023. Alarmed by A.I. Chatbots, Universities Start Revamping How They Teach. _The New York Times_ (Jan. 2023). https://www.nytimes.com/2023/01/16/technology/chatgpt-artificial-intelligence-universities.html * Hämäläinen et al. (2023) Perttu Hämäläinen, Mikke Tavast, and Anton Kunnari. 2023. Evaluating Large Language Models in Generating Synthetic HCI Research Data: a Case Study. In _Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems_ _(CHI ’23)_. Association for Computing Machinery, New York, NY, USA, 1–19. https://doi.org/10.1145/3544548.3580688 * Jakesch et al. (2023) Maurice Jakesch, Advait Bhat, Daniel Buschek, Lior Zalmanson, and Mor Naaman. 2023. Co-Writing with Opinionated Language Models Affects Users’ Views. In _Conference on Human Factors in Computing Systems - Proceedings_. Association for Computing Machinery, 15. https://doi.org/10.1145/3544548.3581196 arXiv:2302.00560 * Jensen et al. 
(2020) Emily Jensen, Meghan Dale, Patrick J. Donnelly, Cathlyn Stone, Sean Kelly, Amanda Godley, and Sidney K. D’Mello. 2020. Toward Automated Feedback on Teacher Discourse to Enhance Teacher Learning. In _Conference on Human Factors in Computing Systems - Proceedings_. Association for Computing Machinery. https://doi.org/10.1145/3313831.3376418 * Jiang et al. (2022) Ellen Jiang, Edwin Toh, Alejandra Molina, Kristen Olson, Claire Kayacik, Aaron Donsbach, Carrie J. Cai, and Michael Terry. 2022. Discovering the Syntax and Strategies of Natural Language Programming with Generative Language Models. In _Conference on Human Factors in Computing Systems - Proceedings_. Association for Computing Machinery, 19. https://doi.org/10.1145/3491102.3501870 * Jo et al. (2023) Eunkyung Jo, Daniel A. Epstein, Hyunhoon Jung, and Young-Ho Kim. 2023. Understanding the Benefits and Challenges of Deploying Conversational AI Leveraging Large Language Models for Public Health Intervention. In _Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems_ (Hamburg, Germany) _(CHI ’23)_. Association for Computing Machinery, New York, NY, USA, Article 18, 16 pages. https://doi.org/10.1145/3544548.3581503 * Jones et al. (2023) Mirabelle Jones, Christina Neumayer, and Irina Shklovski. 2023. Embodying the Algorithm: Exploring Relationships with Large Language Models Through Artistic Performance. In _Conference on Human Factors in Computing Systems - Proceedings_. Association for Computing Machinery, 24. https://doi.org/10.1145/3544548.3580885 * Kamsin et al. (2012) Amirrudin Kamsin, Ann Blandford, and Anna L. Cox. 2012. Personal task management: my tools fall apart when I’m very busy!. In _CHI ’12 Extended Abstracts on Human Factors in Computing Systems_ _(CHI EA ’12)_. Association for Computing Machinery, New York, NY, USA, 1369–1374. https://doi.org/10.1145/2212776.2212457 * Kim et al. (2020) Jieun Kim, Woochan Kim, Jungwoo Nam, and Hayeon Song. 2020. ”I Can Feel Your Empathic Voice”: Effects of Nonverbal Vocal Cues in Voice User Interface. In _Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems_ _(CHI EA ’20)_. Association for Computing Machinery, New York, NY, USA, 1–8. https://doi.org/10.1145/3334480.3383075 * Kissinger et al. (2023) Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher. 2023. Opinion | ChatGPT Heralds an Intellectual Revolution. _Wall Street Journal_ (Feb. 2023). https://www.wsj.com/articles/chatgpt-heralds-an-intellectual-revolution-enlightenment-artificial-intelligence-homo-technicus-technology-cognition-morality-philosophy-774331c6 * Kossen et al. (2023) Jannik Kossen, Tom Rainforth, and Yarin Gal. 2023. In-Context Learning in Large Language Models Learns Label Relationships but Is Not Conventional Learning. https://doi.org/10.48550/arXiv.2307.12375 arXiv:2307.12375 [cs]. * Lee et al. (2022) Mina Lee, Percy Liang, and Qian Yang. 2022. CoAuthor: Designing a Human-AI Collaborative Writing Dataset for Exploring Language Model Capabilities. In _Conference on Human Factors in Computing Systems - Proceedings_. Association for Computing Machinery. https://doi.org/10.1145/3491102.3502030 arXiv:2201.06796 * Lee et al. (2020) Yi-Chieh Lee, Naomi Yamashita, Yun Huang, and Wai Fu. 2020. ”I Hear You, I Feel You”: Encouraging Deep Self-disclosure through a Chatbot. In _Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems_ _(CHI ’20)_. Association for Computing Machinery, New York, NY, USA, 1–12. 
Structural Stability Hypothesis of Dual Unitary Quantum Chaos

Jonathon Riddell (School of Physics and Astronomy, University of Nottingham, Nottingham, NG7 2RD, UK; Centre for the Mathematics and Theoretical Physics of Quantum Non-Equilibrium Systems, University of Nottingham, Nottingham, NG7 2RD, UK)
Curt von Keyserlingk (Department of Physics, King's College London, Strand WC2R 2LS, UK)
Tomaž Prosen (Faculty of Mathematics and Physics, University of Ljubljana, Jadranska 19, SI1000 Ljubljana, Slovenia; Institute of Mathematics, Physics, and Mechanics, Jadranska 19, SI1000 Ljubljana, Slovenia)
Bruno Bertini (School of Physics and Astronomy, University of Nottingham, Nottingham, NG7 2RD, UK; Centre for the Mathematics and Theoretical Physics of Quantum Non-Equilibrium Systems, University of Nottingham, Nottingham, NG7 2RD, UK)

Having spectral correlations that, over small enough energy scales, are described by random matrix theory is regarded as the most general defining feature of quantum chaotic systems, as it applies in the many-body setting and away from any semiclassical limit. Although this property is extremely difficult to prove analytically for generic many-body systems, a rigorous proof has been achieved for dual-unitary circuits, a special class of local quantum circuits that remain unitary upon swapping space and time. Here we consider the fate of this property when moving from dual-unitary to generic quantum circuits, focussing on the spectral form factor, i.e., the Fourier transform of the two-point correlation. We begin with a numerical survey that, in agreement with previous studies, suggests that there exists a finite region in parameter space where dual-unitary physics is stable and spectral correlations are still described by random matrix theory, although up to a maximal quasienergy scale. To explain these findings, we develop a perturbative expansion: it recovers the random matrix theory predictions, provided the terms occurring in perturbation theory obey a relatively simple set of assumptions. We then provide numerical evidence and a heuristic analytical argument supporting these assumptions.

§ INTRODUCTION

Exact solutions of interacting many-body systems are not just monuments to human ingenuity; they are also key instruments in both statistical mechanics and many-body dynamics. In fact, it is often implicitly assumed that each (dynamical) universality class in statistical physics should be endowed with at least one exactly solvable model through which one obtains deeper understanding of the whole universality class. Until recently, exact solutions in interacting systems were limited to the Yang-Baxter paradigm, which underlies the so-called integrable systems in two spatial (or $1+1$) dimensions <cit.>. The existence of an extensive number of conservation laws in these systems, however, makes their dynamical behaviour non-generic <cit.>, and one was left to wonder how to describe generic dynamics, namely that of the so-called "chaotic" quantum many-body systems, which have only a finite number of conserved charges. While various types of random matrix theories, depending on the model's time-reversal symmetry and interaction locality, turned out to be a successful tool for modelling quantum chaotic systems <cit.>, only very recently has the first class of exactly solvable non-integrable systems been discovered <cit.>, and it has led to exact solutions for many dynamical problems even in the absence of explicit randomness <cit.>.
These are the so-called dual unitary circuits, which are expressed in the form of locally interacting quantum systems in discrete space time, and whose defining feature is that they generate a unitary evolution not only in time but also in space. This fundamental property is most clearly expressed in terms of the so-called space-time duality <cit.>, a space-time swap symmetry of the tensor network diagram representing the physical observable of interest. Among other useful features, dual-unitary circuits allow for exact analytical computation of dynamical correlation functions of local operators <cit.>, as well as long-range two-point spectral correlations as expressed in the form of the Spectral Form Factor (SFF) <cit.>. Given this novel class of exactly solvable chaotic systems, a fundamental question concerns the stability or robustness of their dynamical features. One may trace the basic motivation for such a question to the notion of structural stability of hyperbolic flows in classical chaotic dynamical systems <cit.>. Contrary to integrable systems, which are structurally unstable and around which perturbative expansions generically diverge (cf. Kolmogorov-Arnold-Moser theory in classical dynamical systems, or divergent Feynman diagram expansions of quantum field theories around their free/non-coupled limits), chaotic systems are expected to be robust against typical perturbations. In this work we formulate the hypothesis of structural stability of Floquet dual-unitary circuits (therefore focusing on discrete-time dynamics with explicit time-translation symmetry), and outline a simple strategy for its verification. Specifically, we conjecture that dual-unitary circuits in the quantum chaotic and ergodic regime remain quantum chaotic and ergodic under small, time-translation invariant perturbations. An equivalent re-formulation of the hypothesis in terms of the so-called spectral Lyapunov exponents (SLE) <cit.> states that the leading SLE in each of the $t$ time-translation symmetry sectors approaches $1$ exponentially fast in time $t$, which in turn implies a linear-in-$t$ growth of the SFF in the thermodynamic limit. We note that a somewhat simpler approach to the structural stability hypothesis was taken by two of us and P. Kos in Ref. <cit.>, which studied the robustness of local dynamical correlators. Importantly, however, the decay of local correlators is not sufficient for establishing quantum chaos and ergodicity; hence our current study of the SFF, a global (non-local) dynamical observable, provides a more stringent characterisation. We also stress that our study goes substantially beyond the one presented in Ref. <cit.>, where the stability of the SFF was investigated for a specific dual unitary circuit at first order in perturbation theory. Clearly, this question is very challenging, and extremely difficult to address in full mathematical rigour. The purpose of our paper is to establish the minimal set of assumptions (which turn out to be two) needed for a rigorous proof of the stability of the chaotic SFF. We do this by formulating a perturbation theory for the SLE, and identifying sufficient conditions for the expansion to behave in a way that is consistent with ergodicity. We furthermore verify these two assumptions numerically. We now summarise our assumptions and results in more detail.
We study the SFF averaged over an ensemble of locally constructed unitaries $\mathcal{E}$ of an extended 1D system with $L$ sites, $K(t,L)=\mathbb{E}_{\mathbb{U} \in \mathcal{E}} \left[|\mathrm{tr}\, \mathbb{U}^t|^2\right]$. The ensemble is parameterised by $\epsilon$, where the unperturbed point $\epsilon=0$ corresponds to an ensemble of dual unitary Floquet evolutions. Using the space-time duality noted above, the SFF may be recast as ${K(t,L) = \tr \mathcal{T}^L}$ for an SFF transfer matrix $\mathcal{T}$ acting on $2t$ sites <cit.>. The linear-in-$t$ ramp in the SFF characteristic of ergodic systems (see Eq. (<ref>)) is obtained if the leading $t$ eigenvalues of $\mathcal{T}$ are nearly degenerate, i.e., exponentially close to unity, $\lambda_{j=1,\ldots,t}=1+e^{-O(t)}$. The unperturbed dual-unitary model provably has this eigenvalue structure; we investigate the circumstances under which it remains true in eigenvalue perturbation theory in $\epsilon$. We show that it follows from two assumptions. The first of these stipulates a non-crossing of the leading eigenvalue of $\mathcal{T}$ as a function of $\epsilon$. Recalling the von Neumann-Wigner theorem <cit.>, this corresponds to a genericity assumption on the perturbation. The second, more substantial assumption can be expressed in terms of an exponential bound on certain multi-point correlation functions involving the unperturbed SFF transfer matrix. We numerically demonstrate the validity of these assumptions in a particular family of perturbed Floquet dual unitary circuits, and corroborate those results with a heuristic analytical argument involving the spectral decomposition of the resolvent of the unperturbed SFF transfer matrix. The rest of the paper is structured as follows. In Sec. <ref> we recall the precise setting considered, introduce a simple minimal model that we use for the numerical tests, and briefly describe the numerical methods used in our computations. In Sec. <ref> we present a numerical survey suggesting that the dual unitary behaviour is indeed structurally stable. In Sec. <ref> we present our perturbative argument. In Sec. <ref> we discuss the validity of our second assumption, presenting numerical tests and a heuristic analytical argument. Finally, in Sec. <ref> we report our conclusions and discuss the outlook of our research. Some technical details and proofs are reported in the three appendices.
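As a simple self-contained illustration of the mechanism just summarised (our own numerical sketch, not part of the original analysis), a spectrum with $t$ eigenvalues within $e^{-O(t)}$ of unity and all remaining eigenvalues inside a disk of radius $r<1$ gives $\tr\,\mathcal{D}^L \approx t$ once $L$ is large, which is precisely the value of the ramp at fixed $t$:

```python
import numpy as np

# Illustration only: a diagonal stand-in for the SFF transfer matrix spectrum.
# t eigenvalues are split from 1 by an exponentially small amount, the
# remaining ones sit inside a disk of radius r < 1.
t, L, r = 8, 40, 0.6
rng = np.random.default_rng(0)

leading = 1.0 + 1e-3 * np.exp(-0.5 * t) * rng.standard_normal(t)  # ~ 1 + e^{-O(t)}
rest = r * rng.uniform(0.0, 1.0, 200) * np.exp(2j * np.pi * rng.uniform(size=200))
spectrum = np.concatenate([leading, rest])

K = np.sum(spectrum ** L).real  # tr D^L, i.e. the SFF at fixed t and large L
print(f"t = {t},  tr D^L = {K:.4f}")  # approximately t: the linear ramp value
```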
§ SETTING

§.§ Physical System

[Figure: Diagrammatic representation of $\tr[\mathbb{U}^t]$. The boxes represent local gates and different legs act on different spatial sites. Matrix product is represented by joining legs and goes from bottom to top. The lines at the left and right edges are joined because we consider periodic boundary conditions ($L\equiv 0$), while those at the top and bottom are joined because of the trace. The background grid specifies the space-time lattice. The gates in the same vertical column are identical.]

We consider a unitary quantum circuit acting on a chain of $2L$ qubits (the local Hilbert space has dimension $d=2$) at half-integer positions that are evolved by discrete applications of the Floquet operator $\mathbb{U} = \mathbb{U}_o \mathbb{U}_e$ such that
\begin{equation}
\mathbb{U}_o = U_0 \otimes\cdots\otimes U_{L-1}, \quad \mathbb{U}_e =U_{1/2} \otimes\cdots\otimes U_{L-1/2}.
\label{eq:evolutionoperator}
\end{equation}
Here $\{U_{x}\}_{x=0, 1/2,\ldots, L-1/2}$ are the local gates, i.e., unitary matrices acting on two adjacent qubits, at positions $x$ and $x+1/2$. Matrices acting at different positions are generically different and we denote by the subscript $x$ the leftmost site where the matrix acts non-trivially. The local gates can be parameterised as
\begin{equation}
U_x = V_x \cdot (u_x \otimes v_x),
\end{equation}
\begin{equation}
\begin{aligned}
&V_x \equiv e^{i \sum_{k=1}^3 J_{k, x} \sigma^{(k)}_x \sigma^{(k)}_{x+1/2}},\,\,\\
& u_x \equiv e^{i \boldsymbol{\theta}_x \cdot \boldsymbol{\sigma}_x},\,\, v_x \equiv e^{i \boldsymbol{\phi}_x \cdot \boldsymbol{\sigma}_{x+1/2}},
\end{aligned}
\label{paramVuv}
\end{equation}
where $\boldsymbol{\sigma}=(\sigma^{(1)},\sigma^{(2)},\sigma^{(3)})$ is a vector of Pauli matrices, and $\boldsymbol{\sigma}_x$ is the corresponding local embedding in $(\mathbb C^2)^{\otimes 2L}$, while $(u_x \otimes v_x)$ is a tensor product of two one-site unitaries $u,v$ positioned at sites $x$ and $x+1/2$ respectively. Since the operator $\mathbb{U}$ is a special kind of matrix-product-operator, it can be depicted using the standard diagrammatic representation of tensor networks <cit.>. In particular, see Fig.
<ref> for a diagrammatic portrayal of $\tr[\mathbb{U}^t]$, where each local gate $U_x$ is drawn as a box whose four legs carry the two incoming and two outgoing qubit indices, and different shades denote in principle different matrices. Note that, choosing $J_{1,x} = J_{2,x} = {\pi}/{4}$, the quantum circuit becomes dual-unitary <cit.>, and also the left-to-right contraction of the diagram can be thought of as the trace of a power of a unitary operator <cit.>. For simplicity, from now on we focus on the case where the interaction term is the same at each half step. Namely, we consider
\begin{equation}
J_{j,x}=J_{j}, \qquad J_{j,x+1/2}=J'_{j}, \qquad j=1,2,3, \quad x \in\mathbb{Z}_L,
\end{equation}
and set
\begin{equation}
V = e^{i \sum_{k=1}^3 J_{k}\,\sigma^{(k)} \otimes\sigma^{(k)}}, \qquad W = e^{i \sum_{k=1}^3 J_{k}'\,\sigma^{(k)} \otimes\sigma^{(k)}},
\end{equation}
while the one-site gates $u_x$ and $v_x$ are position dependent, i.e., $\boldsymbol{\theta}_x$ and $\boldsymbol{\phi}_x$ in (<ref>) are explicitly $x$-dependent.

§.§ Spectral Form Factor and Space Transfer Matrix

Our aim is to characterise the spectral statistics of the Floquet operator (<ref>) for generic choices of the local gates. Namely, we want to understand the general features of the distribution of the eigenvalues of $\mathbb U$, i.e.,
\begin{equation}
{\rm spect}[\mathbb{U}]= \{e^{i \varphi_j};\ j=1,2,\ldots,2^{2L}\},
\end{equation}
where the quasienergies $\varphi_j$ can be taken in $[0,2\pi)$. To this end we compute the spectral form factor (SFF)
\begin{equation}
\label{eq:SFF}
K(t,L) = \mathbb E\left[\sum_{j,j'=1}^{2^{2L}} e^{i (\varphi_j-\varphi_{j'}) t}\right],
\end{equation}
which measures spectral correlations over arbitrary distance and, over the last few years, has emerged as the standard spectral-correlation measure in extended systems, see, e.g., Refs. <cit.>. Here $\mathbb{E}[\cdot]$ denotes an expectation value over an ensemble of similar systems, which we conveniently generate by taking the local gates $v_x, u_x$ in Eq. (<ref>) to be independent and identically-distributed random matrices (equivalently one can take the angles $\{\boldsymbol \theta_x, \boldsymbol \phi_x\}$ to be i.i.d. random). The specific distribution of $\{v_x, u_x\}$ is irrelevant for the discussion below, and we will make a concrete choice when discussing the parameterisation used in our numerical studies (cf. Sec. <ref>). Introducing the average allows us to filter out the system-specific details and obtain a universal result which is expected to only depend on gross properties of the system. Specifically, in ergodic systems the SFF is expected to take a universal form that coincides with that observed in random matrices of the same size. The specific random-matrix ensemble to compare with depends on the anti-unitary symmetries of the Floquet operator (e.g. time reversal symmetry). In the generic case of no anti-unitary symmetries, which is the one of interest here, the relevant prediction is that of Dyson's Circular Unitary Ensemble (CUE), which reads as <cit.>
\begin{align}
K_{\rm CUE}(t,L) &= {\rm min}(t, 2^{2L})\,,
\label{eq:CUEform}
\end{align}
showing a characteristic ramp-like shape. On the other hand, whenever the system is strongly non-ergodic (e.g. integrable, or localised) the energy levels are expected to be statistically independent. This means that the SFF should reproduce the Poissonian-distribution result
\begin{equation}
K_{\rm Poisson}(t,L) = 2^{2L}.
\end{equation}
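As an illustration of these two benchmark behaviours, the following sketch (our own, not part of the paper) estimates the SFF for an ensemble of Haar-random (CUE) unitaries, which should follow the ramp ${\rm min}(t,N)$, and for uncorrelated (Poissonian) phases, which should give the constant value $N$; here $N$ plays the role of the Hilbert-space dimension $2^{2L}$.

```python
import numpy as np
from scipy.stats import unitary_group

# Toy estimate of the SFF K(t) = E[|tr U^t|^2] for two reference ensembles.
# N stands in for the Hilbert-space dimension 2^(2L); samples = ensemble size.
N, samples, t_max = 32, 400, 64
rng = np.random.default_rng(1)

K_cue, K_poisson = np.zeros(t_max), np.zeros(t_max)
for _ in range(samples):
    phases_cue = np.angle(np.linalg.eigvals(unitary_group.rvs(N)))
    phases_poi = rng.uniform(0.0, 2.0 * np.pi, N)  # statistically independent levels
    for t in range(1, t_max + 1):
        K_cue[t - 1] += np.abs(np.sum(np.exp(1j * phases_cue * t))) ** 2
        K_poisson[t - 1] += np.abs(np.sum(np.exp(1j * phases_poi * t))) ** 2
K_cue /= samples
K_poisson /= samples

for t in (1, 8, 16, 32, 64):
    print(f"t={t:3d}  K_CUE ~ {K_cue[t-1]:6.1f} (ramp min(t,N)={min(t, N)})"
          f"  K_Poisson ~ {K_poisson[t-1]:6.1f} (expected {N})")
```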
[Figure: Graphical representation of $K(t,L)$ (cf. Eq. (<ref>)). The symbols represent the local gates as per Eqs. (<ref>)–(<ref>). Random one-site gates along the same column coincide, while those on different columns are uncorrelated. A shaded column of the diagram defines the transfer matrix $\mathcal T_{v_x,u_x}$.]

As discussed in Refs. <cit.> (see also Refs. <cit.> where this approach is applied to non-dual-unitary systems), the spectral form factor of a quantum circuit can be rewritten in terms of the trace of the $L$-th power of a transfer matrix acting along the space direction. The idea is to exploit the symmetry of the diagram in Fig. <ref> under a 90° rotation (space-time swap), and the fact that the disorder is uncorrelated in space. The main steps go as follows. First we observe that, for $t>0$, the SFF in Eq. (<ref>) can be written as
\begin{equation}
K(t,L) = \mathbb{E}\left[\left|\tr \mathbb{U}^t\right|^2\right] = \mathbb{E}\left[\tr\left(\mathbb{U}\otimes\mathbb{U}^*\right)^t\right].
\end{equation}
Using now the parameterisation (<ref>, <ref>) we can depict the quantity inside the average on the r.h.s. as in Fig. <ref>, where we introduced diagrammatic symbols (boxes and circles of different shades) for $V$, $W$, the one-site gates $u_x$, $v_x$, and their Hermitian conjugates; random one-site gates of the same shade of colour are the same.
As it is clear from the figure, this quantity can be equivalently represented as the trace of a matrix acting on the vertical lattice (of $2t$ qubits). Namely, using the matrix $\mathcal T_{v_1,u_1}$ defined in the figure we have
\begin{equation}
K(t,L) = \mathbb{E}\left[\tr\left(\mathcal T_{v_1,u_1}\cdots\mathcal T_{v_L,u_L}\right)\right].
\end{equation}
Finally we recall that random gates at different positions (i.e. those along different columns in the figure) are uncorrelated, therefore the average $\mathbb{E} \left[\cdot\right]$ is factorised in space and we can bring it inside the trace
\begin{equation}
\label{eq:transfermat}
K(t,L) = \tr \mathcal{T}^L,\qquad \mathcal T \equiv \mathbb{E} \left[\mathcal T_{v_1,u_1}\right]\,.
\end{equation}
Explicitly, we have introduced the SFF transfer matrix
\begin{equation}
\mathcal{T} \!=\!(\tilde{\mathbb V} \!\otimes_r\! \tilde{\mathbb V}^*) \mathcal{O}^\dag (\Pi_{2t}\!\otimes_r\! \Pi^*_{2t}) (\tilde{\mathbb W} \!\otimes_r\!\tilde{\mathbb W}^*) (\Pi^{-1}_{2t}\!\otimes_r\! \Pi^{T}_{2t}) \mathcal{O}^{\phantom{\dag}}\!\!\!,
\label{eq:SFFTM2}
\end{equation}
and the tensor product $\otimes_r$ combines operators acting in the forward evolving time-sheet with those acting on the backward evolving one (cf. Eq. (<ref>)): from now on we will refer to the lattices over which these operators act respectively as the forward and backward time lattice. In the above equation $(\cdot)^T$ denotes transposition, $\Pi_{n}$ is the one-site shift operator in a chain of $n$ qubits, and $\mathcal O$ is a non-expanding map implementing the average over the local disorder [The average of a set of unitary operators is non-expanding.], i.e.,
\begin{equation}
\mathcal{O} \equiv \mathbb{E}\left[ (u_x \otimes \mathbb 1)^{\otimes t}\otimes_r (u^{\dag}_x \otimes \mathbb 1)^{\otimes t}\right] \times \mathbb{E}\left[ (\mathbb 1\otimes v_x )^{\otimes t}\otimes_r (\mathbb 1\otimes v^{\dag}_x)^{\otimes t}\right].
\end{equation}
Moreover, we defined $\tilde{\mathbb V}$ and $\tilde{\mathbb W}$ as the many-body operators composed of vertical columns of gates acting on the time lattice. More precisely we have
\begin{align}
\tilde{\mathbb V} &\equiv \tilde V^{\otimes t},& \tilde{\mathbb W} &\equiv \tilde W^{\otimes t}\,,
\label{eq:Utildeo}
\end{align}
where $\tilde V$ and $\tilde W$ are obtained by reshuffling the indices of the local gates to propagate from left to right. Namely, we have defined the mapping $\,\tilde{(\cdot )}\,:{\rm End}(\mathbb C^{2}\otimes \mathbb C^{2}) \rightarrow {\rm End}(\mathbb C^{2}\otimes \mathbb C^{2})$ as
\begin{equation}
\tilde{O}_{ki;lj} := O_{ij,kl}, \qquad i,j,k,l\in\{0,1\}.
\end{equation}
Note that, although we cannot generically prove that $\mathcal T$ is diagonalisable, Eq. (<ref>) implies that the SFF is solely determined by its spectrum and the size of its Jordan blocks. To see this we write the Jordan decomposition of $\mathcal T$ as follows
\begin{equation}
\mathcal{T} = \mathcal{R}\,(\mathcal{D}+\mathcal{K})\,\mathcal{R}^{-1},
\end{equation}
where $\mathcal{R}$ is invertible, $\mathcal{D}$ is diagonal, and $\mathcal{K}$ is strictly upper triangular with zeros on the diagonal. The eigenvalues of $\mathcal{D}$ coincide with those of $\mathcal{T}$ and the degeneracy of a given eigenvalue $\lambda$ is given by
\begin{equation}
d_\lambda= \sum_{j=1}^{N_\lambda} \dim(J_{j, \lambda}),
\end{equation}
where $j$ labels all the Jordan blocks corresponding to $\lambda$ (their total number is $N_\lambda$), designated as $J_{j, \lambda}$, while ${\rm dim}(A)$ denotes the dimension of the matrix $A$. Plugging the decomposition (<ref>) into (<ref>) we find
\begin{equation}
K(t,L) = \tr \mathcal{D}^L,
\end{equation}
where we used that products of $\mathcal D$ and at least one $\mathcal K$ are traceless (using the fact that $\mathcal K$ is strictly upper triangular). A notable property of $\mathcal{T}$ is that it has a global $\mathbb{Z}_{t}\times \mathbb{Z}_{t}$ symmetry under independent two-site translations in the forward and backward lattices, i.e.
\begin{equation}
\label{eq:doubletranslation}
[\Pi^{2\tau_1}_{2t} \otimes \Pi^{2\tau_2}_{2t} ,\mathcal{T}] = 0,\qquad \tau_1,\tau_2=0,\ldots,t-1.
\end{equation}
As a result, we can block-diagonalise it by considering a fixed double-momentum sector labelled by $(\nu,\nu')$, with $\nu,\nu'=0,\ldots,t-1$. Therefore, the eigenvalues of $\mathcal T$ (or $\mathcal{D}$) can be labelled as
\begin{equation}
\lambda_{a,(\nu,\nu')}, \qquad \nu,\nu'=0,\ldots,t-1,
\end{equation}
where $a=0,\ldots, N_{{(\nu,\nu')}}-1$ with $N_{{(\nu,\nu')}}$ the size of the $(\nu,\nu')$ sector. Noting that the projector onto the sector $(\nu,\nu')$ can be written as $Y^{(\nu)}\otimes Y^{(\nu')}$, where we introduced
\begin{equation}
Y^{(\nu)} = \frac{1}{t} \sum_{\tau=0}^{t-1} \exp\left(\frac{2\pi i \tau\nu}{t}\right) \Pi_{2t}^{2\tau},
\end{equation}
such that $Y^{(\nu)}Y^{(\nu')} = Y^{(\nu')}Y^{(\nu)}= \delta_{\nu,\nu'}Y^{(\nu)}$, we find
\begin{equation}
N_{(\nu,\nu')}= \tr[Y^{(\nu)}]\,\tr[Y^{(\nu')}].
\end{equation}
Ref. <cit.> proved that in the dual unitary limit of the models considered here and away from the trivial non-interacting point (specifically for $J'_{1,2}=J_{1,2}=\pi/4$ and $J'_{3},J_{3}\neq\pi/4$ in Eq. (<ref>)), the transfer matrix $\mathcal{T}$ has exactly $t$ eigenvalues $\lambda = 1$, corresponding to one-dimensional Jordan blocks, while all other eigenvalues are contained in a disk of radius $r < 1$. In fact, there is exactly one maximal-magnitude eigenvalue in each diagonal sector $(\nu,\nu)$, and the corresponding eigenvectors read as
\begin{equation}
|1_{(\nu,\nu)}\rangle=|Y^{(\nu)}\rangle, \qquad \nu=0,\ldots,t-1.
\end{equation}
In the r.h.s. of Eq. (<ref>) we represented the operator in Eq. (<ref>) as a state of a Hilbert space with doubled dimension using the operator-to-state mapping $(\mathbb C^2)^{\otimes 2t} \otimes_r (\mathbb C^2)^{\otimes 2t} \ni \ket{A} \mapsto A \in {\rm End}((\mathbb C^2)^{\otimes 2t})$ such that
\begin{equation}
\langle i_1\cdots i_{2t}\, j_1\cdots j_{2t} | A\rangle = \langle i_1\cdots i_{2t}| A |j_1\cdots j_{2t}\rangle.
\end{equation}
The presence of $t$ dominant eigenvalues with unit magnitude allows one to simply show that $K(t,L)$ is indeed described by the CUE form in Eq. (<ref>) for large enough $L$. As pointed out in Ref. <cit.>, this $t$-fold degeneracy of $\mathcal T$, and the fact that the eigenvectors preserve only the diagonal part of its $\mathbb{Z}_{t}\times \mathbb{Z}_{t}$ symmetry, indicates that the ramp in the spectral form factor is a manifestation of a spontaneous breaking of the symmetry $\mathbb{Z}_{t}\times \mathbb{Z}_{t}$ down to $\mathbb{Z}_{t}$. The goal of this paper is to understand if and why this spontaneous symmetry breaking is stable as one moves away from the dual unitary point, from the perspective of perturbation theory.

§.§ Minimal Example

Even though our theoretical analysis can be carried out for the full family of circuits, Eqs. (<ref>)–(<ref>), for our numerical investigations it is useful to fix some of the parameters and consider a minimal toy model example. Specifically, we consider
\begin{equation}
\label{eq:model}
V=W \equiv U= e^{i\left( (\tfrac{\pi}{4}-\epsilon_1)\sigma^{(1)} \otimes \sigma^{(1)} + (\tfrac{\pi}{4}-\epsilon_2) \sigma^{(2)} \otimes\sigma^{(2)}\right)},
\end{equation}
set $\theta^{(2)}_x = \phi^{(2)}_x = 0$, and average over $\theta^{(1)}_x, \theta^{(3)}_x, \phi^{(1)}_x, \phi^{(3)}_x$ with a Gaussian measure with zero mean and infinite variance, i.e., maximal disorder strength. This gives
\begin{equation}
\label{eq:model2}
\mathcal{O} = \mathcal{O}_{0}^{(1)} \mathcal{O}_{1/2}^{(1)}\mathcal{O}_{0}^{(3)}\mathcal{O}_{1/2}^{(3)},
\end{equation}
where we set
\begin{equation}
\mathcal{O}_s^{(\alpha)} = \lim_{\sigma\to\infty} \exp\left[- \frac{\sigma^2}{2} \left(\sum_{\tau\in\mathbb{Z}_t} \left(\sigma^{(\alpha)}_{\tau+s}\otimes_r \mathbb 1 - \mathbb 1 \otimes_r \sigma^{(\alpha)}_{\tau+s}\right)\right)^{2}\right], \qquad s=0,1/2.
\end{equation}
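To make the minimal model of Eq. (<ref>) concrete, the following sketch (our own illustration; the helper names are chosen here and are not from the paper) builds the two-qubit gate, applies the index reshuffle $\tilde{O}_{ki;lj} = O_{ij,kl}$ defined above, and checks that at $\epsilon_1=\epsilon_2=0$ the reshuffled gate is unitary, i.e., the circuit is dual-unitary, while for $\epsilon\neq 0$ it is not.

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def minimal_gate(eps1, eps2):
    """Two-qubit gate U = exp(i[(pi/4 - eps1) XX + (pi/4 - eps2) YY]) of the minimal model."""
    H = (np.pi / 4 - eps1) * np.kron(sx, sx) + (np.pi / 4 - eps2) * np.kron(sy, sy)
    return expm(1j * H)

def reshuffle(O):
    """Space-time reshuffle: O_tilde[ki, lj] = O[ij, kl] with i, j, k, l in {0, 1}."""
    O4 = O.reshape(2, 2, 2, 2)               # O4[i, j, k, l] = O[ij, kl]
    return np.einsum('ijkl->kilj', O4).reshape(4, 4)

for eps in (0.0, 0.1, 0.3):
    U = minimal_gate(eps, eps)               # Case II below: eps1 = eps2 = eps
    Ut = reshuffle(U)
    dev = np.linalg.norm(Ut @ Ut.conj().T - np.eye(4))  # 0 iff the gate is dual-unitary
    print(f"eps = {eps:.1f}:  || Ut Ut^dag - 1 || = {dev:.3f}")
```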
These choices drastically simplify our numerical analysis as they reduce the number of parameters to only two; however, they still contain a rich phenomenology. The two extremal cases are found for $\epsilon_1 = \epsilon_2 =0$, when the model corresponds to an ensemble of ergodic dual unitary circuits, and for $\epsilon_1 = \epsilon_2 = {\pi}/{4}$, when the model is trivially localised as there is no coupling between different sites. We verified that including the term $J \sigma^{(3)} \otimes \sigma^{(3)}$ in the local gate ($J$ has been set to 0 in Eq. (<ref>)), or a finite disorder variance, does not qualitatively modify the numerical results. For instance, in Fig. <ref> we show the gap in the $(0,0)$ momentum sector of $\mathcal T$ at the dual unitary point $\epsilon_1=\epsilon_2=0$, i.e., $\Delta_0 = 1 - |\lambda_1|$, as a function of $J$ and $t$. We note that the gap is largest when $J= 0$, and closes ($\Delta_0\to 0$) when $J\to {\pi}/{4}$, i.e., when the local unitary $U$ equals the swap matrix and the circuit becomes non-interacting. We also emphasise that the choice of an infinite variance $\sigma^2\to\infty$, i.e., a maximal disorder strength, should be the most challenging for the survival of dual unitary behaviour away from the dual unitary point. Indeed, this is the optimal regime in which one might expect Floquet many-body localisation (MBL).

[Figure: The gap $\Delta_0(t) = 1-|\lambda_1|$ at the dual unitary point for various $t$ and $J$; all data is collected from the $(\nu,\nu') = (0,0)$ symmetry sector. The data for $J = 0$ is featured in Fig. <ref>.]

The number of parameters can be reduced further by fixing the ratio between $\epsilon_1$ and $\epsilon_2$. The value of the ratio, however, affects rather drastically how dual-unitarity is broken and has to be chosen with care. Here we choose the two values of $\epsilon_2/\epsilon_1$ corresponding to the weakest and strongest breaking of dual-unitarity. To find them we note that $\rho_{\tilde U}=\tilde U \tilde U^\dag/4$ is a positive matrix with unit trace [The latter property follows from the fact that, since $\tilde{(\cdot)}$ is just an index reshuffling, we have $\tr\smash{[\tilde U \tilde U^\dag]}=\tr\smash{[U U^\dag]}=4$.] and can be interpreted as a quantum state. Therefore, the dual-unitarity breaking can be estimated by computing the fidelity between $\rho_{\tilde U}$ and the maximally mixed state $\rho_\infty = \mathbb 1/4$. In particular, an explicit calculation gives
\begin{equation}
F(\rho_\infty, \rho_{\tilde U}) = \left(\tr\left|\sqrt{\rho_\infty}\sqrt{\rho_{\tilde U}}\right|\right)^{2}= \cos^2(\epsilon_1)\cos^2(\epsilon_2),
\end{equation}
where $|A|= \sqrt{A A^\dag}$. This means that, considering without loss of generality $\epsilon_2\leq\epsilon_1 =: \epsilon$, the two cases corresponding respectively to the weakest and strongest breaking of dual unitarity are $\epsilon_2=0$ and $\epsilon_2=\epsilon$. In the following we consider both these cases, referring to them, respectively, as Case I and Case II.

§.§ Numerical Approaches

Besides standard exact diagonalisation of (<ref>) for small systems, in this paper we provide three numerical tests to characterise the space transfer matrix $\mathcal{T}$. The first is an Arnoldi exact diagonalisation method to converge several leading eigenvalues simultaneously (a schematic of how such a computation can be organised is sketched below). We use this to study the full spectrum of $\mathcal T$ without resolving the translation symmetry in Eq. (<ref>). Due to the number of eigenvalues we want to compare and the behaviour of the spectrum this method is limited to $t\leq 6$ (our full forward-backward time lattice comprises $4t$ qubits).
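The following sketch illustrates how such an Arnoldi computation can be set up with standard sparse linear algebra tools. It is our own illustration: the routine `apply_T`, which should implement the action of the averaged SFF transfer matrix on a vector of the doubled time lattice, is only a placeholder here (a random non-normal matrix stands in for $\mathcal{T}$), and the value of $t$ is kept artificially small.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigs

# The doubled time lattice has 4t qubits (forward plus backward copy), so the
# transfer matrix acts on a space of dimension 2^(4t).
t = 2
dim = 2 ** (4 * t)
rng = np.random.default_rng(2)
stand_in = (rng.standard_normal((dim, dim))
            + 1j * rng.standard_normal((dim, dim))) / np.sqrt(2 * dim)  # placeholder

def apply_T(v):
    # Placeholder matvec; a real implementation would apply the gate columns,
    # the shift operators and the disorder-averaging map O without ever
    # storing the full matrix.
    return stand_in @ v

T_op = LinearOperator((dim, dim), matvec=apply_T, dtype=complex)

# Converge the 12 eigenvalues of largest magnitude (the edge of the spectrum).
vals = eigs(T_op, k=12, which='LM', return_eigenvectors=False)
print(np.sort(np.abs(vals))[::-1])
```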
Next, we isolate a specific double-momentum sector $(\nu,\nu)$ and characterise the action of the transfer matrix within the sector (a detailed discussion of how this is achieved is presented in Appendix <ref>). This removes degeneracy near the spectral edges of $\mathcal{T}$ and in most cases allows us to study the spectrum with the power iteration method, giving us access to times $t\leq 9$ (for some data sets we had to use the Arnoldi method also within a given sector as the gap was too small). Moreover, we use this representation to evaluate the coefficients in our perturbative analysis of Sec. <ref>. Our third method is a Monte Carlo-based approach that approximates the maximal eigenvalue in a given double-momentum sector by stochastically unravelling the average in Eq. (<ref>), allowing us to reach times $t\leq 12$. The idea is to observe that, before the average, the transfer matrices in Eq. (<ref>) are written as the tensor product of two operators (related by complex conjugation) acting only on the forward and backward lattices. Therefore, if we do not perform the average explicitly we can consider only one of the lattices, halving the number of qubits we need to simulate. More concretely we proceed as follows. Instead of considering directly Eq. (<ref>) we construct local gates on the time lattice as
\begin{equation}
\Tilde{U}_{\Vec{\theta}_x} = \Tilde{U}\cdot (u_x \otimes v_x),
\end{equation}
with
\begin{align}
& u = \exp \left(i \theta^{(1)} \sigma^{(1)}\right) \exp \left( i \theta^{(3)} \sigma^{(3)}\right),\\
&v = \exp \left(i \phi^{(1)} \sigma^{(1)}\right) \exp \left( i \phi^{(3)} \sigma^{(3)}\right).
\end{align}
Therefore, the transfer matrix is characterised by four angles $\Vec{\theta} = (\theta^{(1)}_x,\theta^{(3)}_x,\phi^{(1)}_x,\phi^{(3)}_x)$, which are uniformly generated random numbers, and can be written as
\begin{equation}
\mathcal{T}_{\Vec{\theta}_x} = \tilde{\mathbb U}_{\Vec{\theta}_x} \otimes_r \tilde{\mathbb U}_{\Vec{\theta}_x}^*, \qquad
\tilde{\mathbb U}_{\Vec{\theta}_x} = \Tilde{U}_{\Vec{\theta}_x}^{\otimes t}\, \Pi_{2t}\, \Tilde{U}_{\Vec{\theta}_x}^{\otimes t}\, \Pi_{2t}^{-1}.
\end{equation}
We then observe that for large enough $N$
\begin{align}
\mathbb{E}\left[\prod_{n=1}^N\mathcal{T}_{\Vec{\theta}_n} |\psi \rangle\otimes_r |\psi\rangle^* \right]
\approx \lambda_0^N c_0|\lambda_0\rangle + \dots,
\label{eq:monte}
\end{align}
where $\ket{\psi}$ is a random state on the forward lattice, $c_0 = \langle \lambda_0| (|\psi\rangle\otimes_r| \psi\rangle^*)$, and the average is performed over all choices of $\Vec{\theta}_n$, $n=1\dots N$. We now estimate the left hand side of Eq. (<ref>) by sampling over the choices of $\Vec{\theta}_n$. Letting
\begin{equation}
|\psi_N\rangle=\prod_{n=1}^N \tilde{\mathbb U}_{\Vec{\theta}_n} |\psi\rangle,
\end{equation}
we have
\begin{equation}
\mathbb{E}\left[\prod_{n=1}^N\mathcal{T}_{\Vec{\theta}_n} |\psi \rangle\otimes_r |\psi\rangle^* \right]\approx \frac{1}{\Lambda} \sum_{m=1}^{\Lambda} \ket{\psi_N}_m \otimes_r \ket{\psi_N}_m^*,
\end{equation}
where $m$ labels the samples of $\Vec{\theta}_1, \ldots, \Vec{\theta}_N$ and $\Lambda$ denotes the sample size. Therefore, plugging back into (<ref>) we obtain
\begin{equation}
\label{eq:monte2}
\frac{1}{\Lambda} \sum_{m=1}^{\Lambda} |\langle \psi |\psi_N\rangle_m|^2 \approx \lambda_0^N c_0 (\langle \psi | \otimes_r \langle\psi|^* )|\lambda_0\rangle + \dots.
\end{equation}
The left hand side of this equation can be easily computed for a large number of samples and moderately large $t$ because it is defined only on the forward lattice (the objects involved live in a vector space of half the size); a schematic implementation is sketched below.
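The following sketch (our own illustration; the sizes, sample numbers and helper names are chosen here for readability and are much smaller than those used in the paper) implements this stochastic estimate for a tiny time lattice: it builds $\tilde{\mathbb U}_{\vec\theta}$ on the $2t$-qubit forward lattice, accumulates $\frac{1}{\Lambda}\sum_m |\langle\psi|\psi_N\rangle_m|^2$, and reads off $\lambda_0$ from the slope of its logarithm as a function of $N$. For simplicity the initial state is not projected onto a fixed momentum sector, so the estimate targets the overall dominant eigenvalue; the gate helpers from the earlier sketch are repeated to keep the block self-contained.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def minimal_gate(eps1, eps2):
    H = (np.pi/4 - eps1) * np.kron(sx, sx) + (np.pi/4 - eps2) * np.kron(sy, sy)
    return expm(1j * H)

def reshuffle(O):                                  # O_tilde[ki, lj] = O[ij, kl]
    return np.einsum('ijkl->kilj', O.reshape(2, 2, 2, 2)).reshape(4, 4)

def shift(n):
    """One-site cyclic shift Pi_n on n qubits as a 2^n x 2^n permutation matrix."""
    P = np.zeros((2**n, 2**n))
    for b in range(2**n):
        bits = [(b >> i) & 1 for i in range(n)]
        bits = bits[-1:] + bits[:-1]               # cyclically relabel the qubits
        P[sum(bit << i for i, bit in enumerate(bits)), b] = 1.0
    return P

def kron_power(A, t):
    out = np.array([[1.0 + 0j]])
    for _ in range(t):
        out = np.kron(out, A)
    return out

t, eps, N_max, Lambda = 3, 0.2, 8, 1000            # the paper uses Lambda up to 10^7
rng = np.random.default_rng(3)
Ut = reshuffle(minimal_gate(eps, eps))             # space-direction two-qubit gate
Pi = shift(2 * t)
psi = rng.standard_normal(2**(2*t)) + 1j * rng.standard_normal(2**(2*t))
psi /= np.linalg.norm(psi)

acc = np.zeros(N_max)                              # accumulates |<psi|psi_N>|^2
for _ in range(Lambda):
    phi = psi.copy()
    for n in range(N_max):
        a1, a3, b1, b3 = rng.uniform(0, 2*np.pi, 4)   # fresh one-site disorder
        u = expm(1j * a1 * sx) @ expm(1j * a3 * sz)
        v = expm(1j * b1 * sx) @ expm(1j * b3 * sz)
        layer = kron_power(Ut @ np.kron(u, v), t)     # (U_tilde_theta)^{tensor t}
        phi = layer @ (Pi @ (layer @ (Pi.T @ phi)))   # apply U_tilde on 2t qubits
        acc[n] += np.abs(np.vdot(psi, phi))**2
acc /= Lambda

slope = np.polyfit(np.arange(1, N_max + 1), np.log(acc), 1)[0]
print("estimated lambda_0 ~", np.exp(slope))
```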
A simple way to extract $\lambda_0$ from this relation is to take the logarithm of both sides and find the slope of the data as a function of $N$. In what follows we take $\Lambda=10^7$ for small $t$ and reduce our sample size to $\Lambda= 10^5$ for $t=11,12$. This method works best when $\lambda_0$ is significantly larger than one. Finally we mention that if one is only interested in characterising the evolution in time of the leading eigenvalues of $\mathcal T$ one can use the time-evolution-based method recently introduced in Ref. <cit.>. This method assumes a certain structure for the spectrum of $\mathcal T$ and evaluates the leading eigenvalues by performing an evolution in time in a quantum circuit with finite length and judiciously selected twisted boundary conditions. Here, however, we are interested in an assumption-free characterisation of $\mathcal T$ which goes beyond its leading eigenvalues (cf. Sec. <ref>). Therefore, we do not use this approach.

§ STABILITY OF THE ERGODIC PHASE: NUMERICAL SURVEY

[Figure: $\ln(\lambda_0(t)-1)$ as a function of time for $\epsilon=0.01,0.1,0.2$. Dotted lines indicate numerical fits. Circle data points are retrieved with the power method isolated in the $(\nu,\nu') = (0,0)$ symmetry sector. Diamond data points are calculated using the Monte Carlo. Monte Carlo data consists of $10^7$ samples for $t\leq 10$, while for $t >10$ we use $10^5$ samples. In the top two panels we supply exact values and Monte Carlo estimates for all times $t$, while in the bottom two panels we simply plot exact values for $t\leq 9$ and the remaining points are Monte Carlo estimations.]

As recalled in Sec. <ref>, on the dual-unitary manifold the quantum circuit is chaotic in the sense that its spectral form factor exhibits the linear ramp characteristic of random matrix theory (cf. Eq. (<ref>)). This property is found to correspond to the space transfer matrix $\mathcal{T}$ having $t$ eigenvalues equal to one. In this section we investigate how the spectrum of $\mathcal{T}$ behaves when we move away from the dual unitary point using the model in Eq. (<ref>). Our expectation is that the $\epsilon$-dependence of the spectrum of $\cal T$ should be smooth for small enough $\epsilon$. This means that there should exist a phase where the spectrum of $\cal T$ is qualitatively similar to that of the dual-unitary point: the leading eigenvalue, or SLE, should have an approximate $t$-fold degeneracy near unity (which becomes increasingly exact as $t\rightarrow \infty$) and the corresponding eigenvectors should lie in the diagonal momentum sectors (i.e. $(\nu,\nu)$). This phenomenology would be consistent with the spontaneous symmetry breaking scenario proposed in Ref. <cit.> (cf. Sec. <ref>). More concretely, we expect that at late enough times Eq. (<ref>) is dominated by the leading eigenvalues so that
\begin{equation}
K(t,L) \approx \sum^{t-1}_{\nu=0} \lambda^L_{0,(\nu,\nu)}.
\end{equation}
For $\epsilon=0$ this approximation is exact and the eigenvalues are all equal to $1$, so that $K(t,L)=t$ for large enough $L$. When $\epsilon \neq 0$, the requirement is that a linear ramp ensues on time scales in excess of the so-called Thouless time (expected to be sub-polynomial in $L$ in the absence of conservation laws). Combined with Eq. (<ref>), this suggests that the leading eigenvalues in each sector are split from $1$ by an amount that decays exponentially in time, i.e.,
\begin{equation}
\lambda_{0,(\nu,\nu)}=1+O(e^{-\gamma t}), \qquad \gamma>0.
\end{equation}
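In practice the decay rate $\gamma$ is extracted from the numerically computed leading eigenvalues by a linear fit of $\ln(\lambda_0(t)-1)$ against $t$, as in the dotted lines of the figure above. A minimal sketch of such a fit (our own illustration, run here on synthetic data rather than on actual transfer-matrix eigenvalues) is:

```python
import numpy as np

# Synthetic stand-in for measured leading eigenvalues lambda_0(t):
# lambda_0(t) = 1 + c * exp(-gamma * t), plus a little noise.
rng = np.random.default_rng(4)
ts = np.arange(3, 10)
c_true, gamma_true = 0.8, 0.6
lam0 = 1.0 + c_true * np.exp(-gamma_true * ts) * (1 + 0.02 * rng.standard_normal(ts.size))

# Fit ln(lambda_0 - 1) = ln(c) - gamma * t
slope, intercept = np.polyfit(ts, np.log(lam0 - 1.0), 1)
print(f"gamma approx {-slope:.3f} (true {gamma_true}),  c approx {np.exp(intercept):.3f} (true {c_true})")
```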
On the other hand, an MBL phase should be characterised by a single leading eigenvalue going to $\lambda_{0,(0,0)} = 4$ for large $t$, while all remaining eigenvalues having magnitude less than unity. In the language of Ref. <cit.> this corresponds to a symmetry-unbroken phase. The gap $\Delta = |\lambda_0| - |\lambda_1|$ versus time $t$ for various choices of perturbation. This data was retrieved using the Arnoldi method in the $(\nu,\nu') = (0,0)$ symmetry sector. Let us now proceed to substantiate these expectations using our exact numerical approaches. Even though maximal-magnitude eigenvalues exist in all the $(\nu,\nu)$ symmetry sectors, we begin by focussing our attention on the $(0,0)$ sector (and temporarily drop the sector label) as this gives us the ability to investigate longer times (larger time-lattice sizes). At the end of this section we will discuss all symmetry sectors simultaneously. We observe that in the $(0,0)$ sector, $\epsilon \neq 0 $ implies $\lambda_0(t) > 1$ for all the observable times $t$, that is, perturbing away from the dual unitary point increases the leading eigenvalue (Fig. <ref>). Conversely, in the other sectors we detect a decreasing in the size of the maximal eigenvalue, i.e., $|\lambda_{0,(\nu,\nu)}|<1$ for $\nu\neq 0$ and $\epsilon\neq 0$. In Fig. <ref> we report $\ln(\lambda_0(t)-1)$ versus $t$ for both Cases I and II and variety of values of $\epsilon$, all relatively small. We find good agreement with the following scaling form \begin{equation}\label{eq:lambda0expdecay} \lambda_0(t) \approx 1 + c(\epsilon) e^{-\gamma(\epsilon) t}, \end{equation} where $\gamma(\epsilon)>0$ and $c(\epsilon)$ are constants depending solely on $\epsilon$. In general, we observe that $\gamma(\epsilon)$ decreases monotonically with increasing $\epsilon$. In particular, we find that $\gamma(\epsilon)$ is substantially larger in Case I than in Case II (cf. Sec. <ref>). More specifically, we observe that taking $\epsilon$ twice as big in Case I compared to Case II produces similar $\gamma(\epsilon)$. For example, $\gamma(0.05)|_{\rm Case II} \approx 0.6191$ while $\gamma(0.1)|_{\rm Case I} \approx 0.5962$. This persists for all $\epsilon$ tested. We similarly observe $\gamma(0.3)_{\rm Case II} \approx 0.0687$ and $\gamma(0.6)_{\rm Case I} \approx 0.0805$. For larger values of $\epsilon$ the exponent $\gamma(\epsilon)$ becomes too small to be reliably determined in the time window accessible by our exact methods. However, our numerics indicate that for Case I $\gamma\left(\epsilon \to \frac{\pi}{4}\right)$ is non-zero. This suggests that for Case I ergodicity is stable for the entire parameter regime. In principle one can explore larger times by using the time-evolution based approach introduced in Ref. <cit.> (cf. Sec. <ref>), however, here our focus is chiefly on small $\epsilon$. Having discussed the behaviour of the leading eigenvalue in the $(0,0)$ sector we now move on and study the gap between the latter and the rest of the spectrum. In particular, in Fig. <ref> we report Δ(t) := |λ_0(t)|-|λ_1(t)|, as a function of time and for different choices of $\epsilon$ (again for both Cases I and II). In all the cases explored we find that the gap satisfies $\Delta(t)>0$ in $(0,0)$ sector and, therefore, $\lambda_0(t)$ is sufficient to characterise the large $L$ behaviour of the SFF. This means that, for small $\epsilon$, the $\nu=0$ component of Eq. (<ref>) is confirmed. To conclude our numerical test of Eqs. 
(<ref>) and (<ref>) it remains to check that this behaviour is mirrored in other symmetry sectors. We achieve this by using the Arnoldi method to check all sectors simultaneously. Our results are reported in Fig. <ref>. Interestingly, the leading eigenvalue in the $(0,0)$ sector is always observed to be the dominant one. Along with this property, it is observed to converge to $1$ more slowly than the other leading eigenvalues. For example the $t=6$, $\epsilon=0.6$ data set for Case I has $\lambda_{0,(0,0)} = 1.45898$ while the second furthest from unity is $\lambda_{0,(1,1)} = \lambda_{0,(5,5)} = 0.838665$. Leading eigenvalues are always found to be real numbers, while sub-leading values in general are real or complex with magnitude smaller than unity. The leading eigenvalue in the $(0,0)$ sector is the only one to be consistently greater than unity, other leading eigenvalues can oscillate around $1$ as functions of time. In summary, our numerical analysis is consistent with the expectation that for small $\epsilon$ the spectrum of $\cal T$ is a smooth deformation of that at the dual unitary point: We have $t$ real eigenvalues close to unity while the rest have significantly smaller magnitudes. We stress that this is the case despite the fact that our minimal model (<ref>, <ref>) involves maximal disorder strength. Interestingly, in all our numerical experiments we also observed that the $(0,0)$ sector has the largest eigenvalue. This is consistent with the symmetry breaking picture of Ref. <cit.>: as the system is perturbed away from the maximally ergodic point, i.e., the dual unitary point, a single symmetric eigenvalue becomes the dominant one. Full spectrum $\lambda_{n,(\nu,\nu)}$ analysis (all diagonal double momentum sectors) of $\mathcal{T}$ for various $\epsilon$ and $t$. Points are plotted in polar coordinates, with the radius being the magnitude of the eigenvalue (this plot therefore covers up some degeneracy in the sub-leading eigenvalues). The polar angle is $2\pi \nu/t$, where $\nu$ labels the symmetry sector $(\nu,\nu)$. Results were obtained by an Arnoldi method converging $n=12$ eigenvalues at the edge of the spectrum. § PERTURBATION THEORY In this section we propose an analytical explanation for the main observation of Sec. <ref>. Namely, that the dual-unitary eigenvalue structure is maintained for finite $\epsilon$ suggesting structural stability of the dual-unitary phase. The idea is to fix $\epsilon$ and write the maximal eigenvalue of $\mathcal{T}$ in each diagonal double-momentum sector $(\nu,\nu)$ — we denote it by $\lambda_{(\nu,\nu)}\equiv\lambda_{0,(\nu,\nu)}$ — as a perturbative series in an auxiliary parameter. Studying this series we then show that, if two assumptions are fulfilled, then $|\lambda_{(\nu,\nu)}-1|$ is bounded by a term that is exponentially small in $t$, implying that our circuit models are ergodic at the considered value of $\epsilon$. Remarkably, this happens even for the minimal model with maximal disorder strength discussed in Sec. <ref>. Specifically, our two assumptions are (i) No “maximal level" crossing occurs in the perturbative expansion, i.e., the evolution of the maximal eigenvalue in each sector can be followed by tracing the smooth deformation of the maximal eigenvalue at the dual-unitary point. (ii) The ($\epsilon$-dependent) coefficients of our perturbative series are bounded by an exponentially decaying function of $t$ and grow at most exponentially in $n$, where $n$ is the perturbative order. 
A more precise formulation of these assumptions is given in the upcoming derivation. The first assumption is safe for generic enough perturbations: if the perturbation couples all the eigenvectors in a given sector, all adjacent level encounters are avoided crossings. Therefore, Assumption (i) can be rephrased by saying that we assume our perturbation to be sufficiently generic. This assumption is consistent with our numerical survey of Sec. <ref> (cf. Fig. <ref>). The second assumption is our main one and, as we discuss in Sec. <ref>, can be partly justified by an analytical argument. In Sec. <ref> we also show that Assumption (ii) fails at the trivially localised point (i.e. for Case II and $\epsilon=\pi/4$, cf. Sec. <ref>) while we give numerical evidence of it holding for small $\epsilon$. We now proceed to show that when (i) and (ii) hold the quantum circuit (<ref>) is ergodic. We begin considering the eigenvalue equation for the maximal eigenvalue $\lambda$ of the transfer matrix resolved to the double momentum sector $(\nu,\nu)$, namely 𝒯 |*⟩λ = λ|*⟩λ. Here and in the following, we drop the dependence on the subscript $(\nu,\nu)$ whenever it is not ambiguous to do so. Since for $\epsilon=0$ only the diagonal sectors contain the maximal eigenvalues, and the latter are unique, Eq. (<ref>) contains all the necessary information to characterise the spectral form factor for small enough $\epsilon$. It is not obvious, however, that this will continue to hold also for finite $\epsilon$. Here we assume this to be the case, namely we make the following postulate [No leading eigenvalue crossing] The leading eigenvalues of $\mathcal T$ for small enough $\epsilon$ are obtained by smooth deformation of those of $\mathcal T|_{\epsilon=0}$. Namely, $\lambda$ is a smooth function of $\epsilon$. By virtue of Assumption <ref> we can limit our treatment to diagonal double-momentum sectors and only consider Eq. (<ref>), which we treat using a “dressed” perturbative approach. Specifically, we proceed as follows. First we set 𝒯_0 ≡𝒯 |_ϵ=0, T̅_ϵ≡𝒯 -𝒯_0/ϵ, and rewrite Eq. (<ref>) as (𝒯_0 + ϵT̅_ϵ)|*⟩λ = λ|*⟩λ . Next we solve (𝒯_0 + x T̅_ϵ)|*⟩λ = λ|*⟩λ, in perturbation theory in $x$ for fixed $\epsilon$. Finally we recover a perturbative solution of Eq. (<ref>) by setting $x=\epsilon$ in the end. The advantage of this approach is that it involves only a first order correction to the transfer matrix as it happens in standard time-independent perturbation theory in quantum mechanics, at the cost of making the perturbation manifestly $\epsilon$-dependent. Explicitly, we expand both $\ket{\lambda}$ and $\lambda$ in $x$ \begin{align} \ket*{\lambda} &= \sum_{k=0}^{\infty} x^k \ket*{{\lambda}}^{(k)}, & \ket*{\lambda}^{(0)}&=\ket{1}, \label{eq:lambdax}\\ \lambda &= \sum_{k=0}^{\infty} x^k \lambda^{(k)}, & {\lambda^{(0)}}&={1}, \label{eq:lambdax} \end{align} and impose (<ref>) order by order in $x$. As shown explicitly in Appendix <ref>, this yields |*⟩λ^(1) = 𝒢 T̅_ϵ|1⟩, |*⟩λ^(n>1) = ∑_ℓ=1^n-1 ∑_k_1,…,k_ℓ=1 k_1+…+ k_ℓ=n-1^n-1 𝒢 𝒦_k_1 𝒢 𝒦_k_2 ⋯𝒢 𝒦_k_ℓ 𝒢 T̅_ϵ|1⟩, λ^(1) = 1T̅_ϵ1, λ^(2) = 1 T̅_ϵ𝒢 T̅_ϵ1, λ^(n>2) = ∑_ℓ=1^n-2 ∑_k_1,…,k_ℓ=1 k_1+…+ k_ℓ=n-2^n-2 1T̅_ϵ𝒢 𝒦_k_1 𝒢 𝒦_k_2 ⋯𝒦_k_ℓ 𝒢 T̅_ϵ1, where we set \begin{align} &\mathcal Q= \1- \ketbra*{1_{(\nu,\nu)}},\\ &\mathcal G = \sum_{n=0}^\infty \mathcal Q \mathcal{T}^n \mathcal Q = \mathcal Q (1-\mathcal T\mathcal Q)^{-1}, \\ &\mathcal{K}_{k} = \delta_{k,1}\mathcal{\bar{T}}_\epsilon - \lambda^{(k)} \1. 
\label{eq:Kktilde} \end{align} Note that $\mathcal G$ is the resolvent $\mathcal R(z)=(z \1-\mathcal T_0)^{-1}$ of $\mathcal T_0$ projected away from the leading-eigenvalue subspace and evaluated at $z=1$. The projection makes this operator well defined. Using these expressions one can write $\lambda^{(n)}$ in terms of the following expectation values of products of the perturbation $\bar{\mathcal T}_\epsilon$ and the projected resolvent $\mathcal G$ [k_1,…,k_m] = *1T̅_ϵ𝒢^k_1 T̅_ϵ𝒢^k_2⋯𝒢^k_m T̅_ϵ1, k_j≥1, [,] =*1T̅_ϵ1. Explicitly, the first few orders read as λ^(1) = [,], λ^(2) = [1], λ^(3) = [1,1]-[,][2], λ^(4) = [1,1,1]-[1][2]-[,][2,1]-[,][1,2]+[,]^2 [3] , λ^(5) = [1,1,1,1]-[1,1][2]+[,][2]^2-[1,2][1]-[2,1][1] +[,]^2 [3,1] + [,]^2 [2,2] + [,]^2 [1,3]-[,]^3 [4] . Continuing to arbitrary order we have \begin{align} \lambda^{(n>2)} = & \sum_{q=1}^\infty \sum_{p_1, \ldots, p_q =1}^{\infty} \sum_{ k_{11},\ldots, k_{q p_q} =1}^{n-1} \!\!\!\!\!\!\!\!(-1)^{q+1} C[\{k_{ij}\}] \notag\\ &\quad\times\delta\!\!\left[\sum_{i=1}^q \sum_{j=1}^{p_q} k_{ij} -(n-1)\right] \theta\!\!\left[n-\sum_{i=1}^q (p_i + 1)\right]\notag\\ &\quad\times [\phantom{,}]^{n-\sum_{i=1}^q (p_i + 1)} \prod_{m=1}^q {[k_{m1},\ldots,k_{m p_m}]}, \label{eq:perturbativesymbols} \end{align} where $\delta[x]$ and $\theta[x]$ are equal to one when $x=0$ and $x\geq0$ respectively and to 0 otherwise, while $C[\{k_{ij}\}]$ counts the combinatorial multiplicity of a given term (we do not need its explicit expression). Here $q$ denotes the number of ${[k_1,\ldots,k_m]}$ symbols appearing in a given term, $p_m$ is the length of the $m$-th symbol. Each symbol contains $p_m+1$ factors of the perturbation $\mathrm{\bar{\cal T}_\epsilon}$, while each $[\phantom{,}]$ contains one factor. Therefore, the final line of Eq. (<ref>) contains in total $n$ factors of the perturbation, consistent with it being an $n$-th order perturbative term. To find the constraints we used that ${p_i} +1$ is the number of $\bar{\cal T}_\epsilon$ in ${[k_{i1},\ldots,k_{i p_i}]}$ and $\sum_{j=1}^{p_q} k_{ij}$ is the number of $\mathcal G$s. Since the total number of $\bar{\cal T}_\epsilon$ in each term at order $n$ equals $n$ we have ∑_i=1^q (p_i+1) = n-m , where $m$ is the number of $[\phantom{,}]$'s in the term. On the other hand from the last of (<ref>) and the last of (<ref>) we have that each term contributing to the $n$-th order in perturbation theory contains $n-1$ occurrences of $\mathcal G$. Therefore we find ∑_i=1^q ∑_j=1^p_q k_ij = n-1 . Our goal is to bound from above the magnitude of the perturbative correction $\lambda^{(n)}$ to the maximal eigenvalue. To this end we make the following assumption on the scaling of the symbols ${[k_1,\ldots,k_m]}$ The symbols ${[k_1,\ldots,k_m]}$ and $[\phantom{,}]$can be bounded as follows |[k_1,…,k_m]| ≤e^-β(t-t_0) e^α∑_j=1^m k_j e^γ(m+1), |[,]| ≤e^-β(t-t_0), where $\beta > 0, \alpha, \gamma, t_0 $ are independent of $m,k_j$ and $t$. Using Assumption <ref> in Eq. (<ref>) we obtain the following bound |λ^(n)| ≤e^- α-β(t-t_0) e^(α+ γ) n 𝒩_n, where $\mathcal N_n$ is defined as \begin{align} {\mathcal N}_n \equiv & \sum_{q=1}^\infty \sum_{p_1, \ldots, p_q =1}^{\infty} \sum_{ k_{11},\ldots, k_{q p_q} =1}^{n-1} C[\{k_{ij}\}]\notag\\ &\times\delta\!\!\left[\sum_{i=1}^q \sum_{j=1}^{p_q} k_{ij} \!=\! n-1\right] \theta\!\!\left[n\!-\!\sum_{i=1}^q (p_i+1)\right]\!. \end{align} This number can be computed with three simple observations. First we note that (<ref>) implies ${\mathcal N}_1= {\mathcal N}_{2}=1$. 
Next, we observe that ${\mathcal N}_{n>2}$ can be alternatively written as 𝒩_n>2 = ∑_ℓ=1^n-2 ∑_k_1,…,k_ℓ=1 k_1+…+ k_ℓ=n-2^n-2 #_k_1,…,k_ℓ, where ${\#}_{k_1,\ldots,k_\ell}$ denotes the number of terms in the expansion of $\mel{1}{ \bar{\mathcal T}_{\epsilon} \mathcal G {\mathcal K}_{k_1} \mathbb G \mathcal {\mathcal K}_{k_2} \cdots {\mathcal K}_{k_\ell} \mathcal G \bar{\mathcal T}_{\epsilon}}{{1}}$. Finally, noting #_k_1,k_2…,k_ℓ = #_k_1,k_2,…,k_ℓ-1 ( 𝒩_1+1), k_ℓ=1, #_k_1,k_2,…,k_ℓ-1 𝒩_k_ℓ, k_ℓ>1, we immediately find ${\mathcal N}_3 =3$ and the following recursive relation 𝒩_n>3 = 𝒩_n-2+ ∑_p=1^n-3 ( 𝒩_p+δ_p,1) 𝒩_n-p= ∑_p=1^n-1 𝒩_p 𝒩_n-p, where we used ${\mathcal N}_{1}= {\mathcal N}_2 =1$. This means that ${\mathcal N}_{n+1}$ fulfils the recursive relation of the Catalan numbers with the same initial condition. Therefore 𝒩_n = 𝒞_n-1 = 1/n 2n-2n-1 ≃4^n/4 n^3/2 √(π) . Plugging back into (<ref>) we find |λ-1| ≤∑_n=1^∞x^n |λ^(n)| = e^- α-β(t-t_0) ∑_n=1^∞x^n e^(α+γ) n 𝒩_n . To conclude we observe that the sum on the r.h.s. is always convergent for small enough $x$. Namely we have convergence whenever x ≤ϵ≤e^-(α+γ)/4. For all the values of $\epsilon$ fulfilling the above bound we then have |λ-1| ≤A(γ, α) e^-βt. This expression recovers Eq. (<ref>) and shows that whenever Assumptions <ref> and <ref> hold the ergodic phase is stable. $[n]$ terms in perturbation theory as a function of $t$ and $n$. Solid dots represent the natural logarithm of data retrieved through exact evaluation of $[n]$ in the $(\nu,\nu') = (0,0)$ sector. Dotted lines are numerical fits indicating exponential dependence on the independent variables $t,n$. $[n]$ terms in perturbation theory as a function of $t$ and $n$. Solid dots represent the natural logarithm of data retrieved through exact evaluation of $[n]$ in the $(\nu,\nu') = (0,0)$ sector. Dotted lines are numerical fits indicating exponential dependence on the independent variables $t,n$. § DISCUSSION OF ASSUMPTION <REF> Our Assumption <ref> on the behaviour of the perturbative coefficients in Eq. (<ref>) can be justified by an analytical argument assisted by numerical observations. For definiteness we again focus on the sector $(\nu,\nu)=(0,0)$, although other double momentum sectors show similar behaviour. We begin by considering the simplest of the coefficients in Eq. (<ref>), i.e., \begin{equation} \label{eq:peaked} [n] = \mel*{1}{\bar{\cal T}_\epsilon \mathcal G^n \bar{\cal T}_\epsilon }{1}, \end{equation} and compute it numerically for $n=1,\ldots,30$ and $t= 3,\ldots,8$. Some representative examples of our results, for both Cases I and II, are reported in Figs. <ref> and <ref>. Overall we see that, in agreement with Eq. (<ref>), the term increases exponentially as a function of $n$ and is exponentially suppressed as a function of $t$. Data extracted from numerical fits in Fig. <ref> and <ref> for different choices of $\epsilon_1,\epsilon_2$. $\alpha(t,\epsilon),\delta(t,\epsilon)$ correspond to the quantities defined in Eq. (<ref>). $\Delta_0(t)$ is the gap at the dual unitary point. Importantly we plot $\Delta_0(9)$ for reference, $\alpha(t,\epsilon), \delta(t,\epsilon)$ were not extracted for $t=9$. A more refined analysis is provided by fixing $t$, varying $n$, and performing a linear fit. Namely we set log|[n]| ≈α(t, ϵ) n + δ(t, ϵ), and find $\alpha (t, \epsilon)$ and $\delta(t, \epsilon)$ providing the best fit [We called the slope $\alpha(t, \epsilon)$ as it plays the same role as $\alpha$ in Eq. (<ref>).]. Studying these coefficients (cf. Fig. 
<ref>) we find that, very interestingly, the slope $\alpha(t, \epsilon)$ is roughly independent of $\epsilon$. Moreover, and this is a key observation, it matches remarkably well the logarithm of the inverse of the spectral gap calculated at the dual-unitary point, i.e.,
\begin{equation}
\alpha(t, \epsilon) \approx -\log\Delta_0(t).
\end{equation}
In fact, since in this case the largest sub-leading eigenvalue is unique and real (cf. Tab. <ref>), we have $\Delta_0(t)= 1- \lambda_1(t)$. Our Assumption <ref> also requires $\delta(t, \epsilon)$ to decrease linearly in $t$. Here our data are less convincing and sensitive to the parity of $t$, but have the correct overall trend.

\begin{tabular}{c c c}
$t$ & $\lambda_1$ & $\lambda_2$ \\
\hline
$3$ & $0.4081361$ & $-0.0982743\pm 0.1952433i$ \\
$4$ & $0.5225734$ & $0.3919454$ \\
$5$ & $0.4245154$ & $0.2848218$ \\
$6$ & $0.4221175$ & $0.3426469$ \\
$7$ & $0.3630244$ & $0.2755609$ \\
$8$ & $0.3392898$ & $0.3102323$ \\
$9$ & $0.3113076$ & $0.2756089$ \\
\end{tabular}

In this table we present the raw data used to calculate $\Delta_0(t)$ in Fig. <ref>. We generally observe $\lambda_1(t)$ to be real for this choice of parameters. We also include $\lambda_2(t)$, which was extracted from the symmetry-resolved Arnoldi method.

This result can be reproduced by making two assumptions on the structure of the dual-unitary transfer matrix $\mathcal T_0$. First, we assume that $\mathcal T_0$ is diagonalisable: this seems a reasonable assumption given that $\mathcal T_0$ is an average over matrices, and that defective (i.e. non-diagonalisable) matrices are non-generic. In fact, our upcoming reasoning continues to hold also when there are non-trivial Jordan blocks, but only for the eigenvalue $0$. The latter requirement is easier to check numerically, and it is fulfilled in all our numerical observations [Even though the Jordan decomposition is numerically unstable, one can exclude the presence of non-trivial Jordan blocks by verifying that the numerically-computed eigenvalues of $\mathcal T_0$ in a given sector are non-degenerate. In our numerical investigations we only saw degeneracy of the eigenvalue 0.]. Therefore, we write $\mathcal G$ as [To lighten the notation from now on we drop the explicit dependence of $\lambda_j$ on $t$.]
\begin{equation}
\mathcal G = \sum_{j>0} \frac{1}{1-\lambda_j}\, \frac{\ketbra{\lambda_j, {\rm r}}{\lambda_j, {\rm l}}}{\braket{\lambda_j, {\rm r}}{\lambda_j, {\rm l}}}\,,
\end{equation}
where $\ket{\lambda_j, {\rm r}}$ and $\bra{\lambda_j, {\rm l}}$ are respectively the right and left eigenvectors corresponding to the $j$-th sub-leading eigenvalue $\lambda_j$. Next, assuming $1 - \lambda_1 \ll |1-\lambda_{j>1}|$, we truncate the spectral decomposition (<ref>) to the leading eigenvalue,
\begin{equation}
\mathcal G \approx \frac{1}{1-\lambda_1}\, \frac{\ketbra{\lambda_1, {\rm r}}{\lambda_1, {\rm l}}}{\braket{\lambda_1, {\rm r}}{\lambda_1, {\rm l}}}.
\end{equation}
This gives
\begin{equation}
[n] \approx \frac{1}{(1-\lambda_1)^n}\, \frac{\mel*{1}{\bar{\mathcal T}_\epsilon}{\lambda_1, {\rm r}}\mel*{\lambda_1, {\rm l}}{\bar{\mathcal T}_\epsilon}{1}}{\braket{\lambda_1, {\rm r}}{\lambda_1, {\rm l}}},
\end{equation}
which is consistent with the numerical observation (<ref>). In fact, the approximate form (<ref>) of the resolvent can also be used to explain the exponential decay in time observed in the upper panels of Figs. <ref> and <ref>. We begin by noting that for a large enough $\ell \in \mathbb N$ one can make the following approximation
\begin{equation}
\frac{\ketbra{\lambda_1, {\rm r}}{\lambda_1, {\rm l}}}{\braket{\lambda_1, {\rm r}}{\lambda_1, {\rm l}}} \approx \left(\frac{\mathcal T_0 \mathcal Q}{\lambda_1}\right)^{\!\ell} = \left(\frac{\mathcal T_0}{\lambda_1}\right)^{\!\ell} \mathcal Q,
\end{equation}
where $\mathcal Q$ is the projector orthogonal to the leading eigenvector of $\mathcal T_0$ (cf. Eq. (<ref>)) and we neglected terms that are prima facie $O((\lambda_j/\lambda_1)^\ell)$. This approximation is in fact more subtle than it might appear, because the terms $\ketbra{\lambda_j, {\rm r}}{\lambda_j, {\rm l}}/({\braket{\lambda_j, {\rm r}}{\lambda_j, {\rm l}}})$ can have large operator norm (possibly even exponentially large in $t$). This means that one might need to consider $\ell = O(t)$ to safely neglect higher order terms.
Here we assume that this is not the case and take $\ell$ to be $O(t^0)$. Using (<ref>) we can rewrite Eq. (<ref>) as
\begin{equation}
[n](1-\lambda_1)^n \approx \frac{1}{\lambda_1} \mel*{1}{\bar{\mathcal T}_\epsilon \mathcal T^{\ell}_0 \mathcal Q \bar{\mathcal T}_\epsilon}{1} = \frac{1}{\lambda_1 \epsilon^2}\left( \mel*{1}{\mathcal T\, \mathcal T^{\ell}_0\, \mathcal T}{1} - \mel*{1}{\mathcal T}{1}^2 \right).
\end{equation}
Next, we rewrite the last line in terms of the original time evolving gates, undoing the space-time duality transformation discussed in Sec. <ref>. The idea is to represent the expectation value of a product of $n$ transfer matrices on a state in terms of the time-evolution operator of a chain of $n$ qubits. The state translates into the boundary conditions imposed on the time-evolution operator. In our case the boundary conditions pair forward and backward evolution, giving a non-unitary boundary term to the evolution operator. More concretely, considering for instance the first term we have
\begin{equation}
\mel*{1}{\mathcal T\, \mathcal T^{\ell}_0\, \mathcal T}{1} = \frac{\sum_{\tau=0}^{t-1} \mel*{\mathbb 1}{\mathcal T\, \mathcal T^{\ell}_0\, \mathcal T}{\Pi_{2t}^{2\tau}}}{\sum_{\tau=0}^{t-1} \braket*{\mathbb 1}{\Pi_{2t}^{2\tau}}} \simeq \frac{1}{2^{2t}} \sum_{\tau=0}^{t-1} \mel*{\mathbb 1}{\mathcal T\, \mathcal T^{\ell}_0\, \mathcal T}{\Pi_{2t}^{2\tau}},
\end{equation}
where $\ket{O}$ is the state corresponding to the operator $O$ under the mapping in Eq. (<ref>). In the second step we dropped terms that are at most $O(2^{-t})$ by using
\begin{equation}
\sum_{\tau=0}^{t-1} \braket*{\mathbb 1}{\Pi_{2t}^{2\tau}} = \sum_{\tau=0}^{t-1} 2^{2\gcd(t,\tau)} = 2^{2t} + \sum_{\tau=1}^{t-1} 2^{2\gcd(t,\tau)},
\end{equation}
and noting that the sum on the r.h.s. is bounded by $2^t$.

[Figure omitted: diagrammatic representation of $\mathcal B_{n,0}$ (a) and $\mathcal B_{n,\tau\neq0}$ (b); the original TikZ diagrams are not reproduced here.]

The terms on the r.h.s. of Eq. (<ref>) can be easily translated in the time evolving picture because the states $\ket*{\Pi_{2t}^{2j}}$ implement simple pairings between backward and forward evolution. For instance, the term with $\tau=0$ is written as
\begin{equation}
\mel*{\mathbb 1}{\mathcal T\, \mathcal T^{\ell}_0\, \mathcal T}{\mathbb 1} = \mathbb E\!\left[\operatorname{tr}\!\left[\mathcal B_{\ell+2,0}^{\,t}\right]\right],
\end{equation}
where we introduced the $4^{2n-1} \times 4^{2n-1}$ matrix
\begin{equation}
\mathcal B_{n,0} = \bigl((\mathbb U \otimes_r \mathbb U^*)\otimes m_n\bigr)\bigl(m_0 \otimes (\mathbb W \otimes_r \mathbb W^*)\bigr).
\end{equation}
The latter is written in terms of the time evolution operators for $2n-2$ qubits ($\in {\rm End}(\mathbb C^{2^{2n-2}})$)
\begin{align}
& \mathbb U = U_{0} \otimes \ldots \otimes U_{n-1}, \\
& \mathbb W = U_{1/2} \otimes \ldots \otimes U_{n-1/2},
\end{align}
and the boundary matrices ($\in {\rm End}(\mathbb C^{4})$)
\begin{align}
&[m_0]_{ij} = \frac{1}{2} \sum_{r,s=1}^{2} [U_{-1/2}]_{s,j_1}^{r,i_1} ([U_{-1/2}]_{s,j_2}^{r,i_2})^* ,\\
&[m_{n}]_{ij} = \frac{1}{2} \sum_{r,s=1}^{2} [U_{n}]_{j_1,s}^{i_1,r} ([U_{n}]_{j_2,s}^{i_2,r})^*.
\end{align}
Introducing a convenient diagrammatic representation for objects acting on both the forward and backward time sheets (the folded gates $U\otimes U^{*}$ and $W\otimes W^{*}$, the single-site dressings $u_{x}\otimes u_{x}^{*}$ and $w_{x}\otimes w_{x}^{*}$, and the circle state; the defining TikZ diagrams are omitted here), we can depict $\mathcal B_{n,0}$ as in Fig. <ref>a. Analogously, a generic term with $\tau\neq 0$ is written as
\begin{equation}
\mel*{\mathbb 1}{\mathcal T\, \mathcal T^{\ell}_0\, \mathcal T}{\Pi_{2t}^{2\tau}} = \mathbb E\!\left[\operatorname{tr}\!\left[\mathcal B^{\,t}_{\ell+2,\tau}\right]\right],
\end{equation}
where we introduced the matrices
\begin{align}
&\mathcal B_{n,\tau} = ((\mathbb U \!\otimes_r\! \mathbb U^*) \!\otimes\! b_{\tau} \otimes \1^{\otimes 2(\tau-1)}) \notag\\
&\qquad\quad \times (m_0\!\otimes\!((\mathbb W\otimes \Pi_{\tau}) \!\otimes_r\! (\mathbb W^*\otimes \Pi^*_{\tau}))),\\
&[b_\tau]_{i j} = \frac{1}{d} [U_{n}]_{j_1,j_3}^{i_1,j_4} ([U_{n}]_{j_2,i_3}^{i_2,i_4})^*.
\end{align}
Note that $\mathcal B_{n,\tau}\in{\rm End}(\mathbb C^{4^{2n+\tau-1}})$ and $b_\tau \in{\rm End}(\mathbb C^{4^{2}})$. Introducing an analogous diagrammatic symbol for $b_\tau$ (TikZ diagram omitted), we can depict $\mathcal B_{n,\tau}$ as in Fig. <ref>b. The traces of $\mathcal B_{n,\tau}$ can be treated following Ref. <cit.>. In particular, using Theorem 1 of the aforementioned reference we have that if there are no $x\leq y$ such that
\begin{align}
U_x (\1\otimes a) U^\dag_x = \1\otimes a', \label{eq:leftcondition}\\
U_y (b\otimes \1) U^\dag_y = b'\otimes \1, \label{eq:rightcondition}
\end{align}
for some local operators $a,a',b,b'$, then
\begin{equation}
\operatorname{tr}\!\left[\mathcal B_{n,0}^{\,t}\right] = 1 + O(e^{-\beta t}), \qquad \beta>0.
\end{equation}
Note that, although $\beta>0$ for all values of $n$, Ref. <cit.> gives no information on its $n$ dependence. With a similar reasoning we prove in Appendix <ref> that if (<ref>) does not hold for any $x$, then $\rho(\mathcal B_{n,\tau})<1$, where we used $\rho(\cdot)$ to denote the spectral radius. This implies
\begin{equation}
\operatorname{tr}\!\left[\mathcal B_{n,\tau}^{\,t}\right] = O(e^{-\beta t}), \qquad \tau>0.
\end{equation}
Since for $(\epsilon_1,\epsilon_2)\neq (\pi/4,\pi/4)$ the gates fulfilling (<ref>) and (<ref>) have measure zero in the disorder average, we conclude that
\begin{equation}
\mel*{1}{\mathcal T\, \mathcal T^{\ell}_0\, \mathcal T}{1} \simeq 1 + A(\epsilon)\, e^{-\beta(\epsilon) t},
\end{equation}
where the constant $A(\epsilon)$ vanishes for $\epsilon=0$ because the l.h.s. is trivially equal to one for $\epsilon=0$ while Fig. <ref> suggests that $\beta(0)$ is finite. Note that, since we do not control the $\beta$ dependence on $\ell$, we cannot exclude that it approaches 0 in the limit of infinite $\ell$. This is why we had to assume $\ell=O(t^0)$ in Eq. (<ref>). Proceeding analogously we find
\begin{equation}
\mel*{1}{\mathcal T}{1} \simeq 1 + B(\epsilon)\, e^{-\beta(\epsilon) t},
\end{equation}
with $B(\epsilon)\simeq A(0)'\,\epsilon/2$ for small $\epsilon$. Putting all together in Eq. (<ref>) we then have
\begin{equation}
[n] \approx \frac{C_2(\epsilon)\, e^{-\beta(\epsilon) t}}{\lambda_1^{2}\,(1-\lambda_1)^{n}},
\end{equation}
where $C_2(\epsilon)$ is $O(1)$ for small $\epsilon$. If we compare Eq. (<ref>) with our two parameter fit for $[n]$ (Eq. (<ref>)), we predict that $-\beta$ is the slope of $\delta$ with respect to $t$. The right panel of Fig. <ref> then suggests that $\beta(\epsilon)$ depends weakly on $\epsilon$ for small enough $\epsilon$. Flat coefficients $[1,1,1\dots 1]$ as a function of $t$ for different values of $m$ and $\epsilon$.
The top panels report two examples of Case I ($(\epsilon_1, \epsilon_2)=(0.1,0)$ and $(\epsilon_1,\epsilon_2)=(0.3,0)$) while the bottom ones report two examples of Case II ($\epsilon_1 = \epsilon_2= 0.01$ and $\epsilon_1 = \epsilon_2= 0.1$). Proceeding along similar lines we can estimate all the coefficients in Eq. (<ref>). In particular, using the approximations (<ref>) and (<ref>) we have that the generic coefficient ${[k_1,\ldots,k_m]} $ is written as [k_1,…,k_m] ≈*1T̅_ϵ(𝒯_0 𝒬 T̅_ϵ)^m1/λ_1^m(1-λ_1)^k_1+…+k_m . While applying (<ref>) and (<ref>) we have [k_1,…,k_m] ≈C_m+1(ϵ) e^-βt/λ_1^m(1-λ_1)^k_1+…+k_m, [,] ≈C_1(ϵ) e^-βt, with $C_m(\epsilon)= O(1)$ for small $\epsilon$. This provides an analytical justification to Assumption <ref>. In Figs. <ref> and <ref> we provide an independent numerical test of this assumption by considering the behaviour of the flat coefficients \begin{eqnarray} \label{eq:flat} [1,1\dots 1] = \mel*{1}{\bar{\cal T}_\epsilon \mathcal G \bar{\cal T}_\epsilon \mathcal G\cdots \mathcal G \bar{\cal T}_\epsilon }{1}. \end{eqnarray} These contributions are those for which the approximation (<ref>) is the least justified as the latter becomes exact only when the resolvent is taken to infinite power. Nevertheless, from Fig. <ref> we clearly see an exponential decay in time in agreement with Eq. (<ref>) and hence with Assumption <ref>. This despite the relatively short times accessible in our numerical simulations. Instead, Fig. <ref> reports the behaviour of $[1,1\dots 1]$ as a function of $m$, i.e. the number of ones in the coefficient. We see that the coefficient decays exponentially in $m$. This suggests that $C_{m+1}(\epsilon)$ in Eq. (<ref>) is bounded by an exponentially decaying constant. In fact we note that, at least for Case I and small enough $\epsilon$, Fig. <ref> shows that the exponential decay in time becomes stronger when $m$ increases. This can be explained by our asymptotic form Eq. (<ref>) if we admit that the factor $\lambda_1(1-\lambda_1)$ appearing in the denominator increases as a function of time. Note that this growth cannot be unbounded as $\lambda_1(1-\lambda_1)\leq 1/4$. Flat coefficients $[111\dots 1]$ as a function of $m$ for different values of $t$ and $\epsilon$. The top panels report two examples of Case I ($(\epsilon_1, \epsilon_2)=(0.1,0)$ and $(\epsilon_1,\epsilon_2)=(0.3,0)$) while the bottom ones report two examples of Case II ($\epsilon_1 = \epsilon_2= 0.01$ and $\epsilon_1 = \epsilon_2= 0.1$). Finally we stress that Eq. (<ref>) do not hold at the trivially localised point $\epsilon_1 = \epsilon_2= \pi/4$. Indeed, in that case Eqs. (<ref>) and (<ref>) do not apply as Eqs. (<ref>) and (<ref>) are clearly satisfied for all $x$ and $y$. As a result, in this case there is no exponential decay in time of the coefficients (<ref>). For instance, using the simple form of $\mathcal T$ at the localised point (see, e.g., Ref. <cit.>) it is easy to show that [,]|_ϵ_1 = ϵ_2= π/4 = 4/π + O(t^-α). In this figure we define $|Y_n\rangle = \left(\bar{\mathcal{T}}_\epsilon\mathcal{G} \right)^n \bar{\mathcal{T}}_\epsilon |Y_0\rangle $. All data is collected exactly in the $(\nu,\nu') = (0,0)$ double-momentum sector.In the top two panels we have lines of best fit on a logged y-axis, indicating exponential decay in $n$. 
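For concreteness, the following toy sketch assembles the symbols $[k_1,\ldots,k_m]$ from a projected resolvent exactly as they are defined earlier in this section, but for small random stand-in matrices: `T0`, `Tbar`, and the leading vector are placeholders chosen for illustration and are not the circuit transfer matrix or its perturbation.

```python
import numpy as np

def symbol(ks, Tbar, G, one):
    """[k_1, ..., k_m] = <1| Tbar G^{k_1} Tbar G^{k_2} ... G^{k_m} Tbar |1>."""
    vec = Tbar @ one
    for k in reversed(ks):
        vec = Tbar @ np.linalg.matrix_power(G, k) @ vec
    return np.vdot(one, vec)

# Toy stand-ins: T0 diagonal with a unique leading eigenvalue 1, |1> its
# leading eigenvector, Tbar a random symmetric perturbation direction.
rng = np.random.default_rng(0)
d = 6
T0 = np.diag([1.0, 0.4, 0.3, 0.2, 0.1, 0.05])
one = np.zeros(d)
one[0] = 1.0
Tbar = rng.normal(size=(d, d))
Tbar = (Tbar + Tbar.T) / 2

# Projected resolvent at z = 1: G = pinv(Q (1 - T0) Q) with Q = 1 - |1><1|
Q = np.eye(d) - np.outer(one, one)
G = np.linalg.pinv(Q @ (np.eye(d) - T0) @ Q)

print(symbol([1], Tbar, G, one))      # [1]    = <1| Tbar G Tbar |1>
print(symbol([1, 1], Tbar, G, one))   # [1,1]  = <1| Tbar G Tbar G Tbar |1>
print(symbol([2, 1], Tbar, G, one))   # [2,1]
```

For a diagonal `T0` the pseudo-inverse reproduces the spectral form of $\mathcal G$ used above, so the same routine can be reused to evaluate the flat coefficients $[1,1,\dots,1]$ at any order.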
§ CONCLUSIONS In this work we laid down a general framework to investigate the structural stability of dual-unitary spectral correlations, which can be loosely thought of as the quantum many-body analogue of the theory of structural stability of hyperbolic flows in classical chaotic dynamical systems <cit.>. Our guiding principle has been that, contrary to integrable systems, dual-unitary systems should be robust under typical perturbations as they are quantum chaotic. Therefore, the spectral correlations of a perturbed dual-unitary system, or at least their universal part, should be accessible by devising an appropriate perturbation theory. Here we formulated such a perturbation theory and identified two key assumptions needed for a rigorous proof of its convergence. We then provided compelling numerical evidence for the validity of these assumptions in a particular family of perturbed Floquet dual-unitary circuits, and corroborated them with a heuristic analytical argument (supported by numerical evidence) involving the spectral decomposition of the resolvent of the unperturbed transfer matrix. Besides their implications for the structural stability of dual-unitary correlations, our findings have an important consequence for the interplay between ergodicity and disorder. Indeed, since spatial disorder does not affect dual-unitarity breaking (only two-body couplings can break dual-unitarity), our results imply that whenever a quantum many-body system is close to an interacting (non-SWAP) dual unitary point its spectral correlations are always random-matrix-like, irrespective of the disorder strength; for instance, the particular family used in our numerical analysis has maximal disorder strength. This rules out the possibility of Floquet MBL in the thermodynamic limit for systems close enough to the dual-unitary point. Although these findings are remarkable, they are in many ways only a stepping stone to the development of a comprehensive theory of structural stability in quantum many-body systems, and many key questions are still open. In particular, we identify two compelling directions for future research. The first is to find quantitative estimates or bounds for the radius of convergence of our perturbative expansion. Indeed, at the moment we have merely shown that, under our two assumptions, the radius is finite. However, we gave no information on its value. A quantitative estimate of the radius of convergence could potentially lead to the identification of the point of transition to the non-ergodic (i.e. localised) regime, which might occur at a finite value of the two-body coupling or only at the trivially localised point where the qubits are disconnected. A related question is whether one can identify the transition point by expanding around the trivially localised point. The second direction is, of course, to provide rigorous mathematical proofs of our assumptions, in particular of the second one, which appears the more substantial. We believe that the heuristic analytical argument we provided in support of that assumption can be used as a blueprint for such a proof. We thank Pavel Kos for collaboration on related topics and Juan Garrahan for useful discussions. We acknowledge financial support from the Royal Society through the University Research Fellowship No. 201101 (J. R. and B. B.), a UKRI Future Leaders Fellowship MR/T040947/1 (C. K.), and Grants P1-0402, N1-0334, N1-0219 of the Slovenian Research and Innovation Agency (T. P.). J. R., T. P. and B. B.
warmly acknowledge the hospitality of the Simons Center for Geometry and Physics during the program “Fluctuations, Entanglements, and Chaos: Exact Results” where part of the project has been carried out. § PERTURBATION THEORY TO AN ARBITRARY ORDER Writing explicitly the coefficient of $x^n$ in Eq. (<ref>) we have (1-𝒯_0)|*⟩λ^(n) = T̅_ϵ|*⟩λ^(n-1) - ∑_k=1^n λ^(k) |*⟩λ^(n-k) , where we assumed $n>0$ as the equation is trivially satisfied for $n=0$. Next, we note that fixing the arbitrary phase and normalisation of $\ket{\lambda}$ as gives $|\lambda\rangle^{(n)}\perp |\lambda\rangle^{(0)}$ for $n>0$. Therefore, we can rewrite Eq. (<ref>) as \begin{align} \!\!\!\lambda^{(n)} &= \langle \lambda|^{(0)}\bar{\cal T}_\epsilon \ket{\lambda}^{(n-1)}, \label{eq:recursiveeval}\\ \!\!\!\!\!\!\!\ \ket*{\lambda}^{(n)} \!\! &= \mathcal G \bar{\cal T}_\epsilon \ket{\lambda}^{(n-1)}\! -\! \sum_{k=1}^{n-1} \lambda^{(k)} \mathcal G \!\ket*{\lambda}^{(n-k)} \notag\\ &=\!\! \sum_{k=1}^{n-1} \mathcal G \mathcal{K}_k \!\ket*{\lambda}^{(n-k)} \!\!. \label{eq:recursivelambdavec} \end{align} To obtain the first equation we took the scalar product of Eq. (<ref>) with $\ket*{\lambda}^{(0)}$ and to obtain the second we multiplied it by $\mathcal Q$, used $\ket{\lambda}^{(n>0)}=\mathcal Q \ket{\lambda}^{(n>0)}$, and noted $\mathcal G = (\mathcal Q (1-\mathcal T_0) \mathcal Q)^{-1}$. Finally, we recalled the definition of $\mathcal{K}_k$ from Eq. (<ref>). Using (<ref>) we find |*⟩λ^(1) = 𝒢 T̅_ϵ|*⟩λ^(0) , recovering the first of (<ref>). Moreover, for $n\geq 2$ we obtain |*⟩λ^(n) = 𝒢 𝒦_n-1 𝒢 T̅_ϵ|*⟩λ^(0) + ∑_k=1^n-2 𝒢 𝒦_k |*⟩λ^(n-k) . Using Eq. (<ref>) we then have \begin{align} \ket*{\lambda}^{(n)} &= \mathcal G \mathcal{K}_{n-1} \mathcal G \bar{\cal T}_\epsilon \ket*{\lambda}^{(0)} + \sum_{k_1=1}^{n-2}\sum_{k_2=1}^{n-k_1-1} \mathcal G \mathcal{K}_{k_1} \mathcal G \mathcal{K}_{k_2} \ket*{\lambda}^{(n-k_1-k_2)} \notag\\ &= \mathcal G \mathcal{K}_{n-1} \mathcal G \bar{\cal T}_\epsilon \ket*{\lambda}^{(0)} + \sum_{k_1=1}^{n-2} \mathcal G \mathcal{K}_{k_1} \mathcal G \mathcal{K}_{n-k_1-1} \mathcal G \bar{\cal T}_\epsilon \ket*{\lambda}^{(0)} + \sum_{k_1=1}^{n-2}\sum_{k_2=1}^{n-k_1-2} \mathcal G \mathcal{K}_{k_1} \mathcal G \mathcal{K}_{k_2} \ket*{\lambda}^{(n-k_1-k_2)} \notag\\ &= \mathcal G \mathcal{K}_{n-1} \mathcal G \bar{\cal T}_\epsilon \ket*{\lambda}^{(0)} + \sum_{\substack{k_1,k_2=1 \\ k_1+k_2=n-1}}^{n-1} \mathcal G \mathcal{K}_{k_1} \mathcal G \mathcal{K}_{k_2} \mathcal G \bar{\cal T}_\epsilon \ket*{\lambda}^{(0)} + \sum_{k_1=1}^{n-2}\sum_{k_2=1}^{n-k_1-2} \mathcal G \mathcal{K}_{k_1} \mathcal G \mathcal{K}_{k_2} \ket*{\lambda}^{(n-k_1-k_2)} \,. \end{align} We iterate this procedure $n-1$ times and note that ∑_k_1=1^n-2∑_k_2=1^n-k_1-2⋯∑_k_n=1^n-k_1⋯-k_n-1-1 𝒢 𝒦_k_1 𝒢 𝒦_k_2⋯G 𝒦_k_n|*⟩λ^(n-k_1 ⋯-k_n) =0, to obtain Eq. (<ref>). Using that expression in Eq. (<ref>) gives Eq. (<ref>). § SPECTRAL RADIUS OF $\MATHCAL B_{N,\TAU}$ In this appendix we show that if Eq. (<ref>) is not satisfied for any $x$, then $\rho(\mathcal B_{n,\tau})<1$. We begin by noting that $\|\mathcal B_{n,\tau}\|_\infty=1$. This can be easily seen writing \begin{align} &\mathcal B_{n,\tau}\mathcal B_{n,\tau}^\dag =\notag\\ &\!\!\!= U_{1/2}(m_0 m^\dag_0 \otimes \1) U^\dag_{1/2}\otimes \1^{\otimes (2n-2)} \otimes b_\tau b_\tau^\dag \otimes \1^{\tau -1} \end{align} and observing $\|m_0\|=\|b_\tau\|=1$. Since ρ(ℬ_n,τ) ≤ℬ_n,τ_∞=1, our goal is then to show that Eq. 
(<ref>) is not satisfied for any $x$, $\mathcal B_{n,\tau}$ does not have a unit-magnitude eigenvalue. Let us proceed by contradiction and assume that there exists a $\ket{\Lambda}$(normalised) such that ℬ_n,τ |Λ⟩= e^i θ |Λ⟩. Using $\|\mathcal B_{n,\tau}\|_\infty=1$ we then have \begin{align} &\mathcal B_{n,\tau}^\dag \mathcal B_{n,\tau} \ket{\Lambda}= \ket{\Lambda},\\ &\mathcal B_{n,\tau} \mathcal B_{n,\tau}^\dag \ket{\Lambda} = \ket{\Lambda},\label{eq:BBdag} \end{align} which also give ℬ^†_n,τ |Λ⟩= e^-i θ |Λ⟩ . We now proceed along the lines of Ref. <cit.> and note that (<ref>) and the fact that Eq. (<ref>) is not satisfied for any $x\leq y$ imply |Λ⟩= |⟩⊗|Λ'⟩, where $\ket{\mcirc} = \sum_{k=1}^2 \ket{k}\otimes_r\ket{k}/\sqrt{2}$ (cf. Eq. (<ref>)). Contracting (<ref>) with this state we find ℬ'_n,τ |Λ'⟩= e^i θ |Λ'⟩. where we introduced \begin{align} &\mathcal B'_{n,\tau} = (m_0 \otimes (\mathbb U' \!\otimes_r\! \mathbb U^{\prime*}) \!\otimes\! b_{\tau} \otimes \1^{\otimes 2(\tau-1)})\notag\\ &\qquad\qquad\times (((\mathbb W\otimes \Pi_{\tau}) \!\otimes_r\! (\mathbb W^*\otimes \Pi^*_{\tau})))%\in{\rm End}(\mathbb C^{4^{2n+\tau-2}}), \end{align} where $\mathbb U'$ is obtained from $\mathbb U$ by removing the leftmost gate. Proceeding as before we find |Λ'⟩ = |⟩⊗|Λ”⟩. This procedure can be iterated $2n-2$ times and gives |Λ⟩ = |⟩^⊗2(n-1)⊗|ν⟩, where $\ket{\nu}$ fulfils (b_τ ⊗^⊗2(τ-1)) (m_0⊗(Π_τ ⊗_r Π^*_τ)) |ν⟩ = e^i θ |ν⟩ . This implies that b_τ (m_0⊗)_∞=1. Namely that there exists a $\ket{\lambda}$ such that (m_0^†⊗) b_τ^†b_τ (m_0⊗)λ=1. Recalling that $\|m_0\|=\|b_\tau\|=1$ this also means \begin{align} & (m_0^\dag m_0\otimes \1)\ket{\lambda} = \ket{\lambda}, \\ & b_{\tau}^\dag b_{\tau} (m_0\otimes \1)\ket{\lambda} = (m_0\otimes \1)\ket{\lambda}. \end{align} The first of these equations implies $\ket{\lambda}=\ket{\mcirc}\otimes\ket{a}$ so that we finally have b_τ^†b_τ |⟩⊗|a⟩ = |⟩⊗|a⟩. Rewriting this equation in terms of the local gate it reads as U_n (⊗a) = b⊗ , b= 1/2 tr_2[U_n (⊗a)]. This equation is solved only by $a$ unitary and $U_n=\1\otimes a^\dag$. Since such a gate fulfils Eq. (<ref>), we have a contradiction. § WORKING IN A FIXED DOUBLE-MOMENTUM SECTOR In this section we briefly describe how to reduce the numerical analysis to a given double-momentum sector. As mentioned in the main text the transfer matrix $\mathcal{T}$ is invariant under two-site translations in the forward and backward lattices \begin{equation} [\Pi^{2\tau_1}_{2t} \otimes \Pi^{2\tau_2}_{2t} ,\mathcal{T}] = 0,\qquad \tau_1,\tau_2=0,\ldots,t-1. \end{equation} For the forward and backward lattice the most natural basis to work in is therefore the eigenbasis of the two-site shift operator. Considering the forward time lattice, we have $2t$ qubits, and we know we have $\Pi_{2t}^{2t} = 1$, giving eigenvalues $e^{{2\pi} i \nu/t}$ with $ \nu = 0, \dots t - 1$. To generate the eigenbasis we select a set of reference states $|f\rangle$ (taken to be product states in the computational basis) and write \begin{equation} |f_\nu\rangle = \frac{1}{\sqrt{L_f}} \sum_{r = 0}^{L_f-1} e^{{2\pi} i \nu r /t}\Pi_{2t}^{2r} |f\rangle, \end{equation} where $L_f$ is the period of the reference state $\Pi_{2t}^{2L_f}|f\rangle = |f\rangle$. Typically $L_f = t$ however some special states will have periods that are integer multiples of $t$. Computationally this representation reduces our overall storage from $2^{2t} \to {2^{2t}}/{t}$ approximately, with the largest sector being the $\nu =0$ sector (see, e.g., Ref. <cit.> for more details). 
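A minimal sketch of this construction is given below: bitstrings play the role of the computational-basis reference states $|f\rangle$, a two-site shift is a cyclic rotation of the $2t$ qubits by two positions, and the helper names are ours.

```python
import numpy as np

def two_site_shift(bits):
    # One application of the two-site translation on the 2t time-lattice qubits.
    return bits[-2:] + bits[:-2]

def momentum_state(ref_bits, nu, t):
    """|f_nu> = L_f^{-1/2} sum_r e^{2 pi i nu r / t} Pi^{2r} |f>, as {bitstring: amplitude}."""
    # period L_f of the reference state under two-site shifts
    bits, L = two_site_shift(ref_bits), 1
    while bits != ref_bits:
        bits = two_site_shift(bits)
        L += 1
    amps = {}
    bits = ref_bits
    for r in range(L):
        amps[bits] = amps.get(bits, 0.0) + np.exp(2j * np.pi * nu * r / t) / np.sqrt(L)
        bits = two_site_shift(bits)
    return amps

# Example: a reference state of 2t = 8 qubits (t = 4) with period L_f = t, in the nu = 1 sector
t = 4
f = (1, 0, 0, 0, 0, 0, 0, 0)
print(momentum_state(f, nu=1, t=t))
```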
For the purposes of discussing complexity later in this section we will call $D = 2^{2t}$ and $D_\nu = {2^{2t}}/{t}$. An important observation about $\mathcal{T}$ is that it is made up of a product of operators which independently are translation invariant under two shifts. This is evident from Eq. (<ref>) and implies that we can update vectors in distinct steps. First let us treat the case where we do not couple the forward and backward lattice, i.e., we consider an operator of the form \begin{equation} \tilde{\mathbb{U}}_o = \tilde{U}_0 \otimes\cdots\otimes \tilde{U}_{2t-2}, \end{equation} where $\tilde{U}$ is a two-local operator (it acts non-trivially only on a pair of nearest neighbours). We will use the above as an example to illustrate working in the momenta basis for operators with this general structure. Since \begin{equation} [\Pi_{2t}^{2r}, \tilde{\mathbb{U}}_o ] = 0, \end{equation} we can write \begin{equation} \tilde{\mathbb{U}}_o|\psi_\nu \rangle = \frac{1}{\sqrt{L_f}} \sum_f c_f \sum_{r = 0}^{L_f-1}e^{{2\pi} i \nu r /t} \Pi_{2t}^{2r} \tilde{\mathbb{U}}_o|f\rangle. \label{eq:Uopsi} \end{equation} Where the sum over $f$ is taken over the set of representation basis states. One memory inefficient way to evaluate this expression is to simply evaluate $\tilde{\mathbb{U}}_o|f\rangle$ in the full Hilbert space, and then compress the state back into the translation invariant representation. This approach is computationally costly and likely memory bound, severely slowing down the code. It also removes the advantage of working in the symmetry resolved basis by increasing storage requirements to $D$, which, once we couple the forward and backward lattice, will eliminate our advantage with this approach. In fact, an open question we were not able to answer is how to evaluate $\tilde{\mathbb{U}}_o|\psi_\nu\rangle$ faster than $O(D_\nu^2)$. The operations similarity to a discrete Fourier transform indicates this may be possible. Instead of updating the vector directly from the expression (<ref>) we focus our efforts on computing matrix elements of the operator \begin{equation} \label{eq:resolveU} \!\!\langle m_\nu | \tilde{\mathbb{U}}_o | f_\nu \rangle \!=\!\! \sum_{p = 0}^{L_m-1}\sum_{r = 0}^{L_f-1} \frac{e^{{2\pi} i \nu (r-p) /t}}{\sqrt{L_m L_f }}\langle m |\Pi_{2t}^{2(r-p)} \tilde{\mathbb{U}}_o|f\rangle. \end{equation} The above equation is simpler to evaluate. To see this without loss of generality take $r=p=0$. Because $\tilde{\mathbb{U}}_o$ is made up of a product of commuting terms the expression factorises \begin{equation} \langle m|\tilde{\mathbb{U}}_o|f\rangle = \prod_{j=0}^{t}\langle m_{2j} m_{2j+1} | \tilde{U}_o | f_{2j} f_{2j+1} \rangle, \end{equation} where we have broken the representative state into its computational basis form for individual time lattice qubits. Note that Eq. (<ref>) can be computed in $O(t)$ steps due to repeated computations. For convenience we will call this new symmetry resolved operator \begin{equation} \tilde{\mathbb{U}}_{0,(m,f)}^{(\nu)}\equiv \langle m_\nu | \tilde{\mathbb{U}}_o | f_\nu \rangle. \end{equation} This allows us to store it in a $D_\nu \times D_\nu$ dimensional matrix, the same size as we will see, as the many body vectors once we combine the forward and backward lattice. A vector on the full space can be represented by \begin{equation} |\psi_{(\nu,\nu')}\rangle = \sum_{f,b} C_{f,b} |f_\nu \rangle |b_{\nu'}\rangle, \end{equation} where the sum is taken over the set of representation basis states. 
Updating the full forward and backward lattice state with the symmetry resolved operator is now given by \begin{equation} \left( \tilde{\mathbb{U}}_o \otimes \tilde{\mathbb{U}}_o^* \right) |\psi_{(\nu,\nu')}\rangle \to \tilde{\mathbb{U}}_o^{(\nu)} C \tilde{\mathbb{U}}_o^{(l) \dagger}. \end{equation} The final piece of the puzzle is to understand the operation \begin{equation} \mathcal{O}_{0}^{(3)} |\psi_{(\nu,\nu')}\rangle = \sum_{f,b} C_{f,b} \mathcal{O}_{0}^{(3)} |f_\nu \rangle |b_{\nu'}\rangle. \end{equation} We focus here on $\mathcal{O}_{0}^{(3)}$ as it is diagonal in the computational basis. Other non-diagonal terms can be evaluated with a simple basis rotation, and then following the steps we will outline. The action of this operator on the translation invariant basis is trivial, we have, \begin{equation} \!\!\!\!\mathcal{O}_{0}^{(3)} \!|f_\nu \rangle |b_{\nu'}\rangle \!=\!\! \begin{cases} |f_\nu \rangle |b_{\nu'}\rangle & \sum_{m=0}^{t-1} f_{2m} = \sum_{m=0}^{t-1} b_{2m} \\ 0 & \text{ otherwise} \end{cases}\!\!. \end{equation} Where we again used the computational basis representation of our reference states. This concludes all necessary steps to reduce the overall storage required by the problem by a factor of $t^2$, along with working with the symmetry resolved transfer matrix.
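To illustrate the two ingredients above, the pairwise factorisation of $\langle m|\tilde{\mathbb{U}}_o|f\rangle$ and the update of the coefficient matrix $C$, here is a small sketch. The bit-to-index convention, the helper names, and the random placeholder gate are ours and are not taken from the implementation described in the text.

```python
import numpy as np

def layer_matrix_element(U2, m_bits, f_bits):
    """<m| U_0 x U_1 x ... |f> for a layer of identical two-site gates U2,
    using the pairwise factorisation over neighbouring time-lattice qubits.
    Convention (ours): bit pair (b1, b2) -> basis index 2*b1 + b2."""
    amp = 1.0 + 0.0j
    for j in range(0, len(f_bits), 2):
        row = 2 * m_bits[j] + m_bits[j + 1]
        col = 2 * f_bits[j] + f_bits[j + 1]
        amp *= U2[row, col]
    return amp

def update_coefficients(C, U_nu, U_nu_prime):
    """(U_o x U_o^*) acting on |psi_(nu,nu')>:  C -> U^(nu) C (U^(nu'))^dagger."""
    return U_nu @ C @ U_nu_prime.conj().T

# Tiny demo with a random two-site unitary standing in for the local gate
rng = np.random.default_rng(0)
z = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
q, r = np.linalg.qr(z)
U2 = q * (np.diag(r) / np.abs(np.diag(r)))
print(layer_matrix_element(U2, (0, 1, 1, 0), (1, 1, 0, 0)))
```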
# An Experience-based Direct Generation approach to Automatic Image Cropping Casper Christensen Gracenote A Nielsen Company Aneesh Vartakavi Gracenote A Nielsen Company <EMAIL_ADDRESS> ###### Abstract Automatic Image Cropping is a challenging task with many practical downstream applications. The task is often divided into sub-problems - generating cropping candidates, finding the visually important regions, and determining aesthetics to select the most appealing candidate. Prior approaches model one or more of these sub-problems separately, and often combine them sequentially. We propose a novel convolutional neural network (CNN) based method to crop images directly, without explicitly modeling image aesthetics, evaluating multiple crop candidates, or detecting visually salient regions. Our model is trained on a large dataset of images cropped by experienced editors and can simultaneously predict bounding boxes for multiple fixed aspect ratios. We consider the aspect ratio of the cropped image to be a critical factor that influences aesthetics. Prior approaches for automatic image cropping, did not enforce the aspect ratio of the outputs, likely due to a lack of datasets for this task. We, therefore, benchmark our method on public datasets for two related tasks - first, aesthetic image cropping without regard to aspect ratio, and second, thumbnail generation that requires fixed aspect ratio outputs, but where aesthetics are not crucial. We show that our strategy is competitive with or performs better than existing methods in both these tasks. Furthermore, our one-stage model is easier to train and significantly faster than existing two-stage or end-to-end methods for inference. We present a qualitative evaluation study, and find that our model is able to generalize to diverse images from unseen datasets and often retains compositional properties of the original images after cropping. We also find that the model can generate crops with better aesthetics than the ground truth in the MIRThumb dataset for image thumbnail generation with no fine tuning. Our results demonstrate that explicitly modeling image aesthetics or visual attention regions is not necessarily required to build a competitive image cropping algorithm. ## 1 Introduction With the proliferation of devices like smartphones, smart televisions, and tablets, imagery in different aspect ratios is necessary for a user interface to comply with responsive web design standards. These images are often manually cropped, which can be very laborious to perform for a large number of images. Automatic Image Cropping, therefore, has great practical significance for large catalogs of images. An effective aesthetic cropping algorithm could be helpful to industries and applications that store and display large amounts of media, such as social networks or image sharing platforms, image galleries, surveillance systems, photography and graphic design software. Image cropping is often performed to highlight visual attention regions discarding unwanted regions in the process. Alternatively or in conjunction, cropping can be performed to improve or maintain the aesthetics of an image. Experienced users, including trained photographers, may use composition concepts such as the rule of thirds or the golden ratio to maximize aesthetics while deciding how to crop images. The aspect ratio of the final cropped image is also essential when performing this task, as it affects the aesthetics and the framing of the image. 
For example, selecting a portrait crop from a landscape image with multiple subjects can only include a subset of them, and the final crop should not include any partially cropped faces for aesthetic reasons. Advances in image cropping methods could therefore inform and guide research in visual perception and aesthetics.

The detection of visual attention regions in images has been an active area of research for some time [33]. Attention-based automatic cropping approaches build on it by drawing bounding boxes around the image's salient regions, assuming that the best crop should include the salient region. Assessment of image aesthetics is also an active research area, ranging from early low-level rules and features, which are difficult to formulate and do not generalize, to recent deep learning approaches [9]. The aspect ratio of an image is also essential to the perceived aesthetics, as recognized by some recent aesthetics assessment approaches [35, 5]. However, it is rarely mentioned as a requirement or concern in prior approaches to automatic image cropping. Although some techniques can output bounding boxes in different aspect ratios [32, 43], they do so by evaluating multiple candidates and are therefore inefficient. Image cropping is a typical first stage for thumbnail generation approaches, which try to create smaller representations of images. These strategies often create thumbnails in fixed aspect ratios but usually do not consider image aesthetics [9, 3].

Early approaches to automatic image cropping tended to focus on either the aesthetics or the visual attention regions. More recent solutions try to incorporate both by modeling the cropping process in two stages. First, they determine a visual attention region or a Region Of Interest (ROI), and then, they draw a bounding box to maximize aesthetics. This two-stage approach has some disadvantages: it can fail when the image has no salient regions [24, 32], or when it has multiple salient subjects, some of which may need to be excluded for aesthetic reasons [32, 22]. We believe that a single-stage approach that implicitly models the image's aesthetics and attention regions can overcome some of the drawbacks of existing image cropping techniques. Our proposed model is less susceptible to failure cases that occur when attention or aesthetics are modeled explicitly, such as when no salient region is found or when the ground truth for aesthetic assessment is ambiguous due to neutral image aesthetics [9].

We evaluate several common CNN architectures in a transfer learning framework and find that a WideResNet50-2 [42] backend achieves the best overall performance on our dataset with an IoU of $0.867$. This model is more lightweight and efficient than two-stage approaches and is simpler to train. Without any model optimization or pruning, our model can process over $600$ images/sec, or over $3000$ crops/sec as each image is cropped in $5$ aspect ratios, on a single Nvidia Tesla V100 GPU during inference. This is significantly faster than existing approaches [3, 22, 25, 24, 23]. To the best of our knowledge, this work is the first attempt at addressing the problem of image cropping directly, without explicitly modeling visual attention or aesthetics. Due to a lack of public datasets to support our approach, we train our CNN-based model using a large internal dataset of images cropped by experienced editors in fixed aspect ratios, who simultaneously maintain image aesthetics and important image content.
We propose an efficient architecture that predicts bounding boxes for multiple aspect ratios simultaneously, without evaluating multiple crop candidates. Prior approaches for image cropping did not enforce the aspect ratio of their outputs. We, therefore, benchmark our method on datasets for two related tasks - FCDB [6] for aesthetic image cropping without regard to aspect ratio, and MIR-Thumb [3] for thumbnail generation in fixed aspect ratios where aesthetics are not crucial. Our model with a WideResNet50-2 backend, modified to generate outputs in any aspect ratio, is competitive with and more efficient than existing approaches on FCDB, achieving an IoU of 0.692. We also achieve state-of-the-art performance on the MIR-Thumb dataset at an IoU of 0.741 with no fine-tuning. This demonstrates that explicitly modeling aesthetics or attention regions is not strictly required for accurate and efficient image cropping. Finally, we include a qualitative evaluation, where we investigate the generalization ability of the model on the FCDB and MIR-Thumb datasets without fine-tuning. We also observe that the model can generate more aesthetic crops on MIR-Thumb than the original ground truth. This finding highlights some challenges in the objective evaluation of image cropping systems, such as the reliance on crowd-sourced workers to gather ground truth and using a single reference for the IoU metric when several equally good crops may exist.

In summary:

* We are the first work to attempt aesthetic image cropping directly and show that explicitly modeling visual attention or image aesthetics is not necessary to build a competitive image cropping algorithm.
* We propose a simple architecture, with no bells and whistles, that is easier to train compared to recent state-of-the-art approaches, which rely on components such as separated network branches for bounding box prediction [3], ROI-aware pooling operations [3, 22, 25], human-defined composition patterns [32], and custom loss functions [22, 32].
* Our proposed single-stage model is efficient and able to output bounding boxes of multiple fixed aspect ratios, without evaluating multiple candidates, which is novel for aesthetics-aware image cropping approaches.

## 2 Related Work

Prior approaches to solve the automatic image cropping problem can be distinguished by how the cropping candidates are initially determined and how they are evaluated to get the final crop. The task of selecting cropping candidates is generally solved by a few different approaches:

* _Sliding-Judging_ - These techniques generate a large number of candidates by moving windows of varying sizes and aspect ratios over the original image, each of which is then evaluated against some criterion such as image aesthetics or attention regions to find the best candidate [31, 43, 28, 11]. These strategies are generally computationally inefficient as the search space spans the entire image [22, 37]. Some authors have developed strategies to mitigate this by exploiting properties such as local redundancy [43] or by eliminating candidates that do not encompass the entire region of interest [40]. Other authors suggest more efficient solutions that evaluate fewer candidates, but without regard to aesthetics [4].
* _Determining-adjusting_ - These methods try to first determine an ROI in the image. They then generate many candidates around that region by adjusting the position, height, or aspect ratio of the bounding boxes, and evaluate each of them to find the best cropping candidate [37, 38].
They are more efficient than sliding-judging approaches because they generate fewer candidates, but they struggle when no ROI is found [24, 22].

* _Finding-generating_ - These methods aim to predict a single crop region by calculating a bounding box that includes the visual attention region in the image. This is then fed into a regression network that predicts the optimal bounding box [24, 22]. These strategies are efficient because they generate a single candidate, instead of generating and evaluating multiple candidates as in determining-adjusting approaches. However, these methods also struggle when no ROI is found [24, 22].

Once the candidates are generated, prior approaches evaluate them in a few different ways:

* _Saliency_ or attention-based methods assume that the best crop will generally contain the most salient regions. The techniques for finding the salient regions range from signal processing [16] to deep learning methods [34, 20, 36]. _Determining-adjusting_ approaches often use these methods to find an ROI [37, 38]. Other saliency-based cropping methods include Ardizzone et al. [1], Ciocca et al. [8], and Sun and Ling [31].
* _Aesthetic evaluation_ methods try to quantify and score images or crop candidates based on their aesthetic qualities. A comprehensive review of these methods is presented by Deng et al. [9]. Aesthetic image cropping algorithms sometimes use features inspired by composition rules such as the rule of thirds and visual balance [17, 32, 40]. Datasets such as AVA [27] enable learning aesthetics using deep learning methods [26]. Other aesthetics-based image cropping approaches include Nishiyama et al. [28], Zhang et al. [45], and Chen et al. [7].
* _Fusion_ methods try to combine attention and aesthetic methods in two stages and harness the advantages of both. Some approaches use a determining-adjusting strategy by first predicting the attention region, then generating a small number of candidates around it, and finally selecting the one with the best aesthetic evaluation score [37, 38]. Finding-generating strategies try to regress the bounding box after detecting salient regions in an image [24, 22]. Other fusion approaches include Tu et al. [32], Guo et al. [13], and Li et al. [19].
* _Experience-based_ methods try to predict a bounding box using a dataset of images cropped by humans. Prior methods that follow this strategy design handcrafted features such as sharpness and color distance that are then used for regressing the bounding boxes [40, 41].

Some image cropping methods do not easily fit into this framework: a reinforcement learning framework [19], rank-based evaluation metrics on a densely annotated dataset [43, 44], weakly supervised learning [21], and rank-based learning approaches [25, 6].

We propose an _experience-based direct generation_ strategy, which has not been attempted for aesthetic image cropping to the best of our knowledge. We propose the term _direct generation_ to represent methods that predict the bounding box directly from an input image, without the overhead of detecting visual attention regions or evaluating multiple cropping candidates. These methods do not suffer from the same drawbacks as _finding-generating_ approaches, such as when an ROI is absent, and are more efficient than _sliding-judging_ and _determining-adjusting_ methods. Our model is trained to directly predict the bounding boxes for different aspect ratios simultaneously, using a shared feature extractor for efficiency.
We build a large internal dataset to train our experience-based approach with no handcrafted features, overcoming the limitations that restricted other methods [24, 22, 13, 39]. There are some use cases where efficiency is not as important and evaluating multiple candidates may be desired, for example when presenting multiple candidates to a user and allowing them to pick the best candidate based on their preferences. However, in this work, we continue a trend in prior work that focuses on applications that benefit from reducing the number of evaluated candidates for efficiency reasons.

Image cropping is related to thumbnail generation, which aims to create smaller representative versions of the original images by preserving the most useful content from the original image and discarding the background. In contrast, image cropping approaches try to create new images, balancing aesthetic quality while including visually salient regions. Our approach is similar to some recent thumbnail generation approaches such as Fast-AT [10] and CropNet [3] in that they predict an output without generating multiple candidates. CropNet similarly uses a shared feature extractor and dedicated branches to predict multiple bounding boxes of fixed aspect ratios, but follows a different strategy for predicting the boxes themselves. CropNet is trained on MIR-Thumb, a smaller crowd-sourced dataset annotated by non-experienced workers, in contrast with our larger dataset annotated by trained experts who also pay attention to image aesthetics. In Section 4, we benchmark our approach on the MIR-Thumb test set and achieve state-of-the-art performance with no fine-tuning. We also demonstrate that our algorithm can produce more aesthetically pleasing images and display some examples.

## 3 Our Approach

### 3.1 Dataset

Prior approaches [24, 22, 13, 39] often cite the lack of large datasets for effective image cropping and design workarounds to overcome this limitation. We were unable to find existing datasets that support our experience-based direct generation approach to image cropping with strict aspect ratio requirements. We therefore collect an internal dataset of about $51,000$ images for this study, serving as iconic imagery for TV programs and movies. The images usually include the lead characters along with a background that conveys context relevant to the program. Each image was manually cropped in up to 5 aspect ratios (16:9, 4:3, 2:1, 3:4, and 1:1) by a large group of experienced editors who were asked to retain important image content, preserve the aesthetics, and adhere to strict aspect ratio requirements. Unlike some datasets such as FCDB, we did not rate or discard images based on their aesthetics, in an effort to mitigate subjective bias. Some prior datasets suggested bounding boxes for their workers to rank or annotate, citing efficiency reasons [39, 6]. In contrast, we allowed our editors to crop the images directly to avoid bias. Only a single editor was allowed to crop a given image in one aspect ratio, and no ranking or rating information was collected. We present some examples of these images in Section 4.4.

In the resulting dataset, not every image was cropped in every aspect ratio, as can be seen in Figure 1(a). The dataset is also diverse in the aspect ratios of the original images, as illustrated in Figure 1(b). A successful model would have to generalize to input images of many sizes.
We consider other common aspect ratios in addition to those mentioned above for this visualization. We also compute the mean absolute error of each image to the closest aspect ratio in Figure 1(c), similar to the analysis performed by Celona et al. [2].

Figure 1: Aspect Ratio Distributions. (a) Distribution of aspect ratios of cropped images. (b) Distribution of aspect ratios of original images. (c) Mean Absolute Error (MAE) between the aspect ratio of the original image and the closest aspect ratio.

Figure 2: Proposed Model Framework. (a) Model Architecture, with a shared feature extractor feeding one regression head per aspect ratio. (b) Aspect Ratio Enforced Regression Head, where the transform generates the bounding box in the correct aspect ratio.

### 3.2 Pre-processing and Augmentation

We resize our images to $(224,224)$ and pad with zeros where necessary, to retain the aspect ratio of the original image. We store the bounding boxes annotated by the editors as normalized coordinates of the top-left and bottom-right corners. We then augment the image by randomly applying horizontal flips or color transformations such as changing the brightness or saturation or converting to grayscale. We do not apply any spatial transformation such as rotation or vertical flipping because these affect the composition of the image [43].

### 3.3 Model

Figure 3: (a) Non-enforced bounding box prediction. (b) Enforced bounding box prediction where the width $\hat{w}$ is inferred from the aspect ratio $\alpha$. (c) Enforced bounding box prediction where the height $\hat{h}$ is inferred from the aspect ratio $\alpha$.

Our proposed model can be conceptually divided into two modules - a shared CNN-based feature extractor as the backbone, and multiple parallel regression heads, one for each aspect ratio. We illustrate this in Figure 2(a). This design allows us to add predictor heads for new aspect ratios without having to retrain the rest of the network from scratch or significantly increasing the inference time. As illustrated in Section 4.3, we are also able to generate crops for unseen aspect ratios without retraining by leveraging the predictions from similar aspect ratios, which is helpful when training data is scarce.

#### 3.3.1 Feature Extractor

The feature extractor is designed to output a fixed-length feature vector for each input image, which is subsequently fed to the regression heads. We use a shared feature extractor because the regression head for each aspect ratio needs similar information to make a prediction, such as the important regions and their locations in the image. We try a few common CNN architectures for the backbone, including VGG [30], ResNet [14], DenseNet [15], WideResNet [42], and MobileNet-v2 [29]. These architectures have been used for many computer vision tasks, including some previous solutions to automatic image cropping [25, 32]. Another advantage of using common architectures is the wide availability of network weights pre-trained on image classification and related tasks. We study the effect of transfer learning using these pre-trained networks in Section 4.1.
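To make the overall design in Figure 2 concrete, the following PyTorch sketch wires a shared feature extractor to one aspect-ratio-enforced regression head per target ratio. It is illustrative only: the layer sizes, the torchvision backbone call, and the simple corner clamping are assumptions rather than the exact configuration used in our experiments.

```python
import torch
import torch.nn as nn
import torchvision

class EnforcedHead(nn.Module):
    """Predicts (cx, cy, s) in [0, 1]; the remaining side is derived from the aspect ratio."""
    def __init__(self, in_dim: int, aspect_ratio: float):
        super().__init__()
        self.aspect_ratio = aspect_ratio  # width / height of the target crop
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, 512), nn.LeakyReLU(),
            nn.Linear(512, 128), nn.LeakyReLU(),
            nn.Linear(128, 3), nn.Sigmoid(),
        )

    def forward(self, feats):
        cx, cy, s = self.mlp(feats).unbind(dim=-1)
        if self.aspect_ratio >= 1.0:      # landscape/square: predict width, derive height
            w, h = s, s / self.aspect_ratio
        else:                             # portrait: predict height, derive width
            h, w = s, s * self.aspect_ratio
        # transform (cx, cy, w, h) -> corner coordinates, clamped to the image bounds
        return torch.stack([(cx - w / 2).clamp(0, 1), (cy - h / 2).clamp(0, 1),
                            (cx + w / 2).clamp(0, 1), (cy + h / 2).clamp(0, 1)], dim=-1)

class DirectCropper(nn.Module):
    """Shared CNN backbone with one regression head per fixed aspect ratio."""
    def __init__(self, aspect_ratios=(16 / 9, 4 / 3, 2 / 1, 3 / 4, 1 / 1)):
        super().__init__()
        backbone = torchvision.models.wide_resnet50_2(weights=None)  # load ImageNet weights here for transfer learning
        self.features = nn.Sequential(*list(backbone.children())[:-1])  # drop the classification layer
        self.heads = nn.ModuleList(EnforcedHead(2048, ar) for ar in aspect_ratios)

    def forward(self, images):                    # images: (B, 3, 224, 224)
        feats = self.features(images).flatten(1)  # (B, 2048) shared feature vector
        return [head(feats) for head in self.heads]  # one (B, 4) box tensor per aspect ratio

boxes = DirectCropper()(torch.randn(2, 3, 224, 224))
```

Note that the clamping above is a simplification of the run-time clipping rule described in the next subsection and can slightly distort the aspect ratio at the image borders.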
#### 3.3.2 Regression Head

As shown in Figure 2(b), each regression head is a densely connected neural network, with Leaky ReLU as the activation function for the intermediate layers, and sigmoid activation at the output. Each regression head is dedicated to predicting a bounding box of a single aspect ratio, often represented as the coordinates of the top-left $(x_{tl},y_{tl})$ and the bottom-right $(x_{br},y_{br})$ corners. However, predicting the bounding box using this representation does not guarantee that the output would correspond to the desired fixed aspect ratio. We use an alternate regression head to predict crops with a fixed aspect ratio, which we call an aspect ratio enforced regression head. For a landscape or square aspect ratio, we predict the coordinates of the center $(x_{c},y_{c})$ and the width $w$, using the aspect ratio $\alpha$ to derive the height. For a portrait aspect ratio, we predict the center coordinates $(x_{c},y_{c})$ and the height $h$, using $\alpha$ to derive the width. We illustrate this in Figure 3. Since the aspect ratio $\alpha$ is fixed for a given regression head, we can draw a bounding box by calculating the remaining dimension, represented by the transform operation in Figure 2(b). At run time, we clip the predicted bounding box to the largest possible bounding box for the given image and the predicted center coordinates $(x_{c},y_{c})$ to avoid invalid output. We use the Smooth L1 loss between the annotated and the predicted bounding box coordinates, similar to Fast R-CNN [12].

Our experiments below were performed with models with a single enforced regression head per aspect ratio. The architecture could be extended to include multiple regression heads per aspect ratio if desired. This could be useful in cases where, for example, a close-up version and a zoomed-out version for each aspect ratio are needed.

## 4 Experiments and Ablation Study

We perform a 60/20/20 split on our dataset to create training, validation, and test sets. We use the ADAM [18] optimizer with a default learning rate of $0.0001$ and a batch size of 128, and use early stopping with the validation set. We use the Boundary Displacement Error (BDE) and the Intersection over Union (IoU) to evaluate cropping, in line with previous approaches [22, 24, 32, 13, 2]. As we did not have editors rank or rate different crops, we cannot compute ranking metrics or leverage ranked learning approaches. All metrics in the tables in this section are averaged across all the aspect ratios in our test set.

Model | Pre-Training | Train Set | Enforced | IoU $\uparrow$ | BDE $\downarrow$
---|---|---|---|---|---
Baseline-0.8 | - | - | - | 0.645 | 0.076
Baseline-0.9 | - | - | - | 0.697 | 0.061
Baseline-1.0 | - | - | - | 0.728 | 0.053
GAIC [43] | ImageNet | GAIC [43] | True* | 0.723 | 0.058
Ours (WideResNet-50-2) | ImageNet | Ours | True | 0.867 | 0.023
Ours (WideResNet-50-2) | ImageNet | Ours | False | 0.855 | 0.025
Ours (WideResNet-50-2) | None | Ours | True | 0.832 | 0.030

Table 1: Model evaluation on our dataset.

### 4.1 Evaluation on our Dataset

We compare our models with a baseline method that predicts bounding boxes of varying sizes in the correct aspect ratio around the center of the image [2]. We denote this family of methods _Baseline-$s$_, where $s$ represents the scaling factor of the bounding box as a fraction of the largest possible bounding box for that aspect ratio. We also compare our method with GAIC [43], a recent method capable of cropping images in fixed aspect ratios.
GAIC achieves this by selecting the aspect ratio of the generated candidates, in contrast to our enforced predictor head method, which is more efficient. We use the trained models and code released by the authors, but are unable to fine-tune GAIC on our training set, as GAIC relies on densely annotated images which are not available in our dataset. Our consolidated results, evaluated on our dataset, can be seen in Table 1.

We also study the influence of pre-training on the ImageNet dataset for the feature extractor component of the model. We find that pre-training offers significant performance improvements (Table 1), likely because both tasks require the model to learn the position and the type of objects in an image. We use transfer learning for all subsequent experiments.

#### 4.1.1 Enforced Aspect Ratio Prediction

We test our method of enforcing the aspect ratio of the bounding box, and report the results with non-enforced and enforced predictions in Table 1. The aspect ratio enforced prediction method improves model performance while also satisfying the exact aspect ratio requirement.

#### 4.1.2 CNN Backbone Architecture

We experiment with various common CNN architectures pre-trained on ImageNet for the feature extractor. We use enforced aspect ratio regression heads, keep all other hyper-parameters, such as the learning rate, constant, and present the metrics in Table 2. We find that the WideResNet-50-2 architecture performs the best on our test set overall. We also find that MobileNet-v2 performs very well, considering its smaller size in terms of the number of trainable parameters.

Model | Size | IoU $\uparrow$ | BDE $\downarrow$
---|---|---|---
VGG16 | 138.3M | 0.854 | 0.025
WideResNet-50-2 | 68.8M | 0.867 | 0.023
ResNet-50 | 25.5M | 0.861 | 0.025
ResNeXt-50 | 25.0M | 0.846 | 0.027
Densenet-121 | 7.9M | 0.860 | 0.025
MobileNet-v2 | 3.5M | 0.854 | 0.025

Table 2: CNN Backbone Architecture Comparison, pre-trained on ImageNet and fine-tuned on our dataset.

### 4.2 Evaluation on FCDB

The datasets most commonly used to evaluate automatic image cropping methods, such as FCDB [6], do not impose any requirements on aspect ratios. Since our model is designed to predict fixed aspect ratios, this makes an accurate benchmark difficult. Nevertheless, we modify our model for this experiment to produce bounding boxes in any aspect ratio. Specifically, we remove the aspect ratio enforced regression heads and attach a single non-enforced regression head to the trained feature extractor. We further split the FCDB training set 80/20 into a training and validation split, and then fine-tune our modified model on the resulting training split using a batch size of 128 and an ADAM optimizer with a learning rate of $1\times 10^{-5}$ for 300 epochs. We use early stopping on the validation split, similar to the previous experiments.

We present metrics on the FCDB test set in Table 3. We report the metrics of VFN [7] and VPN [39] as in Lu et al. [25], without including the ground truth window as a candidate view for VFN, and without the post-processing step in VPN. The results demonstrate that our approach is competitive with other models that explicitly model image aesthetics or visual attention regions, without evaluating multiple crop candidates. Our model achieves a higher IoU score than the end-to-end model by Lu et al. [22], with a more straightforward training approach that does not require the identification of visual attention regions.
LVRN [25] achieves a slightly higher IoU score, but evaluates an average of 1,745 candidates per image, which is inefficient. The ASM-Net [32] achieves a higher IoU score, but uses an inefficient two-stage searching step and derives composition patterns from human-defined composition rules that may not generalize. Out of these, VFN [7], LVRN [25], Wang et al. [38], and Lu et al. [22] perform fine-tuning on the FCDB training set, while the authors of the other approaches only use the FCDB test set for evaluation purposes. We are not able to find a significant influence of fine-tuning on the metrics, likely because of the relatively small size of FCDB (1395 train and 348 test images) and the wide differences between individual approaches. Our model's performance is comparable to or better than the subset of models that fine-tune on FCDB, and is significantly more efficient.

We also study the impact of transfer learning on our dataset, by initializing the feature extractor using the weights from ImageNet and training the model as described above. The resulting model, labeled "Ours-ImageNet Only", achieves an IoU of $0.679$, slightly worse than the model initialized with the weights learned on our dataset, but better than Lu et al. [22], VFN [7], and VPN [39]. This implies that the proposed architecture is a more significant contributor to the performance on FCDB, compared to pre-training on our dataset.

Prior approaches measured their efficiency using the time to crop a single image as a metric, which is dependent on hardware, input image size, and implementation details, making a fair comparison difficult. Nevertheless, we present the number of crops per second and the hardware self-reported by the authors in Table 3 as a measure of efficiency. Amongst the approaches compared, sliding-judging methods such as VFN [7] have the lowest efficiency. More recent approaches from Lu et al. [22, 23] report an overall processing speed of 50 fps for their image cropping solution on an Nvidia 2080Ti GPU. The weakly supervised approach by Lu et al. [21] is able to crop images at 285 fps. In contrast, our model with a WideResNet-50-2 backbone can crop 606 input frames per second on a single Nvidia Tesla V100 GPU in 5 different aspect ratios simultaneously, resulting in over 3000 output crops per second. This time is calculated for inference only without any optimizations, and does not include the time to load and pre-process the images, or to save the final cropped images, in order to be consistent and enable comparisons with prior work [3].

Model | Fine Tuning | Avg. Candidates | IoU $\uparrow$ | BDE $\downarrow$ | FPS $\uparrow$ | GPU Hardware
---|---|---|---|---|---|---
VFN [7] | Yes | 137 | 0.632 | 0.098 | 0.78 [21] | N/A
A2-RL [19] | No | 13.56 | 0.663 | 0.089 | 4.08 | Nvidia Titan X
VPN [39] | No | 895 | 0.664 | 0.085 | 75 | N/A
Wang et al.* [38] | Yes | 1296 | 0.65 | 0.08 | - | -
LVRN [25] | Yes | 1745 | 0.7100 | 0.0735 | 125 | Nvidia 1080
ASM-Net* [32] | No | N.A. | 0.748 | 0.068 | - | -
Lu et al.* [22] | Yes | 1 | 0.673 | 0.058 | 50 | Nvidia 2080 Ti
Lu et al.* [23] | No | 1 | 0.673 | 0.058 | 50 | Nvidia 2080 Ti
Lu et al.* [21] | No | 1 | 0.681 | 0.084 | 285 | Nvidia 1080 Ti
Ours-ImageNet Only | Yes | 1 | 0.679 | 0.067 | 606 | Nvidia Tesla V100
Ours | Yes | 1 | 0.692 | 0.064 | 606 | Nvidia Tesla V100

Table 3: Evaluation on FCDB, where * highlights models that explicitly model aesthetics and/or attention regions. FPS refers to the number of input frames per second.
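For reference, the two evaluation metrics used in these tables can be computed as in the sketch below. Boxes are assumed to be normalized $(x_{tl}, y_{tl}, x_{br}, y_{br})$ tuples, and the BDE variant shown (mean absolute displacement of the four edges) is one common formulation; the cited works may differ in details, so treat this as illustrative.

```python
def iou(box_a, box_b):
    """Intersection over Union for boxes given as (x_tl, y_tl, x_br, y_br) in [0, 1]."""
    ix_tl, iy_tl = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix_br, iy_br = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix_br - ix_tl) * max(0.0, iy_br - iy_tl)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def bde(box_a, box_b):
    """Boundary Displacement Error: mean absolute displacement of the four edges."""
    return sum(abs(a - b) for a, b in zip(box_a, box_b)) / 4.0

pred, gt = (0.05, 0.10, 0.95, 0.90), (0.00, 0.10, 1.00, 0.90)
print(iou(pred, gt), bde(pred, gt))  # 0.9, 0.025
```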
### 4.3 Evaluation on MIR-Thumb

Even though the goals of image cropping methods differ from those of thumbnail generation, datasets like MIR-Thumb used by CropNet [3] are similar to ours, in that they annotate the same image with bounding boxes of different aspect ratios. We, therefore, evaluate our model on the MIR-Thumb test set to test its generalization ability. We also include our baseline methods from Section 4.1.2 for comparison. MIR-Thumb includes some aspect ratios that were not present in our dataset, namely 21:9, 9:16, and 9:21. We synthesize the model predictions for these aspect ratios by adjusting the bounding boxes of the closest aspect ratio in our model. To generate the target aspect ratio of 21:9, we reduce the height uniformly around the center of the 2:1 prediction in our model and keep the width constant. The closest aspect ratio to 9:16 and 9:21 was 3:4, where we keep the height constant and shrink the width. The results are shown in Table 4, where our model achieves state-of-the-art performance with no fine-tuning. We achieve a significantly higher IoU of 0.770 when we only consider aspect ratios that were in our training set, namely 16:9, 1:1, 4:3, and 3:4. These results indicate that our learned model generalizes well to other similar datasets and tasks.

Model | Train Set | Test Set | IoU $\uparrow$
---|---|---|---
Baseline-0.8 | None | MIR-Thumb | 0.488
Baseline-0.9 | None | MIR-Thumb | 0.505
Baseline-1.0 | None | MIR-Thumb | 0.506
CropNet | FAT-Clean [3] | MIR-Thumb | 0.672
CropNet | MIR-Thumb | MIR-Thumb | 0.711
Ours | Ours | MIR-Thumb | 0.741

Table 4: Evaluation on MIR-Thumb.

### 4.4 Qualitative Assessment

The perception of image aesthetics is inherently subjective. We, therefore, provide visual examples of some results of our algorithm on images from MIR-Thumb, FCDB, and our dataset. We first illustrate a few cases where our model, with no fine-tuning, produced crops with better aesthetics than even the annotated images in the MIR-Thumb test set, as seen in Figure 4. Most of these cases involve partially cropped human subjects, which are rare in our training dataset but appear more frequently in the MIR-Thumb dataset, resulting in our model predictions getting a low IoU score, even though the predicted crops have arguably better aesthetics. This finding reveals some open challenges in the objective evaluation of image cropping systems, such as using the IoU as a metric, the reliance on a single reference annotation, and the use of inexperienced crowd-sourced workers. Future research in these areas is critical in order to build reliable and robust image cropping systems.

Additionally, we find that our model predictions appear to retain some composition aspects from the original image without explicitly modeling aesthetics during training. To illustrate this, we draw a rule-of-thirds grid over some images from MIR-Thumb and the resulting predictions in Figure 5. This behavior is consistent even when the aspect ratios of the source images and the target crop are quite different. We believe this behavior is a likely result of our dataset, which includes well-composed source images cropped by editorial experts, unlike other datasets for image cropping such as FLMS that exclude well-composed images, assuming that they do not require further cropping [11].

Figure 4: Our model can generate crops with better aesthetics than the original annotations in MIR-Thumb without fine-tuning.

Figure 5: Illustrating the model's ability to retain aesthetic and composition properties (e.g.,
rule of thirds) of the original image, evaluated on images sourced from the MIR-Thumb dataset without fine-tuning.

Furthermore, we include some of the model predictions from our test set in Figure 6. The model can identify the main subject in the image, even if the subject is relatively small, facing away from the camera, or inanimate. The last two rows in Figure 6 are intended to display predictions when we input two images with similar content but different aspect ratios. In both cases, the model can preserve the regions of interest while producing aesthetically similar crops for many of the output aspect ratios. We finally include some examples of the model predictions on FCDB without any fine-tuning in Figure 7, to illustrate the generalization ability of our model on a different dataset. The model is able to perform well on challenging and diverse images such as close-up images of pets, day and night time landscapes, abstract patterns, and inanimate objects. We observe that the model is able to retain important image content even in difficult cases when a large portion of the image has to be excluded, such as choosing a 2:1 crop of a portrait image (as seen in rows 1, 5, and 7 of Figure 7).

Figure 6: Results on images from our test dataset.

Figure 7: Results from our model trained on our dataset and evaluated on images in FCDB with no fine-tuning.

## 5 Conclusions

We proposed a novel _experience-based direct generation_ strategy for image cropping. The model was designed to directly predict bounding boxes for a fixed aspect ratio, without explicitly modeling image aesthetics or visual attention regions. The model was trained on a large dataset of images annotated by experts, who tried to maintain image aesthetics and visual attention regions in the cropped images. We designed an efficient, straightforward architecture with a shared feature extractor and multiple dedicated regression heads to simultaneously predict the bounding box for different aspect ratios. Our model is easier to train than existing multi-stage approaches, and more efficient for inference as it does not evaluate multiple candidates. Due to a lack of public datasets for our task, we benchmarked our model on two related datasets - FCDB for aesthetic image cropping without regard to aspect ratio, and MIR-Thumb for image thumbnail generation in fixed aspect ratios where aesthetics are not crucial. Our model, modified to generate outputs without defined aspect ratios, achieved results comparable to existing approaches, while being more efficient and easier to train. We achieved state-of-the-art results on the MIR-Thumb dataset without fine-tuning. Finally, we displayed some examples where our model generates more aesthetic crops than the ground truth annotations in MIR-Thumb. We also performed a qualitative evaluation and showed that our model is able to generalize across multiple datasets without fine-tuning, and frequently retains aesthetic properties of the source image in the final crops.

## References

* [1] Edoardo Ardizzone, Alessandro Bruno, and Giuseppe Mazzola. Saliency based image cropping. In Alfredo Petrosino, editor, Image Analysis and Processing – ICIAP 2013, pages 773–782, Berlin, Heidelberg, 2013. Springer Berlin Heidelberg.
* [2] Luigi Celona, Gianluigi Ciocca, Paolo Napoletano, and Raimondo Schettini. Autocropping: A closer look at benchmark datasets.
In Elisa Ricci, Samuel Rota Bulò, Cees Snoek, Oswald Lanz, Stefano Messelodi, and Nicu Sebe, editors, Image Analysis and Processing – ICIAP 2019, pages 315–325, Cham, 2019. Springer International Publishing. * [3] Huarong Chen, Bin Wang, Tianxiang Pan, Liwang Zhou, and Hua Zeng. Cropnet: Real-time thumbnailing. In Proceedings of the 26th ACM international conference on Multimedia, pages 81–89, 2018. * [4] J. Chen, G. Bai, S. Liang, and Z. Li. Automatic image cropping: A computational complexity study. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 507–515, 2016. * [5] Qiuyu Chen, Wei Zhang, Ning Zhou, Peng Lei, Yi Xu, Yu Zheng, and Jianping Fan. Adaptive fractional dilated convolution network for image aesthetics assessment. CoRR, abs/2004.03015, 2020. * [6] Yi-Ling Chen, Tzu-Wei Huang, Kai-Han Chang, Yu-Chen Tsai, Hwann-Tzong Chen, and Bing-Yu Chen. Quantitative analysis of automatic image cropping algorithms: A dataset and comparative study. In 2017 IEEE Winter Conference on Applications of Computer Vision (WACV), pages 226–234. IEEE, 2017. * [7] Yi-Ling Chen, Jan Klopp, Min Sun, Shao-Yi Chien, and Kwan-Liu Ma. Learning to compose with professional photographs on the web. In Proceedings of the 25th ACM international conference on Multimedia, pages 37–45, 2017. * [8] Gianluigi Ciocca, Claudio Cusano, Francesca Gasparini, and Raimondo Schettini. Self-adaptive image cropping for small displays. IEEE Transactions on Consumer Electronics, 53(4):1622–1627, 2007\. * [9] Yubin Deng, Chen Change Loy, and Xiaoou Tang. Image aesthetic assessment: An experimental survey. IEEE Signal Processing Magazine, 34(4):80–106, 2017. * [10] Seyed A Esmaeili, Bharat Singh, and Larry S Davis. Fast-at: Fast automatic thumbnail generation using deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4622–4630, 2017. * [11] Chen Fang, Zhe Lin, Radomir Mech, and Xiaohui Shen. Automatic image cropping using visual composition, boundary simplicity and content preservation models. In Proceedings of the 22nd ACM international conference on Multimedia, pages 1105–1108, 2014. * [12] Ross Girshick. Fast r-cnn. In Proceedings of the IEEE international conference on computer vision, pages 1440–1448, 2015. * [13] Guanjun Guo, Hanzi Wang, Chunhua Shen, Yan Yan, and Hong-Yuan Mark Liao. Automatic image cropping for visual aesthetic enhancement using deep neural networks and cascaded regression. IEEE Transactions on Multimedia, 20(8):2073–2085, 2018. * [14] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016. * [15] Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4700–4708, 2017. * [16] Laurent Itti, Christof Koch, and Ernst Niebur. A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on pattern analysis and machine intelligence, 20(11):1254–1259, 1998. * [17] Md Baharul Islam Chen Tet Khuan and Muhammad Ehsan Rana Md Kabirul Islam. Aaics: Aesthetics-driven automatic image cropping and scaling. In The International Conference on Data Mining, Multimedia, Image Processing and their Applications (ICDMMIPA2016), page 8, 2016. * [18] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. 
In Yoshua Bengio and Yann LeCun, editors, 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015. * [19] Debang Li, Huikai Wu, Junge Zhang, and Kaiqi Huang. A2-rl: Aesthetics aware reinforcement learning for image cropping. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8193–8201, 2018. * [20] Nian Liu, Junwei Han, Dingwen Zhang, Shifeng Wen, and Tianming Liu. Predicting eye fixations using convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 362–370, 2015. * [21] Peng Lu, Jiahui Liu, Xujun Peng, and Xiaojie Wang. Weakly supervised real-time image cropping based on aesthetic distributions. In Proceedings of the 28th ACM International Conference on Multimedia, MM ’20, page 120–128, New York, NY, USA, 2020. Association for Computing Machinery. * [22] Peng Lu, Hao Zhang, Xujun Peng, and Xiaofu Jin. An end-to-end neural network for image cropping by learning composition from aesthetic photos. CoRR, abs/1907.01432, 2019. * [23] P. Lu, H. Zhang, X. Peng, and X. Jin. Learning the relation between interested objects and aesthetic region for image cropping. IEEE Transactions on Multimedia, pages 1–1, 2020. * [24] Peng Lu, Hao Zhang, XuJun Peng, and Xiang Peng. Aesthetic guided deep regression network for image cropping. Signal Processing: Image Communication, 77:1–10, 2019. * [25] W. Lu, X. Xing, B. Cai, and X. Xu. Listwise view ranking for image cropping. IEEE Access, 7:91904–91911, 2019. * [26] Long Mai, Hailin Jin, and Feng Liu. Composition-preserving deep photo aesthetics assessment. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 497–506, 2016. * [27] Naila Murray, Luca Marchesotti, and Florent Perronnin. Ava: A large-scale database for aesthetic visual analysis. In 2012 IEEE Conference on Computer Vision and Pattern Recognition, pages 2408–2415. IEEE, 2012. * [28] Masashi Nishiyama, Takahiro Okabe, Yoichi Sato, and Imari Sato. Sensation-based photo cropping. In Proceedings of the 17th ACM international conference on Multimedia, pages 669–672, 2009. * [29] Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, and Liang-Chieh Chen. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4510–4520, 2018. * [30] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. In Yoshua Bengio and Yann LeCun, editors, 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015. * [31] Jin Sun and Haibin Ling. Scale and object aware image thumbnailing. International journal of computer vision, 104(2):135–153, 2013\. * [32] Yi Tu, Li Niu, Weijie Zhao, Dawei Cheng, and Liqing Zhang. Image cropping with composition and saliency aware aesthetic score map. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 12104–12111. AAAI Press, 2020. * [33] Shimon Ullman and Amnon Sha’ashua. Structural saliency: The detection of globally salient structures using a locally connected network. 1988\. 
* [34] Eleonora Vig, Michael Dorr, and David Cox. Large-scale optimization of hierarchical features for saliency prediction in natural images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2798–2805, 2014. * [35] Lijie Wang, Xueting Wang, Toshihiko Yamasaki, and Kiyoharu Aizawa. Aspect-ratio-preserving multi-patch image aesthetics score prediction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 0–0, 2019. * [36] Wenguan Wang, Qiuxia Lai, Huazhu Fu, Jianbing Shen, and Haibin Ling. Salient object detection in the deep learning era: An in-depth survey. CoRR, abs/1904.09146, 2019. * [37] Wenguan Wang and Jianbing Shen. Deep cropping via attention box prediction and aesthetics assessment. In Proceedings of the IEEE International Conference on Computer Vision, pages 2186–2194, 2017. * [38] Wenguan Wang, Jianbing Shen, and Haibin Ling. A deep network solution for attention and aesthetics aware photo cropping. IEEE transactions on pattern analysis and machine intelligence, 41(7):1531–1544, 2018. * [39] Zijun Wei, Jianming Zhang, Xiaohui Shen, Zhe Lin, Radomir Mech, Minh Hoai, and Dimitris Samaras. Good view hunting: Learning photo composition from dense view pairs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5437–5446, 2018. * [40] Jianzhou Yan, Stephen Lin, Sing Bing Kang, and Xiaoou Tang. Learning the change for automatic image cropping. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 971–978, 2013. * [41] Jianzhou Yan, Stephen Lin, Sing Bing Kang, and Xiaoou Tang. Change-based image cropping with exclusion and compositional features. International Journal of Computer Vision, 114(1):74–87, 2015. * [42] Sergey Zagoruyko and Nikos Komodakis. Wide residual networks. CoRR, abs/1605.07146, 2016. * [43] Hui Zeng, Lida Li, Zisheng Cao, and Lei Zhang. Grid anchor based image cropping: A new benchmark and an efficient model, 2019. * [44] H. Zeng, L. Li, Z. Cao, and L. Zhang. Reliable and efficient image cropping: A grid anchor based approach. In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 5942–5950, 2019. * [45] Luming Zhang, Mingli Song, Qi Zhao, Xiao Liu, Jiajun Bu, and Chun Chen. Probabilistic graphlet transfer for photo cropping. IEEE Transactions on Image Processing, 22(2):802–815, 2012.
Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CLEF 2023: Conference and Labs of the Evaluation Forum, September 18–21, 2023, Thessaloniki, Greece.

# Watch out Venomous Snake Species: A Solution to SnakeCLEF2023

Feiran Hu, Peng Wang, Yangyang Li, Chenlong Duan, Zijian Zhu, Fei Wang, Faen Zhang, Yong Li, Xiu-Shen Wei

School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing, China; Qingdao AInnovation Technology Group Co., Ltd. (2023)

###### Abstract

The SnakeCLEF2023 competition aims at the development of advanced algorithms for snake species identification through the analysis of images and accompanying metadata. This paper presents a method leveraging both images and metadata. Modern CNN models and strong data augmentation are utilized to learn better representations of images. To relieve the challenge of the long-tailed distribution, the seesaw loss [1] is utilized in our method. We also design a light model to calculate prior probabilities using metadata features extracted from CLIP [2] in the post-processing stage. In addition, we attach more importance to venomous species by assigning venomous species labels to some examples that the model is uncertain about. Our method achieves a score of 91.31% on the final metric, which combines F1 with other measures, on the private leaderboard, placing 1st among the participants. The code is available at https://github.com/xiaoxsparraw/CLEF2023.

###### keywords: Snake species identification, fine-grained image recognition, long-tailed distribution, metadata, SnakeCLEF

## 1 Introduction

Fine-grained visual categorization is a well-established and pivotal challenge within the fields of computer vision and pattern recognition, serving as the cornerstone for a diverse array of real-world applications [3]. The SnakeCLEF2023 competition, co-hosted as an integral part of the LifeCLEF2023 lab within the CLEF2023 conference and the FGVC10 workshop in conjunction with the CVPR2023 conference, is geared towards advancing the development of a robust algorithm for snake species identification from images and metadata. This objective holds profound significance in the realm of biodiversity conservation and constitutes a crucial facet of human health preservation.

In this paper, we introduce a method that addresses the recognition of snake species by leveraging both metadata and images. ConvNeXt-v2 [4] and CLIP [2] are used to extract image features and metadata features separately, and the image features and text features are concatenated to form the input of an MLP classifier, thus obtaining better representations of the examples and better recognition results. The seesaw loss [1] is utilized in our method, thereby alleviating the long-tailed distribution problem. Notably, our proposed method takes into careful consideration the critical real-world need to distinguish venomous and harmless snake species by using the Real-World Weighted Cross-Entropy (RWWCE) loss [5] and post-processing, resulting in exemplary performance surpassing that of other solutions presented in this year's competition. Experiments and competition results show that our method is effective for the snake species recognition task.

The subsequent sections of this paper provide a comprehensive overview of the key aspects.
Section 2 introduces the competition challenges and datasets, accompanied by an examination of the evaluation metric utilized. Section 3 describes our proposed methodologies, offering a comprehensive and detailed introduction to the techniques. Section 4 presents the implementation details, alongside a comprehensive analysis of the principal outcomes achieved. Finally, Section 5 concludes this paper by summarizing the key findings and offering future research directions.

## 2 Competition Description

Understanding datasets and metrics is an essential requirement for engaging in a machine learning competition. Within this section, we aim to introduce our comprehension of the datasets and provide an overview of the evaluation metrics employed by the competition organizers.

### 2.1 Challenges of the Competition

Past iterations of this competition have witnessed remarkable accomplishments by machine learning models [6, 7, 8, 9, 10, 11]. To further enhance the competition's practical relevance and address the exigencies faced by developers, scientists, users, and communities, such as addressing post-snakebite incidents, the organizers have imposed more stringent constraints. The ensuing challenges of this year's competition can be summarized as follows:

* Fine-grained image recognition: The domain of fine-grained image analysis has long posed a challenging problem within the FGVC workshop, deserving further investigation and study.
* Utilization of metadata: The incorporation of metadata, particularly pertaining to the geographical distribution of snake species, plays a vital role in their classification. Such metadata is commonly employed by individuals to identify snakes in their daily lives. Hence, the utilization of location metadata holds significance and needs careful consideration.
* Long-tailed distribution: Long-tailed distributions are common in real-world scenarios, and the distribution of snake species is no exception.
* Identification of venomous and harmless species: The distinction between venomous and harmless snake species is meaningful, as venomous snake bites lead to a large number of deaths each year. Consequently, leveraging deep learning methodologies to address this problem is of paramount urgency.
* Model size limitation: A strict limitation has been imposed on the model size, constraining it to a maximum of 1 GB.

### 2.2 Dataset

The organizers provide a dataset consisting of 103,404 recorded snake observations, supplemented by 182,261 high-resolution images. These observations encompass a diverse range of 1,784 distinct snake species and have been documented across 214 geographically varied regions. It is worth noting that the provided dataset has a heavily long-tailed distribution, as shown in Fig. 1. In this distribution, the most frequently encountered species has 1,262 observations comprising 2,079 accompanying images. However, the least frequently encountered species is captured in a mere 3 observations, showing its exceptional rarity within the dataset.

Figure 1: Long-tailed distribution of the SnakeCLEF2023 training dataset. Blue denotes head classes, to which most images in the dataset belong; orange denotes tail classes, which account for most of the classes in the dataset.

### 2.3 Evaluation Metric

In addition to the conventional evaluation metrics of Accuracy (Acc) and Mean F1-Score, this year's competition incorporates a novel evaluation metric, denoted as “public_score_track1” on the leaderboard.
This metric combines the F1-Score with an assessment of the confusion errors related to venomous species. It is calculated as a weighted average, incorporating both the macro F1-score and the weighted accuracy of various types of confusions:

$M=\frac{w_{1}F_{1}+w_{2}\left(100-P_{1}\right)+w_{3}\left(100-P_{2}\right)+w_{4}\left(100-P_{3}\right)+w_{5}\left(100-P_{4}\right)}{\sum_{i=1}^{5}w_{i}}\,,$ (1)

where $w_{1}=1.0,w_{2}=1.0,w_{3}=2.0,w_{4}=5.0,w_{5}=2.0$ are the weights of the individual terms. The metric incorporates several percentages, namely $F_{1}$ representing the macro F1-score, $P_{1}$ denoting the percentage of harmless species misclassified as another harmless species, $P_{2}$ indicating the percentage of harmless species misclassified as a venomous species, $P_{3}$ reflecting the percentage of venomous species misclassified as a harmless species, and $P_{4}$ representing the percentage of venomous species misclassified as another venomous species. This metric is bounded below by 0% and above by 100%. The lower bound is attained when all species are misclassified, including misclassification of harmless species as venomous and vice versa. Conversely, if the F1-score reaches 100%, indicating correct classification of all species, each $P_{i}$ value must be zero, leading to an overall score of 100%.

## 3 Method

In this section, we introduce the methodologies employed to address the task of snake species classification.

### 3.1 Data Preprocessing

Data preprocessing plays a crucial role in machine learning, as it influences not only the final performance but also the feasibility of solving the problem. Upon obtaining the dataset provided by the competition organizers, several issues emerged. For instance, certain images listed in the metadata CSV file were found to be nonexistent within the corresponding image folders. To address this, we generated a new metadata CSV file by eliminating the affected rows from the original file. Additionally, a subset of images within the dataset was found to be corrupted, potentially due to network transmission or other factors. To mitigate this concern, we utilized OpenCV to read the problematic images and subsequently re-wrote them to the file system, thereby solving the corruption issue.

The SnakeCLEF dataset includes valuable metadata pertaining to the observation locations. Leveraging this location information is of great significance, as certain snake species inhabit geographically confined areas. However, the metadata presents the location in the form of country or region codes, which cannot be directly utilized as inputs to a convolutional neural network (CNN) or a Vision Transformer (ViT) [12]. To address this challenge, we employ CLIP [2] to extract location features without engaging in fine-tuning. Subsequently, Principal Component Analysis (PCA) [13] is employed to reduce the dimension of the resulting feature vectors.

Data augmentation serves as a key technique in computer vision tasks. Within our methodology, we leverage fundamental image augmentation methods from Albumentations [14]. These methods encompass RandomResizedCrop, Transpose, HorizontalFlip, VerticalFlip, ShiftScaleRotate, RandomBrightnessContrast, PiecewiseAffine, HueSaturationValue, OpticalDistortion, ElasticTransform, Cutout, and GridDistortion. Furthermore, we incorporate data mixing augmentation techniques, such as Mixup [15], CutMix [16], TokenMix [17], and RandomMix [18], during the course of the competition; a minimal sketch of one such mixing step is given below.
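The following sketch illustrates a CutMix-style mixing step on a batch of tensors. It is written against plain PyTorch with an assumed Beta parameter and without the library wrappers used in our actual pipeline, so it should be read as an illustration rather than the exact implementation.

```python
import torch

def cutmix(images, labels, num_classes, alpha=1.0):
    """CutMix: paste a random rectangle from a shuffled copy of the batch and soften the labels accordingly."""
    b, _, h, w = images.shape
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(b)

    # rectangle whose area fraction is roughly (1 - lam)
    cut_h, cut_w = int(h * (1 - lam) ** 0.5), int(w * (1 - lam) ** 0.5)
    cy, cx = torch.randint(h, (1,)).item(), torch.randint(w, (1,)).item()
    y1, y2 = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, h)
    x1, x2 = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, w)

    mixed = images.clone()
    mixed[:, :, y1:y2, x1:x2] = images[perm, :, y1:y2, x1:x2]

    # soften the labels in proportion to the pasted area
    lam_adj = 1 - (y2 - y1) * (x2 - x1) / (h * w)
    one_hot = torch.nn.functional.one_hot(labels, num_classes).float()
    soft_labels = lam_adj * one_hot + (1 - lam_adj) * one_hot[perm]
    return mixed, soft_labels

# toy usage: batch of 8 images, 1,784 snake species
mixed_imgs, soft_targets = cutmix(torch.randn(8, 3, 384, 384), torch.randint(0, 1784, (8,)), num_classes=1784)
```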
These data augmentation methods provide strong regularization to models by softening both the images and the labels, preventing the model from overfitting the training dataset.

### 3.2 Model

Throughout the competition, we explored various models, including both classical and state-of-the-art architectures, such as Convolutional Neural Networks and Vision Transformers. Models employed during the competition include ResNet [19], VOLO [20], ConvNeXt [21], BEiT-v2 [22], EVA [23], and ConvNeXt-v2 [4]. The implementation of these models was facilitated using the timm [24] library. In light of the imposed limitations on model parameters and the consideration of the models' representation capabilities, we selected ConvNeXt-v2 [4] as the backbone architecture in our final method.

However, relying solely on the visual backbone is insufficient for effectively addressing the task at hand. Given the availability of metadata in the competition and the inherent challenges associated with fine-grained image classification, it becomes necessary to modify the architecture of the vision model to achieve superior performance. The architectural design of the model employed in our final submission is illustrated in Fig. 2. Following the completion of the third stage of ConvNeXt-v2 [4], the intermediate-level feature map is combined with the high-level image features after the final stage, along with the metadata features. This concatenation process yields a comprehensive representation that captures both the image and metadata information. To mitigate overfitting, we have incorporated MaxPooling [25], BatchNorm [26], and Dropout [27] techniques into our methodology. Once the comprehensive representation is obtained, a classifier comprising two linear layers and ReLU [28] activation functions follows and generates the classification results.

Figure 2: Architecture of our model. ConvNeXt-v2 [4] serves as the backbone and is made up of 4 stages; the feature vector extracted from metadata ($v_{1}$), the original image feature vector ($v_{2}$), and the feature vector from a middle stage of the backbone ($v_{3}$) are concatenated to obtain the final feature vector $v$, which is fed to an MLP classifier to produce the final classification results.

### 3.3 Optimization Procedure

Addressing long-tailed recognition is another challenge encountered in the competition. To tackle this issue, we extensively explored various techniques implemented in BagofTricks-LT [29]. In our final submission, we incorporated the seesaw loss [1] as a key component. The seesaw loss formulation can be expressed as follows:

$L_{\text{seesaw}}(\mathbf{z})=-\sum_{i=1}^{C}y_{i}\log\left(\widehat{\sigma}_{i}\right)\,,\quad\text{with }\widehat{\sigma}_{i}=\frac{e^{z_{i}}}{\sum_{j\neq i}^{C}\mathcal{S}_{ij}e^{z_{j}}+e^{z_{i}}}\,,$ (2)

where $\mathbf{z}$ denotes the output obtained from the fully connected layer, $C$ represents the total number of classes, and $y_{i}$ corresponds to the one-hot label of the image. The hyper-parameters $\mathcal{S}_{ij}$ are carefully set based on the distribution characteristics inherent in the dataset.

Distinguishing between venomous and non-venomous snake species, and the consequent assignment of varying costs to different classification errors, are of great importance in this year's challenge, as demonstrated by Eq. 1. In accordance with these requirements, we adopt a loss function that explicitly models the real-world costs associated with mislabeling [5].
To align with the objective of modelling real-world misclassification costs, we incorporate the Real-World Weighted Cross-Entropy (RWWCE) loss function [5] during the final three epochs of training, employing a reduced learning rate.

In addition to the choice of loss functions, the selection of an optimizer and of an appropriate learning rate decay strategy is important in the training of our models. For optimization, we adopt the AdamW optimizer [30]. To enhance convergence speed and overall performance, we implement cosine learning rate decay [31] coupled with warmup during the training process. These strategies collectively facilitate more effective and efficient model convergence.

### 3.4 Post-processing

In this year's challenge, the task requires the solution to accurately identify the venomous nature of snakes, with particular focus on distinguishing the venomous species, under the imposed limit on model capacity. This is challenging, but fortunately the organizers provided a metadata repository with a particular focus on geographical information. In practical contexts, where visual cues alone may be insufficient for fine-grained classification, supplementary geographical details play a crucial role in assisting human experts in making judgments. Thus, integrating the geographical information contained in the metadata has the potential to enhance the decision-making prowess of classification models.

Inspired by [32], and denoting the trained model described above as $f$, we developed a simple prior model denoted $g$. This prior model is simple but efficient, composed of three fully connected layers with non-linear activation functions and dropout regularization. To train this lightweight model, we adopt the AdamW [30] optimizer and perform balanced sampling on the training data to mitigate the impact of the long-tailed distribution in the dataset. The objective of this training process was to minimize the following loss function: $\displaystyle\mathcal{L}_{loc}(\mathbf{x},\mathbf{r},\mathbf{O},y)=$ $\displaystyle\lambda\log\left(s\left(g(\mathbf{x})\mathbf{O}_{:,y}\right)\right)+\sum_{\begin{subarray}{c}i=1\\\ i\neq y\end{subarray}}^{C}\log\left(1-s\left(g(\mathbf{x})\mathbf{O}_{:,i}\right)\right)+$ (3) $\displaystyle\sum_{i=1}^{C}\log\left(1-s\left(g(\mathbf{r})\mathbf{O}_{:,i}\right)\right)\,,$ where the metadata features extracted by CLIP are denoted $\mathbf{x}$. $\mathbf{O}$ is the category embedding matrix, whose columns are the prototypes of the different categories, pre-computed by our trained model $f$, e.g., ConvNeXt-v2 [4]. Furthermore, $\mathbf{r}$ signifies a uniformly random location data point, and $\lambda$ serves as a hyper-parameter for weighting positive observations. It is important to note that if a category $y$ has been observed at the spatial location $\mathbf{x}$ within the training set, the value of $s\left(g(\mathbf{x})\mathbf{O}_{:,y}\right)$ should approximate 1. Conversely, if the category has not been observed, the value should approximate 0.

During the inference stage, our prior model efficiently calculates the prior class embeddings, denoted $\mathbf{P}$, and the final class scores are obtained as $\mathbf{S^{\prime}}=Softmax(\mathbf{P})\odot\mathbf{S},$ (4) where $\mathbf{S}$ is the prediction score computed by $f$. In other words, we derive the final class scores $\mathbf{S^{\prime}}$ by computing the joint probability of the predictions from the two models $f$ and $g$.
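A minimal sketch of the prior model $g$ and of the score fusion in Eq. (4) is given below; the layer widths and dropout rate are our assumptions, and $\mathbf{O}$ is taken to be the (embedding dimension $\times C$) matrix of class prototypes precomputed with the trained model $f$.

```python
# Sketch of the location-prior model g and of the fusion S' = softmax(P) * S
# from Eq. (4); layer sizes and dropout rate are illustrative assumptions.
import torch
import torch.nn as nn

class LocationPrior(nn.Module):
    """Three fully connected layers mapping CLIP location features to the prototype space."""
    def __init__(self, in_dim: int, hidden: int = 256, embed_dim: int = 512, p_drop: float = 0.3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, embed_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def fuse_scores(g: LocationPrior, x: torch.Tensor, O: torch.Tensor, S: torch.Tensor) -> torch.Tensor:
    """Combine image scores S (B, C) with the location prior P = g(x) @ O, as in Eq. (4)."""
    P = g(x) @ O                          # (B, C) prior class scores
    return torch.softmax(P, dim=1) * S    # element-wise joint score S'
```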
In real-world scenarios, misclassifying a venomous snake as non-venomous carries significant consequences and is deemed unacceptable. To address this concern, we implement a robust post-processing approach. When the predicted confidence for an image $\mathbf{x}$ is relatively low, we analyze its top-5 predictions. If any of these predictions includes a venomous class, we classify the image as venomous. This post-processing technique represents a well-considered compromise between precision and recall. Notably, this approach enabled us to reach 1st place on the private leaderboard. We firmly believe that this strategy possesses considerable advantages for practical applications.

## 4 Experiments

In this section, we introduce our implementation details and main results.

### 4.1 Experiment Settings

The proposed methodology is developed using the PyTorch framework [33]. All models employed in our approach have been pre-trained on the ImageNet dataset [34] and are readily available within the timm library [24]. Fine-tuning of these models was conducted across 4 Nvidia RTX3090 GPUs. The initial learning rate was set to $2\times 10^{-5}$, and the total number of training epochs was set to 15, with the first epoch dedicated to warm-up, employing a learning rate of $2\times 10^{-7}$. To optimize model training, we utilized the AdamW optimizer [30] in conjunction with a cosine learning rate scheduler [31], setting the weight decay to $2\times 10^{-5}$. During inference on the test dataset, we adopted test time augmentation. Furthermore, considering that an observation may consist of multiple images, we adopted a simple averaging approach to obtain a single prediction for each observation.

### 4.2 Main Results

In this section, we present our primary findings attained throughout the challenge, as illustrated in Tab. 1. The “Metric” column within the table corresponds to the public track1 metric featured on the leaderboard.

Table 1: Main results of SnakeCLEF.

Backbone | Resolution | Metric (%) | Comments
---|---|---|---
ResNet50 [19] | $224\times 224$ | 72.22 | baseline
BEiT-v2-L [22] | $224\times 224$ | 82.59 | stronger backbone
BEiT-L [35] | $384\times 384$ | 88.74 | cutmix
EVA-L [23] | $336\times 336$ | 86.82 | cutmix
Swin-v2-L [36] | $384\times 384$ | 88.19 | cutmix
VOLO [20] | $448\times 448$ | 88.50 | cutmix
ConvNeXt-v2-L [4] | $384\times 384$ | 88.98 | seesawloss + randommix
ConvNeXt-v2-L [4] | $384\times 384$ | 89.47 | seesawloss + cutmix
ConvNeXt-v2-L [4] | $512\times 512$ | 90.86 | seesawloss + cutmix + metadata
ConvNeXt-v2-L [4] | $512\times 512$ | 91.98 | seesawloss + cutmix + metadata + middle-level feature
ConvNeXt-v2-L [4] | $512\times 512$ | 93.65 | seesawloss + cutmix + metadata + middle-level feature + post-processing

As indicated by Tab. 1, model parameters and image resolution hold crucial significance in image recognition tasks, in line with conventional expectations: increases in model parameters and image resolution correspond to improvements in the public leaderboard score. Furthermore, data augmentation is a key factor in enhancing the generalization capacity of models. Notably, CutMix [16] outperforms alternative data mixing augmentation techniques, such as RandomMix [18], based on our experimental observations. Metadata plays a pivotal role in the recognition of snake species, enabling models to acquire enhanced representations of observations and thereby achieve superior classification results.
In our experiments, the utilization of metadata facilitated the acquisition of enriched contextual information, leading to improved model performance. Additionally, the incorporation of the Seesaw loss [1] demonstrated notable efficacy in mitigating the challenges posed by long-tailed distributions, surpassing the conventional CrossEntropy loss. Moreover, the integration of middle-level features proved effective in alleviating the complexities associated with fine-grained image recognition, enabling more precise discrimination between similar snake species. Given that the final evaluation metric takes into account the demands of real- world applications and imposes greater penalties for misclassifying a venomous snake species as harmless compared to misclassifying a harmless species as venomous, we place significant emphasis on post-processing techniques. Specifically, when the model exhibits uncertainty in its predictions for a particular observation, we adopt a cautious approach and classify it as a venomous snake species based on the top-5 predictions. This post-processing strategy has proven highly advantageous, leading to substantial improvements in both the public leaderboard and the private test data performance, as evidenced by Tab. 1. ## 5 Conclusion Fine-grained visual analysis holds great practical significance, particularly in accurately discerning the toxicity of snakes within the domain of snake sub-classification. This paper focuses on addressing the snake classification problem by harnessing the valuable metadata present in the dataset for posterior filtering. Additionally, a robust post-processing technique is employed to facilitate toxicity identification. These approaches have culminated in our noteworthy achievement of securing the first-place position in the challenge, attaining an impressive overall evaluation score of 91.31% on the private leaderboard. ## References * Wang et al. [2021] J. Wang, W. Zhang, Y. Zang, Y. Cao, J. Pang, T. Gong, K. Chen, Z. Liu, C. C. Loy, D. Lin, Seesaw loss for long-tailed instance segmentation, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2021, pp. 9695–9704. * Radford et al. [2021] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark, et al., Learning transferable visual models from natural language supervision, in: International Conference on Machine Learning, PMLR, 2021, pp. 8748–8763. * Wei et al. [2021] X.-S. Wei, Y.-Z. Song, O. Mac Aodha, J. Wu, Y. Peng, J. Tang, J. Yang, S. Belongie, Fine-grained image analysis with deep learning: A survey, IEEE Transactions on Pattern Analysis and Machine Intelligence 44 (2021) 8927–8948. * Woo et al. [2023] S. Woo, S. Debnath, R. Hu, X. Chen, Z. Liu, I. S. Kweon, S. Xie, ConvNeXt V2: Co-designing and scaling convnets with masked autoencoders, arXiv preprint arXiv:2301.00808 (2023). * Ho and Wookey [2019] Y. Ho, S. Wookey, The real-world-weight cross-entropy loss function: Modeling the costs of mislabeling, IEEE Access 8 (2019) 4806–4813. * Picek et al. [2020] L. Picek, I. Bolon, A. M. Durso, R. R. de Castañeda, Overview of the snakeclef 2020: Automatic snake species identification challenge, Working Notes of CLEF (2020). * Picek et al. [2021] L. Picek, A. M. Durso, I. Bolon, R. R. de Castañeda, Overview of snakeclef 2021: Automatic snake species identification with country-level focus, Working Notes of CLEF (2021). * Picek et al. [2022] L. Picek, M. Hrúz, A. M. Durso, I. 
Bolon, Overview of snakeclef 2022: Automated snake species identification on a global scale, Working Notes of CLEF (2022). * Bloch et al. [2020] L. Bloch, A. Boketta, C. Keibel, E. Mense, A. Michailutschenko, O. Pelka, J. Rückert, L. Willemeit, C. M. Friedrich, Combination of image and location information for snake species identification using object detection and efficientnets, Working Notes of CLEF (2020). * Chamidullin et al. [2021] R. Chamidullin, M. Šulc, J. Matas, L. Picek, A deep learning method for visual recognition of snake species, Working Notes of CLEF (2021). * Zou et al. [2022] C. Zou, F. Xu, M. Wang, W. Li, Y. Cheng, Solutions for fine-grained and long-tailed snake species recognition in snakeclef 2022, arXiv preprint arXiv:2207.01216 (2022). * Dosovitskiy et al. [2021] A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, et al., An image is worth 16x16 words: Transformers for image recognition at scale, Proceedings of the International Conference on Learning Representations (2021). * Maćkiewicz and Ratajczak [1993] A. Maćkiewicz, W. Ratajczak, Principal components analysis (PCA), Computers & Geosciences 19 (1993) 303–342. * Buslaev et al. [2020] A. Buslaev, V. I. Iglovikov, E. Khvedchenya, A. Parinov, M. Druzhinin, A. A. Kalinin, Albumentations: Fast and flexible image augmentations, Information 11 (2020) 125. * Zhang et al. [2017] H. Zhang, M. Cisse, Y. N. Dauphin, D. Lopez-Paz, Mixup: Beyond empirical risk minimization, arXiv preprint arXiv:1710.09412 (2017). * Yun et al. [2019] S. Yun, D. Han, S. J. Oh, S. Chun, J. Choe, Y. Yoo, Cutmix: Regularization strategy to train strong classifiers with localizable features, in: Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 6023–6032. * Liu et al. [2022a] J. Liu, B. Liu, H. Zhou, H. Li, Y. Liu, Tokenmix: Rethinking image mixing for data augmentation in vision transformers, in: European Conference on Computer Vision, Springer, 2022a, pp. 455–471. * Liu et al. [2022b] X. Liu, F. Shen, J. Zhao, C. Nie, Randommix: A mixed sample data augmentation method with multiple mixed modes, arXiv preprint arXiv:2205.08728 (2022b). * He et al. [2016] K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770–778. * Yuan et al. [2022] L. Yuan, Q. Hou, Z. Jiang, J. Feng, S. Yan, Volo: Vision outlooker for visual recognition, IEEE Transactions on Pattern Analysis and Machine Intelligence (2022). * Liu et al. [2022] Z. Liu, H. Mao, C.-Y. Wu, C. Feichtenhofer, T. Darrell, S. Xie, A convnet for the 2020s, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2022, pp. 11976–11986. * Peng et al. [2022] Z. Peng, L. Dong, H. Bao, Q. Ye, F. Wei, Beit v2: Masked image modeling with vector-quantized visual tokenizers, arXiv preprint arXiv:2208.06366 (2022). * Fang et al. [2023] Y. Fang, W. Wang, B. Xie, Q. Sun, L. Wu, X. Wang, T. Huang, X. Wang, Y. Cao, Eva: Exploring the limits of masked visual representation learning at scale, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2023, pp. 19358–19369. * Wightman [2019] R. Wightman, Pytorch image models, https://github.com/rwightman/pytorch-image-models, 2019\. * Boureau et al. [2010] Y.-L. Boureau, F. Bach, Y. LeCun, J. 
Ponce, Learning mid-level features for recognition, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2010, pp. 2559–2566. * Ioffe and Szegedy [2015] S. Ioffe, C. Szegedy, Batch normalization: Accelerating deep network training by reducing internal covariate shift, in: International Conference on Machine Learning, PMLR, 2015, pp. 448–456. * Hinton et al. [2012] G. E. Hinton, N. Srivastava, A. Krizhevsky, I. Sutskever, R. R. Salakhutdinov, Improving neural networks by preventing co-adaptation of feature detectors, arXiv preprint arXiv:1207.0580 (2012). * Nair and Hinton [2010] V. Nair, G. E. Hinton, Rectified linear units improve restricted boltzmann machines, in: International Conference on Machine Learning, PMLR, 2010, pp. 807–814. * Zhang et al. [2021] Y. Zhang, X. Wei, B. Zhou, J. Wu, Bag of tricks for long-tailed visual recognition with deep convolutional neural networks, in: Proceedings of AAAI Conference on Artificial Intelligence, 2021, pp. 3447–3455. * Loshchilov and Hutter [2017] I. Loshchilov, F. Hutter, Decoupled weight decay regularization, arXiv preprint arXiv:1711.05101 (2017). * Loshchilov and Hutter [2016] I. Loshchilov, F. Hutter, Sgdr: Stochastic gradient descent with warm restarts, arXiv preprint arXiv:1608.03983 (2016). * Mac Aodha et al. [2019] O. Mac Aodha, E. Cole, P. Perona, Presence-only geographical priors for fine-grained image classification, in: Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019. * Paszke et al. [2019] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Kopf, E. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, S. Chintala, Pytorch: An imperative style, high-performance deep learning library, in: Advances in Neural Information Processing Systems, 2019, pp. 8024–8035. * Deng et al. [2009] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, L. Fei-Fei, Imagenet: A large-scale hierarchical image database, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2009, pp. 248–255. * Bao et al. [2021] H. Bao, L. Dong, S. Piao, F. Wei, Beit: Bert pre-training of image transformers, arXiv preprint arXiv:2106.08254 (2021). * Liu et al. [2022] Z. Liu, H. Hu, Y. Lin, Z. Yao, Z. Xie, Y. Wei, J. Ning, Y. Cao, Z. Zhang, L. Dong, et al., Swin transformer v2: Scaling up capacity and resolution, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2022, pp. 12009–12019.
Perturbation and bifurcation analysis of a gonorrhoea dynamics model with control

Louis Omenyi${}^{1,3,\ast,\href https://orcid.org/0000-0002-8628-0298},$ Aloysius Ezaka${}^{2},$ Henry O. Adagba${}^{2},$ Friday Oyakhire${}^{1},$ Kafayat Elebute${}^{1},$ Akachukwu Offia${}^{1}$ and Monday Ekhator${}^{1}$

1 Department of Mathematics and Statistics, Alex Ekwueme Federal University, Ndufu-Alike, Nigeria
2 Department of Industrial Mathematics and Applied Statistics, Ebonyi State University, Abakaliki, Nigeria
3 Department of Mathematical Sciences, Loughborough University, Leicestershire, United Kingdom
Corresponding author, email<EMAIL_ADDRESS>

###### Abstract

A model for the transmission dynamics of gonorrhoea with control, incorporating passive immunity, is formulated. We show that the introduction of treatment or control parameters leads to a transcritical bifurcation. The backward bifurcation coefficients are calculated, and their numerical perturbation results in different forms of equilibria. The calculated effective reproduction number of the model with control is sufficiently small. This implies asymptotic stability of the solution; thus, the disease can be controlled in a limited time.

_Keywords:_ Gonorrhoea dynamics and control; passive immunity; reproduction number; stability; bifurcation; equilibria

## 1 Introduction

Due to the increasing rate of infertility among the teeming population as a result of sexually transmitted infections, it is necessary to undertake prompt prevention and control activities to tackle the incidence of sexually transmitted diseases [6]. Gonorrhoea is one such sexually transmitted infectious disease, caused by the bacterium Neisseria gonorrhoeae [21]. The infection is characterized by a very short latency period, namely $2-10$ days [11], and the bacterium is commonly found in the columnar epithelium, such as the urethral and endocervical epithelia of the reproductive tract [5]. Gonorrhoea is transmitted to a newborn infant from an infected mother through the birth canal, causing inflammation and eye infections such as conjunctivitis. It is also spread through unprotected sexual intercourse [20]. Studies by Usman and Adam [22] and reports from the Centers for Disease Control show that male gonorrhoea patients have pain in the testicles (known as epididymitis) and painful urination due to scarring inside the urethra, while in female patients the disease may ascend the genital tract and block the fallopian tubes, leading to pelvic inflammatory disease (PID) and infertility; see also [15]. Other complications associated with this epidemic include arthritis, endocarditis, chronic pelvic pain, meningitis and ectopic pregnancy [16]. Gonorrhoea confers temporary immunity on some individuals in the susceptible class, while others are not immune [20]. This immunity, mediated by the immune system, plays an important role in protecting the body against the infection and other foreign substances [3]. That is why an immunocompromised patient has a reduced ability to fight infectious diseases such as gonorrhoea, owing to certain diseases and genetic disorders [18]. Such a patient may be particularly vulnerable to opportunistic infections such as gonorrhoea. Hence, an immune reaction can also be stimulated by drugs, as in drug-induced immune thrombocytopenia [18]. This helps to reduce the waning rate of passive immunity in the immune class [2].
However, if the activity of the immune system is excessive or over-reactive due to a lack of cell-mediated immunity, hypersensitive reactions such as autoimmunity and allergy develop, which may be injurious to the body or may even cause death [25]. Statistically, gonorrhoea infection has spread worldwide, with more than $360$ million new cases recorded globally in adults aged $15-49$ years [3]. In 1999, over $120$ million people in African countries were reported to have contracted the disease, while over $82$ million cases were reported in Nigeria [3]. Research abounds on the modelling and control of this epidemic with various approaches and controls; see e.g. [3, 9, 10, 17, 20, 21] and, most recently, [1, 24, 14] and [4]. The present study continues this discussion by incorporating passive immunity into the model and introducing control measures capable of eliminating the disease in Nigeria. To validate the claim, we employ perturbation and bifurcation analysis of the model variables and parameters and mathematically analyse the stability of the system. This underscores the role of mathematical analysis of models in eliciting the desired results; see e.g. [12] and [13]. Education and enlightenment, condom use, and treatment of patients with ampicillin and azithromycin are the control measures adopted to eradicate the disease.

## 2 Materials and Methods

To formulate the model, at time $t$ we let $Q(t)$ be the passive immune class, $S(t)$ the susceptible compartment, $L(t)$ the latent class, $I(t)$ the infectious class, $T(t)$ the treated class and $R(t)$ the recovered compartment. The parameters of the model are: $\sigma$, the level of recruitment; $\upsilon$, the waning rate of immunity; $\mu$, the rate of natural mortality; $\lambda$, the contact rate between the susceptible and the latent classes; $\eta$, the treatment rate of the latent class; $\gamma$, the induced death rate due to the infection; $\alpha$, the treatment rate of the infected compartment; $\beta$, the infectious rate of the latent class; $\omega$, the recovery rate of the treated class; $\delta$, the rate at which the recovered class becomes susceptible again; $\theta$, the infectious rate from the susceptible class directly to the infectious class; $k_{1}$, the control measure given to the latent class; and $k_{2}$, the control measure given to the infected class. We assume that recruitment into the population is by birth or immigration; that all the parameters of the model are positive; that some proportion of new births are immunized against the infection; that the immunity conferred on the newborn wanes after some time; and that the contact rate $\lambda$ of the disease is due to the movement of the infected population. Consequently, the total population at time $t$ is $N(t)=Q(t)+S(t)+L(t)+I(t)+T(t)+R(t).$ The flow diagram of the model is shown in Figure 1. Figure 1: The Extended Model with Control.
So, the model for the gonorrhoea transmission dynamics is given by the following deterministic systems of non-linear differential equations (2.7): $\displaystyle\left.\begin{array}[]{rcl}\frac{dQ}{dt}&=&f\sigma-\upsilon Q-\mu Q\\\ \frac{dS}{dt}&=&\upsilon Q+(1-f)\sigma-\theta S(1-k_{2})+\delta R-\mu S-\theta SI\\\ \frac{dL}{dt}&=&\theta SI-\beta L-\mu L-\eta(1+k_{1})L\\\ \frac{dI}{dt}&=&\beta L+\theta S(1-k_{2})-((\mu+\gamma)+\alpha(1+k_{2}))I\\\ \frac{dR}{dt}&=&\omega T-\mu R-\delta R\\\ \frac{dT}{dt}&=&\eta(1+k_{1})L+\alpha(1+k_{2})I-\mu T-\omega T.\end{array}\right\\}$ (2.7) We will use that bifurcation theory states that perturbation in the parameter of a model leads to a change in the behaviour of the equilibrium solution, [5]. In the model, we use the center manifold method to assess the direction of bifurcation (i.e, either forward or backward). The method reduces the system to a smaller system which has the same qualitative properties and can be studied in a relatively easier way, [2]. This leads to a result on endemic equilibrium and backward bifurcation for our model. Besides, the theory of epidemiology signifies the phenomenon of backward bifurcation, that is the classical requirement the model’s effective reproduction number $R_{e}<1.$ Although this is necessary, it is no longer sufficient to conclude the effective control or elimination of gonorrhoea in a population, see e.g. [25]. Therefore, in this model we consider the nature of the equilibrium solution near the bifurcation point $R_{e}=1$ in the neighbourhood of the disease-free equilibrium $(E_{0}).$ The disease-free equilibrium is locally asymptotically stable if $R_{e}<1$ and unstable if $R_{e}>1.$ But when $R_{e}=1,$ another equilibrium point bifurcates from the disease-free equilibrium. In this case, the disease would invade the population in the case of backward bifurcation, [6]. ## 3 Results We first observe that setting the right hand side of the system (2.7) to zero gives the disease-Free Equilibrium (DFE) of the model as the equilibria: $(Q^{0},S^{0},L^{0},I^{0},R^{0},T^{0})=\frac{f\sigma}{\mu+\upsilon},\frac{\upsilon f\sigma+(\mu+\upsilon)(1-f)\sigma}{(\mu+\upsilon)(\theta+\mu)}.$ Now suppose $L\neq 0,I\neq 0,R\neq 0~{}~{}\text{and}~{}~{}T\neq 0$ then the model attains endemic equilibrium and solving the endemic equilibria system of the model gives the endemic state to be $\displaystyle Q^{*}$ $\displaystyle=$ $\displaystyle\frac{f\sigma}{\mu+\upsilon};$ $\displaystyle S^{*}$ $\displaystyle=$ $\displaystyle\frac{(\mu+\delta)(\mu+\omega)f\sigma+(\mu+\upsilon)(\mu+\delta)(\mu+\omega)\sigma(1-f)+(\mu+\upsilon)\delta\omega(\alpha+\eta)}{(\mu+\upsilon)(\mu+\delta)(\mu+\omega)};$ $\displaystyle L^{*}$ $\displaystyle=$ $\displaystyle\frac{(\lambda)(\mu+\delta)(\mu+\omega)f\sigma+(\mu+\upsilon)(\mu+\delta)(\mu+\omega)\sigma(1-f)+(\mu+\upsilon)\delta\omega(\alpha+\eta)}{(\mu+\beta+\eta)(\mu+\upsilon)(\mu+\delta)(\mu+\omega)};$ $\displaystyle I^{*}$ $\displaystyle=$ $\displaystyle\frac{(\mu+\delta)(\mu+\omega)f\sigma+(\mu+\upsilon)(\mu+\delta)(\mu+\omega)\sigma(1-f)+(\mu+\upsilon)\delta\omega(\alpha+\eta)(\beta\lambda+(\mu+\beta+\eta)\theta)}{(\mu+\alpha+\gamma)(\mu+\beta+\eta)(\mu+\upsilon)(\mu+\delta)(\mu+\omega)};$ $\displaystyle R^{*}$ $\displaystyle=$ $\displaystyle\frac{\omega(\alpha+\eta)}{(\mu+\delta)(\mu+\omega)};$ $\displaystyle T^{*}$ $\displaystyle=$ $\displaystyle\frac{\alpha+\eta}{\mu+\omega}.$ ###### Lemma 3.1. 
A qualitative change in the behaviour of the equilibria due to perturbation results in bifurcation. ###### Proof. For $\mu_{0},\mu_{1}>0,$ it follows that the model is stable and that at steady state: $\frac{dQ}{dt}=0,~{}~{}\frac{dS}{dt}=0,~{}~{}\frac{dL}{dt}=0,~{}~{}\frac{dI}{dt}=0,~{}~{}\frac{dR}{dt}=0~{}~{}\text{and}~{}~{}\frac{dT}{dt}=0.$ Thus, $\displaystyle\frac{dQ}{dt}$ $\displaystyle=$ $\displaystyle f\sigma-\mu_{2}Q$ $\displaystyle\frac{dS}{dt}$ $\displaystyle=$ $\displaystyle(1-f)\sigma+\upsilon Q+\delta R-\theta S(1-k_{2})-\mu S-\theta IS$ $\displaystyle\frac{dL}{dt}$ $\displaystyle=$ $\displaystyle\theta IS-\mu_{1}L$ (3.1) $\displaystyle\frac{dI}{dt}$ $\displaystyle=$ $\displaystyle\beta L+\theta S(1-k_{2})-\mu_{0}I$ (3.2) $\displaystyle\frac{dR}{dt}$ $\displaystyle=$ $\displaystyle\omega T-\mu_{2}R$ $\displaystyle\frac{dT}{dt}$ $\displaystyle=$ $\displaystyle\eta(1+k_{1})L+\alpha(1+k_{2})I-\mu_{3}T.$ So letting $\displaystyle\mu_{0}$ $\displaystyle=$ $\displaystyle\mu+\alpha+\gamma$ $\displaystyle\mu_{1}$ $\displaystyle=$ $\displaystyle\mu+\beta+\eta$ $\displaystyle\mu_{2}$ $\displaystyle=$ $\displaystyle\mu+\upsilon$ $\displaystyle\mu_{3}$ $\displaystyle=$ $\displaystyle\mu+\delta.$ At steady state, the equilibrium points of (3.1) become $0=\theta IS-\mu_{1}L\Rightarrow L=\frac{\theta IS}{\mu_{1}}\Rightarrow L=(0,\frac{\theta IS}{\mu_{1}}).$ While the equilibrium points of equation (3.2) become $0=\beta L+\theta S(1-k_{2})-\mu_{0}I\Rightarrow I=\frac{\beta L+\theta S(1-k_{2})}{\mu_{0}}\Rightarrow I=(0,\frac{\beta L+\theta S(1-k_{2})}{\mu_{0}}).$ ∎ This result is consistent with those of perturbed systems in [9] and [6]. We have the next result. ###### Proposition 3.2. The disease dynamics is controllable in the population with a sufficient perturbation for sufficiently long time. ###### Proof. As shown above, the introduction of treatment (or control) parameter changes the initial stage of the infection, hence, transcritical bifurcation. Now adding small perturbations to the equilibrium points of the model subject to changes in control or bifurcation parameter, we have $L=0+\varepsilon L=\frac{\theta IS}{\mu_{1}}+\varepsilon L$ and $I=0+\varepsilon I=\frac{\beta L+\theta S(1-k_{2})}{\mu_{0}}+\varepsilon I.$ Similarly, $\frac{dL}{dt}=\theta IS-\mu_{1}L=\theta IS-\mu_{1}(\frac{\theta IS}{\mu_{1}}+\varepsilon L)=-\mu_{1}\varepsilon L.$ Solving this gives $L(t)=Be^{-\mu_{1}\varepsilon t}$ (3.3) where $B$ is an arbitrary constant. Clearly, $|L|\rightarrow 0$ as $|t|\rightarrow\infty.$ Observe that equation (3.3) indicates that there is stability for all $\mu_{1}>0.$ This means that the infection can be controlled in the population. Moreover $\frac{dI}{dt}=\beta L+\theta S(1-k_{2})-\mu_{0}I=\beta L+\theta S(1-k_{2})-\mu_{0}\varepsilon I.$ Solving this gives $I(t)=Ae^{-\mu_{0}\varepsilon t}$ (3.4) for an arbitrary constant $A.$ So, there is linear stability for all $\mu_{0}>0.$ Moreover, $|I|\rightarrow 0~{}~{}\text{as}~{}~{}|t|\rightarrow\infty.$ On the addition of treatment or control parameters, we have the bifurcation shown in graphically in Figure 2. Figure 2: The Transcritical Bifurcation of the gonorrhoea model with passive immunity. ∎ When one considers the basic reproduction number $R_{0},$ which is the expected number of secondary infection produced in a completely susceptible population by a typical or one infected individual [23], other results of this analysis follow. 
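Before computing the reproduction number, it is instructive to examine the behaviour established in Proposition 3.2 numerically. The sketch below is ours and simply encodes the right-hand side of the controlled system (2.7); purely for illustration, it uses the parameter values and initial state listed later in Table 1.

```python
# Numerical sketch (ours, for illustration only) integrating the controlled
# system (2.7); parameters and the initial state follow Table 1.
import numpy as np
from scipy.integrate import solve_ivp

p = dict(sigma=0.4, upsilon=0.4, mu=0.2, delta=0.8, theta=0.5, beta=0.01,
         eta=0.1, gamma=0.01, alpha=0.2, omega=0.7, f=0.91, k1=0.5, k2=0.8)

def rhs(t, y, p):
    Q, S, L, I, R, T = y
    dQ = p['f']*p['sigma'] - p['upsilon']*Q - p['mu']*Q
    dS = (p['upsilon']*Q + (1 - p['f'])*p['sigma'] - p['theta']*S*(1 - p['k2'])
          + p['delta']*R - p['mu']*S - p['theta']*S*I)
    dL = p['theta']*S*I - p['beta']*L - p['mu']*L - p['eta']*(1 + p['k1'])*L
    dI = (p['beta']*L + p['theta']*S*(1 - p['k2'])
          - (p['mu'] + p['gamma'] + p['alpha']*(1 + p['k2']))*I)
    dR = p['omega']*T - p['mu']*R - p['delta']*R
    dT = p['eta']*(1 + p['k1'])*L + p['alpha']*(1 + p['k2'])*I - p['mu']*T - p['omega']*T
    return [dQ, dS, dL, dI, dR, dT]

y0 = [1000, 2000, 1000, 500, 500, 1000]          # (Q, S, L, I, R, T) from Table 1
sol = solve_ivp(rhs, (0, 50), y0, args=(p,), method="LSODA")
# sol.y[2] and sol.y[3] hold the latent and infectious trajectories, which can be
# compared with the qualitative behaviour discussed in Section 4.
```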
The basic reproduction number is an important parameter used to determine how long an infectious disease can last or prevail in a given population. When $R_{0}<1,$ it means that with time the disease will die out of the population thereby giving it a clean health bill [5]. But if $R_{0}>1,$ it is expected that the disease will persist in the population. So for the disease to die out of the population, the associated reproduction number must be less than $1$ [7]. When control measure is given to a model, the reproduction number of the infectious disease becomes effective reproduction number $R_{e},$ [8]. ###### Proposition 3.3. The controls in the model system (2.7) for the gonorrhoea dynamics extinct the pandemics from the population. ###### Proof. For the infectious classes are L, I and T, let $f_{i}=\begin{bmatrix}\theta IS\\\ \theta(1-k_{2})S\\\ 0\end{bmatrix}$ So that $\frac{\partial f_{i}}{\partial x_{j}}E_{0}=F=\begin{pmatrix}0&\theta S&0\\\ 0&0&0\\\ 0&0&0\end{pmatrix}.$ Also $v_{i}=\begin{bmatrix}\beta L+\mu L+\eta(1+k_{1})L\\\ \mu I+\gamma I+\alpha(1+k_{2})I-\beta L-\theta S(1-k_{2})\\\ \mu T+\omega T-\eta(1+k_{1})L-\alpha(1+k_{2})I\end{bmatrix}.$ So that $\frac{\partial v_{i}}{\partial x_{j}}E_{0}=V=\begin{pmatrix}(\beta+\mu+\eta(1+k_{1}))&0&0\\\ -\beta&(\mu+\gamma+\alpha(1+k_{2}))&0\\\ -\eta(1+k_{1})&-\alpha(1+k_{2})&(\mu+\omega)\end{pmatrix}.$ The matrix formed by the co-factors of the determinant is $\small{\begin{pmatrix}(\mu+\gamma+\alpha(1+k_{2}))(\mu+\omega)&-\beta(\mu+\omega)&\alpha\beta(1+k_{2})+\eta(1+k_{1})(\mu+\gamma+\alpha(1+k_{2})\\\ 0&(\beta+\mu+\eta(1+k_{1}))(\mu+\omega)&-\alpha(1+k_{2})(\beta+\mu+\eta(1+k_{1}))\\\ 0&0&(\beta+\mu+\eta(1+k_{1})(\mu+\gamma+\alpha(1+k_{2}))\end{pmatrix}}$ so that $V^{-1}=\begin{pmatrix}\frac{1}{\beta+\mu+\eta(1+k_{1})}&0&0\\\ \frac{\beta}{(\beta+\mu+\eta(1+k_{1}))(\mu+\alpha(1+k_{2})+\gamma)}&\frac{1}{\mu+\gamma+\alpha(1+k_{2})}&0\\\ \frac{\alpha\beta(1+k_{2})+\eta(1+k_{1})}{(\beta+\mu+\eta(1+k_{1})(\mu+\omega)}&\frac{-\alpha(1+k_{2})}{\mu+\gamma+\alpha(1+k_{2})(\mu+\omega)}&\frac{1}{\mu+\omega}\end{pmatrix}.$ Also, $|FV^{-1}-\lambda I|=\begin{vmatrix}\frac{\beta\theta S}{(\beta+\mu+\eta(1+k_{1}))(\mu+\gamma+\alpha(1+k_{2}))}-\lambda&\frac{\theta S(1-k_{2})}{\mu+\gamma+\alpha(1+k_{2})}&0\\\ 0&0-\lambda&0\\\ 0&0&0-\lambda\end{vmatrix}=0.$ Hence, $\lambda^{2}(\frac{(\beta\theta S)}{(\beta+\mu+\eta(1+k_{1}))(\mu+\gamma+\alpha(1+k_{2}))}-\lambda)=0.$ Thus, either $\lambda^{2}=0~{}~{}\text{or}~{}~{}\lambda=\frac{(\beta\theta S)}{(\beta+\mu+\eta(1+k_{1}))(\mu+\gamma+\alpha(1+k_{2}))}.$ Therefore, the effective reproduction number $R_{e}=\frac{(\beta\theta S)}{(\beta+\mu+\eta(1+k_{1}))(\mu+\gamma+\alpha(1+k_{2}))}.$ (3.5) ∎ To illustrate this, let our variables and parameters be as in Table 1: Parameter/Variable | $\beta$ | $\theta$ | $\mu$ | $\eta$ | $\gamma$ | $\alpha$ | $\delta$ | $\upsilon$ | $\omega$ | $\sigma$ ---|---|---|---|---|---|---|---|---|---|--- Value | $0.01$ | $0.5$ | $0.2$ | $0.1$ | $0.01$ | $0.2$ | $0.8$ | $0.4$ | $0.7$ | $0.4$ Parameter/Variable | $d_{1}=k_{1}$ | $d_{2}=k_{2}$ | $f$ | $S$ | $Q$ | $R$ | $T$ | $L$ | $I$ | Value | $0.5$ | $0.8$ | $0.91$ | $2000$ | $1000$ | $500$ | $1000$ | $1000$ | $500$ | Table 1: Parameters/variables and values. then, $R_{e}=\frac{\sigma\beta\theta((\mu+\upsilon)-\mu f)}{\mu(\mu+\alpha+\gamma)(\mu+\beta+\eta)(\mu+\upsilon)}=0.09700176367<1.$ (3.6) We have the following main result. ###### Theorem 3.4. 
The gonorrhoea model undergoes backward bifurcation at $R_{e}=1$ whenever the bifurcation co-efficient $a$ and $b$ are positive. ###### Proof. Now, recall the effective reproduction number $R_{e}$ of the gonorrhoea infection as shown by equation (3.7) $R_{e}=\frac{S\beta\theta}{\mu+\gamma+\alpha(1+k_{2})(\mu+\beta+\eta(1+k_{1})}$ (3.7) Or $R_{e}=\frac{\sigma\beta\theta((\mu+\upsilon)-\mu f)}{\mu(\mu+\alpha+\gamma)(\mu+\beta+\eta)(\mu+\upsilon)}=0.09700176367<1.$ (3.8) Let $\psi=\theta s$ be the parameter by which the bifurcation occurs at $R_{e}=1.$ Equation (3.7) becomes $\displaystyle 1$ $\displaystyle=$ $\displaystyle\frac{\psi\beta}{\mu+\gamma+\alpha(1+k_{2})\mu+\beta+\eta(1+k_{1})}$ $\displaystyle\psi$ $\displaystyle=$ $\displaystyle\frac{\mu+\gamma+\alpha(1+k_{2})}{\beta};~{}~{}\beta\neq 0.$ Let $x_{1}=Q,$ $x_{2}=S,$ $x_{3}=L,$ $x_{4}=I,$ $x_{5}=R,$ and $x_{6}=T.$ Furthermore, by using the vector notation, $X=(x_{1},x_{2},x_{3},x_{4},x_{5},x_{6})^{T}$ The model can be written in the form $\frac{dx}{dt}=(f_{1},f_{2},f_{3},f_{4},f_{5},f_{6})^{T}$ then the model equations (2.7) become $\displaystyle f_{1}$ $\displaystyle=$ $\displaystyle f\sigma-(\mu+\upsilon)x_{1}$ $\displaystyle f_{2}$ $\displaystyle=$ $\displaystyle(1-f)\sigma+\upsilon x_{1}+\delta x_{5}-\psi(1-k_{2})-\psi x_{4}-\mu x_{2}$ $\displaystyle f_{3}$ $\displaystyle=$ $\displaystyle\psi x_{4}-\mu x_{3}-\beta x_{3}-\eta(1+k_{1})x_{3}$ $\displaystyle f_{4}$ $\displaystyle=$ $\displaystyle\beta x_{3}+\psi(1-k_{2})-\mu x_{4}-\alpha(1+k_{2})x_{4}-\gamma x_{4}$ $\displaystyle f_{5}$ $\displaystyle=$ $\displaystyle\eta(1+k_{1})x_{3}+\alpha(1+k_{2})x_{4}-\mu x_{5}\omega x_{5}$ $\displaystyle f_{6}$ $\displaystyle=$ $\displaystyle\omega x_{5}-\mu x_{6}-\delta x_{5}.$ Here $\mu_{2}=\mu+\upsilon,$ $\mu_{3}=\mu+\delta$ and $\mu_{4}=\mu+\omega.$ The Jacobian matrix at DFE is therefore given by $J=\begin{pmatrix}-(\mu+\upsilon)&0&0&0&0&0\\\ \upsilon&-\mu&0&-\psi&0&\delta\\\ 0&0&-(\mu+\beta+\eta(1+k_{1}))&\psi&0&0\\\ 0&0&\beta&-(\mu+\gamma+\alpha(1+k_{2})&0&0\\\ 0&0&\eta(1+k_{1})&\alpha(1+k_{2})&-(\mu+\omega)&0\\\ 0&0&0&0&\omega&-(\mu+\delta)\end{pmatrix}.$ The Jacobian of the linearised system has a simple zero eigenvalues, with all other eigenvalues having negative real parts, hence the center manifold theory can be used to analyse the dynamics of the system around the bifurcation point $\psi,$ [19] and [23]. 
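As a small numerical check (ours, not part of the original argument), the Jacobian $J$ above can be evaluated with the parameter values of Table 1 and with $\psi$ chosen so that $R_{e}$ in (3.7) equals $1,$ i.e. $\psi=(\mu+\gamma+\alpha(1+k_{2}))(\mu+\beta+\eta(1+k_{1}))/\beta$:

```python
# Numerical check (ours) that, at the bifurcation value of psi, the Jacobian J
# at the DFE has a simple zero eigenvalue with all other eigenvalues negative.
import numpy as np

mu, upsilon, delta, omega = 0.2, 0.4, 0.8, 0.7
beta, eta, gamma, alpha = 0.01, 0.1, 0.01, 0.2
k1, k2 = 0.5, 0.8

a3 = mu + beta + eta*(1 + k1)          # coefficient of the L-equation
a4 = mu + gamma + alpha*(1 + k2)       # coefficient of the I-equation
psi = a3*a4/beta                       # value of psi = theta*S at which R_e = 1

# State ordering (Q, S, L, I, R, T), reproducing J as displayed above.
J = np.array([
    [-(mu + upsilon), 0.0, 0.0, 0.0, 0.0, 0.0],
    [upsilon, -mu, 0.0, -psi, 0.0, delta],
    [0.0, 0.0, -a3, psi, 0.0, 0.0],
    [0.0, 0.0, beta, -a4, 0.0, 0.0],
    [0.0, 0.0, eta*(1 + k1), alpha*(1 + k2), -(mu + omega), 0.0],
    [0.0, 0.0, 0.0, 0.0, omega, -(mu + delta)],
])

print(np.round(np.sort(np.linalg.eigvals(J).real), 6))
# One eigenvalue is (numerically) zero and the remaining five are negative,
# which is exactly the situation required for the center manifold argument.
```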
The Jacobian matrix has a right eigenvectors (corresponding to the zero eigenvalues) given by $h=(h_{1},h_{2},h_{3},h_{4},h_{5},h_{6})$ $\displaystyle-(\mu+\upsilon)h_{1}=0\Rightarrow h_{1}=0$ $\displaystyle\upsilon h_{1}-\mu h_{2}-\psi h_{4}+\delta h_{6}=0\Rightarrow h_{2}=\frac{\delta h_{6}-\psi h_{4}}{\mu}$ $\displaystyle h_{3}=\frac{\psi h_{4}}{\mu+\beta+\alpha(1+k_{2})}$ $\displaystyle\beta h_{3}-(\mu+\gamma+\alpha(1+k_{2})h_{4}=0\Rightarrow h_{4}=\frac{\beta h_{3}}{\mu+\gamma+\alpha(1+k_{2})}$ $\displaystyle h_{5}=\frac{\eta(1+k_{1})h_{3}+\alpha(1+k_{2})h_{4}}{\mu+\omega}$ $\displaystyle\omega h_{5}-(\mu+\delta)h_{6}0\Rightarrow h_{6}=\frac{\omega}{\mu+\delta}.$ Similarly, the left eigenvectors (corresponding to the zero eigenvalues) are given by $v=(v_{1},v_{2},v_{3},v_{4},v_{5},v_{6})$ where $\displaystyle-\mu v_{2}=0\Rightarrow v_{2}$ $\displaystyle=$ $\displaystyle 0$ $\displaystyle-(\mu+\upsilon)v_{1}+\upsilon v_{2}=0\Rightarrow v_{1}$ $\displaystyle=$ $\displaystyle 0$ $\displaystyle\delta v_{2}-(\mu+\delta)v_{6}=0\Rightarrow v_{6}$ $\displaystyle=$ $\displaystyle 0$ $\displaystyle-(\mu+\omega)v_{5}=0\Rightarrow v_{5}$ $\displaystyle=$ $\displaystyle 0$ $\displaystyle-(\mu+\beta+\alpha(1+k_{2})v_{3}+\beta v_{4}+\eta(1+k_{1})v_{5}$ $\displaystyle=$ $\displaystyle 0$ $\displaystyle\Rightarrow v_{3}$ $\displaystyle=$ $\displaystyle\frac{\beta v_{4}}{\mu+\beta+\alpha(1+k_{2})}$ $\displaystyle-\psi v_{2}+\psi v_{3}-(\mu+\gamma+\alpha(1+k_{2}))v_{4}+\alpha(1+k_{2}))v_{5}=0\Rightarrow v_{4}$ $\displaystyle=$ $\displaystyle\frac{\psi v_{3}}{\mu+\gamma+\alpha(1+k_{2})}\qquad.$ So that $v\cdot h=1$ in line with [5]. We are now left to consider $f_{k};~{}~{}k=3,4$ since $v_{1}=v_{2}=v_{5}=v_{6}=0.$ The local dynamic of the system is totally govern by the signs of $a$ and $b$ . For instance, if $a=0$, and $b>0$ when $\psi<0$, then, $0$ is locally asymptotically stable and there exist a positive stable equilibrium [17]. Hence, by computing the non-zero partial derivatives of the right-hand function $f_{i},$ $i=1,2,\cdots,6,$ the associated backward bifurcation coefficients $a$ and $b$ are given respectively by $a=\sum_{i=j=k=1}^{n}v_{k}h_{i}h_{j}\frac{\partial^{2}f_{k}}{\partial x_{i}x_{j}}(0,0).$ So, $\displaystyle\frac{\partial^{2}f_{3}}{\partial x_{3}\partial x_{4}}=0~{}~{}\text{and}~{}~{}\frac{\partial^{2}f_{4}}{\partial x_{3}\partial x_{4}}=0.$ This implies $\displaystyle v_{3}h_{4}\frac{\partial^{2}f_{3}}{\partial x_{3}\partial x_{4}}+v_{4}h_{3}\frac{\partial^{2}f_{4}}{\partial x_{3}\partial x_{4}}=0\Rightarrow a=0.$ and $b=\sum_{i=j=k=1}^{n}v_{k}h_{i}\frac{\partial^{2}f_{k}}{\partial x_{i}x_{\psi}}(0,0)$ with $\displaystyle\frac{\partial^{2}f_{3}}{\partial x_{3}\partial\psi}=1~{}~{}\text{and}~{}~{}\frac{\partial^{2}f_{4}}{\partial x_{4}\partial\psi}=0.$ So, $\displaystyle v_{3}h_{3}\frac{\partial^{2}f_{3}}{\partial x_{3}\partial\psi}+v_{4}h_{4}\frac{\partial^{2}f_{4}}{\partial x_{4}\partial\psi}=1+0=1>0\Rightarrow b>0.$ Since the backward bifurcation co-efficient $b$ is positive, it follows that the gonorrhoea model will undergo backward bifurcation. This means that there is Endemic Equilibrium when $R_{e}>1$, and when $R_{e}=1.$ But from equations of $R_{0}$ and $R_{e}$, they are both less than $1,$ showing that the disease will be controlled in the population in a limited time. ∎ ## 4 Discussion of Results Graphical simulation buttress our results. These are the following: Figure 3: Effect of decreasing waning rate on the susceptible and immune classes, i.e., $\upsilon=0.2$. 
Figure 4: Effect of increasing waning rate on the susceptible and immune classes, i.e., $\upsilon=0.6$. Figure 3 suggests that when the waning rate $\upsilon$ is low (i.e., $\upsilon=0.2$), the passive immune population decreases exponentially with time, while Figure 4 indicates that when the waning rate is high (i.e., $\upsilon=0.6$), the passive immune population decreases faster and vanishes with time. The continuous decay in the population of the immune class (Q) with time is due to the fact that the immunity conferred on the individuals in this class is temporary and hence expires with time. However, the susceptible population increases more slowly towards the turning point at about one year and three months when the waning rate $\upsilon$ is low, and increases faster when the waning rate $\upsilon$ is high, as shown in Figures 3 and 4 respectively. In both cases, the susceptible class later decreases with time due to the interaction among the latent, infected and susceptible classes, coupled with the natural mortality rate $\mu.$ The impact of the contact rate on the susceptible, latent and infected classes is shown in Figure 5. Figure 5: The effect of reducing the contact rate $\lambda=\theta I$. Figure 5 indicates that when the interaction rate is low (i.e., $\theta=0.3$), the latent and infected classes decrease exponentially with time and even vanish in the long run, since there is almost nobody left to contract the disease through contact. It is also shown that when the interaction rate $\theta=0$, the reproduction number of the disease becomes zero. That is, $R_{0}=\frac{(\beta\theta S)}{(\beta+\mu+\eta)(\mu+\gamma+\alpha)}=0.$ Thus, at this point, the contact rate $\lambda$ becomes zero and hence nobody suffers the disease.

## 5 Conclusion

Based on the analysis and results of this work, we observed that the disease would be eradicated from the population since the effective reproduction number is less than $1.$ Again, the addition of treatment or control measures, such as condom use and educational enlightenment, helped to reduce the infection in the population. However, the addition of control parameters led to a transcritical bifurcation. From the graphical illustrations, we concluded that the immune population continues to decay exponentially due to the temporary immunity conferred on the individuals in the immune class. We also concluded that the reproduction number of the infection grows when there is no control measure in the model and decays when a control measure is applied. Finally, we concluded that for the disease to be totally eliminated from the community, the interaction rate $\theta$ with the infective class, which leads to contacts, should be reduced to the barest minimum or to zero.

## References

* [1] Adam II and Sulaiman U (2018). Mathematical Model for the dynamics of Neisseria Gonorrhea disease with Natural immunity and treatment effects. Journal of Mathematics Research 10(2), 2018: 151. https://doi.org/10.5539/jmr.v10n2p151
* [2] Echeng BB and Adagba HO (2021). Global Stability Analysis of the Role of Antiretroviral Therapy (ART) Abuse in HIV/AIDS Treatment Dynamics. Pure and Applied Mathematics Journal, 10(1): 9-31. https://doi.org/10.11648/j.pamj.20211001.12
* [3] Centers for Disease Control (CDC) (2016). Antibiotic-Resistant gonorrhoea: Basic Information. Centers for Disease Control (CDC).
* [4] Didelot X, Kendall M, Xu Y, White PJ and McCarthy N (2021). Genomic epidemiology analysis of infectious disease outbreaks using TransPhylo.
Current protocols, 1(2).https://doi.org/10.1002/cpz1.60 * [5] Garba SM, Safi MA and Gumel AB (2013). Cross immunity backward bifurcation for a model of transmission dynamics of two strains of influenza. Nonlinear analysis B: Real World Applications 1384. * [6] Gregory Faye (2011). An Introduction to bifurcation theory. Neuro Mathematical Computer Laboratory, Sophia Antipolis Paris, France. * [7] Hethcoote HW and York JA (1984). Lecture notes in biomathematics, vol 56: gonorrhoea transmission dynamics and controls. Springer-Verlag, Heidelberg. * [8] Hook EW and Handsfield HH (2008). Sexually transmitted disease 4th edition. Mchraw-Hill Education, New York, 627-45. * [9] Jing F, Qixing H, Yuguo L and Daqing J (2015). Asymptotic behavior of a multigroup SIS epidemic model with stochastic perturbation. Advances in Difference Equations, 2015:1-9. * [10] Mushayabasa S, Tchuenche JM, Bhunu CP and Ngarakana-Gwasira E (2011). Modeling gonorrhoea and HIV co-interaction. Biosystems, 103: 27-37. * [11] Mushayabasa S and Bhunu CP (2011). Modelling the effect of heavy alcohol consumption on the transmission dynamics of gonorrhoea. National University of Science and Technology Zimbabwe. * [12] Omenyi L and Uchenna M (2019). Global analysis on Riemannian manifolds. Australian Journal of Mathematical Analysis and Applications, 16(2):1-17. Online: https://ajmaa.org/searchroot/files/pdf/v16n2/v16i2p11.pdf. * [13] Omenyi L, Omaba M, and Nwaeze E, et al (2021). Analysis of Gegenbauer kernel filtration on the hypersphere. International Journal of Advanced and Applied Sciences, 8(11): 1-9. * [14] Osnes MN, Didelot X, Korne-Elenbaas J, Alfsnes K, Brynildsrud OB, Syversen G, Nilsen J, De Blasio BF, Caugant DA and Eldholm V (2020). Sudden emergence of a Neisseria gonorrhoeae clade with reduced susceptibility to extended-spectrum cephalosporins, Norway. Microbial genomics, 6(12). https://doi.org/10.1099/mgen.0.000480 * [15] Rama Kishore R and Pattabhiramacharyulu NC (2011). A numerical approach for the spread of gonorrhoea in homosexuals. ARPN Journal of Engineering and Applied Sciences, 6(6): 1-8. * [16] Riley S, Fraser C, Donnelly CA, Ghani AC, Abu-Raddad LJ, Hedley AJ, Leung GM, Ho LM, Lam TH, Thach TQ, Chau P, Chan KP, Lo SV, Leung PY, Tsang T, Ho W, Lee KH, Lau EM, Ferguson NM, Anderson RM (2003). Transmission dynamics of the etiological agent of SARS in Hong Kong: impact of public health interventions. Science. 2003 Jun 20;300(5627):1961-1966. https://doi:10.1126/science.1086478.Epub2003May23.PMID:12766206. * [17] Sacrifice NK, et al (2016). A qualitative Analysis of Neisseria gonorrhoea Disease with treatment effect. Applied Mathematics, 6(1), 6-15. * [18] Schiffert Health Center (2011). Patient Information: gonorrhoea question and answers. Virginia Tech Division of Student Affairs. http://www.healthcenter.vt.edu/assets/docs/gonorrhea.pdf * [19] Shaban N and Hawa M (2014). Modeling the impact of vaccination and screening on the dynamics of human papillomavirus infection. International Journal of Mathematical Analysis, 8(9) 441-454. https://dx.doi.org * [20] Ugwu CS (2015). Mathematical model on gonorrhoea transmission. MSc dissertation Submitted to the Department of mathematics, University of Nigeria, Nsukka. * [21] Unemo M (2015). Current and future antimicrobial treatment of gonorrhoea: the rapidly evolving Neisseria gonorrhoeae continues to challenge. BMC Infectious Diseases, 15(364). https://doi.org/10.1186/s12879-015-1029-2 * [22] Usman S and Adam II (2017). 
Modeling the transmission Dynamics of the monkeypox virus Infection with treatment interventions. Journal of Applied Mathematics and Physics, 5, 2335-2353. * [23] Van den Driessche P and Watmough J (2002). Reproduction number and sub-threshold endemic equilibria for compartmental model of disease transmission. Mathematical Biosciences, 180, 29-18. * [24] Whittles LK, White PJ and Didelot X (2020). Assessment of the potential of vaccination to combat antibiotic resistance in gonorrhoea: A modeling analysis to determine preferred product characteristics. Clinical infectious diseases : an official publication of the Infectious Diseases Society of America, 71(8), 1912-1919. https://doi.org/10.1093/cid/ciz1241 * [25] World Health Organization (2006). Prevention and control of sexually transmitted Infections. Draft global strategy, Report by the Secretariat.(Geneva: WHO), http://www.who.int/reproductivehealth/docs/stis * [26] Workowsk KA and Bolan GA (2015). Sexually transmitted diseases treatment guidelines. MMWR. Recommendations and Reports/CDC.64(RR-03): 1-137. PMID 26042815.
# Some contributions to Lagrangian modelling of Power Converters Mosaib Ul Munieeb Shakir Showkat Sofi<EMAIL_ADDRESS>Fazil Bashir Munieeb Ul Hassan Shahkar Ahmad Nahvi ###### Abstract Lagrangian modelling can be used to derive mathematical models for complex power electronic converters. This approach uses scalar quantities (kinetic and potential energy) to derive models, which is simpler than using (vector-based) force balance equations. It employs generalized coordinates, making it easier to deal with complex systems with constraints. This systematic approach results in equations that can be expressed in state-space form, which allows for the simplification of the simulation and design process and the use of many standard software packages for system analysis and simulation. In this work, contributions are made regarding the procedure to be followed for the Lagrangian modelling of power converters and the incorporation of constraints within the Lagrangian framework. Furthermore, for the first time, Lagrangian modelling is extended to non-ideal, high-fidelity descriptions of standard power electronic circuits. ###### keywords: Modelling , Euler-Lagrangian models , energy , power converters , switched systems. [1]organization=Dept. Electrical Engineering, IUST, city=Awantipora, state = J&K, postcode=192122, country=India [2]organization=Dept. Electrical Engineering, NTUST, city=Taipei, postcode=10607, country=Taiwan [3]organization=Dept. Electrical Engineering (ESAT), KU Leuven, city=Leuven, postcode=3000, country=Belgium fn1fn1footnotetext: At the time of the research, all the authors were affiliated to the Department of Electrical Engineering, IUST, J&K, 192122, India. ## 1 Introduction Power electronic converters find increasing applications in devices and systems like computers, cell phones, domestic appliances, cars, aeroplanes, industrial processes, medical applications, communication systems, transportation systems and high-power electrical transmission systems. Modelling and simulation of power electronic converters are required for verification, testing and optimisation of their design. It is required to conceptualise and fabricate power electronic converters in stages [1], starting with an ideal circuit and gradually incorporating complex phenomena like parasitics. Accurately simulating power electronic converters is required in many applications, e.g., ones which require their dynamic characterisation or evaluation of phenomena like electromagnetic interference. This is especially true for power electronic converters that process power in the range of megawatts, wherein the requirement for accurate simulation is more stringent and includes accounting for distributed stray parameters to obtain the energy balance of various transient topological models[2]. Energy and its conversion is a basic, underlying phenomenon within all physical systems. Energy is a scalar quantity, and obtaining important system information from it is easier and more systematic in general, for example, in complex mechanical systems, using Energy-based methods for deriving models is easier than using force balance equations. Modelling of mechanical systems based on energy was introduced by Euler and Lagrange in the 1750s. Analogies between mechanical and electrical systems are ubiquitous [3] and are used to derive equations of motion for mechanical systems using electrical models. 
Representing physical properties in terms of energy and power for power electronic converters helps in obtaining a deeper insight into the workings of the converters. Examples of such methods include models based on Brayton-Moser equations [4], port-Hamiltonian [5] and Euler-Lagrangian (EL) models [6, 7, 8, 9, 10]. One advantage of representing the physical properties of power electronic converters in terms of energy and power is that these properties can be used for controller design. For example, the EL modelling framework causes the dynamical equations to clearly reflect energy storage, which is required in the design of passivity-based controllers [11]. In this paper, we make three distinct contributions. First, we revisit the step-by-step procedure for EL modelling of switching circuits discussed in detail in [7, 9, 12, 13] and point out cases where it is not directly applicable or gives erroneous results. We also suggest modifications to the procedure in order to improve it. Secondly, we discuss in detail the issue of incorporation of constraints in EL modelling. We deliberate on the contention [7] that the choice of canonical coordinates charge and current, and the corresponding Lagrangian mean that Kirchoff current law is not included in the framework and point out the fact that proper labelling of system variables (in this case, in terms of the currents flowing through the dynamic circuit elements) lead to automatic incorporation of constraints in the framework and EL modelling in the unconstrained form. Lastly, we describe for the first time how to derive switched-state space models in the standard form of high- fidelity descriptions of power converters using the EL formulation. The outline of the paper is as follows. In Section-II, the basic formulation of the EL method is reviewed, and its application to switching circuits is revisited. In Section-III, the issue of incorporation of constraints in the EL formulation is discussed, and it is demonstrated that they can be automatically included if proper labelling is done for currents flowing in the circuit. Section IV discusses high-fidelity equivalent circuits of power electronic converters and demonstrates the application of the EL method to these high-fidelity models. ## 2 The EL formulation procedure and application to switching networks The EL equation is a second-order non-linear partial differential equation written in terms of generalised coordinates $z\in\mathbb{R}^{n}$ and given by: $\frac{d}{dt}\Biggl{(}\frac{\partial\mathcal{L}(z,\dot{z})}{\partial\dot{z}}\Biggr{)}-\frac{\partial\mathcal{L}(z,\dot{z})}{\partial z}=-\frac{\partial\mathcal{D}(\dot{z})}{\partial\dot{z}}+\mathcal{F}(z),$ (1) where $\mathcal{L}(z,\dot{z})=\mathcal{T}(z,\dot{z})-\mathcal{V}(z)$ is the Lagrangian which is the difference between kinetic energy $\mathcal{T}(z,\dot{z})$ and potential energy $\mathcal{V}(z)$ of the system, $\mathcal{D}(\dot{z})$ is the Rayleigh dissipation constant and $\mathcal{F}\in\mathbb{R}^{n}$ is the set of generalised forcing functions associated with each generalised coordinate. The equation (1) is generally used for non-conservative systems, for conservative systems, the dissipation term is not present. The application of this method to electrical systems can be done in two ways [14]. In the loop formulation, the state variables are electric charge $q$ and current $\dot{q}$, the Lagrangian is the difference between magnetic co-energy and electrical field energy, the forcing functions being voltages in the circuit loops. 
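Before turning to the nodal formulation, a small symbolic sketch may help to fix ideas about the loop formulation; the series RLC circuit and the symbol names below are our own illustrative choices and are not taken from [7, 14].

```python
# Symbolic sketch (ours) of the loop formulation of (1) for a series RLC
# circuit driven by a voltage source V: T = L*qdot^2/2, V_pot = q^2/(2C),
# D = R*qdot^2/2 and the generalised forcing function is V.
import sympy as sp

t = sp.symbols('t')
L, C, R, V = sp.symbols('L C R V', positive=True)
q = sp.Function('q')
qd = sp.symbols('qd')                        # placeholder for qdot when taking partials

T_energy = sp.Rational(1, 2)*L*qd**2         # magnetic co-energy T(q, qdot)
V_energy = q(t)**2/(2*C)                     # electric field energy V(q)
Lagr = T_energy - V_energy
D = sp.Rational(1, 2)*R*qd**2                # Rayleigh dissipation function

dL_dqd = sp.diff(Lagr, qd).subs(qd, sp.diff(q(t), t))   # dL/dqdot evaluated along q(t)
lhs = sp.diff(dL_dqd, t) - sp.diff(Lagr, q(t))
rhs = -sp.diff(D, qd).subs(qd, sp.diff(q(t), t)) + V
print(sp.Eq(lhs, rhs))   # L*q''(t) + q(t)/C == -R*q'(t) + V
```

The printed relation is the familiar series-loop voltage balance $L\ddot{q}+R\dot{q}+q/C=V$, which is exactly what the loop formulation of (1) is meant to produce.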
In the nodal formulation, the state variables are the magnetic flux $\varphi$ and voltage $\dot{\varphi}$, the Lagrangian is the difference between magnetic energy and electrical co-energy, the forcing function being currents at the nodes. A comprehensive methodology for EL modelling of switching electrical networks (like the ones found in Power electronics) is enumerated in [7]. This methodology uses the loop formulation, hence charge $q$ and current $\dot{q}$ are taken as the dynamic variables, and the following constraint form of the EL equations is written: $\displaystyle\begin{split}&\frac{d}{dt}\Biggl{(}\frac{\partial\mathcal{L}(q,\dot{q})}{\partial\dot{q}}\Biggr{)}-\frac{\partial\mathcal{L}(q,\dot{q})}{\partial q}=\\\ &-\frac{\partial\mathcal{D}(\dot{q},u)}{\partial\dot{q}}+A(q,u)\lambda+\mathcal{F}(q,u);\end{split}$ (2) $\displaystyle A(q,u)^{T}\dot{q}=0.$ (3) Here $\mathcal{D}(\dot{q},u)$, $\mathcal{F}(q,u)$ \- the Rayleigh dissipation function and the generalised forcing function are functions of the switch position $u$, the constraint equations obtained from Kirchoff’s current laws are given by (3) and $\lambda$ is the Lagrange multiplier. A discussion on why the constraint form of the EL equation is required is given in [7], which basically says that the unconstrained EL equations constitute a voltage balance and since Kirchoff’s current law (required for circuits with more than one mesh) is not included in the framework, it has to be incorporated using the constraint form. Before we proceed further, the EL procedure based on loop formulation is outlined in Figure 1 (for details, see, e.g., [7]). Dynamic Variables: Define charge $(q)$ and current $(\dot{q})$ coordinate for each dynamic element (loop formulation) Energy: Calculate magnetic co-energy $\mathcal{T}(q,\dot{q})$ (for inductive elements) and electric field energy $\mathcal{V}(q)$ (for capacitive elements) in terms of $(q,\dot{q})$ \- coordinates Dissipation: Determine Rayleigh dissipation function $\mathcal{D}(\dot{q},u)$ for resistive elements Equations of Motion: Integrate all information into E-L equations to form a state space model Constraints and Interconnections: Formulate constraint equations using Kirchhoff’s laws Forcing Functions: Calculate generalized forcing functions $\mathcal{F}(q,u)$ from voltage sources and switching modes Figure 1: Procedure for obtaining state space representation of switched electrical circuits using EL formulation In this section, we demonstrate that the previous methodology (especially the one given in [7]) for applying the EL formulation to switched circuits can give erroneous results. We do this using a simple example and also point out the reasons for the error and the means to correct it. Figure 2: Simple diode circuit Consider the diode circuit shown in the Fig. 2. 
Using the methodology outlined in [7], we select the generalised coordinates as $q_{L},q_{C},\dot{q}_{L},\dot{q}_{C}$ and write the following expressions for the magnetic co-energy, the electric field energy and the forcing function vector for the circuit: $\displaystyle\mathcal{T}(q,\dot{q})=\frac{1}{2}L_{s}\dot{q_{L}}^{2};\mathcal{V}(q)=\frac{{q_{C}}^{2}}{2C};$ (4) $\displaystyle\mathcal{F}(q,u)=\left[\begin{array}[]{cc}uV_{i}&0\end{array}\right]^{T}.$ (6) Consequently the expression for the Lagrangian $\mathcal{L}(q,\dot{q})=\mathcal{T}(q,\dot{q})-\mathcal{V}(q)$ and the Rayleigh dissipation function can be written as: $\displaystyle\mathcal{L}(q,\dot{q})=\frac{1}{2}L_{s}\dot{q_{L}}^{2}-\frac{{{q}_{C}}^{2}}{2C},$ (7) $\displaystyle\mathcal{D}(\dot{q},u)=\frac{1}{2}(\dot{q_{L}}^{2}R_{s}+(u\dot{q}_{L}-\dot{q}_{C})^{2}R).$ (8) It is important to note that the switching function $u$ takes a value equal to either $1$ or $0$ depending on whether the diode is conducting or not. This determines whether the source $V_{i}$ is connected to the circuit or not and consequently, gives the expression (6) for ${F}(q,u)$. Further, the current through the resistor $i_{R}$ is either $\dot{q}_{L}-\dot{q}_{C}$ for $u=1$ or $-\dot{q}_{C}$ for $u=0$, and hence the expression for $\mathcal{D}(\dot{q},u)$ in (8). The equation (8) also takes care of the constraint equation obtained from Kirchoff’s current law by relating $i_{R},\dot{q}_{L}$ and $\dot{q}_{C}$, so $A(\dot{q},u)=0$. Substituting (7,8) in (2,3) gives: $\displaystyle L_{s}\ddot{q}_{L}=-(R_{s}+u^{2}R)\dot{q}_{L}+uR\dot{q}_{C}+uV_{i},$ (9) $\displaystyle\frac{q_{C}}{C}=(u\dot{q}_{L}-\dot{q}_{C})R.$ (10) Substituting for $\dot{q}_{C}$ from (10) in (9) and rearranging gives the following switched state space model for the circuit: $\displaystyle\left[\begin{array}[]{c}\ddot{q_{L}}\\\ \\\ \frac{\dot{q}_{C}}{C}\end{array}\right]=\left[\begin{array}[]{cc}-\frac{R_{s}}{L_{s}}&\frac{-u}{L_{s}}\\\ \\\ \frac{u}{C}&\frac{-1}{RC}\end{array}\right]\left[\begin{array}[]{c}\dot{q_{L}}\\\ \\\ \frac{q_{C}}{C}\end{array}\right]+\left[\begin{array}[]{c}\frac{u}{L_{s}}\\\ \\\ 0\end{array}\right]V_{i}.$ (23) The model obtained in (23) is a correct representation of the system dynamics for $u=1$ but does not give the correct set of equations for $u=0$. After a closer inspection it becomes clear that the expressions (7,8) written following the procedure outlined in [7] are not correct. The insistence in [7] that $\mathcal{L}$ should not be a function of the switch position $u$ is not right. To investigate further, we write separate expressions for the quantities $\mathcal{L}(q,\dot{q})$, $\mathcal{D}(\dot{q},u)$, $\mathcal{F}(q,u)$ and $A(\dot{q},u)$ for the two modes of operation of the circuit. 
For $u=1$ the expressions are: $\displaystyle\mathcal{T}(q,\dot{q})=\frac{1}{2}L_{s}\dot{q_{L}}^{2};\mathcal{V}(q)=\frac{{q_{C}}^{2}}{2C};\mathcal{F}(q,u)=\left[\begin{array}[]{cc}V_{i}&0\end{array}\right]^{T};$ (25) $\displaystyle\mathcal{L}(q,\dot{q})=\frac{1}{2}L_{s}\dot{q_{L}}^{2}-\frac{{{q}_{C}}^{2}}{2C};$ (26) $\displaystyle\mathcal{D}(\dot{q},u)=\frac{1}{2}(\dot{q_{L}}^{2}R_{s}+(\dot{q}_{L}-\dot{q}_{C})^{2}R).$ (27) For $u=0$ the expressions are: $\displaystyle\mathcal{T}(q,\dot{q})=0;\mathcal{V}(q)=\frac{{q_{C}}^{2}}{2C};\mathcal{F}(q,u)=\left[\begin{array}[]{cc}0&0\end{array}\right]^{T};$ (29) $\displaystyle\mathcal{L}(q,\dot{q})=-\frac{{{q}_{C}}^{2}}{2C};$ (30) $\displaystyle\mathcal{D}(\dot{q},u)=\frac{1}{2}{\dot{q}_{C}}^{2}R.$ (31) It is clear from the above equations that the EL parameters of the two circuits, generated by the two switch positions, do not result in identical values for magnetic and electric energies (amongst other quantities like $\mathcal{D}$ and $\mathcal{F}$). A combined set of EL equations from the two separate descriptions valid for both the switch positions can be written as follows: $\displaystyle\mathcal{T}(q,\dot{q},u)=\frac{1}{2}L_{s}{(u\dot{q_{L}})}^{2};\mathcal{V}(q)=\frac{{q_{C}}^{2}}{2C};$ (32) $\displaystyle\mathcal{F}(q,u)=\left[\begin{array}[]{cc}uV_{i}&0\end{array}\right]^{T};$ (34) $\displaystyle\mathcal{L}(q,\dot{q},u)=\frac{1}{2}L_{s}{(u\dot{q_{L}})}^{2}-\frac{{{q}_{C}}^{2}}{2C};$ (35) $\displaystyle\mathcal{D}(\dot{q},u)=\frac{1}{2}({(u\dot{q_{L}})}^{2}R_{s}+(u\dot{q}_{L}-\dot{q}_{C})^{2}R).$ (36) It is evident from the above equations that $\mathcal{T}$ and $\mathcal{L}$ are functions of $u$ and this dependence is clearly shown. The set (34, 35, 36) when substituted in (2,3) gives the following equations: $\displaystyle u^{2}L_{s}\ddot{q}_{L}=-u^{2}\dot{q}_{L}R_{s}+(-u^{2}\dot{q}_{L}+u\dot{q}_{C})R+uV_{i},$ (37) $\displaystyle\frac{q_{C}}{C}=(u\dot{q}_{L}-\dot{q}_{C})R.$ (38) Substituting for $\dot{q}_{C}$ from (38) in (37) and rearranging gives the following switched state space model for the circuit in descriptor form: $\displaystyle\begin{split}\left[\begin{array}[]{cc}u&0\\\ \\\ 0&1\end{array}\right]\left[\begin{array}[]{c}\ddot{q_{L}}\\\ \\\ \frac{\dot{q}_{C}}{C}\end{array}\right]=\left[\begin{array}[]{cc}-\frac{uR_{s}}{L_{s}}&\frac{-u}{L_{s}}\\\ \\\ \frac{u}{C}&\frac{-1}{RC}\end{array}\right]\left[\begin{array}[]{c}\dot{q_{L}}\\\ \\\ \frac{q_{C}}{C}\end{array}\right]\\\ +\left[\begin{array}[]{c}\frac{u}{L_{s}}\\\ \\\ 0\end{array}\right]V_{i}\end{split}$ (39) Given the fact that $u$ can be either $0$ or $1$, (39) has been written by writing $u^{2}$ as $u$. Further, the descriptor form is indispensable because cancelling by $u$ on both sides of (37) is incorrect when $u=0$. It is worth noting that although (39) does not reconstruct the second state equation $\dot{q}_{L}=0$ for $u=0$, it also does not give an erroneous representation like (23). The equation $\dot{q}_{L}=0$ for $u=0$ cannot be derived directly from the EL formulation because the relevant circuit representation (29-31) does not contain that information. The above example shows that in switched circuits it is possible that the magnetic and electric energies of the system are also functions of the switching state, and ignoring this dependence while writing the EL equations for the circuit can lead to errors. 
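As a quick numerical cross-check of the descriptor form (39), the sketch below builds $E(u)$, $A(u)$ and $B(u)$ and inspects the rank of $E(u)$ for both switch positions; the component values are placeholders chosen only for illustration and are not taken from the paper.

```python
# Minimal numpy sketch of the descriptor-form model (39),
# E(u) * xdot = A(u) * x + B(u) * V_i with x = [qL_dot, qC/C].
import numpy as np

Ls, Rs, R, C = 1e-3, 0.1, 10.0, 1e-4   # placeholder component values

def descriptor_matrices(u):
    E = np.array([[u, 0.0],
                  [0.0, 1.0]])
    A = np.array([[-u * Rs / Ls, -u / Ls],
                  [ u / C,       -1.0 / (R * C)]])
    B = np.array([[u / Ls],
                  [0.0]])
    return E, A, B

for u in (1, 0):
    E, A, B = descriptor_matrices(u)
    print(f"u={u}: rank(E) = {np.linalg.matrix_rank(E)}")
# u=1: rank 2 -> E is invertible and the model reduces to the state-space form (23)
# u=0: rank 1 -> the first row degenerates to 0 = 0, so qL_dot is left undetermined,
#                consistent with the discussion above
```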
The solution is to write the EL equations for each mode of the circuit (corresponding to a particular value of $u$) and then write them as functions of $u$ for the overall circuit, as demonstrated above. In the next section we discuss the issue of incorporation of constraints while writing the EL equations for switched circuits.

## 3 Incorporation of constraints in EL formulation

The use of the constraint form of the EL formulation is necessitated [7] by the fact that the corresponding Lagrangian does not include complete information about the circuit. For example, in a circuit having more than one mesh, with the current and charge of the dynamic elements selected as generalised coordinates, the EL formulation represents a force balance, which means that the algebraic relation between the various currents is missing from the formulation and has to be incorporated by adding constraints. In this section we examine this contention in detail and show that it is not always the case. We demonstrate using examples that proper labelling of the currents flowing in the circuit leads to automatic incorporation of Kirchhoff's current law in the framework. Consequently, EL modelling in the unconstrained form is both possible and convenient. The simplest case arises when there are $m$ constraints amongst the $n$ generalised coordinates (i.e., the charges and currents of the dynamic elements $L$ and $C$). In this case it is optimal to choose the number of generalised coordinates of the system as $N=n-m$. Subsequently, EL modelling can be done using the unconstrained form. This also ensures that a minimal representation for the system is directly obtained. A brief discussion of this case for EL modelling of three-phase pulse-width modulated AC-to-DC converters is given in [2]. The second case arises when the constraints are not between the generalised coordinates but stem from branch relations involving dissipative elements like resistances. In this case the constraints are automatically taken care of while writing the expression for the Rayleigh dissipation function $\mathcal{D}(\dot{q},u)$. An example of this case is the circuit studied in Section 2, where the expression for $\mathcal{D}(\dot{q},u)$ in (8) is written by expressing the current through the resistor $i_{R}$ in terms of the generalised current coordinates $\dot{q}_{L}$ and $\dot{q}_{C}$. Figure 3: Example Circuit Another possibility is that the constraints stem from branch relations between the currents through various voltage sources. In this case it is more convenient to write the branch current through the source as a linear combination of the generalised current coordinates and to incorporate the forcing functions as part of the electric field energy of the system. This is illustrated using the circuit shown in Fig. 3, for which the following expressions can be written for the energies and the dissipation function: $\displaystyle\mathcal{T}(q,\dot{q})=\frac{1}{2}(L_{1}\dot{q_{L_{1}}}^{2}+L_{2}\dot{q_{L_{2}}}^{2});$ (40) $\displaystyle\mathcal{V}(q)=\frac{{q_{C}}^{2}}{2C}-E_{1}q_{L_{1}}-E_{2}(q_{C}-q_{L_{1}}+q_{L_{2}});$ (41) $\displaystyle\mathcal{D}(\dot{q})=\frac{1}{2}R(\dot{q_{L_{1}}}-\dot{q_{C}})^{2}.$ (42) The labelling of currents ensures that all branch relations are taken care of and the unconstrained form of the EL equations can be directly applied, as sketched below.
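For illustration, the following sketch carries out this unconstrained derivation symbolically from (40)-(42) for the circuit of Fig. 3. The symbol names mirror the paper, but the code itself is only an illustrative sketch and assumes the sympy library.

```python
# Unconstrained EL equations for the Fig. 3 circuit from the energies (40)-(42):
# d/dt(dL/dq̇_k) - dL/dq_k + dD/dq̇_k = 0 for each generalised coordinate q_k.
import sympy as sp

t = sp.symbols('t')
L1, L2, C, R, E1, E2 = sp.symbols('L_1 L_2 C R E_1 E_2', positive=True)

# Generalised coordinates and velocities, treated as independent symbols
qL1, qL2, qC, dqL1, dqL2, dqC = sp.symbols('q_L1 q_L2 q_C qd_L1 qd_L2 qd_C')
coords = [(qL1, dqL1), (qL2, dqL2), (qC, dqC)]

T = sp.Rational(1, 2) * (L1 * dqL1**2 + L2 * dqL2**2)        # (40)
V = qC**2 / (2 * C) - E1 * qL1 - E2 * (qC - qL1 + qL2)       # (41)
D = sp.Rational(1, 2) * R * (dqL1 - dqC)**2                  # (42)
Lag = T - V

# Map the independent symbols to time trajectories for the final equations
traj = {s: sp.Function(str(s))(t) for s, _ in coords}
traj.update({ds: sp.Function(str(s))(t).diff(t) for s, ds in coords})

for s, ds in coords:
    lhs = (Lag.diff(ds).subs(traj).diff(t)     # d/dt(∂L/∂q̇_k)
           - Lag.diff(s).subs(traj)            # - ∂L/∂q_k
           + D.diff(ds).subs(traj))            # + ∂D/∂q̇_k
    print(sp.Eq(lhs, 0))
```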
Further, as seen from (41), since the forcing function $E_{2}$ is not associated with any one generalised coordinate, it is more convenient to include the forcing functions as part of $\mathcal{V}$ instead of writing a separate expression for $\mathcal{F}$. Figure 4: LC circuit Lastly, we examine one more case, where the constraints are not between the generalised coordinates and also cannot be incorporated using the dissipation function or the electric field energy. The idea is demonstrated using the circuit shown in Fig. 4, taken from [7]. The following equations can be written for the circuit: $\displaystyle\mathcal{T}(q,\dot{q})=\frac{1}{2}(L_{1}\dot{q_{L_{1}}}^{2}+L_{2}\dot{q_{L_{2}}}^{2});$ (43) $\displaystyle\mathcal{V}(q)=\frac{(q_{L_{1}}-q_{L_{2}})^{2}}{2C_{1}}-Eq_{L_{1}};\mathcal{D}(\dot{q})=0;$ (44) $\displaystyle\begin{split}\mathcal{L}(q,\dot{q})=\frac{1}{2}(L_{1}\dot{q_{L_{1}}}^{2}+L_{2}\dot{q_{L_{2}}}^{2})\\\ -\frac{(q_{L_{1}}-q_{L_{2}})^{2}}{2C_{1}}+Eq_{L_{1}}.\end{split}$ (45) Writing the equations like this automatically incorporates Kirchhoff's current law and the constraint form is not required. Substituting equations (43-45) in the EL equations (1), along with the third state equation $\dot{q}_{C1}=\dot{q}_{L1}-\dot{q}_{L2}$, gives the state-space representation for the circuit. We have thus demonstrated that constraints can be incorporated using a standard procedure while writing the EL formulation for a circuit. It is best to label generalised current coordinates for the dynamic elements and then write all other currents as linear combinations of these generalised coordinates. This leads to automatic incorporation of Kirchhoff's current law and reduces the amount of bookkeeping required for solving the problem. In the next section we describe the modelling of power converters in more detail and discuss their high-fidelity equivalent circuits. We then proceed to demonstrate the application of the EL formulation to these high-fidelity models using the insights gained from the discussion in Sections 2 and 3.

## 4 High-fidelity equivalents of power converters and their EL modelling

### 4.1 Mathematical Modelling of power converters

Mathematical models of systems are based on the understanding of their physical behaviour and are needed for their simulation and control. Basic circuit modelling of power electronic converters typically produces continuous-time, non-linear, time-varying models of the following form: $\dot{x}(t)=f_{\sigma(t)}(x(t),u(t))$ (46) with the state $x\in\mathbb{R}^{n}$, the input $u\in\mathbb{R}^{p}$, and $\sigma(t):[0,\infty)\rightarrow\Sigma$ a right-continuous, piecewise-constant function which selects the index of the active system from the index set $\Sigma$ at each time instant $t$. The function $\sigma(t)$ is also called the mode-selector function. If the switching is only time-dependent, then corresponding to $\sigma(t)$ the following switching functions can be defined: $\displaystyle q_{i}(t):[0,\infty)\rightarrow\\{0,1\\},$ (47) $\displaystyle\sum_{i=1}^{m}q_{i}(t)=1,\forall t\in[0,\infty).$ (48) Each index $\sigma(t)=i$ is called a mode and each mode defines a different dynamic behaviour of the system. Consequently, equation (46) can be re-written as: $\dot{x}(t)=\sum_{i=1}^{m}q_{i}(t)f_{i}(x(t),u(t)).$ (49) Depending on the application, sufficiently accurate assumptions not affecting the validity of the models can be made.
Switches can be considered ideal and are consequently modelled as resistances of zero and infinity during turn-on and turn-off, respectively. The switching time can be considered to be infinitely short. Generators can be considered ideal, and passive circuit elements (R, L and C) can be considered linear and time-invariant. Applying these assumptions leads to a switched state-space linear time-invariant (LTI) model of the following form: $\dot{x}(t)=A_{\sigma(t)}x(t)+B_{\sigma(t)}u(t).$ (50) Using the switching function notation, this can be re-written as: $\dot{x}(t)=\sum_{i=1}^{m}q_{i}(t)(A_{i}x(t)+B_{i}u(t)).$ (51) However, if the switching is state-dependent as well (which happens when devices like diodes are present), then the switching function has to be written as $q_{i}(t,x(t))$, and the model ceases to be linear. A further complication is introduced by the fact that a pulse width modulated (PWM) converter switches in response to a modulating signal $m(t)$. Augmenting the model to represent the relation between $q_{i}(t)$ and $m(t)$ introduces additional non-linearity and time-varying behaviour. In addition to switched state-space models, other modelling approaches include ones based on circuit averaging, sampled data and dynamic phasors. For details see [15, 16].

### 4.2 High-fidelity equivalents of power converters

EL modelling of power converters has been confined so far to idealized equivalent circuits, especially idealized equivalents for switching devices and dynamic elements. Parasitics are not included, and hence the models obtained are not suitable for component-level study to examine phenomena such as device voltage and current transients. In this section, we extend EL modelling of power electronic converters to complex, non-ideal, high-fidelity (H-F) descriptions of these converters. These H-F state-space models are otherwise difficult to obtain using classical methods. H-F models of power converters [17, 18] are obtained when stray inductances and capacitances of passive components are also considered. An inductor is replaced [19] by $R_{L}$, $C_{L}$ and the bulk inductance $L$, and the capacitor is modelled with an equivalent series resistance $R_{c}$ and inductance $L_{c}$ in addition to the bulk capacitance $C$, as shown in Fig. 5. The diode is modelled as a piece-wise linear element. Its voltage drop $V_{D}(u)$ is a function of the switch state $u$ of the diode, with $V_{D}(0)=0$ and $V_{D}(1)=V_{d_{on}}$, where $u=0$ denotes the off state and $u=1$ the on state of the diode. $R_{d}(u)$ is used to model the linear portion of the diode characteristic curve. This resistance has two values based on the diode state: $R_{d}(1)=R_{d_{\emph{on}}}$, $R_{d}(0)=R_{d_{\emph{off}}}$. The junction capacitance $C_{d}$ models reverse recovery effects. Figure 5: H-F inductor and capacitor model The MOSFET drain-to-source characteristic is modelled by $R_{s}(u_{m})$, $L_{s}$ and $C_{s}$, with $R_{s}(u_{m})$ being the switch-mode-dependent resistance, where $u_{m}$ is the switch state of the MOSFET ($u_{m}=0$ for off, $u_{m}=1$ for on), see Fig. 6. Figure 6: H-F model of Diode and MOSFET

### 4.3 EL modelling of H-F equivalents

#### 4.3.1 H-F model of diode rectifier

The H-F equivalent of the simple diode rectifier circuit shown in Fig. 2 is shown in Fig. 7. Figure 7: H-F model of diode rectifier As seen in Fig. 7, only the currents in the dynamic elements are labelled; all other currents are written as linear combinations of these currents.
The generalised coordinate set includes these currents and corresponding charges, i.e., the set $(q_{s},q_{Lc},q_{cd},\dot{q}_{s},\dot{q}_{Lc},\dot{q}_{cd})$, and the following expressions can be written for $\mathcal{V}(q,u),\mathcal{T}(q,\dot{q}),\mathcal{L}(q,\dot{q},u)$ and $\mathcal{D}(\dot{q},u)$. $\displaystyle\mathcal{T}(q,\dot{q})=\frac{1}{2}L_{s}\dot{q_{s}}^{2}+\frac{1}{2}L_{c}\dot{q}_{Lc}^{2},$ (52) $\displaystyle\begin{split}\mathcal{V}(q,u)=\frac{1}{2C_{d}}q_{cd}^{2}+\frac{1}{2C}q_{Lc}^{2}-V_{i}q_{s}\\\ +V_{D}(u)(q_{s}-q_{cd}),\end{split}$ (53) $\mathcal{L}(q,\dot{q},u)=\frac{1}{2}L_{s}\dot{q_{s}}^{2}+\frac{1}{2}L_{c}\dot{q}_{Lc}^{2}-\frac{1}{2C_{d}}q_{cd}^{2}-\frac{1}{2C}q_{Lc}^{2}\\\ +V_{i}q_{s}-V_{D}(u)(q_{s}-q_{cd}),$ (54) $\mathcal{D}(\dot{q},u)=\frac{1}{2}R_{s}\left(\dot{q_{s}}\right)^{2}+\frac{1}{2}R_{c}\left(\dot{q}_{Lc}\right)^{2}+\frac{1}{2}R_{d}(u)\left(\dot{q_{s}}-\dot{q}_{cd}\right)^{2}\\\ +\frac{1}{2}R_{L}\left(\dot{q_{s}}-\dot{q}_{Lc}\right)^{2}.$ (55) It maybe noted that in the above equations the voltage drop $V_{D}$ and resistance $R_{d}$ of the diode are written as $V_{D}(u)$ and $R_{d}(u)$, respectively, to clearly illustrate their dependence on the switching function $u$. Consequently, this dependence is reflected in the expressions for $\mathcal{V}$, $\mathcal{L}$ and $\mathcal{D}$ given above. Further, current constraints have been incorporated automatically in these equations, so $\mathcal{A}(\dot{q},u)=0$. Using equations (52, 53, 54, 55) in equation (2) and solving for $q=q_{s}$ we get: $\ddot{q_{s}}=\frac{(-R_{s}-R_{L})}{L_{s}}\dot{q_{s}}-\frac{q_{cd}}{L_{s}C_{d}}+\frac{R_{L}}{L_{s}}\dot{q}_{Lc}+\frac{V_{i}}{L_{s}},$ for $q=q_{Lc}$ we obtain $\ddot{q}_{Lc}=\frac{R_{L}}{L_{c}}\dot{q}_{s}-\frac{(-R_{L}-R_{c})}{L_{c}}\dot{q}_{Lc}-\frac{q_{Lc}}{CL_{s}},$ while for $q=q_{cd}$ we obtain $\dot{q}_{cd}=\dot{q}_{s}-\frac{q_{cd}}{R_{d}(u)C_{d}}+\frac{V_{D}(u)}{R_{d}(u)}.$ Selecting the state vector as $\mathbf{x}=\left(i,v_{d},i_{Lc},v_{c}\right)^{T}=\left(\dot{q_{s}},\frac{{q}_{cd}}{C_{d}},\dot{q}_{Lc},\frac{q_{Lc}}{C}\right)^{T}$, writing $V_{D}(u)$ as $uV_{d_{on}}$ and re-writing the input vector $u$ as $u=\left(V_{i},V_{d_{on}}\right)^{T}$ the final state-space model obtained is: $\mathbf{A(u)}=\begin{bmatrix}\frac{\left(-R_{s}-R_{L}\right)}{L_{s}}&\frac{-1}{L_{s}}&\frac{R_{c}}{L_{s}}&0\\\ \\\ \frac{1}{C_{d}}&-\frac{1}{R_{d}(u)C_{d}}&0&0\\\ \\\ \frac{R_{L}}{L_{c}}&0&\frac{\left(-R_{L}-R_{c}\right)}{L_{c}}&\frac{-1}{L_{c}}\\\ \\\ 0&0&\frac{1}{C}&0\end{bmatrix};$ $\mathbf{B(u)}=\begin{bmatrix}\frac{1}{L_{s}}&0\\\ \\\ 0&\frac{u}{R_{d}(u)C_{d}}\\\ \\\ 0&0\\\ \\\ 0&0\end{bmatrix};\mathbf{C}=\left[I\right]_{4\times 4};\mathbf{D}=\left[0\right]_{4\times 2}.$ #### 4.3.2 H-F model of DC-DC boost converter A DC-DC converter is shown in Fig. 8. Replacing the switches and the dynamic elements by their H-F equivalents gives the circuit configuration shown in Fig. 9. As before, the generalised coordinates selected are currents and charges of dynamic elements with the currents shown in Fig. 9. 
Writing the equations like we did for the previous example, we obtain: Figure 8: DC-DC Boost-converter $\mathcal{T}(q,\dot{q})=\frac{1}{2}L\dot{q_{1}}^{2}+\frac{1}{2}L_{s}\dot{q_{2}}^{2}+\frac{1}{2}L_{c}\dot{q_{3}}^{2},$ (56) $\displaystyle\begin{split}\mathcal{V}(q,u_{d})=\frac{1}{2C}q_{3}^{2}+\frac{1}{2C_{s}}q_{4}^{2}+\frac{1}{2C_{d}}q_{5}^{2}-V_{i}q_{1}\\\ +V_{D}(u_{d})(q_{1}-q_{2}-q_{5}),\end{split}$ (57) $\mathcal{L}(q,\dot{q},u_{d})=\frac{1}{2}L\dot{q_{1}}^{2}+\frac{1}{2}L_{s}\dot{q_{2}}^{2}+\frac{1}{2}L_{c}\dot{q_{3}}^{2}-\frac{1}{2C}q_{3}^{2}-\frac{1}{2C_{s}}q_{4}^{2}\\\ -\frac{1}{2C_{d}}q_{5}^{2}+V_{i}q_{1}-V_{D}(u_{d})(q_{1}-q_{2}-q_{5}),$ (58) $\mathcal{D}(\dot{q},u_{d})=\frac{1}{2}R_{L}\left(\dot{q_{1}}\right)^{2}+\frac{1}{2}R_{s}(u_{m})\left(\dot{q_{2}}-\dot{q_{4}}\right)^{2}+\frac{1}{2}R_{c}\left(\dot{q_{3}}\right)^{2}\\\ +\frac{1}{2}R_{d}(u_{d})\left(\dot{q_{1}}-\dot{q_{2}}-\dot{q_{5}}\right)^{2}+\frac{1}{2}R_{o}\left(\dot{q_{1}}-\dot{q_{2}}-\dot{q_{3}}\right)^{2}.$ (59) Figure 9: H-F model of DC-DC Boost converter Again, in the above equations, $V_{D}(u_{d}),R_{d}(u_{d})$ illustrate the dependence of the diode parameters on the diode switching function $u_{d}$ and $R_{s}(u_{m})$ illustrates the dependence of the MOSFET resistance on the MOSFET switching function $u_{m}$. Putting equations (56-59) in (2) and solving for $q=q_{1}$ we get: $\ddot{q_{1}}=\frac{(-R_{L}-R_{o})}{L}\dot{q_{1}}+\frac{R_{o}}{L}\dot{q_{2}}+\frac{R_{o}}{L}\dot{q_{3}}-\frac{q_{5}}{LC_{d}}+\frac{V_{i}}{L},$ for $q=q_{2}$ we obtain: $\ddot{q_{2}}=\frac{R_{o}}{L_{s}}\dot{q_{1}}-\frac{R_{o}}{L_{s}}\dot{q_{2}}-\frac{R_{o}}{L_{s}}\dot{q_{3}}-\frac{q_{4}}{L_{s}C_{s}}+\frac{q_{5}}{L_{s}C_{d}},$ for $q=q_{3}$ we obtain: $\ddot{q_{3}}=\frac{R_{o}}{L_{c}}\dot{q_{1}}-\frac{R_{o}}{L_{c}}\dot{q_{2}}-\frac{q_{3}}{CL_{c}}+\frac{\left(-R_{c}-R_{o}\right)}{L_{c}}\dot{q_{3}},$ for $q=q_{4}$, we obtain: $\dot{q_{4}}=\dot{q_{2}}-\frac{q_{4}}{R_{s}(u_{m})C_{s}},$ and for $q=q_{5}$ we obtain: $\dot{q_{5}}=\dot{q_{1}}-\dot{q_{2}}-\frac{q_{5}}{R_{d}(u_{d})C_{d}}+\frac{V_{D}(u_{d})}{R_{d}(u_{d})}.$ Selecting the state vector as $\mathbf{x}=\left(i,v_{c},i_{Ls},v_{cs},i_{Lc},v_{d}\right)^{T}=\left(\dot{q_{1}},\frac{q_{3}}{C},\dot{q_{2}},\frac{q_{4}}{C_{s}},\dot{q_{3}},\frac{q_{5}}{C_{d}}\right)^{T}$, re-writing $V_{D}(u_{d})$ as $u_{d}V_{d_{on}}$ and writing the input vector $u$ as $u=(V_{i},V_{d_{on}})^{T}$, we obtain the final state-space model (see Appendix). It is worth noting that $u_{m}$ and $u_{d}$ are complementary to each other, so the state matrix derived can be written as a function of $u_{d}$ alone. ## 5 Simulation results In this section, we present simulation results for two high-fidelity power converters, taking into account the non-idealities of circuit components. We also provide model parameters and a brief discussion of the results. Additionally, for reproducibility, we include the code at the following repository: https://github.com/ShakirSofi/H-F-Dc-Dc-Power-converters. ### 5.1 H-F model diode rectifier ##### Parameters The input is a square wave signal of $\pm 12$ V with a switching frequency of $1$ kHz, and the other parameters are set as follows: $R_{s}=0.01\,\Omega$, $C=1\,\text{mF}$, $L_{c}=10\,\mu\text{H}$, $R_{L}=10\,\Omega$; $L_{s}=10\,\mu\text{H}$, $R_{d_{\text{off}}}=10k\,\Omega$, $R_{d_{\text{on}}}=0.05\,\Omega$, $C_{d}=10\,\text{nF}$, $R_{c}=1\,\Omega$, $R_{L}=10\,\Omega$, and $L_{c}=10\,\text{nH}$. 
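Before turning to the results, the following condensed sketch shows how the H-F diode rectifier model of Section 4.3.1 can be assembled and checked numerically: it builds $A(u)$ and $B(u)$ exactly as printed there and inspects the two LTI modes of the switched model. The component values and the implicit-Euler stepper are illustrative placeholders for this sketch, not the exact code used to produce the figures.

```python
# Sketch: H-F diode rectifier model of Section 4.3.1 in matrix form.
# State x = (i, v_d, i_Lc, v_c); inputs (V_i, V_d_on); u is the diode state.
import numpy as np

Rs, RL, Rc, Ls, Lc, C, Cd = 0.01, 10.0, 1.0, 10e-6, 10e-9, 1e-3, 10e-9  # placeholders
Rd_on, Rd_off = 0.05, 10e3

def AB(u):
    Rd = Rd_on if u else Rd_off                  # piece-wise linear diode resistance
    A = np.array([
        [(-Rs - RL) / Ls, -1 / Ls,        Rc / Ls,          0.0],
        [1 / Cd,          -1 / (Rd * Cd), 0.0,              0.0],
        [RL / Lc,          0.0,           (-RL - Rc) / Lc, -1 / Lc],
        [0.0,              0.0,           1 / C,            0.0]])
    B = np.array([
        [1 / Ls, 0.0],
        [0.0,    u / (Rd * Cd)],
        [0.0,    0.0],
        [0.0,    0.0]])
    return A, B

def implicit_euler_step(x, u, Vi, Vd_on=0.7, dt=1e-9):
    """One backward-Euler step; stable despite the stiff parasitic dynamics."""
    A, B = AB(u)
    return np.linalg.solve(np.eye(4) - dt * A, x + dt * (B @ np.array([Vi, Vd_on])))

for u in (1, 0):                                 # the two LTI modes of the switched model
    A, _ = AB(u)
    print(f"u={u}: eigenvalues of A(u) =", np.sort(np.linalg.eigvals(A).real))

x_next = implicit_euler_step(np.zeros(4), u=1, Vi=12.0)   # usage example
```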
##### Results Figure 10 shows that the inductor current $i$ changes smoothly when the switch goes from the OFF to the ON state and starts storing energy. The current increases as the magnetic field collapses when the switch is turned OFF. The inductor will release this energy by increasing voltage. On the right, the figure shows damped oscillations, and the peak oscillation occurs at the start when the switch is turned OFF, i.e., the current decreases significantly, causing an increase in voltage. Similarly, the bottom left plot shows the voltage buildup across the capacitor. Figure 10: Simulation results of H-F model of diode rectifier ### 5.2 H-F DC-DC boost converter ##### Parameters The input voltage is $V_{i}=10$ V, and the parameters are as follows: $R_{L}=0.1\,\Omega$, $C=42\,\mu\text{F}$, $L=1.6\,\text{mH}$, $L_{s}=20\,\text{nH}$, $R_{s_{\text{off}}}=2\,M\Omega$, $R_{s_{\text{on}}}=0.2\,\Omega$, $R_{d_{\text{on}}}=50\,m\Omega$, $C_{s}=200\,pF$, $C_{d_{\text{on}}}=15\,mF$, $R_{c}=0.4\,\Omega$, $R_{d_{\text{off}}}=40\,M\Omega$, $L_{c}=100\,pH$, $d(t)=0.5$, and switching frequency $f=50$ kHz. ##### Results In the ideal scenario, the output voltage at a duty ratio of 0.5 should be $20V$ ($=\frac{V_{in}}{1-d}$). However, due to non-idealities, the measured voltage across $R_{o}$ is $18.22V$ in the steady state, and the steady-state inductor current obtained is $2.13A$, as shown in Figure 11. The plots in the bottom row show the current across inductor $I_{Lc}$ and the voltage across diode $v_{d}$. It can be inferred that the steady-state diode voltage is approximately $-v_{c}$ when the switch is ON, 0.7 (diode bias voltage) otherwise. This can be seen in the plots and impulsive behaviour as the switching frequency is high. Figure 11: Simulation results of H-F model of DC-DC Boost Converter ## 6 Conclusions We have revisited the procedure to be followed for Lagrangian modelling of switching circuits and pointed out errors in the erstwhile assumption that energies and consequently the Lagrangian cannot depend on the switching function. We have also demonstrated how to write down the EL formulation correctly without resorting to this assumption and obtain the correct state- space representation. We have examined the issue of incorporation of constraints in the EL formulation and shown how it can be done by an appropriate labelling of circuit currents in terms of generalised current coordinates. Lastly, with the help of the above-mentioned insights, we have extended Lagrangian modelling to high-fidelity equivalent circuits of power electronic converters. It is shown that despite the complex nature of these circuits, state-space models can be derived in an easy and systematic manner using the EL framework. Additionally, convincing numerical simulations are also presented to capture the non-ideal behaviour of high-fidelity circuits. ## Declarations ### Funding and Competing interests This research received no external funding. The authors declare that they have no competing interests relevant to the article. 
## Appendix $\mathbf{A(u_{d},u_{m})}=\begin{bmatrix}\frac{\left(-R_{L}-R_{o}\right)}{L}&0&\frac{R_{o}}{L}&0&\frac{R_{o}}{L}&\frac{-1}{L}\\\ \\\ 0&0&0&0&\frac{1}{C}&0\\\ \\\ \frac{R_{o}}{L_{s}}&0&-\frac{R_{o}}{L_{s}}&\frac{-1}{L_{s}}&-\frac{R_{o}}{L_{s}}&\frac{1}{L_{s}}\\\ \\\ 0&0&\frac{1}{C_{s}}&-\frac{1}{R_{s}(u_{m})C_{s}}&0&0\\\ \\\ \frac{R_{o}}{L_{c}}&-\frac{1}{L_{c}}&-\frac{R_{o}}{L_{c}}&0&\frac{\left(-R_{c}-R_{o}\right)}{L_{c}}&0\\\ \\\ \frac{1}{C_{d}}&0&-\frac{1}{C_{d}}&0&0&-\frac{1}{R_{d}(u_{d})C_{d}}\end{bmatrix};$ $\mathbf{B(u_{d})}=\begin{bmatrix}\frac{1}{L}&0\\\ \\\ 0&0\\\ \\\ 0&0\\\ \\\ 0&0\\\ \\\ 0&0\\\ \\\ 0&\frac{u_{d}}{R_{d}(u_{d})C_{d}}\end{bmatrix};\mathbf{C}=\left[I\right]_{6\times 6};\mathbf{D}=\left[0\right]_{6\times 2}.$ ## References * [1] N. Mohan, W. P. Robbins, T. M. Undeland, R. Nilssen, O. Mo, Simulation of power electronic and motion control systems-an overview, Proceedings of the IEEE 82 (8) (1994) 1287–1302. * [2] G. Tan, H. Chen, X. Zhang, Comments on “lagrangian modeling and passivity-based control of three-phase ac/dc voltage-source converters”, IEEE Transactions on Industrial Electronics 55 (4) (2008) 1881–1882. * [3] A. Bloch, Electromechanical analogies and their use for the analysis of mechanical and electromechanical systems, Journal of the Institution of Electrical Engineers - Part I: General 92 (52) (1945) 157–169. * [4] D. Jeltsema, J. M. Scherpen, Dynamics and Control of Switched Electronic Systems, Springer-Verlag, London, 2012, Ch. Power based modelling. * [5] G. Escobar, A. J. van der Schaft, R. Ortega, A hamiltonian viewpoint in the modeling of switching power converters, Automatica 35 (3) (1999) 445 – 452. * [6] R. D. Middlebrook, S. Cuk, A general unified approach to modelling switching-converter power stages, in: 1976 IEEE Power Electronics Specialists Conference, 1976, pp. 18–34. * [7] J. M. Scherpen, D. Jeltsema, J. Klaassens, Lagrangian modeling of switching electrical networks, Systems and Control Letters 48 (5) (2003) 365 – 374. * [8] D. Jeltsema, J. M. Scherpen, On the existence of lagrangians for clarke and park transformed switched-mode electrical networks, IFAC-PapersOnLine 52 (16) (2019) 90–95, 11th IFAC Symposium on Nonlinear Control Systems NOLCOS 2019. * [9] K. Umetani, Lagrangian method for deriving electrically dual power converters applicable to nonplanar circuit topologies, IEEJ Transactions on Electrical and Electronic Engineering 11 (4) (2016) 521–530. * [10] J. A. Russer, P. Russer, Lagrangian and hamiltonian formulations for classical and quantum circuits, IFAC Proceedings Volumes 45 (2) (2012) 439–444. * [11] R. Ortega, J. A. L. Perez, P. J. Nicklasson, H. Sira-Ramirez, Passivity-based Control of Euler-Lagrange Systems, Springer-Verlag, London, 1998. * [12] J. Scherpen, J. Klaassens, L. Ballini, Lagrangian modeling and control of dc-to-dc converters, in: Proceedings of the INTELEC’99, University of Groningen, Research Institute of Technology and Management, Groningen, 1999. * [13] H. A. Yildiz, L. Goren-Sumer, Lagrangian modeling of dc-dc buck-boost and flyback converters, in: 2009 European Conference on Circuit Theory and Design, 2009, pp. 245–248. * [14] J. Meisel, Principles Of Electromechnical-energy Conversion, Krieger Publishing Company, Florida, 1984. * [15] S. Bacha, I. Munteanu, A. I. Bratcu, Power Electronic Converters Modeling and Control, Advanced Textbooks in Control and Signal Processing, Springer-Verlag, London, 2014. * [16] D. Maksimovic, A. M. Stankovic, V. J. Thottuvelil, G. C. 
Verghese, Modeling and simulation of power electronic converters, Vol. 89, 2001, pp. 898–912. * [17] P. L. Chapman, Multi-resolution switched system modeling, in: 2004 IEEE Workshop on Computers in Power Electronics, 2004. Proceedings., 2004, pp. 167–172. * [18] H. Khan, M. A. Bazaz, S. A. Nahvi, Model order reduction of power electronic circuits, in: 2017 6th International Conference on Computer Applications In Electrical Engineering-Recent Advances (CERA), 2017, pp. 450–455. * [19] A. Massarini, M. K. Kazimierczuk, Self-capacitance of inductors, IEEE Transactions on Power Electronics 12 (4) (1997) 671–676.
# D-STACK: High Throughput DNN Inference by Effective Multiplexing and Spatio- Temporal Scheduling of GPUs Aditya Dhakal<EMAIL_ADDRESS>University of California, RiversideUSA , Sameer G. Kulkarni<EMAIL_ADDRESS>IIT GandhinagarIndia and K. K. Ramakrishnan<EMAIL_ADDRESS>University of California, RiversideUSA ###### Abstract. Hardware accelerators such as GPUs are required for real-time, low latency inference with Deep Neural Networks (DNN). However, due to the inherent limits to the parallelism they can exploit, DNNs often under-utilize the capacity of today’s high-end accelerators. Although spatial multiplexing of the GPU, while limiting the GPU resources (GPU%) to each DNN to the right amount, leads to higher GPU utilization and higher inference throughput, there remain a number of challenges. Finding the GPU% for right-sizing the GPU for each DNN through profiling, determining an optimal batching of requests to balance throughput improvement while meeting application-specific deadlines and service level objectives (SLOs), and maximizing throughput by appropriately scheduling DNNs are still significant challenges. This paper, introduces a dynamic and fair spatio-temporal scheduler (D-STACK) that enables multiple DNNs to run in the GPU concurrently. To help allocate the appropriate GPU% (we call it the ”Knee”), we develop and validate a model that estimates the parallelism each DNN can utilize.We also develop a lightweight optimization formulation to find an efficient batch size for each DNN operating with D-STACK. We bring together our optimizations and our spatio-temporal scheduler to provide a holistic inference framework. We demonstrate its ability to provide high throughput while meeting application SLOs. We compare D-STACK with an ideal scheduler that can allocate the right GPU% for every DNN kernel. D-STACK gets higher than 90% throughput and GPU utilization compared to the ideal scheduler. We also compare D-STACK with other GPU multiplexing and scheduling methods (e.g., NVIDIA Triton, Clipper, Nexus), using popular DNN models. Our controlled experiments with multiplexing several popular DNN models achieve up to $1.6\times$ improvement in GPU utilization and up to $4\times$ improvement in inference throughput. ††copyright: none ## 1\. Introduction Deep Neural Networks (DNNs) are widely used for many applications, including image recognition, natural language processing, etc. Accelerators have become indispensable for DNN learning and inference. Accelerators such as GPUs, TensorCores (Markidis et al., 2018), and TPU (Jouppi et al., 2017) reduce the DNN inference times, often by 2-3 orders of magnitude compared to even using a high-end CPU cluster. These accelerators are widely used by cloud services as a part of their inference-as-a-service (IaaS) offerings, where trained DNN models are hosted in a Cloud or an Edge Cloud (especially for low-latency operation). User requests are inferred using the GPUs deployed in the cloud. Most DNN models running in inference frameworks (PyTorch (Paszke et al., 2019), TensorFlow Serving (Ten, 2020), NVIDIA’s Triton (tri, 2021) etc.) often execute far fewer floating-point operations per second (FLOPS) than the capacity of these high-end GPUs (Dhakal et al., 2020; Zhang et al., 2019; Inci et al., 2020), TPUs (Wang et al., 2020) and other accelerators (Kong et al., 2021). 
We observed that DNN models, when performing inference even using a single GPU, do not significantly reduce the DNN’s processing latency when provided with additional GPU resources (i.e., number of Streaming Multiprocessors (SMs) - GPU compute units analogous to CPU cores) beyond a certain point. We call this point as a ”Knee” for the DNN (expressed as a percentage of the total SMs available in the GPU, e.g., 50% of a V100 GPU (which has 80 SMs in total) is 40 SMs.). Running applications with resources matching the Knee is desirable for a cloud operator providing Inference as a Service, since multiplexing a GPU (or similar accelerator) across as many applications as possible keeps costs low. Operating at the Knee also keeps the latency low for the user. When more GPU resources are provided for a DNN (e.g., by giving the full GPU to an application, possibly using temporal sharing), it is wasteful as the GPU is not fully utilized. We see two fundamental reasons for this under-utilization of multi-core accelerators such as GPUs by DNNs when given more than the Knee’s resources: i) Amount of parallelism over the entirety of DNN’s execution is not uniform, i.e., many DNN functions (e.g., convolution, ReLU etc.) are unable to fully utilize the parallelism offered by the accelerator; ii) DNN operations also involve other overheads (e.g., kernel launches, memory read-write, etc.). We study the execution of a variety of DNN models to understand the root causes of under-utilization of such accelerators, particularly GPUs, and develop methods to improve the overall system utilization, thus improving throughput and reducing inference latency. Multiplexing GPUs in the Edge Cloud: DNN inference requests for applications such as autonomous driving, augmented reality, etc., have stringent deadlines (e.g., $<$100ms). A cloud providing IaaS also has to account for the network latency. Edge Clouds offer a sweet spot reducing both latency and offering the necessary processing resources, although more constrained than centralized cloud services. Multiplexing the expensive hardware accelerator is therefore very desirable. Current GPU virtualization and inference service frameworks such as Nexus (Shen et al., 2019), NVIDIA’s Triton Inference Server (Triton) (tri, 2021), gPipe (Huang et al., 2019), and PipeDream (Narayanan et al., 2019) either use a ’single GPU per DNN’ model or time-share the GPU across multiple DNN models. These current state-of-the-art frameworks for DNNs allocate the full GPU (i.e., 100% of GPU) for the time quantum as shown in Fig. 1 (left). However, dedicating an entire GPU to run a single DNN model at a time can be wasteful. Furthermore, interleaving execution of tenant applications by temporally sharing increases inference latency for all of them, because of the significant cost of frequent switching between applications. Multiplexing several applications on the GPU to run concurrently, through spatial as well as temporal multiplexing, helps to better utilize the GPU and achieve much higher aggregate inference throughput. Our approach utilizes the CUDA Multi-process Service (MPS) (NVIDIA, Tesla, 2019) to spatially share the GPU across several applications, similar to GSLICE (Dhakal et al., 2020). But, existing approaches of spatial multiplexing with the GPU either only statically partition the GPU for each application or does not guarantee computing resource isolation while multiplexing. This has the potential to allocate fewer resources than necessary for an application. 
It also causes interference among the multiplexed applications when too many models share the GPU, thus, increasing the inference latency. We illustrate with an example when four different models have to be run on a V100 GPU (three are already executing and a fourth is added). Temporal sharing allocates the GPU to each model for a time slice. Static spatial sharing with CUDA-MPS will allow all 4 models to run in an uncontrolled manner, causing interference as noted in (Dhakal et al., 2020). GSLICE will initially spatially share the 3 models, and allocate GPU resources according to their Knee GPU% capacities. When the fourth model is added (in Fig. 1(middle)), the VGG-19 model’s GPU% is reduced from 50% to 25%, causing increased inference latency for that more complex VGG-19 model, which also is undesirable. Figure 1. GPU multiplexing scenarios On the other hand, our GPU virtualization framework, with our spatio-temporal scheduler, Dynamic Spatio-Temporal pACK (D-STACK), can run on multiple NVIDIA GPU-based systems (single GPU or GPU clusters). D-STACK schedules DNNs based on spatial resources (Knee GPU%, number of SMs), and the appropriate time slice. Combining spatial and temporal scheduling, D-STACK is designed to meet the inference deadline for each DNN model. D-STACK goes well beyond the basic idea of simple temporal or static spatial multiplexing of a GPU presented in earlier works (Dhakal et al., 2020; Inci et al., 2020; tri, 2021). The example of Spatio-Temporal scheduling in Fig. 1 (right), has all 4 models getting their Knee GPU%. When a model completes its inference, another model utilizes the GPU resources, thus, sharing the GPU resources both temporally and spatially. D-STACK’s scheduler further utilizes the idle processing resource of the GPU by dynamically running any ’ready’ models, thus maximizing GPU utilization. D-STACK’s Innovations: i.) Understanding a DNN’s demand: For efficient utilization of the GPU, D-STACK requires information about the resource requirements of each DNN model. Providing the right resources for the DNN is not just a challenge for the GPU, but is fundamental for all such accelerators that utilize a multitude of compute engines for parallel processing. In this paper, along with our analytical models of DNN execution and scheduling, we estimate what would be theoretically possible for a DNN to exploit available parallelism by knowing exactly how much computational capacity is required, assuming that instantaneous switching between multiplexed tasks is possible. We then show how close we come to that theoretical optimal by implementing our GPU virtualization framework using our D-STACK scheduler on a GPU cluster. ii.) Dynamic Resource Allocation in GPU: Currently, dynamic resource allocation of the GPU requires reloading of applications with their new desired GPU%. For typical DNN models, this reloading time can be 10s of seconds, during which the GPU is idle, lowering the overall system utilization and throughput. In D-STACK, we address the dynamic allocation of GPU resources by overlapping the loading of a DNN model with the new resource allocation, by continuing to execute the existing DNN model, thus effectively masking the loading latency. We thus reduce the time the GPU is idle to less than 100 micro-seconds with D-STACK. iii.) Multi-GPU Cluster: Understanding the use of a single GPU and increasing its utilization translates to improving overall throughput of a GPU cluster. D-STACK’s optimization can be easily extended to a multi-GPU cluster. 
In this paper we present the implementation of D-STACK’s spatio-temporal scheduler across a multi-GPU cluster to increase the system throughput by 200%.

Table 1. Triton and D-STACK with 4 DNN models

| | Triton Server | D-STACK | Latency Reduction (%) |
|---|---|---|---|
| Task completion (sec.) | 58.61 | 35.59 | 37% |

Comparing with State-of-the-art: We present a comparison of D-STACK with NVIDIA’s Triton Inference Server. We evaluate the total time taken to infer with 4 different DNN models (Alexnet, Mobilenet, ResNet-50, and VGG-19) multiplexed on one V100 GPU, each concurrently inferring 10000 images. The results in Table 1 show that the Triton server takes about 58 seconds to finish inference. The D-STACK scheduler completes inference on all requests more than 37% faster (only 36 seconds). D-STACK’s spatial multiplexing, which provides just the right amount of GPU%, together with its dynamic spatio-temporal scheduling, results in more effective use of the GPU, achieving higher DNN inference throughput than NVIDIA’s Triton server while also lowering task completion time. Based on these experiments, we see that spatio-temporal scheduling can further enhance throughput when inferring with multiple different models concurrently. Contributions: D-STACK improves GPU utilization by 60% and increases DNN inference throughput by 4$\times$ compared to a pure temporal scheduler, while still avoiding any deadline (SLO) violations. Our key contributions are:

* We investigate the extent to which a DNN can exploit parallelism (§3), and devise an analytical model to demonstrate this limitation of typical DNNs when performing inference with GPUs (§4).
* We develop a Spatio-Temporal scheduler for DNNs, using the GPU% and batch size derived from our analytical models, to maximize inference throughput while allocating GPU resources fairly (§6).
* We develop an optimization framework to determine the optimal DNN batch size and GPU%. We evaluate the efficacy of GPU usage when choosing the optimal batch size and Knee GPU% (§5).
* We compare D-STACK’s approach with the Triton server and other state-of-the-art scheduling algorithms.

## 2\. Related Work

GPU Multiplexing: Multiplexing the GPU to increase GPU utilization and system throughput has been discussed in many studies. Proprietary products such as Nutanix (NVIDIA, 2017), vGPU (NVIDIA, 2021b) utilize GPU virtualization to multiplex a GPU across VMs. Many consider temporal multiplexing and seek increased GPU utilization through batching and better scheduling (Crankshaw et al., 2017; Gu et al., 2019; Shen et al., 2019; Gujarati et al., 2020; Gao et al., 2018; AWS, 2021; Yeh et al., 2020). Gandiva (Xiao et al., 2018) and Mystic (Ukidave et al., 2016) address multiplexing the GPU while observing, but not solving, the interference caused when multiplexing DNNs in the GPU. Unlike these, our work can concurrently run multiple applications in the GPU, improve GPU utilization _and_ reduce or eliminate the interference through controlled spatial multiplexing. Spatial Multiplexing of GPU: GSLICE (Dhakal et al., 2020) utilizes CUDA MPS to spatially share the GPU among multiple DNN applications. However, it partitions the GPU statically and does not schedule the execution of DNNs. With GSLICE, executing a large number of models can cause each model to get a small GPU slice (less than the Knee), leading to higher inference latency and lower throughput. Moreover, the lack of a scheduler means it is insufficient for deadline-driven inference scenarios.
We compare D-STACK with GSLICE in §7. Laius (Zhang et al., 2019), G-Net (Zhang et al., 2018a), Gost (Zhu et al., 2021) and Baymax (Chen et al., 2016) spatially multiplex GPU kernels. Unlike these works, our platform focuses on the spatially multiplex entire DNNs consisting of multiple kernels. Moreover, we run DNN applications in their native DNN framework (e.g., PyTorch, TensorFlow) without any algorithmic modifications, unlike the whitebox approach of Laius and Baymax. S3DNN(Zhou et al., 2018) (uses Streams) and Prophet (Chen et al., 2017) (uses MPS) and CuMAS (Belviranli et al., 2016) profile each kernel and use a shim to capture kernel launches and reorder kernel executions for proper spatial sharing. In contrast, our approach does not require a shim or reordering of kernels and works in a black box manner, without requiring an application’s individual kernel profile (which may not be available). DNN’s limits on Utilizing GPUs: Several works (Jeon et al., 2019; Yeung et al., 2020a, b) have discussed the under-utilization of GPU by DNNs, and have proposed algorithmic optimizations that make DNN kernel computation more efficient (Jia et al., 2019; Du et al., 2017; Song et al., 2017; Chen et al., 2018). These solutions require whitebox models that can be changed. There have been works analyzing how DNN’s exploit parallelism. (Jain et al., 2019, 2018) show that DNNs attain a much smaller number of FLOPS than what a GPU can provide. Poise (Dublish et al., 2019) and (Kayıran et al., 2013) shows that the high data load latency from the GPU memory to the processing unit is also a reason for the limit in parallelism. (Liang et al., 2022) creates an analytical model to predict the inference latency and mainly utilize temporal queuing solution to meet deadlines. (Liang et al., 2022)’s model uses default MPS, and due to interference causing increased latency, they limit the number of models spatially sharing the GPU at a time. On the other hand, D-STACK provides fine-grained spatial and temporal control of resources of the GPU and thus is able to run far more models with larger batch sizes without interference. With a spatio-temporal scheduler D-STACK utilizes resources both spatially and temporally to meet the inference deadline. (Inci et al., 2020) shows lack of resources in CPU and GPU spatial resources will greatly slowdown GPU execution. Our work complements (Inci et al., 2020) by demonstrating a method to find the Knee beyond which applications fail to utilize GPU efficiently. We utilize understanding from these related work to create an analytical DNN model that helps deriving the Knee% necessary for inference without slowdowns. Furthermore, we evaluate our methods in a real system. Multi-Instance GPUs (MIGs) such as the NVIDIA A100 are hardware-based approaches for coarser-grained, spatial multiplexing. MIGs allow static partitioning of a GPU into multiple smaller GPU instances (up to 7 instances with the A100). However, MIGs require the GPU to be reset or VMs to be restarted to change the resource allocation. This causes significant downtimes as all the processing using the GPU has to also be restarted. D-STACK’s spatio-temporal scheduling avoids the GPU reset and quickly allocates the desired GPU resources. Moreover, note that MIG GPUs are also able to run as a single GPU (similar to V100). Thus, they can benefit from D-STACK without any modification. ## 3\. 
Understanding DNN Parallelism through Measurement Experimental Setup and Testbed: We used a Dell Server with Intel(R) Xeon(R) Gold 6148 CPU with 20 cores, 256 GB of system memory, and one NVIDIA V100 GPU, and an Intel X710 10GbE NIC as our testbed. The V100 has 80 SMs and 16 GB of memory. Our workload for the vision based DNNs (Alexnet (Krizhevsky et al., 2012a), Mobilenet (Howard et al., 2017), ResNets (He et al., 2016), VGG (Simonyan and Zisserman, 2014), Inception (Szegedy et al., 2015), ResNext (Xie et al., 2017)) consists of color images of resolution 224$\times$224\. This resolution choice is inspired by initial work (Krizhevsky et al., 2012b; Simonyan and Zisserman, 2014; tor, 2021). For BERT (Devlin et al., 2019), a natural language processing DNN, we utilize sentences of 10 words. We use OpenNetVM (Zhang et al., 2016) to host our framework that runs multiple DNN models for inference. We use Moongen (Emmerich et al., 2015) to transmit ~1920 images/sec. on a 10Gbps Ethernet link. Our platform can batch input data to the desired batch size. We primarily report the execution time for inference in the GPU for all our experiments and do not consider the additional latency contributed by network protocols. Therefore, our results are independent of the network transport protocol used. We utilize CUDA Multi- Process Service (MPS) to spatially multiplex the GPU. We use CUDA_MPS_ACTIVE_ THREAD_PERCENTAGE environmental variable to provide GPU%. Once set, the GPU% cannot be changed for a process. ### 3.1. Measurement with ML Models We now present measurements performed on our testbed with multiple DNNs, to demonstrate the limits in the parallelism of those DNN models. We measured the latency for inferring a batch of 16 images/sentences using different GPU% for several popular DNN models using PyTorch framework. We utilize models with different compute requirements. From Fig. 3, we see that the inference latency remains unchanged above 30-50% of GPU for most models (Knee point). With a smaller batch size, the Knee% is lower (20%-35%). However, we also observe that using fewer than necessary SMs (low GPU%) leads to an exponential increase in model latency (also observed in (Inci et al., 2020)). We observed a similar knee with other GPUs as well. We evaluated computationally light models, Alexnet (A-P100 and A-T4) and Squeezenet (Sq-P100 and Sq-T4) on both the P100 and T4 GPUs. The T4 GPU supports CSS, but the P100 only supports default MPS. We present their results in Fig. 3. Even with different GPUs, we see the knee behavior in Alexnet and Squeezenet. Only the computationally dense ResNet-50 (R-P100 and R-T4) does not show an obvious knee. Both the P100 and T4 GPUs have lower computational capacity than the V100, therefore, ResNet-50 can fully utilize those GPUs. As the knee for these models exists in other GPUs as well, our platform can be used more generally in other GPUs as well. Figure 2. V100 lat. vs. GPU%(Batch = 16) Figure 3. P100 and T4 GPUs profile ### 3.2. Dynamic GPU Resource Reconfiguration Due to the limitation of CUDA MPS (NVIDIA, Tesla, 2019), any GPU resource readjustment requires us to spin up a new CPU process with an updated GPU%. This results in several seconds of downtime (depending on the ML framework initialization). 
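For reference, pinning a process to a GPU% under CUDA MPS amounts to exporting CUDA_MPS_ACTIVE_THREAD_PERCENTAGE before the process creates its CUDA context, as in the minimal sketch below; the worker script name and its command-line arguments are hypothetical placeholders, not part of our framework's actual interface.

```python
# Minimal sketch of launching an inference worker with a fixed GPU% under CUDA MPS.
# The environment variable must be set before the CUDA context is created; it
# cannot be changed for an already-running process.
import os
import subprocess

def launch_worker(gpu_percent, model_name):
    env = os.environ.copy()
    env["CUDA_MPS_ACTIVE_THREAD_PERCENTAGE"] = str(gpu_percent)
    # "inference_worker.py" and its flags are placeholders for illustration.
    return subprocess.Popen(
        ["python", "inference_worker.py", "--model", model_name], env=env)

proc = launch_worker(40, "resnet50")   # e.g. 40% of the GPU's SMs
```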
We utilize the overlapped execution approach of GSLICE (Dhakal et al., 2020), which maintains an active-standby pair of processes, where an active process keeps processing incoming requests while a standby process loads the DNN model into the GPU with the updated GPU%. The standby takes over inference when ready, thus avoiding downtime. While changing the GPU%, two instances of the same model, the original and the new model, occupy the GPU during the brief overlap time. This increases the GPU memory demand. We overcome this drawback through the DNN parameter sharing utilized in GSLICE (Dhakal et al., 2020). We use cudaIPC to share the weights and parameters loaded by the original model with the newly loading model, thus removing the need to load the weights again. Parameter sharing reduces the memory required by the newly loaded DNN model by up to 40%.

### 3.3. Loading models without known Knee%

When a model which is not profiled and whose knee is not known is started, our platform initially provides it a nominal 30% of the GPU. The GPU% is then readjusted using dynamic GPU resource reconfiguration, finding the knee from the measured inference latency with a simple binary search.

## 4\. Modeling DNN parallelism

### 4.1. Compute Bound vs. Memory Bound Workloads

Table 2. Compute & memory bound kernels

| Model | Layer | GFLOPs | Bytes $(10^{6})$ | Arit. Int. | Limit |
|---|---|---|---|---|---|
| Alexnet | Conv.2 | 0.30 | 0.22 | 182 | Compute |
| ResNet50 | Conv.2 | 0.103 | 0.121 | 393 | Compute |
| VGG-19 | Conv.11 | 3.7 | 9.44 | 391 | Compute |
| GNMT | LSTM | 0.016 | 8.38 | 2 | Memory |

The latency of accessing parameters and weights of a DNN layer from the GPU DRAM can be significant. Many studies (Zhang et al., 2018b) have suggested that memory-bound DNN kernels may have a small amount of compute and are likely to be limited by GPU memory bandwidth. NVIDIA has proposed an arithmetic intensity (A.int) metric (NVIDIA, 2021a) to estimate if a kernel is memory or compute bound. The A.int of a kernel is computed as the ratio of the floating point operations it performs to the memory (bytes) it fetches, i.e., $A.int=\frac{\#operations}{\#bytes}$. NVIDIA reports the arithmetic intensity of the V100 GPU (in our testbed) as 139.8 FLOPS/Byte (NVIDIA, 2021a). Any kernel with a lower arithmetic intensity than the GPU's is memory-bound, while a kernel with a higher intensity is compute-bound. We analyzed the most frequently occurring kernels of the CNNs Alexnet (Krizhevsky et al., 2012b), ResNet-50 (He et al., 2016), VGG-19 (Simonyan and Zisserman, 2014), and an RNN, GNMT (Wu et al., 2016), to illustrate the behavior of compute- and memory-bound DNNs. We present the results in Table 2. Most convolution layers exceed the GPU's A.int and are thus compute-bound. These layers can reduce their runtimes if more compute is available. However, kernels like the LSTM in GNMT, which operate with large input and output features (1024 features in GNMT), require a lot of data but perform relatively few computations compared to convolution. Therefore, they score a very low A.int. We should note that DNNs are not entirely constructed of convolution or LSTM layers. However, CNNs, in general, have more convolution kernels.

### 4.2. Memory Contention While Multiplexing

Studies (Mei and Chu, 2017; Jia et al., 2018) of scientific computation workloads have shown that the GPU cache size and occupancy are important factors influencing the latency of kernel execution. We also examine the effect of cache contention while running multiple DNN models.
However, we observe with DNNs, that the inference latency does not vary significantly _if_ SM isolation is maintained. Since we indeed maintain SM isolation with spatial multiplexing using CSS, the impacts of contention in the GPU cache or other memory resources is minimal. We present the 99th-percentile inference latency (batch = 16) of DNN models running in isolation (Fig. 3) versus the same model multiplexed at its knee GPU% with 4 other models in Table 3. Inference latency varies less than 3%, confirming this minimal impact. Thus, we do not utilize a separate variable for delay caused by the GPU cache. Instead, in the model of a DNN that we discuss in the next subsection, we consider all the memory related delays as a single variable. Table 3. Latency (ms) in isolation and multiplexed Model | Knee% | Isolation | Multiplexed ---|---|---|--- Mobilenet | 20% | 9.8 (ms) | 9.9 ResNet-18 | 30% | 12.4 | 12.4 BERT | 30% | 9.3 | 9.3 ResNet-50 | 40% | 28.9 | 28.5 VGG-19 | 50% | 51.2 | 52.4 Table 4. Table of Notations for DNN Model Variable | Description ---|--- $b$ | Batch Size $p$ | 1st kernel’s number of concurrent ops. (tasks) $Kmax$ | Maximum number of kernels $K_{i}$ | $i^{th}$ kernel $N_{i}$ | Number of parallelizable operations for $K_{i}$ $R_{i}$ | Number of repetition of $K_{i}$ in DNN $M$ | Memory Bandwidth per SM $d_{i}$ | Data for $i^{th}$ kernel (parameters & input) $S$ | Number of allocated SMs ### 4.3. Modeling DNNs We now model an analytical DNN model that exhibits the characteristics of most actual DNN models, in terms of the variation in the compute workload across their different kernels. We model the DNN composed of multiple sequential kernels executing in GPU (and other accelerators) instead of layers as often used in other ML studies. We have observed using NVPROF profiling that each layer (e.g., convolution layer) is often implemented as combination of multiple kernels in GPU, thus, we use kernel as basic component of DNN execution in this model. The model guides the determination of the best operating point (Knee) GPU% for a DNN. In our model, we breakdown the DNN workload into parallelizable operations (compute tasks), memory read/write as well as serialized (non-parallelizable) operations, and observe the effect of changing GPU resources. While our model is simple, it captures all the system level overheads that contributes to DNN latency, and provides us with good approximation of the Knee of each model. The simplicity of the model further aids in evaluating DNNs in different GPUs, with different numbers of SMs, as well as other accelerator hardware. Selected notation used in the analysis is shown in Table. 4. As in typical GPUs, each of the $\mathbf{S}$ SMs allocated to a DNN will process one parallel operation per $\mathbf{t_{p}}$ time. From a modeling perspective, we order the kernels by their amount of computation without losing generality. DNNs have an arbitrary order in kernel execution. However, the knee of the model is dependent on peak computation requirements of the kernels rather than the order of execution of each kernel. We set the first kernel $\mathbf{K_{1}}$ as that with the greatest amount of parallelizable operations $\mathbf{N_{1}}$, which is selected as $N_{1}=\mathbf{p}$ for modeling purposes. For subsequent kernels, the workload decreases by a fixed amount, so that $\mathbf{N_{i}>N_{i+1}}$. Eq. 1 specifies the amount of parallelizable operations for each kernel in the DNN. 
We decrease the amount of parallelizable tasks by a fixed amount, $\frac{p\times b}{Kmax}$, for each subsequent kernel. The number of concurrent operations decreases and reaches $\sim 0$ for the last ($K_{max}$) kernel.

(1) $N_{i}=\begin{cases}p\times b,&i=1\\ \left\lfloor{N_{i-1}-\frac{p\times b}{Kmax}}\right\rfloor,&i\geq 2\end{cases}$

Figure 4. (a), (b) Inference characteristics of analytical DNN models with varying amounts of parallelism and hardware resources. (c), (d) Demonstration of the analytical model's behavior on the real DNN Mobilenet.

Correspondingly, we define the total execution time for each kernel's parallelizable tasks as $\mathbf{W_{i}}=N_{i}\times t_{p}$. Ideally, $W_{i}$ can be completed in $t_{p}$ units of time when we allocate at least $N_{i}$ SMs to execute it. If the GPU hardware provides $S$ SMs to execute $K_{i}$, then, without loss of generality, the time taken to finish processing the kernel depends on the minimum of the inherent parallelism, as defined by $N_{i}$, and the number of SMs allocated for executing the operation. Thus, the execution time for the parallelizable operations of each kernel of the DNN can be computed using Eq. 2.

(2) $E_{i}=\frac{W_{i}}{\max(1,\min(S,N_{i}))}$

Individual kernels in the DNN often run repeatedly during a DNN inference. We define the number of repetitions of kernel $K_{i}$ as $\mathbf{R_{i}}$. We then factor in the time taken to run all the serialized operations, including kernel start-up and the kernel waiting for data. The kernel start-up time is considered a constant, $\mathbf{t_{np}}$, per kernel. The kernel's time waiting for data, however, depends on the kernel's input and parameters. Each kernel of a DNN has a certain amount of data (model parameters, input data) that has to be fetched from GPU DRAM (the main/global memory of the GPU) to the CUDA cores in the SMs. We have observed that the total global memory read/write bandwidth increases in proportion to the number of SMs allocated. Other studies (Zhang et al., 2020; Micikevicius, 2012) also point to a proportional increase. We define the per-kernel latency caused by waiting for parameters, input, and other data to be loaded as Eq. 3. The total time of non-parallelizable (sequential) operations $\mathbf{W_{se}}$ is then given by Eq. 4. We use Eqs. 2 and 4 to compute the DNN execution time $\mathbf{E_{t}}$ as in Eq. 5.

(3) $E_{m}=\frac{d_{i}}{M\times S}$

(4) $W_{se}=b\times\sum_{i=1}^{K_{max}}R_{i}\times\left(t_{np}+E_{m}\right)$

(5) $E_{t}=W_{se}+\sum_{i=1}^{K_{max}}R_{i}E_{i}$

We now simulate the total time to execute a DNN under varying conditions, i.e., by varying the amount of parallelizable and non-parallelizable operations at each kernel and the number of SMs in the GPU. As in typical GPUs, we assume the number of SMs allocated to a DNN remains static. Fig. 4(a) shows the impact on the DNN execution time of assigning different numbers of SMs. First, we created a DNN with 50 kernels, i.e., $K_{max}=50$. We set the time taken for a parallel operation $t_{p}$ to 40 units and for serialized operations $t_{np}$ to 10 units. We repeat the simulation for 3 cases, varying the maximum amount of parallelization (concurrent operations at the first kernel) $N_{1}$ as 60, 40, and 20.
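For concreteness, the following is a short sketch of this simulation, implementing Eqs. 1-5 with the parameters just stated ($K_{max}=50$, $t_{p}=40$, $t_{np}=10$); the per-kernel data volumes $d_{i}$, repetition counts $R_{i}$, and per-SM bandwidth $M$ default to illustrative placeholder values rather than profiled numbers.

```python
import math

def simulate_dnn_latency(S, b=1, p=60, Kmax=50, t_p=40, t_np=10,
                         R=None, d=None, M=1.0):
    """Total execution time E_t (Eq. 5) of the analytical DNN on S SMs.
    R[i] = repetitions of kernel i, d[i] = data fetched by kernel i;
    both default to 1 per kernel as illustrative placeholders."""
    R = R or [1] * Kmax
    d = d or [1.0] * Kmax
    # Eq. 1: parallelizable operations per kernel, decreasing linearly to ~0.
    N = [p * b]
    for _ in range(1, Kmax):
        N.append(max(0, math.floor(N[-1] - (p * b) / Kmax)))
    E_t = 0.0
    for i in range(Kmax):
        W_i = N[i] * t_p                              # total parallel work
        E_i = W_i / max(1, min(S, N[i]))              # Eq. 2
        E_m = d[i] / (M * S)                          # Eq. 3
        E_t += R[i] * E_i                             # parallel term of Eq. 5
        E_t += b * R[i] * (t_np + E_m)                # serialized term (Eq. 4)
    return E_t

# Qualitative trend of Fig. 4(a): latency versus the number of allocated SMs.
for S in (1, 5, 10, 20, 40, 60, 80):
    print(S, round(simulate_dnn_latency(S, p=60), 1))
```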
For all three cases, the execution time is very high when the number of SMs is small (1 to 5 SMs), reflecting the penalty of insufficient resources for the inherent degree of parallelism while executing the DNN kernels. However, as the number of SMs increases, the execution latency decreases. Interestingly (see the zoomed part of Fig. 4(a)), in each of the scenarios there occurs a point beyond which giving more SMs does not improve latency further. When the number of SMs provisioned exceeds the amount of parallelism inherent in the DNN kernel, there is no further reduction in the latency. Even before reaching this point, the latency improvement from an increased number of SMs reaches a point of diminishing returns, i.e., it shows only marginal improvements. (The DNN execution latency is impacted by both the number of parallelizable and non-parallelizable operations; by Amdahl's law (Amdahl, 1967) it varies inversely with the number of allocated SMs, and batching increases the parallelizable work (Gustafson, 1988).) We seek to find the most efficient number of SMs ($S$) needed for executing a given DNN, so that the utilization of the allocated SMs is maximized. To compute this, we have to find the maximum of $\frac{1}{E_{t}*S}$, which represents the DNN work processed per unit time per SM. For this, we differentiate $\frac{1}{E_{t}*S}$ with respect to the time taken to execute the DNN.

(6) $\frac{d}{dE_{t}}\left(\frac{1}{E_{t}*S}\right)=-\frac{1}{\left(E_{t}\right)^{2}*S}$

Fig. 4(b) shows this first-order derivative of the inverse of latency (Eq. 6); for $N_{1}=20$, $40$ and $60$ it reaches a maximum at $9$, $24$ and $31$ SMs, respectively. Hence, operating at this derived 'maximum' point for a DNN guarantees that there is a sufficient number of SMs to provide low latency while achieving the most efficient use of the SMs. Moreover, we can see that the 'maximum' peaks at a much lower number of SMs than the corresponding value of $N_{1}$. This is due to the impact of performing serialized tasks adjacent to the parallelizable tasks, which leaves many of the allocated SMs with low (or no) utilization during the serialized tasks. Thus, the further reduction in latency from increasing SMs is minimal.
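A small continuation of the previous sketch locates this operating point numerically by evaluating the magnitude of Eq. 6, $1/(E_{t}^{2}\times S)$, over candidate SM counts and taking the peak; because the $d_{i}$, $R_{i}$, and $M$ defaults above are placeholders, the resulting numbers indicate the trend rather than reproduce Fig. 4(b).

```python
def find_efficiency_peak(latency_fn, max_sms=80):
    """Return the SM count S that maximizes 1/(E_t^2 * S), i.e., the magnitude
    of the derivative in Eq. 6, where E_t = latency_fn(S)."""
    scores = {S: 1.0 / (latency_fn(S) ** 2 * S) for S in range(1, max_sms + 1)}
    return max(scores, key=scores.get)

# Reusing simulate_dnn_latency() from the previous sketch as the latency model:
for p in (20, 40, 60):
    print(p, find_efficiency_peak(lambda S, p=p: simulate_dnn_latency(S, p=p)))
```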
Figure 5. Thread count & runtime (shown as the area of the circle) of the 156 kernel invocations of Mobilenet.

### 4.4. Analyzing Execution of Typical DNNs

We profiled and analyzed the Mobilenet, ResNet and GNMT DNNs using the NVPROF profiler (NVI, 2021) to capture the GPU resource usage and the execution time of the DNN kernels.

#### 4.4.1. CNN model: Mobilenet

We profiled the inference of Mobilenet using 100% of a V100 GPU. For each kernel, we show the GPU thread count on the y-axis (in log scale) and the corresponding runtime as the area of the bubble in Fig. 5. The approximate GPU% required for all the threads to run concurrently is on the Y2-axis (log scale, on the right). We approximate this GPU% by considering that only 2048 threads can run in an SM concurrently, due to limits on the number of concurrent blocks and warps (NVI, 2018). The kernel's design and thread distribution across different threadblocks can lead to a higher SM demand than absolutely required. We plot 11 distinct kernels of a Mobilenet model (each identified by a different color in Fig. 5). These kernels are executed a total of 156 times per inference. We observe that a few of the kernels (kernels 3, 4 and 6, in particular) require more than 100% of the GPU to run. These kernels demand more threads than the GPU can run concurrently. However, these kernels run for a very short time and do not contribute significantly to the total inference latency. The kernels that contribute more to the total latency, such as kernels 10 and 7, utilize less than 10% of the GPU. This is because the DNN's feature matrices become smaller during inference, limiting the inherent parallelism. Thus, these kernels use fewer parallel GPU threads and run for a long time with a low GPU% demand. They contribute to lowering the Knee GPU% of the entire DNN model. From this understanding, when the amount of parallelism of a kernel is low, increasing the number of GPU SMs will not reduce the execution time of the kernel, since the additional SMs will not be utilized. We also analyzed the inference time of Mobilenet with different batch sizes (Fig. 4(c)). In all cases, for a given batch size, the latency reduces with an increase in GPU%. But, across all evaluated GPU percentages, the latency _increases_ with increasing batch size. Fig. 4(d) shows the first derivative of the inverse of Mobilenet's latency obtained using Eq. 6. The maximum of the derivative, i.e., the most efficient point for DNN operation, occurs for batch sizes of 1, 2, 4 and 8 at a GPU% of $\sim$10, 20, 40, and 50, respectively. This shows that with increasing batch size, i.e., increased parallelism, the GPU% at which the maximum utilization point occurs, based on Eq. 6, also increases. Fig. 6(a) shows the different maximum utilization points for the different models. Lightweight models such as Inception and ResNet-18 have a maximum at a lower GPU%, while the compute-heavy VGG-19 does not see an inflection point up to 100% GPU. These characteristics of the individual DNNs' execution closely match the analytical DNN model we presented.

Figure 6. DNN latency and its first derivative as in Eq. 6.

#### 4.4.2. Transformer Model BERT

We also present the evaluation of the inference latency, as well as its first-order derivative, per GPU% for the transformer-based natural language processing DNN, BERT, in Fig. 6(b). We evaluated sentences with 10 and 20 words. We observe that longer sentences result in higher inference latency. But again, we see that the inference latency does not improve after a point. The first-order derivative of the latency for 10- and 20-word sentences shows a peak at around 30% and 40% GPU, respectively. Thus, both our model prediction and our evaluation of representative compute-heavy CNN and memory-bound Transformer models show that there is indeed a limit to the parallelism utilized by DNNs. This motivates our approach to further improve GPU utilization with spatio-temporal scheduling.

## 5\. Optimal Batching for DNNs

Batching is a trade-off between improving throughput at the cost of higher latency. Inferring a batch of requests requires more computation, thus increasing inference time. Preparing a bigger batch, i.e., receiving and transferring data from the network to the GPU, also contributes additional latency. Providing a higher GPU% for a bigger batch can mitigate the increase in inference latency. However, giving more than a certain GPU% may be wasteful. We use the metric of _Efficacy_ ($\eta$) of using GPU resources as the basis to find a good operating point with respect to batch size and GPU%. We define $\eta$ of a DNN at a certain batch size and GPU% as in Eq. 7.

(7) $\textit{Efficacy }(\eta)=\frac{\textit{Throughput}}{\textit{Latency}\times GPU\%}$
Efficacy, $\eta$, tells us how much throughput the GPU produces per unit time, per unit of GPU resource (GPU%).

### 5.1. Optimum Batch Size for Inference

We profiled the ResNet-50 model for inference at different batch size & GPU% configurations. Fig. 7 shows that both very high and very low batch sizes lead to low Efficacy, due to high latency and reduced throughput, respectively; thus, an optimal batch size is desired. We now develop an optimization formulation that provides the right batch size and GPU% for a model, given a deadline. First, we present the key notation used for the optimization in Table 5.

Table 5. Notation for Optimization Formulation

Notation | Description
---|---
$p_{i}$ | GPU% for Session $i$
$b_{i}$ | Batch size for Session $i$
$f_{L}(p_{i},b_{i})$ | Inference latency of batch $b_{i}$ for model $M_{i}$ at GPU% $p_{i}$
$C_{i}$ | Request assembly time for Session $i$

The batch size is the product of the average incoming request rate and the request assembly time. Thus, $b_{i}=\texttt{Request-Rate}\times C_{i}$. Throughput $T_{i}$ is the number of images inferred per unit time (Eq. 8). Knowing the throughput (Eq. 8), we can write $\eta$ (Eq. 7) as Eq. 9. Eq. 9 is of the same form as the first derivative of the inverse of latency, Eq. 6 in §4.3.

(8) $T_{i}=\frac{b_{i}}{f_{L}(p_{i},b_{i})}$

(9) $\eta=\frac{b_{i}}{\left(f_{L}(p_{i},b_{i})\right)^{2}\times GPU\%}$

We seek to maximize Efficacy ($\eta$) to get the best balance of parameters, subject to the constraints 10, 11, and 12.

(10) $\displaystyle 1\leq b_{i}\leq\mathit{MaxBatchSize}$

(11) $\displaystyle f_{L}(p_{i},b_{i})+C_{i}\leq SLO_{i}$

(12) $\displaystyle f_{L}(p_{i},b_{i})\leq\frac{SLO_{i}}{2}$

The constraints express the following requirements. Eq. 10: the batch size must be less than or equal to the maximum batch size a model can accept. Eq. 11: the sum of the time taken to aggregate the batch over the network ($C_{i}$) and its inference execution time has to satisfy the SLO. Eq. 12: when working with a high request rate, we can regularly gather large batches for inference; however, a request that cannot be accommodated in the current batch due to constraint Eq. 11 has to be inferred in the next batch, whose deadline is then the deadline of the oldest pending request. Therefore, we make sure that the inference time of a batch is at most half the SLO. We computed the latency function $f_{L}(p_{i},b_{i})$ by fitting the latency observed while inferring DNN models with batch sizes of 1, 2, 4, 8, 10, 12, 16 and GPU% from 10-100 at 10% intervals on our testbed. The optimization is solved using the non-linear programming solver 'fmincon' in MATLAB. Requests (images of resolution $224\times 224$) arrive over a 10 Gbps link; one image is assembled every $\sim$481 $\mu$s. We use an SLO of 50 ms, allowing for an interactive system that can be used in safety-critical environments such as autonomous driving (Qiu et al., 2018). We present the feasibility region (where the SLO constraints are fulfilled) and the optimal point provided by the optimization formulation in Fig. 8. The infeasible area is in a lighter shade. It is particularly revealing that Mobilenet has an optimal point close to 30%.
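To illustrate the formulation, the sketch below performs a simple grid search over batch size and GPU% that maximizes $\eta$ (Eq. 9) subject to Eqs. 10-12; it is not the MATLAB 'fmincon' setup we actually use, and the toy latency function stands in for the fitted $f_{L}(p_{i},b_{i})$.

```python
def pick_operating_point(latency_ms, slo_ms, assembly_ms_per_image,
                         max_batch=16, gpu_grid=range(10, 101, 10)):
    """Grid search for the (batch size, GPU%) pair with the highest Efficacy
    (Eq. 9) that satisfies the batch-size and SLO constraints (Eqs. 10-12)."""
    best, best_eta = None, 0.0
    for b in range(1, max_batch + 1):                 # Eq. 10
        C = b * assembly_ms_per_image                 # batch assembly time C_i
        for p in gpu_grid:
            f = latency_ms(p, b)                      # stands in for f_L(p, b)
            if f + C > slo_ms or f > slo_ms / 2:      # Eqs. 11 and 12
                continue
            eta = b / (f ** 2 * p)                    # Efficacy, Eq. 9
            if eta > best_eta:
                best, best_eta = (b, p), eta
    return best

# Toy latency model (an assumption, not measured data): 50 ms SLO, 10 Gbps link.
toy_latency = lambda p, b: 5 + 200 * b / p
print(pick_operating_point(toy_latency, slo_ms=50, assembly_ms_per_image=0.481))
```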
Figure 7. Efficacy of ResNet-50

Figure 8. Mobilenet feasibility region (darker shade)

Estimation of the Knee for Real Systems: We view these optimal values in relative terms, as representative of the limit to parallelism that a model exhibits, because the optimization does not necessarily capture all the aspects that influence the execution of the model on a real system. We therefore pick batch size and GPU% values from the high-efficacy region of the optimization output in Fig. 8 and over-provision the GPU% by 5-10% when deploying the model on a real system.

## 6\. GPU Scheduling of DNN models

We now discuss spatio-temporal scheduling with D-STACK. We run the DNN models concurrently and meet their SLOs while keeping the GPU from oversubscription. Oversubscription occurs when the aggregate GPU% of the concurrent models exceeds 100%.

### 6.1. Scheduling with varying SLO

We schedule multiple models with different SLOs (deadlines), optimal batch sizes, and GPU% with D-STACK. Our scheduler considers two primary constraints. First, each DNN model must be scheduled at least once within an interval equal to its SLO, using the optimal batch size as predicted by the model in §5. Second, the aggregate GPU demand at any point in the schedule should not exceed 100%. We choose a time period defined by the largest SLO to be a Session. Models with an SLO smaller than a session will run multiple times in a session; e.g., for a 100 ms session, a model with a 25 ms SLO will run at least 4 times. Our spatio-temporal scheduling also accommodates dynamic arrivals of requests by utilizing a Fair, Opportunistic and Dynamic scheduling module, which dynamically recomputes the schedule, thus increasing the effective utilization of the GPU.

Table 6. Characteristics of different DNN models

Model | Knee% | SLO (ms) | Batch ($B_{i}$) | Runtime ($L_{i}$) (ms)
---|---|---|---|---
Mobilenet | 20 | 25 | 16 | 10
Alexnet | 30 | 25 | 16 | 8
BERT | 30 | 25 | 16 (10-word sentences) | 9
ResNet-50 | 40 | 50 | 16 | 28
VGG-19 | 50 | 100 | 16 | 55
ResNet-18 | 30 | 25 | 16 | 12
Inception | 40 | 50 | 16 | 25
ResNeXt-50 | 50 | 100 | 16 | 40

We use 8 different DNN models and present their optimal batch size, GPU%, and the inference latency at that batch size/GPU% in Table 6. We obtain the knee GPU% and batch size from the model in §5. We chose our SLOs based on safety-critical work such as autonomous driving (Qiu et al., 2018), where it is determined that less than 130 ms of processing is required to safely stop a car running at 80 miles/hr ($\sim$130 kmph). We choose a much more conservative 100 ms (effectively about 50 ms, as the rest is spent preparing the batch) for the higher-accuracy models (VGG-19 and ResNeXt-50), and smaller SLOs (50 ms and 25 ms) for latency-optimized models (ResNet-50, Inception, Mobilenet, Alexnet and ResNet-18) aimed at applications such as a 30 fps video stream. Unlike (Zhang et al., 2019), we realistically consider that a model's execution cannot be preempted from the GPU. We first examine a temporal schedule with Alexnet, ResNet-50, and VGG-19. We provide time slices proportional to the models' SLOs. We utilize the adaptive batching algorithm of Clipper (Crankshaw et al., 2017) and Nexus (Shen et al., 2019) to obtain the batch size for each model's time slice. Fig. 9(a) is a visualization of such a schedule. The SLOs are visualized as the vertical dotted lines. We compute GPU utilization using the Knee% of each model as shown in Table 6. With temporal sharing, we achieve a mean GPU utilization of 44%.
#### 6.1.1. D-STACK: Spatio-Temporal Scheduling

Our D-STACK scheduler aims to fit as many models as possible (potentially all different from each other) and run them concurrently on the GPU, while meeting each model's (potentially different) SLO. We employ a simple version of the Earliest Deadline First (EDF) scheduling algorithm to schedule all the models. EDF schedules the model with the tightest deadline to run first. However, we should note that since a model's inference is not preempted, this simple schedule alone cannot guarantee that the GPU is never oversubscribed at some moment in the schedule. To aid in fitting as many models as possible, we schedule consecutive executions of the models with the shortest SLOs to be as far apart as possible. This allows us to fit longer-running models on the GPU in the interim without oversubscribing it. We show a schedule generated by the spatio-temporal-only algorithm in Fig. 9(b). We observe that the model with the smallest SLO, Alexnet (bottom), is scheduled to meet its SLO, but the time between the execution of the first instance and the second can be large because its execution time is short. This allows us to run ResNet-50 (second from the bottom) and VGG-19 (third) in between consecutive executions of Alexnet. Note that D-STACK's scheduler can also schedule a model with a GPU% lower than its Knee, albeit with higher inference latency, when necessary. D-STACK also considers the additional latency of launching a new DNN model at a lower GPU% into the schedule. This latency-GPU% trade-off has to be considered carefully before starting inference: once a DNN process starts with its allocated GPU%, the allocation cannot be changed for that instance's execution lifetime.

#### 6.1.2. Fair, Opportunistic, Dynamic Scheduling

Figure 9. (a, b, c) Scheduling algorithms (A-N=Alexnet, R-50=ResNet-50, V-19=VGG-19); (d) comparison with the ideal scheduler.

To efficiently utilize the GPU resource while ensuring that the system meets SLO guarantees, we further propose an opportunistic dynamic scheduling enhancement. The dynamic scheduling is triggered when a new request arrives for a model and when a model finishes inference. The dynamic scheduler picks a model that is not active. This opportunistic addition is allowed as long as the GPU is not oversubscribed (so as not to interfere with the already scheduled models). To ensure fairness among the available models, we use a scoreboard that tracks how many times each model has run in the last few (e.g., ten) sessions and prioritizes the models that have run the fewest times. The algorithm then finds a time slice in which the model can finish inferring and also determines a batch size that can complete within the time slice. If the highest-priority model cannot be run, the algorithm picks the model with the next highest priority. We show the output of the D-STACK scheduling in Fig. 9(c). With this dynamic scheduling opportunistically packing in more models, the average GPU utilization increases from 60% in the plain spatio-temporal schedule (Fig. 9(b)) to 74% with the D-STACK schedule (Fig. 9(c)).
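The following is a simplified sketch of this scheduling logic (EDF placement per SLO window plus an opportunistic, scoreboard-ordered fill), assuming every model runs at its Knee% and omitting the below-knee fallback and per-slice batch-size adjustment described above; the example models reuse values from Table 6.

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    knee_pct: int      # GPU% at the knee (Table 6)
    slo_ms: int
    runtime_ms: int
    runs: int = 0      # scoreboard entry used for fairness

def schedule_session(models, session_ms):
    usage = [0] * session_ms                   # instantaneous GPU% per ms
    busy_until = {m.name: 0 for m in models}   # a running instance is never preempted
    placements = []

    def fits(m, start):
        end = start + m.runtime_ms
        return (end <= session_ms and start >= busy_until[m.name]
                and all(usage[t] + m.knee_pct <= 100 for t in range(start, end)))

    def place(m, start):
        for t in range(start, start + m.runtime_ms):
            usage[t] += m.knee_pct
        busy_until[m.name] = start + m.runtime_ms
        placements.append((start, m))
        m.runs += 1

    # EDF pass: place each model once per SLO window, tightest deadlines first.
    for m in sorted(models, key=lambda m: m.slo_ms):
        for window in range(0, session_ms, m.slo_ms):
            for start in range(window, window + m.slo_ms - m.runtime_ms + 1):
                if fits(m, start):
                    place(m, start)
                    break

    # Opportunistic pass: offer leftover capacity to the least-run models first.
    for m in sorted(models, key=lambda m: m.runs):
        start = 0
        while start + m.runtime_ms <= session_ms:
            if fits(m, start):
                place(m, start)
                start += m.runtime_ms
            else:
                start += 1
    return sorted(placements, key=lambda pl: pl[0]), usage

# Example: three models from Table 6 sharing a 100 ms session.
plan, usage = schedule_session([Model("Alexnet", 30, 25, 8),
                                Model("ResNet-50", 40, 50, 28),
                                Model("Mobilenet", 20, 25, 10)], session_ms=100)
for start, m in plan:
    print(f"{start:3d} ms  {m.name}")
print("mean GPU utilization:", sum(usage) / len(usage), "%")
```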
### 6.2. An Ideal Spatio-Temporal Schedule vs D-STACK

We compare D-STACK against an ideal scheduler, which is a theoretical spatial and temporal schedule at the granularity of individual DNN kernels. For the ideal case, we assume that GPU kernel preemption is allowed, that a DNN's instantaneous GPU demand is known, and that the GPU's allocated resources are adjusted instantaneously. Any realistic system that does not preempt a currently running DNN model until its inference completes, together with the scheduling overheads of switching from one model to another, inevitably under-utilizes the GPU. Thus, the ideal scheduler provides a theoretical 'optimal' bound on the performance achievable by D-STACK or other schedulers. We consider a time-slotted system (e.g., 100 $\mu$s slots for experiments with small-scale DNNs), where $S_{i}$ represents the $i^{th}$ time slot in the schedule. We schedule the kernels $k_{m}$ from each DNN model $m$. We include as many models' kernels as will fit in the GPU at their Knee%, ordered by their earliest deadline. We compute the aggregate GPU% as $G_{ui}=\sum_{k\in S_{i}}GPU\%_{k}$ for each time slot $S_{i}$. We use an exhaustive search-based schedule to maximize the GPU utilization for every time slot (Eq. 13). The overall GPU utilization $G_{u}$ is maximized as:

(13) $\max{G_{u}},\quad\texttt{where }G_{u}=\sum_{i}G_{ui}=\sum_{i}\sum_{k\in S_{i}}GPU\%_{k}$

(14) $\displaystyle\texttt{such that }G_{ui}\leq 100\%\texttt{ and }k_{i}\in E\implies k_{i-1}\in E$

The first constraint for scheduling kernels of different models (Eq. 14) is that the sum of the GPU% of all concurrent kernels in a time slot must not exceed 100%. Second, only eligible kernels (the set $E$) can run concurrently in the time slot $S_{i}$ being scheduled; the kernels of a DNN are executed sequentially. We experimented by scheduling 3 convolutional neural networks (ConvNets) based on LeNet (LeCun et al., 1989). Each ConvNet has 3 convolution, 2 average-pool and 2 linear kernels. The dimensions of the filters of the convolution layers are varied, varying the compute requirement of each ConvNet model. The inference image has a resolution of 224$\times$224. The knee-runtime combinations for ConvNet-1, ConvNet-2 and ConvNet-3 are 30%-10.3 ms, 40%-14.6 ms, and 60%-15.4 ms, respectively. We computed the knee of each kernel of each model, for use by the ideal scheduler during inference. We present the GPU utilization and throughput in Fig. 9(d). Temporal scheduling has a much lower GPU utilization, as it runs a single kernel on the GPU at a time. GSLICE improves the GPU utilization, but its static schedule leads to lower utilization when not enough models are running on the GPU. Ideal scheduling attains almost 95% GPU utilization, because it schedules kernels leveraging preemption. D-STACK schedules without preemption of a kernel: it runs a DNN kernel to completion even if a kernel that could utilize the GPU better is waiting. Nonetheless, D-STACK still achieves $\sim$86% GPU utilization. The throughput attained by the three CNN models follows the same trend. D-STACK's overall throughput is slightly higher than 90% of the throughput of ideal scheduling, a measure of how close it comes to the ideal scheduler.
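For reference, a compact sketch of the per-slot packing used by this idealized scheduler is given below; the per-kernel Knee% and slot counts passed in are illustrative assumptions, and it uses a greedy earliest-deadline admission per slot rather than the exhaustive search we performed.

```python
def ideal_schedule(models):
    """Idealized kernel-granularity packing (Eqs. 13-14).
    models: list of dicts {'deadline': ms, 'kernels': [[knee_pct, slots], ...]}
    with every knee_pct <= 100; the slot counts are consumed in place.
    Returns, per time slot, the admitted (model_index, kernel_index) pairs."""
    progress = [0] * len(models)              # next kernel index per model
    schedule = []
    while any(p < len(m["kernels"]) for p, m in zip(progress, models)):
        capacity, slot = 100, []
        # Earliest-deadline-first admission of eligible kernels into this slot.
        for i in sorted(range(len(models)), key=lambda i: models[i]["deadline"]):
            if progress[i] == len(models[i]["kernels"]):
                continue                      # this model has finished
            kernel = models[i]["kernels"][progress[i]]
            if kernel[0] <= capacity:         # Eq. 14: stay at or below 100%
                capacity -= kernel[0]
                slot.append((i, progress[i]))
                kernel[1] -= 1                # one slot of the kernel completed
                if kernel[1] == 0:
                    progress[i] += 1          # kernels of a model run in order
        schedule.append(slot)
    return schedule
```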
### 6.3. Evaluation of D-STACK Scheduler

We evaluate D-STACK using four popular DNN models (Alexnet, Mobilenet, ResNet-50, and VGG-19) that are run with the fixed SLOs, GPU%, and runtimes presented in Table 6. We ran the models concurrently for 10 seconds. We took the workload mix from the Imagenet dataset (Deng et al., 2009) (vision DNNs) and the IMDB dataset (Maas et al., 2011) (sentence classification with BERT). We introduce a random, uniformly distributed inter-arrival delay between requests destined for the same DNN model. We compare the throughput and GPU runtime of D-STACK with the baseline temporal sharing and with a schedule that maximizes the sum of the throughput across all the models (max-throughput). We also evaluate the fairness of the schedulers, measured by the GPU runtime each model gets. For this, we compare D-STACK against a Max-Min fair scheduler (Bertsekas et al., 1992), which maximizes the placement of the minimum (smallest) demand (GPU%). The throughput result is shown in Fig. 10(a), and the GPU runtime each model gets is shown in Fig. 10(b).

Figure 10. (a) Throughput of models running with different scheduling algorithms and (b) total runtime (s) per model.

D-STACK gets 2$\times$ the throughput of temporal sharing for the two compute-heavy models, ResNet-50 and VGG-19 (Fig. 10(a)). At the same time, the lighter-weight Alexnet and Mobilenet get 4$\times$ higher throughput. In temporal scheduling, running compute-heavy DNNs with longer runtimes results in fewer opportunities for the other models, as there is no spatial sharing. Temporal scheduling runs the models for only 1.6 sec. out of the 10 sec. duration, negatively impacting their throughput. Fig. 10(b) shows that D-STACK runs all the models longer than temporal sharing. This is because D-STACK can run multiple DNNs concurrently, providing higher throughput compared to temporal sharing (Fig. 10(a)). We compare D-STACK's throughput with the 'max-throughput' schedule. D-STACK gets more than 80% of the max-throughput for the model with the lowest runtime (Alexnet) while providing better fairness, as we see next. The Max-Min fair schedule provides a higher runtime for Mobilenet (Fig. 10(b)) than D-STACK, since Mobilenet has the minimum demand (25% knee%). However, D-STACK achieves higher throughput than Max-Min for the medium-runtime ResNet-50 (Fig. 10(a)). D-STACK's fairness measure picks for scheduling the model that has run for the least time on the GPU over the past sessions. Thus, D-STACK seeks to act like a proportional fair scheduler, as with the Completely Fair Scheduler (CFS) in Linux (Pabla, 2009). The fairness of D-STACK is shown in Fig. 10(b). Max-Min gives more time to a low-demand model like Mobilenet. With D-STACK, all the models get similar GPU time, thus boosting the total throughput of higher-demand models like ResNet-50. Overall, D-STACK scheduling beats temporal sharing's throughput by 4$\times$, gets more than 80% of the max-throughput scheduler's throughput, and fairly shares GPU execution time while meeting SLOs.

## 7\. Validating Our Overall Approach

Figure 11. (a) C-2 = ResNet-50 + VGG-19, C-3 = C-2 + BERT, C-4 = C-3 + Mobilenet, C-7 = C-4 + ResNet-18 + Inception + ResNeXt-50. (b) Throughput adjustment in D-STACK with varying request rate.

We compare D-STACK with other multiplexing methods. Multiplexing DNN models on the GPU: We evaluate different cases of multiplexing by running 2, 3, 4 and 7 DNNs, respectively. By multiplexing 7 different DNNs, we demonstrate how D-STACK is still successful in scheduling a number of models with tight latency constraints, even if the sum total of their demand (i.e., knee capacity) is substantially higher than 100% of the GPU. We show that D-STACK can improve throughput and utilize the GPU better while reducing SLO violations compared to the other approaches, with all of them, including D-STACK, having to compromise by missing the deadline on some inference requests.
We compare our approach, D-STACK, with four other methods of GPU multiplexing, namely Fixed batching with default CUDA MPS (FB), temporal sharing (T), the Triton Inference Server (Tri), and GSLICE (G). In Fixed batching with CUDA MPS (FB), the largest batch size of 16 is picked for inference every time and the multiplexed models share the GPU with MPS without an explicit GPU%. In temporal sharing (T), time slices are set in proportion to the models' SLO lengths. With the Triton server (Tri), we request inference with multiple clients concurrently, allowing the Triton server to dynamically batch and infer our requests. With GSLICE (G), we use all of GSLICE's features, including adaptive batching and spatial sharing of the GPU at each DNN's knee. Finally, in D-STACK, we use the batch size and GPU% from our optimization formulation and utilize D-STACK scheduling to schedule the models. We evaluate the throughput and the SLO violations per second for each model in Fig. 11(a). We measure SLO violations per second as the sum of all the inference requests that violate the SLO and all the unserved requests. Inference requests are generated at the rate of $\sim$1920 images/sec (the maximum request rate is limited by the 10 Gbps link in the testbed). Requests are divided among the multiplexed models in proportion to their SLOs. Thus, for the experiments C-2, C-3 and C-4, Alexnet and Mobilenet get 700 inference requests/sec, ResNet-50 gets 320 requests/sec and VGG-19 gets 160 requests/sec. For the experiment with 7 DNN models running concurrently (i.e., C-7), Alexnet, Mobilenet and ResNet-18 receive 440 inference requests/sec, ResNet-50 and Inception receive 220 requests/sec, while ResNeXt-50 and VGG-19 get 80 requests/sec. We observe from Fig. 11(a) that our framework provides more than a 3$\times$ increase in aggregate throughput when multiplexing 7 different models. D-STACK achieves the highest throughput even when fewer models are running concurrently. For MPS, the lack of batching causes it to miss most of the SLOs for requests. Fixed batch, temporal sharing, GSLICE and the Triton server provide good throughput while running just 2 models. However, as the number of multiplexed models increases, each newly added model contends for GPU resources in Fixed Batch, decreasing the throughput. Meanwhile, in temporal sharing, each model gets less and less GPU time, impacting throughput. Models hosted in the Triton server also have to multiplex the GPU temporally and thus get lower throughput when more models are added. With GSLICE, multiplexing more models means some models get resources lower than their knee GPU%, exponentially increasing the inference latency. D-STACK provides both the right amount of GPU resources and the appropriate batch size. Furthermore, there are no SLO violations in D-STACK when multiplexing 2-4 models. However, when overloading the GPU by multiplexing 7 DNNs, we see a few SLO violations for the models with longer runtimes (Inception, ResNet-50, ResNeXt-50 and VGG-19). D-STACK misses SLOs for 10% of all requests, compared to more than 68% for the alternatives. SLO misses for D-STACK come from the smaller fraction of requests sent to compute-heavy models such as ResNet-50, ResNeXt-50 and VGG-19. Even for some of the medium-to-large models with longer runtimes, such as ResNet-50 and Inception, only 13% of requests see an SLO violation. This is because running 7 models concurrently exceeds the capacity of the GPU even with D-STACK.
With D-STACK, the average GPU utilization is 92% while multiplexing 7 models. With all the models having a knee greater than 10%, this is close to fully utilizing the GPU. Benefit of D-STACK Scheduler: Wherever possible, D-STACK tries to opportunistically schedule additional model instances during the session, possibly with a smaller batch size, to utilize the available GPU. To show the effectiveness of D-STACK, we present a scenario where the request rates of the multiplexed DNN models vary dynamically. To start with, in session $T_{0}$, we have 4 models, Alexnet, Mobilenet, ResNet-50 and VGG-19, the same as in 'C-4' in Fig. 11(a), running with request rates high enough to support the optimal batch size, as determined in Table 6. The GPU utilization we achieve is $\sim 85\%$. We then change the request rate of one model (Alexnet in session $T_{1}$) by a random amount. We still allow the optimal batch to form for each model. The throughput of the models adjusts dynamically, with the throughput of the other models increasing due to the use of the unutilized resources left by Alexnet (see $T_{1}$). Since these three models have a high GPU% requirement, there is not enough GPU left to accommodate an instance of another model. Thus, the GPU utilization drops only very slightly. At $T_{2}$, Alexnet's request rate goes back up, while Mobilenet's request rate drops, once again by a random amount. Alexnet opportunistically uses the GPU to achieve a throughput higher than what it achieved in the baseline session $T_{0}$. Similarly, when ResNet-50's and VGG-19's arrival rates drop at $T_{3}$ and $T_{4}$, respectively, the other models increase their throughput. We also see that across these sessions, the GPU utilization is nearly unchanged, remaining high, indicating that D-STACK effectively uses the GPU.

### 7.1. D-STACK in Multi-GPU Clusters

We evaluated D-STACK in a multi-GPU cluster of 4 NVIDIA T4 GPUs, each having 40 SMs (fewer than a V100) and 16 GB of memory. We utilized 4 different vision models: Mobilenet, Alexnet, ResNet-50 and VGG-19 (the knee GPU% differs on a T4 GPU vs. a V100). We compare the throughput of 3 different multiplexing and scheduling scenarios. First, we provide one T4 GPU for each DNN model exclusively. In the second scenario, we place all 4 models on each GPU, temporally sharing the GPU. Finally, we evaluate D-STACK with the 4 DNN models.

Figure 12. GPU cluster throughput

Fig. 12 shows that temporal scheduling has almost the same throughput as each model having an exclusive GPU. This is because of the under-utilization of the GPU by the DNN models. D-STACK has a much higher throughput for every model, with 160% higher overall throughput than temporal sharing. The overall inference throughput increases substantially as the multi-GPU cluster is better utilized by D-STACK.

## 8\. Conclusions

DNNs critically depend on GPUs and other accelerators, but often under-utilize the parallel computing capability of current high-performance accelerators. Due to the uneven workloads of different DNN kernels, a DNN as a whole is unable to fully utilize all the parallelism of the GPU (i.e., all SMs). Furthermore, there are non-parallelizable tasks while executing a DNN on a GPU-based system, limiting the effective use of the GPU's parallelism. We validated these conclusions from our model of a DNN through measurements of different types of DNNs (CNNs and Transformers) on a V100 GPU.
Since batching DNN requests improves inference throughput and GPU utilization, we develop an optimization framework to establish an optimal operating point (GPU%, Batch Size) for a DNN utilizing the GPU at the highest efficacy. We bring the optimal batch size and GPU% together in D-STACK to develop a spatio-temporal, fair, opportunistic, and dynamic scheduler to create an inference framework that effectively virtualizes the GPU. D-STACK accounts for a DNN model’s SLO, GPU resource allocation, and batch size, to provide a schedule that maximizes meeting SLOs, across multiple DNN models while seeking to utilize the GPU fully. D-STACK benefits both single GPUs and multi-GPU clusters. Our enhancements in D-STACK do not require modifications to the GPU architecture, the runtime, or the DNN models themselves. D-STACK’s features can easily help improve existing DNN inference platforms (e.g., Triton server) as well. We show that D-STACK can attain higher than 90% throughput of an ideal scheduler, which we speculate can switch tasks instantaneously at a very fine time granularity, ignoring practical limitations. Our controlled testbed experiments with 4 T4 GPU clusters show the throughput improvement of 160%-180% with D-STACK compared to providing an entire GPU to each individual DNN model. With an NVIDIA V100 GPU, D-STACK shows benefit in the range of ~$1.6\times$ improvement in GPU utilization and 3$\times$ to 4$\times$ increase in throughput with no impact in latency compared to the baseline temporal sharing. ## References * (1) * NVI (2018) 2018\. NVIDIA Tesla V100 GPU Architecture. http://images.nvidia.com/content/volta-architecture/pdf/volta-architecture-whitepaper.pdf. Accessed: 2018-12-01. * Ten (2020) 2020\. TensorFlow Serving. https://www.tensorflow.org/tfx/guide/serving. * tri (2021) 2021\. NVIDIA Triton Inference Server. https://docs.nvidia.com/deeplearning/triton-inference-server/master-user-guide/docs/. * NVI (2021) 2021\. NVIDIA Visual Profiler User Guide. https://docs.nvidia.com/pdf/CUDA_Profiler_Users_Guide.pdf. Accessed:2021-12-01. * tor (2021) 2021\. TorchVision Model Zoo. https://pytorch.org/docs/master/torchvision/models.html. Online; accessed 13 June 2021. * Amdahl (1967) Gene M Amdahl. 1967\. Validity of the single processor approach to achieving large scale computing capabilities. In _Proceedings of the April 18-20, 1967, spring joint computer conference_. 483–485. * AWS (2021) AWS. 2021. Host Multiple Models with Multi-Model Endpoints. https://docs.aws.amazon.com/sagemaker/latest/dg/multi-model-endpoints.html. * Belviranli et al. (2016) Mehmet E. Belviranli, Farzad Khorasani, Laxmi N. Bhuyan, and Rajiv Gupta. 2016. CuMAS: Data Transfer Aware Multi-Application Scheduling for Shared GPUs. In _Proceedings of the 2016 International Conference on Supercomputing_ (Istanbul, Turkey) _(ICS ’16)_. Association for Computing Machinery, New York, NY, USA, Article 31, 12 pages. https://doi.org/10.1145/2925426.2926271 * Bertsekas et al. (1992) Dimitri P Bertsekas, Robert G Gallager, and Pierre Humblet. 1992\. _Data networks_. Vol. 2. Prentice-Hall International New Jersey. * Chen et al. (2017) Quan Chen, Hailong Yang, Minyi Guo, Ram Srivatsa Kannan, Jason Mars, and Lingjia Tang. 2017\. Prophet: Precise qos prediction on non-preemptive accelerators to improve utilization in warehouse-scale computers. In _Proceedings of the Twenty-Second International Conference on Architectural Support for Programming Languages and Operating Systems_. 17–32. * Chen et al. 
(2016) Quan Chen, Hailong Yang, Jason Mars, and Lingjia Tang. 2016\. Baymax: Qos awareness and increased utilization for non-preemptive accelerators in warehouse scale computers. _ACM SIGPLAN Notices_ 51, 4 (2016), 681–696. * Chen et al. (2018) Tianqi Chen, Thierry Moreau, Ziheng Jiang, Lianmin Zheng, Eddie Yan, Haichen Shen, Meghan Cowan, Leyuan Wang, Yuwei Hu, Luis Ceze, et al. 2018\. $\\{$TVM$\\}$: An automated end-to-end optimizing compiler for deep learning. In _13th $\\{$USENIX$\\}$ Symposium on Operating Systems Design and Implementation ($\\{$OSDI$\\}$ 18)_. 578–594. * Crankshaw et al. (2017) Daniel Crankshaw, Xin Wang, Guilio Zhou, Michael J Franklin, Joseph E Gonzalez, and Ion Stoica. 2017\. Clipper: A low-latency online prediction serving system. In _14th $\\{$USENIX$\\}$ Symposium on Networked Systems Design and Implementation ($\\{$NSDI$\\}$ 17)_. 613–627. * Deng et al. (2009) J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. 2009. ImageNet: A Large-Scale Hierarchical Image Database. In _CVPR09_. * Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. arXiv:1810.04805 [cs.CL] * Dhakal et al. (2020) Aditya Dhakal, Sameer G Kulkarni, and K. K. Ramakrishnan. 2020\. GSLICE: Controlled Spatial Sharing of GPUs for a Scalable Inference Platform. In _Proceedings of the 11th ACM Symposium on Cloud Computing_ (Virtual Event, USA) _(SoCC ’20)_. Association for Computing Machinery, New York, NY, USA, 492–506. * Du et al. (2017) Xianzhi Du, Mostafa El-Khamy, Jungwon Lee, and Larry Davis. 2017. Fused DNN: A deep neural network fusion approach to fast and robust pedestrian detection. In _2017 IEEE winter conference on applications of computer vision (WACV)_. IEEE. * Dublish et al. (2019) Saumay Dublish, Vijay Nagarajan, and Nigel Topham. 2019\. Poise: Balancing thread-level parallelism and memory system performance in GPUs using machine learning. In _2019 IEEE International Symposium on High Performance Computer Architecture (HPCA)_. IEEE, 492–505. * Emmerich et al. (2015) Paul Emmerich, Sebastian Gallenmüller, Daniel Raumer, Florian Wohlfart, and Georg Carle. 2015\. MoonGen: A Scriptable High-Speed Packet Generator. In _Internet Measurement Conference 2015 (IMC’15)_. Tokyo, Japan. * Gao et al. (2018) Pin Gao, Lingfan Yu, Yongwei Wu, and Jinyang Li. 2018\. Low latency rnn inference with cellular batching. In _Proceedings of the Thirteenth EuroSys Conference_. 1–15. * Gu et al. (2019) Juncheng Gu, Mosharaf Chowdhury, Kang G. Shin, Yibo Zhu, Myeongjae Jeon, Junjie Qian, Hongqiang Liu, and Chuanxiong Guo. 2019\. Tiresias: A GPU Cluster Manager for Distributed Deep Learning. In _16th USENIX Symposium on Networked Systems Design and Implementation (NSDI 19)_. USENIX Association, Boston, MA, 485–500. https://www.usenix.org/conference/nsdi19/presentation/gu * Gujarati et al. (2020) Arpan Gujarati, Reza Karimi, Safya Alzayat, Wei Hao, Antoine Kaufmann, Ymir Vigfusson, and Jonathan Mace. 2020. Serving DNNs like Clockwork: Performance Predictability from the Bottom Up. In _14th $\\{$USENIX$\\}$ Symposium on Operating Systems Design and Implementation ($\\{$OSDI$\\}$ 20)_. 443–462. * Gustafson (1988) John L. Gustafson. 1988\. Reevaluating Amdahl’s Law. _Commun. ACM_ 31, 5 (May 1988), 532–533. https://doi.org/10.1145/42411.42415 * He et al. (2016) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016\. Deep residual learning for image recognition. 
In _Proceedings of the IEEE conference on computer vision and pattern recognition_. 770–778. * Howard et al. (2017) Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. 2017\. Mobilenets: Efficient convolutional neural networks for mobile vision applications. _arXiv preprint arXiv:1704.04861_ (2017). * Huang et al. (2019) Yanping Huang, Youlong Cheng, Ankur Bapna, Orhan Firat, Dehao Chen, Mia Chen, HyoukJoong Lee, Jiquan Ngiam, Quoc V Le, Yonghui Wu, et al. 2019\. Gpipe: Efficient training of giant neural networks using pipeline parallelism. _Advances in neural information processing systems_ 32 (2019), 103–112. * Inci et al. (2020) Ahmet Fatih Inci, Evgeny Bolotin, Yaosheng Fu, Gal Dalal, Shie Mannor, David W. Nellans, and Diana Marculescu. 2020. The Architectural Implications of Distributed Reinforcement Learning on CPU-GPU Systems. _CoRR_ abs/2012.04210 (2020). arXiv:2012.04210 https://arxiv.org/abs/2012.04210 * Jain et al. (2018) Paras Jain, Xiangxi Mo, Ajay Jain, Harikaran Subbaraj, Rehan Sohail Durrani, Alexey Tumanov, Joseph Gonzalez, and Ion Stoica. 2018\. Dynamic Space-Time Scheduling for GPU Inference. _arXiv preprint arXiv:1901.00041_ (2018). * Jain et al. (2019) Paras Jain, Xiangxi Mo, Ajay Jain, Alexey Tumanov, Joseph E. Gonzalez, and Ion Stoica. 2019\. The OoO VLIW JIT Compiler for GPU Inference. _CoRR_ abs/1901.10008 (2019). arXiv:1901.10008 http://arxiv.org/abs/1901.10008 * Jeon et al. (2019) Myeongjae Jeon, Shivaram Venkataraman, et al. 2019\. Analysis of large-scale multi-tenant $\\{$GPU$\\}$ clusters for $\\{$DNN$\\}$ training workloads. In _2019 $\\{$USENIX$\\}$ Annual Technical Conference ($\\{$USENIX$\\}$$\\{$ATC$\\}$ 19)_. 947–960. * Jia et al. (2018) Zhe Jia, Marco Maggioni, Benjamin Staiger, and Daniele P Scarpazza. 2018. Dissecting the NVIDIA volta GPU architecture via microbenchmarking. _arXiv preprint arXiv:1804.06826_ (2018). * Jia et al. (2019) Zhihao Jia, James Thomas, Tod Warszawski, Mingyu Gao, Matei Zaharia, and Alex Aiken. 2019\. Optimizing dnn computation with relaxed graph substitutions. _SysML 2019_ (2019). * Jouppi et al. (2017) Norman P Jouppi, Cliff Young, Nishant Patil, David Patterson, Gaurav Agrawal, Raminder Bajwa, Sarah Bates, Suresh Bhatia, Nan Boden, Al Borchers, et al. 2017\. In-datacenter performance analysis of a tensor processing unit. In _Computer Architecture (ISCA), 2017 ACM/IEEE 44th Annual International Symposium on_. IEEE, 1–12. * Kayıran et al. (2013) Onur Kayıran, Adwait Jog, Mahmut T Kandemir, and Chita R Das. 2013. Neither more nor less: optimizing thread-level parallelism for GPGPUs. In _Proceedings of the 22nd international conference on Parallel architectures and compilation techniques_. IEEE, 157–166. * Kong et al. (2021) Hao Kong, Shuo Huai, Di Liu, Lei Zhang, Hui Chen, Shien Zhu, Shiqing Li, Weichen Liu, Manu Rastogi, Ravi Subramaniam, et al. 2021\. EDLAB: A Benchmark for Edge Deep Learning Accelerators. _IEEE Design & Test_ (2021). * Krizhevsky et al. (2012a) Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. 2012a. ImageNet Classification with Deep Convolutional Neural Networks. In _Proceedings of the 25th International Conference on Neural Information Processing Systems - Volume 1_ (Lake Tahoe, Nevada) _(NIPS’12)_. Curran Associates Inc., Red Hook, NY, USA, 1097–1105. * Krizhevsky et al. (2012b) Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. 2012b. Imagenet classification with deep convolutional neural networks. 
In _Advances in neural information processing systems_. 1097–1105. * LeCun et al. (1989) Yann LeCun, Bernhard Boser, John S Denker, Donnie Henderson, Richard E Howard, Wayne Hubbard, and Lawrence D Jackel. 1989. Backpropagation applied to handwritten zip code recognition. _Neural computation_ 1, 4 (1989), 541–551. * Liang et al. (2022) Qianlin Liang, Walid A. Hanafy, Ahmed Ali-Eldin, and Prashant Shenoy. 2022. Model-driven Cluster Resource Management for AI Workloads in Edge Clouds. arXiv:2201.07312 [cs.DC] * Maas et al. (2011) Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011\. Learning Word Vectors for Sentiment Analysis. In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_. Association for Computational Linguistics, Portland, Oregon, USA, 142–150. http://www.aclweb.org/anthology/P11-1015 * Markidis et al. (2018) Stefano Markidis, Steven Wei Der Chien, Erwin Laure, Ivy Bo Peng, and Jeffrey S Vetter. 2018\. Nvidia tensor core programmability, performance & precision. In _2018 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)_. IEEE, 522–531. * Mei and Chu (2017) Xinxin Mei and Xiaowen Chu. 2017. Dissecting GPU Memory Hierarchy Through Microbenchmarking. _IEEE Transactions on Parallel and Distributed Systems_ 28, 1 (2017), 72–86. https://doi.org/10.1109/TPDS.2016.2549523 * Micikevicius (2012) Paulius Micikevicius. 2012\. GPU performance analysis and optimization. In _GPU technology conference_ , Vol. 3. * Narayanan et al. (2019) Deepak Narayanan, Aaron Harlap, Amar Phanishayee, Vivek Seshadri, Nikhil R Devanur, Gregory R Ganger, Phillip B Gibbons, and Matei Zaharia. 2019. PipeDream: generalized pipeline parallelism for DNN training. In _Proceedings of the 27th ACM Symposium on Operating Systems Principles_. 1–15. * NVIDIA (2017) NVIDIA. 2017. DRIVING DIGITAL TRANSFORMATION WITH GPU VIRTUALIZATION AND ENTERPRISE CLOUD. https://www.nvidia.com/content/dam/en-zz/Solutions/Data-Center/nutanix/pdf/nutanix-solution-overview.pdf. * NVIDIA (2021a) NVIDIA. 2021a. Deep Learning Performance Documentation. https://docs.nvidia.com/deeplearning/performance/dl-performance-gpu-background/index.html. Accessed: 2021-04-07. * NVIDIA (2021b) NVIDIA. 2021b. Unlock Next Level Performance with virtual GPUs. https://www.nvidia.com/en-us/data-center/virtual-solutions/. * NVIDIA, Tesla (2019) NVIDIA, Tesla. 2019\. MULTI-PROCESS SERVICE. _NVIDIA. May_ (2019), 108\. * Pabla (2009) Chandandeep Singh Pabla. 2009\. Completely Fair Scheduler. _Linux J._ 2009, 184, Article 4 (Aug. 2009). * Paszke et al. (2019) Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. PyTorch: An Imperative Style, High-Performance Deep Learning Library. In _Advances in Neural Information Processing Systems 32_ , H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett (Eds.). Curran Associates, Inc., 8024–8035. http://papers.neurips.cc/paper/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf * Qiu et al. (2018) Hang Qiu, Fawad Ahmad, Fan Bai, Marco Gruteser, and Ramesh Govindan. 2018. Avr: Augmented vehicular reality. 
In _Proceedings of the 16th Annual International Conference on Mobile Systems, Applications, and Services_. 81–95. * Shen et al. (2019) Haichen Shen, Lequn Chen, Yuchen Jin, Liangyu Zhao, Bingyu Kong, Matthai Philipose, Arvind Krishnamurthy, and Ravi Sundaram. 2019. Nexus: a GPU cluster engine for accelerating DNN-based video analysis. In _Proceedings of the 27th ACM Symposium on Operating Systems Principles_. 322–337. * Simonyan and Zisserman (2014) Karen Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. _arXiv preprint arXiv:1409.1556_ (2014). * Song et al. (2017) Mingcong Song, Yang Hu, Huixiang Chen, and Tao Li. 2017\. Towards pervasive and user satisfactory cnn across gpu microarchitectures. In _2017 IEEE International Symposium on High Performance Computer Architecture (HPCA)_. IEEE, 1–12. * Szegedy et al. (2015) Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. 2015. Going deeper with convolutions. In _Proceedings of the IEEE conference on computer vision and pattern recognition_. 1–9. * Ukidave et al. (2016) Yash Ukidave, Xiangyu Li, and David Kaeli. 2016. Mystic: Predictive scheduling for gpu based cloud servers using machine learning. In _2016 IEEE International Parallel and Distributed Processing Symposium (IPDPS)_. IEEE, 353–362. * Wang et al. (2020) Yu Wang, Gu-Yeon Wei, and David Brooks. 2020. A Systematic Methodology for Analysis of Deep Learning Hardware and Software Platforms.. In _MLSys_. * Wu et al. (2016) Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016\. Google’s neural machine translation system: Bridging the gap between human and machine translation. _arXiv preprint arXiv:1609.08144_ (2016). * Xiao et al. (2018) Wencong Xiao, Romil Bhardwaj, Ramachandran Ramjee, Muthian Sivathanu, Nipun Kwatra, Zhenhua Han, Pratyush Patel, Xuan Peng, Hanyu Zhao, Quanlu Zhang, et al. 2018\. Gandiva: Introspective cluster scheduling for deep learning. In _13th $\\{$USENIX$\\}$ Symposium on Operating Systems Design and Implementation ($\\{$OSDI$\\}$ 18)_. 595–610. * Xie et al. (2017) Saining Xie, Ross Girshick, Piotr Dollár, Zhuowen Tu, and Kaiming He. 2017. Aggregated residual transformations for deep neural networks. In _Proceedings of the IEEE conference on computer vision and pattern recognition_. 1492–1500. * Yeh et al. (2020) Ting-An Yeh, Hung-Hsin Chen, and Jerry Chou. 2020. KubeShare: A Framework to Manage GPUs as First-Class and Shared Resources in Container Cloud. In _Proceedings of the 29th International Symposium on High-Performance Parallel and Distributed Computing_ (Stockholm, Sweden) _(HPDC ’20)_. Association for Computing Machinery, New York, NY, USA, 173–184. * Yeung et al. (2020b) Gingfung Yeung, Damian Borowiec, Renyu Yang, Adrian Friday, RHR Harper, and Peter Garraghan. 2020b. Horus: An Interference-aware Resource Manager for Deep Learning Systems. (2020). * Yeung et al. (2020a) Ging-Fung Yeung, Damian Borowiec, Adrian Friday, RHR Harper, and Peter Garraghan. 2020a. Towards GPU Utilization Prediction for Cloud Deep Learning. (2020). * Zhang et al. (2018a) Kai Zhang, Bingsheng He, Jiayu Hu, Zeke Wang, Bei Hua, Jiayi Meng, and Lishan Yang. 2018a. G-NET: Effective $\\{$GPU$\\}$ Sharing in $\\{$NFV$\\}$ Systems. In _15th $\\{$USENIX$\\}$ Symposium on Networked Systems Design and Implementation ($\\{$NSDI$\\}$ 18)_. 
187–200. * Zhang et al. (2018b) Minjia Zhang, Samyam Rajbhandari, Wenhan Wang, and Yuxiong He. 2018b. DeepCPU: Serving RNN-based Deep Learning Models 10x Faster. In _2018 USENIX Annual Technical Conference (USENIX ATC 18)_. USENIX Association, Boston, MA, 951–965. https://www.usenix.org/conference/atc18/presentation/zhang-minjia * Zhang et al. (2020) Wei Zhang, Quan Chen, Kaihua Fu, Ningxin Zheng, Zhiyi Huang, Jingwen Leng, Chao Li, Wenli Zheng, and Minyi Guo. 2020. Towards QoS-Aware and Resource-Efficient GPU Microservices Based on Spatial Multitasking GPUs In Datacenters. arXiv:2005.02088 [cs.DC] * Zhang et al. (2019) Wei Zhang, Weihao Cui, Kaihua Fu, Quan Chen, Daniel Edward Mawhirter, Bo Wu, Chao Li, and Minyi Guo. 2019\. Laius: Towards latency awareness and improved utilization of spatial multitasking accelerators in datacenters. In _Proceedings of the ACM International Conference on Supercomputing_. 58–68. * Zhang et al. (2016) Wei Zhang, Guyue Liu, Wenhui Zhang, Neel Shah, Phillip Lopreiato, Gregoire Todeschi, K. K. Ramakrishnan, and Timothy Wood. 2016\. OpenNetVM: A Platform for High Performance Network Service Chains. In _Proceedings of the 2016 ACM SIGCOMM Workshop on Hot Topics in Middleboxes and Network Function Virtualization_. ACM. * Zhou et al. (2018) Husheng Zhou, Soroush Bateni, and Cong Liu. 2018. S^ 3dnn: Supervised streaming and scheduling for gpu-accelerated real-time dnn workloads. In _2018 IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS)_. IEEE, 190–201. * Zhu et al. (2021) Andong Zhu, Deze Zeng, Lin Gu, Peng Li, and Quan Chen. 2021. Gost: Enabling Efficient Spatio-Temporal GPU Sharing for Network Function Virtualization. In _2021 IEEE/ACM 29th International Symposium on Quality of Service (IWQOS)_. 1–10. https://doi.org/10.1109/IWQOS52092.2021.9521266
# Homointerface planar Josephson junction based on inverse proximity effect Juewen Fan Bingyan Jiang Jiaji Zhao Ran Bi State Key Laboratory for Artificial Microstructure and Mesoscopic Physics, Frontiers Science Center for Nano-optoelectronics, Peking University, Beijing 100871, China Jiadong Zhou Key Lab of Advanced Optoelectronic Quantum Architecture and Measurement (Ministry of Education), Beijing Key Lab of Nanophotonics & Ultrafine Optoelectronic Systems, and School of Physics, Beijing Institute of Technology, Beijing 100081, China Zheng Liu School of Materials Science and Engineering, Nanyang Technological University, Singapore 639798, Singapore Ning Kang Key Laboratory for the Physics and Chemistry of Nanodevices and Department of Electronics, Peking University, Beijing 100871, China Fanming Qu Li Lu Beijing National Laboratory for Condensed Matter Physics, Institute of Physics, Chinese Academy of Sciences, Beijing 100190, China Xiaosong Wu <EMAIL_ADDRESS>State Key Laboratory for Artificial Microstructure and Mesoscopic Physics, Frontiers Science Center for Nano-optoelectronics, Peking University, Beijing 100871, China Collaborative Innovation Center of Quantum Matter, Beijing 100871, China Shenzhen Institute for Quantum Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China ###### Abstract The quality of a superconductor–normal metal–superconductor (SNS) Josephson junction (JJ) depends crucially on the transparency of the superconductor–normal metal (S/N) interface. We demonstrate a technique for fabricating planar JJs with perfect interfaces. The technique utilizes a strong inverse proximity effect (IPE) discovered in $\mathrm{Al}/\mathrm{V}_{5}\mathrm{S}_{8}$ bilayers, by which Al is driven into the normal state. The highly transparent homointerface enables the flow of Josephson supercurrent across a 2.9 $\mu$m long weak link. Moreover, our JJ exhibits a giant critical current and a large product of the critical current and the normal state resistance. The latter exceeds the theoretical bound, which is probably related to the unusual normal metal weak link. xxx A JJ consists of two superconductors coupled through a weak link and is the fundamental element in a variety of superconducting electronicsHayakawa _et al._ (2004); Clarke and Wilhelm (2008); Hamilton (2000). Depending on the weak link, there are different types of JJs, e.g., superconductor–insulator–superconductor (SIS), SNS and superconductor–narrow constriction–superconductor (ScS) Golubov _et al._ (2004). SNS JJs have a negligible inherent capacitance. Being overdamped, their current–voltage characteristics can be made, in principle, non-hysteretic, which is desired in high frequency applicationsBelogolovskii _et al._ (2017). Moreover, they have potentially higher $I_{\mathrm{c}}R_{\mathrm{N}}$ valueKulik and Omel’yanchuk (1975), which is the figure of merit in many applicationsBelogolovskii _et al._ (2017); Yu _et al._ (2006). Here, $I_{\mathrm{c}}$ is the critical current, $R_{\mathrm{N}}$ is the normal state resistance. Recently, the interest in SNS JJs have been intensified, as it has been proposed that such junctions, when N is topologically nontrivial, can be used in topological quantum computingFu and Kane (2008). However, the characteristics of SNS devices are strongly affected by the superconductor–normal metal interface, which poses a challenge in device fabrication. It is also known that disorders in the interface lead to the soft-gap problemTakei _et al._ (2013). 
Various techniques have been employed to achieve transparent and consistent interfaces, e.g., shadow depositionDolan and Dunsmuir (1988); Dubos _et al._ (2000), in situ epitaxial growthKrogstrup _et al._ (2015); Shabani _et al._ (2016); Fornieri _et al._ (2019). Here, we demonstrate a technique for constructing SNS JJs utilizing the IPE. We observed a complete suppression of superconductivity in a 31 nm aluminum film deposited on a 10 nm novel V5S8 superlattice film. Using this non- superconducting $\mathrm{Al}/\mathrm{V}_{5}\mathrm{S}_{8}$ bilayer as the weak link, an aluminum SNS JJ with fully transparent homointerfaces was fabricated. Such junctions exhibit high critical currents and large $I_{\mathrm{c}}R_{\mathrm{N}}$ values. Figure 1: IPE in bilayer of Al/$\mathrm{V_{5}S_{8}}$. (a) Temperature dependence of the resistance for a 31-nm-thick Al film on $\mathrm{V_{5}S_{8}}$ (device S1) and $\mathrm{SiO_{2}}$. The resistance $R$ is normalized by $R_{\mathrm{N}}$. The inset is an illustration of the four- probe measurement of device S1. The scale bar represents 2 $\mu$m. (b) Normalized $R$ versus $T$ for device S1, S2 and S3. The Al thicknesses in these devices are 31, 72 and 96 nm, respectively. $T$ is normalized by $T_{\mathrm{cs}}$, the transition temperature of the corresponding Al film on the $\mathrm{SiO_{2}}$ substrate. (c) Normalized transition temperature $T_{\mathrm{c}}/T_{\mathrm{cs}}$ versus thickness $d_{\mathrm{s}}$ of Al in Al/$\mathrm{V_{5}S_{8}}$ bilayer system. Dots with arrow denote the upper limit of the critical temperature as the device is not superconducting down to the lowest temperature of this study. The solid line represents the result of the Werthamer theory on the proximity effect in the S/N bilayer. (d) $R$–$B$ curves of 72-nm-thick Al film on the $\mathrm{SiO_{2}}$ substrate at different temperatures. (e) The linear-$T$ dependence of $B_{\mathrm{c2\perp}}$. (f) Comparison of $d_{\mathrm{cr,s}}/\xi_{\mathrm{s}}$ in Al on $\mathrm{V}_{5}\mathrm{S}_{8}$ with those in Pb/CuHilsch (1962), Al/(Co/Pd)10Xia _et al._ (2009), V/Ni and V/CoAarts _et al._ (1997). $\mathrm{V}_{5}\mathrm{S}_{8}$ superlattice films used in this study were grown by a chemical vapor deposition method on $\mathrm{SiO_{2}}$ substratesZhou _et al._ (2021). Devices were patterned using standard e-beam lithography, followed by e-beam deposition of a 2 nm titanium adhesion layer and the aluminum layer of desired thickness. Devices were loaded into an Oxford 3He cryostat with a base temperature of about 250 mK. Low temperature electrical measurements, with multiple-stage low-pass filtering, were carried out using a lock-in amplifier. The compound of V5S8 known in the literature is VS2 layers self-intercalated with vanadiumKawada _et al._ (1975). It becomes an antiferromagnet below 32 KNozaki _et al._ (1978). In stark contrast, V5S8 used in this study has a unique superlattice structure consisting of VS2 layers intercalated with V2S2 atomic chainsZhou _et al._ (2021), seen in the Supplementary Material. It shows no indication of either ferromagnetism or antiferromagnetism. More interestingly, the new V5S8 displays an exotic in-plane Hall effect that has not been reported before. This effect results from an out-of-plane Berry curvature induced by the in-plane magnetic field, enabled by a peculiar anisotropic spin-orbit coupling. For simplicity, we still use the chemical formula of V5S8 in this letter to refer to the new material. 
A bilayer device S1, Al(31)/V5S8(10), is illustrated in the inset of Fig. 1a. The number in the parenthesis denotes the thickness in nanometer. The superconducting transition temperature $T_{\mathrm{cs}}$ of a 31 nm Al film on the $\mathrm{SiO_{2}}$ substrate, defined by the midpoint of the resistance transition, is 0.72 K. Surprisingly, S1 remains in the normal state down to 0.28 K. Not even a slight depression of the resistance was observed, indicating that the superconductivity of 31 nm Al was completely destroyed by 10 nm V5S8. To get an idea of the strength of the IPE, we fabricated more bilayer devices, in which the thickness of the Al film is varied, while maintaining the same $\mathrm{V}_{5}\mathrm{S}_{8}$ thickness as in S1. As shown in Fig. 1b, superconductivity gradually recovers with increasing Al thickness. When the Al film is 96 nm, superconductivity is fully restored. We compare our experimental data with the classical theory describing the proximity effect in the S/N bilayerWerthamer (1963); Hauser _et al._ (1964); Nagel _et al._ (1994). The detailed calculation can be found in the Supplementary Material. As shown in Fig. 1c, the experimental data are well below the theoretical calculation, suggesting the IPE induced by $\mathrm{V}_{5}\mathrm{S}_{8}$ on the Al film is too strong to be explained by the classical theory of the S/N proximity system. When the superconductor layer in an S/N bilayer is thin, the system is non- superconducting due to the IPE. As the thickness of the superconductor layer increases to a critical value $d_{\mathrm{cr,s}}$, superconductivity will appear. Therefore, $d_{\mathrm{cr,s}}/\xi_{\mathrm{s}}$ may be used to estimate the strength of IPE. In Fig. 1f, we compare $d_{\mathrm{cr,s}}/\xi_{\mathrm{s}}$ of our Al/$\mathrm{V}_{5}\mathrm{S}_{8}$ bilayer with some S/N and superconductor/ferromagnet systems. In a Pb/Cu bilayer $d_{\mathrm{cr,s}}/\xi_{\mathrm{s}}=0.025$Hilsch (1962), consistent with a weak IPE described by classical theories. For ferromagnetic films, Ni and Co, $d_{\mathrm{cr,s}}/\xi_{\mathrm{s}}$ can reach 1.14 and 1.42, respectivelyAarts _et al._ (1997). For our Al/$\mathrm{V}_{5}\mathrm{S}_{8}$ bilayer, $0.35<d_{\mathrm{cr,s}}/\xi_{\mathrm{s}}<0.73$ (calculated from Fig. 1c). It is remarkable that the IPE of Al/$\mathrm{V}_{5}\mathrm{S}_{8}$ bilayer is even stronger than the $(\mathrm{Co/Pd})_{10}$ filmXia _et al._ (2009). Figure 2: Homointerface planar JJ based on IPE. (a) The top view and the side view of the schematic of an Al–(Al/$\mathrm{V}_{5}\mathrm{S}_{8}$)–Al JJ. (b) Normalized resistance $R$ as a function of $T$ for device S4. The inset is an optical micrograph of S4 with an illustration of the measurement configuration. The junction length $L$ is 1.1 $\mu$m, the junction width $W$ is 2.7 $\mu$m. The scale bar is 2 $\mu$m. 0.6 $\mu$m away from the junction, the width of the Al superconductor is reduced to 0.8 $\mu$m so that no vortices can enterStan _et al._ (2004). Otherwise, flux jumping will appear in the Fraunhofer pattern. (c) Fraunhofer diffraction pattern at 0.25 K. (d) The magnetic field of the nodal point extracted from (c). The solid blue line is a linear fit. The IPE is intriguing and deserves further investigation. In this work, we focus on an application of the effect in JJs. Utilizing the observed strong IPE, an Al–(Al/$\mathrm{V}_{5}\mathrm{S}_{8}$)–Al planar JJ can be fabricated using a simple one-step metal deposition process. Shown in Fig. 
2a, a narrow $\mathrm{V}_{5}\mathrm{S}_{8}$ flake and an Al strip form a cross. Non-superconducting Al on $\mathrm{V}_{5}\mathrm{S}_{8}$ plays the role of a weak link between superconducting Al electrodes on the $\mathrm{SiO_{2}}$ substrate, which creates an SNS junction. Since the junction is built with a single piece of continuous Al film, the homointerface is supposedly fully transparent. As Al(31)/$\mathrm{V}_{5}\mathrm{S}_{8}$(10) is non-superconducting at the lowest temperature of this experiment, the JJs studied below are all based on Al(31)/$\mathrm{V}_{5}\mathrm{S}_{8}$(10) bilayers. The temperature dependence of the resistance for JJ S4 shows that the Josephson supercurrent is established below 0.7 K. The differential resistance $\mathrm{d}V/\mathrm{d}I$ of the device displays clear diffraction patterns in the magnetic field versus bias current mapping (Fig. 2c). In this Fraunhofer pattern, which is characteristic of a JJ, at least eight side lobes can be identified, suggesting a homogeneous junction with highly transparent interfaces. Note that similar JJs based on a cross structure have been demonstrated, using the IPE of ferromagnetic metals Vávra _et al._ (2013, 2009). However, the Fraunhofer pattern displays irregularities in amplitude and frequency. These features imply substantial inhomogeneities, probably stemming from grain or domain structures in the ferromagnetic layer. Our $\mathrm{V}_{5}\mathrm{S}_{8}$ samples are single crystals and non-magnetic, enabling the formation of a uniform weak link. The oscillating critical current $I_{\mathrm{c}}(B)$ can be described by $I_{\mathrm{c}}(B)=I_{\mathrm{c}}(0)\left|\frac{\sin(\pi BS/\Phi_{0})}{\pi BS/\Phi_{0}}\right|$, where $B$ is the perpendicular magnetic field, $S$ is the effective area of the junction, and $\Phi_{0}=h/2e$ is the flux quantum. Let $B^{(n)}$ denote the magnetic field when $I_{\mathrm{c}}$ lies at a local minimum of the Fraunhofer pattern. The node index $n$ is equal to the number of flux quanta penetrating the junction. We define $n<0$ if $B<0$. The linear relationship between $B^{(n)}$ and $n$ reflects equally spaced nodes in the pattern (Fig. 2d). The slope $\Delta B$ is 0.55 mT. $\Phi_{0}/\Delta B=3.8$ $\mu\mathrm{m}^{2}$ yields the effective area of the junction. In comparison, the nominal area of Al(31)/$\mathrm{V}_{5}\mathrm{S}_{8}$(10) is 3.0 $\mu\mathrm{m}^{2}$. The discrepancy can be ascribed to the London penetration depth and flux-focusing Suominen _et al._ (2017). Figure 3: Properties of JJ S5 and S6. $L=2.9$ $\mu$m and $W=1.6$ $\mu$m for S5. $L=0.9$ $\mu$m and $W=2.8$ $\mu$m for S6. (a) Normalized $R$ versus $T$. (b) Two dimensional mapping of the differential resistance $\mathrm{d}V/\mathrm{d}I$ in the $I$–$B$ plane for S5 at 0.26 K. (c) $\mathrm{d}V/\mathrm{d}I$ versus $I$ at $B=0$ extracted from (b). (d) and (e) Fraunhofer diffraction pattern of S6 at 0.26 K and 0.67 K, respectively. (f) $\mathrm{d}V/\mathrm{d}I$ versus $I$ for S6 at different temperatures ranging from 0.57 to 0.77 K. The inset is the $I$–$V$ characteristic at 0.26 K. The good quality of the junction S/N interface enables the establishment of a Josephson supercurrent across large gaps of the weak link. Fig. 3b shows a zero resistance state and a finite supercurrent in a junction with a $2.9$ $\mu$m gap. Likely for the same reason, our junction can support a giant supercurrent even when the gap is relatively large. Fig. 3d shows the Fraunhofer pattern of a device with a 0.9 $\mu$m gap.
The supercurrent reaches 255 $\mu$A in zero magnetic field at 0.26 K ($\sim 0.32T_{\mathrm{c}}$), yielding a supercurrent density of $2.1\times 10^{5}$ $\mathrm{A/cm^{2}}$, which is large among SNS junctions Lacquaniti _et al._ (2001); Abay _et al._ (2012); Frielinghaus _et al._ (2010). The critical current of the junction is so large that the superconductivity of the Al electrodes is quenched by Joule heating as soon as the junction goes into the normal state, which will be explained shortly. Because of the device structure, the measured junction resistance is a sum of the actual junction resistance and the resistances of the Al segments between the two voltage probes, as seen in the inset of Fig. 2b. Consequently, as the bias current increases, two sequential resistance transitions are expected. The first one indicates the critical current of the junction, while the second one is the critical current of the Al electrodes, $I_{\mathrm{c}}^{\mathrm{Al}}$. This is what is observed above 1.5 mT, as seen in Fig. 3d. The critical current line of Al encloses the Fraunhofer pattern. The normal state resistance of the junction is 0.4 $\Omega$ and the electrode resistance is 4.3 $\Omega$. However, below 1.5 mT, the zeroth and first diffraction peaks protrude over the Al critical current. In particular, the maximum of the zeroth peak exceeds the Al critical current by 4.8 times. This bizarre result seems to indicate that the Al electrode can sustain a much higher supercurrent when the junction is also in the superconducting state than when the junction is non-superconducting. We believe that, rather than $I_{\mathrm{c}}^{\mathrm{Al}}$ being enhanced in the protruding regions, $I_{\mathrm{c}}^{\mathrm{Al}}$ is strongly suppressed in the other regions because of the Joule heating occurring at the junction when it goes into the normal state. When the critical current of the junction is higher than the suppressed $I_{\mathrm{c}}^{\mathrm{Al}}$, the two transitions take place simultaneously, leading to only one resistive transition in the protruding regions. At a higher temperature, $T=0.67$ K, the critical current of the junction is reduced more strongly than $I_{\mathrm{c}}^{\mathrm{Al}}$. The whole Fraunhofer pattern submerges below $I_{\mathrm{c}}^{\mathrm{Al}}$ and a common two-transition pattern appears, as shown in Fig. 3e. Figure 4: $I_{\mathrm{c}}R_{\mathrm{N}}$ of JJ S6 scaled by $\Delta$ and $E_{\mathrm{Th}}$ as a function of temperature. The red dashed line indicates the theoretical bound of $10.82E_{\mathrm{Th}}/e$ Dubos _et al._ (2001). Transparent interfaces in SNS JJs are critical for obtaining a large $I_{\mathrm{c}}R_{\mathrm{N}}$, which is an important parameter of JJs. Shabani et al. improved the interface by employing epitaxial growth of aluminum on a semiconductor and achieved $I_{\mathrm{c}}R_{\mathrm{N}}\sim 0.68\Delta/e$ at very low temperature, $T/T_{\mathrm{c}}=0.02$ Shabani _et al._ (2016). Here $\Delta$ is the superconducting gap of the superconductor. We plot $I_{\mathrm{c}}R_{\mathrm{N}}$ of JJ S6 as a function of temperature in Fig. 4. At 0.26 K ($T/T_{\mathrm{c}}=0.32$), $I_{\mathrm{c}}R_{\mathrm{N}}\approx 0.81\Delta/e$ is obtained. This value is anticipated to be substantially enhanced with decreasing temperature Dubos _et al._ (2001). Generally speaking, $I_{\mathrm{c}}R_{\mathrm{N}}$ is bounded by the minimum of $\Delta$ and the Thouless energy $E_{\mathrm{Th}}$, where $E_{\mathrm{Th}}=\hbar D/L^{2}$ Dubos _et al._ (2001).
Here $L$ is the junction length and $D$ is the diffusion constant, which can be calculated by $D=\frac{1}{3}(\frac{\pi k_{\mathrm{B}}}{e})^{2}\frac{\sigma}{\gamma}$ Pippard (1960). With the electronic specific heat coefficient $\gamma=140$ J$\cdot$m${}^{-3}\cdot$K${}^{-2}$ and the electrical conductivity $\sigma=4.5\times 10^{7}$ S$\cdot$m${}^{-1}$ in the normal state, $E_{\mathrm{Th}}$ of JJ S6 turns out to be 6.5 $\mu$eV, much less than $\Delta$. In this long-junction limit, detailed theoretical calculations showed that the upper bound of $I_{\mathrm{c}}R_{\mathrm{N}}$ at zero temperature is $10.82E_{\mathrm{Th}}/e$ Dubos _et al._ (2001). Surprisingly, our $I_{\mathrm{c}}R_{\mathrm{N}}\sim 15.3E_{\mathrm{Th}}/e$ at an intermediate temperature of 0.26 K, already exceeding the theoretical bound. Taking into account the approximately linear temperature dependence of $I_{\mathrm{c}}R_{\mathrm{N}}$ Dubos _et al._ (2001), it may become substantially higher still at low temperatures. The weak link of our JJs is made of a superconductor whose superconductivity is killed by the IPE. The giant $I_{\mathrm{c}}R_{\mathrm{N}}$ is most likely related to this unusual nature of the weak link. The high transparency of the interface of our JJs is also reflected in the excess current, defined by $I_{\mathrm{ex}}=I-V/R_{\mathrm{N}}$. $I_{\mathrm{ex}}$ can be obtained by extrapolating the linear dependence of the $I$–$V$ characteristic in the normal state to the $I$ axis. The higher the interface transparency, the higher the probability of Andreev reflection and hence the larger $I_{\mathrm{ex}}$ Blonder _et al._ (1982). The inset of Fig. 3f shows $I_{\mathrm{ex}}\approx I_{\mathrm{c}}$, implying highly transparent interfaces. In conclusion, we observed a strong IPE in a bilayer of Al/$\mathrm{V}_{5}\mathrm{S}_{8}$. Based on this effect, a Josephson junction with a superconductor–normal metal homointerface can be readily fabricated. Owing to the highly transparent interface, the junction displays a large critical current and a large $I_{\mathrm{c}}R_{\mathrm{N}}$ product, showing potential for superconducting electronics applications. ###### Acknowledgements. We are grateful for helpful discussions with J. Linder and Q. F. Sun. This work was supported by the National Key Basic Research Program of China (No. 2020YFA0308800) and NSFC (Project No. 11774009, No. 12074009). ## References * Hayakawa _et al._ (2004) H. Hayakawa, N. Yoshikawa, S. Yorozu, and A. Fujimaki, Proc. IEEE 92, 1549 (2004). * Clarke and Wilhelm (2008) J. Clarke and F. K. Wilhelm, Nature 453, 1031 (2008). * Hamilton (2000) C. A. Hamilton, Rev. Sci. Instrum. 71, 3611 (2000). * Golubov _et al._ (2004) A. A. Golubov, M. Y. Kupriyanov, and E. Il’ichev, Rev. Mod. Phys. 76, 411 (2004). * Belogolovskii _et al._ (2017) M. Belogolovskii, E. Zhitlukhina, V. Lacquaniti, N. De Leo, M. Fretto, and A. Sosso, Low Temp. Phys. 43, 756 (2017). * Kulik and Omel’yanchuk (1975) I. O. Kulik and A. N. Omel’yanchuk, JETP Lett. 21, 96 (1975). * Yu _et al._ (2006) L. Yu, R. Gandikota, R. K. Singh, L. Gu, D. J. Smith, X. Meng, X. Zeng, T. V. Duzer, J. M. Rowell, and N. Newman, Supercond. Sci. Technol. 19, 719 (2006). * Fu and Kane (2008) L. Fu and C. L. Kane, Phys. Rev. Lett. 100, 096407 (2008). * Takei _et al._ (2013) S. Takei, B. M. Fregoso, H.-Y. Hui, A. M. Lobos, and S. Das Sarma, Phys. Rev. Lett. 110, 186803 (2013). * Dolan and Dunsmuir (1988) G. Dolan and J. Dunsmuir, Physica B 152, 7 (1988). * Dubos _et al._ (2000) P. Dubos, P. Charlat, T. Crozes, P. Paniez, and B. Pannetier, J. Vac. Sci.
Technol. B 18, 122 (2000). * Krogstrup _et al._ (2015) P. Krogstrup, N. L. B. Ziino, W. Chang, S. M. Albrecht, M. H. Madsen, E. Johnson, J. Nygård, C. M. Marcus, and T. S. Jespersen, Nat. Mater. 14, 400 (2015). * Shabani _et al._ (2016) J. Shabani, M. Kjaergaard, H. J. Suominen, Y. Kim, F. Nichele, K. Pakrouski, T. Stankevic, R. M. Lutchyn, P. Krogstrup, R. Feidenhans’l, S. Kraemer, C. Nayak, M. Troyer, C. M. Marcus, and C. J. Palmstrøm, Phys. Rev. B 93, 155402 (2016). * Fornieri _et al._ (2019) A. Fornieri, A. M. Whiticar, F. Setiawan, E. Portolés, A. C. C. Drachmann, A. Keselman, S. Gronin, C. Thomas, T. Wang, R. Kallaher, G. C. Gardner, E. Berg, M. J. Manfra, A. Stern, C. M. Marcus, and F. Nichele, Nature 569, 89 (2019). * Hilsch (1962) P. Hilsch, Z. Physik 167, 511 (1962). * Xia _et al._ (2009) J. Xia, V. Shelukhin, M. Karpovski, A. Kapitulnik, and A. Palevski, Phys. Rev. Lett. 102, 087004 (2009). * Aarts _et al._ (1997) J. Aarts, J. M. E. Geers, E. Brück, A. A. Golubov, and R. Coehoorn, Phys. Rev. B 56, 2779 (1997). * Zhou _et al._ (2021) J. Zhou, W. Zhang, Y.-C. Lin, Y. Zhou, H. Du, B. Tang, J. Shi, B. Jian, X. Cao, B. Lin, C. Zhu, Y. Deng, Q. Fu, R. Duan, X. Wang, J. Chen, S. Guo, W. Guo, Y. Huang, Y. Yao, Y. Gao, Y. Yao, K. Suenaga, X. S. Wu, and Z. Liu, “Heterodimensional superlattice with room temperature anomalous Hall effect,” (2021), under review. * Kawada _et al._ (1975) I. Kawada, M. Nakano-Onoda, M. Ishii, M. Saeki, and M. Nakahira, J. Solid State Chem. 15, 246 (1975). * Nozaki _et al._ (1978) H. Nozaki, M. Umehara, Y. Ishizawa, M. Saeki, T. Mizoguchi, and M. Nakahira, J. Phys. Chem. Solids 39, 851 (1978). * Werthamer (1963) N. R. Werthamer, Phys. Rev. 132, 2440 (1963). * Hauser _et al._ (1964) J. J. Hauser, H. C. Theuerer, and N. R. Werthamer, Phys. Rev. 136, A637 (1964). * Nagel _et al._ (1994) U. Nagel, A. Nowak, H. Gebauer, P. Colling, S. Cooper, D. Dummer, P. Ferger, M. Frank, J. Igalson, A. Nucciotti, F. Pröbst, W. Seidel, E. Kellner, F. Feilitzsch, and G. Forster, J. Appl. Phys. 76, 4262 (1994). * Stan _et al._ (2004) G. Stan, S. B. Field, and J. M. Martinis, Phys. Rev. Lett. 92, 097003 (2004). * Vávra _et al._ (2013) O. Vávra, W. Pfaff, R. Monaco, M. Aprili, and C. Strunk, Appl. Phys. Lett. 102, 072602 (2013). * Vávra _et al._ (2009) O. Vávra, W. Pfaff, and C. Strunk, Appl. Phys. Lett. 95, 062501 (2009). * Suominen _et al._ (2017) H. J. Suominen, J. Danon, M. Kjaergaard, K. Flensberg, J. Shabani, C. J. Palmstrøm, F. Nichele, and C. M. Marcus, Phys. Rev. B 95, 035307 (2017). * Lacquaniti _et al._ (2001) V. Lacquaniti, S. Maggi, A. Polcari, R. Steni, and D. Andreone, IEEE Trans. Appl. Supercond. 11, 1130 (2001). * Abay _et al._ (2012) S. Abay, H. Nilsson, F. Wu, H. Xu, C. Wilson, and P. Delsing, Nano Lett. 12, 5622 (2012). * Frielinghaus _et al._ (2010) R. Frielinghaus, I. E. Batov, M. Weides, H. Kohlstedt, R. Calarco, and T. Schäpers, Appl. Phys. Lett. 96, 132504 (2010). * Dubos _et al._ (2001) P. Dubos, H. Courtois, B. Pannetier, F. K. Wilhelm, A. D. Zaikin, and G. Schön, Phys. Rev. B 63, 064502 (2001). * Pippard (1960) A. B. Pippard, Rep. Prog. Phys. 23, 176 (1960). * Blonder _et al._ (1982) G. E. Blonder, M. Tinkham, and T. M. Klapwijk, Phys. Rev. B 25, 4515 (1982). * Eschrig (2015) M. Eschrig, Rep. Prog. Phys. 78, 104501 (2015). * Keizer _et al._ (2006) R. S. Keizer, S. T. B. Goennenwein, T. M. Klapwijk, G. Miao, G. Xiao, and A. Gupta, Nature 439, 825 (2006). * Lazar _et al._ (2000) L. Lazar, K. Westerholt, H. Zabel, L. R. Tagirov, Y. V. Goryunov, N. N. 
Garif’yanov, and I. A. Garifullin, Phys. Rev. B 61, 3711 (2000). Supplemental Materials: Homointerface planar Josephson junction based on inverse proximity effect This Supplemental Material Section contains the crystal structure of $\mathrm{V_{5}S_{8}}$ and the detailed calculation of the proximity effect in the Al/$\mathrm{V_{5}S_{8}}$ bilayer. ## .1 Crystal structure of superlattice V5S8 Figure S1: Atomic structure of the $\mathrm{V_{5}S_{8}}$ superlattice in side view. The crystal structure of the superlattice $\mathrm{V_{5}S_{8}}$ has a triclinic symmetry and belongs to the space group P1, with lattice constants of $a=9.69$ Å, $b=3.23$ Å, $c=75.53$ Å, $\alpha=\beta=90^{\circ}$ and $\gamma=120^{\circ}$. Fig. S1 depicts the atomic structure of the $\mathrm{V_{5}S_{8}}$ superlattice in side view. One can see that $\mathrm{VS_{2}}$ layers are intercalated with $\mathrm{V_{2}S_{2}}$ atomic chains, which are oriented perpendicular to the plane of the paper. ## .2 Calculation of the proximity effect based on the Werthamer theory According to the theory by Werthamer, the proximity effect in a one-dimensional S/N bilayer can be described by a set of three equations Werthamer (1963); Hauser _et al._ (1964); Nagel _et al._ (1994): $-\chi(-\xi_{\mathrm{n}}^{2}k_{\mathrm{n}}^{2})=\ln(T_{\mathrm{c}}/T_{\mathrm{cn}})$ (S1), $\chi(\xi_{\mathrm{s}}^{2}k_{\mathrm{s}}^{2})=\ln(T_{\mathrm{cs}}/T_{\mathrm{c}})$ (S2), $\left[N\xi^{2}k\tan(kd)\right]_{\mathrm{s}}=\left[N\xi^{2}k\tanh(kd)\right]_{\mathrm{n}}$ (S3). Here $T_{\mathrm{c}}$ is the transition temperature of the S/N bilayer, $T_{\mathrm{cs}}$ and $T_{\mathrm{cn}}$ are the transition temperatures of the superconductor and the normal metal, respectively. $\xi_{\mathrm{s}}$ is the superconducting coherence length of the superconductor and $\xi_{\mathrm{n}}$ is the depth by which Cooper pairs penetrate into the normal metal. $\chi(Z)=\psi(\frac{1}{2}+\frac{1}{2}Z)-\psi(\frac{1}{2})$, where $\psi$ is the digamma function. $k_{\mathrm{s,n}}$ are free parameters, $N$ is the density of states, and $d$ is the thickness. Eqs. (S1) and (S2) describe the properties of the normal metal and the superconductor, respectively. Eq. (S3) is the boundary condition at the S/N interface. Since $\mathrm{V_{5}S_{8}}$ is a normal metal, $T_{\mathrm{cn}}=0$. Plugging it into Eq. (S1), we get $-\chi(-\xi_{\mathrm{n}}^{2}k_{\mathrm{n}}^{2})=+\infty$, so $k_{\mathrm{n}}=1/\xi_{\mathrm{n}}$. Using this relation, Eq. (S3) becomes $[N\xi^{2}k\tan(kd)]_{\mathrm{s}}=[N\xi\tanh(d/\xi)]_{\mathrm{n}}$ (S4). For a nonmagnetic diffusive system, $\xi_{\mathrm{n}}=(\hbar D_{\mathrm{n}}/2\pi k_{\mathrm{B}}T)^{1/2}$, where $D_{\mathrm{n}}$ is the diffusion coefficient Eschrig (2015). Generally, $\xi_{\mathrm{n}}$ is much larger than $d_{\mathrm{n}}=10$ nm Keizer _et al._ (2006), so Eq. (S4) can be approximated as $[N\xi^{2}k\tan(kd)]_{\mathrm{s}}=[Nd]_{\mathrm{n}}$ (S5). Assuming $\mathrm{V}_{5}\mathrm{S}_{8}$ and Al to be free electron systems, we have $N_{\mathrm{n}}/N_{\mathrm{s}}=(n_{\mathrm{n}}/n_{\mathrm{s}})^{1/3}$, where $n$ is the carrier density. For Al, $n_{\mathrm{s}}=1.806$ $\times 10^{29}$ $\mathrm{m^{-3}}$, while for $\mathrm{V}_{5}\mathrm{S}_{8}$, $n_{\mathrm{n}}=4.161$ $\times 10^{27}$ $\mathrm{m^{-3}}$, determined from the Hall resistivity Zhou _et al._ (2021). Then, $N_{\mathrm{n}}/N_{\mathrm{s}}$ is equal to 0.2846.
According to the Ginzburg-Landau theory, the perpendicular upper critical field $B_{\mathrm{c2}\perp}$ of the superconducting film is linear in the temperature $T$ near $T_{\mathrm{c}}$, $B_{\mathrm{c2}\perp}(T)=B_{\mathrm{c2}\perp}(0)(1-T/T_{\mathrm{c}})$. $\xi_{\mathrm{s}}$ is related to $B_{\mathrm{c2}\perp}(0)$ via $\xi_{\mathrm{s}}=\frac{1}{\pi}(\frac{2\Phi_{0}}{\pi B_{\mathrm{c2}\perp}(0)})^{1/2}$, where $\Phi_{0}=h/2e$ is the flux quantum Lazar _et al._ (2000). Therefore, we find that $\xi_{\mathrm{s}}$ is about 102 nm. Finally, Eq. (S5) is reduced to $[k\tan(kd)]_{\mathrm{s}}=2.846\times 10^{-4}\;\mathrm{nm^{-1}}.$ (S6)
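To make the comparison in Fig. 1c easier to reproduce, the equations above can also be solved numerically. The following is only a minimal sketch (not the authors' code) of one way to do it: for a given Al thickness $d_{\mathrm{s}}$, Eq. (S6) is solved for $k_{\mathrm{s}}$ and Eq. (S2) then gives the reduced transition temperature. The constants ($\xi_{\mathrm{s}}\approx 102$ nm, $T_{\mathrm{cs}}=0.72$ K, and the right-hand side of Eq. (S6)) are taken from the text; restricting the root to the first branch of the tangent and treating $\xi_{\mathrm{s}}$ and $T_{\mathrm{cs}}$ as thickness-independent are simplifying assumptions.

```python
import numpy as np
from scipy.special import digamma
from scipy.optimize import brentq

XI_S = 102.0    # nm, coherence length of the Al film (from B_c2(0), see above)
T_CS = 0.72     # K, transition temperature of the bare 31 nm Al film
RHS = 2.846e-4  # nm^-1, right-hand side of Eq. (S6)

def chi(z):
    """chi(Z) = psi(1/2 + Z/2) - psi(1/2), as defined below Eq. (S3)."""
    return digamma(0.5 + 0.5 * z) - digamma(0.5)

def k_s(d_s):
    """Solve Eq. (S6), k*tan(k*d_s) = RHS, on the first branch 0 < k*d_s < pi/2."""
    f = lambda k: k * np.tan(k * d_s) - RHS
    return brentq(f, 1e-9, (np.pi / 2 - 1e-6) / d_s)

def t_c(d_s):
    """Eq. (S2): chi(xi_s^2 k_s^2) = ln(T_cs / T_c), hence T_c = T_cs * exp(-chi)."""
    return T_CS * np.exp(-chi((XI_S * k_s(d_s)) ** 2))

for d in (31, 50, 72, 96):  # Al thicknesses in nm, as in Fig. 1b-c
    print(f"d_s = {d:3d} nm:  T_c/T_cs = {t_c(d) / T_CS:.2f}")
```

The ratio $T_{\mathrm{c}}/T_{\mathrm{cs}}$ produced in this way traces the kind of theoretical curve shown in Fig. 1c, against which the measured transition temperatures lie well below.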
# An Efficient FPGA-based Accelerator for Deep Forest ††thanks: This work was supported in part by the National Natural Science Foundation of China under Grant 62174084, 62104097 and in part by the High-Level Personnel Project of Jiangsu Province under Grant JSSCBS20210034, the Key Research Plan of Jiangsu Province of China under Grant BE2019003-4. (Corresponding author: Zhongfeng Wang.) Mingyu Zhu, Jiapeng Luo, Wendong Mao, Zhongfeng Wang School of Electronic Science and Engineering Nanjing University, Nanjing, China Email<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS> ###### Abstract Deep Forest is a prominent machine learning algorithm known for its high prediction accuracy. Compared with deep neural networks, Deep Forest has almost no multiplication operations and performs better on small datasets. However, due to its deep structure and large forest quantity, it suffers from a large amount of computation and memory consumption. In this paper, an efficient hardware accelerator is proposed for Deep Forest models, which is also the first work to implement Deep Forest on FPGA. Firstly, a delicate node computing unit (NCU) is designed to improve inference speed. Secondly, based on the NCU, an efficient architecture and an adaptive dataflow are proposed, in order to alleviate the problem of node computing imbalance in the classification process. Moreover, an optimized storage scheme in this design also improves hardware utilization and power efficiency. The proposed design is implemented on an FPGA board, Intel Stratix V, and it is evaluated on two typical datasets, ADULT and Face Mask Detection. The experimental results show that the proposed design can achieve around 40$\times$ speedup compared to a 40-core high-performance x86 CPU. ###### Index Terms: Deep Forest, Random Forest, Decision Tree, Machine Learning, Hardware Acceleration, FPGA ## I Introduction With the rapid development of machine learning, deep neural networks (DNNs) [1] have achieved great breakthroughs in the artificial intelligence literature. Although DNNs currently dominate machine learning research, they have some obvious deficiencies such as high computational complexity, slow training speed, and lack of flexibility on small datasets. In 2017, a new tree-based ensemble learning method, Deep Forest (DF), was proposed by Zhou and Feng [2]. As shown in Fig. 1, its cascade structure makes DF able to do representation learning like deep neural networks. As an alternative to conventional deep learning methods, it has the following advantages over deep neural networks. Firstly, DF has almost no multiplication operations, which means low computational complexity. Secondly, DF can perform well on small or low-dimensional datasets, in contrast to DNNs, which require large datasets. Thirdly, there are fewer hyperparameters in DF than in DNNs, which makes DF easy to train. However, as the number of forests and the depth of the model increase, the computational complexity grows severely. Since the CPU cannot meet real-time application requirements, it is necessary to accelerate the inference of Deep Forest in hardware. Figure 1: Illustration of the cascade forest structure. Many hardware accelerators have been developed for tree-based models like Random Forest [3] to improve the speed. When it comes to Deep Forest, we face more problems.
Firstly, since Deep Forest contains a large-scale ensemble of decision trees, it is a big challenge to store all the trees in limited space. Secondly, if we traverse all trees in parallel, the problem of node computing imbalance will arise due to the different path lengths of different trees and inputs. In this paper, we propose the first hardware accelerator for DF based on FPGA, which improves processing speed with high classification accuracy and low power consumption. The main contributions of this paper are summarized as follows: * • A delicate node computing unit (NCU) is designed to decompose the inference of a single decision tree into fine-grained logic calculation, in order to accelerate the processing. Meanwhile, an optimized storage scheme is introduced to store a large number of trees with limited on-chip memory resources. * • Based on the NCU, a specialized hardware architecture, together with an efficient dataflow, is proposed to alleviate the problem of node computing imbalance in the classification process, while maintaining high classification accuracy and low power consumption. * • The design is implemented on an Intel Stratix V FPGA, and this is also the first work to accelerate Deep Forest in hardware. The experimental results show that the proposed design can achieve around 40$\times$ speedup compared to a 40-core high-performance x86 CPU. ## II Background ### II-A Hardware Acceleration for Tree-based Algorithms Several previous works have targeted hardware acceleration of single decision trees and Random Forests. In 2012, Van Essen _et al_. [4] conducted a comparative study on the acceleration of inference processing of random forests by multi-core CPU, GP-GPU and FPGA. The experimental results showed that FPGAs can provide the highest-performance solution, while GP-GPUs still have high energy consumption, which is sensitive to sample size and makes them difficult to apply to mobile or edge devices. In their hardware design, the calculation cycle of each node is 5 clock cycles, which can be further shortened. Saqib _et al_. [5] designed a pipeline structure for DT inference, and proposed an acceleration architecture composed of parallel processing nodes. Nakahara _et al_. [6] proposed a multi-valued decision diagram based on random forests. In the diagram, each variable only appears once on a path, in order to reduce inference latency. The disadvantage is that the number of nodes increases, which slows down the training process. Alharam _et al_. [7] improved the real-time performance of the random forest classifier by reducing the number of nodes and branches to be evaluated, and reducing the branch length by numerical splitting. However, different from other tree-based models, Deep Forest is an ensemble of ensembles, which makes it a big challenge to deal with the large resource consumption and the large number of calculations. In addition, the prior works mainly focus on shortening the branch length of each tree, which brings only a small speed improvement. In this paper, we accelerate the inference of DF with the aid of the NCU and propose a specialized overall architecture for DF based on FPGA. ### II-B Deep Forest The deep forest algorithm includes two parts: Multi-Grained Scanning and Cascade Forest. Inspired by the layer-by-layer processing of the original features in DNNs, Deep Forest adopts a cascading structure, as shown in Fig. 1.
The cascade forest structure stacks multiple forests in this way to obtain enhanced features and better learning performance. In the cascade forest, the input of the first layer is the feature vector of the instance, and the output of each layer is a set of class vectors. The output vectors of the previous layer and the original feature vector are concatenated together as the input of the next layer. Here we use two random forests [3] and two completely-random tree forests [8] in each layer. Fig. 2 illustrates the generation of the class vector. The traversed paths of the instance are highlighted in orange. For each instance, each forest averages the percentages of different classes of training data given by all trees in the same forest. Figure 2: Illustration of class vector generation. The overall procedure of Deep Forest uses the multi-grained scanning process to enhance the cascade forest. By using multiple sizes of sliding windows, the transformed feature vectors contain more information, and the different kinds of outputs are sent to the corresponding layers of the cascade forest. DF terminates training when the performance cannot be improved. ## III Proposed design ### III-A Node Computing Unit Deep Forest contains a cascade structure of ensemble trees, which gives it higher computational complexity than other tree-based models, so it is important to reduce memory requirements and improve inference speed. Firstly, the storage scheme optimizes the format used to store the trees, packing all the information of each node into a 32-bit word. Secondly, we propose a computation-efficient node computing unit (NCU), which shortens the node operation period to 4 clock cycles, while [4] uses 5 clock cycles. Figure 3: Storage of trees and memory layout of each node. In the traditional storage scheme of tree-based models, the information of a single tree includes the address of the feature, the threshold, and the addresses of the left and right child nodes. Our goal is to store each node of a tree in a 32-bit word. However, if the address of the feature occupies 8 bits and the threshold occupies 16 bits, the remaining 8 bits are not enough for the addresses of all child nodes of an 8-depth tree. To tackle this problem, we propose an optimized storage scheme. Fig. 3 shows the storage of trees and the memory layout of each node. In our design, one nodes RAM stores the information of 8 trees of maximum depth _d_ , and each tree has at most _$2^{d}$_ -1 nodes. The format of each node includes three fields. For non-leaf nodes, the first field stores the _feature_idx_ deciding which feature will be used. The second field stores the _threshold_ (_n_ bits) which will be compared with the selected feature. For the addresses of child nodes, we use the pre-order traversal method to store the nodes, which means the memory position of a left child node always follows its parent node. In this way, we can deduce the address of a left child node from its parent node address. Therefore, the third field only stores the address of the right child (_m_ bits) with a sign bit. When it comes to the leaf nodes, the sign bit of the _right_idx_ turns to 1, which distinguishes the two kinds of nodes. For the leaf nodes, the second field stores the _leaf_value_ (_n_ bits), which is the output of the tree. In our design, _m_ is 9, and without the address of the left child node, 20% of the tree storage space is saved. Figure 4: The NCU and the node updating module.
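For concreteness, the node format and the pre-order convention can be modelled in software as follows. This is only an illustrative sketch, not the RTL: the text fixes _m_ = 9 bits for the right-child address plus a sign/leaf bit, but the exact widths of the feature-index and threshold fields are not stated, so the 8-bit/14-bit split used here (which exactly fills a 32-bit word) and the direction of the comparison are assumptions.

```python
# Illustrative bit layout (an assumption beyond m = 9 given in the text):
# [31:24] feature_idx, [23:10] threshold or leaf_value, [9] leaf flag, [8:0] right_idx.

def pack_node(feature_idx, value, right_idx, is_leaf):
    """Pack one node into a 32-bit word; 'value' is the threshold for split
    nodes and the leaf_value for leaves."""
    return ((feature_idx & 0xFF) << 24) | ((value & 0x3FFF) << 10) \
        | ((is_leaf & 0x1) << 9) | (right_idx & 0x1FF)

def run_tree(nodes, features):
    """Walk one tree stored in pre-order: the left child always follows its
    parent, so only the right-child address needs to be stored."""
    addr = 0
    while True:
        word = nodes[addr]
        value = (word >> 10) & 0x3FFF
        if (word >> 9) & 0x1:            # leaf node: return leaf_value
            return value                 # (accumulated into prob_total on chip)
        feature_idx = (word >> 24) & 0xFF
        right_idx = word & 0x1FF
        # go left (the next word) or right, depending on the comparison
        addr = addr + 1 if features[feature_idx] <= value else right_idx
```

Dropping the explicit left-child pointer is what yields the 20% storage saving mentioned above, at the cost of requiring the pre-order layout.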
The NCU includes the nodes RAM memory storing the information of all trees in a group and the logic carrying out the comparison and accumulating the output of the trees. In our design, all trees in a forest are divided into several groups and one NCU takes charge of a group of trees stored in one RAM. The NCU and the node updating module are shown in Fig. 4. The design includes the nodes RAM memory, a main input, _currentnode_idx_ , which is used to store the address of the current node, a main output, _nextnode_idx_ , which is used to store the address of the next node, and the logic that carries out the comparison and accumulates the _prob_total_ , which is the output of the trees. To start, as the nodes RAM stores 8 trees, the _currentnode_idx_ is concatenated with the _finish_count_ to get the information of the current node from the nodes RAM. Then, the _feature_idx_ of the non-leaf nodes is concatenated with the _finish_count_ to select one feature from all input features. After a clock cycle, we get the _feature_ and it is compared with the _threshold_reg_. In our design, we use combinational logic instead of sequential logic to implement the comparator. Finally, the comparison result is sent to a multiplexer, deciding whether the left child or the right child will be the next node. To obtain the address of the left child, _left_idx_ , we add 1 to the _currentnode_idx_. As the output, the _nextnode_idx_ needs to be sent to the update module to get the _currentnode_idx_ which will be used in the next round. If the current node is a leaf node, the _is_leaf_ value is 1 and is sent to the counter to get the counting result, _finish_count_. Meanwhile, the _leaf_value_ is accumulated with the previous _prob_total_ to get a new one. As a result, the overall calculation cycle of each node is shortened to four clock cycles, which is one clock cycle less than that of [4]. ### III-B Overall Architecture and Dataflow For Deep Forest inference, the cascade forest occupies most of the time, so it is crucial to accelerate this part in hardware. We insert a pipeline at the end of each layer in order to accelerate the processing. The proposed overall architecture is illustrated in Fig. 5. In our design, each layer occupies different on-chip resources. There are two forests in one layer, each of which is processed by one PE. A forest consists of 32 trees, and 8 trees are packed into a group and processed by one and the same NCU. Therefore, each PE is composed of 8 NCUs and all of them run in parallel. The final prediction is obtained by averaging the output of the last layer, and is then sent to the off-chip DRAM. There are three kinds of buffers to store data on the chip. Input Buffer, whose basic unit is RAM, stores three feature vectors produced by the multi-grained scanning, supposing we use three sizes of sliding windows. Layer 1$\sim$4 Buffer and Output Buffer store the input features of layer 1$\sim$4 and the output vector of the whole on-chip logic. Average is composed of adders and a shift register. The adders accumulate the classification results of all NCUs in one PE. As there are 32 trees in a forest, a shift register is used to get the mean value of all trees. When the averaging is finished, the result will be stored in a register. Update contains a counter and a register. The counter records the period of the NCU.
Once the period reaches four clock cycles, the address of the current node in a register, _currentnode_idx_ , is replaced by the address of the next node, _nextnode_idx_. When the module receives the finish signal from the corresponding NCU, _currentnode_idx_ turns to zero. Controller receives the finish signals of all layers and counts the number of final results. It takes charge of data transport from off-chip DRAM to Input Buffer and from registers to Layer 1$\sim$4 Buffer. It is worth noting that we concatenate the data fetched from Layer 1$\sim$4 Buffer with the original feature vector fetched from the corresponding Input SRAM when Layer 1$\sim$4 Buffer receives the signal from the controller. Figure 5: The overall architecture. Each layer contains two PEs and each PE contains eight NCUs, each of which processes eight trees. Since different samples have different path lengths, the problem of node computing imbalance arises. To solve this problem, we propose the following dataflow. All NCUs in one PE run in parallel, and each NCU is responsible for a group of trees instead of only one decision tree, as in traditional methods. Once the NCU finishes a tree, it immediately processes the next one. The decision trees in the same group are traversed sequentially in a serial manner. In this way, we can mitigate the impact of gaps between various path lengths. Moreover, we insert a pipeline at the end of each layer to improve data throughput. ## IV Experiments ### IV-A Configuration In this section, we use two DF models trained on ADULT [9] and Face Mask Detection [10], respectively. Face Mask Detection is a new image dataset distinguishing whether a person wears a mask correctly or not. For ADULT, the multi-grained scanning is abandoned considering that the features have few sequential or spatial relationships. There are 4 layers in this model and each layer consists of one completely-random tree forest and one random forest, each containing 32 trees. For Face Mask Detection, 3 sizes of sliding windows are used in the multi-grained scanning. The cascade forest is composed of 3 layers and the configuration of each layer is the same as in the model trained on ADULT. ### IV-B Results We run the above two models on an Intel Xeon Gold 6148 CPU (40 cores), and our hardware design is implemented on FPGA (Intel Stratix V), reaching a clock frequency of 400 MHz. The proposed design decreases the usage of on-chip resources. The resource utilization of our design on Intel Stratix V is shown in Table I. Because of the different data sizes, the models trained on the two datasets are implemented on different chips.

TABLE I: The Resource Utilization of Our Design

| | ADULT | Face Mask Detection |
|---|---|---|
| Device | Stratix V 5SGXMA3 | Stratix V 5SGXEAB |
| ALMs | 41,377 / 128,300 (32%) | 213,104 / 359,200 (59%) |
| Memory (KB) | 314 / 2,392.5 (13%) | 420 / 6,600 (6%) |
| DSP Blocks | 0 | 0 |

Table II shows the comparison of our implementation on FPGA at a clock frequency of 400 MHz with the CPU. We evaluate the throughput rate on the two datasets, and find that our design achieves a great speedup compared to the 40-core high-performance x86 CPU. It increases the throughput rate 40 times on ADULT and 1,871 times on Face Mask Detection. The proposed design also brings a great improvement in latency.
TABLE II: The Comparison of Our Design on FPGA (400 MHz) with CPU

| | Throughput Rate (Ksamples/s), CPU | Throughput Rate (Ksamples/s), Ours | Speedup | Latency ($\mu$s), CPU | Latency ($\mu$s), Ours |
|---|---|---|---|---|---|
| ADULT | 37.59 | 1,525 | 40$\times$ | 34,000 | 2.52 |
| Face Mask Detection | 0.75 | 1,413 | 1,871$\times$ | 877,000 | 2.36 |

TABLE III: The Comparison of Our Work with [4]

| | Ours | [4] |
|---|---|---|
| Platform | Intel Stratix V 5SGXMA3 | Xilinx Virtex 6 XC6VLX |
| Number of FPGAs | 1 | 2 |
| Frequency (MHz) | 400 | 100 |
| Throughput Rate (Ksamples/s) | 1,525 | 31,250 |
| Power (W) | 2.64 | 11 |
| Energy Efficiency (GOPS/W) | 517,117 | 499,968 |

Since the complexity of the deep forest algorithm is higher than that of other tree-based algorithms, energy efficiency becomes another important performance metric when these methods are implemented on FPGA. Table III shows the comparison of our work with [4]. In our design, the energy efficiency on ADULT surpasses that of [4], but the latter needs more than one FPGA to implement the same number of trees as one layer in our model. ## V Conclusion In this paper we propose an efficient hardware architecture for the Deep Forest model, which is also the first work to accelerate DF. Implemented on an Intel Stratix V FPGA, the proposed design achieves at least 40$\times$ speedup compared to a 40-core high-performance x86 CPU. Since there are no previous works on hardware acceleration of DF, we compare it with a hardware accelerator of Random Forest and find that our design has comparable energy efficiency while consuming fewer hardware resources. There are many potential applications for the proposed design, especially classification tasks on mobile devices. ## References * [1] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. Deep Learning, 2016. * [2] Z. H. Zhou and J. Feng. Deep Forest: Towards An Alternative to Deep Neural Networks. 2017. * [3] Breiman. Random forests. Machine Learning, 45(1):5–32, 2001. * [4] Brian Van Essen, Chris Macaraeg, Maya Gokhale, and Ryan Prenger. Accelerating a Random Forest Classifier: Multi-Core, GP-GPU, or FPGA? In IEEE International Symposium on Field-programmable Custom Computing Machines, 2012. * [5] Saqib, Dutta, Plusquellic, Ortiz, Pattichis, and MS. Pipelined Decision Tree Classification Accelerator Implementation in FPGA (DT-CAIF). IEEE Transactions on Computers, 64(1):280–285, 2015. * [6] H. Nakahara, A. Jinguji, S. Sato, and T. Sasao. A Random Forest Using a Multi-valued Decision Diagram on an FPGA. In 2017 IEEE 47th International Symposium on Multiple-Valued Logic (ISMVL), 2017. * [7] A. K. Alharam and A. Shoufan. Optimized Random Forest Classifier for Drone Pilot Identification. In 2020 IEEE International Symposium on Circuits and Systems (ISCAS), 2020. * [8] Fei Tony Liu, Kai Ming Ting, Yang Yu, and Zhi Hua Zhou. Spectrum of Variable-Random Trees. Journal of Artificial Intelligence Research, 32(1):355–384, 2008. * [9] K. Bache and M. Lichman. UCI Machine Learning Repository. 2013. * [10] Péter Baranyi. TP Toolbox. https://www.kaggle.com/ashishjangra27/face-mask-12k-images-dataset.
# Identification of multi-component LOFAR sources with multi-modal deep learning Lara Alegre,1 Philip Best,1 Jose Sabater,1,2 Huub Rottgering,3 Martin Hardcastle,4 Wendy Williams5 1SUPA, Institute for Astronomy, University of Edinburgh, Royal Observatory, Blackford Hill, Edinburgh, EH9 3HJ, UK 2UK Astronomy Technology Centre, Royal Observatory, Blackford Hill, Edinburgh, EH9 3HJ, UK 3Leiden Observatory, Leiden University, PO Box 9513, NL-2300 RA Leiden, The Netherlands 4Centre for Astrophysics Research, Department of Physics, Astronomy and Mathematics, University of Hertfordshire, College Lane, Hatfield AL10 9AB, UK 5SKA Observatory, Jodrell Bank, Lower Withington, Macclesfield, SK11 9FT, UK E-mail<EMAIL_ADDRESS> (Accepted XXX. Received YYY; in original form ZZZ) ###### Abstract Modern high-sensitivity radio telescopes are discovering an increased number of resolved sources with intricate radio structures and fainter radio emissions. These sources often present a challenge because source detectors might identify them as separate radio sources rather than components belonging to the same physically connected radio source. Currently, there are no reliable automatic methods to determine which radio components are single radio sources or part of multi-component sources. We propose a deep learning classifier to identify those sources that are part of a multi-component system and require component association on data from the LOFAR Two-Metre Sky Survey (LoTSS). We combine different types of input data using multi-modal deep learning to extract spatial and local information about the radio source components: a convolutional neural network component that processes radio images is combined with a neural network component that uses parameters measured from the radio sources and their nearest neighbours. Our model retrieves 94 per cent of the sources with multiple components on a balanced test set with 2,683 sources and achieves almost 97 per cent accuracy in the real imbalanced data (323,103 sources). The approach holds potential for integration into pipelines for automatic radio component association and cross-identification. Our work demonstrates how deep learning can be used to integrate different types of data and create an effective solution for managing modern radio surveys. ###### keywords: Surveys – Galaxies: active – Radio continuum: galaxies – Methods: statistical ††pubyear: 2024††pagerange: Identification of multi-component LOFAR sources with multi-modal deep learning–B ## 1 Introduction The role of active galactic nuclei (AGN) in galaxy evolution is widely recognised today (see reviews by Fabian, 2012; Kormendy & Ho, 2013; Heckman & Best, 2014, and references therein), with AGN feedback being the main candidate responsible for suppressing star formation and leading to massive galaxies becoming “red and dead”. Radio-loud AGNs, or radio AGNs for short, which have relativistic jets extending tens or hundreds of kiloparsecs from the galaxy, are thought to be the primary force behind this AGN feedback (see Hardcastle & Croston, 2020, for a review). However, certain aspects of AGN- galaxy co-evolution, such as the mechanisms by which AGNs are triggered, are not completely understood, and larger samples of AGNs are needed to permit detailed statistical studies (e.g. Best et al., 2006, 2007; Sabater et al., 2019). 
Significant advances have been made with data from extensive radio continuum surveys, such as the Faint Images of the Radio Sky at Twenty centimeters survey (FIRST; Becker et al., 1995), the National Radio Astronomy Observatory (NRAO) Very Large Array (VLA) Sky Survey (NVSS; Condon et al., 1998) and the LOw Frequency ARray (LOFAR; van Haarlem et al., 2013) Two-meter Sky Survey (LoTSS; Shimwell et al., 2017, 2019, 2022). These surveys cover wider and deeper areas of the sky, which has resulted in an increase in detected sources from tens of thousands in early radio surveys to about 5 million currently. These surveys already provide large enough samples for some statistical studies, but with upcoming telescopes like the Square Kilometre Array (SKA; Dewdney et al., 2009), it is anticipated that we will get a fully detailed picture from the radio viewpoint of galaxy evolution, AGN triggering, and the influence of AGNs on galaxies across cosmic time. However, to perform these studies, it is crucial to obtain accurate measurements of radio fluxes and source sizes in order to characterise the radio AGN properties of the host galaxy. Furthermore, it is necessary to have precise identification of the radio source host galaxy to obtain optical properties, as well as redshifts to enable measurements to be converted into physical properties. In LoTSS, radio source properties including source sizes and flux densities are measured using the Python Blob Detector and Source Finder (PyBDSF; Mohan & Rafferty, 2015), which extracts confined regions of high radio brightness from the images, designated as PyBDSF sources, which can be fitted by one or several Gaussians. In order to get the correct optical counterparts, in LoTSS DR1 a proportion of the PyBDSF sources were visually inspected while the majority of them were cross-matched automatically using the statistical Likelihood Ratio (LR) technique (see Williams et al., 2019). When sources were visually inspected, they fell mainly into three categories. The first category was extended single-component radio sources. These are sources that have been successfully identified as physical sources by PyBDSF. However, due to their extended radio emission, automatic cross-matching methods become less reliable, requiring visual inspection and cross-identification. Machine-learning methods have been developed that show promising potential for providing accurate cross-match IDs for these types of sources. For example, Alger et al. (2018) implemented a method that involves creating a bounding box centred on a radio component and deriving a score for potential candidate IDs within a search radius, demonstrating significant efficacy in cross-matching sources of this nature. The second category of sources comprises blended sources, where PyBDSF encompasses multiple sources into a single detection, necessitating deblending before cross-matching. As demonstrated, for example, by Williams et al. (2019), the implementation of automated algorithms for source deblending can be accomplished with relative ease. Thirdly, there are radio sources composed of multiple components (MC). In these cases, PyBDSF separated a physical radio source into different source components, and for this category it is therefore necessary to associate the components before cross-matching. MC sources are typically sources with extended radio emission and/or distinct radio blobs. When applying source detection algorithms (e.g.
Mohan & Rafferty, 2015; Hale et al., 2019) to high-resolution images, algorithms search for pixel areas exceeding a pre-determined threshold level (often set at a signal-to-noise ratio of 5). Sometimes certain parts of a source may fall below the threshold level, and therefore the software may identify different source regions above the threshold as separate sources. Extreme cases are FRIIs (see Fanaroff & Riley, 1974, for FRI vs. FRII classification), which possess highly luminous steep-spectrum lobes but faint flat-spectrum jets between the lobes, which commonly fall below the signal-to-noise level. Sometimes, even if detections are above the threshold level, it is possible for certain components to be separated as the software tries to remove irrelevant sources to avoid incorrectly producing blends. The cross-identification of MC sources presents a significant challenge, since it involves the accurate definition of the radio source (which requires radio source component association) and the cross-matching of the (potentially very extended) radio source to its optical counterpart. Some algorithms have recently been developed to group components of MC sources in radio images (e.g. Wu et al., 2019; Mostert et al., 2022), and others successfully identify the host galaxy in source components that have already been grouped beforehand (Barkus et al., 2022). However, without a specific methodology, it is impossible to determine whether a source requires component association. When applying these algorithms without prior knowledge, if, for example, the source needs component association, a bounding box that encompasses only one of the source components may give the correct ID (e.g. Alger et al., 2018), but the radio source properties will be incorrect. Consequently, the initial step of cross-matching MC sources involves ensuring the appropriate identification of a source as an MC source, in order to determine whether the radio components have been accurately associated or not. Due to their complexity, MC sources hold significant interest for both individual galaxy studies (e.g. Hardcastle et al., 2019a) and statistical studies (e.g. Sabater et al., 2019; Hardcastle et al., 2019b, Alegre et al., in prep.). Hence, it is crucial to identify these sources to precisely measure their radio properties. Given the lack of automatic methods available for identifying MC sources, our primary focus in this paper is to identify them. To address this, we use Machine Learning (ML) and the LoTSS data. We employ Multi-Modal ML (MML; e.g. Ngiam et al., 2011), a type of ML model that integrates different data inputs. In MML models, each data instance can contain various types of information, such as images, structured data, and others such as text, audio, video, and even metadata (see, e.g. Baltrušaitis et al., 2019). MML has been successfully applied to a wide range of AI problems, with particular developments in deep learning and computer vision (see Summaira et al., 2021, and references therein). However, MML methods have only recently been developed for astronomy applications. For example, Hong et al. (2023) used an MML model to estimate photometric redshifts of galaxies in the Sloan Digital Sky Survey (SDSS, York et al., 2000) with a significant improvement in the estimates. Natural language processing combined with radio images from the “Radio Galaxy Zoo: EMU” was recently used to classify galaxies based on description tags (Bowles et al., 2023).
In the context of weak gravitational lensing, Pinciroli Vago & Fraternali (2022) combined images and time-series data to detect lensing effects in four different simulated survey datasets, showing that the method surpasses the traditional method using only images, which will be important to detect lenses in upcoming surveys, such as the Large Survey of Space and Time (LSST; Ivezić et al., 2019). Cuoco et al. (2021) combined information from different parts of the electromagnetic spectrum to characterise gravitational wave events. They further reviewed the computational aspects of MML astrophysics and the importance of developing methods that combine multimessenger astronomy (Cuoco et al., 2022) . The complexity and amount of data that new gravitational wave detectors and new telescopes will generate by detecting thousands of transients per night, creates urgency for developing methods that are able to efficiently analyse and combine the information coming from multiple sources. In this work, we train a MML classifier built on a Convolutional Neural Network (CNN) and an Artificial Neural Network (ANN) in order to identify MC sources. The MML model combines two different types of information into a unified architecture; it takes as inputs radio and optical properties as well as radio images. Although somewhat similar architectures have been used in radio astronomy (e.g. by connecting 2 CNNs, Maslej-Krešňáková et al., 2021; Samudre et al., 2022) these approaches do not combine data coming from different sources, or different data types. By incorporating multiple sources of information, in this paper we demonstrate improved performance in identifying MC sources in LoTSS and show the advantages of using MML to analyse future radio surveys. Furthermore, we employ active learning (e.g. Walmsley et al., 2020), by using the results from Alegre et al. (2022) to remove sources from the dataset that are less informative for the learning process. By selecting the most informative sources for the model, it is possible to optimise its performance while also reducing the number of examples needed for training. The paper is organised as follows. We describe the LoTSS data and the creation of the dataset in Section 2, where we define the data types, define the classes and discuss how balancing the dataset was achieved. We then perform a set of experiments in Section 3. We define a baseline model and explore the production of the images, the creation of the multi-modal model (where we test for different sets of features), data augmentation, as well as adjusting the training dataset. We further present the model optimisation and model performance. The model is applied to the real imbalanced data sample in Section 4. We conclude and discuss future directions in Section 5. ## 2 Data This work is focused on data from the LoTSS survey carried on with the LOFAR telescope. LoTSS is a survey of the entire northern sky which reaches depths about 10 times greater than the FIRST survey (for sources of typical steep spectral index), while achieving sensitivity to extended structures, better than the NVSS survey. This unique combination allows for the detection of sources with extended faint emission. LoTSS has a frequency coverage from 120 to 168 MHz, and achieves a typical rms noise level of 70 $\mu$Jy/beam over its first data release (DR1) region, with an estimated point source completeness of 90 per cent at a flux density of 0.45 mJy. 
The low frequencies of LOFAR combined with a high sensitivity on short baselines give it a high efficiency at detecting extended radio emission. LoTSS DR1 has an angular resolution of 6′′ and an astrometric precision of 0.2′′, making it very suitable for host-galaxy identification. In this section, we provide an overview of the LoTSS DR1 data and the dataset that is extracted from this to perform the experiments. More details about the data used to create the dataset can be found in Alegre et al. (2022).

### 2.1 LoTSS

LoTSS detected 325,694 PyBDSF sources in its first data release, containing just the first 2 per cent of the survey (LoTSS DR1; Shimwell et al., 2019; https://lofar-surveys.org). The public release provided radio catalogues that were derived from the 58 mosaic images of DR1, which cover 424 deg2 over the Hobby-Eberly Telescope Dark Energy Experiment (HETDEX; Hill et al., 2008) Spring Field (right ascension 10h45m00s – 15h30m00s and declination 45°00′00″ – 57°00′00″). The area benefits from extensive multi-wavelength coverage. The released LoTSS data products include value-added catalogues which present the identification of the optical counterparts of LOFAR radio sources using the Pan-STARRS (Chambers et al., 2016) and Wide-field Infrared Survey Explorer (WISE; Cutri et al., 2013) surveys, achieved using a combination of statistical techniques and visual inspection via a private LOFAR Galaxy Zoo (LGZ) classification project hosted on the Zooniverse platform (https://www.zooniverse.org; described in paper III of LoTSS DR1, Williams et al., 2019). The catalogues also provide some initial characterisation of the sources, including photometric redshift estimates and rest-frame magnitudes (described in paper IV of LoTSS-DR1; Duncan et al., 2019).

In LoTSS DR1, sources larger than 15 arcseconds (that were not automatically cross-matched with a large SDSS optical source) were all sent to visual inspection without any triage, since large sources are usually resolved and potentially complex. These correspond to 19,216 sources, or 5.95 per cent of LoTSS DR1. From these, the outcome of the visual analysis demonstrated that 10,001 (52.05 per cent) genuinely needed to be inspected (Alegre et al., 2022), with 4,671 (24.31 per cent) being MC sources and the rest single-component sources. Considering only the large and bright sources (total flux $>$ 10 mJy, 6,748 sources, 2.09 per cent of LoTSS DR1), i.e. the ones used to perform component association by Mostert et al. (2022), only 2,226 (32.99 per cent) of those in fact needed component association, decreasing the performance of source association. Even though the majority of the components of MC sources are indeed large and bright, the remaining components fall into different parts of the Williams et al. (2019) decision tree, with only 57 MC sources (0.63 per cent) being sent directly to visual inspection, 201 (2.22 per cent) being automatically cross-matched with a large optical galaxy but inspected afterwards, 1,046 (11.56 per cent) being automatically accepted for cross-matching by LR (most likely the cores of FRII or double-lobed sources), and finally 3,071 (33.95 per cent) going through a pre-filtering process before further visual analysis. A second LoTSS data release with a total number of 4,396,228 PyBDSF sources in 841 mosaics covering 5634 deg2 has been published (LoTSS DR2; Shimwell et al., 2022); some aspects will be discussed in Section 4.4.
LoTSS DR2 corresponds to 27 per cent of the northern sky and it spans two regions: one with 4178 deg2 around right ascension 12h45m and declination 44°30′ and the other with 1457 deg2 around right ascension 1h00m and declination 28°00′. LoTSS DR2 has a central frequency of 144 MHz with 83 $\mu$Jy/beam rms sensitivity and an estimated point-source completeness of 90 per cent at a peak brightness of 0.8 mJy/beam. Hardcastle et al. (2023) present the methods used to cross-match LoTSS DR2 radio sources with their corresponding optical counterparts. In their work, the public Zooniverse project ‘Radio Galaxy Zoo: LOFAR’ was established for the purpose of associating and cross-matching a fraction of the sources in the dataset.

### 2.2 Dataset classes

In this work, we use supervised machine learning for classification, which involves training models using labelled data with the aim of classifying unseen examples afterwards. The labelled data provided for training determine the quality of the model and its ability to generalise (i.e. to be able to classify other examples correctly). Therefore, it is important to have a well-defined and well-annotated dataset. We created the dataset using 323,103 PyBDSF sources which resulted from removing the artefacts from the original 325,694 PyBDSF sources obtained over the LoTSS DR1 area. This was done by comparing the original PyBDSF radio catalogue with the outputs obtained from a combination of visual inspection and statistical cross-matching described in detail in Williams et al. (2019). In cases where source components had been merged, this resulted in a single entry in the final catalogue; deblended sources, on the other hand, show multiple entries. Single-component sources remain the same in both catalogues. This enables the categorisation of sources into two distinct classes: class MC corresponds to multi-component (MC) sources, whereas class S is a mix of non-MC sources (the supplementary online material includes a list of the 323,103 PyBDSF sources, with sources in class MC assigned a value of 1 in the multi_component column and sources in class S assigned a value of 0). The two classes are defined as follows:

1. Class MC: PyBDSF sources that were associated with other PyBDSF sources in LGZ, meaning that these make up a MC source. These correspond to sources for which the PyBDSF algorithm has detected the radio emission separately, or has split the radio emission, giving rise to two or more different radio components. To construct a genuine physical source it is, therefore, necessary to associate the different source components.

2. Class S: PyBDSF sources for which the source emission is all encompassed within a single PyBDSF source, and which therefore do not require component association. While these primarily consist of correctly identified single-component sources, this class also includes the blended sources that PyBDSF incorrectly identified as being a single source and that needed to be split into two or more sources.

Artefacts correspond to PyBDSF sources not present in the final LoTSS DR1 value-added catalogue and have been excluded from this analysis.

### 2.3 Balancing the dataset

In radio surveys the number of objects in the two different classes is highly imbalanced, with relatively low numbers of class MC sources. Balancing the dataset (having a similar number of examples in each class) is a common ML technique used to avoid overfitting the model to the majority class during the training process.
A balanced dataset was achieved using an undersampling method, which consists of using only a subsample of all the available data. This has been shown to be effective by Alegre et al. (2022). Furthermore, the augmentation step (adding more examples through rotations and reflections; see Section 3.4) will act as compensation for the undersampling, whereby the class MC sources will be effectively augmented while additional groups of class S sources will be added without undergoing augmentation transformations. Consequently, a greater number of examples will be used to train this algorithm, since deep learning algorithms typically require more training data than classical machine learning ones. It is worth noting that the model is trained using a balanced dataset but it is then used to make predictions on data that has an unequal distribution of classes. Particular attention must be paid to this when applying the classifier to real distributions (see Section 4).

The balanced dataset (before augmentation) has 9,046 sources in each class. Class S corresponds to 8,189 random single sources and 857 blended PyBDSF sources, which were included in this class because, even though they are rare, they will be part of the real datasets and thus allow the classifier to train on a wider variety of single sources. Class MC consists of 9,046 multi-component PyBDSF sources, reduced from the 9,072 sources that required component association, as 26 sources were both deblended and grouped with another PyBDSF source and were therefore excluded because they would belong to both classes. The total number of sources in the balanced dataset before augmentation is 18,092, with the training set corresponding to 12,664 sources (70 per cent) and the validation and test sets corresponding to 2,714 sources each (15 per cent each).

### 2.4 Dataset images

The images used to create the dataset are cutouts around the PyBDSF sources centred on their right ascension and declination positions. These were cut from the 58 LoTSS DR1 mosaics (lofar-surveys.org/dr1_release). The DR1 mosaics have a pixel scale of 1.5 arcsec/pixel, and the final images used are 128$\times$128 pixel PNGs, corresponding to 192$\times$192 arcseconds. The images were, however, extracted first as 256$\times$256 pixel FITS files, which were then used for augmentation (including rotation – hence the larger size to avoid empty regions in the corners after rotation; see Section 3.4 for details), application of sigma cuts, and combining the different channels, before being cut into the final 128$\times$128 pixel images. Initially, a default sigma clipping on a linear range between 1$\sigma$ and 30$\sigma$ was applied to the images. Different authors (e.g. Aniyan & Thorat, 2017; Alhassan et al., 2018; Tang et al., 2019; Mostert et al., 2022) have shown that the performance of a CNN model depends on the background noise in the input images. Thus, later, we will also investigate different sigma cuts that enhance extended emission while simultaneously removing noise. In all cases, the PNG images were normalised after applying the sigma cuts, with the values scaled to the range 0–1. This makes it easier to create composite images and also reduces computational costs when using a 3-channel CNN. More details about preprocessing the images can be found in Section 3.1.2. Although the entire source may occasionally (but rarely) extend outside the frame, the choice of an output image size of 128$\times$128 pixels is a reasonable compromise.
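As a concrete illustration of the cutout and scaling steps described above, the sketch below (in Python, with hypothetical file and noise inputs; it is not the exact pipeline used in this work) extracts a 256$\times$256 pixel cutout, applies the default linear 1$\sigma$–30$\sigma$ cut, and crops it to the final 128$\times$128 pixel image:

```python
import numpy as np
from astropy.io import fits
from astropy.wcs import WCS
from astropy.nddata import Cutout2D
from astropy.coordinates import SkyCoord
import astropy.units as u

def make_cutout(mosaic_fits, ra_deg, dec_deg, rms, lo=1.0, hi=30.0):
    """Illustrative sketch: `rms` is an assumed local noise estimate."""
    with fits.open(mosaic_fits) as hdul:
        data = np.squeeze(hdul[0].data)
        wcs = WCS(hdul[0].header).celestial
    position = SkyCoord(ra_deg * u.deg, dec_deg * u.deg)
    # 256x256 pixels at 1.5 arcsec/pixel; rotation/flip augmentation
    # (Section 3.4) would act on this larger cutout before cropping.
    cut = Cutout2D(data, position, (256, 256), wcs=wcs).data
    clipped = np.clip(cut, lo * rms, hi * rms)            # linear 1-30 sigma cut
    norm = (clipped - lo * rms) / ((hi - lo) * rms)       # scale to 0-1
    c = (256 - 128) // 2
    return norm[c:c + 128, c:c + 128]                     # final 128x128 image
```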
There are only 199 final associated LGZ sources in the sample for which the angular size is larger than the picture frame. This represents only 6 per cent of the total final associated MC sources (3,596 sources), and even for these, each of the source components is significantly smaller than the image size chosen. In all these cases, there is still a substantial quantity of information within the frame. The objective is only to determine whether or not the source is a MC, not to identify all of the source components. Therefore, a source being larger than the image does not represent an issue, as it would potentially be if we were conducting tasks such as source association, morphology classification, or host galaxy cross-matching using ML. Furthermore, the classifier does not necessarily need the image of the entire source to determine that it requires association, with the extended emission being a better indicator. ## 3 Constructing the multi-modal model In this section, we conduct experiments that will ultimately lead to the adoption of a final model. A Convolutional Neural Network is investigated in Section 3.1 where a baseline architecture (Section 3.1.1) is established to allow for the evaluation of changes made to various aspects of the model, with further experiments examining image production (Section 3.1.2). The extension to a multi-modal architecture is explained in Section 3.2, with adjustments made to the training set described in Section 3.3 and augmentation in Section 3.4. Modifications to model hyperparameters are described in the optimisation stage (Section 3.5); and the final model performance is presented in Section 3.6. The evaluation criteria used for assessing the performance of the model are explained in Appendix A. ### 3.1 The Convolutional Neural Network #### 3.1.1 Establishing a baseline model In order to establish a baseline model, we assessed different convolutional neural networks (CNN) that had been used for radio morphology classification. These correspond to models developed mainly for classifying sources into FRI, FRII, bent-tailed, and compact sources (Alhassan et al., 2018; Aniyan & Thorat, 2017; Becker et al., 2021), but also to differentiate between compact and extended sources (Lukic et al., 2018). These models are expected to provide a good starting point for the experiments. All of the models we are testing here were originally developed to work with VLA FIRST data. The higher radio frequency of FIRST (1.4 GHz) results in distinct regions of the observed galaxies being more prominent in FIRST images compared to the ones in the LOFAR surveys. Particularly, FIRST emphasises galaxy cores and hotspots, whereas LoTSS (144 MHz) provides a broader picture of the source, highlighting much more extended emission. However, this also suggests that the architectures under consideration may be suitable for the current task, as they have demonstrated efficacy in identifying sources that appear as separated radio emission (e.g. FRII) in particular for the FIRST data and therefore may be useful to identify MC sources. Here, we provide an overview of the models we have chosen to investigate and any modifications we have made to the architectures and hyperparameters. The corresponding publications (Alhassan et al., 2018; Aniyan & Thorat, 2017; Becker et al., 2021; Lukic et al., 2018) provide detailed descriptions and illustrations of the structure of each model. 
All tested architectures have a set of convolutional layers, typically each made up of a convolutional stage (where feature extraction is performed, outputting feature maps), a detection stage (based on a non-linear activation function, commonly a Rectified Linear Unit, or ReLU), and a pooling stage (which subsamples the feature maps, reducing their spatial size, in this case by using maxpooling, where the maximum values are retained). Following this, the architectures have a final 1 to 3 dense layers with dropout (a regularisation technique that corresponds to removing random neurons), followed by a softmax layer (which transforms the outputs into probabilities). A kernel is applied to the input image and the feature maps during the convolutional and pooling operations. This operates as a sliding window, computing dot products between the kernel values and the values of the pixels of the images and feature maps in the convolutional stages and retaining specific values in the pooling stage (e.g. the maximum value). Different strides can be applied, where the stride corresponds to how many steps the kernel shifts in the horizontal and vertical directions after each computation. Smaller kernels and strides mean tighter scanning, possibly enabling more details to be extracted from the images. These filters are learnable matrices that specialise in detecting different features, with a higher number of filters having the potential to identify increasingly complex and intricate patterns. Both Alhassan et al. (2018) and Lukic et al. (2018) have very similar architectures, with only 3 convolutional layers and 1 or 2 final dense layers, respectively, with 50 per cent dropout before the softmax layer. Lukic et al. (2018) uses small sets of filters (16, 32, 64), while Alhassan et al. (2018) uses (32, 64, 94). Alhassan et al. (2018) uses kernels that are typically smaller and have smaller strides, while Lukic et al. (2018) uses typically higher values for the kernel sizes and strides, in particular in the first layers. Furthermore, Lukic et al. (2018) uses two final dense layers of 1024 neurons each, while Alhassan et al. (2018) uses a single layer with only 194 neurons. Both models require a high number of epochs to converge. The Alhassan et al. (2018) classifier was trained for 400 epochs and Lukic et al. (2018) for 100 epochs. On the other hand, Aniyan & Thorat (2017) present a deeper (and wider) network, resulting in a much heavier model than the previous two architectures. With 5 convolutional layers (not all with a pooling stage), set for a large number of filters in each layer (96, 256, 384, 384, 256), three final wide dense layers with 4096 neurons each, and 50 per cent dropout, this model has a layer normalisation after the ReLU on each convolutional layer. The kernel sizes are larger in the first layers, with a stride of 1. For all of these reasons, this is an expensive model to run. The authors, when training it, ran it only for 30 epochs. In order to decrease memory problems, we had to make major changes to this architecture in particular. The filter sizes were reduced to (16, 32, 64, 64, 32) and the 3-dense layers to 1024 neurons each. Furthermore, the normalisation layer had to be removed because it was making the model unstable and leading to overfitting. The Becker et al. (2021) model has a much deeper architecture but a lighter one as well. This model has 11 convolutional layers but only uses pooling every 3 (or 2) layers. 
It has a small number of filters in each set of three consecutive layers (32, 64, 128) and 256 on the two final layers, with a 25 per cent dropout after the maxpooling layer. Kernels generally have a size of 3 and a stride of 1. It finishes with only one dense layer of 500 neurons and 50 per cent dropout, followed by a softmax. Most importantly, this original architecture required only 16 epochs to be trained.

We use a single-channel CNN to explore the different architectures and establish the baseline model. The model inputs radio images with a size of 128$\times$128 pixels, which went through a linear cut ranging from 1$\sigma$ to 30$\sigma$. No data augmentation was used at this stage. The architectures were implemented with very few changes and assumptions, with only the Aniyan & Thorat (2017) architecture undergoing significant modifications, as explained above. Minor adjustments had to be made, particularly in cases where not enough details were provided. It was assumed that the stride corresponded to the size of the kernel, and padding was chosen to preserve the size of the feature maps (padding is used to add extra pixels with zero values to the border of the input image and feature maps before the convolutions). In cases where the specific location for the dropout layer was not explicitly indicated, dropout was performed after the dense layer, as per the original paper from Hinton et al. (2012). The batch size, learning rate, optimiser algorithm, batch normalisation, and number of epochs were initially tested to find a suitable set of hyperparameters which allowed us to compare the models without stability problems (large variations across epochs) or severe overfitting issues. We tested each architecture individually using its original hyperparameters, but the classifiers either overfitted or generally performed worse. We found that the optimal hyperparameters for the four architectures were a learning rate of 0.0001, no batch normalisation (in architectures that applied it), a batch size of 32, and the use of the RMSprop optimiser (Tieleman & Hinton, 2012). All the models were able to converge and show above 85 per cent accuracy after about 30 epochs of training. It was observed that the choice of these hyperparameters did not depend strongly on the architecture. We set these as the baseline hyperparameters.

Figure 1: Performance of four different Convolutional Neural Network (CNN) architectures that were used to establish the baseline model. All architectures were run using the same set of hyperparameters (here showing training for 50 epochs), which were found to be the most suitable ones regardless of the CNN used (see text for a discussion).

The performance of the different architectures on the training and validation sets is compared in Figure 1 (see Appendix A for a definition of the performance metrics used). Even with small modifications made to the network and hyperparameters (for example, by introducing batch normalisation after each convolution layer or changing the learning rate to 0.001), the performance of the Alhassan et al. (2018) classifier is the weakest (reaching 85 per cent accuracy). The Lukic et al. (2018) network performs about 2 per cent better and benefits from using a larger batch size and a smaller learning rate, which reduces the overfitting of the network when compared to using its original hyperparameters.
Changes to the Aniyan & Thorat (2017) architecture resulted in good performance for the model, reaching accuracy values above 90 per cent, but the model shows some architectural issues, resulting in high training costs and also instabilities. For example, with the original learning rate of 0.01, the network was not even able to converge. Even though Figure 1 suggests that the model might have the potential to improve its performance with more training, for the reasons mentioned (and also because there is a better alternative architecture), this model was excluded from further consideration. The model based on the Becker et al. (2021) architecture reaches accuracy values on both the training and validation sets above 92 per cent, and reducing the learning rate leads to even better results than the original one of 0.001.

Overall, it is evident that, after establishing the baseline hyperparameters, the deeper architectures show superior performance for the identification of multi-component sources. The results of the model based on the Becker et al. (2021) architecture show the best performance, with similar values on both training and validation sets and high stability. This architecture performs well, converges rapidly, and trains smoothly. Therefore, it was selected as the baseline model. The hyperparameter values established for the baseline model will be the ones used throughout the experiments, unless stated otherwise, for example, when augmentation is introduced. The model which is finally adopted is a refinement of this baseline model. The process of refinement and optimisation of both the hyperparameters and the architecture is described in Section 3.5, which also contains a diagram illustrating the architecture of the final model.

#### 3.1.2 Optimising image production

The original LoTSS images show differences in noise levels depending on the sky region being observed, and also show different contrast ranges, with some very bright sources and others with weak diffuse emission. We use sigma clipping for cleaning and removing noise from the images. This was done using Montage (montage.ipac.caltech.edu), an astronomical image mosaic engine from NASA. The sigma-clipping procedure discards values (i.e. sets them to the minimum or maximum value) that are either above or below a defined standard deviation from the mean. As a baseline, we used image cuts of 1$\sigma$–30$\sigma$ on a linear scale. We also tested sigma cuts of 1$\sigma$–30$\sigma$ and 1$\sigma$–200$\sigma$ on a logarithmic scale, as well as 3$\sigma$ and 5$\sigma$ cuts. In the wide-range examples (1$\sigma$–200$\sigma$ and 1$\sigma$–30$\sigma$), the lower limit corresponds to 1$\sigma$, while the upper limit corresponds to 200$\sigma$ and 30$\sigma$, respectively, with a stretch applied on a logarithmic scale. The 3$\sigma$ (or 5$\sigma$) cut sets all values below 3$\sigma$ (or 5$\sigma$) to zero and sets values above that level to unity. Figure 2 compares these different sigma-clipping levels for some example sources. We can see that when using 1$\sigma$–200$\sigma$, the bright features and the diffuse emission of the source have been enhanced. Additionally, the extended emission has been smoothed out, and the background noise has been reduced, making it more consistent across images. The 3$\sigma$ cut displays the source silhouette in its entirety. The 1$\sigma$–30$\sigma$ cut emphasises the extended emission while maintaining a consistent level of noise across all images.
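For illustration, the different cuts and the composite image can be produced along the following lines (a simplified stand-in for the Montage-based processing, assuming the local rms noise is known):

```python
import numpy as np

def stretch(img, rms, lo, hi, log=False):
    """Clip an image between lo*rms and hi*rms and rescale to 0-1,
    optionally with a logarithmic stretch (illustrative only)."""
    clipped = np.clip(img, lo * rms, hi * rms)
    if log:
        clipped = np.log10(clipped / (lo * rms))
    return (clipped - clipped.min()) / (clipped.max() - clipped.min())

def binary_cut(img, rms, n=3.0):
    """Set everything below n*rms to zero and everything above to unity."""
    return (img > n * rms).astype(float)

def composite(img, rms):
    """Stack the three adopted cuts into a 3-channel image for the CNN."""
    return np.stack([stretch(img, rms, 1, 200, log=True),
                     stretch(img, rms, 1, 30, log=True),
                     binary_cut(img, rms, 3)], axis=0)    # shape (3, H, W)
```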
Figure 2: Sigma-clipping image examples. The left column shows the original (not scaled) images directly extracted from the LoTSS DR1 mosaics, with the peak flux signal-to-noise ratio (SNR) indicated. The three middle columns correspond to individual sigma cuts (1$\sigma$–200$\sigma$ log, 3$\sigma$, and 1$\sigma$–30$\sigma$ log), with black indicating the lower limit of the range and white indicating the upper limit. The right column is a composite image made up of the three individual ones, which is finally used in the 3-channel CNN. The first four rows correspond to multi-component sources. The fifth and sixth rows show a blended source and a single-component source, respectively. In the top row, the PyBDSF source corresponds to a lobe and the entire source is not within the frame; however, it is clear that enough of the source is present for the classifier to identify this as part of a MC source, justifying our choice of 128$\times$128 pixel image sizes even for the small fraction of sources that are larger than this.

Figure 3 compares the performance of the model for the different sigma-clipping options, on both the training and validation sets. The baseline resulted in a good performance, and the model using the images created with the 1$\sigma$–200$\sigma$ logarithmic scale has a very similar performance with a slight improvement, in particular on the validation set. Using the 1$\sigma$–30$\sigma$ stretch in the log scale outperforms the one in the linear scale on both the training and validation sets. However, it requires attention at higher numbers of epochs, since it tends to overfit after around 20 epochs of training. The 3$\sigma$ cut shows good performance on the validation set but only up to around 15 epochs of training, after which the results start to become unstable. Even though this is the least reliable of the three channels that were ultimately used, it is able to provide some helpful information (as can be seen from the 1-channel network alone). The 5$\sigma$ cut performs poorly in terms of overall accuracy and overfitting, and hence it was excluded. This may be due to a significant loss of information, because the majority of extended emission will be below 5$\sigma$ and therefore will be rejected.

Figure 3: Experiments using the baseline architecture with 1 channel and different individual sigma cuts (top row), and the finally adopted 3 channels, which combine the 3 individual sigma cuts (bottom). In each plot the dark blue line represents the baseline model with 1$\sigma$–30$\sigma$ on a linear scale, which is compared to the performance of the model using different sigma cuts, on both the training and the validation set.

The CNN model can be designed to process a three-channel input image. Since the performance of the model differs with different sigma-cut images, we can combine the most suitable sigma cuts for the classifier (e.g. Mostert et al., 2022). The three adopted channels are the 1$\sigma$–200$\sigma$ and 1$\sigma$–30$\sigma$, both on a logarithmic scale, and the 3$\sigma$ cut. These were chosen as they were the best-performing individual channels. Each one of them provides subtly different information, and by combining the information from the three channels we provide more details for the training process. The combination of the three images provides an improved performance on the training sample, although the performance on the validation sample is comparable to the 1-channel model.
This indicates an increased risk of overfitting, in particular for higher numbers of epochs. This aspect will be mitigated later by data augmentation and additional adjustments to the network; we show in Section 3.2 that in the final architecture the 3-channel CNN outperforms the 1-channel version.

### 3.2 Multi-modal model

We created a fusion classifier (a model that can integrate multiple data sources or modalities) by combining the CNN with an artificial neural network (ANN), thus combining images and tabular data into a single multi-modal (MM) architecture. Each PyBDSF source in the dataset is processed, with radio images being fed into the CNN and features into the ANN. This approach enables an effective combination of different types of data, thereby further improving the performance of the model. In our approach, we adopt late fusion (fusion being the process of combining the input data), where the outputs from the CNN and the ANN are concatenated and then passed through two dense fully-connected layers followed by a softmax activation function, generating binary predictions. Other approaches exist, such as early fusion, hybrid fusion (combining early and late fusion), and mid-fusion (e.g. a transfer module to fuse CNNs at different stages of the architecture; Vaezi Joze et al., 2019). There is a debate regarding the impact of fusion techniques on multi-modal model performance, but we do not explore this and focus solely on late fusion in this work.

It should be noted that retrieving the original input from the tabular data features is not feasible, since these are only properties based on a combination of Gaussian models; hence, they do not fully describe the original image. However, the tabular features can benefit the multi-modal model by helping to identify characteristics in the images that are more likely to have astrophysical relevance, as well as bringing in information about the multi-wavelength data that goes beyond just the radio images. The CNN architecture and hyperparameters used are as defined in Section 3.1.1, with the three-channel input defined in Section 3.1.2. The ANN used for running the experiments has two fully connected layers, each with 64 neurons. The model is optimised at later stages, albeit with minimal modifications, as detailed in Section 3.5. The initial set of features (baseline features) are the major and minor axes, total and peak flux, and the total number of Gaussians that make up a PyBDSF source, which are the same as those used in the baseline of Alegre et al. (2022).

Figure 4 compares the accuracy, precision and recall of different experiments (see Appendix A for a description of these ML performance metrics). As can be seen, the multi-modal model with baseline features shows an increase in performance over the CNN alone of about 0.5 per cent in accuracy and about 1 per cent in recall, with negligible effect on precision. It also shows that the performance of recall is superior to that of precision; this is a favourable differential, since the recall (representing the percentage of actual MC sources correctly identified by the model) is the parameter we aim to optimise over precision (which reflects the percentage of sources classified as MC sources by the model that are indeed MC sources). This higher value of recall over precision was already seen in the 3-channel CNN, which had a $\approx 2$ per cent higher recall than the 1-channel CNN, despite a slightly lower overall accuracy.
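To make the late-fusion design concrete, the sketch below shows one possible PyTorch implementation of such a multi-modal classifier. The convolutional backbone here is only schematic (the adopted architecture, based on Becker et al. 2021, is detailed in Section 3.5), and names such as `n_features` are placeholders rather than the exact configuration used:

```python
import torch
import torch.nn as nn

class MultiModalClassifier(nn.Module):
    """Illustrative late-fusion model: a CNN branch for 3-channel radio images
    and an ANN branch for tabular features, concatenated into a dense head."""
    def __init__(self, n_features, n_classes=2):
        super().__init__()
        self.cnn = nn.Sequential(                       # schematic image branch
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(4), nn.Flatten())      # -> 32*4*4 = 512 values
        self.ann = nn.Sequential(                       # tabular feature branch
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU())
        self.head = nn.Sequential(                      # late-fusion head
            nn.Linear(512 + 64, 64), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(64, n_classes))                   # softmax applied at evaluation

    def forward(self, image, features):
        fused = torch.cat([self.cnn(image), self.ann(features)], dim=1)
        return self.head(fused)

# Example forward pass: a batch of 8 sources with 32 tabular features each
model = MultiModalClassifier(n_features=32)
logits = model(torch.randn(8, 3, 128, 128), torch.randn(8, 32))
probs = torch.softmax(logits, dim=1)                    # [P(class S), P(class MC)]
```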
Table 1: Baseline features, optical features, the set of 18 final features from the Alegre et al. (2022) Gradient Boosting Classifier (GBC) model, and the full set of 32 features, which includes the GBC features and the first 3 nearest neighbours (3 NNs). The features listed in italics are removed (to avoid duplication) when using the 32 features. The LR features were scaled using the LoTSS DR1 threshold LR value of $L_{\text{thr}}$ = 0.639. Sources refer to PyBDSF sources, for which the full set of feature values is provided in the online material. The features were computed using the LoTSS DR1 PyBDSF source and Gaussian catalogues, as well as the LR values.$^{*}$

| Features | Definition & Origin |
| --- | --- |
| Baseline | |
| Maj | Source major axis [arcsec]$^{a}$ |
| Min | Source minor axis [arcsec]$^{a}$ |
| Total_Flux | Source integrated flux density [mJy]$^{a}$ |
| Peak_Flux | Source peak flux density [mJy/bm]$^{a}$ |
| log_n_gauss | No. of Gaussians that compose a Source$^{b}$ |
| Optical | |
| log_lr_tlv | Log10(Source LR value match/$L_{\text{thr}}$)$^{c}$ |
| lr_dist | Distance to the LR ID match [arcsec]$^{c}$ |
| log_gauss_lr_tlv | Log10(Gaussian LR/$L_{\text{thr}}$)$^{c}$ |
| gauss_lr_dist | Distance to the LR ID match [arcsec]$^{c}$ |
| log_highest_lr_tlv | Log10(Source or Gaussian LR/$L_{\text{thr}}$)$^{c}$ |
| *log_NN_lr_tlv* | Log10(LR value of the NN/$L_{\text{thr}}$)$^{c}$ |
| *NN_lr_dist* | Distance to the LR ID match [arcsec]$^{c}$ |
| GBC (baseline & optical) | |
| gauss_maj | Gaussian major axis [arcsec]$^{b}$ |
| gauss_min | Gaussian minor axis [arcsec]$^{b}$ |
| gauss_flux_ratio | Gaussian/Source flux ratio$^{a,b}$ |
| NN_45 | No. of sources within 45′′$^{a}$ |
| *NN_dist* | Distance to the NN [arcsec]$^{a}$ |
| *NN_flux_ratio* | NN flux/Source flux density ratio$^{a}$ |
| Nearest Neighbour (3 NNs) (all features replacing the italicised ones) | |
| NN_Maj (x3) | NNs major axis [arcsec]$^{a}$ |
| NN_Min (x3) | NNs minor axis [arcsec]$^{a}$ |
| NN_log_lr_tlv (x3) | Log10(LR value match/$L_{\text{thr}}$)$^{c}$ |
| NN_lr_dist (x3) | Distance to the LR ID match [arcsec]$^{c}$ |
| NN_dist (x3) | Distance to the NNs [arcsec]$^{a}$ |
| NN_flux_ratio (x3) | NNs flux/Source flux density ratio$^{a}$ |

$^{*}$ a – PyBDSF radio source catalogue (Shimwell et al., 2019); b – PyBDSF Gaussian component catalogue (Shimwell et al., 2019); c – Gaussian and PyBDSF LR catalogues (Williams et al., 2019).

Figure 4: Main set of experiments, with values for accuracy on the training (red crosses) and validation (blue circles) sets, with precision (green stars) and recall (yellow triangles) for class MC also shown for the validation set; in each case, the plotted points correspond to the epochs where the training and validation sets show the best performance possible with similar results on both sets (i.e. training was stopped before significant signs of overfitting). The F1-score (not displayed) is consistent with the accuracy values to within $10^{-4}$. The CNN 1 channel corresponds to the baseline model, for which the performance for all the metrics on the training and validation sets is very similar. The introduction of a 3-channel CNN (CNN 3 channels) and the modification of the architecture into a multi-modal (MM) model (MM 3 chan. baseline feat.) helped to greatly increase the recall. The additional feature sets, each introduced independently, all helped to improve the performance. Plotted points correspond to the baseline features (MM 3 chan. baseline feat.), only the optical features (MM 3 chan. optical feat.), and the 18 features used in the GBC model (MM 3 chan. GBC feat.) from Alegre et al. (2022).
The best results were obtained from combining the 18 GBC features with additional information about the first, second and third NNs (MM 3 chan. 3NNs all feat.), shown as the shaded model. Overall, all the metrics improved from around 92 per cent to 96 per cent as a result of adopting a MM model and adding more features. For comparison, also shown are the MM model using only 1 channel (MM 1 chan. 3NNs, all feat.) and the neural network alone (ANN only 3NNs, all feat.), both of them showing inferior performance.

Different sets of features were then tested independently, building upon the features developed by Alegre et al. (2022). See Table 1 for details of the different features and Figure 4 for a summary of the experiments. The features denoted with ‘lr’ in Table 1 are based on the Likelihood Ratio (LR) values derived from Williams et al. (2019) and correspond to the likelihood of a LoTSS radio source having a true optical galaxy counterpart (Pan-STARRS, if available, or otherwise infrared WISE sources). The LR is a statistical technique that has long been used to automatically cross-match sources at different wavelengths (e.g. Sutherland & Saunders, 1992), in particular those with longer wavelengths for which the positional uncertainty is greater due to the large beam size of the telescopes, resulting in multiple possible counterparts. The LR assesses the probability of a galaxy having a true radio counterpart based on the positional uncertainty of the radio sources, the magnitude distributions of the true counterparts, and the source counts of the background sources.

First, we considered only optical features; these comprise the log of the LR relative to the threshold value (tlv; that is, the LR divided by the lowest LR value at which a cross-match is considered to be genuine) and the distance to the highest LR counterpart, for the source, the first nearest neighbour (NN), and the Gaussian with the highest LR value, as well as the highest log LR tlv between the source and the Gaussian. Using only optical features resulted in an increase in precision, a decrease in recall, and an overall increase in accuracy to about 93 per cent and 95 per cent on the validation and training sets, respectively. Second, using the set of 18 final features defined in Alegre et al. (2022) improves the recall and leads to similar accuracy values of 94 per cent on both training and validation sets. The NNs have been shown to improve the model of Alegre et al. (2022), as they provide useful information about the source surroundings. Therefore, we expand this to incorporate additional NNs, in particular the second and third NNs; for each one, the set of features includes the minor and major axes, the log of the LR tlv, the LR distance, the distance to the NNs and the flux ratio between the NNs and the source. Adding this information about the second and third NNs to the previous 18 features proved to significantly improve all the metrics by almost 2 per cent each (see Figure 4). Experiments using more NNs, such as including the fourth and fifth, did not reveal any further improvements. The results show that the NN feature information is essential for identifying MC sources, since it leads not only to better overall model performance but also to higher values of recall.
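As an illustration of how such features can be assembled from the catalogues, the short sketch below (with hypothetical array and column names; it is not the exact code used in this work) computes the scaled LR feature and the distances to the first few nearest neighbours:

```python
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord

L_THR = 0.639  # LoTSS DR1 likelihood-ratio threshold used to scale the LR values

def lr_feature(lr_values, l_thr=L_THR, floor=1e-6):
    """log10 of the LR relative to the threshold value ('log_lr_tlv')."""
    return np.log10(np.maximum(lr_values, floor) / l_thr)

def nn_distances(ra_deg, dec_deg, n_neighbours=3):
    """Distances (arcsec) to the first n_neighbours nearest PyBDSF sources."""
    coords = SkyCoord(ra_deg * u.deg, dec_deg * u.deg)
    dists = []
    for k in range(2, 2 + n_neighbours):       # k=2 is the nearest non-self match
        _, sep, _ = coords.match_to_catalog_sky(coords, nthneighbor=k)
        dists.append(sep.arcsec)
    return np.vstack(dists).T                  # shape (n_sources, n_neighbours)
```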
Additional experiments on features, such as feature scaling or replacing measured axis sizes with their deconvolved equivalents, failed to produce any further improvement to the model (or decreased performance) and so are not considered further. As a final test of the performance of our multi-modal model, we also show in Figure 4 the performance of the model with the full set of features, but with only 1 input image channel for the CNN (the baseline 1–30$\sigma$ cut). This shows that the performance in all metrics drops by about 1 per cent compared to the 3-channel CNN, justifying our decision to use the 3-channel model. We also show the performance of the ANN alone (i.e. without the CNN). Like the CNN alone, this achieves an overall accuracy on the validation set of around 92 per cent, considerably below that of the multi-modal model.

### 3.3 Removal of Small Isolated Single Gaussian (SISG) sources

In this section, a particular set of sources (hereafter referred to as SISG sources) is removed from the dataset in order to evaluate whether it results in any improvements in the performance of the classifier. These correspond to small (major axis smaller than 15 arcsec), isolated (no NNs within a 45 arcsec radius), single-Gaussian PyBDSF sources that were not cross-matched with a large optical ID. The SISG sources correspond to a large proportion of the sources (186,371 PyBDSF sources, or 57.7 per cent of the full LoTSS DR1 sample, excluding artefacts), for which the classifier from Alegre et al. (2022) achieved 99.98 per cent accuracy. The vast majority of the sources in this group can be cross-matched using the LR method. The group consists almost entirely of single-component sources, with the exception of 133 components (the cores) of MC sources, plus 4 single-component sources for which the LR method gave an incorrect ID. This group of sources shows broadly uniform properties and does not add diverse information about class S. Excluding these objects from the training sample therefore allows the classifier to be exposed to a wider range of class S sources. The SISG sources are the type of sources that can be processed even without a classifier since, by their characteristics, they can be cross-matched by LR methods. Additionally, they are also the ones the classifier would easily identify as single-component sources and therefore are likely to get the correct classification. The SISG sources were, therefore, removed from training and testing on the balanced dataset, and they are automatically assigned to class S if they are present in the data (the SISG sources are indicated in the table provided as complementary online material).

When removing the SISG from the training set, the model drops in performance on the training set (about 1 per cent to 1.5 per cent worse ability to distinguish between the classes). The overall decrease in performance can be attributed to the exclusion of the easily classifiable 60 per cent of the single-component sources. With SISG removed from class S, this class is now characterised by sources that are more complex (i.e. the class has a higher number of sources that are not isolated, that are clustered, or that are composed of multiple Gaussians), and therefore the performance on class S drops. But at the same time class S now comprises elements that are more relevant for the classification. Importantly, if the model is applied to the full imbalanced dataset, it performs better (especially on class S) than the model trained on all sources (see Section 4).
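For reference, the SISG selection described above amounts to a simple catalogue cut of the following form (a sketch with hypothetical column names, not the exact implementation used here):

```python
import pandas as pd

def select_sisg(cat: pd.DataFrame) -> pd.Series:
    """Boolean mask for Small Isolated Single Gaussian (SISG) sources:
    major axis < 15 arcsec, no neighbour within 45 arcsec, a single
    Gaussian component, and no cross-match with a large optical ID."""
    return ((cat["Maj"] < 15.0) &            # small: major axis in arcsec
            (cat["NN_dist"] > 45.0) &        # isolated: nearest-neighbour distance
            (cat["n_gauss"] == 1) &          # single Gaussian component
            (~cat["large_optical_id"]))      # not matched to a large optical galaxy

# Sources flagged in this way are assigned to class S without being classified.
```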
Taken together, these results indicate that the exclusion of the SISG sources improves the overall performance on the full dataset. This strategy also reduces computational costs, since it eliminates the need to process more than 50 per cent of the data, which is particularly important when processing large samples.

### 3.4 Augmentation

When removing the set of SISG sources from the dataset, the model tends to overfit. In order to minimise this issue, we use data augmentation by increasing the number of examples of the minority class. Data augmentation is an artificial way of enlarging the training set by creating alternative samples of the original data. A common approach to achieve this is by generating synthetic examples, typically through the application of geometric or colour transformations (see Shorten & Khoshgoftaar, 2019, for a review). This technique is commonly used in deep learning models, where it is often necessary to avoid overfitting, since these models require a larger number of training examples (e.g. Goodfellow et al., 2016). In astronomy, Dieleman et al. (2015) applied augmentation by using geometric transformations to prevent a CNN model from learning specific orientations of galaxies in optical images. Ensuring the models are rotationally invariant is now common practice in astronomy applications (see also Appendix B). In radio morphology classification, where there are generally 2–5 classes but sometimes as few as 100 objects per class (e.g. Aniyan & Thorat, 2017), augmentation is commonly achieved by massive oversampling, e.g. applying multiple rotation and flipping angles. Maslej-Krešňáková et al. (2021) demonstrated that the use of both vertical and horizontal flips increased accuracy by roughly 10 per cent, but improper augmentation operations, such as shifting and zooming, degraded their CNN model.

The augmentation procedure was done as follows: having cut 256$\times$256 pixel FITS images from the original LoTSS DR1 mosaics and applied different sigma clipping thresholds, we then performed augmentation on the minority class (class MC). We rotated each image around the PyBDSF position at the centre of the frame by a random angle between 0 and 2$\pi$ and applied random (true or false) vertical and/or horizontal flipping. The transformed images were then cut to their final sizes of 128$\times$128 pixels (see Figure 5 for an example). By rotating the images prior to reducing their size, we avoid the issue of empty corners created by the rotation; this avoids the need for any interpolation to fill in the empty regions, and eliminates the possibility of the classifier associating such corner effects with the augmented class.

The majority of the sources in LoTSS belong to class S. The training set for class S was created by randomly undersampling single-component sources, and therefore did not require any type of augmentation. The blended sources, which are rare, were also not augmented. They were added to the undersampled single-component sources in order to ensure the same number of sources as in class MC. This allows for the creation of a balanced dataset for evaluating the results. Even though balanced datasets are not representative of the real data, balancing the dataset is necessary for the network to effectively learn the characteristics of the sources in the different classes.
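A minimal sketch of this augmentation step (using scipy for the rotation; an illustration of the procedure rather than the exact code used) is:

```python
import numpy as np
from scipy.ndimage import rotate

rng = np.random.default_rng(42)

def augment(image_256):
    """Randomly rotate a 256x256 cutout about its centre and apply random
    flips, then crop to the final 128x128 size."""
    angle = rng.uniform(0.0, 360.0)                  # random angle, 0 to 2*pi
    out = rotate(image_256, angle, reshape=False, order=1)
    if rng.random() < 0.5:
        out = np.flipud(out)                         # random vertical flip
    if rng.random() < 0.5:
        out = np.fliplr(out)                         # random horizontal flip
    c = (out.shape[0] - 128) // 2
    return out[c:c + 128, c:c + 128]                 # central 128x128 crop
```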
Figure 5: Example of the augmentation process, where sources undergo random rotations and horizontal and vertical flips. This is done after sigma clipping (1$\sigma$–30$\sigma$ linear in this example) on 256$\times$256 pixel images before cropping them to 128$\times$128 pixels.

The augmentation process is exclusively applied to the training set and only to the minority class, as mentioned previously. We experimented with increasing the number of sources in the dataset by factors of two, three, and five relative to the original dataset. The validation and test sets remained unaffected and always contained the same number of sources, regardless of augmentation. The datasets were constructed using the same sources, but for each augmentation factor, new single-component sources from the majority class were added. When using the augmented datasets, it was necessary to adjust the learning rate. We found that augmenting the dataset to three times its original size was sufficient to prevent overfitting while achieving good results, as can be seen from Figure 6. For the training set at three times its original size, class MC contains 18,789 sources, which is three times the number of MC sources in the original dataset (excluding any MC source for which at least one of the source components was in the SISG group). The number of sources in class S is also 18,789, but in this case these correspond to 18,189 single-component sources and 600 blended sources. (Information about the dataset splitting is provided in the online table: the column mc_dl_dataset indicates whether a source belongs to none of the sets (0), the training set (1), the test set (2), or the validation set (3).) In the context of the multi-modal model, it was necessary to replicate the feature values for every augmented image, ensuring that they align with each respective instance. Furthermore, we also ensured that the dataset was properly shuffled when training the classifier.

### 3.5 Model optimisation and final architecture adopted

In order to investigate if it is possible to optimise the model, different aspects were analysed. These comprise architectural variations and hyperparameter adjustments. Changes to the structure included investigating the ANN width and depth (by varying the number of layers and changing the number of neurons in each layer), the removal of layers in the CNN, and the presence or absence of layer normalisation or batch normalisation. Adding layers to the ANN part of the model, specifically ranging from 2 to 5 layers, did not result in improvements. On the other hand, changing the number of hidden neurons in each layer (64, 128, 512, and 256) showed that using 256 neurons resulted in higher performance (with a further reduction in the dropout rate from 50 per cent to 25 per cent). As a result, the adopted ANN part of the model is a two-layer ANN with 256 neurons in each layer. Regarding the CNN module, the use of batch normalisation following each convolutional or dense layer, either as a substitute for or in combination with dropout, led to a decline in performance. Furthermore, we investigated reducing the depth of the CNN. However, it was observed that the elimination of the first layer led to overfitting and a decrease in overall performance.

Figure 6: The effect of augmentation when training with the SISG sources removed from the dataset. The coloured lines represent the learning curves for 30 epochs of training, for illustration.
The model trained with all types of sources achieves greater accuracy (yellow line), but this is also because class S contains about 60 per cent of sources that are easy to classify; see the main text for a discussion. The performance drops and shows major discrepancies between the training and validation sets when the SISG sources are removed (blue line), but augmentation helps to compensate for this effect (red line).

Furthermore, we explored alternative hyperparameters besides the baseline ones defined in Section 3.1.1. One of the experiments involved testing different learning rates, both static and variable, and using alternative optimisers, which iteratively adjust the weights of the network and/or the learning rate during training to find the minimum error for a given problem. Using the Adam optimiser (Kingma & Ba, 2014) yielded inferior performance, while using stochastic gradient descent (SGD; e.g. Bottou, 2010), particularly when used with momentum, demonstrated superior performance. The best results were achieved with SGD, which is an optimiser that adapts the weights but not the learning rate. The weights were updated using Nesterov momentum (Sutskever et al., 2013). Different batch sizes were evaluated, since smaller batch sizes tend to result in higher performance, although the extent of their effectiveness depends on the GPU being used, since very small batch sizes may cause memory problems. Batch sizes of 16, 32, and 128 were assessed. Results were indeed better for smaller batches, and 32 was chosen as the best value that did not cause significant computational problems. Additionally, the optimisation process involved reviewing the number of training epochs and possible early stopping. Training the model for a higher number of epochs (more than 50) resulted in validation-set accuracy above 92 per cent, with no significant differences in performance on the training set, as can be seen from Figure 7. We identified an interval of 10 epochs, ranging from the 60th to the 70th epoch, which led to the most favourable results. These epochs show strong performance and smaller overfitting, with a discrepancy between the validation and training sets of less than 1.5 per cent. It was also observed that training below this range leads to a decline in performance, with the accuracy dropping below 92 per cent. The chosen epoch for stopping training was epoch 64, as this resulted in only a minor difference of 1.083 per cent between the training and validation sets. This corresponds to a training accuracy of 93.6 per cent and a validation accuracy of 92.5 per cent.

Figure 7: Learning curve for the final adopted model after optimisation. The model reaches about 91 per cent accuracy after only 20 epochs of training on both the training and validation sets. Training for longer gives about 1 per cent improvement, to 92 per cent accuracy on the validation set. The difference in performance with respect to the training set grows with an increasing number of epochs, which is a clear indication that the model may be overfitting. However, as we can see from Table 2, it is worth training for longer, since the performance on both the validation and test sets ends up being very similar, and so training for longer helps improve the model by about 1 per cent in accuracy. We adopted epoch 64, selected from the 60–70 range of epochs where the performance seems to stabilise.
The accuracy reaches a plateau on the validation set and does not seem to improve beyond about 92 per cent.

Figure 8 provides a schematic representation of the adopted architecture and outlines the steps taken to achieve the final model. These comprise 1) building the dataset, 2) creating the CNN module, 3) creating the ANN module, and 4) assembling the multi-modal model (including optimisation). The model inputs a 3-channel radio image into a 4-block CNN and a set of features into a 2-layer ANN with 256 neurons each. Each convolutional layer has a kernel of 3$\times$3, padding of 1, and stride of 1 (with the exception of the first layer of the first two blocks of the CNN, which have a stride of 2), followed by a ReLU activation function. The maxpooling layer has a kernel size of 2$\times$2, a stride of 1, and padding of 1. The outputs of the CNN are then concatenated with the outputs of the ANN and passed through a set of two dense layers with 64 neurons each before being fed into a softmax function, which outputs a probability of the source being a MC source or not. The model was trained for 64 epochs with a batch size of 32, an SGD optimiser with a Nesterov momentum of 0.9, and a learning rate of 0.0001 without decay. The number of filters in each convolutional layer is indicated in the figure, as is the amount of dropout applied.

Figure 8: Sequence of steps employed to construct the final model (left), and the adopted model architecture (right). It consists of a multi-modal architecture that inputs a 3-channel image of 128$\times$128 pixels into a 4-block CNN (very similar to the Becker et al. (2021) architecture) and a set of features into a 2-layer ANN. The model outputs the probability of a source being a multi-component (class MC) or a single-component source (class S). More details about the architecture and the model hyperparameters can be found in the main text.

### 3.6 Final model performance

Performance metrics using the optimised model trained on the augmented, balanced dataset are presented in Table 2, for both the validation and test sets. The value adopted for the threshold is 0.5, which is commonly used for balanced datasets, and the metrics used are accuracy, precision, recall, and F1-score, as explained in Appendix A. Given that the dataset adopted for training the model was created with the SISG removed, the results presented here are for a dataset where the SISG were removed as well. As can be seen from the table, the performance on the validation and test sets is very similar across all of the metrics, which shows the model is able to generalise to unseen data. Overall, the model favours recall on class MC (and precision on class S), which is the quantity we want to optimise. Our goal is to maximise the number of correctly identified MC sources because this will ensure accurate source flux measurements: if MC sources are sent to be cross-matched automatically without prior analysis, the source properties will be wrong. At the same time, we want to keep the number of sources wrongly identified as MC sources low, either because the source component association algorithm may fail on those and/or because we would have to analyse those sources manually, grouping and/or cross-matching them. According to the recall values obtained on the validation and test sets, the model is able to correctly identify 94 per cent of the sources that are MC sources.
Of the sources that are classified as not being MC sources, about 94 per cent are indeed not MC sources, as given by the precision obtained for class S on both the validation and test sets. Despite the increased number of complex sources in the augmented dataset (which is accompanied by the same number of single-component sources), the classifier effectively differentiates between the various classes. This shows the ability of the classifier to handle rotation invariance, since about 60 per cent of the sources in class MC underwent rotations and flips. Further tests of the final model, confirming that it is invariant under rotations and reflections, are discussed in Appendix B.

Table 2: Performance on a balanced dataset for the final model with SISG sources removed. The validation and test sets contain 2,685 and 2,683 sources, respectively, with an equal distribution of sources between class MC and class S as defined in Section 2.2. The results show the accuracy, precision, recall, and F1-score for the two classes for a decision threshold of 0.5.

| | Validation set | Test set |
| --- | --- | --- |
| Accuracy | 0.925 | 0.925 |
| F1-score MC | 0.926 | 0.926 |
| F1-score S | 0.924 | 0.923 |
| Precision MC | 0.914 | 0.911 |
| Precision S | 0.937 | 0.939 |
| Recall MC | 0.939 | 0.941 |
| Recall S | 0.911 | 0.908 |

## 4 Application to the full LoTSS-DR1 dataset

In this section we apply the model to the full LoTSS DR1 dataset. The LoTSS datasets differ from the data used to train and test the model both in terms of class balance and the type of sources that make up the classes, since the SISG sources were removed from training. Class imbalance happens when one of the classes is severely underrepresented, which is the case for MC sources in the real LoTSS datasets. The classes defined are highly imbalanced, with less than 3 per cent of the sources being MC sources. This effect is commonly counteracted with threshold moving, which can be done by evaluating the metrics we intend to improve and choosing a more suitable threshold value. However, it can be observed that the use of a training set where the SISG sources are removed already goes some way towards counterbalancing the class asymmetry, and with our desire to maximise recall on the MC class, suitable thresholds are found to be around 0.5 (the default threshold value for balanced datasets), as discussed next.

### 4.1 Performance as a function of the threshold

In order to investigate if 0.5 is the appropriate value to discriminate between the classes, we examined the performance of the model on the LoTSS DR1 sample using different threshold values. As outlined in more detail by Alegre et al. (2022), corrections are applied in cases where at least one of the source components is flagged as being a MC source: in these cases, although other components of the same MC source may not themselves be identified as MC (and hence would be incorrectly classified as false negatives; a false negative (FN) is a class MC PyBDSF source that the model incorrectly classifies as class S, a false positive (FP) is a source that is incorrectly classified as a MC source but is actually a class S one (either a single-component source or a blended detection), and true positives (TP) and true negatives (TN) are sources that the model has correctly identified, corresponding to class MC and class S sources, respectively), these components will be re-found as part of the examination of the identified MC component. To account for this, following Alegre et al.
(2022), we remove these sources from the false negative (FN) category.

Figure 9: Performance of the model on the LoTSS DR1 sample plotted for different threshold values with corrections (solid line) and without corrections (dashed line). Corrections are applied when one source component is identified as part of an MC source, allowing the other source components in the same MC source to be recovered, even if they themselves are false negatives. Please see the text for a more detailed explanation. Left: accuracy, recall, and precision. Right: true negative (TN), true positive (TP), false positive (FP), and false negative (FN) counts on a logarithmic scale. The results correspond to the model applied to the full dataset, where sources in the SISG category were automatically assigned to class S, i.e. not being part of a MC source.

Figure 9 shows the results of applying the final model to the LoTSS dataset. It can be seen that, in general, the model favours recall over precision unless the threshold is above 0.9. Recall is the metric the model was intended to prioritise, and it always shows high values close to unity up to a threshold of 0.6. For thresholds around 0.5 the number of FN reaches values around 100, and it is higher for higher thresholds, reaching values close to 200 at a threshold of 0.62, which is where the numbers of TP and FP sources balance. A 0.5 threshold shows a good performance for recall and does not compromise precision too much, so this is the threshold value adopted. Depending on the choice of the metrics one intends to optimise, a sensible value range would be between about 0.5 and 0.6 in order to reduce the number of FP, since the true positive rate (TPR) decreases towards higher thresholds, as will be discussed next. In Figure 10, we show the Receiver Operating Characteristic (ROC) curve, where the FPR corresponds to the proportion of class S sources that are incorrectly classified as being MC sources, and the TPR corresponds to the proportion of MC sources that are correctly identified by the model (see Appendix A for performance metrics). The FPR values are always very low, but this is because there are many single-component sources in the dataset and therefore many sources that are TN. The adoption of a threshold value of 0.5 (blue and red crosses in Figure 10) leads to a FPR of nearly 4 per cent, corresponding to approximately 10,000 sources of class S. Only for higher threshold values does the number of FP start to decrease (which can be seen in Figure 9), and therefore the FPR decreases. This shows that only for thresholds above about 0.8 is there a significant reduction in the number of FP sources and in the FPR. On the other hand, the TPR values are always very high, decreasing only towards higher thresholds. This is because the number of TP is roughly constant across thresholds (see Figure 9), decreasing only for threshold values close to unity, and the number of FN is always low in comparison. However, the FN counts start to increase for higher thresholds, and therefore the TPR decreases. For the 50 per cent threshold adopted, this means that almost all the MC sources are being accurately identified, with only a very small number of MC sources being missed by the model (see also Figure 11).

Figure 10: Receiver operating characteristic (ROC) curve, with the FPR (FP/(FP+TN)) on the x-axis and the TPR (TP/(TP+FN)) on the y-axis, plotted for different threshold values (colour coded).
These correspond to values for which corrections are applied (filled markers) or not applied (empty markers). Overall, the classifier shows outstanding performance, and corrections improve the model for both TPR and FPR. Note that the plot corresponds to a zoom-in on the left-hand side of the ROC curve, with only relevant values of the axes shown. The cross markers correspond to the 0.5 threshold adopted.

### 4.2 Results at a threshold of 0.5

Using the adopted threshold value of 0.5 (in the online table, the column mc_prediction_0.5 gives the predictions for a threshold value of 0.5, with the following values: (0) sources predicted as class S; (1) sources predicted as class MC; and (2) sources corrected, i.e. recovered to class MC, as described in Section 4.1; the underlying prediction values are given in the mc_probability_multi column), we analyse the performance of the classifier across the entire dataset and on different categories of sources. This is done by analysing the results of the Confusion Matrix (CM; see Appendix A), where the values in the CM correspond to the number of sources in the TP, TN, FP, and FN classes, defined earlier in this section. The results of the model applied to the full LoTSS DR1 sample can be seen in Figure 11. The figure also compares how the model performs when confronted with the SISG, which were excluded during training. As will be explained next, the adopted strategy consists of training the model with the SISG removed, applying it to all DR1 sources except for the SISG, and then setting the SISG to class S (i.e. sources that are automatically classified as not being MC sources).

Figure 11: Confusion matrix for all the sources in LoTSS DR1 using the final adopted model and a threshold value of 50 per cent. Left: results for all the sources in LoTSS DR1 setting the SISG to class S. Middle: results on DR1 with the SISG sources removed. Right: results for the SISG sources only. In all panels, the values in the square brackets correspond to the numbers of FP before applying the corrections.

The left CM in Figure 11 shows the results of the final model (in which all SISG sources are assigned to class S, i.e. not MC sources), while the middle and right CMs correspond to the results when the final model is applied to non-SISG and SISG sources, respectively. These result from training the model with SISG sources removed, calculating a prediction for all LoTSS DR1 sources, and then separating the data into SISG and non-SISG sources. The left CM is subtly different from the sum of the values in the different cells of the middle and right CMs because SISG sources were all automatically set to class S. Therefore, all the SISG sources that had been classified correctly (89 sources) or incorrectly (589 sources) as MC sources will contribute to the values in the left column of the left-hand CM. By setting SISG to class S, the classification is improved by saving almost 600 sources from the FP, even though 10 more sources (after correction) end up as FN. However, this represents a good trade-off, since those 10 components correspond to at most 5 physical radio sources because, by definition, each MC source is made up of at least 2 source components. Using the adopted strategy and the 0.5 threshold, the accuracy of the model when applied to the imbalanced LoTSS DR1 dataset is 96.62 per cent.
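To make the bookkeeping of this subsection concrete, the following minimal Python sketch applies the 0.5 threshold and the corrections of Section 4.1 to a catalogue. It is an illustrative reconstruction under assumptions, not the published code: mc_probability_multi matches the online-table column mentioned above, whereas true_mc (ground-truth class) and mc_source_id (the physical MC source a component belongs to) are hypothetical column names.

```python
import pandas as pd

def apply_threshold_with_corrections(df: pd.DataFrame, threshold: float = 0.5) -> pd.DataFrame:
    """Flag PyBDSF sources as MC when their predicted probability exceeds the
    threshold, then recover ('correct') the remaining components of any true MC
    source in which at least one component was flagged (cf. Section 4.1)."""
    df = df.copy()
    df["mc_prediction"] = (df["mc_probability_multi"] >= threshold).astype(int)
    for _, group in df[df["true_mc"] == 1].groupby("mc_source_id"):
        if group["mc_prediction"].any():
            missed = group.index[group["mc_prediction"] == 0]
            df.loc[missed, "mc_prediction"] = 2  # corrected, i.e. recovered to class MC
    return df

def confusion_counts(df: pd.DataFrame) -> dict:
    """Confusion-matrix counts, treating corrected components (label 2) as MC."""
    pred_mc = df["mc_prediction"] > 0
    true_mc = df["true_mc"] == 1
    return {
        "TP": int((pred_mc & true_mc).sum()),
        "FP": int((pred_mc & ~true_mc).sum()),
        "FN": int((~pred_mc & true_mc).sum()),
        "TN": int((~pred_mc & ~true_mc).sum()),
    }
```

Whether corrected components are counted as TP or simply removed from the FN tally is a bookkeeping choice; the sketch takes the former, which mirrors how the corrections reduce the FN counts shown in Figures 9 and 11.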
These results demonstrate that the overall performance when applying the model to other data is also improved if the SISG sources are automatically assigned to class S. Based on this conclusion, the SISG sources can also be excluded from the data processing (with their predictions set to class S). This results in only about 40 per cent of the data requiring processing.

### 4.3 Performance as a function of source properties

In order to understand the performance of the model and its ability to distinguish between class MC and class S, we evaluate the performance of the classifier as a function of source characteristics and contextual information. This is illustrated in Figure 12.

Figure 12: Performance as a function of radio source properties, with accuracy displayed in red and recall for the MC class in blue. The histograms show that the final model adopted has high accuracy across the different properties being analysed, particularly for smaller and fainter sources and those with no near neighbours. The values of recall, however, are always significantly above 0.95, with the sole exception of when there is no nearest neighbour within 60 arcsec. High values of recall are due to a consistently high number of sources being identified as MC sources and a low number being missed. The values of precision (not plotted) are consistently weaker and have values around 0.5 across all the parameter space, indicating a large number of false positives (as also seen in the confusion matrix). See the text for a discussion.

Regarding the angular sizes of the PyBDSF source being analysed (panel a), accuracy is consistently above 95 per cent for sources with major axis up to around 10 arcsec, indicating a successful identification of class S sources through a high number of TN. These types of sources correspond to the majority of sources in the LoTSS surveys. Interestingly, even at these small angular sizes where single-component sources dominate the sample, the recall for MC sources remains high. The accuracy drops steeply as the source size increases, reaching around 75 per cent for sources of 25-30 arcsec and remaining at this value for larger sources (although there are relatively few sources of this size in the sample). The drop in accuracy for larger sources is primarily due to a decrease in the proportion of TN, that is, sources that are actually single-component being correctly identified, because there are fewer single-component sources at these sizes. The accuracy is above 90 per cent for sources with total flux density (panel b) below 4-5 mJy, but it drops for brighter sources, particularly for sources brighter than 15 mJy, where the performance drops to about 85 per cent. High performance at lower flux densities is due to a high number of TN, since the majority of the sources in LoTSS are faint. Interestingly, a higher proportion of FP can also be found in the fainter bins, with extended and faint emission being more likely to be part of a MC source as opposed to bright emission. Lower performance at higher flux densities is attributed to a smaller fraction of TN. Recall of MC sources remains consistently high at all flux densities. The classifier shows high performance values for sources where the distance to the LR counterpart (panel c) is low, in particular for values below 1-2 arcsec.
Small values of LR match distance (and high LR values) indicate a genuinely associated optical match, suggesting that the source is a single-component source in most cases (the alternative being the core of a MC source). Therefore, the classifier is able to correctly identify these sources as class S sources. The accuracy drops sharply for higher values (but it is always above 90 per cent) as the proportion of TN falls. The highest proportion of sources being misidentified as MC sources is found in the 3 smallest LR distance bins. A similar conclusion can be drawn when inspecting the performance using the nearest neighbour (NN) LR distance (panel d). Smaller NN LR distances have a higher probability of the NN source being a MC source, and therefore a higher probability of the source itself being a MC source, since it takes two source components to make up a MC source. The performance drops for higher values of the NN LR distance, as happens with the source LR distance, due to a drop in the number of TN. In all cases the accuracy is always above 90 per cent for higher LR distance matches. The performance as a function of the NN properties (panel e) is evaluated further, since the presence of a NN is an indication that the source might be clustered and potentially has a higher chance of being part of a MC source. If the first NN is more than around 50 arcseconds away, the accuracy is close to 100 per cent, indicating that the majority of these sources do not need to be grouped and are correctly identified by the model as class S sources. The recall of MC sources for such distant nearest neighbours is at its lowest here of all the parameter space examined in Figure 12, but still remains above 90 per cent. Smaller distances to the NN suggest a more crowded environment and increase the chances of the source being a MC source, and therefore the accuracy of the classifier drops due to the mixed population. When evaluating the performance as a function of the number of NNs within 45 arcseconds (panel f), the classifier reaches accuracy values close to 100 per cent when there are no NNs within this radius: the chances of such a source being class MC are comparatively low, and class S sources, the majority of which have no NN within 45 arcseconds, are accurately identified. As the number of NNs within the radius increases, the accuracy drops, mainly because there are fewer single-component sources in these bins.

### 4.4 Performance below 4 mJy flux density

Due to the number of sources in LoTSS, in LoTSS DR2 sources with a flux density of less than 4 mJy were not sent for visual inspection (Hardcastle et al., 2023). This is mainly because priority was given to potential WEAVE-LOFAR (Smith et al., 2016; Jin et al., 2023) target sources (which are brighter than 8 mJy) for spectroscopic follow-up. Furthermore, there are many sources below 4 mJy, most of which are single-component sources (see Williams et al., 2019). Those faint sources which are multi-component are, in general, very difficult to identify, so inspecting them would represent a huge effort without much return. Hence they have not been inspected in DR2, and it is therefore important to investigate the performance of the model below this 4 mJy threshold. The ability of the classifier to identify MC sources among fainter (below 4 mJy) and brighter (above 4 mJy) sources can be seen in Figure 13.
Figure 13: Confusion matrices for the entire LoTSS DR1 dataset, for sources below (left) and above (right) 4 mJy. The classifier demonstrates higher accuracy when classifying fainter sources, achieving 97.89 per cent accuracy. The accuracy drops to 86.61 per cent for brighter sources.

There is a much larger population of sources below 4 mJy compared to those above it, and the large values of accuracy for faint sources are because a considerable number of them are correctly classified as single-component sources. The number of sources flagged as MC sources (both correct and incorrect classifications) has a similar order of magnitude for sources above and below 4 mJy. Therefore, the performance of the classifier is comparable between these two groups, with the large majority of the genuine MC sources being correctly flagged as MC, and a similar number of sources being incorrectly flagged as MC. This is more pronounced for fainter sources, but without major differences. Furthermore, the model successfully identifies nearly all the sources that necessitate component association, missing less than 2 per cent of those even if the source is faint. The distribution of sources in each of the cells of the confusion matrix as a function of flux density can be seen in Figure 14 (note the logarithmic y-axes). At lower flux densities, the abundance of single-component sources is higher, and the number of correctly classified class S sources (TN) is also higher. There is a reduction in the number of TN sources as the flux density increases, but this is because there are fewer bright sources overall. For sources with lower flux densities there is a greater number of FN, but at high flux densities a higher proportion of sources are multi-component than at low flux densities. This is also why accuracy drops at high flux densities (see Figure 12). This trend is also evident when examining the distribution of sources in the TP and FP histograms. The occurrence of FP is predominantly observed at lower flux densities, but this is also because there are many more sources at these flux values.

Figure 14: Confusion matrix counts in terms of flux density. Each of the cells corresponds to: TN (top left, blue), FP (top right, yellow), FN (bottom left, red), and TP (bottom right, green).

## 5 Conclusions and future outlook

The number of faint sources with intricate radio structures is increasing in modern radio continuum surveys. Sometimes source components can be mistakenly identified as independent sources despite being components of the same physically connected radio source. This work introduces a multi-modal deep learning classifier specifically designed to identify these MC sources. These are sources that require component association and for which currently there are no automatic identification methods available. This work has implications for future surveys, as it becomes impractical to select and cross-identify all sources using conventional astronomy techniques, which commonly involve substantial amounts of visual analysis. The work also highlights the effectiveness of deep learning algorithms, particularly when combining data from diverse sources, as a valuable approach for handling modern radio surveys. The model developed in this work combines a convolutional neural network and an artificial neural network into a single architecture.
The model incorporates radio images and source parameters of the radio sources and their nearest neighbours, as well as parameters of the possible optical statistical counterpart. The model is trained using LoTSS DR1 manual annotations to discriminate between a) sources that are part of MC sources (which will always be difficult to identify and cross-match) and b) relatively compact sources, which can be processed in a more automatic way using statistical methods or machine learning methods such as those of Alger et al. (2018), and are also typically unresolved single-component sources. We used 9,046 MC PyBDSF sources out of the total 323,103 PyBDSF sources identified in LoTSS DR1. While 70 per cent of the MC sources were used for training purposes, the remaining 30 per cent was split equally for validating and testing the model. The dataset was augmented by performing rotations and flips on the MC sources and using a proportional number of random single-component sources, in order to achieve a balanced dataset. The dataset after augmentation comprised 42,946 sources, of which 37,578 were used for training, 2,685 for validation and 2,683 for testing (we defined the validation and test sets to be the same size as the ones before augmenting the training set). In this work, we employ active learning by excluding SISG sources from the dataset before the training process. These sources do not add diversity to the dataset and can be predominantly cross-matched using statistical methods. By removing the SISG sources, we increase the ability of the model to detect MC sources and save processing time, since these correspond to approximately 60 per cent of the LoTSS data. The model demonstrates good results, achieving a recovery rate of 94 per cent for MC sources in the balanced dataset and an overall accuracy of almost 97 per cent in the real imbalanced dataset consisting of 323,103 sources. The performance of the classifier is closer to 100 per cent for small and faint sources, dropping for sources brighter than 2-3 mJy and sources larger than 10 arcseconds. The classifier shows excellent performance (between 96 per cent and 99 per cent) for sources with smaller distance to an optical counterpart, in particular if the source itself or the NN has a LR match distance below 1-2 arcsec, which is an indication that the source (and its NN) are not part of a MC source. The classifier precisely identifies class S sources with 99 per cent accuracy if there are no NNs within 45 arcseconds. Furthermore, if the NN is smaller than 10 arcseconds the classifier performs closer to 98 per cent. We evaluated the performance of the classifier for sources below 4 mJy, since those are not being visually inspected in LoTSS DR2 (Hardcastle et al., 2023), and a good performance is achieved for both the brighter (86.6 per cent accuracy) and fainter (97.9 per cent accuracy) regimes, with many more fainter sources being correctly identified as class S sources, since the majority of the sources in LoTSS are indeed faint and single-component sources. These results indicate that the reliability of the classifications heavily depends on the distribution of the source characteristics within the dataset. Our model already exhibits strong performance. However, deep learning is a flourishing field, with new architectures and methods being developed rapidly, and there are a variety of ways in which the model could potentially be improved.
Investigating different types of fusion could lead to improvements: for example, the architecture could implement a fusion module where the weights of the CNN and ANN are shared across the network instead of performing a single late fusion. Another option could be to construct an ensemble of classifiers to enhance the model’s performance, which could be done with any other type of machine learning or deep learning model. Furthermore, the architecture could be optimised using AutoML, which would help automate the network design process and optimise hyperparameters. Conducting feature exploration, such as grouping features or designing new features, could improve the ANN part of the model. Finally, incorporating images at different wavelengths, such as optical and infrared, could be explored, although their impact is expected to be more important for source cross-matching than for this source classification task in particular. The construction of the dataset could be evaluated in order to examine the performance of the model when blended sources are placed in the same class as MC sources, or when single-component, blended, and MC sources are treated as three independent classes. This would raise the question of whether the radio source detector was accurate in identifying the source itself. Furthermore, it would be interesting to assess whether additional training examples improve the overall performance, which could be achieved using the outputs from the citizen science annotation of LoTSS DR2 (Hardcastle et al., 2023) to train and evaluate the model. Mostert et al. (2024) assembled a pipeline to automatically group and cross-match multi-component radio sources. The source association part of the pipeline builds on the approach of Mostert et al. (2022) for component association. However, while that algorithm performs well on genuine MC sources, if single-component sources are included, then 7.7 per cent of them get erroneously grouped with unrelated PyBDSF sources. In addition, they assume that the majority of their galaxies will belong to the type of sources identified by Alegre et al. (2022) as the ones that cannot be matched using the LR technique. While this is expected, Alegre et al. (2022) do not specifically address whether a source requires radio component association. The present work will help to tackle this question by determining the specific subset of sources on which the source association code should be executed. This will also allow the pipeline to be expanded to include fainter and smaller sources than it does now. Our results will therefore improve the overall pipeline for automatic source association and identification in LoTSS. The proposed methodology would involve three main steps. Firstly, the findings of the present study are used to identify the PyBDSF sources that are most likely to be part of a MC source. Secondly, the Mostert et al. (2022) component association code is executed to define the physical radio sources (possibly extending the method to smaller and fainter sources). This uses the output of Alegre et al. (2022) to eliminate unrelated single-component sources within the bounding box of the extended source, for which the threshold value can be adjusted as well. Finally, after the sources have been associated, the Barkus et al. (2022) code is used to obtain the optical identifications using the ridgeline approach. In conclusion, in LoTSS DR1 and LoTSS DR2, a substantial effort was put into analysing the sources that require component association.
This was done manually on LGZ by associating components and cross-matching. Therefore, the outcomes of this work are of significant value for incorporating into pipelines for the processing of upcoming LoTSS data releases or other radio surveys. Furthermore, the results can be incorporated into diverse pipelines not only for automated cross-matching but also for identifying sources for further radio morphology classification or for the simple detection of radio sources (for example, by ensuring the radio properties correspond to actual sources). ## Acknowledgements LA is grateful for support from the UK Science and Technology Facilities Council (STFC) via CDT studentship grant ST/P006809/1. PNB and JS are grateful for support from the UK STFC via grants ST/R000972/1 and ST/V000594/1. The authors would like to express their gratitude to the referee for the valuable feedback that improved the clarity of the paper. The authors thank Deyan Petrov and Adam McCabe for their help in the early stages of this project. LOFAR data products were provided by the LOFAR Surveys Key Science project (LSKSP; https://lofar-surveys.org) and were derived from observations with the International LOFAR Telescope (ILT). LOFAR (van Haarlem et al., 2013) is the Low Frequency Array designed and constructed by ASTRON. It has observing, data processing, and data storage facilities in several countries, that are owned by various parties (each with their own funding sources), and that are collectively operated by the ILT foundation under a joint scientific policy. The ILT resources have benefitted from the following recent major funding sources: CNRS-INSU, Observatoire de Paris and Université d’Orléans, France; BMBF, MIWF-NRW, MPG, Germany; Science Foundation Ireland (SFI), Department of Business, Enterprise and Innovation (DBEI), Ireland; NWO, The Netherlands; The Science and Technology Facilities Council, UK; Ministry of Science and Higher Education, Poland. ## Data availability The datasets were derived from LoTSS Data Release 1 publicly available at https://lofar-surveys.org/dr1$\\_$release.html. The table of source features derived for this work is provided as supplementary online material, along with the model predictions. The model is available by request to the lead author. ## References * Alegre et al. (2022) Alegre L., et al., 2022, MNRAS, 516, 4716 * Alger et al. (2018) Alger M. J., et al., 2018, MNRAS, 478, 5547 * Alhassan et al. (2018) Alhassan W., Taylor A. R., Vaccari M., 2018, MNRAS, 480, 2085 * Aniyan & Thorat (2017) Aniyan A. K., Thorat K., 2017, ApJS, 230, 20 * Baltrušaitis et al. (2019) Baltrušaitis T., Ahuja C., Morency L.-P., 2019, IEEE Transactions on Pattern Analysis and Machine Intelligence, 41, 423 * Barkus et al. (2022) Barkus B., et al., 2022, MNRAS, 509, 1 * Becker et al. (1995) Becker R. H., White R. L., Helfand D. J., 1995, ApJ, 450, 559 * Becker et al. (2021) Becker B., Vaccari M., Prescott M., Grobler T., 2021, MNRAS, 503, 1828 * Best et al. (2006) Best P. N., Kaiser C. R., Heckman T. M., Kauffmann G., 2006, MNRAS, 368, L67 * Best et al. (2007) Best P. N., von der Linden A., Kauffmann G., Heckman T. M., Kaiser C. R., 2007, MNRAS, 379, 894 * Bottou (2010) Bottou L., 2010, in Lechevallier Y., Saporta G., eds, Proceedings of COMPSTAT’2010. Physica-Verlag HD, Heidelberg, pp 177–186 * Bowles et al. (2021) Bowles M., Bromley M., Allen M., Scaife A., 2021, arXiv e-prints, p. arXiv:2111.04742 * Bowles et al. (2023) Bowles M., et al., 2023, MNRAS, 522, 2584 * Chambers et al. (2016) Chambers K. 
C., et al., 2016, arXiv preprint arXiv:1612.05560 * Condon et al. (1998) Condon J. J., Cotton W. D., Greisen E. W., Yin Q. F., Perley R. A., Taylor G. B., Broderick J. J., 1998, AJ, 115, 1693 * Cuoco et al. (2021) Cuoco E., Patricelli B., Iess A., Morawski F., 2021, Universe, 7, 394 * Cuoco et al. (2022) Cuoco E., Patricelli B., Iess A., Morawski F., 2022, Nature Computational Science, 2, 479 * Cutri et al. (2013) Cutri R. M., et al., 2013, VizieR Online Data Catalog, p. II/328 * Dewdney et al. (2009) Dewdney P. E., Hall P. J., Schilizzi R. T., Lazio T. J. L. W., 2009, IEEE Proceedings, 97, 1482 * Dieleman et al. (2015) Dieleman S., Willett K. W., Dambre J., 2015, MNRAS, 450, 1441 * Duncan et al. (2019) Duncan K. J., et al., 2019, A&A, 622, A3 * Fabian (2012) Fabian A. C., 2012, ARA&A, 50, 455 * Fanaroff & Riley (1974) Fanaroff B. L., Riley J. M., 1974, MNRAS, 167, 31P * Goodfellow et al. (2016) Goodfellow I., Bengio Y., Courville A., Bengio Y., 2016, Deep learning. Vol. 1, MIT press Cambridge * Hale et al. (2019) Hale C. L., Robotham A. S. G., Davies L. J. M., Jarvis M. J., Driver S. P., Heywood I., 2019, MNRAS, 487, 3971 * Hardcastle & Croston (2020) Hardcastle M. J., Croston J. H., 2020, New Astron. Rev., 88, 101539 * Hardcastle et al. (2019a) Hardcastle M. J., et al., 2019a, MNRAS, 488, 3416 * Hardcastle et al. (2019b) Hardcastle M. J., et al., 2019b, A&A, 622, A12 * Hardcastle et al. (2023) Hardcastle M. J., et al., 2023, A&A, 678, A151 * Heckman & Best (2014) Heckman T. M., Best P. N., 2014, ARA&A, 52, 589 * Hill et al. (2008) Hill G. J., et al., 2008, in Kodama T., Yamada T., Aoki K., eds, Astronomical Society of the Pacific Conference Series Vol. 399, Panoramic Views of Galaxy Formation and Evolution. p. 115 (arXiv:0806.0183) * Hinton et al. (2012) Hinton G. E., Srivastava N., Krizhevsky A., Sutskever I., Salakhutdinov R. R., 2012, arXiv e-prints, p. arXiv:1207.0580 * Hong et al. (2023) Hong S., Zou Z., Luo A. L., Kong X., Yang W., Chen Y., 2023, MNRAS, 518, 5049 * Hossin & Sulaiman (2015) Hossin M., Sulaiman M. N., 2015, International journal of data mining & knowledge management process, 5, 1 * Ivezić et al. (2019) Ivezić Ž., et al., 2019, ApJ, 873, 111 * Jin et al. (2023) Jin S., et al., 2023, MNRAS, * Khotanzad & Hong (1990) Khotanzad A., Hong Y., 1990, IEEE Transactions on Pattern Analysis and Machine Intelligence, 12, 489 * Khramtsov et al. (2022) Khramtsov V., Vavilova I. B., Dobrycheva D. V., Vasylenko M. Y., Melnyk O. V., Elyiv A. A., Akhmetov V. S., Dmytrenko A. M., 2022, Kosmichna Nauka i Tekhnologiya, 28, 27 * Kingma & Ba (2014) Kingma D. P., Ba J., 2014, arXiv e-prints, p. arXiv:1412.6980 * Kormendy & Ho (2013) Kormendy J., Ho L. C., 2013, ARA&A, 51, 511 * Lukic et al. (2018) Lukic V., Brüggen M., Banfield J. K., Wong O. I., Rudnick L., Norris R. P., Simmons B., 2018, MNRAS, 476, 246 * Maslej-Krešňáková et al. (2021) Maslej-Krešňáková V., El Bouchefry K., Butka P., 2021, MNRAS, 505, 1464 * Mohan & Rafferty (2015) Mohan N., Rafferty D., 2015, PyBDSF: Python Blob Detection and Source Finder (ascl:1502.007) * Mostert et al. (2022) Mostert R. I. J., Duncan K. J., Alegre L., Röttgering H. J. A., Williams W. L., Best P. N., Hardcastle M. J., Morganti R., 2022, A&A, 668, A28 * Mostert et al. (2024) Mostert R. I. J., et al., 2024, arXiv e-prints, p. arXiv:2405.00232 * Ngiam et al. (2011) Ngiam J., Khosla A., Kim M., Nam J., Lee H., Ng A. Y., 2011, in International conference on machine learning. pp 689–696 * Pinciroli Vago & Fraternali (2022) Pinciroli Vago N. 
O., Fraternali P., 2022, arXiv e-prints, p. arXiv:2205.00701 * Sabater et al. (2019) Sabater J., et al., 2019, A&A, 622, A17 * Samudre et al. (2022) Samudre A., George L. T., Bansal M., Wadadekar Y., 2022, MNRAS, 509, 2269 * Scaife & Porter (2021) Scaife A. M. M., Porter F., 2021, MNRAS, 503, 2369 * Shimwell et al. (2017) Shimwell T. W., et al., 2017, A&A, 598, A104 * Shimwell et al. (2019) Shimwell T. W., et al., 2019, A&A, 622, A1 * Shimwell et al. (2022) Shimwell T. W., et al., 2022, A&A, 659, A1 * Shorten & Khoshgoftaar (2019) Shorten C., Khoshgoftaar T. M., 2019, Journal of big data, 6, 1 * Smith et al. (2016) Smith D. J. B., et al., 2016, in SF2A-2016: Proceedings of the Annual meeting of the French Society of Astronomy and Astrophysics. pp 271–280 (arXiv:1611.02706) * Summaira et al. (2021) Summaira J., Li X., Shoib A. M., Li S., Abdul J., 2021, arXiv e-prints, p. arXiv:2105.11087 * Sutherland & Saunders (1992) Sutherland W., Saunders W., 1992, MNRAS, 259, 413 * Sutskever et al. (2013) Sutskever I., Martens J., Dahl G., Hinton G., 2013, in International conference on machine learning. pp 1139–1147 * Tang et al. (2019) Tang H., Scaife A. M. M., Leahy J. P., 2019, MNRAS, 488, 3358 * Tieleman & Hinton (2012) Tieleman T., Hinton G., 2012, University of Toronto, Technical Report, 6 * Vaezi Joze et al. (2019) Vaezi Joze H. R., Shaban A., Iuzzolino M. L., Koishida K., 2019, arXiv e-prints, p. arXiv:1911.08670 * Walmsley et al. (2020) Walmsley M., et al., 2020, MNRAS, 491, 1554 * Williams et al. (2019) Williams W. L., et al., 2019, A&A, 622, A2 * Wu et al. (2019) Wu C., et al., 2019, MNRAS, 482, 1211 * York et al. (2000) York D. G., et al., 2000, AJ, 120, 1579 * van Haarlem et al. (2013) van Haarlem M. P., et al., 2013, A&A, 556, A2

## Appendix A Performance Metrics for supervised classification

In classification problems, each example belongs to one of several classes. Binary classification has two classes, which are commonly labelled as positive and negative (or 1 and 0). Table 3 presents the “confusion matrix” for a binary classification problem, where the true positive TP and the true negative TN are the numbers of values which are correctly identified by the classifier, from the positive and negative classes, respectively. The false positive FP and false negative FN correspond to the remaining numbers of values classified as positive and negative, respectively, but which belong to the opposite class. The confusion matrix may be used to derive standard metrics by which the performance can be evaluated (see e.g. Hossin & Sulaiman, 2015, for a review).

| | Predicted Positive | Predicted Negative |
|---|---|---|
| True Positive | TP | FN |
| True Negative | FP | TN |

Table 3: Binary classification confusion matrix.

Accuracy is the most popular performance metric. It measures the fraction of sources which are correctly classified relative to the overall classifications: $Accuracy=\frac{TP+TN}{TP+TN+FP+FN}$ (1) When using a balanced dataset (i.e. when each of the classes has a similar number of examples), accuracy shows how well the classifier performs overall. However, for imbalanced datasets, the accuracy may not reflect the real performance of the model since it will be mostly determined by the values in the majority class. Metrics such as precision, recall, and the F1-score need to be used to assess the performance in the different classes individually.
Precision can be defined as the fraction of sources predicted as being from a certain class that are actually from that class: $Precision=\frac{TP}{TP+FP}$ (2) The recall (also known as sensitivity or True Positive Rate; TPR) is the fraction of sources from a certain class that are predicted correctly: $Recall\equiv TPR=\frac{TP}{TP+FN}$ (3) Both precision and recall have the number of TP in the numerator. While in precision the denominator is the number of all predicted positive values, in recall it is the number of all real positive values. This means that precision reflects how reliable the model is when predicting whether an element belongs to a particular class, while recall indicates how effectively the model recognises the elements from that class. A combination of precision and recall is given by the F1-score: $F1=\frac{2\cdot Precision\cdot Recall}{Precision+Recall}$ (4) A lower value for either precision or recall will be reflected in this value. Therefore, this score is useful for identifying significant discrepancies between these two metrics. To compute the Receiver Operating Characteristic (ROC) curve (see Figure 10) we also use the False Positive Rate (FPR), which corresponds to the fraction of sources from the negative class that are incorrectly classified: $FPR=\frac{FP}{FP+TN}$ (5)

## Appendix B Rotation invariance of the final model

We explore the ability of the model to handle rotational and reflection symmetry, as source classification should not depend on any particular source orientation. Khotanzad & Hong (1990) demonstrated that the Zernike moments exhibit inherent rotation invariance when extracted from a shape at different angles. In astronomy, this problem has been mostly addressed using CNNs to classify optical galaxies (e.g. Dieleman et al. 2015; Khramtsov et al. 2022), but it has recently received more interest in radio astronomy, since orientation biases can be particularly problematic for the automatic classification of radio morphology sources into FRI and FRII in large surveys. Scaife & Porter (2021) specifically designed CNNs to be group-equivariant, and Bowles et al. (2021) combined this with attention networks. In order to test for this effect, we investigate the classification of the same source when seen from a variety of orientations and flips. To inspect this, we used only the PyBDSF sources from the training set that belong to multi-component sources; each image was randomly rotated and flipped as explained in the augmentation process. We did this four times, obtaining a total of 6,277 PyBDSF sources. The predictions for these sources were then calculated and compared. We calculated the standard deviation of the predictions for each group of 4 sources; the two sources with the most extreme variation had standard deviations between 0.05 and 0.1, but for the vast majority of the sources the standard deviation was significantly below 0.01. We inspected the sources for which the probability showed higher differences, and the most extreme case corresponded to an example where part of the source was rotated outside of the image, with probabilities of being a MC ranging from 0.74 to 0.87. For the remaining sources, the differences seem less evident to the naked eye, with some emission obscured but still relevant for the classification. Nevertheless, for the majority of these sources, the predictions are skewed to one of the extremes, and so they do not translate into problems.
For a threshold of 0.5, only 14 sources ended up with a mix of classifications, and their predicted probabilities were all very close to the 0.5 value, just above or just below it. We can conclude that the algorithm is rotation-invariant, except when an important part of the source falls outside the cropped image for some rotation angles.
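The test described in this appendix can be sketched as follows. The helper names are hypothetical, the model is assumed to take an (image, features) pair and return two logits with class index 1 corresponding to MC (as in the earlier architecture sketch), and the rotations are simplified to multiples of 90 degrees rather than the arbitrary angles used for the augmentation.

```python
import numpy as np
import torch

def random_rotation_flip(image: np.ndarray) -> np.ndarray:
    """Randomly rotate a channel-first image by a multiple of 90 degrees in
    the spatial plane and randomly flip it; a simplification of the paper's
    arbitrary-angle augmentation."""
    image = np.rot90(image, k=np.random.randint(4), axes=(-2, -1))
    if np.random.rand() < 0.5:
        image = np.flip(image, axis=-1)
    return image.copy()  # copy resolves negative strides before tensor conversion

def prediction_spread(model, image: np.ndarray, features: np.ndarray, n_variants: int = 4) -> float:
    """Standard deviation of the MC probability over n_variants randomly
    rotated/flipped versions of the same 3-channel image."""
    probs = []
    model.eval()
    with torch.no_grad():
        for _ in range(n_variants):
            variant = torch.as_tensor(random_rotation_flip(image)).unsqueeze(0).float()
            feats = torch.as_tensor(features).unsqueeze(0).float()
            logits = model(variant, feats)
            probs.append(torch.softmax(logits, dim=1)[0, 1].item())  # class 1 assumed MC
    return float(np.std(probs))
```

Applying such a function to each MC source and inspecting the distribution of the returned spreads reproduces the kind of per-group standard deviations quoted above.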
# Robust replication initiation from coupled homeostatic mechanisms Mareike Berger Biochemical Networks Group, Department of Living Matter, AMOLF, 1098 XG Amsterdam, The Netherlands Pieter Rein ten Wolde Biochemical Networks Group, Department of Living Matter, AMOLF, 1098 XG Amsterdam, The Netherlands ###### Abstract The bacterium Escherichia coli initiates replication once per cell cycle at a precise volume per origin and adds an on average constant volume between successive initiation events, independent of the initiation size. Yet, a molecular model that can explain these observations has been lacking. Experiments indicate that E. coli controls replication initiation via titration and activation of the initiator protein DnaA. Here, we study by mathematical modelling how these two mechanisms interact to generate robust replication-initiation cycles. We first show that a mechanism solely based on titration generates stable replication cycles at low growth rates, but inevitably causes premature reinitiation events at higher growth rates. In this regime, the DnaA activation switch becomes essential for stable replication initiation. Conversely, while the activation switch alone yields robust rhythms at high growth rates, titration can strongly enhance the stability of the switch at low growth rates. Our analysis thus predicts that both mechanisms together drive robust replication cycles at all growth rates. In addition, it reveals how an origin-density sensor yields adder correlations. To maintain stable cell cycles over many generations, living cells must coordinate DNA replication with cell growth and cell division. Intriguingly, in nutrient-rich environments, the model organism Escherichia coli can even divide faster than the time it takes to replicate its entire chromosome [1, 2, 3, 4]. This apparent paradox was resolved by the model of Cooper and Helmstetter in which new rounds of replication are initiated before the previous round has finished [5] (Fig. 1 A). Donachie then predicted that replication is initiated at a constant volume per origin $v^{\ast}$ [6]. Initiating replication at a constant origin density ensures that DNA replication is initiated once per cell cycle per origin, which is a necessary condition for maintaining stable cell cycles at all growth rates (Fig. 1 A). Recent experiments at the population level showed that the average initiation volume per origin $v^{\ast}$ varies within a $\sim 50\%$ range over a tenfold change in the growth rate [7]. Moreover, single-cell measurements revealed that the initiation volume is one of the most tightly controlled cell-cycle parameters, varying by about $10\%$ for any measured growth rate [3, 8]. Yet, how the initiation volume is controlled so precisely, and what molecular mechanism gives rise to robust cell cycles over many generations remains despite extensive studies poorly understood [9, 10, 11, 12, 13]. To obtain insight into the mechanisms that control DNA replication and cell division, fluctuations in cell size have been studied [14, 15]. These experiments revealed that cells obey an adder principle, which states that cells add an on average constant volume independent of the birth volume during each cell cycle. It has been proposed that cell division control is tightly coupled to the control over replication initiation [3, 16, 17], via a sizer on replication initiation and a timer for cell division. 
Yet, recent experiments revealed the existence of two adders, one on cell division and the other on replication initiation, and that these two processes are more loosely coupled than hitherto believed [8, 18, 19, 20, 21, 22, 23]. While these phenomenological observations are vital because they constrain any model on the molecular mechanism for initiation and cell division control, no such molecular model has yet been presented that is consistent with the experimental data. Figure 1: We present two distinct models to elucidate the molecular mechanism by which E. coli initiates replication at an on average constant volume per origin. (A) The volume $V(t)$, the number of origins $n_{\rm ori}(t)$ and the origin density $\rho_{\rm ori}(t)=n_{\rm ori}(t)/V(t)$ as a function of time. Initiating replication at a constant origin density $\rho^{\ast}$ (dashed red line) and division a constant time $\tau_{\rm cc}$ later (blue arrows) ensures that the cell initiates replication once per division cycle and that it maintains cell size homeostasis at slow (light blue regime) and fast (dark blue regime) growth rates. (B) Schematic representation of an E. coli chromosome: Replication starts at the origin (oriC, yellow circle) and proceeds via two replication forks to the terminus (ter, grey bar). Replication is initiated by the ATP-bound form of the initiator protein DnaA. DnaA is activated via the acidic phospholipids in the cell membrane and via the two chromosomal sites DARS1 and DARS2, and deactivated via the chromosomal site datA and via regulatory inactivation of DnaA (RIDA), a process coupled to active DNA replication. DnaA also has a high affinity for titration sites (grey circles) located on the DNA. (C) Scheme of the AIT model: In E. coli, the initiator DnaA (red circles) is negatively autoregulated with the dissociation constant $K_{\rm D}^{\rm p}$, and can bind both to the oriC and the titration sites with dissociation constants $K_{\rm D}^{\rm ori}$ and $K_{\rm D}^{\rm s}$, respectively. (D) Scheme of the initiator switch models: In the LD model, ATP-DnaA is mainly activated via the acidic phospholipids and deactivated via the site datA. In the LDDR model, replication forks overlap and RIDA is the main deactivator in combination with the activators DARS1 and DARS2. So far, two distinct classes of models for replication initiation control have been proposed. In the here called initiator accumulation models [24, 17, 16, 25, 26, 27, 28], an initiator protein accumulates during the cell cycle proportional to the cell volume, and replication is initiated when a threshold amount per origin has accumulated. As a fixed amount of initiators per origin needs to be accumulated per replication cycle, models of this class are often seen as a mechanistic implementation of an adder [17, 15, 16, 27]. Many variations of this idea with different degrees of detail have been proposed [16, 25, 26, 27]. Hansen et al. [26, 28] identified the initiator protein as the protein DnaA, which can be titrated away from the origin by DnaA boxes, high-affinity binding sites on the chromosome [12, 29]. This constant number of titration sites per chromosome sets the critical threshold number of initiator proteins required for initiating replication. In this manuscript, we consider a mechanistic implementation of the initiator accumulation model (Fig. 1 C). In E. coli, the initiator protein DnaA is negatively autoregulated and can be bound to titration sites on the chromosome. Following Hansen et al. 
[26, 28], we therefore consider a model in which the initiator is autoregulated, the Autoregulated Initiator-Titration (AIT) model. While the AIT model indeed gives rise to stable cell cycles at low growth rates, it exhibits reinitiation events at high growth rates. We thus argue that the initiator titration model is not sufficient to explain the experimental data on replication initiation in E. coli. The second class of models is based on a switch of the initiator protein DnaA between an active and an inactive form (Fig. 1 D) [30, 9, 12, 31, 32, 33]. In E. coli, the initiator protein DnaA forms a tight complex with ATP or ADP, but only ATP–DnaA can initiate replication by forming a complex with the chromosomal replication origin (oriC) [34, 35, 36]. While the total DnaA concentration is approximately constant at different growth rates [37, 7], the cellular level of ATP–DnaA oscillates over the course of the cell cycle, with a peak at the time of replication initiation [33, 38, 39]. It has been suggested that the oscillations in the fraction of ATP-DnaA in the cell are the key to understanding how replication is regulated in E. coli, but a quantitative description that is consistent with experiments is currently lacking [40, 12, 13, 41, 32, 42, 40]. Intriguingly, the level of ATP-DnaA is strictly regulated by multiple systems in the cell. DnaA is activated via acidic phospholipids in the cell membrane [43] and via two chromosomal regions called DnaA-Reactivation Sequence 1 (DARS1) and DARS2 [38, 32], and deactivated via the chromosomal site datA in a process called datA-dependent DnaA-ATP Hydrolysis (DDAH) [31] and via a mechanism coupled to active DNA replication, called Regulatory Inactivation of DnaA (RIDA) [34, 33, 44] (Fig. 1 B). Deleting or modifying any of these systems can lead to untimely initiation, asynchrony of initiation, and changes in the initiation volume [45, 46, 47, 13, 31, 48, 49]. To dissect how these multiple mechanisms give rise to a stable cell cycle, we first study the Lipid-DatA (LD) model, which consists of only the acidic lipids and datA (Fig. 1 D). This model reveals that the interplay between a constant rate of activation and a rate of deactivation that depends on the origin density gives rise to stable cell cycles. Yet, at higher growth rates these two reactions alone, based on the experimentally estimated rates of activation and deactivation, respectively, are not sufficient to generate large amplitude oscillations in the fraction of ATP-DnaA. Simulations of our Lipid-DatA-DARS1/2-RIDA (LDDR) model show that in this regime, activation via DARS2 and deactivation via RIDA become essential. Importantly, in our mean-field switch models, DNA replication is initiated at a threshold origin density and mechanistically they should arguably be qualified as a sizer. Yet, we show that a stochastic version of the switch model naturally gives rise to the experimentally observed adder correlations in the initiation volume [8, 18]. Fluctuations in the components that control the DnaA activation switch (lipids, HdA, Fis, IHF) are transmitted from mother to daughter cells and this generates mother-daughter correlations in the initiation volume that can explain the observed adder correlations [8]. Finally, while the AIT model inevitably fails at higher growth rates, the LDDR model is less robust at low growth rates. Yet, combining titration with the activation switch yields robust DnaA oscillations over the full range of growth rates. We thus argue that E. 
coli has evolved an elaborate set of mechanisms that act synergistically to create robust replication-initiation cycles at all growth rates. ## Models and Results A titration-based mechanism is not sufficient to ensure stable cell cycles at high growth rates. Figure 1 C shows the key ingredients of the AIT model. It consists of a negatively autoregulated initiator protein $p$, such that the change in copy number $N_{\rm p}$ is given by $\frac{dN_{\rm p}}{dt}=\frac{\tilde{\phi}_{\rm p}^{0}\,\lambda\,V}{1+\left(\frac{[p]}{K_{\rm D}^{\rm p}}\right)^{n}}$ (1) following the growing cell model of gene expression of Lin et al. [50] (SI section S1) with gene allocation density $\tilde{\phi}_{\rm p}^{0}$, dissociation constant of the promoter $K_{\rm D}^{\rm p}$, Hill coefficient $n$ and concentration of the initiator protein $[p]=N^{\rm f}/V$ in the cytoplasm. The model also includes a number $N_{\rm s}$ of high-affinity titration sites that are distributed randomly on the chromosome [28, 51]. The volume $V(t)$ of the cell grows exponentially, $V(t)=V_{\rm b}\,e^{\lambda\,t}$, where the growth rate $\lambda=\rm{ln}(2)/\tau_{\rm d}$, with cell-doubling time $\tau_{\rm d}$, is a model parameter. A new round of replication is initiated when the free initiator concentration $[p]$ reaches the dissociation constant for binding to the origin, $K_{\rm D}^{\rm ori}$. Based on the general growth law, the cell divides a constant cycling time $\tau_{\rm cc}$ after initiation of replication [4, 3]. This choice is convenient, as it directly couples cell division to replication, thus eliminating the need for implementing an additional mechanism for cell division, yet does not affect our results, as we discuss later. Figure 2: While the AIT model ensures stable cell cycles at low growth rates (A), it gives rise to premature reinitiation events at high growth rates (B). Adding SeqA, which transiently blocks DnaA synthesis after replication initiation, prevents reinitiation events at high (D) but not at intermediate growth rates (C). (A, B, C, D) The volume $V(t)$, the number of initiator proteins $N_{\rm p}(t)$ and titration sites $N_{\rm s}(t)$, the total concentration of initiator proteins $[p]_{\rm T}(t)$, and the concentration of initiator proteins in the cytoplasm $[p](t)$ as a function of time (in units of the doubling time of the cell $\tau_{\rm d}$) for $\tau_{\rm d}=2$ h (A), $\tau_{\rm d}=35$ min (B, C) and $\tau_{\rm d}=25$ min (D), respectively. (A) When the number of initiator proteins per origin $n_{\rm p}(t)$ exceeds the number of titration sites per origin $n_{\rm s}$ (yellow dashed line), the free concentration $[p](t)$ rapidly rises to reach the threshold concentration $K_{\rm D}^{\rm ori}$ (blue dashed line), initiating a new round of replication. Due to the homogeneous distribution of titration sites on the chromosome of E. coli and the constant DNA constant replication rate, the number of titration sites then increases linearly in time. At low growth rates, new titration sites are synthesized faster than new initiator proteins and the free concentration $[p](t)$ rapidly drops after initiation. After a fixed cycling time $\tau_{\rm cc}$ (blue arrows) the cell divides. The initiation volume per origin $v^{\ast}$ (green dashed line) at low growth rates is constant in time. (B) When the doubling time is however smaller than the time to replicate the entire chromosome, $\tau_{\rm d}<T_{\rm C}$, new proteins are synthesized faster than new titration sites are formed. 
After a short period $\tau_{\rm b}=10$ min (shaded red area) during which initiation at oriC is blocked via the protein SeqA, replication is reinitiated prematurely, dramatically raising the variation in the initiation volume (see Fig. 5 C, green line). (C, D) Blocking also transiently DnaA synthesis via SeqA during $\tau_{\rm b}=10$ min (shaded red area) can prevent reinitiation at high (D), but not at intermediate growth rates (C). (See Table S1 for all parameters.) The AIT model generates stable cell cycles at low growth rates (Fig. 2 A and Fig. S3). Because the dissociation constant of the initiator protein for the titration sites $K_{\rm D}^{\rm s}$ is smaller than that for the origin $K_{\rm D}^{\rm ori}>K_{\rm D}^{\rm s}$, the cytoplasmic initiator concentration $[p]$ (SI section S2B) remains below the critical initiation threshold $K_{\rm D}^{\rm ori}$ as long as there are still unoccupied titration sites (Fig. 2 A, lowest panel). Yet, when the total number of proteins $N_{\rm p}$ exceeds the total number of titration sites $N_{\rm s}$, the free concentration $[p]$ rapidly rises. When the free initiator concentration $[p]$ reaches the threshold $K_{\rm D}^{\rm ori}$, a new round of replication is initiated. New titration sites are now being synthesized faster than new proteins are being produced and therefore the free initiator concentration $[p]$ drops rapidly far below $K_{\rm D}^{\rm ori}$ (Fig. 2 A, lowest graph). The cell then divides a constant time $\tau_{\rm cc}$ after replication initiation, during which the volume, the number of initiator proteins, and the number of titration sites are halved. In fact, in this mean- field description cell division does not change the concentrations of the components and it therefore does not affect the replication cycle. Importantly, this mechanism ensures stable cell cycles also in the presence of dnaA expression noise and gives rise to the experimentally observed adder correlations in the initiation volume (Fig. S4). At higher growth rates, the titration mechanism, however, breaks down. Because the titration sites are homogeneously distributed over the chromosome [51, 28], the rate at which new titration sites are formed after replication initiation is given by the DNA duplication rate, which is, to a good approximation, independent of the growth rate [4]. In contrast, the protein synthesis rate increases with the growth rate $\lambda$, see Eq. 1. As a result, when the system enters the regime of overlapping replication forks, where the cell division time $\tau_{\rm d}$ is shorter than the time $T_{\rm C}$ to replicate the DNA (SI section S2B4), the mechanism will fail to sequester the initiator after replication initiation, leading to premature reinitiation. Even when the system contains the protein SeqA, which protects the cell against immediate reinitiation events for ‘an eclipse period’ of about 10 minutes [52, 53, 54], reinitiation happens as soon as this period is over (Fig. 2 B). Also varying the number of titration sites and their affinity can not prevent premature reinitiation at high growth rates (Fig. S3); only placing the titration sites near the origin would (Fig. S3), but this is not consistent with experiments [51, 28]. These observations show that the E. coli replication cycle is not regulated via titration only. Interestingly, experiments indicate that after replication initiation SeqA not only blocks the origin, preventing immediate reinitiation, but also transiently lowers the DnaA synthesis rate [52, 53, 54]. 
The combination of periodic suppression of DnaA synthesis with DnaA titration enables robust DnaA rhythms at sufficiently high growth rates ($\lambda>1.5$ h-1) (Fig. 2 D). But at lower growth rates, corresponding to longer doubling times, the effect of SeqA becomes weaker because of the fixed duration of the eclipse period. As a result, at intermediate growth rates ($1>\lambda>1.5$ h-1) this combination cannot prevent premature reinitiation events (Fig. 2 C). In this regime, another mechanism is needed. Figure 3: An ultra-sensitive switch between ATP-DnaA and ADP-DnaA gives rise to stable cell cycles. (A) LD model: The constant activation rate (red curve) and the origin density-dependent deactivation rate (blue curve) as a function of the active fraction of the initiator protein $f$ at different moments of the cell cycle. The steady-state active fractions are given by the intersection of the activation and deactivation rates (colorful dots) and when $f$ equals the critical initiator fraction $f^{\ast}$, replication is initiated. A doubling of the number of origins leads to a decrease of the active fraction $f$. (B) LD model: The volume of the cell $V(t)$, the number of origins $n_{\rm ori}(t)$ and the fraction of ATP-DnaA $f(t)$ from equation 3 as a function of time (in units of the doubling time $\tau_{\rm d}=2$ h). The average active fraction over one cell cycle $\langle f\rangle$ is indicated in red in the third panel. Replication is initiated at a critical initiator fraction $f^{\ast}$ (red dashed line) and the system gives rise to a constant initiation volume per origin $v^{\ast}$ over time (green dashed line). (C) The amplitude $\Delta f$ of the oscillations in the active fraction $f$ as a function of the growth rate for different magnitudes of the (de)activation rates ($\alpha_{\rm l}=4.6\times\beta_{\rm datA}$). The amplitude of the oscillations $\Delta f$ becomes small for biologically realistic values of the (de)activation rates in the LD model (red solid curve), but not in the LDDR model (red dashed line). (See Table S2 for all parameters and Fig. S8 for time traces of LDDR model.) An ultra-sensitive switch between ATP- and ADP-DnaA gives rise to an origin- density sensor. In the second class of models, not the total number of DnaA is the key variable that controls replication initiation, but the concentration or fraction of DnaA that is bound to ATP [30, 42]. While DnaA has a high affinity for both ATP and ADP, only ATP-DnaA can initiate replication at the origin [34, 35, 36]. The switch between these two states is controlled by several mechanisms, which, we will argue, play distinct roles in different growth-rate regimes. We first focus on the regime of slow growth in which the replication forks are non-overlapping. RIDA, a mechanism promoting ATP hydrolysis in a replication- coupled manner, becomes active upon replication initiation, but, since there are no overlapping forks, is inactive before replication initiation [34]. The chromosomal locus datA can hydrolyze ATP-DnaA via DDAH and is crucial for repressing untimely initiation events (Fig. 1 B) [31]. The two chromosomal DNA regions DARS1 and DARS2 can regenerate ATP-DnaA from ADP-DnaA [34, 32, 13]. The activating site DARS2 is reported to be only active at high growth rates and the activity of DARS1 was reported to be ten times weaker than DARS2 in vitro [32]. 
In addition to DARS1/2, both in vitro [43, 55, 56] and in vivo [48, 57] experiments indicate that acidic phospholipids can rejuvenate DnaA by promoting the exchange of ADP for ATP. Moreover, as we show in section S3C3, for a switch-based system, activation by DARS1/2 is not sufficient, while lipid-mediated activation of DnaA is vital to generate stable cell cycles. In summary, our modelling in combination with experiments indicates that at slow growth, the dominant DnaA cycle of the switch setting the initiation volume consists of activation by the phospholipids and deactivation via DDAH. This cycle forms the basis of the Lipid-DatA (LD) model (SI section S3B). Since the growing cell model [50] predicts that the total DnaA concentration is nearly constant in time, while experiments show that it is nearly independent of the growth rate [7], we make the simplifying assumption that the total DnaA concentration is strictly constant as a function of time and the growth rate. This allows us to focus on the fraction $f=[D_{\rm ATP}]/[D]_{\rm T}$ of DnaA that is bound to ATP [58]. Exploiting that DnaA is predominantly bound to either ATP or ADP [34], the change of the active fraction $f$ in the LD model is given by $\frac{df}{dt}=\frac{d[D]_{\rm ATP}}{dt}\,\frac{1}{[D]_{\rm T}}$ (2) $=\tilde{\alpha}_{\rm l}\,[l]\,\frac{1-f}{\tilde{K}_{\rm D}^{\rm l}+1-f}-\tilde{\beta}_{\rm datA}\,[n_{\rm ori}]\,\frac{f}{\tilde{K}_{\rm D}^{\rm datA}+f}+\lambda\,(1-f)$ (3) with the constant, re-normalized activation and deactivation rates $\tilde{\alpha}_{\rm l}=\alpha_{\rm l}/[D]_{\rm T}$ and $\tilde{\beta}_{\rm datA}=\beta_{\rm datA}/[D]_{\rm T}$ and the Michaelis-Menten constants $\tilde{K}_{\rm D}^{\rm l}=K_{\rm D}^{\rm l}/[D]_{\rm T}$ and $\tilde{K}_{\rm D}^{\rm datA}=K_{\rm D}^{\rm datA}/[D]_{\rm T}$. Note that because datA is located close to the origin, we have set the datA concentration equal to the origin concentration $[n_{\rm ori}]$. We further assume that the concentration of the acidic phospholipids $[l]$ is constant. The last term describes the effect of protein synthesis (Fig. S5). Since ATP is tenfold more abundant than ADP, new DnaA will predominantly bind ATP [34]. This term is, however, small at low growth rates ($\lambda\ll\tilde{\alpha}_{\rm l},\tilde{\beta}_{\rm datA}$). Our switch model gives rise to stable cell cycles. The crux of the model is that while the activation rate is independent of the volume of the cell, the deactivation rate decreases with the volume because it is proportional to the density of oriC (Fig. 3 A). The ATP-DnaA fraction $f(t)$ therefore increases with increasing volume $V(t)$ as the origin density decreases (Fig. 3 B). When the critical initiator fraction $f^{\ast}=[D]_{\rm ATP}^{\ast}/[D]_{\rm T}$ is reached, replication is initiated. As soon as the origin and thus the site datA have been replicated, the maximum of the deactivation rate doubles and the active initiator fraction $f$ decreases strongly, preventing reinitiation. As the cell continues to grow, the active initiator fraction rises again. This simple mechanism directly senses the origin density and ensures stable cell cycles (Fig. 3 B). At high (de)activation rates, the amplitude of the oscillations $\Delta f=f^{\ast}-f_{\rm min}$ is very large (Fig. 3 C). At smaller and more biologically realistic rates ($\beta_{\rm datA}\approx 10~{}{\rm min}^{-1}$) [31] (see section S3A), the amplitude of the oscillations becomes very small, especially at high growth rates (Fig.
3 C); this continues to hold even when the activation-deactivation system is deeper in the zero-order regime (Fig. S6). Such small amplitudes do not agree with the experiments [33] and are likely to be harmful, as even small fluctuations in the active fraction could result in untimely initiation of replication. The LDDR model with all known activators and deactivators allows for larger-amplitude oscillations even at high growth rates. Because at biologically realistic (de)activation rates the LD model fails to generate large-amplitude oscillations in the active DnaA fraction at high growth rates, the question arises of how the cell cycle is regulated in this regime. Interestingly, in the fast-growth regime $\lambda>\ln(2)/T_{\rm C}\approx 1.04$/h, where the doubling time $\tau_{\rm d}$ is shorter than the time to replicate the entire chromosome $T_{\rm C}$, replication is still proceeding when a new round of replication is initiated. This means that at the moment of replication initiation, the deactivation mechanism RIDA, which is associated with active replication forks, is active [59]. Importantly, since RIDA is a potent deactivator [46], its activity must be balanced by another activation mechanism to maintain a roughly constant initiation volume independent of the growth rate [7, 4, 60]. We argue that this is the principal role of DARS2. We therefore included the effects of RIDA and DARS1/2 in our full Lipid-DatA-DARS1/2-RIDA (LDDR) model (SI section S3C). The RIDA deactivation rate is proportional to the total number of active replisomes. The activation rates of DARS1 and DARS2 are proportional to the copy numbers of their loci, which are located in the middle of the chromosome and are replicated at constant times after replication initiation (see Fig. S7). The LDDR model also takes into account the temporal regulation of the activities of DDAH and DARS2 via the Integration Host Factor (IHF) [12, 31, 32, 13] (see Fig. S7). The LDDR model gives rise to stable cell cycles at all growth rates. Moreover, in contrast to the LD model, the LDDR model gives rise to large-amplitude oscillations at all growth rates, even for realistic parameter values (Fig. 3 C) (see Fig. S8 for time traces). This is because after a new round of replication has been initiated, the RIDA deactivation rate is raised immediately, while the activation rates of DARS1/2 are increased only later, after the loci have been duplicated. This differential temporal dependence of the activation and deactivation rates is key to establishing large-amplitude oscillations at all growth rates. A stochastic model can recover the experimentally observed adder correlations in the initiation volume per origin. In the titration-based system, a new round of replication is initiated when the number of DnaA proteins that have accumulated since the last initiation event roughly equals the number of titration sites, irrespective of the previous initiation volume; moreover, DnaA proteins accumulate in proportion to the volume of the cell. These two elements together naturally give rise to adder correlations (see section S2B6 and Fig. S4). Yet, our switch model is a sizer at the mean-field level: replication is initiated when the origin density reaches a critical threshold. Do the experimentally observed adder correlations [8, 18] rule out our switch model? To address this question, we systematically studied the effect of fluctuations in the individual components of our switch model.
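Before doing so, the deterministic LD switch of Eqs. 2-3 is easy to reproduce numerically. The sketch below integrates the active fraction $f(t)$ in a growing cell and doubles the origin (and hence datA) density upon initiation at $f^{\ast}$; the rate constants, Michaelis-Menten constants and threshold are illustrative choices, not the values of Table S2.

```python
# Minimal deterministic sketch of the LD switch, Eqs. 2-3.
# Rate constants and the initiation threshold are illustrative, NOT the fitted values of Table S2.
import numpy as np

lam, tau_cc = np.log(2) / 2.0, 1.0        # growth rate (1/h, 2 h doubling) and initiation-to-division time (h)
act, K_l    = 40.0, 0.1                   # lumped lipid activation rate alpha_l*[l]/[D]_T (1/h), assumed
deact, K_d  = 60.0, 0.1                   # datA deactivation rate beta_datA/[D]_T (um^3/h), assumed
f_star      = 0.5                         # assumed critical ATP-DnaA fraction that triggers initiation
dt          = 1e-4                        # Euler step (h)

V, f, n_ori, t, divisions = 1.0, 0.3, 1, 0.0, []
for _ in range(int(10.0 / dt)):
    t += dt
    V += lam * V * dt
    dfdt = (act * (1 - f) / (K_l + 1 - f)              # lipid-mediated activation: volume independent
            - deact * (n_ori / V) * f / (K_d + f)      # datA-mediated deactivation: scales with origin density
            + lam * (1 - f))                           # newly synthesized DnaA binds mostly ATP
    f += dfdt * dt
    if f >= f_star:
        print(f"initiation: t = {t:.2f} h, V/ori = {V / n_ori:.2f} um^3")
        n_ori *= 2                                     # oriC and datA are replicated: deactivation doubles
        divisions.append(t + tau_cc)
    if divisions and t >= divisions[0]:
        divisions.pop(0)
        V, n_ori = V / 2, n_ori // 2                   # division leaves densities, and hence f, unchanged
```

With these deliberately large rates the oscillation amplitude $\Delta f$ is large; lowering the (de)activation rates, or raising the growth rate, shrinks $\Delta f$, in line with Fig. 3 C.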
Consider fluctuations in the lipid concentration, modelled as $\frac{d[l]}{dt}=\alpha-\lambda\,[l]+\xi(t),$ (4) where $\alpha$ is the production rate, the second term describes the effect of dilution set by the growth rate $\lambda$ and $\xi(t)$ models the noise resulting from protein production and partitioning upon cell division (SI section S3D). Fig. 4 illustrates our findings using the LD model, but Fig. S11 shows that the principal result also holds for the full LDDR model: the added initiation volume between consecutive initiation events $\Delta v^{\ast}_{\rm n}=2\,v^{\ast}_{\rm n+1}-v^{\ast}_{\rm n}$ is indeed independent of the volume at initiation $v^{\ast}_{\rm n}$, in agreement with experiments [18, 8]. Figure 4: Fluctuations in the switch components can give rise to the experimentally observed adder correlations in the initiation volume per origin $v^{\ast}$, illustrated using the LD model with lipid concentration fluctuations (Eq. 4). (A) The added volume per origin between successive initiation events, $\Delta v^{\ast}_{\rm n}=2\,v^{\ast}_{\rm n+1}-v^{\ast}_{\rm n}$, is independent of the initiation volume $v^{\ast}_{\rm n}$ per origin and on average equal to the average initiation volume, $\langle\Delta v^{\ast}\rangle=\langle v^{\ast}\rangle$, as expected for an initiation volume adder. (B) Lipid-concentration fluctuations $l(t)\equiv[l](t)$ regress to the mean on a timescale $\tau_{\rm d}=\ln(2)/\lambda$ set by the growth rate $\lambda$, such that an initial perturbation $l_{0}-\langle l\rangle$ is halved every subsequent cell cycle. (C) The initiation volume depends on the lipid concentration (Eq. S35 and Fig. S10). (D) The initiation volume relaxes on the same timescale $\tau_{\rm d}$ as the lipid concentration, such that a perturbation $v_{0}^{\ast}-\langle v^{\ast}\rangle$ is halved every cell cycle, giving rise to adder correlations. In (A) the dark blue line shows the mean of the binned data and the error bars represent the standard error of the mean (SEM) per bin. The number of data points $N$ and the Pearson correlation coefficient $R$ are indicated. The model includes an eclipse period of about 10 minutes following replication initiation to prevent immediate reinitiation. (See Table S2 for all parameters.) The concentrations of cellular components will fluctuate inevitably, and unless the components are degraded actively or produced with negative feedback control, the fluctuations will persist over several generations, regressing to the mean on a timescale set by the growth rate (Fig. 4 B). The components that control the threshold of the DnaA activation switch are no exception to this rule. Moreover, their concentration fluctuations will give rise to fluctuations in the initiation volume $v^{\ast}$ (Fig. 4 C) that, to a good approximation, relax on the same timescale because (de)activation is fast compared to the growth rate and the mapping between these components and the initiation volume is roughly linear. If this timescale is set by the growth rate, then deviations of $v^{\ast}$ from its mean are on average halved every cell cycle (Fig. 4 D), and this gives rise to adder correlations (SI section S3D) [8]. Fluctuations in switch components that relax with the growth rate, be they lipids or proteins that modulate the activity of datA, RIDA, or DARS1/2 like IHF and Hda [12, 31, 32, 13], thus give rise to adder correlations (Fig. S12). Coupling titration with DnaA activation enhances robustness. All our systems are stable in the presence of biochemical noise. 
The concentrations do not diverge, not even in the titration-based system at high growth rates (Fig. 2). Yet, the precision of replication initiation differs markedly between the respective models (see Fig. 5). The protein synthesis rate and the titration-site formation rate scale differently with the growth rate, which means that a titration-based mechanism inevitably breaks down at sufficiently high growth rates, causing premature reinitiation events and a dramatic rise of the coefficient of variation (CV) in the initiation volume; even in the absence of any biochemical noise, the CV becomes larger than that reported experimentally [3, 8] (Fig. 5 C). The transient suppression of DnaA synthesis by SeqA after replication initiation can prevent these premature reinitiation events, but only at high growth rates: at intermediate growth rates, the CV of a system based on only titration and SeqA still rises strongly. This indicates that the activation switch is essential (Fig. 5 C). But could it be sufficient? Our modelling predicts it could, because the LDDR model can generate robust oscillations at all growth rates. Yet, our modelling also predicts that titration helps the switch by shaping the oscillations in the free concentration of ATP-bound DnaA (Fig. 5 A and B), such that the precision of replication initiation in the presence of noise is significantly enhanced (Fig. 5 C). In section S4A2 we show that a concentration cycle, as generated by titration and SeqA, can generically enhance an activation cycle, as driven by the switch, by increasing the steepness of the oscillations; this tames the propagation of fluctuations in the free concentration of active DnaA to the initiation volume (Fig. S14). Combining the switch with titration can thus protect the system against fluctuations in the switch components. Figure 5: Combining the DnaA activation switch with titration and SeqA generates robust replication-initiation cycles over a wide range of growth rates. (A, B) The concentration of free ATP-DnaA $[D]_{\rm ATP,f}(t)$ as a function of time (in units of the doubling time $\tau_{\rm d}$) for $\lambda=0.35$ h-1 as indicated in panel C. The dashed red line is the critical free ATP-DnaA concentration $[D]_{\rm ATP,f}^{\ast}$ at which replication is initiated. While in the LDDR model the free ATP-DnaA fraction is high during a large part of the cell cycle (A, see also section S3C2), combining it with titration sites and SeqA gives rise to a much sharper increase of the free ATP-DnaA concentration at low growth rates (B). (C) The coefficient of variation ${\rm CV}=\sigma/\mu$, with standard deviation $\sigma$ and average initiation volume $\mu=\langle v^{\ast}\rangle$, as a function of the growth rate for different models in the presence of noise in the lipid concentration. Even in the absence of biochemical noise in DnaA synthesis, the titration model gives rise to a very high CV at high growth rates, due to premature reinitiation (Fig. 2 B). Adding SeqA to the titration model can reduce the CV at high, but not at intermediate, growth rates (Fig. 2 C). The large coefficient of variation in the LDDR model at low growth rates is reduced significantly by the titration sites. Conversely, the LDDR model prevents the reinitiation events that inevitably occur at intermediate growth rates in the AIT+SeqA model. Combining DnaA activation with titration thus enhances the robustness of replication initiation at all growth rates, even in the presence of noise in DnaA synthesis (Fig. S13).
All models include an eclipse period of about 10 minutes following replication initiation to prevent immediate reinitiation [52, 53, 54]. (See Table S2 for all parameters.) ## Discussion While the two mechanisms of titration and protein activation have so far been mostly studied independently [37, 28, 8, 33, 31, 32, 38], our work indicates that the robustness of replication initiation arises from the coupling of the two. Interestingly, recent experiments, which show that replication is neither controlled by titration only nor by a DnaA activation switch only, support this prediction of our model [61]. Moreover, the idea that coupling an oscillation in the concentration with an oscillation in the fraction gives rise to more robust rhythms than either oscillation alone is very generic. Our results are thus expected to apply to any cell-cycle control system that combines titration with protein activation or modification. This finding is of particular interest given the recent observation that higher organisms also employ not only protein modification but also titration for cell-cycle control [62, 63]. In fact, evidence is accumulating that oscillatory systems, most notably circadian clocks in cyanobacteria and higher organisms, also derive their robustness to changes in the growth rate by intertwining a protein modification cycle with a protein concentration cycle [64, 65, 66]. The mechanisms of titration and activation belong to distinct classes of replication initiation control. The titration-based AIT model is an example of an initiator accumulation model, in which an initiator protein needs to accumulate to a threshold number to initiate replication [25, 4, 37, 8, 28]. In contrast, the DnaA activation switch is an example of a push-pull network in which the regulator switches between an inactive and an active state. Conceptually, this switch model is different from the accumulation model because replication is triggered at a critical concentration or fraction and not at a critical number of accumulated initiator proteins. In the switch model, the concentration of ATP-DnaA is set by the balance between DnaA activation and deactivation. Because the (de)activation rates depend on the origin density, the critical initiator concentration maps onto a critical origin density for replication initiation. This switch system is thus a bona fide origin-density sensor. In recent years, single-cell tracking data have revealed that not only E. coli but also other evolutionarily divergent organisms like Bacillus subtilis [15], Caulobacter crescentus [14], the archaeon Halobacterium salinarum [67], and even budding yeast [68], obey a division adder principle. Our study gives a new perspective on the question of whether a cell cycle is controlled via a sizer or an adder. While the titration mechanism naturally qualifies as an adder, our switch model should be characterised as a sizer at the mean-field level: the mechanism is based on sensing the origin density. Yet, the inevitable fluctuations in the components that control the density threshold for replication give rise to adder correlations. This idea is general and likely applies to other organisms that obey the adder principle: adder behavior may result from size sensing. Our prediction could be tested by measuring the critical active DnaA concentration for replication initiation and how its fluctuations relax. Since ATP binding induces a conformational switch of DnaA [69], developing a FRET-based ATP-DnaA sensor may be feasible.
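The argument that slowly relaxing threshold fluctuations turn a mean-field sizer into an apparent adder (Eq. 4 and Fig. 4) can be illustrated with a minimal generation-by-generation sketch. The linear map from the fluctuating lipid concentration to the initiation volume used below is an assumed stand-in for the full relation (Eq. S35, Fig. S10), and the parameter values are placeholders.

```python
# Minimal generation-by-generation check of the adder argument above.
# The linear map from the fluctuating lipid concentration to the initiation volume
# is an assumed stand-in for the full relation (Eq. S35, Fig. S10); parameters are placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_gen, l_mean, sigma = 100_000, 1.0, 0.05   # generations, mean lipid concentration, noise strength
v_mean, k = 1.0, 0.5                        # mean initiation volume per origin and assumed sensitivity

l = np.empty(n_gen)
l[0] = l_mean
for n in range(n_gen - 1):
    # fluctuations regress to the mean by a factor 1/2 per generation (relaxation rate = growth rate),
    # plus fresh noise from production and partitioning at division (cf. Eq. 4)
    l[n + 1] = l_mean + 0.5 * (l[n] - l_mean) + sigma * rng.standard_normal()

v_star  = v_mean - k * (l - l_mean)         # assumed linear lipid -> initiation-volume mapping
dv_star = 2 * v_star[1:] - v_star[:-1]      # added initiation volume per origin between initiations

R = np.corrcoef(v_star[:-1], dv_star)[0, 1]
print(f"<v*> = {v_star.mean():.3f}, <dv*> = {dv_star.mean():.3f}, Pearson R = {R:.3f}")
# R ~ 0 and <dv*> ~ <v*>: adder correlations, even though the mean-field model is a sizer.
```

If the fluctuations instead relaxed completely within one generation (replace the factor 0.5 by 0), $\Delta v^{\ast}$ would anticorrelate with $v^{\ast}$ and the correlations would be those of a sizer, consistent with the idea that the relaxation timescale of the threshold fluctuations sets which behaviour emerges.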
While our models are built on a wealth of data, they all make the simplifying assumption that the cell divides a constant time $\tau_{\rm cc}$ after replication initiation, independent of the growth rate. Experiments indicate, however, that this is an oversimplification [70, 23, 3, 21, 22, 8, 18] and that cell division is more loosely coupled to replication initiation [8, 18]. Importantly, our results on replication initiation control are robust to the assumption of a constant $\tau_{\rm cc}$, because on average cell division does not change the densities of the components. Indeed, while this assumption will affect the correlations between the cell volume at birth and the initiation volume, it does not change the correlations between the initiation volume and the volume added until the next initiation event (Fig. S19). Our model is supported by many experimental observations. Of particular interest are mutants in which the (de)activation mechanisms are modified or even deleted, because these allow us to test the prediction that replication initiation is controlled by the activation switch (SI section S4B1). Naturally, our model can reproduce the observations on which it is built: deleting datA [31] and deactivating RIDA [34, 33, 44, 31] raise the active fraction of DnaA, while deleting DARS1/2 [38, 32] reduces it (Fig. S16). Our model then predicts that impeding activation increases the average volume per origin, while weakening deactivation has the opposite effect. Many experiments support these predictions: deleting DARS1 and/or DARS2 increases the initiation volume per origin [71], while deleting datA decreases it [71]. Our model can not only reproduce these observations, but also the effect of combinations of deletions of these chromosomal loci on the initiation volume (Fig. S16). Moreover, it can describe how the initiation volume per origin changes when datA or DARS2 is translocated towards the terminus [72, 73, 74] (Fig. S16). In addition, our model can reproduce the observation that increasing the number of titration sites via multicopy plasmids increases the initiation volume per origin [75] (section S4B2), while increasing the DnaA concentration reduces it [4, 61, 76, 77] (section S4B3, Fig. S17). Taken together, these experiments support the idea that replication initiation is controlled by both titration and DnaA activation. Intriguingly, the relative position of DARS2 with respect to the origin and the terminus is conserved in various genomes of different sizes and strains [71], suggesting that it plays an important role. Our modelling provides the following rationale: In the high growth-rate regime of overlapping replication forks, DARS2 not only serves to balance the strong deactivation by RIDA to yield a roughly constant initiation volume, but also needs to generate oscillations in concert with RIDA. Because the activities of both DARS2 and RIDA are proportional to the origin density, DARS2 can only play this dual role if its position meets two constraints: On the one hand, the activity of DARS2 should rise as late as possible in order to push the active initiator fraction down right after initiation. On the other hand, to achieve a nearly constant initiation volume independent of the growth rate, the activity of DARS2 must be high to counteract RIDA before the next initiation event; indeed, moving DARS2 towards the terminus increases the initiation volume [73, 74] (Fig. S16I). The shortest period until the next initiation is set by the shortest doubling time of E.
coli, $\tau_{\rm d}\approx 18$ min. The position of DARS2 in the middle of the chromosome ($\tau_{\rm d2}\approx 16$ min) therefore naturally results from our model. Arguably the most enigmatic element of our model is the role of the lipids in rejuvenating DnaA. In vitro experiments have shown that acidic phospholipids in the cell membrane promote dissociation of nucleotides from DnaA very effectively [43], and can restore replication activity of DnaA bound to ADP [55, 56]. Depleting acidic phospholipids in vivo can lead to growth arrest [48] and inhibit initiation at oriC [57]. These experiments support the idea that lipids can reactivate DnaA by promoting the exchange of bound ADP for ATP. On the other hand, it has been observed that the lethal effect of a pgsA null mutation, which causes a complete lack of the major acidic phospholipids, is alleviated by mutations that change the membrane structure [78]. More recently, it has been reported that while downregulating pgsA reduced the growth rate, the initiation volume was not significantly altered [79]. We have therefore also studied models in which lipid-mediated DnaA activation is absent (SI section S5A). Our modelling predicts that lipid-mediated DnaA activation is essential for the switch (Fig. S20A-D). The capacity of the switch to act as an origin-density sensor hinges on the idea that the activation and deactivation rates scale differently with the origin density. Without the lipids, only protein synthesis remains as an activation mechanism that does not scale with the origin density (Eq. 3). Consequently, to obtain a stable switch-based system, the rates of all other (de)activation mechanisms must be comparable to or smaller than the growth rate. This dramatically lowers the amplitude of the oscillations. The full model, which combines the switch with titration and SeqA, is, however, surprisingly resilient to the removal of lipids, although the latter does compromise the precision of replication initiation (Fig. S20E-G). It has also been suggested that DnaA rejuvenation is contingent on oriC [55] (SI section S5B). However, a lipid-mediated DnaA activation rate that scales with the origin density effectively reduces the datA-mediated deactivation rate; this yields a switch that behaves similarly to that of the lipid-devoid system, because protein synthesis is again the only DnaA activation mechanism that is independent of the origin density. In summary, lipids enhance replication initiation, but only if their effect is independent of the origin density. Perhaps the most non-trivial prediction of our model is that the relaxation timescale of the switch components governs whether the switch generates adder or sizer correlations in the inter-initiation volume. The experiments of Si et al. provide strong support for this prediction: when DnaA is expressed in an oscillatory fashion, the adder is turned into a sizer [8], precisely as our model predicts (Fig. S18). Our modelling predicts that negative autoregulation does not play a direct role in replication initiation. This is supported by recent experiments, which show that the average initiation volume and the precision of replication initiation are only weakly affected in strains with constitutive dnaA expression [61]. Following Hansen et al. [37], we believe that negative autoregulation only plays an indirect role, by setting the growth-rate dependence of the DnaA concentration.
Experiments have revealed that the total DnaA concentration varies with the growth rate, anticorrelating with the initiation volume [7]. However, the variation of both the total DnaA concentration and the initiation volume is rather weak, i.e. about 50% over a tenfold change of the growth rate [7]. It seems likely that negative autoregulation is crucial for constraining the growth-rate dependence of the total DnaA concentration [80, 81] and hence the initiation volume [3, 4]. How negative autoregulation with a differential sensitivity of the DnaA promoter to DnaA-ATP and DnaA-ADP [58, 82] and titration conspire to shape the growth- rate dependence of the DnaA concentration and the initiation volume, we leave for future work. Another open question remains why E. coli has evolved two different switch systems, Lipid-DatA (LD) and DARS1/2-RIDA (DR). In principle, a switch based on activating lipids and deactivating datA would be sufficient to control replication initiation at all growth rates. Yet, to ensure high amplitude oscillations in the active DnaA fraction at high growth rates, the (de)activation rates would have to be higher than observed (Fig. 3 C). This would require higher turnover rates of ATP, which may not be achievable when the growth rate is low. Our model thus suggests that E. coli has evolved a slow system to control the initiation volume at low growth rates, the lipids- datA system, and then switches on a faster, more energy-consuming system at higher growth rates, based on RIDA and DARS2. Finally, our model predicts that in the regime of non-overlapping replication forks it should be possible to move the system from a switch-dominated regime to a titration-based one by increasing the number of titration sites or decreasing the basal synthesis rate of DnaA. Our model predicts that the dependence of the initiation volume on the number of titration sites or basal synthesis rate exhibits a marked, characteristic crossover when the system transitions between these two regimes (Fig. S15). This is a strong prediction that could be tested experimentally. We thank Lorenzo Olivi, Sander Tans, Suckjoon Jun, Erik van Nimwegen and Johan Elf for a careful reading of the manuscript. This work is part of the Dutch Research Council (NWO) and was performed at the research institute AMOLF. Code Availability The code is publicly available at the Github repository MareikeBerger/Cellcycle via https://github.com/MareikeBerger/Cellcycle or https://zenodo.org/record/5913722. Data Availability The datasets generated during and analysed during the current study are available at Zenodo via https://zenodo.org/record/5911070. ## References * [1] O. Maaløe and N. O. Kjeldgaard. Control of macromolecular synthesis : a study of DNA, RNA, and protein synthesis in bacteria. New York (N.Y.) : Benjamin, 1966. * [2] Charles E. Helmstetter and Stephen Cooper. DNA synthesis during the division cycle of rapidly growing Escherichia coli Br. Journal of Molecular Biology, 31(3):507–518, 1968. * [3] Mats Wallden, David Fange, Ebba Gregorsson Lundius, Özden Baltekin, and Johan Elf. The Synchronization of Replication and Division Cycles in Individual E. coli Cells. Cell, 166(3):729–739, 2016. * [4] Fangwei Si, Dongyang Li, Sarah E. Cox, John T. Sauls, Omid Azizi, Cindy Sou, Amy B. Schwartz, Michael J. Erickstad, Yonggun Jun, Xintian Li, and Suckjoon Jun. Invariance of Initiation Mass and Predictability of Cell Size in Escherichia coli. Current Biology, 27(9):1278–1287, 2017. * [5] Stephen Cooper and Charles E. 
Helmstetter. Chromosome replication and the division cycle of Escherichia coliBr. Journal of Molecular Biology, 31(3):519–540, feb 1968. * [6] W D Donachie. Relationship between Cell Size and Time of Initiation of DNA Replication. Nature, 219:1077–1079, sep 1968. * [7] Hai Zheng, Yang Bai, Meiling Jiang, Taku A. Tokuyasu, Xiongliang Huang, Fajun Zhong, Yuqian Wu, Xiongfei Fu, Nancy Kleckner, Terence Hwa, and Chenli Liu. General quantitative relations linking cell growth and the cell cycle in Escherichia coli. Nature Microbiology, 5(8):995–1001, 2020. * [8] Fangwei Si, Guillaume Le Treut, John T. Sauls, Stephen Vadia, Petra Anne Levin, and Suckjoon Jun. Mechanistic Origin of Cell-Size Control and Homeostasis in Bacteria. Current Biology, 29(11):1760–1770.e7, 2019. * [9] Liselot Dewachter, Natalie Verstraeten, Maarten Fauvart, and Jan Michiels. An integrative view of cell cycle control in Escherichia coli. FEMS Microbiology Reviews, 42(2):116–136, 2018. * [10] Kirsten Skarstad and Tsutomu Katayama. Regulating DNA replication in bacteria. Cold Spring Harbor perspectives in biology, 5(4):a012922, apr 2013\. * [11] Lisa Willis and Kerwyn Casey Huang. Sizing up the bacterial cell cycle. Nature Reviews Microbiology, 15(10):606–620, 2017. * [12] Tsutomu Katayama, Kazutoshi Kasho, and Hironori Kawakami. The DnaA cycle in Escherichia coli: Activation, function and inactivation of the initiator protein. Frontiers in Microbiology, 8(DEC):1–15, 2017. * [13] Leise Riber, Jakob Frimodt-Møller, Godefroid Charbon, and Anders Løbner-Olesen. Multiple DNA Binding Proteins Contribute to Timing of Chromosome Replication in E. coli. Frontiers in Molecular Biosciences, 3(June):1–9, 2016. * [14] Manuel Campos, Ivan V. Surovtsev, Setsu Kato, Ahmad Paintdakhi, Bruno Beltran, Sarah E. Ebmeier, and Christine Jacobs-Wagner. A constant size extension drives bacterial cell size homeostasis. Cell, 159(6):1433–1446, 2014. * [15] Sattar Taheri-Araghi, Serena Bradde, John T. Sauls, Norbert S. Hill, Petra Anne Levin, Johan Paulsson, Massimo Vergassola, and Suckjoon Jun. Cell-size control and homeostasis in bacteria. Current Biology, 25(3):385–391, 2015. * [16] Ariel Amir. Cell size regulation in bacteria. Physical Review Letters, 112(20):1–5, 2014. * [17] Po-Yi Ho and Ariel Amir. Simultaneous Regulation of Cell Size and Chromosome Replication in Bacteria. Frontiers in microbiology, 6:662, 07 2015. * [18] Guillaume Witz, Erik van Nimwegen, and Thomas Julou. Initiation of chromosome replication controls both division and replication cycles in E. coli through a double-adder mechanism. eLife, 8:e48063, nov 2019. * [19] Guillaume Le Treut, Fangwei Si, Dongyang Li, and Suckjoon Jun. Quantitative examination of five stochastic cell-cycle and cell-size control models for escherichia coli and bacillus subtilis. Frontiers in Microbiology, 12:3278, 2021. * [20] Guillaume Witz, Thomas Julou, and Erik van Nimwegen. Response to comment on ‘Initiation of chromosome replication controls both division and replication cycles in E. coli through a double-adder mechanism’, August 2020. * [21] Gabriele Micali, Jacopo Grilli, Jacopo Marchi, Matteo Osella, and Marco Cosentino Lagomarsino. Dissecting the Control Mechanisms for DNA Replication and Cell Division in E. coli. Cell Reports, 25(3):761–771.e4, 2018. * [22] Gabriele Micali, Jacopo Grilli, Matteo Osella, and Marco Cosentino Lagomarsino. Concurrent processes set E. coli cell division. Science Advances, 4(11):1–8, 2018. * [23] Aileen Adiciptaningrum, Matteo Osella, M. 
Charl Moolman, Marco Cosentino Lagomarsino, and Sander J. Tans. Stochasticity and homeostasis in the E. coli replication and division cycle. Scientific Reports, 5(1):18261, Dec 2015. * [24] Felix Barber, Po Yi Ho, Andrew W. Murray, and Ariel Amir. Details matter: Noise and model structure set the relationship between cell size and cell cycle timing. Frontiers in Cell and Developmental Biology, 5(NOV):1–16, 2017\. * [25] L. Sompayrac and O. Maaloe. Autorepressor Model for Control of DNA Replication. Nature New Biology, 241(January):133–135, 1973. * [26] F. G. Hansen, B. B. Christensen, and T. Atlung. The initiator titration model: computer simulation of chromosome and minichromosome control. Research in Microbiology, 142(2-3):161–167, 1991. * [27] Markus Basan, Manlu Zhu, Xiongfeng Dai, Mya Warren, Daniel Sévin, Yi‐Ping Wang, and Terence Hwa. Inflating bacterial cells by increased protein synthesis. Molecular Systems Biology, 11(10):836, 2015. * [28] Flemming G. Hansen and Tove Atlung. The DnaA tale. Frontiers in Microbiology, 9(FEB):1–19, 2018. * [29] Sigrid Schaper and Walter Messer. Interaction of the Initiator Protein DnaA of Escherichia coli with Its DNA Target. Journal of Biological Chemistry, 270(29):17622–17626, 1995. * [30] Mats Wallden, David Fange, Özden Baltekin, and Johan Elf. Fluctuations in growth rates determine the generation time and size distributions of E. coli cells, 2015. * [31] Kazutoshi Kasho and Tsutomu Katayama. DnaA binding locus datA promotes DnaA-ATP hydrolysis to enable cell cycle-coordinated replication initiation. Proceedings of the National Academy of Sciences, 110(3):936–941, 2013. * [32] Kazutoshi Kasho, Kazuyuki Fujimitsu, Toshihiro Matoba, Taku Oshima, and Tsutomu Katayama. Timely binding of IHF and Fis to DARS2 regulates ATP-DnaA production and replication initiation. Nucleic acids research, 42(21):13134–13149, Dec 2014. * [33] Kenji Kurokawa, Satoshi Nishida, Akiko Emoto, Kazuhisa Sekimizu, and Tsutomu Katayama. Replication cycle-coordinated change of the adenine nucleotide-bound forms of DnaA protein in Escherichia coli. The EMBO Journal, 18(23):6642–6652, 1999. * [34] Tsutomu Katayama, Shogo Ozaki, Kenji Keyamura, and Kazuyuki Fujimitsu. Regulation of the replication cycle: Conserved and diverse regulatory systems for DnaA and oriC. Nature Reviews Microbiology, 8(3):163–170, 2010. * [35] Satoshi Nishida, Kazuyuki Fujimitsu, Kazuhisa Sekimizu, Tadahiro Ohmura, Tadashi Ueda, and Tsutomu Katayama. A Nucleotide Switch in the Escherichia coli DnaA Protein Initiates Chromosomal Replication. The Journal of biological chemistry, 277(17):14986–14995, April 2002. * [36] Christian Speck and Walter Messer. Mechanism of origin unwinding: Sequential binding of DnaA to double- and single-stranded DNA. EMBO Journal, 20(6):1469–1476, 2001. * [37] F. G. Hansen, T. Atlung, R. E. Braun, A. Wright, P. Hughes, and M. Kohiyama. Initiator (DnaA) protein concentration as a function of growth rate in Escherichia coli and Salmonella typhimurium. Journal of Bacteriology, 173(16):5194–5199, 1991. * [38] Kazuyuki Fujimitsu, Takayuki Senriuchi, and Tsutomu Katayama. Specific genomic sequences of E. coli promote replicational initiation by directly reactivating ADP-DnaA. Genes and Development, 23(10):1221–1233, 2009. * [39] Tsutomu Katayama, Kazuyuki Fujimitsu, and Tohru Ogawa. Multiple pathways regulating DnaA function in Escherichia coli: Distinct roles for DnaA titration by the datA locus and the regulatory inactivation of DnaA. Biochimie, 83(1):13–17, 2001. 
* [40] Matthew Grant, Chiara Saggioro, Ulisse Ferrari, Bruno Bassetti, Bianca Sclavi, and Marco Cosentino Lagomarsino. DnaA and the timing of chromosome replication in Escherichia coli as a function of growth rate. BMC Systems Biology, 5(1):201, 2011. * [41] Alan C. Leonard and Julia E. Grimwade. Regulation of DnaA Assembly and Activity: Taking Directions from the Genome. Annual Review of Microbiology, 65(1):19–35, 2011. PMID: 21639790. * [42] William D Donachie and Garry W Blakely. Coupling the initiation of chromosome replication to cell size in Escherichia coli, 2003. * [43] K. Sekimizu and A. Kornberg. Cardiolipin activation of dnaA protein, the initiation protein of replication in Escherichia coli. Journal of Biological Chemistry, 263(15):7131–7135, 1988. * [44] Jun Ichi Kato and Tsutomu Katayama. Hda, a novel DnaA-related protein, regulates the replication cycle in Escherichia coli. EMBO Journal, 20(15):4253–4262, 2001. * [45] Tohru Ogawa, Yoshitaka Yamada, Takao Kuroda, Tetsuya Kishi, and Shigeki Moriya. The datA locus predominantly contributes to the initiator titration mechanism in the control of replication initiation in Escherichia coli. Molecular Microbiology, 44(5):1367–1375, 2002. * [46] Johanna E. Camara, Adam M. Breier, Therese Brendler, Stuart Austin, Nicholas R. Cozzarelli, and Elliott Crooke. Hda inactivation of DnaA is the predominant mechanism preventing hyperinitiation of Escherichia coli DNA replication. EMBO Reports, 6(8):736–741, 2005. * [47] Leise Riber, Jan A. Olsson, Rasmus B. Jensen, Ole Skovgaard, Santanu Dasgupta, Martin G. Marinus, and Anders Løbner-Olesen. Hda-mediated inactivation of the DnaA protein and dnaA gene autoregulation act in concert to ensure homeostatic maintenance of the Escherichia coli chromosome. Genes and Development, 20(15):2121–2134, 2006. * [48] Weiming Xia and William Dowhan. In vivo evidence for the involvement of anionic phospholipids in initiation of DNA replication in Escherichia coli. Proceedings of the National Academy of Sciences of the United States of America, 92(3):783–787, 1995. * [49] Rahul Saxena, Nicholas Fingland, Digvijay Patil, Anjali K. Sharma, and Elliott Crooke. Crosstalk between DnaA protein, the initiator of Escherichia coli chromosomal replication, and acidic phospholipids present in bacterial membranes. International Journal of Molecular Sciences, 14(4):8517–8537, 2013\. * [50] Jie Lin and Ariel Amir. Homeostasis of protein and mRNA concentrations in growing cells. Nature Communications, 9(1), 2018. * [51] Angelika Roth and Walter Messer. High-affinity binding sites for the initiator protein DnaA on the chromosome of Escherichia coli. Molecular Microbiology, 28(2):395–401, 1998. * [52] J L Campbell and N Kleckner. E. coli oriC and the dnaA gene promoter are sequestered from dam methyltransferase following the passage of the chromosomal replication fork. Cell, 62(5):967–979, September 1990. * [53] Min Lu, Joseph L. Campbell, Erik Boye, and Nancy Kleckner. SeqA: A negative modulator of replication initiation in E. coli. Cell, 77(3):413–426, 1994. * [54] Torsten Waldminghaus and Kirsten Skarstad. The Escherichia coli SeqA protein. Plasmid, 61(3):141–150, 2009. * [55] E. Crooke, C. E. Castuma, and A. Kornberg. The chromosome origin of Escherichia coli stabilizes DnaA protein during rejuvenation by phospholipids. Journal of Biological Chemistry, 267(24):16779–16782, 1992. * [56] C E Castuma, E Crooke, and A Kornberg. 
Fluid membranes with acidic domains activate DnaA, the initiator protein of replication in Escherichia coli. Journal of Biological Chemistry, 268(33):24665 – 24668, 01 1993\. * [57] Nicholas Fingland, Ingvild Flåtten, Christopher D. Downey, Solveig Fossum-Raunehaug, Kirsten Skarstad, and Elliott Crooke. Depletion of acidic phospholipids influences chromosomal replication in Escherichia coli. MicrobiologyOpen, 1(4):450–466, 2012. * [58] Christian Speck, Christoph Weigel, and Walter Messer. ATP- and ADP-DnaA protein, a molecular switch in gene regulation. EMBO Journal, 18(21):6169–6176, 1999. * [59] Tsutomu Katayama, Toshio Kubota, Kenji Kurokawa, Elliott Crooke, and Kazuhisa Sekimizu. The initiator function of DnaA protein is negatively regulated by the sliding clamp of the E. coli Chromosomal replicase. Cell, 94(1):61–71, 1998. * [60] Johan Elf, Gene-Wei Li, and X. Sunney Xie. Probing transcription factor dynamics at the single-molecule level in a living cell. Science, 316(5828):1191–1194, 2007. * [61] Anna Knöppel, Oscar Broström, Konrad Gras, David Fange, and Johan Elf. The spatial organization of replication is determined by cell size independently of chromosome copy number, 2021. * [62] Marco D’Ario, Rafael Tavares, Katharina Schiessl, Bénédicte Desvoyes, Crisanto Gutierrez, Martin Howard, and Robert Sablowski. Cell size controlled in plants using DNA content as an internal scale. Science, 372(6547):1176–1181, 2021. * [63] Nicholas Rhind. Cell-size control. Current Biology, 31(21):R1414–R1420, 2021. * [64] D Zwicker, D K Lubensky, and P R ten Wolde. Robust circadian clocks from coupled protein-modification and transcription–translation cycles. Proceedings of the National Academy of Sciences, 107(52):22540 – 22545, 2010. * [65] S W Teng, S Mukherji, J R Moffitt, S de Buyl, and E K O’Shea. Robust Circadian Oscillations in Growing Cyanobacteria Require Transcriptional Feedback. Science, 340(6133):737–740, May 2013. * [66] L F Larrondo, C Olivares-Yanez, C L Baker, J J Loros, and J C Dunlap. Decoupling circadian clock protein turnover from circadian period determination. Science, 347(6221):1257277–1257277, January 2015. * [67] Ye-Jin Eun, Po-Yi Ho, Minjeong Kim, Salvatore LaRussa, Lydia Robert, Lars D. Renner, Amy Schmid, Ethan Garner, and Ariel Amir. Archaeal cells share common size control with bacteria despite noisier growth and division. Nature Microbiology, 3(2):148–154, Feb 2018. * [68] Ilya Soifer, Lydia Robert, and Ariel Amir. Single-cell analysis of growth in budding yeast and bacteria reveals a common size regulation strategy. Current Biology, 26(3):356–361, 2016. * [69] Jan P Erzberger, Melissa L Mott, and James M Berger. Structural basis for ATP-dependent DnaA assembly and replication-origin remodeling. Nature Structural & Molecular Biology, 13(8):676–683, August 2006. * [70] Ole Michelsen, M Joost Teixeira de Mattos, Peter Ruhdal Jensen, and Flemming G Hansen. Precise determinations of C and D periods by flow cytometry in Escherichia coli K-12 and B/r. Microbiology (Reading, England), 149(Pt 4):1001–1010, April 2003\. * [71] Jakob Frimodt-Møller, Godefroid Charbon, Karen A. Krogfelt, and Anders Løbner-Olesen. Control regions for chromosome replication are conserved with respect to sequence and location among Escherichia coli strains. Frontiers in Microbiology, 6(SEP):1–15, 2015. * [72] Risa Kitagawa, Toru Ozaki, Shigeki Moriya, and Tohru Ogawa. 
Negative control of replication initiation by a novel chromosomal locus exhibiting exceptional affinity for Escherichia coli DnaA protein. Genes and Development, 12(19):3032–3043, 1998. * [73] Jakob Frimodt-Møller, Godefroid Charbon, Karen A. Krogfelt, and Anders Løbner-Olesen. DNA Replication Control Is Linked to Genomic Positioning of Control Regions in Escherichia coli. PLoS Genetics, 12(9):1–27, 2016. * [74] Yukie Inoue, Hiroyuki Tanaka, Kazutoshi Kasho, Kazuyuki Fujimitsu, Taku Oshima, and Tsutomu Katayama. Chromosomal location of the DnaA-reactivating sequence DARS2 is important to regulate timely initiation of DNA replication in Escherichia coli. Genes to Cells, 21(9):1015–1023, 2016. * [75] Bjarke Bak Christensen, Tove Atlung, and Flemming G. Hansen. DnaA boxes are important elements in setting the initiation mass of Escherichia coli. Journal of Bacteriology, 181(9):2683–2688, 1999. * [76] Norbert S. Hill, Ryosuke Kadoya, Dhruba K. Chattoraj, and Petra Anne Levin. Cell size and the initiation of DNA replication in bacteria. PLoS Genetics, 8(3):14–16, 2012. * [77] T. Atlung and F. G. Hansen. Three distinct chromosome replication states are induced by increasing concentrations of DnaA protein in Escherichia coli. Journal of Bacteriology, 175(20):6537–6545, 1993. * [78] Yasuhiro Shiba, Yasuko Yokoyama, Yoshiko Aono, Takashi Kiuchi, Jin Kusaka, Kouji Matsumoto, and Hiroshi Hara. Activation of the Rcs signal transduction system is responsible for the thermosensitive growth defect of an Escherichia coli mutant lacking phosphatidylglycerol and cardiolipin. Journal of Bacteriology, 186(19):6526–6535, 2004. * [79] Daniel Camsund, Michael J. Lawson, Jimmy Larsson, Daniel Jones, Spartak Zikrin, David Fange, and Johan Elf. Time-resolved imaging-based CRISPRi screening. Nature Methods, 17(1):86–92, 2020. * [80] Stefan Klumpp, Zhongge Zhang, and Terence Hwa. Growth Rate-Dependent Global Effects on Gene Expression in Bacteria. Cell, 139(7):1366–1375, 2009. * [81] Matthew Scott, Carl W Gunderson, Eduard M Mateescu, Zhongge Zhang, and Terence Hwa. Interdependence of cell growth and gene expression: origins and consequences. Science (New York, N.Y.), 330(6007):1099–102, nov 2010. * [82] Chiara Saggioro, Anne Olliver, and Bianca Sclavi. Temperature-dependence of the DnaA–DNA interaction and its effect on the autoregulation of dnaA expression. Biochemical Journal, 449(2):333–341, 12 2012. * [83] Mia Panlilio, Jacopo Grilli, Giorgio Tallarico, Ilaria Iuliani, Bianca Sclavi, Pietro Cicuta, and Marco Cosentino Lagomarsino. Threshold accumulation of a constitutive protein explains E. coli cell-division behavior in nutrient upshifts. Proceedings of the National Academy of Sciences of the United States of America, 118(18), 2021. * [84] Naama Brenner, Erez Braun, Anna Yoney, Lee Susman, James Rotella, and Hanna Salman. Single-cell protein dynamics reproduce universal fluctuations in cell populations. The European Physical Journal E, 38(9):102, 2015. * [85] Hermannus Kempe, Anne Schwabe, Frédéric Crémazy, Pernette J. Verschure, and Frank J. Bruggeman. The volumes and transcript counts of single cells reveal concentration homeostasis and capture biological noise. Molecular Biology of the Cell, 26(4):797–804, 2015. * [86] Olivia Padovan-Merhar, Gautham P. Nair, Andrew G. Biaesch, Andreas Mayer, Steven Scarfone, Shawn W. Foley, Angela R. Wu, L. Stirling Churchman, Abhyudai Singh, and Arjun Raj. 
Single Mammalian Cells Compensate for Differences in Cellular Volume and DNA Copy Number through Independent Global Transcriptional Mechanisms. Molecular Cell, 58(2):339–352, 2015. * [87] Robert Ietswaart, Stefanie Rosa, Zhe Wu, Caroline Dean, and Martin Howard. Cell-Size-Dependent Transcription of FLC and Its Antisense Long Non-coding RNA COOLAIR Explain Cell-to-Cell Expression Variation. Cell Systems, 4(6):622–635.e9, 2017. * [88] Xiao-yu Zheng and Erin K O’Shea. Cyanobacteria Maintain Constant Protein Concentration despite Genome Copy-Number Variation. CellReports, 19(3):497 – 504, 04 2017. * [89] Johan Paulsson. Models of stochastic gene expression. Physics of Life Reviews, 2(2):157–175, 2005. * [90] M. Thattai and A. Van Oudenaarden. Intrinsic noise in gene regulatory networks. Proceedings of the National Academy of Sciences of the United States of America, 98(15):8614–8619, 2001. * [91] Nir Friedman, Long Cai, and X. Sunney Xie. Linking stochastic dynamics to population distribution: An analytical framework of gene expression. Physical Review Letters, 97(16):1–4, 2006. * [92] Vahid Shahrezaei and Peter S. Swain. Analytical distributions for stochastic gene expression. Proceedings of the National Academy of Sciences of the United States of America, 105(45):17256–17261, 2008. * [93] Ron Milo. What is the total number of protein molecules per cell volume? A call to rethink some published values. BioEssays, 35(12):1050–1055, 2013. * [94] Joris Paijmans and Pieter Rein Ten Wolde. Lower bound on the precision of transcriptional regulation and why facilitated diffusion can reduce noise in gene expression. Physical Review E - Statistical, Nonlinear, and Soft Matter Physics, 90(3):1–14, 2014. * [95] Katrin Schenk, Ana B. Hervás, Thomas C. Rösch, Marc Eisemann, Bernhard A. Schmitt, Stephan Dahlke, Luise Kleine-Borgmann, Seán M. Murray, and Peter L. Graumann. Rapid turnover of dnaa at replication origin regions contributes to initiation control of dna replication. PLOS Genetics, 13(2):1–32, 02 2017. * [96] Risa Kitagawa, Hironobu Mitsuki, Tuneko Okazaki, and Tohru Ogawa. A novel DnaA protein-binding site at 94.7 min on the Escherichia coll chromosome. Molecular Microbiology, 19(5):1137–1147, 1996. * [97] Franca Blaesing, Christoph Weigel, Michaela Welzeck, and Walter Messer. Analysis of the DNA-binding domain of Escherichia coli DnaA protein. Molecular Microbiology, 36(3):557–569, 2000. * [98] Hironori Kawakamii, Kenji Keyamura, and Tsutomu Katayama. Formation of an ATP-DnaA-specific initiation complex requires DnaA arginine 285, a conserved motif in the AAA+ protein family. Journal of Biological Chemistry, 280(29):27420–27430, 2005. * [99] Ingvild Flåtten, Solveig Fossum-Raunehaug, Riikka Taipale, Silje Martinsen, and Kirsten Skarstad. The DnaA Protein Is Not the Limiting Factor for Initiation of Replication in Escherichia coli. PLoS Genetics, 11(6):1–22, 2015. * [100] Kenta Nakamura and Tsutomu Katayama. Novel essential residues of Hda for interaction with DnaA in the regulatory inactivation of DnaA: Unique roles for Hda AAA¡sup¿+¡/sup¿ Box VI and VII motifs. Molecular Microbiology, 76(2):302–317, 2010. * [101] M. Charl Moolman, Sriram T.iruvadi Krishnan, Jacob W.J. Kerssemakers, Aafke van den Berg, Pawel Tulinski, Martin Depken, Rodrigo Reyes-Lamothe, David J. Sherratt, and Nynke H. Dekker. Slow unloading leads to DNA-bound $\beta$2-sliding clamp accumulation in live Escherichia coli cells. Nature communications, 5:5820, 2014. * [102] B. Y.M. Yung and A. Kornberg. 
Membrane attachment activates dnaA protein, the initiation protein of chromosome replication in Escherichia coli. Proceedings of the National Academy of Sciences of the United States of America, 85(19):7202–7205, 1988. * [103] Weidong Zheng, Zhenya Li, Kirsten Skarstad, and Elliott Crooke. Mutations in DnaA protein suppress the growth arrest of acidic phospholipid-deficient Escherichia coli cells. EMBO Journal, 20(5):1164–1172, 2001. * [104] Yasuhiro Shiba, Hiroyoshi Miyagawa, Hideki Nagahama, Kenji Matsumoto, Daitetsu Kondo, Satoshi Matsuoka, Kouji Matsumoto, and Hiroshi Hara. Exploring the relationship between lipoprotein mislocalization and activation of the Rcs signal transduction system in Escherichia coli. Microbiology, 158(5):1238–1248, 2012. * [105] Prabhat Mallik, Brian J Paul, Steven T Rutherford, Richard L Gourse, and Robert Osuna. DksA Is Required for Growth Phase-Dependent Regulation, Growth Rate-Dependent Control, and Stringent Control of fis Expression in Escherichia coli. Journal of Bacteriology, 188(16):5775–5782, 2006. * [106] Ingvild Flåtten and Kirsten Skarstad. The Fis protein has a stimulating role in initiation of replication in Escherichia coli in vivo. PLoS ONE, 8(12):1–9, 2013. * [107] Kenji Keyamura, Yoshito Abe, Masahiro Higashi, Tadashi Ueda, and Tsutomu Katayama. DiaA dynamics are coupled with changes in initial origin complexes leading to helicase loading. Journal of Biological Chemistry, 284(37):25038–25050, 2009. * [108] Michael B Elowitz, Arnold J Levine, Eric D Siggia, and Peter S Swain. Stochastic gene expression in a single cell. Science, 297(5584):1183 – 1186, 08 2002. * [109] Christopher C Govern and Pieter Rein ten Wolde. Optimal resource allocation in cellular sensing systems. Proceedings of the National Academy of Sciences of the United States of America, 111(49):17486–17491, December 2014. * [110] Shingo Nozaki, Yoshitaka Yamada, and Tohru Ogawa. Initiator titration complex formed at datA with the aid of IHF regulates replication timing in Escherichia coli. Genes to Cells, 14(3):329–341, 2009. * [111] Kazuyuki Fujimitsu, Masayuki Su’etsugu, Yoko Yamaguchi, Kensaku Mazda, Nisi Fu, Hironori Kawakami, and Tsutomu Katayama. Modes of overinitiation, dnaA gene expression, and inhibition of cell division in a novel cold-sensitive hda mutant of Escherichia coli. Journal of Bacteriology, 190(15):5368–5381, 2008. * [112] P. N. Heacock and W. Dowhan. Alteration of the phospholipid composition of Escherichia coli through genetic manipulation. Journal of Biological Chemistry, 264(25):14972–14977, 1989. * [113] Alexander Aranovich, Garik Y. Gdalevsky, Rivka Cohen-Luria, Itzhak Fishov, and Abraham H. Parola. Membrane-catalyzed nucleotide exchange on DnaA: Effect of surface molecular crowding. Journal of Biological Chemistry, 281(18):12526–12534, 2006. * [114] Jennifer Garner, Peter Durrer, Jennifer Kitchen, Josef Brunner, and Elliott Crooke. Membrane-mediated release of nucleotide from an initiator of chromosomal replication, Escherichia coli DnaA, occurs with insertion of a distinct region of the protein into the lipid bilayer. Journal of Biological Chemistry, 273(9):5167–5173, 1998. * [115] Taeko Nishiwaki-Ohkawa, Yohko Kitayama, Erika Ochiai, and Takao Kondo. Exchange of ADP with ATP in the CII ATPase domain promotes autophosphorylation of cyanobacterial clock protein KaiC. Proceedings of the National Academy of Sciences of the United States of America, 111(12):4455 – 4460, 03 2014. * [116] Joris Paijmans, David K Lubensky, and Pieter Rein ten Wolde. 
A thermodynamically consistent model of the post-translational Kai circadian clock. PLoS Computational Biology, 13(3):e1005415, March 2017. Supplemental Material: Robust replication initiation from coupled homeostatic mechanisms Overview. Two classes of mechanistic models for the regulation of replication initiation in E. coli have been proposed in the literature: Initiator accumulation models [24, 17, 16, 25, 26, 27] and initiator switch models [40, 12, 13, 41, 32, 42, 40]. We propose mechanistic models out of each class and test whether they are consistent with experiments. Then we combine a titration with a switch model and show that it can increase the robustness of the system in the presence of noise. This Supporting Information is structured into four parts: In the first part, we present the gene expression model we are using throughout this work (section S1). In the second part, we present a model from the initiator accumulation class (section S2) that is based on the accumulation of an initiator protein up to a threshold number, which is set by the fixed number of titration sites per chromosome. First, we show that in order to maintain stable cell cycles with the initiator accumulation model, the initiator production rate must be proportional to the volume of the cell (section S2.1). Then we demonstrate that while the Autoregulated Initiator Titration (AIT) model would ensure stable cell cycles at all growth rates if all titration sites were located at the origin, it exhibits over-initiation events in the overlapping replication-fork regime at high growth rates because, as experiments show, the sites are distributed randomly over the chromosome (section S2.2). In the third part of this Supporting Information, we present two initiator switch models based on a switch between an active and an inactive form of the initiator protein DnaA (section S3): The Lipid-DatA (LD) model is based on an origin density-dependent ultra-sensitivity switch of DnaA (section S3.2). The Lipid-DatA-DARS1/2-RIDA (LDDR) model includes all known activators and deactivators in E. coli and generates high amplitude oscillations at realistic activation and deactivation rates (section S3.3). In section S3.4 we elucidate the origin of adder and sizer correlations using the LD model, and we also show that the same correlations are observed in the full LDDR model. In section S4 we validate our model and present testable predictions. We first combine titration with an activation switch and show how titration sharpens the oscillations of the activation switch, increasing the precision of replication initiation (section S4.1). While a titration-based mechanism initiates replication precisely only at low growth rates and the activation switch does so only at higher growth rates, the combined titration- switch model initiates replication accurately at all growth rates. We then discuss the role of SeqA. We show that suppression of dnaA expression by SeqA can rescue the titration-based mechanism at high growth rates, but not at intermediate growth rates: in this regime, the switch is essential. In section S4.2 we then validate our theoretical model by comparing key predictions to experimental observations and we make several novel experimentally testable predictions (section S4.3). In this section, we also show that our results are robust to the precise type of coupling of the replication cycle to the cell division cycle (section S4.2.5). 
In the last section, we study two variants of our models where the lipid activation is either oriC-dependent or is removed entirely (section S5). ## S1. Growing cell model of gene expression In this section, we present the gene expression model, which underlies all our models. In the recently developed growing-cell model by Lin et al. [50], transcription is limited by the availability of RNAPs while translation is limited by the availability of ribosomes. In this model, the mRNA and protein copy numbers are proportional to the cell volume, as recent experiments indicate [83, 50, 84, 85, 86, 87, 88]. Concomitantly, the protein synthesis rate is, as observed very recently [83], proportional to the volume, which is a crucial requirement for the stability of the initiator accumulation model (see section S2.1). We start this section by deriving the basal protein synthesis rate in the growing-cell model (section S1.1). In section S1.2, we show how the synthesis rate of a constitutively expressed protein is proportional to the volume, such that its copy number increases exponentially in time over the course of the cell cycle while its concentration remains constant. In section S1.3 we then describe how gene regulation can be included in the growing cell model. ### S1.1 Basal gene expression In the gene expression model of Lin et al. [50], the genes and the mRNAs compete for the limiting pool of RNAPs and ribosomes, respectively [50]. Therefore, the transcription rate of a gene $i$ is directly proportional to the total number of RNAPs $n$ times the fraction of RNAPs $\phi_{\rm i}$ that are transcribing gene $i$. To quantify the gene allocation fraction $\phi_{\rm i}$, Lin et al. define an effective gene copy number $g_{\rm i}$ that accounts for its copy number and the binding strength of its promoter [50]. The gene allocation fraction of gene $i$ is then given by the effective gene copy number $g_{\rm i}$ divided by the sum over all effective gene copy numbers in the cell, $\phi_{\rm i}=g_{\rm i}/\sum_{\rm j}\,g_{\rm j}$. As the number of ribosomes is assumed to limit translation, the protein synthesis rate of gene $i$ is proportional to the number of ribosomes $N_{\rm R}$ times the fraction of ribosomes translating the mRNA of gene $i$. Assuming that the affinity of ribosomes binding to mRNA is equal for all types of mRNA $m_{\rm i}$, the ribosome allocation fraction $f_{\rm i}$ of gene $i$ is given by the number of mRNAs $m_{\rm i}$ of gene $i$ divided by the total number of mRNAs, thus $f_{\rm i}=m_{\rm i}/\sum_{\rm j}m_{\rm j}$. The growing cell model then gives rise to the following set of equations for the change in the number of mRNAs $m_{\rm i}$ and the number of proteins $p_{\rm i}$ of gene $i$: $\frac{dm_{\rm i}}{dt}=k_{\rm m}\,\phi_{\rm i}\,n-\frac{m_{\rm i}}{\tau_{\rm m}}$ (S1) and $\frac{dp_{\rm i}}{dt}=k_{\rm R}\,f_{\rm i}\,f_{\rm a}\,N_{\rm R}$ (S2) where $k_{\rm m}$ is the transcription rate of a single RNAP, $\tau_{\rm m}$ is the degradation time of the mRNA (taken to be equal and constant for all mRNAs), $k_{\rm R}$ is the translation rate of a ribosome, $f_{\rm a}$ is the fraction of actively translating ribosomes and $N_{\rm R}$ is the number of ribosomes.
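As a small numerical check of Eqs. S1-S2, the sketch below integrates the mRNA and protein numbers of a single constitutive gene while the total protein pool grows exponentially; the rate constants and proteome fractions are rough, assumed placeholders (chosen so that $\lambda=k_{\rm R}\,f_{\rm a}\,\Phi_{\rm R}\approx 0.35$ h-1, cf. Eq. S9), not fitted values.

```python
# Numerical sketch of the growing-cell expression model, Eqs. S1-S2.
# All parameter values are illustrative placeholders, not fitted values.
import numpy as np

k_m, tau_m   = 6.0, 0.05     # mRNAs per RNAP per h (assumed) and mRNA lifetime (h)
k_R, f_a     = 2.0, 0.7      # effective proteins per ribosome per h and active fraction (assumed)
Phi_R, Phi_n = 0.25, 0.01    # ribosomal and RNAP proteome fractions (assumed)
phi_i        = 2e-3          # gene allocation fraction of the gene of interest
lam          = k_R * f_a * Phi_R   # growth rate set by translation, ~0.35 1/h (cf. Eq. S9)

dt, N = 1e-4, 1.0e6          # Euler step (h) and initial total protein number
m_i, p_i = 0.0, phi_i * N    # the gene's mRNA starts at zero, its protein at steady state

for _ in range(int(0.5 / dt)):               # about half an hour of growth
    n_RNAP, N_R = Phi_n * N, Phi_R * N       # RNAP and ribosome numbers track the total protein pool
    M_tot = k_m * n_RNAP * tau_m             # total mRNA pool in quasi-steady state (sum_j phi_j = 1)
    dm_i = k_m * phi_i * n_RNAP - m_i / tau_m          # Eq. S1
    dp_i = k_R * (m_i / M_tot) * f_a * N_R             # Eq. S2 with f_i = m_i / M_tot
    m_i, p_i = m_i + dm_i * dt, p_i + dp_i * dt
    N += lam * N * dt                        # total protein number grows exponentially

print(f"m_i / (k_m phi_i n tau_m) = {m_i / (k_m * phi_i * Phi_n * N * tau_m):.2f}")  # ~1: the QSS of Eq. S3
print(f"p_i / N = {p_i / N:.2e}  (compare phi_i = {phi_i:.1e})")  # ~phi_i, i.e. [p_i] ~ phi_i*rho (cf. Eq. S12)
```

The mRNA settles into the quasi-steady state exploited in the next step of the derivation, and the protein number tracks the total pool so that its concentration stays essentially constant.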
Due to the fast production and degradation rate of the mRNA compared to the growth rate of the cell, we can approximate the mRNA number to be at a steady state such that $\langle m_{\rm i}\rangle=k_{\rm m}\,\phi_{\rm i}\langle n\rangle\,\tau_{\rm m}$ (S3) Plugging equation S3 into equation S2 and using that $\sum_{\rm j}\phi_{\rm j}=1$ gives the following general expression for the change in the number of proteins: $\frac{dp_{\rm i}}{dt}=k_{\rm R}\,\phi_{\rm i}\,f_{\rm a}\,N_{\rm R}$ (S4) The protein production rate of any gene $i$ is therefore proportional to the number of ribosomes $N_{\rm R}$ times the gene allocation fraction $\phi_{\rm i}$ of gene $i$. The gene allocation fraction $\phi_{\rm i}$ is a measure of the relative affinity and amount of gene $i$ with respect to all other genes in the cell. In the simplified scenario of an instantaneous replication of the entire DNA after replication initiation, replication of the DNA does not affect the gene allocation fraction. If the gene $i$ is not regulated, the affinity of gene $i$ is constant in time. If at a given growth rate the total affinity of all genes remains approximately constant in time, the gene allocation fraction $\phi_{\rm i}$ is constant in time too. ### S1.2 Constitutively expressed proteins In this section, we will first demonstrate that in the growing cell model, the protein production rate is directly proportional to the volume of the cell, which, as we will see in section S2.2, ensures the stability of the AIT model. The total number of proteins $N$ in the cell is given by the sum over all proteins $p_{\rm j}$ $N=\sum_{\rm j}p_{\rm j}$ (S5) and the fraction of proteins that are ribosomes is $\Phi_{\rm R}=\frac{N_{\rm R}}{N}.$ (S6) From equations S4, S5 and S6, and using that $\sum_{\rm j}\Phi_{\rm j}=1$, we find that the change in the total number of proteins in time is $\frac{dN}{dt}=\sum_{\rm j}\frac{dp_{\rm j}}{dt}=k_{\rm R}\,f_{\rm a}\,N_{\rm R}=k_{\rm R}\,f_{\rm a}\,\Phi_{\rm R}\,N$ (S7) while, defining the total number density $\rho\equiv N/V$, the change in the volume is $\frac{dV}{dt}=\frac{1}{\rho}\,\frac{dN}{dt}=k_{\rm R}\,f_{\rm a}\,\Phi_{\rm R}\,V$ (S8) Hence, the cell grows exponentially with a growth rate $\lambda=\frac{1}{N}\frac{dN}{dt}=\frac{1}{V}\frac{dV}{dt}=k_{\rm R}\,f_{\rm a}\,\Phi_{\rm R}$ (S9) Using equation S9 we can then derive the change in the number of a protein of gene $i$: $\frac{dp_{\rm i}}{dt}=\phi_{\rm i}\,k_{\rm R}\,f_{\rm a}\,N_{\rm R}=\phi_{\rm i}\,k_{\rm R}\,f_{\rm a}\,\Phi_{\rm R}\,N=\phi_{\rm i}\,\lambda\,N=\phi_{\rm i}\,\lambda\,\rho\,V$ (S10) Therefore, while in the standard model of gene expression the copy number of a constitutively expressed protein $i$ increases bi-linearly in time, in the growing cell model it increases exponentially over the course of the cell cycle. The change in the protein concentration of gene $i$ is then given by $\frac{d[p_{\rm i}]}{dt}=\frac{dp_{\rm i}}{dt}\,\frac{1}{V}-p_{\rm i}\,\frac{1}{V^{2}}\,\frac{dV}{dt}=\phi_{\rm i}\,\lambda\,\rho-\lambda\,[p_{\rm i}]$ (S11) At steady state, we find that the growth rate drops out and the steady state protein concentration is given by: $[p_{\rm i}]^{\ast}=\phi_{\rm i}\,\rho$ (S12) In order to investigate how the protein number and concentration of an unregulated protein changes over the course of the cell cycle, we evolve the volume of a cell according to to $dV/dt=\lambda V$ (see S8 and S9) and the protein number according to equation S10. 
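Before the replication and division events described next are added, the homeostasis expressed by equation S12 can already be verified on the growth and production laws alone. The following minimal sketch, with illustrative placeholder parameters, integrates equations S8 and S10 for a constitutively expressed gene and confirms that its concentration relaxes to $\phi_{\rm i}\,\rho$ regardless of the growth rate.

```python
import numpy as np

# Illustrative placeholder parameters (not the values used in the figures)
lam   = np.log(2) / 2.0   # growth rate [1/h] for a 2 h doubling time
rho   = 1e6               # total protein number density [1/um^3]
phi_i = 2e-3              # gene allocation fraction of gene i

dt, T = 1e-3, 20.0        # Euler step and total simulated time [h]
V, p  = 1.0, 0.0          # initial volume [um^3] and protein copy number
for _ in range(int(T / dt)):
    p += phi_i * lam * rho * V * dt   # Eq. S10: production proportional to the volume
    V += lam * V * dt                 # Eq. S8 with growth rate lam (Eq. S9)

print(f"[p_i] after {T:.0f} h: {p / V:.1f} per um^3")
print(f"steady-state prediction phi_i * rho (Eq. S12): {phi_i * rho:.1f} per um^3")
```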
Replication is initiated at a fixed volume per origin $v^{\ast}$ and the cell divides a fixed time $\tau_{\rm cc}$ after replication initiation. The exponential increase in the number of proteins over the course of the cell cycle can be seen in Figure S1 A. In the scenario where the entire chromosome is replicated instantaneously and the gene is not regulated, the gene allocation fraction $\phi_{\rm i}$ remains constant (Fig. S1 A, yellow line). While the number of a protein $p$ increases proportional to the volume of the cell (Fig. S1 A, blue line), the concentration remains perfectly constant in time (Fig. S1 A, red line). Figure S1: The concentration of differently regulated proteins in the growing cell model of gene expression (A, B) The volume $V(t)$, the gene allocation fraction $\phi(t)$, the number of proteins $N(t)$ and the total concentration $[p]_{\rm T}$ of a constitutively expressed protein within the growing cell model. The volume and the protein number are evolved according to equations S8 and S10, respectively. (A) While the protein number increases exponentially in time, the total concentration remains perfectly constant. (B) The change of the number and concentration of a constitutively expressed protein when the gene allocation fraction changes in time due to a finite time to replicate the entire chromosome. The gene is assumed to be located at the origin which causes a doubling of the allocation fraction at the moment of replication initiation. When the entire chromosome has been replicated, the gene allocation fraction is again constant. As a consequence, the concentration of a constitutively expressed gene exhibits weak oscillations due to the changes in the gene allocation fraction. The parameters in all simulations are $v^{\ast}=1\,\mu$m3, $\tau_{\rm cc}=1$ h, $T_{\rm C}=2/3$ h, $\rho=10^{6}\,\mu$m-1, $\tau_{\rm d}=2$ h and $\phi_{\rm i}=2\times 10^{-3}$. In reality the chromosome is not replicated instantly. This means that when the part that houses gene $i$ is replicated, the gene allocation fraction $\phi_{\rm i}$ rises transiently, as illustrated in the second panel of Figure S1 B. The transiently higher gene allocation fraction results in a temporal increase of the production rate (Figure S1 B, third panel), which gives rise to weak oscillations in the protein concentration over the course of the cell cycle. ### S1.3 Negatively autoregulated proteins Regulation of gene $i$ can be included by modifying the gene affinity $g_{\rm i}$. If gene $i$ is for example negatively autoregulated, the gene affinity becomes $g_{\rm i}=g_{\rm i}^{0}\,\frac{1}{1+\left(\frac{[p_{\rm i}]}{K_{\rm D}^{\rm p}}\right)^{n}}$ (S13) where $g_{\rm i}^{0}$ is the basal gene affinity if the promoter is not repressed at all, $[p_{\rm i}]$ the free initiator concentration, $K_{\rm D}^{\rm p}$ is the dissociation constant of the promoter and $n$ is the Hill coefficient. The protein production rate then becomes dependent on the protein concentration via the modified gene allocation fraction $\phi_{\rm i}$: $\displaystyle\frac{dp_{\rm i}}{dt}$ $\displaystyle=\phi_{\rm i}\,\lambda\,\rho\,V=\frac{g_{\rm i}}{\sum_{\rm j}g_{\rm j}}\,\lambda\,\rho\,V$ (S14) $\displaystyle=\phi_{\rm i}^{0}\,\frac{1}{1+\left(\frac{[p_{\rm i}]}{K_{\rm D}^{\rm p}}\right)^{n}}\,\lambda\,\rho\,V$ (S15) where we defined the basal gene allocation fraction $\phi_{\rm i}^{0}\equiv g_{\rm i}^{0}/\sum_{\rm j}g_{\rm j}$. By defining the gene allocation density as $\tilde{\phi}_{\rm i}^{0}=\phi_{\rm i}^{0}\,\rho$, we obtain Eq. 
1 of the main text for the production rate of a negatively autoregulated protein $p$ (with $i=p$): $\frac{dN_{\rm p}}{dt}=\frac{\tilde{\phi}_{\rm p}^{0}\,\lambda\,V}{1+\left(\frac{[p]}{K_{\rm D}^{\rm p}}\right)^{n}}$ (S16) ## S2. Initiator accumulation model In the initiator accumulation model, an initiator protein accumulates over the course of the cell cycle and replication is initiated when a threshold amount per origin is attained. We first show that a volume-dependent production rate is required to ensure stable replication cycles (section S2.1). We then present the Autoregulated Initiator Titration (AIT) model and investigate under what conditions the AIT model can ensure stable cell cycles (section S2.2). In the AIT model, a fixed number of titration sites per chromosome sets the critical number of initiators $n_{\rm p}^{\ast}$ that need to be accumulated in order to initiate replication (section S2.2.2). We first show that the model ensures stable cell cycles at all growth rates when all titration sites are located at the origin (section S2.2.3). When the titration sites are however homogeneously distributed on the chromosome, which is a good approximation for the experimentally reported random distribution [51, 28], reinitiation events occur at high growth rates (section S2.2.4). Finally, we derive an analytical expression for the initiation volume in the AIT model and investigate under what conditions the initiation volume becomes independent of the growth rate of the cell (section S2.2.5). All parameters used in the AIT in the main part of the paper and in the SI are discussed in section S2.2.1 and can be found in Table S1. ### S2.1 Stability of the initiator accumulation model In this section, we demonstrate that a volume-dependent protein production rate is essential to obtain stable cell cycles with the initiator accumulation model. The bacterium E. coli must initiate replication once per division cycle in order to be able to distribute two copies of the chromosome in the two daughter cells. In good nutrient conditions, E. coli grows exponentially with a growth rate $\lambda$ such that the volume is given by $V(t)=V_{\rm b}\,e^{\lambda t}$ (S17) The growth rate $\lambda$ can fluctuate due to noise, but on average cells double their entire volume after the cell-doubling time $\langle\tau_{\rm d}\rangle=\ln(2)/\langle\lambda\rangle$. As in E. coli replication is initiated synchronously at all origins also in the overlapping fork regime at high growth rates, we can define the inter-initiation time $\tau_{\rm ii}$ as the time between two consecutive initiation events. Any molecular mechanism for replication initiation must ensure that the average inter-initiation time $\langle\tau_{\rm ii}\rangle$ equals the average cell-doubling time $\langle\tau_{\rm d}\rangle$. If that is not the case, the average origin density, $\langle\rho\rangle=\langle n_{\rm ori}\rangle/\langle V\rangle$, does not remain constant over the course of several generations. In the initiator accumulation models, an initiator protein is accumulated up to a fixed threshold per origin at which replication is initiated. In the AIT model in section S2.2 we will show that a constant number of high-affinity binding sites for the initiator on the chromosome can ensure such a constant number threshold per origin. Given that this threshold per origin is fixed, the time from one initiation event to the next is determined by how fast the initiator proteins are synthesized. 
In contrast to the recently proposed growing cell model presented in section S1, in an arguably more traditional model of gene expression, the protein production rate of a constitutively expressed gene is given by a constant basal $\alpha$ rate times the gene copy number $g$ [80, 89, 90, 91, 92]: $\frac{dN}{dt}=\alpha\,g$ (S18) Assuming again that the gene is located at the origin, the number of genes $g$ equals the number of origins $n_{\rm ori}$. Thus, a constant number of initiators per origin $\Delta n=\Delta N/n_{\rm ori}$ is accumulated in a time interval $\Delta t$: $\Delta n=\alpha\,\Delta t$ (S19) As in the initiator accumulation model replication is initiated after a constant amount of proteins per origin $\Delta n^{\ast}$ has been accumulated, we find that the inter-initiation time $\tau_{\rm ii}$ in this model is given by $\tau_{\rm ii}=\frac{\Delta n^{\ast}}{\alpha}$ (S20) As the number of initiators that need to be accumulated per origin $\Delta n^{\ast}$ is constant and the basal rate does not explicitly depend on the volume in the traditional model of gene expression, the inter-initiation time thus is constant. If the basal production rate is not set such that the average replication period exactly equals the doubling time of the cell, $\tau_{\rm ii}=\tau_{\rm d}$, this system gives rise to an instability in the chromosome density. We verify this prediction by performing simulations. The cell volume and the number of initiators are evolved according to equations S17 and S18 and replication is initiated when the number of initiators per origin $n(t)=N(t)/n_{\rm ori}(t)$ equals the critical number per origin $n^{\ast}$. At initiation, the number of origins doubles and the number of initiators per origin in generation $i$ right after initiation thus becomes $n_{\rm i}=n^{\ast}/2$. The number of initiators per origin that needs to be accumulated until the next initiation event is therefore $\Delta n_{\rm i}=n^{\ast}-n_{\rm i}=n^{\ast}/2$. Following the Cooper-Helmstetter model [5], the cell divides a constant cycling time $\tau_{\rm cc}$ after replication initiation. In Figure S2 A, the replication period $\tau_{\rm ii}$ is chosen to be shorter than the doubling time $\tau_{\rm d}$ of the cell. As every replication initiation event triggers a cell division event, the division period $\tau_{\rm div}$ equals the replication period $\tau_{\rm div}=\tau_{\rm ii}<\tau_{\rm d}$. As the replication period and thus the division period is smaller than the doubling time of the cell, the volume of the cell decreases over several generations while the gene density increases. We emphasise that even when $\tau_{\rm ii}$ is chosen to be equal to $\tau_{\rm d}$, any noise, even that coming from the finite machine-precision, will cause the gene density to eventually become unstable. To show that this instability does not depend on the choice of the division control, we also study another model in which cell division is triggered at a fixed division volume $V_{\rm d}$ instead of a fixed time $\tau_{\rm cc}$ after replication initiation. Because in this model the division cycle is independent of the replication cycle and division is triggered at a fixed division volume $V_{\rm d}$, the division cycle naturally remains stable (Fig. S2 B). The replication cycle is however not coupled to this division cycle, because the synthesis rate of the accumulator and the replication threshold are constant, i.e. do not depend on the volume. 
Replication is therefore initiated at a period that is again shorter than the doubling time of the cell, $\tau_{\rm ii}<\tau_{\rm d}$. Also in this scenario, the gene density increases over the course of several generations. The initiator accumulation model becomes stable by introducing a volume-dependent production rate, which couples the replication cycle to the cell division cycle. We take the production rate to be $\frac{dN}{dt}=\alpha\,V^{\gamma}$ (S21) where $\gamma$ is an exponent quantifying the strength of the volume dependence of the production rate; for $\gamma=0$ the production rate becomes independent of the volume. We show that for the exponents $\gamma=1$ (Fig. S2 C) and $\gamma=0.5$ (Fig. S2 D) the system recovers from an initial perturbation and becomes stable. The relaxation time increases with decreasing volume dependence. We have demonstrated that the initiator accumulation model requires a volume-dependent production rate. In the traditional model of gene expression, the production rate of an unregulated protein is proportional to the gene copy number times a constant production rate [80, 89, 90, 91, 92] and thus cannot fulfill this requirement (it corresponds to $\gamma=0$). In the previous section, we showed that in the growing cell model, which we use throughout this work, the production rate is directly proportional to the volume of the cell, thus corresponding to the scenario $\gamma=1$. Figure S2: For the initiator accumulation model to become stable, the production rate needs to depend on the volume of the cell (A, B, C, D) The volume $V(t)$ (according to equation S17), the number of proteins $N(t)$ together with the critical threshold $N^{\ast}=n^{\ast}\,n_{\rm ori}$, the total concentration $[p]_{\rm T}=N(t)/V(t)$, and the origin density $\rho(t)=n_{\rm ori}(t)/V(t)$ as a function of time. (A, B) The protein is produced at a constant rate times the number of genes (according to equation S18). This gives rise to an unstable chromosome density independent of the division mechanism. (A) Cell division is triggered a constant cycling time $\tau_{\rm cc}$ after replication initiation. The time between consecutive replication events $\tau_{\rm ii}$ is given by equation S20 and is shorter than the doubling time of the cell. Thus, the origin density increases in time. (B) Cell division is triggered at a fixed division volume $V_{\rm d}=1$ $\mu m^{3}$ and is thus independent of the replication cycle. Again, the replication period $\tau_{\rm ii}$ is shorter than the doubling time of the cell and the origin density increases in time. (C, D) Now, the initiator protein is produced proportional to the volume of the cell according to equation S21 with an exponent $\gamma$. Cell division is triggered a fixed time $\tau_{\rm cc}$ after replication initiation. For any positive exponent $\gamma>0$, the gene density stabilizes after an initial perturbation. (C) For an exponent of $\gamma=1$, the gene density relaxes to a constant average density after an initial perturbation. The total initiator concentration becomes perfectly constant in time. (D) For an exponent of $\gamma=0.5$ the relaxation time increases and the total concentration oscillates weakly over the course of the cell cycle. The system relaxes to a stable gene density and initiator concentration. The parameters of all simulations are $\tau_{\rm cc}=1$ h, $\alpha=110$ h-1, $\tau_{\rm d}=2$ h, $n^{\ast}=300$. 
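The instability for a volume-independent production rate and its cure by a volume-dependent one can be reproduced with a few lines of code. The sketch below implements the deterministic accumulation model with the production law of equation S21, initiation when the number of initiators per origin reaches $n^{\ast}$, and division a fixed time $\tau_{\rm cc}$ after initiation; the parameter values are illustrative and chosen only to expose the qualitative behaviour. For $\gamma=0$ the origin density at initiation drifts over the generations, whereas for $\gamma=1$ it converges to a stable value.

```python
import numpy as np

def simulate(gamma, alpha=110.0, n_star=300, tau_d=2.0, tau_cc=1.0, n_gen=40, dt=1e-3):
    """Deterministic accumulation model: dN/dt = alpha * V**gamma (Eq. S21).
    Replication is initiated when N per origin reaches n_star; the cell divides
    a fixed time tau_cc after initiation. Returns the origin density at initiation."""
    lam = np.log(2) / tau_d
    V, N, n_ori = 1.0, 0.0, 1          # initial volume [um^3], initiator number, origins
    pending_divisions, densities = [], []
    t = 0.0
    while len(densities) < n_gen:
        V += lam * V * dt              # exponential growth, Eq. S17
        N += alpha * V**gamma * dt     # initiator production, Eq. S21
        t += dt
        if N / n_ori >= n_star:        # threshold reached: initiate at all origins
            densities.append(n_ori / V)
            n_ori *= 2
            pending_divisions.append(t + tau_cc)
        if pending_divisions and t >= pending_divisions[0]:
            pending_divisions.pop(0)   # divide: halve volume, initiators and origins
            V, N, n_ori = V / 2, N / 2, n_ori // 2
    return densities

for gamma in (0.0, 1.0):
    d = simulate(gamma)
    print(f"gamma = {gamma:.0f}: origin density at initiation, "
          f"first = {d[0]:.2f}, last = {d[-1]:.2f} per um^3")
```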
Table S1: Parameters used in the AIT model

Parameter | Name | Value | Motivation
---|---|---|---
$\phi_{\rm 0}$ | gene allocation fraction | $10^{-3}$ | set to match initiation volume reported in [4]
$K_{\rm D}^{\rm p}$ [$\mu\rm{m}^{-3}$] | dissociation constant initiator promoter | 200 | [58]
$n$ | Hill coefficient initiator | 5 | [58]
$n_{\rm s}$ | number of titration sites per chromosome | 300 | [51, 28]
$K_{\rm D}^{\rm ori}$ [$\mu\rm{m}^{-3}$] | dissociation constant origin | 20 | [29]
$K_{\rm D}^{\rm s}$ [$\mu\rm{m}^{-3}$] | dissociation constant titration sites | 1 | [29]
$\rho$ [$\mu\rm{m}^{-3}$] | number density | $10^{6}$ | [93]
$D_{\rm D}$ | noise strength DnaA | 100 | set to match CV from [3]
$T_{\rm C}$ [h] | C-period | 2/3 | [5]
$T_{\rm D}$ [h] | D-period | 1/3 | [5]
$\lambda$ [h-1] | growth rate | 0.35-1.73 | [4, 3]

* One molecule per cubic micrometer corresponds to approximately one nM ($1~{}\mu\rm{m}^{-3}=1.67$ nM).

### S2.2 The AIT model

In this section, we present the AIT model that is consistent with the experimental data on the cell-cycle network of E. coli. In the AIT model, the initiator protein is DnaA, which is negatively autoregulated and binds to high-affinity titration sites on the DNA. Here we first discuss the parameters used in the AIT model (section S2.2.1). Then we show how a fixed number of titration sites per chromosome can set the critical number of initiators required for replication initiation (section S2.2.2). Next, in section S2.2.3, we show that the AIT model ensures stable cell cycles at all growth rates when the titration sites are located closely to the origin. Then we show that the experimentally reported random titration site distribution on the chromosome can give rise to premature reinitiation events at high growth rates (section S2.2.4). In section S2.2.5, we derive an analytical expression for the initiation volume in the AIT model and discuss its growth rate dependence. Finally, we show that gene expression noise in the production rate of the initiator protein DnaA naturally gives rise to the experimentally observed initiation adder (section S2.2.6). All parameters used in the AIT model in the main part of the paper and in the SI can be found in Table S1.

#### S2.2.1 Biological parameters of the AIT model

In this section, we discuss the experimentally found parameters and compare them to the ones used in the simulations of the AIT model. The parameters of the AIT model used both in the main figures and in the Supplementary Information can be found in Table S1. The protein DnaA in E. coli is generally referred to as the initiator protein, as its ATP-bound form is required to bind to the origin for initiating replication [12]. Both forms of the protein DnaA, ATP-DnaA and ADP-DnaA, have strong affinity for an asymmetric 9 bp consensus sequence on the DNA, the DnaA box [12]. In the replication origin region of E. coli several DnaA boxes are present, including R1-R4 and M. [51]. In total, 308 DnaA boxes of the stringent definition (5’- TT $\rm{}^{A}/_{T}$ TNCACA) have been found on the E. coli genome [51]. The dissociation constant of DnaA binding to the DnaA boxes on the DNA lies in the range of $K_{\rm D}^{\rm s}=1-50$ nM, depending on the flanking sequences [29]. While for some DnaA boxes, the binding was non-specific $K_{\rm D}^{\rm s}\geq 200$ nM, the highest affinity was found for the DnaA boxes R1 and R4 in the origin with $K_{\rm D}^{\rm s}=1$ nM. In E. 
coli, the approximately three hundred 9-mer DnaA boxes are randomly distributed on the E. coli chromosome [28, 51]. The dnaA gene is regulated by two promoters, dnaAp1 and dnaAp2, with a DnaA box located between them. dnaAp2 is the stronger promoter and contributes 60–80 % of the dnaA transcripts [58]. Both ATP-DnaA and ADP-DnaA bind cooperatively to these two promoters, but the repression via ATP-DnaA is more efficient [58]. As there are five binding sites for DnaA in the promoter region [58], we choose a Hill coefficient of $n=5$ in the simulations. In the AIT model we used $n_{\rm s}=300$ titration sites per chromosome with a dissociation constant of $K_{\rm D}^{\rm s}=1$ nM (Table S1). We approximate the experimentally reported random distribution of titration sites on the chromosome [28, 51] by a homogeneous distribution. At a concentration of ATP- DnaA of approximately $[D]_{\rm ATP}=100$ nM, the expression of DnaA was reduced by 50 % [58]. Therefore, we used in the AIT model for the promoter a dissociation constant of $K_{\rm D}^{\rm p}=100$ nM. The dissociation constant of DnaA for the origin was chosen to be $K_{\rm D}^{\rm ori}=20$ nM, reflecting the combination of high and intermediate affinity of the titration sites required to be filled by ATP-DnaA in order to initiate replication. Using the experimentally reported topology of the biochemical network in combination with the growing cell model of gene expression, we obtain stable cell cycles with the AIT at low growth rates, but not at high growth rates as explained in the main text of the paper and in section S2.2.4. #### S2.2.2 The titration sites In this section, we present how the titration sites set a fixed replication threshold, such that a fixed number of initiator proteins needs to be accumulated per number of origin between consecutive replication initiation events. We discuss why the quasi-equilibrium assumption is appropriate and calculate the concentration of free initiator proteins as a function of the total initiator protein concentration $[p]_{\rm T}$ and of the total titration site concentration $[s]_{\rm T}$ in the cell. Binding and unbinding rates of DnaA binding to the titration sites are fast. In the main text, we assumed that the binding and unbinding of the initiator proteins to the titration sites is well described by a quasi-steady-state. Here we show that the binding and unbinding dynamics are relatively fast compared to the doubling time of the cell, such that this assumption is well justified. It seems reasonable to assume that DnaA finds its target sites in a way that is similar to that of other transcription factors, such as the lac repressor whose binding dynamics has been well characterized [60]. These transcription factors move by facilitated diffusion, i.e. combining 3D with 1D diffusion along the DNA. Elf et al. [60] have measured that the effective diffusion constant of transcription factors in E. coli is of the order of $D_{\rm eff}=0.4\,\mu$m2/s. Assuming the binding rate is diffusion-limited, the binding rate is given by $k_{\rm on}=4\pi\sigma D_{\rm eff}$. For an estimated cross section in the order of $\sigma\approx 10^{-2}\mu$m [94], the binding rate therefore becomes $k_{\rm on}\approx 0.05\,\mu$m3/s. The time for a transcription factor to bind to its target site is given by one over the concentration of the transcription factor $[c]$ times the binding rate: $\tau_{\rm on}=([c]\times k_{\rm on})^{-1}$. With a typical volume of an E. 
coli cell of $V=1$ $\mu$m3, the search time of one transcription factor for finding its target site on the DNA should then be $\tau_{\rm on}=k_{\rm on}^{-1}\times V=20$ s. This estimate compares well to the measured value of $\tau_{\rm on}=65-360$ s by Elf et al. [60]. The dissociation constant of DnaA binding to the DnaA boxes on the DNA is in the range of $K_{\rm D}^{\rm s}=1-50$ nM [29]. Using $K_{\rm D}^{\rm s}=k_{\rm off}/k_{\rm on}$ allows us to estimate $k_{\rm off}=K_{\rm D}^{\rm s}\times k_{\rm on}\approx 0.015-0.8$ s-1. With an average concentration of the initiator protein DnaA in E. coli of $[D]_{\rm T}\approx$ 400 $\mu\rm{m}^{-3}$ [37], the correlation time for binding and unbinding then becomes $\tau=1/(k_{\rm on}\,[D]_{\rm T}+k_{\rm off})\approx 0.16$ s. This is much faster than the timescale at which the volume changes, set by the growth rate. Recent FRAP experiments combined with single molecule tracking experiments show that DnaA rapidly moves between chromosomal binding sites and has a residence time of less than a second [95]. Thus, the quasi-equilibrium approximation of the initiator binding to the titration sites we make is well justified. Concentration of free initiator proteins in the quasi-equilibrium assumption. As binding and unbinding dynamics of the initiator protein to the titration sites are relatively fast, we can assume for simplicity a quasi-equilibrium state of the concentration of free initiator proteins $[p]=K_{\rm D}^{\rm s}\,[sp]/[s]$ with the dissociation constant $K_{\rm D}^{\rm s}$. At every given total titration site concentration $[s]_{\rm T}=[s]+[sp]$ and total initiator protein concentration $[p]_{\rm T}=[p]+[ps]$, the average free initiator protein concentration $[p]$ is given by the quadratic equation $\displaystyle[p]([s]_{\rm T},[p]_{\rm T})=$ $\displaystyle[p]_{\rm T}-\frac{K_{\rm D}^{\rm s}+[s]_{\rm T}+[p]_{\rm T}}{2}$ $\displaystyle+\frac{\sqrt{(K_{\rm D}^{\rm s}+[s]_{\rm T}+[p]_{\rm T})^{2}-4\,[s]_{\rm T}\,[p]_{\rm T}}}{2}$ (S22) We use this expression in the main text to calculate at every given total titration site concentration and total initiator concentration in a cell the concentration of initiators freely diffusing in the cytoplasm. As can be seen in Figure 2A of the main text (and in Fig. S3 A and C), as long as there are more titration sites than proteins in the cell, the free DnaA concentration remains low. When the total number of DnaA proteins exceeds the total number of titration sites, the free concentration quickly rises and replication is initiated when the critical free initiator concentration $K_{\rm D}^{\rm ori}$ is attained. The fixed number of titration sites per chromosome therefore sets the critical number of initiators that need to be accumulated in order to reach the critical free initiator concentration $K_{\rm D}^{\rm ori}$ in the cytoplasm. #### S2.2.3 The AIT model ensures stable cell cycles at all growth rates when all titration sites are located closely to the origin The three key variables of the AIT model are the volume of the cell $V(t)$, the total number of DnaA proteins $N_{\rm p}(t)$ and the total number of titration sites $N_{\rm s}(t)$ in the cell. In the following we derive expressions for these three quantities and show that the AIT model gives rise to stable replication cycles at all growth rates when all titration sites are located at the origin. 
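Throughout this derivation and in the simulations below, the free initiator concentration is obtained at every instant from the quasi-equilibrium relation of equation S22. A minimal helper implementing this relation is sketched below; the example values loosely echo Table S1 (converted to nM) but are meant only to illustrate the sharp rise of $[p]$ once the initiators outnumber the titration sites.

```python
import numpy as np

def free_initiator(p_tot, s_tot, K_D_s=1.67):
    """Free initiator concentration [p] from Eq. S22, for quasi-equilibrium binding
    to titration sites; p_tot, s_tot and K_D_s must share the same units (here nM)."""
    b = K_D_s + s_tot + p_tot
    return p_tot - 0.5 * b + 0.5 * np.sqrt(b**2 - 4.0 * s_tot * p_tot)

# Illustrative example: 300 titration sites in a 1 um^3 cell (~500 nM of sites),
# with K_D^s ~ 1 um^-3 ~ 1.67 nM as in Table S1.
s_tot = 300 * 1.67
for p_tot in (0.5 * s_tot, 0.95 * s_tot, 1.05 * s_tot, 1.5 * s_tot):
    print(f"[p]_T = {p_tot:6.1f} nM  ->  free [p] = {free_initiator(p_tot, s_tot):6.2f} nM")
```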
From the growing cell gene expression model we derived the following volume-dependent expression for the change in the number of a negatively autoregulated protein $p$ (see section S1.3): $\frac{dN_{\rm p}}{dt}=\frac{\phi_{\rm p}^{0}\,\lambda\,\rho}{1+\left(\frac{[p]}{K_{\rm D}^{\rm p}}\right)^{\rm n}}\,V$ (S23) with the gene allocation fraction $\phi_{\rm p}^{0}$, the growth rate $\lambda$, the number density $\rho=N/V$, the free initiator concentration $[p]$, the dissociation constant of the promoter $K_{\rm D}^{\rm p}$ and the Hill coefficient $n$. As the DnaA gene is located very closely to the origin, we assume that at the moment of replication initiation the gene number doubles instantaneously. We summarize the terms that do not depend on the cell volume or the growth rate in the gene allocation density $\tilde{\phi}_{\rm p}^{0}=\phi_{\rm p}^{0}\,\rho$ (S24) and obtain Eq. 1 of the main text $\frac{dN_{\rm p}}{dt}=\frac{\tilde{\phi}_{\rm p}^{0}\,\lambda\,V}{1+\left(\frac{[p]}{K_{\rm D}^{\rm p}}\right)^{n}}.$ (S25) As explained in the main text, in the AIT model we explicitly model the exponentially growing cell with the growth rate $\lambda$: $\frac{dV}{dt}=\lambda\,V$ (S26) Replication is initiated when the amount of initiators exceeds the number of titration sites per chromosome (see section S2.2.2). Here, for simplicity, we assume that all titration sites are located at the origin and therefore the total number of titration sites is doubled instantaneously after replication initiation. In the next section, we present the more realistic scenario that the titration sites are distributed homogeneously along the chromosome. Based on the experimental observation that the cell divides an approximately constant time $\tau_{\rm cc}$ after replication initiation, we assume here that $\tau_{\rm cc}$ is constant (see section S4.2.5 for scenario where $\tau_{\rm cc}$ is not constant). At cell division, not only the volume, but also the total number of initiators and of titration sites is divided by two. Evolving the number of initiator proteins and the volume according to equations S25 and S26, respectively, we find that the total DnaA concentration remains approximately constant in time (Fig. S3 A and B). The weak oscillations in the total concentration arise from the effect of a finite replication time of the chromosome as explained in section S1.2. When the total number of DnaA proteins exceeds the total number of titration sites on the chromosome, the free DnaA concentration rises and at the critical initiation concentration $K_{\rm D}^{\rm ori}$, replication is initiated. As here all titration sites are located at the origin, the number of titration sites doubles and the free concentration drops immediately after replication initiation both at high and at low growth rates (Fig. S3 A and B). Only when again enough initiator proteins have been accumulated is a new round of replication initiated. The AIT model therefore gives rise to stable cell cycles at all growth rates (Fig. S3 A and B), when all titration sites are located on the origin. An open question remains what role negative autoregulation plays in the AIT model. In order to attain the critical initiation concentration $K_{\rm D}^{\rm ori}$ at the origin, the dissociation constant of the promoter of DnaA $K_{\rm D}^{\rm p}$ must be higher than $K_{\rm D}^{\rm ori}$. At the same, the mechanism of titration requires that the affinity of the titration sites is higher than that of the origin: $K_{\rm D}^{\rm s}<K_{\rm D}^{\rm ori}$. 
Combining these two requirements yields: $K_{\rm D}^{\rm s}<K_{\rm D}^{\rm ori}<K_{\rm D}^{\rm p}$. The free protein concentration $[p]$ thus remains (far) below the promoter dissociation constant $K_{\rm D}^{\rm p}$, which means that the promoter is repressed only weakly and proteins are produced approximately at the maximal rate. Therefore, equation S25 can be approximated by $\frac{dN_{\rm p}}{dt}\approx\tilde{\phi}_{\rm p}^{0}\,\lambda\,V$ (S27) The stability of the AIT model arises from the volume dependence in the initiator production rate in equation S25 as explained in section S2.1. #### S2.2.4 The homogeneous titration site distribution causes reinitiation events at high growth rates In the previous section, we assumed for simplicity that all titration sites are located at the origin. Yet, experiments indicate that the titration sites are distributed approximately homogeneously on the chromosome [28, 51]. Here, we investigate how a homogeneous titration site distribution on the chromosome affects the stability of the cell cycles. Figure S3: A homogeneous titration site distribution on the chromosome in the AIT model causes reinitiation events at high growth rates (A, B, C, D): The volume $V(t)$, the number of initiator proteins $N_{\rm p}(t)$ (black line) and titration sites $N_{\rm s}(t)$ (yellow line), the total concentration of initiator proteins $[p]_{\rm T}(t)$ together with the dissociation constant of the regulator $K_{\rm D}^{\rm r}$ (dotted red line), and the concentration of initiator proteins in the cytoplasm $[p](t)$ as a function of time (in units of the doubling time of the cell $\tau_{\rm d}$) for $\tau_{\rm d}=2$ h (A, C) and $\tau_{\rm d}=35$ min (B, D), respectively. When the number of initiator proteins per origin $n_{\rm p}(t)$ exceeds the number of titration sites per origin $n_{\rm s}$ (yellow dashed line), the free concentration $[p](t)$ rapidly rises to reach the threshold concentration $K_{\rm D}^{\rm ori}$ (blue dashed line) for initiating a new round of replication. The blue arrows indicate that the cell divides a constant cycling time $\tau_{\rm cc}$ after replication initiation. During the blocked period $\tau_{\rm b}$ (red shaded area), no new round of replication can be initiated. (A, B) If all titration sites are located at the origin, the free initiator concentration $[p](t)$ decreases immediately after replication is initiated, independent of whether the doubling time of the cell $\tau_{\rm d}$ is larger (A) or smaller (B) than the time $T_{\rm C}$ to replicate the entire chromosome. (C) When the titration sites are distributed homogeneously along the chromosome, the free initiator concentration decreases during the entire replication time $T_{\rm C}$ at low growth rates. As the time to produce new titration sites is still shorter than the time to synthesize new initiator proteins, we obtain regular stable cell cycles in this regime. (D) When the doubling time is however smaller than the time to replicate the entire chromosome, $\tau_{\rm d}<T_{\rm C}$, newly replicated titration sites are filled with new proteins faster than they are replicated. After a short blocked period $\tau_{\rm b}$, replication is reinitiated. As a result, each long (sub)cycle is followed by a very short one, together forming the cell cycle. Moreover, replication is no longer initiated at a constant volume per origin; instead, the initiation volume oscillates over time. The appearance of premature reinitiation events suggests that replication initiation in E. 
coli can not fully be explained by a titration-based mechanism. (E) The coefficient of variation ${\rm CV}=\sigma/\mu$ with the standard deviation $\sigma$ and the average initiation volume $\mu=\langle v^{\ast}\rangle$ as a function of the growth rate for the AIT model with homogeneous titration site distribution. Due to the rapid reinitiation events shown in (D), the coefficient of variation increases strongly in the overlapping fork regime at high growth rates. Varying the total number of titration sites in concert with the dissociation constant of the titration sites $K_{\rm D}^{\rm s}$ such that the initiation volume remains constant and equal to the experimentally observed initiation volume [4] (by solving equation S22 for $v^{\ast}$) cannot prevent these reinitiation events. This demonstrates that the failure of the titration model at high growth rates is independent of the precise parameter choice; it is thus a robust result. When the titration sites are distributed homogeneously along the chromosome, the number of titration sites $N_{\rm s}(t)$ is not directly proportional to the number of origins anymore but increases linearly from the moment of initiation of replication $t_{\rm i}$ until the end of replication at $t_{\rm i}+T_{\rm C}$: $N_{\rm s}(t)=\begin{cases}N_{\rm 0}&\text{for }t<t_{\rm i}\\\ N_{\rm 0}+N_{\rm 0}\,\frac{t-t_{\rm i}}{T_{\rm C}}\,&\text{for }t_{\rm i}\leq t<t_{\rm i}+T_{\rm C}\\\ 2\,N_{\rm 0}&\text{for }t\geq t_{\rm i}+T_{\rm C}\end{cases}$ (S28) with the C-period $T_{\rm C}\approx 40$ min being the time to replicate the entire chromosome and $N_{\rm 0}=n_{\rm s}\,n_{\rm ori}$ is the total number of titration sites before replication initiation, given by the number $n_{\rm s}$ of titration sites per chromosomes times the number $n_{\rm ori}$ of origins before replication initiation. In the main part of the paper we used the experimental observation that the cell divides an approximately constant cycling time $\tau_{\rm cc}$ after replication has been initiated [4]. This cycling time can be split into two times $\tau_{\rm cc}=T_{\rm C}+T_{\rm D}$, the C-period and the D-period: During the C-period, the DNA is being replicated and during the D-period the chromosomes are being separated and the cell divides [5, 3, 4]. The total number of binding sites before initiation $N_{0}$ will only be doubled, when the entire chromosome has been replicated, thus after the end of the C-period $T_{\rm C}$. In the low growth regime, the time to replicate the entire chromosome $T_{\rm C}$ is shorter than the time to double the volume of the cell $\tau_{\rm d}$. The time it takes to double the number of titration sites upon replication initiation is therefore shorter than the time to double the number of initiation proteins. This results in a gradual decrease of the free initiator concentration upon replication initiation (Fig. S3 C, lowest panel). In favorable growth conditions, the doubling time of E. coli can however be shorter than the time it takes to replicate the entire chromosome $T_{\rm C}$. As a result, the rate at which new titration sites are formed upon the first replication initiation event (marked by the dashed vertical lines) is therefore lower than the rate at which initiator proteins are produced (Fig. S3 D); the number of titration sites (yellow line) rises slower than the number of initiators (black line). This means that after the first replication initiation event, the free initiator concentration continues to rise (lower row). 
To prevent immediate reinitiation, we introduce a refractory or ‘eclipse’ period of $\tau_{\rm b}\approx 10$ min after replication initiation during which replication initiation is blocked (red shaded area), mimicking the effect of SeqA [52, 53, 54, 12]. When this eclipse period is over, a new round of replication is initiated, which triples the rate at which new titration sites are formed. Now the rate of titration-site formation is higher than the rate at which new initiator proteins are produced, causing the concentration of free initiator to go down. At some point, the first round of replication is finished, causing a small decrease in the rate at which new titration sites are formed and some time later also the next round is finished, causing the number of titration sites to become constant. Then the time $\tau_{\rm cc}$ after the first initiation event is reached and the cell divides. After this division event it grows briefly and then it divides again, a time $\tau_{\rm cc}$ after the second initiation event in the previous cycle. A given cell cycle thus consists of a long and a short cycle, such that the average division time (time from birth to death) equals the doubling time $\tau_{\rm d}=\ln(2)/\lambda$. These unnatural time traces of the volume, namely the oscillation between a short and a long (sub) cycle, have not been observed experimentally and can be prevented by decoupling cell division from replication initiation as described in section S4.2.5. The reinitiation events, which are caused by the excess of initiators after the first initiation event are however not affected by the choice of how the replication and the division cycle are coupled. Because in the long (sub) cycle two initiation events are triggered in rapid succession, the initiation volume per origin flip-flops between a high and a low initiation volume per origin. This causes a dramatic rise of the Coefficient of Variation (CV) in the initiation volume at higher growth rates (see Fig. S3 E). Importantly, the CV becomes much larger than that observed experimentally, even though the system is deterministic and no biochemical noise is present; adding noise would only make the CV even higher. We emphasize that the breakdown of the titration mechanism arises from the different scaling of two timescales with the growth rate: The rate at which the initiator DnaA is synthesized scales with the growth rate, see Eqs. S24 and S25. In contrast, the titration-site formation rate is nearly independent of the growth rate: when the titration sites are homogeneously distributed, as experiments show [51], then the titration-site formation rate per origin is set by the DNA duplication rate, which indeed varies only little with the growth rate [4]. 
The protein synthesis rate thus increases faster with the growth rate than the titration-site formation rate, which means that at sufficiently high growth rates the mechanism fails to sequester DnaA proteins after replication initiation; to a good approximation, this breakdown happens when the system enters the overlapping replication-fork regime with $\tau_{\rm d}\lesssim T_{\rm C}$, because the rate at which titration sites are formed is given by $n_{\rm s}/T_{\rm C}$, while the rate at which proteins are produced right after replication initiation is given by $dN_{\rm p}/dt=\phi_{\rm p}^{0}\,\lambda\,\rho\,V=\lambda\,N_{\rm p}=\ln(2)\,N_{\rm p}/\tau_{\rm d}\simeq\ln(2)\,n_{\rm s}/\tau_{\rm d}$, where we have used that the fraction of initiator equals the gene (ribosome) allocation fraction $\phi_{\rm p}^{0}$ (assuming all proteins are made with the same rate) and right after replication initiation $N_{\rm p}\simeq n_{\rm s}$. Since this prediction follows from the scaling of two timescales, it is robust, i.e. insensitive to the details of the model. Indeed, this prediction is insensitive to how the other key parameters in the AIT model are varied: the number of titration sites per origin $n_{\rm s}$ and their affinity $K_{\rm D}^{\rm s}$. Fig. S3 E shows that exactly the same rise in the CV of the initiation volume is observed for different values of $n_{\rm s}$ and $K_{\rm D}^{\rm s}$, which are varied together to keep the average initiation volume constant and within the range observed experimentally [4]. The fact that the curves nearly fully overlap is because a new replication round is initiated as soon as the eclipse period is over. Naturally, if the affinity of the titration sites located at the origin is higher than the affinity of titration sites at the rest of the chromosome, we can recover the behavior of the inhomogeneous titration site distribution. Interestingly, it had been proposed that the site datA which is located close to the origin has a very high affinity and can titrate large numbers of proteins, of up to 60-370 [96, 97]. These numbers had been inferred indirectly, from experiments that analyzed the de-repression of dnaA or mioC transcription upon introduction of plasmids containing datA sequences [31]. It remained however unclear by which mechanism datA would be able to absorb so many DnaA molecules. The discovery that the site datA can deactivate the initiator protein ATP-DnaA by promoting ATP hydrolysis provides a more likely explanation for this indirect observation [31]. In the original initiator titration paper by Hansen et al. [37], a bias of titrating DnaA boxes towards the oriC region was assumed. Roth and Messer [51] find however that while boxes of the R1 type indeed show such a bias, the high-affinity DnaA boxes show a distribution on the chromosome as random as possible. #### S2.2.5 Growth-rate dependence of the cell cycle in the AIT model In this section, we discuss how the key cell cycle parameters—the initiation time and volume, and the volume at birth and division—vary with the growth rate $\lambda$ in the AIT model. The initiation time is given by $t^{\ast}=\tau_{\rm d}-\tau_{\rm cc}$, where $\tau_{\rm d}=\ln(2)/\lambda$ is the cell division time and $\tau_{\rm cc}$ is the constant time between initiation and division. The volume at birth, $V_{\rm b}$, and the initiation volume $V^{\ast}$ are related via $V^{\ast}=V_{\rm b}e^{\lambda t^{\ast}}$, and the volume at division $V_{\rm d}$ is simply twice the birth volume $V_{\rm b}$. 
The central quantity is thus the total initiation volume $V^{\ast}$, or its value per origin $v^{\ast}=V^{\ast}/n_{\rm ori}$, with $n_{\rm ori}^{\ast}$ the number of origins at initiation: from this and the initiation time, $V_{\rm b}$ and $V_{\rm d}$ follow. To obtain the initiation volume, we exploit that at the moment of replication initiation the free initiator concentration $[p]$ equals the dissociation constant for binding the origin: $[p]=K_{\rm D}^{\rm ori}$. For a given total intiator concentration $[p]_{\rm T}$, we can then combine $[p]=K_{\rm D}^{\rm ori}$ with Eq. S22 to obtain the total titration site concentration $[s]_{\rm T}$. The latter is given by $[s]_{\rm T}=n_{\rm s}/v^{\ast}$, where $n_{\rm s}$ is the known number of titration sites per origin. Hence, for a given $[p]_{\rm T}$ we can obtain the initiation volume $v^{\ast}$ from Eq. S22. To understand how the initiation volume $v^{\ast}$ depends on the total initiator concentration $[p]_{\rm T}$ it is illuminating to consider the limit in which the binding of the initiator proteins to the titration sites is very strong. We connect the critical number of initiators per origin $n^{\ast}$ to the initiation volume per origin $v^{\ast}$ via the total concentration of initiators at the moment of initiation: $[p]_{\rm T}^{\ast}=\frac{N_{\rm p}^{\ast}}{V^{\ast}}=\frac{N_{\rm p}^{\ast}/n_{\rm ori}^{\ast}}{V^{\ast}/{n_{\rm ori}^{\ast}}}=\frac{n^{\ast}}{v^{\ast}}$ (S29) where $N_{\rm p}^{\ast}$ is the total number of initiators at initiation. Hence, $v^{\ast}=\frac{n^{\ast}}{[p]_{\rm T}^{\ast}}$ (S30) In the limit of very tight binding, the critical number of initiators per origin $n^{\ast}$ is set by the fixed number of titration sites per origin, $n^{\ast}\approx n_{\rm s}$, and thus is constant when a new round of replication is initiated in the non-overlapping replication fork regime. Now we see that if the total concentration is maintained approximately constant in time, the initiation volume is also constant and the replication cycle becomes stable. Furthermore, the total concentration could be maintained approximately constant in time for a given growth rate, but vary as a function of the growth
0 Research design study Yiwen Xing, Rita Borgo, and Alfie Abdul-Rahman are with King’s College London. E-mail: [yiwen.xing, rita.borgo, <EMAIL_ADDRESS>Cristina Dondi is with University of Oxford. E-mail<EMAIL_ADDRESS> # Visualizing Historical Book Trade Data: An Iterative Design Study with Close Collaboration with Domain Experts Yiwen Xing0000-0003-1521-6616 Cristina Dondi0000-0001-9478-216X Rita Borgo0000-0003-2875-6793 and Alfie Abdul-Rahman0000-0002-6257-876X ###### Abstract The circulation of historical books has always been an area of interest for historians. However, the data used to represent the journey of a book across different places and times can be difficult for domain experts to digest due to buried geographical and chronological features within text-based presentations. This situation provides an opportunity for collaboration between visualization researchers and historians. This paper describes a design study where a variant of the Nine-Stage Framework [46] was employed to develop a Visual Analytics (VA) tool called _DanteExploreVis_. This tool was designed to aid domain experts in exploring, explaining, and presenting book trade data from multiple perspectives. We discuss the design choices made and how each panel in the interface meets the domain requirements. We also present the results of a qualitative evaluation conducted with domain experts. The main contributions of this paper include: 1) the development of a VA tool to support domain experts in exploring, explaining, and presenting book trade data; 2) a comprehensive documentation of the iterative design, development, and evaluation process following the variant Nine-Stage Framework; 3) a summary of the insights gained and lessons learned from this design study in the context of the humanities field; and 4) reflections on how our approach could be applied in a more generalizable way. ###### keywords: Design study, application motivated visualization, geospatial data Timeline of the iterative design process, showcasing the evolution of interface changes (I1 – I8), notable sketches (S1 – S7) from each iteration, and corresponding stages in the framework variant. This figure provides a visual summary of the design progression and highlights the milestones in the development of _DanteExploreVis_. Introduction The trade and circulation of historical books have long been a matter of interest to historians. A book printed in the 15th-century has bounced around for centuries, moving from country to country, bearing witness to countless histories of the time, and eventually taking on its present form. By examining book trade data, historians can gain insights into the life and trading history of books, encompassing all copies of a specific printed edition of a literary work. This information can offer historians valuable evidence to interpret various historical phenomena and provide fresh perspectives on pressing issues within the discipline. In the big data era, tracking the circulation of books is more manageable, but not for historical books. Historians have spent considerable time collecting and integrating historical book records. By examining features such as manuscript annotations, decorations, and binding styles, historians can trace the spatial and temporal movement of these books. The MEI (Material Evidence in Incunabula) database [14] compiles these fragmented records, offering valuable data for historical book researchers. 
However, the absence of appropriate visualization tools creates difficulties in analyzing and interpreting the large dataset, prompting our design study and collaboration with historians. Prompted by the demand for visualizing book trade data, we initiated an interdisciplinary collaboration with historians in 2021. The _BookTracker_ platform was established to develop visualization tools to address various domain needs. Our previous design study is detailed in [55], while this paper focuses on the development of _DanteExploreVis_. Building on our previous experience, we adapt the core phase of the Nine-Stage Framework [46] to better suit the iterative nature and prioritize the continuous refinement of domain problems and tasks. With the adapted Nine-Stage Framework, we crafted a tool capable of fulfilling domain tasks. Throughout the design process, we maintained frequent communication with domain experts, promptly receiving feedback and evaluations after each implementation. The qualitative evaluations of _DanteExploreVis_ provided positive feedback. In this paper, we present the result of the design study conducted with historical book researchers following a variant of the Nine-Stage Framework. During the iterative process, prototypes were refined and enhanced to better address domain tasks, ultimately leading to the development of _DanteExploreVis_. Our contributions include: * • Completing a six-month iterative design process with the leading domain expert, resulting in the development of _DanteExploreVis_. * • Evaluating the usability and usefulness of _DanteExploreVis_ through surveys and expert interviews with a group of domain experts. * • Adapting the Nine-Stage Framework with a more explicit prominence of its iterative essence. * • Documenting and visualizing the design study process, sharing insights and lessons learned from collaboration with humanities scholars, and discussing the scalability of our approach to wider domains. ## 1 Domain Background and Data ### 1.1 15cBOOKTRADE Project and MEI Database The development of _DanteExploreVis_ is motivated by the growing volume of records in the MEI database [14] and the unmet research needs of the 15cBookTrade Project [21]. 15cBOOKTRADE Project aims to investigate the impact of the European printing revolution (1450-1500) on the development of European civilization during the early modern period. The project utilizes tangible evidence from thousands of surviving 15th-century printed books to address fundamental questions about the dissemination of printing in the West, which had previously remained unanswered due to a lack of evidence and inadequate tools for exploring existing data. The project uses material, documentary, and bibliographical evidence to reconstruct the history of each book, including its provenance or ‘life’ from the time of its printing to its current location [20]. Each piece of evidence is recorded in a separate block of provenance, which is tagged geographically and chronologically. The ‘life’ of a book is therefore represented in MEI records as a sequence of provenance blocks. Mobility is an inherent characteristic of printed books, and the success of the book trade relied on the distribution of hundreds of copies of each edition beyond the place of production. Therefore, studying the book trade is essential to understanding the printing revolution. 
MEI [14] has been developed to provide a physical representation of the circulation of books over time, from their production location to their current locations: the possibility to visualize the movement of books has always been a priority, firstly to advance scholarship by analyzing the extensive data and identifying trends, and secondly, to effectively communicate research findings to the general public. The MEI database is a critical output of the 15cBOOKTRADE Project, which consolidates evidence related to the distribution, sale, acquisition, and use of thousands of surviving 15th-century printed books. ### 1.2 Data Although MEI records are presented in a cataloged format, they are inherently hierarchical (Fig. 1). A specific work of literature can have multiple print editions, distinguished by its unique ISTC (Incunabula Short Title Catalogue) codes. Each print edition may have multiple copies, a copy is assigned a unique MEI ID, and its physical evidence is recorded chronologically through a series of provenance blocks. A set of records entered into MEI for the Polonsky Foundation Dante Project was used in _DanteExploreVis_. An illustrated copy census of all the 173 surviving copies of the first Florentine edition of Dante’s Commedia, printed in 1481 showed that these copies are scattered worldwide. This edition was chosen because it contains the complete copy census on the distribution, use, and survival of copies of an edition. Figure 1: The hierarchical structure of MEI data (top) and the evolution of attributes in provenance instances (bottom). ### 1.3 Previous Work Prior to initiating the design study on _DanteExploreVis_ , a separate design study on the same dataset, addressing different domain problems, was carried out following the ‘Nine-Stage Framework’. This process entailed a close collaboration with the same leading domain expert and led to the development of _DanteSearchVis_ , the inaugural visualization tool under the _BookTracker_ platform for visualizing MEI data. The tool was subsequently tested and evaluated by five additional domain experts for its usability and effectiveness. The insights and reflections gleaned from the previous design study were summarized [55], prompting an adjustment of the framework to better suit our needs. Communication with the leading domain expert persisted following the deployment of _DanteSearchVis_. Feedback and expectations from the five domain experts involved in the evaluation were extensively discussed with the leading domain expert. Both parties concurred on the merit of further visualization research using MEI data and anticipated incorporating new features into the _BookTracker_ platform to fulfill diverse domain requirements. The cross-disciplinary collaboration demonstrated its value and ongoing relevance, warranting continued exploration. ## 2 Related Work ### 2.1 Visualization Design Study Core to visualization design studies is a collaboration between researchers and domain experts to address real-world problems using visualization techniques. The process includes developing a solution to the problem and evaluating it in retrospect. A rich literature exists on models and methodologies for conducting design studies. The Nested Model [38] provides prescriptive guidance for visualization design and validation by organizing the process into four cascading levels. The Nine-Stage Framework [46] provides a structured approach with 9 stages to conduct problem-driven visualization research by working with domain experts. 
Each stage is designed to ensure that the resulting visualizations meet the needs of stakeholders and are appropriately validated at each level of the process. These two methodologies are widely applied to visualization design studies for their generalizability, flexibility, and adaptability [1, 59, 41, 7, 24, 56]. The Design Activity Framework [36] breaks down the design process into four design activities and links them to the four levels of the Nested Model, offering a more flexible structure for iteration. The Design Study ‘Lite’ Methodology [50] is an expedited framework for visualization design research within limited time frames. The notion of the data-first design study and a refinement of the Nine-Stage Framework to fit data-driven visualization research were proposed in [42].

Our review of papers on visualization projects in the humanities found commonalities in the practice of these projects [4, 17, 58, 37, 52]. Combining our own experience working with historical book researchers [55], we consider that the core stages in the Nine-Stage Framework can be adjusted and refined to be better suited and more adaptable to design studies conducted in collaboration with humanities researchers, particularly in terms of improving understanding and communication and obtaining more comprehensive data and problem abstractions.

In the domain of Human-Computer Interaction (HCI), iterative design principles [11, 12] and user-centered design methodologies [18, 2, 9] significantly inform our design study, which places domain experts at its core. By drawing from the widely acknowledged design lifecycle [57, 35] in the HCI and User Experience (UX) realms, we tailored and melded elements such as need-finding, design alternatives, prototyping, and validation into the Nine-Stage Framework [46]. This resulted in a more manifest representation of the iterative design nature subtly embedded in the core stage of the Nine-Stage Framework, thereby bolstering its usability and actionability across a diverse range of domains. To ensure the rigor of the evaluation practice in our study, we have referred to the following evaluation-related works for theoretical support: 1) evaluation of human-centered processes [32, 51], 2) qualitative evaluation methods [27, 19, 28], and 3) usability evaluation [40, 53, 10].

### 2.2 Map-Based Visualizations on Trajectory Data

Consisting mainly of ordered provenance (OD) information, the book trade data is a sequence of spatial points arranged according to timestamps and carries additional textual information. Although it is atypical due to its sparse time points and irregular intervals, it can be considered trajectory data. To gain inspiration for our work, we reviewed papers on map-based visualization of trajectory data. Dynamic vehicle movement tracing [49, 33] and population mobility [45, 48, 34] are hot topics under the umbrella of visualizing trajectories. In a survey paper by He et al. [30], various visual representations and techniques for multivariate spatio-temporal trajectories are compared. Heatmaps [29, 54] are commonly used to represent OD data to show the event volume and density, while force-directed edge-bundling [31] is used as a bundling technique to reduce visual clutter on the map.

### 2.3 Book Trade Related Projects

Previous research that shared common interests with historical book analysis was reviewed. Both Peripleo [47] and ArtVis [22] displayed spatio-temporal historical data using scatterplot-based visualizations.
However, they did not depict sequential paths. Visualizations of the Republic of Letters [16, 23] succeeded in illustrating correspondence circulation, but their primary focus was on representing the overall volume of letter exchanges rather than the narrative of an individual record. Regarding visualizations of book trade data, ‘The Atlas of Early Printing’ [43] utilized a GIS map of Europe to illustrate the spread of printing and typography, trade routes, and other historical data. The ‘MEI Map Current’ [15] displayed the distribution of the locations where the books are held today and was linked to the MEI database. Currently, no tools concentrate on both the distribution and the movements of books; this therefore became the emphasis of our work.

## 3 Design Study Overview

Collaborating with historians highlights unique aspects, including the need for clear communication of academic terminology, the potential for humanities researchers’ divergent thinking to affect project direction, and the evolving perception of the data when using visualization tools, which may present new research opportunities. Although the Nine-Stage Framework serves as a valuable design study methodology and offers general guidance, its highly structured and repetitive reflective loop may be inefficient or lack direction in specific contexts. Consequently, we recommend adapting its core phase with iterative design cycles to ensure thorough evaluation and reflection while maintaining consistent interactions with humanities experts. We refer to the leading domain expert as “LDE” and to the larger group of domain experts as “DEs”.

### 3.1 Variant of the Nine-Stage Framework

The adapted framework maintains the precondition and analysis phases of the Nine-Stage Framework while restructuring the core into iterative cycles with modified objectives. We tailored the core phase (discover, design, implement, deploy) to our context, retaining the first three stages in the development cycles with weekly meetings as iteration nodes. We included validate in each iteration for controllable evaluation and regular communication with DEs.

Discover: In addressing the challenge of an early Discover stage for problem abstraction [46], our adapted framework places Discover at the beginning of each iteration cycle to: 1) refine domain problems, 2) increase interaction with DEs, and 3) expand information collection channels beyond conversation. We observed that DEs’ ability to ask pertinent questions depends on their understanding of and familiarity with the data. For example, historians with limited quantitative analysis experience and reliance on textual data may struggle to identify deeper concerns as data volume grows. We moved data abstraction from the design to the discovery stage, enabling DEs to improve their domain data knowledge with assistance and visualization tools at each iteration’s onset.

Design: In the adapted framework’s iteration cycle, we revised the design stage, emphasizing task abstraction as well as visual and interaction design. We integrate task abstraction approaches [6, 7, 39] into the Nine-Stage Framework. DEs often have specific design expectations (e.g., our LDE desired a 2D geopolitical map with all elements); we recommend first summarizing their anticipated visualizations and then adjusting or proposing alternatives from visualization experts’ perspectives. We then present them with multiple design options using real data for testing and implement the highest-rated design in the final tool.
Implement: In the adapted framework, we suggest using rapid prototyping with real data to explore the design and refine the early versions of the tools. As iterations are marked by regular meetings with the LDE, we recommend parallel prototyping within each iteration, simultaneously presenting design alternatives for various components for evaluation in the subsequent validation stage.

Validate & Evaluation: In the adapted framework, we split user evaluation into 1) informal evaluations during discussions with the LDE, and 2) systematic evaluations involving a larger group of DEs prior to deployment. The former, termed validate, is incorporated within the iteration cycle to assess the usefulness of features in early versions. During validation we asked questions such as “Do the current visualizations address domain requirements?” and “Which design alternatives are most effective?”. Usability feedback is collected but not prioritized. The latter, labeled evaluation, occurs after the final iteration, outside the iteration cycle, and before the tool deployment. This comprehensive evaluation focuses on usability and usefulness, engaging a broader user group to ensure unbiased results.

Taking cues from established iterative prototyping and user-centered design strategies common in HCI, we modified the Nine-Stage Framework to accentuate the importance of explicit iteration in its core phase and the necessity of significant involvement of domain experts, especially during the domain data, problem, and task abstraction stages. Although the iterative essence exists subtly in the original framework, we have given it a more explicit prominence in our adaptation, thereby amplifying its actionability and broadening its applicability. This enhancement strengthens the framework’s consistency and usability across multiple domains, signifying its domain-agnostic qualities.

### 3.2 Design Study Following the Framework Variant

Before initiating the design study for _DanteExploreVis_, we already had collaborations with historical book researchers to develop _BookTracker_. The established workflow allowed us to swiftly progress through the precondition phase. The design study began in May 2022 and lasted six months. In the first two weeks, we clarified domain problems and data sources and set a timeline with the LDE. Between June and August, we completed seven design iterations, resulting in _DanteExploreVis_. Subsequently, a two-month evaluation period included surveys and expert interviews to assess the tool’s usability and usefulness. Finally, we documented our findings and reflections in the analysis phase, presented in this paper. The teaser figure illustrates the timeline, interface evolution, and key issues discussed in each iteration.

## 4 Domain Problems and Tasks

In this section, we present abstractions from the design study, encompassing requirements, data, and tasks. We initially identified six domain requirements and captured evolving requirements during the design process. We also outline five essential data modifications and additions. Furthermore, we analyze each domain requirement, both original and derived, associating them with low-level tasks.

### 4.1 Problem Abstraction and Requirement Analysis

In the precondition stage, the LDE described their research objectives and the historians’ research flow using MEI.
She emphasized that users rely on MEI to formulate initial queries related to the early distribution, later survival, and circulation of knowledge in 15th-century printed books. They also sought to uncover patterns of book ownership over time, such as institutions or individuals as well as entangled histories and shared heritage, revealed through book ownership. MEI provides basic query and display features, allowing users to retrieve book editions using ISTC numbers. The catalog-style listing and separate pages for textual provenance information are suitable for close reading but lack overview and exploratory analysis. The temporal and spatial aspects of the provenance data are not well presented too. In response to visualization needs, together with the LDE, we documented the problems that could not be directly addressed by MEI: * • Search for books with similar features: Which copies were printed in Italy and used in Germany? * • Visualize the circulation of one edition intuitively: Can I see the circulation of all surviving copies with their provenance history of Dante’s Commedia printed in Florence in 1481? * • Insights from the provenance: Can we extract more insightful summaries from the valuable provenance data in MEI? From communications with the DEs, we distilled six main requirements and listed them along with derived requirements from iteration cycles. R1 Visualize the trajectory of a single copy. The data contains valuable spatio- temporal features not represented in the MEI interface. DEs anticipate that the visualization tool will intuitively display the transfer trajectory of each copy. During the design process, three additional requirements were derived: R1.1 Show an overview of full journey and provenance breakdown. R1.2 Animate the provenance-by-provenance movement. R1.3 Establish connection to the MEI page of each copy. R2 Visualize the circulation of a group of copies. Copies with shared geo- features are often of research significance, such as those that have been transferred from Italy to the United States, potentially indicating important historical events in the printing revolution. Since MEI can only list individual copies, the visualization tool should display groups of copies with common characteristics, such as having similar transfer experiences or being stored at the same location, in a single view for deeper insights. Derived requirements from this one include: R2.1 Enable users to identify copies with shared characteristics. R2.2 Offer insights into the movement of groups of copies. R3 Present the distribution and circulation of all copies on a geopolitical map. The circulation of an edition is formed by the distribution of all its copies. Understanding the geographical distribution of the edition throughout its circulation can help shed light on historical issues. The DEs stress the significance of visualizing the trajectories of all copies on a geopolitical map. The derived requirements include: R3.1 Gaining insights on path density and clustering. R3.2 Capability to overview individual instances. R4 Static visualization of provenance-related information. The primary value of the MEI lies in the reconstruction of provenance, which is presented in a limited way by the MEI interface. An overview of all crucial aspects of the provenance instances, such as the time period and location of stay, for each copy was desired. 
Simultaneously, due to their evolving comprehension of the data, they wish to indicate the data completeness of elements within each provenance, facilitating future data entry and revision efforts. Derived requirements include: R4.1 Visualize data uncertainty (completeness) for each provenance. R4.2 Provide general statistics on the number of provenances. R5 Animate book movement. DEs wish to view book transfers in animation, particularly for presentation and enjoyment purposes. They find that animated presentations facilitate their understanding of event chronology. A requirement associated with animation is: R5.1 Provide different ways to play the animation. R6 Gain insights from features extracted from provenance-related information. DEs take pride in the reconstruction of provenance for each copy and believe that the data can offer far more insight than the current MEI interface. They hope the visualization tool can characterize the dataset at a higher level of abstraction and enable them to examine the dataset from various perspectives. Two requirements are raised: R6.1 Examine the dataset from different angles. R6.2 Expect extracted features to be used as query entries.

### 4.2 Data Abstraction

Figure 2: Encode the data to obtain the subset with critical features.

The MEI data (in Fig. 1 (top)) can be interpreted as a hierarchical structure. The outermost layer is the database, which stores numerous literary works as book nodes. Editions throughout history become sub-nodes, identified by ISTC codes. Each edition produces printed copies, assigned unique MEI IDs, serving as the lowest-level nodes. Ordered provenance instances in each copy, which DEs seek to visualize and analyze, differentiate the MEI data from other historical book datasets. We divided the textual data inside each provenance instance into two categories: conclusive data (time and location data used to present the status of the provenance) and evidential data (marginal annotations, binding styles, etc.) used for obtaining and confirming the conclusive data. From discussions with the LDE, geographical and chronological information related to the book’s provenance was prioritized for visualization.

Adhering to the framework variant, we incorporate data discovery at the beginning of each iteration, continuously performing data abstraction throughout the design study, and seizing every opportunity for DEs to reflect on the data and generate new requirements. We expanded the existing MEI data to better address domain needs. We outline five manipulations made during the iterations, which can be regarded as milestones propelling the progress of the design study:

D1 Addition of Geographical Coordinates: Building on our past collaboration with the DEs, we reached a consensus that the tool should concentrate on displaying book transfer paths on a geopolitical map. Following the acquisition of raw JSON data from the MEI database, our initial data processing step entailed converting all area and place names into corresponding geographical latitude and longitude coordinates, facilitating their visualization on the map.

D2 Alternative Presentation of Chronology: The LDE emphasized MEI’s uniqueness in aggregating and integrating each book’s provenance across different time periods. Provenance instances are linked chronologically, forming the book’s historical circulation trajectory. In addition to displaying each provenance location on the map, we aimed to visualize the transfer path, direction, and sequence.
From the raw data, we obtained each provenance’s time period (in years). Ideally, the current transfer’s start and end times should match the end time of the current provenance and the start time of the next one. We separated each provenance’s time period into start and end times, then reconstructed them to obtain each transfer’s start and end times. The symbolic representation is as follows: For the $i^{th}$ copy $C_{i}$, let $P_{i}=\{p_{1},p_{2},\dots,p_{n}\}$ be the set of provenance instances, where $n$ is their total number. $T_{i}=\{t_{1},t_{2},\dots,t_{n-1}\}$ denotes transfers between consecutive provenance instances. For the $j^{th}$ provenance, the time range is $[p^{j}_{start},p^{j}_{end}]$, so the start and end time pair for the $j^{th}$ transfer can be denoted as $[t^{j}_{start},t^{j}_{end}]=[p^{j}_{end},p^{j+1}_{start}]$. After splitting the time periods, we encountered issues constructing a coherent time series for book transfers and stays. Transfers with end time earlier than start time ($t^{j}_{end}<t^{j}_{start}$) indicated numerous provenance instances in the original data with $p^{j+1}_{start}<p^{j}_{end}$. For example, if provenance $j$ spans 1481–1500 and provenance $j+1$ spans 1495–1520, the derived transfer interval is $[1500,1495]$, which is inconsistent. The main reason was the imprecision of provenance time periods in MEI data. We identified two main data sources: 1) precisely recorded data from libraries or purchase records, and 2) inferences by historians based on material evidence. After consulting with the LDE, we agreed that improving accuracy would be difficult. Instead, we decided to use the order statistic $j$ as the relative time of transfer occurrence (illustrated in the code sketch below).

D3 Addressing Data Uncertainty: We discovered that ambiguity exists not only in temporal features but also in geographical information for each provenance instance. Similarly to the temporal aspect, some provenance instances lack clearly documented locations, and historians may or may not estimate the location roughly based on existing material evidence. Here, we introduced three additional attributes using three data completeness levels to indicate the uncertainty of start time, end time, and location. The completeness levels are: 1) Accurate – Clear recorded information is available; 2) Approximate – Estimation based on the material evidence; 3) Missing – No obvious evidence or information is available.

D4 Flattening Provenance Blocks: The sequenced blocks, confined in a hierarchical structure, obscure non-temporal features. To address this limitation, we proposed a de-hierarchized and de-ordered approach (Fig. 2), allowing DEs to analyze each provenance or transfer event individually. The deconstructed data is clustered based on shared attributes, such as transfer location, and subsequently reorganized to create a subset of MEI records conforming to specific criteria.

D5 Bundled Path Coordinates: In designing the Single-Static Storyboard (Fig. 3 d2), we faced the challenge of visual clutter when displaying all paths as straight lines on the map. To present all paths while minimizing visual clutter, we implemented the edge bundling approach and employed the force-directed edge bundling algorithm [31] to generate a list of coordinates for curved lines, effectively replacing the initial straight lines for each transfer trajectory.

### 4.3 Task Abstraction

Task abstraction follows requirement and data abstraction while maintaining a strong connection to subsequent designs. In line with the adapted framework, we conducted task analysis during each design iteration, particularly when new discoveries emerged.
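As a concrete reference point for the task analysis, the following TypeScript sketch illustrates how the D2 and D3 manipulations described above reshape each record: transfer intervals are derived from consecutive provenance periods, inconsistent intervals are flagged, and each attribute receives a completeness level. The type and field names are assumptions for illustration only, not the project’s actual preprocessing code.

```ts
// Illustrative sketch of data manipulations D2 and D3; not the actual schema.
type Completeness = 'Accurate' | 'Approximate' | 'Missing';

interface Provenance {
  place: string | null;       // null when no location is documented
  startYear: number | null;   // null when no start date is documented
  endYear: number | null;
}

interface Transfer {
  order: number;              // order statistic j, used as relative time (D2)
  from: string | null;
  to: string | null;
  startYear: number | null;   // = end of provenance j
  endYear: number | null;     // = start of provenance j+1
  consistent: boolean;        // false when the derived interval is reversed
}

// D3: map a raw value to one of the three completeness levels. Whether a
// value is an estimate comes from the cataloguer's annotation (assumed flag).
function completeness(value: number | string | null, estimated: boolean): Completeness {
  if (value === null) return 'Missing';
  return estimated ? 'Approximate' : 'Accurate';
}

// D2: derive transfer intervals from consecutive provenance periods.
function deriveTransfers(provenances: Provenance[]): Transfer[] {
  const transfers: Transfer[] = [];
  for (let j = 0; j < provenances.length - 1; j++) {
    const start = provenances[j].endYear;      // t^j_start = p^j_end
    const end = provenances[j + 1].startYear;  // t^j_end   = p^{j+1}_start
    transfers.push({
      order: j + 1,
      from: provenances[j].place,
      to: provenances[j + 1].place,
      startYear: start,
      endYear: end,
      consistent: start !== null && end !== null && end >= start,
    });
  }
  return transfers;
}
```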
We identified three main objectives derived from domain requirements: 1) Explanatory Analysis: presenting data directly and intuitively, enabling users to uncover the story behind each copy and provenance record; 2) Exploratory Analysis: exploring data to gain insights and deeper understanding through visualizations; 3) Presenting & Enjoying: displaying data for audiences or deriving pleasure from visualization results. Drawing from the approach to task abstraction in [6, 8, 39], we further refined the high-level domain problems into lower-level tasks based on the purpose of each requirement. T1 & T2 stemmed from the need for exploratory and explanatory analysis, respectively. T3 & T4 were intermediate-level tasks for in-depth exploration and explanation of the data of interest after the initial exploration phase, while T5 addressed the presenting and enjoying purpose. The set of design tasks crafted to fulfill the domain requirements includes:

T1 Browse & Explore: For R1.1, R2.2, R3, R3.1, R4, R4.1, R4.2, R6, R6.1. The MEI interface provides extensive textual information for each provenance, but lacks a way to browse the entire dataset at a glance. Offering multiple exploration methods through various entry points and data layers can reveal diverse insights and is crucial.

T2 Elaborate & Explain: For R1, R2. The textual data in MEI are insufficient to present data with temporal and spatial features. Map-based visualizations are necessary to elaborate and interpret the data.

T3 Lookup & Locate: For R1.3, R6.2. When searching for a specific target, such as a copy with a known ID, the tool should offer functionalities to look up and locate the target for further exploration.

T4 Identify & Compare: For R2, R3. When examining multiple copies, the tool should enable users to identify historical phenomena and points of interest, and to compare the movements of different copies, observing similarities and dissimilarities.

T5 Present & Enjoy: For R1.2, R3.2, R5, R5.1. To enhance the audience’s perception and sense of immersion in terms of presentation and enjoyment, the tool needs to add animation to the static visualization of the book’s movement.

## 5 Iterative Design Process

In this section, we briefly discuss the iterative design process in accordance with the framework variant, providing an explanation for the timeline shown in the teaser figure, while emphasizing the iterative evaluation-based improvement. A more detailed account is available in the supplementary material.

Precondition: 30 May – 23 Jun. Potential developments for visualizing the MEI data were discussed in an initial meeting with the LDE. We identified four unmet requirements: 1) presenting the trajectory of each copy individually, 2) visualizing and comparing the circulation of multiple copies simultaneously, 3) exploring and gaining insights from provenances, and 4) implementing animations to enhance the understanding of provenance sequence.

Iteration 1: 23 Jun – 05 Jul. Discover: Grounded in domain challenges and expectations from the precondition phase, six domain requirements (R1-R6) were established. Geocoding locations (D1) and splitting provenance periods (D2) were discussed. However, data uncertainty posed challenges. Design & Implement: Focusing on explanatory and exploratory analysis of provenance data, domain requirements were transformed into lower-level visualization tasks.
Three solutions for visualizing ordered paths on a map were proposed: 1) multiple views showing path development over time, 2) gradient color rendering of paths, and 3) path animation. For data exploration, feature aggregation was suggested. Design ideas were documented in the teaser figure (I1), emphasizing ideation. Validate: Ideas were discussed with the LDE, who found all path visualization methods useful in different contexts. The expert questioned the feature aggregation proposal, but was open to seeing a more concrete demo before deciding. Data uncertainty was acknowledged as inevitable and the reasons behind it were explained.

Iteration 2: 05 Jul – 26 Jul. Discover: Focused on providing an overview and insights (R6), this iteration proposed simplifying trajectory data to origin-destination data and handling incomplete geographical and temporal information. Order statistics were suggested as an alternative way to show sequence (D2). Design & Implement: The interface was tentatively divided into two parts. The left view provided exploratory analysis using heatmaps, while the right offered explanatory analysis based on geopolitical maps. A simple, real-data-based demo was implemented (teaser figure, I2 & I3). Validate: The LDE found the heatmap design for feature aggregation more useful than anticipated: it was comprehensible and effectively emphasized data traits. She acknowledged the value of breaking down the provenances (D4) and the possibility of presenting them in heatmaps to identify the location and temporal characteristics. She agreed with the redefinition of transfer and the use of order statistics instead of the year (D2), stressing the need for a full presentation of geographical locations and provenance occurrence order.

Iteration 3: 26 Jul – 01 Aug. Discover: Different levels of data completeness were discussed (D3). The refined data structure with additional attributes is shown in Fig. 1 (bottom). The LDE’s interest in provenance data completeness (uncertainty) resulted in new domain requirements (R4.1 & R4.2). Regarding exploratory visualizations, R6.1 was formulated. To visualize individual copy trajectories (R1), we expanded the requirements based on the LDE’s suggestions, adding R1.1, R1.2, and R1.3. Design & Implement: To address requirements R1, R2, and R6, we proposed three explanatory storyboards to cater to different requirements: 1) the Multi-Static Storyboard (or MSS) for individual provenances through multiple concatenated static views, 2) the Single-Static Storyboard (SSS) for a single static view of multiple copy trajectories, and 3) the Single-Dynamic Storyboard (SDS) for presenting the journey of multiple copies in animation. For the MSS, an overview and step-by-step sequence visualizations were implemented for each copy. The overview included a radar chart to display data completeness and a map to demonstrate the complete trajectory of a copy. Two additional heatmaps were implemented to satisfy R6.1. Validate: The LDE found the MSS to be valuable and appreciated both the overview and step-by-step sequence visualizations. She suggested improving the design by connecting the overview and step-by-step sequence maps, allowing users to easily pinpoint meaningful provenances.
She also expressed that the radar chart can effectively display provenance information and data uncertainty, but requested the addition of uncertainty hints in the step-by-step sequence map.

Iteration 4: 01 Aug – 08 Aug. Discover: The MSS was well-received but needed improvements in interactivity and view associations. Key issues to address included overlapping provenances, chronological representation, associations between overview and step-by-step maps, MEI database linking, highlighting similar transfer instances, and adding textual descriptions. Design & Implement: To better illustrate the path sequence, several designs were proposed: 1) a conical gradient circle glyph indicating the total number of provenances and the current position in the timeline, 2) a secondary view showing a horizontal journey of provenances over time, and 3) animations added to the overview map. We implemented the proposed designs, including a horizontal journey with clickable circles connected to corresponding provenances in the sequenced maps, and emphasized line segments for identifying and comparing similar transfers. The animation was added to the overview map, and donut glyphs displayed the data uncertainty on the sequenced maps. Links to MEI record pages were also provided. Validate: The LDE appreciated the horizontal journey visualization and the direct link to the individual MEI record pages. Animations enhanced the transfer visualization experience. The expert preferred simpler monochrome fills for points on the overview map but suggested retaining both designs (teaser figure, S4) for user preference testing. With the single-copy presentation exceeding expectations, the focus shifted to visualizing multiple copies (R2 and R3).

Iteration 5: 08 Aug – 15 Aug. Discover: Regarding presenting multiple trajectories simultaneously (R2 & R3), additional domain requirements were identified by the LDE: visualizing distribution and aggregation patterns (R3.1), and identifying trajectories from the map (R3.2). These requirements were associated with the development of the SSS. However, displaying all paths on a single map led to visual clutter, so we incorporated edge bundling to improve path aggregation and minimize clutter (D5). Design & Implement: Bundled paths were proposed to decrease visual clutter and emphasize clustering, but the degree of bundling needed to be evaluated. To identify single copies, we designed interactions with the information panel, allowing users to highlight and compare multiple trajectories. We implemented these features. We generated five visualizations using edge bundling with different parameters ranging from minimum to maximum, for the LDE to test (teaser figure, S5). Validate: The LDE valued the SSS and its ability to highlight individual or multiple copies. Initially, she favored straight paths over bundled curved ones, but as she interacted with the prototype, she began to appreciate the advantages of bundling in displaying aggregates. Ultimately, we chose to retain both straight and bundled paths, leaving the bundling degree for broader evaluation.

Iteration 6: 15 Aug – 22 Aug. Discover: With most visualization features implemented, the LDE focused on more practical domain needs – the query methods. The heatmap, while informative, was not familiar to DEs for searching and querying.
Requirements for other query methods, such as multi-entry and single-entry searches, were mentioned (R2.1 & R2.2). Design & Implement: Following discussions on DEs’ familiar search methods and specific issues, we incorporated two designs into the query section as separate tabs: 1) searching by setting print and current locations; and 2) searching by individual MEI ID. The drop-down menus are designed to input query constraints. The initial SDS was developed, with moving markers showing book transfers. Validate: The LDE appreciated the two new convenient search methods with drop- down inputs and was delighted by the animated transfer histories on a single map. She then recommended two distinct playback sequences to display the animation of multiple copy transfers. Iteration 7: 22 Aug – 29 Aug. Discover: We further refined animation-related designs after discussing them with the LDE. She suggested two ways to present multiple trajectories: simultaneously or sequentially. We labeled this as new derived requirements R5.1. Furthermore, we needed to address the identification issue for multiple paths in the SDS. Design & Implement: As requested, we designed and implemented a sequential playing mode in which only one moving marker appears at a time, while completed trajectories stay on the map in different colors. To address the issue of identifying individual copies, we proposed and implemented clickable moving markers that display copy IDs and provide links to their respective MEI pages. Validate: All requirements and derived needs were met, and the LDE was satisfied with the tool, suggesting no further improvements. Evaluation & Deployment: 29 Aug – 29 Nov. The tool was further evaluated by a larger group of DEs to mitigate the potential bias of the LDE, as detailed in Section 7. Analysis: 29 Nov – 31 Dec. After the evaluation and deployment, the outcomes were discussed with the LDE. The design study concluded with a summary of crucial aspects and lessons learned. ## 6 DanteExploreVis Figure 3: The interface of DanteExploreVis, highlighting its key components: (A) and (B) function as query panels, offering various querying methods and incorporating heatmaps for enhanced data exploration; (C) is the information panel, presenting pertinent data; (D) features three interactive storyboards for examining the trajectories, patterns, and distributions within the data sourced from the MEI database. The _DanteExploreVis_ has been developed to enable DEs to explain, explore, and present MEI data. The interface is organized into four panels (Fig. 3). Panels A and B present data through heatmaps, facilitating data exploration, global observation, and data search. Panel D serves as the primary data observation panel, offering map-based visualizations to display book movement and provenance information from various angles. The information panel C presents the search results from Panels A and B, allowing users to highlight and identify specific records within Panel D. The tool is implemented using d3.js [5] and Leaflet [3]. ### 6.1 Query Panels with Exploratory Heatmaps The Query Panels (A & B) allow users to perform initial data exploration and make queries. Different types of queries are enabled to obtain the desired MEI records and related visualizations: Instances Heatmap Query Panel (a1) is based on the flattened transfer data in D4. It focuses on capturing the frequency and distribution of pathway locations and visualizes transfers as individual units rather than as a child layer of a single copy. 
The heatmap matrix is constructed using the origin- destination of the flattened transfers. Users can select a specific start and end country pair, and click on the corresponding cell to get all the copy records with one or more transfers that meet the query requirement. Two ordering methods are supported: 1) frequency-based ordering enables the user to detect regions where most transfers occurred; 2) alphabetical ordering of country names provides convenience for the user to identify the country and make a selection, serving a better querying role. Hover-on functions are available for each cell, providing a detailed textual description of each cell’s data. Full Journey Query Panel (a2) is designed to search for copy records with known destinations. The copy serves as the basic unit, and the location of the last provenance is the key feature for searching. Given a location pair (i.e., the copy was printed and its current location), the output will be all copy records printed in the same place, traveled for centuries, and eventually ended up in the same destination. MEI ID Search (a3) is designed to search specific records with known IDs, with the corresponding visualizations show up immediately. Time & Location Heatmaps (b1 & b2) serve mostly as exploration tools, providing both chronological and geographical overviews of the dataset. Each row in the heatmap represents an individual copy. A cell in the heatmaps can serve as a query trigger to return the corresponding MEI record. Hover-on functions for detailed information are also supported. The time-focused heatmap (b1) sorts and counts all provenance instances contained in a copy according to the corresponding historical period. Users can easily identify periods with frequent book transfers. The location heatmap takes the spatial feature of each provenance into account. Every cell depicts how many times a book has stayed in a country, allowing users to easily obtain information about the most popular countries with a high number of book transfers and stays. ### 6.2 Information Panel The Information Panel (C) acts as connecting component for the entire tool. It links data exploration with data interpretation and bridges the functions of distant and close reading of the data. The information panel displays: 1) in c1, the user-selected query category, the detailed query content, and general statistics of the results, and 2) in c2, a list of the buttons with MEI IDs of the selected copies. The buttons act as the entry point to the three storyboards for close reading and exploration of the data. Clicking a button anchors the visualization of the selected copy in the MSS (d1) and highlights its full path in the SSS (d2). ### 6.3 Storyboards Storyboards (D) are the primary feature supporting data exploration and explanation through multiple functionalities. Three different storyboards have been developed, using a combination of static and dynamic visualization techniques to effectively represent the movement history of copies both temporally and spatially, catering to the needs of DEs. Multi-Static Storyboard (d1), or MSS, emphasizes visualizing individual copy records. Each copy’s visualizations are organized in a row, enabling easy navigation and comparison between copies. Visualizations for each copy are divided into two groups: an overview and support of close-reading analysis. 
Three components are designed to overview the data: 1) the radar chart (d1.1) displays the data completeness levels (D3), allowing users to quickly gauge the number of provenances contained in a copy and the accuracy of the data (start and end time, and location of stay) for each provenance; 2) the full journey map (d1.3) visualizes the provenances and transfers of a copy on a geopolitical map using circle markers and arcs with gradient colors to show the progression of time. Zooming in and trajectory animation are supported on the map; 3) the horizontal journey (d1.2) compresses the geographic location dimension from 2D to 1D and focuses on displaying the progression over time. This element also acts as a connection to the step-by-step sequence maps (d1.4). By clicking on a circle marker, users are directed to the corresponding detailed map. When employing the Instances Heatmap (a1) for queries, the queried transfer will be highlighted in the horizontal journey.

The step-by-step sequence maps (d1.4) are designed for the close reading of each transfer, consisting of a cohesive series of geopolitical maps that present the specifics of every transfer. To maintain visual consistency, circle markers and polylines with gradient colors are employed to depict the sequence of provenances and transfers. Each map in the series focuses on the two most recent points, ensuring users can access granular geographic information for every transfer and easily determine if the subsequent provenance stays in the same location. A donut chart glyph, with colors consistent with the radar chart, is integrated into each circle marker to indicate data completeness for the provenance. The Ant Path animation [13] highlights the most recent movement path and its direction.

Single-Static Storyboard (d2), or SSS, is designed to display the trajectories of all copies on one map. To mitigate visual clutter, particularly with large datasets, users can opt for bundled paths using the force-directed bundling algorithm [31]. The straight path employs line segments to intuitively represent locations and routes, enabling users to grasp location and path distribution. In contrast, the bundled path conveys similar data but diminishes visual clutter and facilitates the detection of high-frequency routes and locations associated with numerous transfers and stays. By clicking the buttons in the information panel (c2), users can highlight trajectories related to specific copies using Ant Path animation, making it easier to identify individual copy trajectories and movement directions of particular paths.

Single-Dynamic Storyboard (d3), or SDS, is designed to display the circulation of selected copies through animation on a single map. The visualization animates each copy’s movements, illustrating the sequential path and direction for every segment. Two animation modes are available: 1) all-at-once animation begins movement for all selected copies simultaneously upon clicking the play button, and 2) one-by-one animation starts each copy’s animation sequentially, using distinct colors for differentiation. Users can start, pause, resume, or stop the animation. A reverse ID lookup function is provided for each trajectory. In the all-at-once mode, a blue pop-up button is linked with each moving marker, allowing users to identify the MEI ID of the specific copy; clicking on the button redirects users to its MEI record page.
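To make the one-by-one playback idea concrete, the sketch below shows one way such an animation could be wired up with Leaflet, which the tool builds on. The data shape, the function name playOneByOne, and the meiUrl field are illustrative assumptions, not the tool’s actual code; the real implementation also covers pause, resume, stop, and the all-at-once mode.

```ts
import * as L from 'leaflet';

// Hypothetical data shape; meiUrl and the function names are illustrative.
interface CopyTrajectory {
  meiId: string;
  meiUrl: string;              // link to the copy's MEI record page
  path: [number, number][];    // ordered [lat, lng] provenance locations
}

const PALETTE = ['#1f77b4', '#ff7f0e', '#2ca02c', '#d62728'];

// One-by-one playback: animate one copy at a time; completed trajectories
// stay on the map in their own color.
function playOneByOne(map: L.Map, copies: CopyTrajectory[], stepMs = 800): void {
  const playCopy = (copy: CopyTrajectory, color: string, done: () => void) => {
    const marker = L.circleMarker(copy.path[0], { radius: 6, color }).addTo(map);
    // Clicking the moving marker reveals the copy's MEI ID and links back
    // to its MEI record page (reverse ID lookup).
    marker.bindPopup(`<a href="${copy.meiUrl}" target="_blank">${copy.meiId}</a>`);
    let i = 0;
    const timer = window.setInterval(() => {
      i += 1;
      if (i >= copy.path.length) {
        window.clearInterval(timer);
        done();
        return;
      }
      // Draw the completed segment so finished parts of the journey persist.
      L.polyline([copy.path[i - 1], copy.path[i]], { color }).addTo(map);
      marker.setLatLng(copy.path[i]);
    }, stepMs);
  };

  let idx = 0;
  const next = () => {
    if (idx >= copies.length) return;
    const color = PALETTE[idx % PALETTE.length];
    const copy = copies[idx];
    idx += 1;
    playCopy(copy, color, next); // start the next copy once this one finishes
  };
  next();
}
```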
## 7 Evaluation

Figure 4: Results of participants’ ranking of the six glyph designs (top) and statistics of the Likert scale question responses (bottom).

Multiple evaluation methods were employed throughout the design study. During iterations, we refined the design via feedback from the LDE. The validations in each iteration are detailed in Section 5. In this section, we discuss the evaluation involving a broader group of DEs after completing most of the design. The evaluation involved a user survey and expert interviews. The objectives encompassed verifying the representativeness of the summarized requirements among DEs, investigating the acceptance of various design alternatives, and assessing the effectiveness of the tool in addressing domain requirements.

### 7.1 User Survey

The user survey aimed to: 1) verify if the majority of domain users concur with the distilled domain requirements obtained through discussions with the LDE, 2) gather feedback on design alternatives for specific tool components, 3) collect quantitative data on the tool’s usability and usefulness, and 4) identify potential experts for follow-up studies. Given that _DanteExploreVis_ targets historians using the MEI database as their primary research data, we obtained a mailing list of historical book researchers from the LDE and used Expert Sampling [26, 25] for participant recruitment. The survey was conducted using Qualtrics [44]. After the pilot tool went live, we distributed a questionnaire to a DE mailing list. The detailed questionnaire is provided in the supplementary material. Out of 45 responses received, 40 were from domain users working in fields related to historical book trading and using MEI data. Among these participants, 33% had used visualization tools in their work, and 93% believed that visualization tools could support their research and expressed interest in using them. In summary, our survey findings are as follows:

Domain requirements are representative. We involved the LDE, who is the creator of MEI and has experience in medieval book research and teaching, throughout the design study. However, consulting a wider range of DEs is crucial to ensure rigor and verify that the summarized requirements accurately represent the domain’s needs. The survey provided confirmatory results: 90% of the participants think it would be helpful to visualize the full circulation of a single book and a group of books on a geopolitical map (R1, R2, R3). 85% of them would like to see an animation showing the transfer of books on the map (R5). 83% would like to see the features extracted from the transfer instances in a single visualization (R6).

Keep designs simple and connected. During the design process, most design decisions were made in agreement with the LDE, but some remained unresolved. These primarily focused on visualizing the chronological order of circles on the Full Journey map in the MSS. We suggested two gradient-based circular filling options to represent progress: 1) a single color from the gradient color group, and 2) the entire gradient color range for conical filling. For each method, we provided three options to highlight the current stage. Survey feedback (Fig. 4 top) aligned with the LDE’s opinion. Single-color filling ranked first. The feedback indicated that simplicity and comprehensibility were more important than displaying more information. Conical filling for small circles sometimes distracts from the overall map view.
Connections to other visualizations can compensate for the concealed information when using single-color filled circles.

The tool is useful. Concerning the domain requirements, we examined the tool’s usefulness. The 5-point Likert scale ratings indicated positive outcomes (Fig. 4 bottom).

### 7.2 Expert Interview

The features in _DanteExploreVis_ were further refined and improved based on survey feedback, and the tool was then deployed. We evaluated the tool’s effectiveness and usability by conducting expert interviews along with the think-aloud protocol. We observed DEs’ interaction with the tool and performed qualitative analysis using transcripts.

#### 7.2.1 Methodology

We interviewed four DEs recruited from the survey. Each interview lasted between 60 and 90 minutes. All participants have used MEI for research for more than two years. P2 (2 years) is a PhD student involved in the Dante Project copy census; P3 (6 years) is an initial builder of the MEI database, contributing to record expansion and verification. P1 (5 years) and P4 (2 years) also use MEI for teaching. Each interview combined think-aloud evaluation and reflective discussion to obtain multifaceted insights into user interaction. It started with a demo, after which participants shared their screens and thought aloud during the prepared activities. They first explored the tool, then completed tasks based on pre-collected domain requirements. The interview ended with discussions on tool usability, effectiveness, and potential development.

#### 7.2.2 Results

Reflection on Usability: Drawing on Nielsen’s five usability indicators [40], feedback from DEs indicated good levels of satisfaction and efficiency, while memorability was not the primary evaluation focus. Concerning learnability and low error rates, most participants quickly understood and correctly used various features, though some initially struggled with heatmaps. P1 and P4 had questions about cell meanings in the Instances Heatmap (Fig. 3 a1). They found it challenging to view provenances and transfers as separate entities, regardless of the associated copy. After guidance, they recognized its value in providing an overview and highlighting unique traits. P1, P3, and P4 commented on the small font size of axis labels. To enhance heatmap usability, we added more detailed hover-over descriptions and increased the font size.

Reflection on Usefulness: Usefulness was evaluated based on the tool’s contribution to meeting domain requirements. All participants found the map-based visualization helpful in displaying the geographical distribution of books (R3). According to the transcripts, various features in the tool facilitated intuitive data interpretation. New data manipulations and corresponding visualizations allowed efficient exploration (R1, R2, R4) and provided insights into the prevalent book transfer causes across time and space (R6). The Storyboards received wide appreciation for their ability to present and explain the transfer history of book copies on map-based views from multiple perspectives. Participants P1 and P3 mentioned that the MSS (Fig. 3 d1) presented individual copies as row objects, aligning with their accustomed way of viewing the records in the MEI interface. The linked global and partial views were very helpful in quickly understanding the copy’s movement history and exploring provenances with interesting features.
All participants commended SSS (d2) for effectively displaying the distribution and aggregation of transfers across all printed copies of one book edition, providing previously inaccessible information. As an animated version of the SSS, the SDS (d3) was praised for the vivid presentation of book movements. The Heatmap was acknowledged as an efficient method for identifying and searching interesting patterns in MEI. Participants noted their intuitive presentation of transfer data characteristics across all copies regarding geographical or temporal distribution. Linking heatmaps to storyboards enabled quick targeting of specific copy records with certain traits for closer observation: It is a tool that cuts the time of realizing things and will save you a lot of comparing and looking at data while you have it all here. Participants P2 and P3 found heatmaps especially useful for providing a new perspective for users with limited prior knowledge, offering insights and research starting points: The heatmaps are useful at the start of a research when someone wants to immediately see a pattern or behavior, or what they need to focus on. The concept of data completeness, obtained through continuous data abstraction during the iterative design process, was well-received by participants. They appreciated the radar chart and step-by-step sequences map (Fig. 3 d1.1 & d1.4) for providing a quick overview of data ambiguities in each provenance and an indication of credibility and reliability. Participants also liked the connection to MEI page for data checking and proofreading. The animations embedded in the tool received positive feedback, particularly from DEs with teaching duties, who noted their usefulness in engaging students and showing patterns. Usage Scenarios: We described the following scenarios encountered or envisioned by DEs during the think-aloud evaluation and reflective discussion. P2 was familiar with all the copy records entered for the Dante Project and wanted to use the tool to provide evidence for her research conjectures. She started with the temporal heatmap (Fig. 3 b1) and observed the frequency distribution of transfer activity in the temporal dimension for all copies: It’s very interesting to see that the colors are darker in these beginning time frames and then get lighter and less frequent. It illustrates how the Printing Revolution in Early Modern Europe impacted the book trade, […], I can visually see they increase again around the 18th century, which can be used as evidence of the mechanization of printing technology. With a particular interest in the copies that had traveled across the ocean to the US, she opened the Full Journey query panel (a2), set the start and end location pair, and got 25 records retrieved. She then opened the SSS (d2) for global exploration. She identified and compared the transfer paths of different copies on the map by clicking on the corresponding buttons in the Information Panel (c2). In the end, from the SDS (d3), she enjoyed the animation of the selected copies moving across the map: It is surprising to see these books printed from the same place and headed to the same destination but underwent such different journeys. P4, focusing on the special immigration of books from Europe to South America, was less familiar with the MEI records entered for the Dante Project. He began exploring using the Instances Heatmap (a1), ranking results by frequency, and noticing most transfers were in Europe. 
Focusing on the UK, he obtained all the copies that contained transfers within the UK by clicking the cell. Using the MSS, he compared UK transfers in different copies with the Horizontal Journey (d1.2). He targeted interesting records by combining the Full Journey and Step-by-step Sequences views (d1) and then turned to the linked MEI record page for more information. He envisioned applying the tool to his research: “The heatmaps can accelerate the process of seeking the migration of books from Europe to South America, […], the trajectory and the animation can further show the migration took place in a few decades and how the new migration took place.”

Summary: Fulfillment of the _DanteExploreVis_ requirements was confirmed during the interview sessions. The heatmaps’ usability issues were addressed based on participants’ feedback. On usefulness, all domain requirements were met. The three storyboards provided a multi-perspective interpretation of MEI data in static or dynamic forms, fulfilling requirements R1, R2, and R5. Map-based visualizations addressed the research gap (R3), complementing textual MEI records and supporting conjectures. One participant remarked, “The tight link to the map is really useful. This tool proves a saying that geography is the mother of history.” The data completeness proposal and associated visualizations improved data checking efficiency (R4). Heatmaps and the disintegration and reintegration of the hierarchical provenance data offered new exploration avenues, inspiring further research (R6).

## 8 Reflection and Discussion

Reflecting on our experiences, we highlight the unique challenges and opportunities of collaborating with humanities domain experts. We further explore how our adapted framework assists in mitigating these challenges and discuss its potential applicability across other domains.

Tailoring design study frameworks for different domains: Our experience underscores the importance of adapting design study frameworks according to the target domain. Experts from different fields possess distinct habits, including information processing, data perception, thinking styles, collaboration methods, etc. Adapting to these habits and customizing the framework for design research implementation can enhance the process’s efficiency. For example, historians may possess a deeper understanding of data and tend to present new requirements each time they encounter a new way of visualizing the domain data. As a result, we adjusted the core phase in the Nine-Stage Framework to an iterative design cycle with fixed stages, emphasizing data, requirement, and task reflection during each iteration.

Balancing engineering and creative design processes [36]: We discovered that humanities researchers exhibit different research habits and sensitivities compared to their scientific counterparts. For instance, they favor textual information and may display lower sensitivity to visual and numerical data. Although they may establish goals and problems to solve at a project’s outset, their strongly divergent and creative minds can generate needs unrelated to the current design mid-project. Thus, during the design research process, we aim to fulfill domain requirements; however, if they are too unrelated, we evaluate whether they should be implemented in the current tool or serve as a starting point for future projects or tools.
Being aware of the “wow” effect: Keep in mind that the wow effect, such as the use of animation or the simultaneous display of vast amounts of data, may sway domain experts during the design process. This could potentially divert their focus from assessing the effectiveness of the tool in supporting data analysis. When examining data from qualitative evaluations, it is essential to prioritize feedback regarding the tool’s functionality and overall usefulness. Be discerning about whether praise from domain experts may be affected by the wow effect. Questioning domain expert assumptions: Visualization experts should avoid assuming that domain experts always possess a deeper knowledge of data characteristics and uses. By challenging domain expert opinions and investigating alternative visualization techniques, new research opportunities, and more effective design solutions can emerge. This process can also provide novel perspectives for domain experts to understand or work with the data. For instance, in our case, despite initial skepticism from domain experts about the usefulness of a heatmap, as the project evolved, they discovered its value in interpreting large datasets and as a source of inspiration for their research. Overcoming Interdisciplinary Collaboration Challenges: The framework variant emphasizes iterative design cycles and deep collaboration with domain experts, cultivating a more context-sensitive and user-centered design perspective. This methodology helps to navigate and reconcile the understanding gaps and thought divergences often found in collaborations spanning distinct domains. Hence, the framework proves to be a versatile instrument for conducting design studies across a broad spectrum of fields, extending its value beyond the humanities domain, which was the genesis of our project. ## 9 Limitations and Future Work Data uncertainty and ambiguity, inherent characteristics in humanities and historical datasets, pose challenges to accuracy. The MEI data is heavily reliant on expert annotations based on physical evidence, thus improving precision via computational means is difficult. Nonetheless, future work will focus on visualizing these uncertainties, attracting expert attention, and potentially improving data quality. Scalability is a limitation due to _DanteExploreVis_ ’s specialized design and integration with the MEI database and data structure. However, the hierarchical and temporal-spatial nature of the provenance data shares similarities with datasets in various fields. Multi-views with close and distant observations applied in _DanteExploreVis_ meet the requirements of a visual analytics tool. With minor adjustments, it could be employed for research involving diverse datasets such as population migration, theological book circulation, and the trade of manuscripts and paintings. Testing _DanteExploreVis_ with different datasets will be part of our future endeavors to gauge its scalability. Scalable Data Visualization is a significant issue. The tool currently accommodates the existing data effectively, the anticipated expansion of the MEI database and the escalating complexity of analytical demands could potentially result in visual clutter. Although our tool includes edge-bundling to mitigate clutter, its future efficiency is uncertain. Future work will explore enhancing the bundling feature by enabling user customization for more targeted path bundling. 
We are also examining alternative visualization methodologies for temporal-spatial path data to overcome the constraints of a 2D map. These advancements would require continued collaborative discussions and alignment with DEs. ## 10 Conclusion We modified the core phase of the Nine-Stage Framework to develop a tailored design study framework for historians. Over six-month, we collaborated with domain experts in iterative design cycles to create _DanteExploreVis_. We thoroughly documented the process to ensure reproducibility and shared insights and lessons learned from our interdisciplinary approach, highlighting its potential for broader application across various domains. On scalability, the _DanteExploreVis_ interface can be adapted to support trajectory data analysis in different fields, though future evaluations with diverse datasets are necessary. At present the tool is applied to a subset of the MEI data; future research could explore its application to the complete dataset or examine relationships between different versions or literary works based on provenances. ###### Acknowledgements. Y. Xing is funded by: King’s-CSC PhD Scholarship programme. ## References * [1] A. Abdul-Rahman, E. Maguire, and M. Chen. Comparing Three Designs of Macro-Glyphs for Poetry Visualization. In N. Elmqvist, M. Hlawitschka, and J. Kennedy, eds., EuroVis - Short Papers. The Eurographics Association, 2014. doi: 10 . 2312/eurovisshort . 20141165 * [2] C. Abras, D. Maloney-Krichmar, J. Preece, et al. User-centered design. Bainbridge, W. Encyclopedia of Human-Computer Interaction. Thousand Oaks: Sage Publications, 37(4):445–456, 2004. * [3] V. Agafonkin and other contributors. Leaflet: A JavaScript library for interactive maps. https://leafletjs.com, 2021. Accessed: 2023-03-02. * [4] T. Arnold, N. Ayers, J. Madron, R. Nelson, and L. Tilton. Visualizing a large spatiotemporal collection of historic photography with a generous interface. In 2020 IEEE 5th Workshop on Visualization for the Digital Humanities (VIS4DH), pp. 30–35. IEEE, 2020. doi: 10 . 1109/VIS4DH51463 . 2020 . 00010 * [5] M. Bostock. D3.js - data-driven documents. https://d3js.org/, 2011. Accessed: 2023-03-02. * [6] M. Brehmer. Why Visualization? Task Abstraction for Analysis and Design. PhD thesis, University of British Columbia, April 2016. * [7] M. Brehmer, S. Ingram, J. Stray, and T. Munzner. Overview: The design, adoption, and analysis of a visual document mining tool for investigative journalists. IEEE Trans. Visualization & Computer Graphics, 20(12):2271–2280, 2014. doi: 10 . 1109/TVCG . 2014 . 2346431 * [8] M. Brehmer and T. Munzner. A multi-level typology of abstract visualization tasks. IEEE Transactions on Visualization and Computer Graphics, 19(12):2376–2385, 2013. doi: 10 . 1109/TVCG . 2013 . 124 * [9] M. Brhel, H. Meth, A. Maedche, and K. Werder. Exploring principles of user-centered agile software development: A literature review. Information and Software Technology, 61:163–181, 2015. doi: 10 . 1016/j . infsof . 2015 . 01 . 004 * [10] T. Brinck, D. Gergle, and S. D. Wood. Usability for the web: Designing web sites that work. Elsevier, 2001. * [11] B. Buxton. Sketching user experiences: getting the design right and the right design. Morgan kaufmann, 2010. * [12] B. Camburn, V. Viswanathan, J. Linsey, D. Anderson, D. Jensen, R. Crawford, K. Otto, and K. Wood. Design prototyping methods: state of the art in strategies, techniques, and guidelines. Design Science, 3:e13, 2017. doi: 10 . 1017/dsj . 2017 . 10 * [13] R. P. G. Cavalcante. 
React-leaflet-ant-path. https://github.com/rubenspgcavalcante/react-leaflet-ant-path, 2021\. Accessed: 2023-03-02. * [14] C. CERL. Material Evidence in Incunabula. https://data.cerl.org/mei/_search, 2015. * [15] C. CERL. MEI map. http://documents.cerl.org/jupyter/mei_map_current.html, 2021. Accessed: 2023-03-02. * [16] D. Chang, Y. Ge, S. Song, N. Coleman, J. Christensen, and J. Heer. Visualizing the republic of letters. Stanford: Stanford University. Retrieved April, 21:2014, 2009. * [17] A. Ciula, M. Vieira, G. Ferraro, T. Ong, S. Perovic, R. Mucignat, N. Valmori, B. Deseure, and E. J. Mannucci. Small data and process in data visualization: The radical translations case study. In 2021 IEEE 6th Workshop on Visualization for the Digital Humanities (VIS4DH), pp. 1–6. IEEE, 2021. doi: 10 . 48550/arXiv . 2110 . 09349 * [18] T. S. Da Silva, A. Martin, F. Maurer, and M. Silveira. User-centered design and agile methods: a systematic review. In 2011 AGILE conference, pp. 77–86. IEEE, 2011. doi: 10 . 1109/AGILE . 2011 . 24 * [19] B. Dingman, G. W. Tigwell, and K. Shinohara. Interview and think aloud accessibility for deaf and hard of hearing participants in design research. In Proceedings of the 23rd International ACM SIGACCESS Conference on Computers and Accessibility. Association for Computing Machinery, New York, NY, USA, 2021. doi: 10 . 1145/3441852 . 3476526 * [20] C. Dondi. “15cBooktrade”: An evidence-based assessment and visualization of the distribution, sale and reception of printed books in the renaissance. Gazette du livre médiéval, 60(1):83–101, 2013. doi: 10 . 3406/galim . 2013 . 2035 * [21] C. Dondi. From the 15cBOOKTRADE Project website. https://15cbooktrade.ox.ac.uk/, (accessed in 2021). * [22] B. Dumas, B. Moerman, S. Trullemans, and B. Signer. Artvis: Combining advanced visualisation and tangible interaction for the exploration, analysis and browsing of digital artwork collections. In Proceedings of the 2014 International Working Conference on Advanced Visual Interfaces, AVI ’14, p. 65–72. Association for Computing Machinery, New York, NY, USA, 2014. doi: 10 . 1145/2598153 . 2598159 * [23] D. Edelstein, P. Findlen, G. Ceserani, C. Winterer, and N. Coleman. Historical Research in a Digital Age: Reflections from the Mapping the Republic of Letters Project. The American Historical Review, 122(2):400–424, 03 2017. doi: 10 . 1093/ahr/122 . 2 . 400 * [24] J. Eirich, J. Bonart, D. Jackle, M. Sedlmair, U. Schmid, K. Fischbach, T. Schreck, and J. Bernard. Irvine: A design study on analyzing correlation patterns of electrical engines. IEEE Trans. Visualization & Computer Graphics, 28(01):11–21, 2022\. doi: 10 . 1109/TVCG . 2021 . 3114797 * [25] I. Etikan and K. Bala. Sampling and sampling methods. Biometrics & Biostatistics International Journal, 5(6):00149, 2017\. doi: 10 . 15406/bbij . 2017 . 05 . 00149 * [26] I. Etikan, S. A. Musa, R. S. Alkassim, et al. Comparison of convenience sampling and purposive sampling. American Journal of Theoretical and Applied Statistics, 5(1):1–4, 2016. doi: 10 . 11648/j . ajtas . 20160501 . 11 * [27] M. E. Fonteyn, B. Kuipers, and S. J. Grobe. A description of think aloud method and protocol analysis. Qualitative health research, 3(4):430–441, 1993. doi: 10 . 1177/104973239300300403 * [28] C. Forsell and M. Cooper. A guide to reporting scientific evaluation in visualization. In Proceedings of the International Working Conference on Advanced Visual Interfaces, p. 608–611. Association for Computing Machinery, New York, NY, USA, 2012. doi: 10 . 1145/2254556 . 
2254668 * [29] D. Guo. Visual analytics of spatial interaction patterns for pandemic decision support. International Journal of Geographical Information Science, 21(8):859–877, 2007. doi: 10 . 1080/13658810701349037 * [30] J. He, H. Chen, Y. Chen, X. Tang, and Y. Zou. Variable-based spatiotemporal trajectory data visualization illustrated. IEEE Access, 7:143646–143672, 2019. doi: 10 . 1109/ACCESS . 2019 . 2942844 * [31] D. Holten and J. van Wijk. Force-directed edge bundling for graph visualization. Comput. Graph. Forum, 28:983–990, 06 2009. doi: 10 . 1111/j . 1467-8659 . 2009 . 01450 . x * [32] W. Huang, ed. Handbook of Human Centric Visualization. Springer New York, NY, New York, NY, 1 ed., 2014. doi: 10 . 1007/978-1-4614-7485-2 * [33] X. Huang, Y. Zhao, C. Ma, J. Yang, X. Ye, and C. Zhang. Trajgraph: A graph-based visual analytics approach to studying urban network centralities using taxi trajectory data. IEEE Trans. Visualization & Computer Graphics, 22(1):160–169, 2016\. doi: 10 . 1109/TVCG . 2015 . 2467771 * [34] C. Ling and E. C. Delmelle. Classifying multidimensional trajectories of neighbourhood change: A self-organizing map and k-means approach. Annals of GIS, 22(3):173–186, 2016. doi: 10 . 1080/19475683 . 2016 . 1191545 * [35] D. J. Mayhew. The usability engineering lifecycle. In CHI’99 Extended Abstracts on Human Factors in Computing Systems, p. 147–148. Association for Computing Machinery, New York, NY, USA, 1999. doi: 10 . 1145/632716 . 632805 * [36] S. McKenna, D. Mazur, J. Agutter, and M. Meyer. Design activity framework for visualization design. IEEE Trans. Visualization & Computer Graphics, 20(12):2191–2200, 2014. doi: 10 . 1109/TVCG . 2014 . 2346331 * [37] V. Müller, C. Sieg, and L. Linsen. Uncertainty-aware topic modeling visualization. In 2021 IEEE 6th Workshop on Visualization for the Digital Humanities (VIS4DH), pp. 12–18. IEEE Computer Society, Los Alamitos, CA, USA, 2021. doi: 10 . 1109/VIS4DH53644 . 2021 . 00007 * [38] T. Munzner. A nested model for visualization design and validation. IEEE Trans. Visualization & Computer Graphics, 15(6):921–928, 2009\. doi: 10 . 1109/TVCG . 2009 . 111 * [39] T. Munzner. Visualization Analysis and Design. AK Peters Visualization Series. CRC Press, 2014. * [40] J. Nielsen. Usability Engineering. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 1994. * [41] C. Nobre, N. Gehlenborg, H. Coon, and A. Lex. Lineage: Visualizing multivariate clinical data in genealogy graphs. IEEE Trans. Visualization & Computer Graphics, 25(3):1543–1558, 2019. doi: 10 . 1109/TVCG . 2018 . 2811488 * [42] M. Oppermann and T. Munzner. Data-first visualization design studies. In 2020 IEEE Workshop on Evaluation and Beyond-Methodological Approaches to Visualization (BELIV), pp. 74–80. IEEE Computer Society, Los Alamitos, CA, USA, 2020. doi: 10 . 1109/BELIV51497 . 2020 . 00016 * [43] G. Prickman, A. Holland, and R. Shepard. The Atlas of Early Printing. https://atlas.lib.uiowa.edu/, (accessed in 2021). * [44] Qualtrics. https://www.qualtrics.com, 2005. Accessed: 2022-11-21. * [45] D. Redin, D. Vilela, N. Nunes, M. Ribeiro, and C. Prandi. Vitflow: a platform to visualize tourists flows in a rich interactive map-based interface. In 2017 Sustainable Internet and ICT for Sustainability (SustainIT), pp. 1–2. IEEE, 2017. doi: 10 . 23919/SustainIT . 2017 . 8379814 * [46] M. Sedlmair, M. Meyer, and T. Munzner. Design study methodology: Reflections from the trenches and the stacks. IEEE Trans. Visualization & Computer Graphics, 18(12):2431–2440, 2012. doi: 10 . 
1109/TVCG . 2012 . 213 * [47] R. Simon, L. Isaksen, E. T. Barker, and P. de Soto Cañamares. Peripleo: a tool for exploring heterogenous data through the dimensions of space and time. Code4Lib Journal, 2016. * [48] A. Skupin and R. Hagelman. Visualizing demographic trajectories with self-organizing maps. GeoInformatica, 9(2):159–179, 2005. doi: 10 . 1007/s10707-005-6670-2 * [49] T. Sobral, T. Galvão, and J. Borges. Visualization of urban mobility data from intelligent transportation systems. Sensors, 19(2):332, 2019. doi: 10 . 3390/s19020332 * [50] U. H. Syeda, P. Murali, L. Roe, B. Berkey, and M. A. Borkin. Design study “lite” methodology: Expediting design studies and enabling the synergy of visualization pedagogy and social good. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, CHI ’20, pp. 1–13. Association for Computing Machinery, 2020\. doi: 10 . 1145/3313831 . 3376829 * [51] M. Tory and T. Möller. Evaluating visualizations: Do expert reviews work? IEEE Computer Graphics & Applications, 25(5):8–11, 2005. doi: 10 . 1109/MCG . 2005 . 102 * [52] T. Vancisin, M. Orr, and U. Hinrichs. Externalizing transformations of historical documents: Opportunities for provenance-driven visualization. In 2020 IEEE 5th Workshop on Visualization for the Digital Humanities (VIS4DH), pp. 36–42. IEEE, 2020. doi: 10 . 1109/VIS4DH51463 . 2020 . 00011 * [53] M. Winter, H. Baumeister, U. Frick, M. Tallon, M. Reichert, and R. Pryss. Exploring the usability of the german covid-19 contact tracing app in a combined eye tracking and retrospective think aloud study. 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), pp. 2215–2221, 2021. doi: 10 . 1109/EMBC46164 . 2021 . 9630949 * [54] J. Wood, J. Dykes, and A. Slingsby. Visualisation of origins, destinations and flows with od maps. The Cartographic Journal, 47(2):117–129, 2010. doi: 10 . 1179/000870410X12658023467367 * [55] Y. Xing, C. Dondi, R. Borgo, and A. Abdul-Rahman. A Design Study of Visualizing Historical Book Movement. In M. Agus, W. Aigner, and T. Hoellt, eds., EuroVis 2022 - Short Papers. The Eurographics Association, 2022. doi: 10 . 2312/evs . 20221100 * [56] Y. Ye, F. Sauer, K.-L. Ma, K. Aditya, and J. Chen. A user-centered design study in scientific visualization targeting domain experts. IEEE Trans. Visualization & Computer Graphics, 26(6):2192–2203, 2020. doi: 10 . 1109/TVCG . 2020 . 2970525 * [57] P. Zhang, J. Carey, D. Te’eni, and M. Tremaine. Integrating human-computer interaction development into the systems development life cycle: a methodology. Communications of the Association for Information Systems, 15(1):29, 2005. doi: 10 . 17705/1CAIS . 01529 * [58] W. Zhang, J. K. Wong, X. Wang, Y. Gong, R. Zhu, K. Liu, Z. Yan, S. Tan, H. Qu, S. Chen, et al. Cohortva: A visual analytic system for interactive exploration of cohorts based on historical data. IEEE Trans. Visualization & Computer Graphics, 29(1):756–766, 2023\. doi: 10 . 1109/TVCG . 2022 . 3209483 * [59] Y. Zhang, K. Chanana, and C. Dunne. Idmvis: Temporal event sequence visualization for type 1 diabetes treatment decision support. IEEE Trans. Visualization & Computer Graphics, 25(1):512–522, 2019\. doi: 10 . 1109/TVCG . 2018 . 2865076
# AI-Copilot for Business Optimisation: A Framework and A Case Study in Production Scheduling

Pivithuru Thejan Amarasinghe, Research Center for Data Analytics and Cognition, La Trobe University, Australia <EMAIL_ADDRESS>
& Su Nguyen, College of Business and Law, RMIT University, Australia <EMAIL_ADDRESS>
& Yuan Sun, Research Center for Data Analytics and Cognition, La Trobe University, Australia <EMAIL_ADDRESS>
& Damminda Alahakoon, Research Center for Data Analytics and Cognition, La Trobe University, Australia <EMAIL_ADDRESS>

###### Abstract

Business optimisation is the process of finding and implementing efficient and cost-effective means of operation to bring businesses a competitive advantage. Synthesizing problem formulations is an integral part of business optimisation that is centred around human expertise and therefore has a high potential of becoming a bottleneck. With the recent advancements in Large Language Models (LLMs), the human expertise needed in problem formulation can potentially be minimized using Artificial Intelligence (AI). However, developing an LLM for problem formulation is challenging, due to training data requirements, token limitations, and the lack of appropriate performance metrics in LLMs. To minimize the requirement for large training data, considerable attention has recently been directed towards fine-tuning pre-trained LLMs for downstream tasks, rather than training an LLM from scratch for a specific task. In this paper, we adopt this approach and propose an AI-Copilot for business optimisation by fine-tuning a pre-trained LLM for problem formulation. To address token limitations, we introduce modularization and prompt engineering techniques to synthesize complex problem formulations as modules that fit into the token limits of LLMs. In addition, we design performance evaluation metrics that are more suitable for assessing the accuracy and quality of problem formulations than existing evaluation metrics. Experimental results demonstrate that our AI-Copilot can synthesize complex and large problem formulations for a typical business optimisation problem in production scheduling.

_Keywords:_ Copilot, Large Language Model (LLM), Artificial Intelligence (AI), Business optimisation, Problem Formulation, Production Scheduling

## 1 Introduction

Business optimisation is an important process to help businesses gain competitive advantages by reducing operational costs, improving customer satisfaction, and mitigating risks. Advances in digital technologies, such as Internet-of-Things and cloud technologies, have enabled new business models with complex operations. Optimising key business decisions (operational, tactical, and strategic) in complex and dynamic systems is challenging and requires the involvement of different stakeholders. Handling business rules and various practical constraints is also not a trivial task. Although modern optimisation technologies have offered businesses different ways to formulate and solve their problems, successfully adopting these technologies still requires significant domain knowledge and optimisation expertise. Business optimisation typically commences with the business providing a problem description to an optimisation expert. The optimisation expert then formulates the problem description into a mathematical model [Antoniou and Lu, 2007]. Following that, the optimisation expert must translate the mathematical model into an executable problem formulation that can be solved using a solver [Boyd and Vandenberghe, 2004].
Subsequently, the optimisation expert interprets the results and suggests the best actions for the business. Finally, a software engineer can integrate the models developed by the optimisation expert into customer’s apps, etc. Although solving optimisation problems can be handled efficiently by many advanced solvers such as [Gurobi Optimization, LLC, 2023, Google, 2023, Cplex, 2009], and meta-heuristics, transforming problem description to an executable and accurate problem formulation is time- consuming and requires expert knowledge. Poor problem formulations can lead to infeasible solutions (e.g., failure to address constraints and optimise the objective of interest) and significantly slow down the solving process. LLMs have become increasingly popular due to their broad applications. Initiated by transformer Vaswani et al. [2017] for machine translation, LLMs have quickly adopted within different software and business functions such as analysing business data [Cheng et al., 2023], creating marketing content [Rivas and Zhao, 2023], generating code for visualisations [OpenAI, 2023], supporting programmers with auto-completion [Nguyen and Nadi, 2022], and working as optimisers for simple continuous and discrete optimisation problems [Yang et al., 2023]. With respect to supporting technical users, Salesforce uses code-generating LLMs for code generation in its development teams [Le et al., 2022]. Meanwhile, GitHub Copilot [Nguyen and Nadi, 2022] enables code suggestions and code completions for programmers to improve their coding efficiency. Furthermore, Amazon CodeWhisperer [Yetiştiren et al., 2023] helps developers to code efficiently as well as write code related to AWS resources. Meanwhile, ServiceNow worked with the open-source community to introduce StarCoder [Li et al., 2023] as a free AI code generator. Going beyond supporting technical users, LLMs support non-technical users to implement technical tasks. For example, code-generating LLMs now enable non-technical users to generate a simple website or create a simple query to retrieve data from a database, without technical support. The motivation behind this paper is to leverage code-generating LLMs to support non expert users to successfully carry out business optimisations without having to consult experts to significantly reduce the traditionally required effort. Given the nature of problem formulation as a language-to-language translation, code-generating LLMs can be a powerful tool to transform problem description into a problem formulation. Furthermore, the recent considerable attention to using LLMs to automate code generation tasks, paves the way to fine-tuning a pre-trained code-generating LLM for problem formulation. Additionally, the introduction of unlabelled data to train LLMs for code generation [Chen et al., 2021], eliminated most of the limitations in early-stage code-generating LLMs that were trained using labelled datasets [Mastropaolo et al., 2021]. Recently, fine-tuning pre-trained models for downstream tasks has enabled training a LLM for a specific set of tasks with just hundred or two hundred data points [Solaiman and Dennison, 2021]. However, in general, LLM-based applications in complex decision-making scenarios are still limited. 
Since existing code-generating LLMs are trained on generic programming problems, problem formulation is a non trivial task for those code-generating LLMs due to complex constraints, different optimisation requirements, and the need of selecting the most suitable optimisation technique. Additionally, due to token limitation, code-generating LLMs cannot generate large problem formulations, and large computational and memory requirements of some of the code-generating LLMs limit their practical use. Also, the existing performance evaluation metrics in code-generating LLMs are not suitable for problem formulation since the result as well as the optimisation technique need to be considered. Although machine translation LLMs have been recently fine-tuned for auto- formulation of optimisation models, such models are restricted to mathematical modeling with linear programming [Ramamonjison et al., 2022] or conceptual models that are still in the experimental stage [Tsouros et al., 2023]. Moreover, the datasets used in these LLMs contain comparatively smaller problem formulations with limited constraints and variables. As a result, applications of such LLMs are significantly limited in practice since real- world problems often have large numbers of variables and constraints while part of the variables are integers. Accordingly, we introduce AI-Copilot as a step towards automating problem formulations for complex real-world optimisation problems. To do so, we select production scheduling as a case study as it has been comprehensively researched in the past and contains complex constraints and different optimisation objectives [Xiong et al., 2022]. We fine-tune a code-generating LLM, which uses limited memory, and computational resources, using a data set created by us which comprised 100 pairs of problem descriptions and formulations. As a result, we minimize the requirement of large training data, rather than training a LLM from scratch for problem formulation. In addition, we apply modularization and prompt engineering techniques on AI-Copilot to cater to token limitations when formulating complex problem formulations. Furthermore, we use loss and execution based performance evaluation metrics to assess the accuracy and quality of problem formulations compared to existing evaluation metrics. In contrast to existing machine translation LLM based auto-formulation models such as Ramamonjison et al. [2022], our method performs text-to-code translation and formulates constraint programming problems. Moreover, AI- Copilot can formulate complex problem formulations compared to existing machine translation LLM based auto-formulations models [Ramamonjison et al., 2022, Tsouros et al., 2023]. Therefore the contributions made by AI-Copilot for automating problem formulations could be highlighted as: * • An open-source dataset with production scheduling to fine-tune code-generating LLM for problem formulation. * • A fine-tuned code-generating LLM for problem formulation that consumes limited computing resources. * • A modularization and prompt engineering technique to manage large problem formulations. * • Performance evaluation metrics for assessing the accuracy and quality of problem formulation. ## 2 Literature Review ### 2.1 Business Optimisation and Optimisation Technologies Business optimisation has been described as the Philosophy of Continuous Improvement [Singh and Singh, 2009], where businesses attempt to make their operation as perfect as possible. 
Generally, business optimisation covers all processes and efforts to improve productivity, efficiency, performance, etc. However, this research considers business optimisation from a computational and mathematical perspective where one tries to minimize or maximize an important characteristic of a process by an appropriate choice of decisions [Kallrath and Wilson, 1997]. For example, problem descriptions are formulated as mathematical models and solved using solvers to provide suggestions for business decisions. Traditionally, combinatorial optimisation is a class of optimisation, that is used for mathematical and computational requirements of business optimisation [Yu, 2013]. In spite of the benefits from business optimisation to businesses, challenges such as intensive computational and human expertise requirements, inadequate support structure for business optimisation, etc may exist. While past studies have predominantly focused on algorithmic improvements for business optimisation, our AI-Copilot will focus on improving the support structure and reducing intensive human expertise requirements. In general combinatorial optimisation is known to be the process of finding the minimum or maximum of an objective function that has a large discrete domain. Such a process is needed in real-world scenarios like Vehicle Routing problem to select the optimum set of routes to serve a given set of customers [Toth and Vigo, 2002], Bin-Packing problem for multiprocessor scheduling [Coffman et al., 1978], Integer Linear Programming to reduce cut waste in furniture manufacturing [Kłosowski et al., 2018], Production Scheduling [Pochet and Wolsey, 2006], etc. The solution space for such a scenario is too broad for a pure brute- force approach. Therefore, algorithmic techniques like Dynamic Programming, Branch and Bound, Random-Restart Hill Climbing, Simulated Annealing, Genetic Algorithms, and Tabu Search can be used. From a computer science perspective, the above algorithmic techniques reduce the solution space or accelerate the solution search using mathematical methods. Recently considerable attention is directed towards applying machine learning techniques for combinatorial optimisation rather than traditional mathematical improvements to combinatorial optimisation algorithms. Although the latest research focuses on improving combinatorial optimisation techniques using machine learning, AI-Copilot will focus on generating problem formulations related to combinatorial optimisation. ### 2.2 Problem Formulation & Solvers An optimisation problem can be formulated by selecting one or more optimisation variables, an objective function, and constraints . Such optimisation problems can be divided into unconstrained optimisations, simple bound constraints, and constrained optimisations. While the name suggests that unconstrained optimisations have no constraints, simple bound constraints have boundaries for the design parameters, but no constraints on the solution. However, constrained optimisation is the most complex type of optimisation problem, where the solution must satisfy a set of linear or nonlinear constraints and bounds to design parameters. Solvers embed powerful algorithms to solve problem formulations. Nevertheless, solvers will be different from each other, due to the facts such as, computational efficiency, and solution strategies. 
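To make these ingredients concrete, the following minimal sketch formulates a small constrained optimisation problem in Python with CPMpy, the modelling library used later for the case-study dataset. The product-mix scenario, variable names, and all numbers are our own illustrative assumptions, not taken from the paper.

```python
# Minimal illustrative constrained optimisation model in CPMpy (the modelling
# library used for the case-study dataset later in the paper). The product-mix
# scenario and all numbers are assumed for illustration only.
from cpmpy import Model, intvar

# Decision variables with simple bounds: units of two products to make.
a = intvar(0, 100, name="units_A")
b = intvar(0, 100, name="units_B")

model = Model(
    3 * a + 4 * b <= 240,   # constraint: machine hours available
    2 * a + 1 * b <= 100,   # constraint: raw material available
)
model.maximize(5 * a + 6 * b)   # objective function: total profit

if model.solve():               # hand the formulation to a solver
    print("profit:", model.objective_value())
    print("units_A:", a.value(), "units_B:", b.value())
```

The three ingredients discussed above are all visible here: decision variables with simple bounds, constraints on the solution, and an objective function handed to a solver.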
Despite Gurobi Optimization, LLC [2023] being the state-of-art solver for mathematical programming which solves a wide range of problems like linear programming, mixed integer programming, etc., Bestuzheva et al. [2021] is the fastest non- commercial solver currently available for mixed integer programming and mixed integer non-linear programming. In addition, Gurobi Optimization, LLC [2023] is more competitive over Bestuzheva et al. [2021] in solving complex problem formulations, by supplying better timing and solving within the time limit [Avella et al., 2023] capabilities. Admittedly, optimisation languages like MiniZinc [Nethercote et al., 2007], GAMS [Soroudi, 2017], and AMPL [Fourer et al., 1990] supply problem formulation specific syntax to users to represent a mathematical model related to a problem description, the differences of optimisation languages over each other require significant training to master an optimisation language. Such differences happen mainly in solving capability, licencing approach, expressiveness of syntax, and documentation. Furthermore, as the transformation of a problem description from a native language to an optimisation language is time consuming, optimisation languages limit the application of business optimisation in a wide range of problems. Yet AI-Copilot will go beyond existing technologies to bridge this gap and reduce the requirement of mastering optimisation languages. ### 2.3 LLM & Code Generation Considerable attention is recently directed towards LLMs, since LLMs can perform a wide range of tasks such as writing academic literature [Lund et al., 2023], question answering [Wang et al., 2019], language translation [OpenAI, 2023], code generation [Le et al., 2022], among others. Zan et al. [2022] reports a comprehensive study on twenty-seven such code-generating LLMs. Unquestionably transformer-based machine translation [Vaswani et al., 2017] has paved the way for code generation using LLMs, and the first code- generating LLMs were trained using labelled datasets [Mastropaolo et al., 2021]. Since such techniques had practicality issues of requiring labelled data to even fine-tune a LLM for code generation, these models had limitations. However, promising results could be seen with the introduction of unlabelled data to train LLMs for code generation by Chen et al. [2021]. Since Chen et al. [2021] is a code-generating LLM fine-tuned using GitHub Python code and due to the size of the corpus that is being used to fine-tune the model, code generation can cover a broader spectrum. Le et al. [2022] has improved the quality of the generated code by considering the results of test execution with reinforcement learning techniques. Accordingly, Le et al. [2022] uses an actor-critic network to supply rewards for the generated code and uses these rewards to further improve its code generation capability. Furthermore, as planning capabilities to code- generating LLMs can generate more accurate code for the problem descriptions by considering future scenarios, Zhang et al. [2023] introduces a model agnostic planning process based on Markov’s decision process. Since the tree search-based planning process developed by Zhang et al. [2023] is inspired by the Monte-Carlo tree search, they use the beam search for the evaluation process. As such techniques can cause computational efficiency-related discrepancies that come with the Monte-Carlo tree search algorithm, Zhang et al. [2023] suggests caching beam-search used in the code-generating LLM. 
Benchmarks and metrics play a key role in finding progressive improvements in the code-generating capabilities of LLMs. APPS [Hendrycks et al., 2021] benchmark holds 10,000 programming problems and their solutions in Python and the significant feature of this benchmark is that it has problem descriptions closer to the natural language. Furthermore, it holds simple problems as well as complex algorithms. Although the HumanEval [Chen et al., 2021] benchmark holds 164 hand-written programming problems, the focus of this benchmark is to measure the functional correctness of the generated code. While each programming problem of HumanEval [Chen et al., 2021] has a function signature, docstring, body, and several unit tests, the major difference between APPS [Hendrycks et al., 2021] & HumanEval [Chen et al., 2021] is that the problems hold in HumanEval [Chen et al., 2021] are new and do not contain solutions in the GitHub. Since for a sizeable part of APPS [Hendrycks et al., 2021] benchmark problems, there are solutions in GitHub, for code-generating LLMs trained using GitHub data, HumanEval [Chen et al., 2021] is more accurate compared to APPS [Hendrycks et al., 2021]. Conversely, the MBPP [Austin et al., 2021] benchmark holds 974 Python programming problems, which suit entry- level programmers. Additionally, Kulal et al. [2019] introduces the pass@k metric to evaluate generated code, where for a particular problem description k number of solution codes are generated. The generated codes are run against the test case related to the programming problem and the programming problem is considered solved if at least one solution code out of k can pass the test case. In contrast to the code-generating LLM based problem formulation automation approach introduced by AI-Copilot, existing problem formulation research uses NLP techniques [Ramamonjison et al., 2022]. Most research is based on NL4Opt [Ramamonjison et al., 2023] dataset and it contains simpler problem descriptions and linear programming problem formulations related to them. As statistics suggest NL4Opt dataset [Ramamonjison et al., 2023] contains on average 2.08 variables and 2.83 constraints per problem formulation [Ramamonjison et al., 2022], which is significantly smaller compared to the production scheduling dataset used by AI-Copilot and does not exceed token limits of most LLMs. Tsouros et al. [2023] introduces a conceptual framework for text to code transformation for problem formulation using prompt engineering techniques with generic LLMs. However, the experiment results of the framework have not been released as yet. Nevertheless, AI-Copilot will focus on constraint programming problem formulations, that involve a significantly larger number of constraints, and variables while considering important practical aspects like scalability, resource utilization, and optimisation techniques. ### 2.4 Machine Learning in Combinatorial Optimisation Recent attempts have been made by machine learning and operational research communities to leverage machine learning for combinatorial optimisation. The ultramodern combinatorial optimisation algorithms use handcrafted heuristics to make decisions that are computationally expensive or mathematically complex [Bengio et al., 2021]. In fact, with the data-driven nature of machine learning, it is a natural candidate for decision-making. The application of machine learning in combinatorial optimisations focuses on approximations and the discovery of new policies. 
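Circling back to the evaluation metrics discussed in Section 2.3, the pass@k idea of Kulal et al. [2019] can be sketched in a few lines. The data structures and the stubbed test runner below are our own assumptions for illustration; they follow the at-least-one-passes definition described above rather than any particular benchmark's implementation.

```python
# Illustrative sketch of the pass@k idea: a problem counts as solved if at
# least one of its k generated solutions passes the associated test case.
# The problem/test data structures here are assumptions.
from typing import Callable, List

def pass_at_k(problems: List[dict], run_tests: Callable[[str, dict], bool], k: int) -> float:
    """problems: each dict holds generated candidate solutions and a test specification."""
    solved = 0
    for problem in problems:
        candidates = problem["solutions"][:k]          # k generated programs
        if any(run_tests(code, problem["tests"]) for code in candidates):
            solved += 1
    return solved / len(problems)

# Example usage with a stubbed test runner (always-fail stub, for illustration):
# rate = pass_at_k(problems, run_tests=lambda code, tests: False, k=5)
```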
In the matter of learning new policies for improving the decision process of combinatorial optimisation, an algorithm can be learned through reinforcement techniques. Furthermore, approximations can be learned through imitation learning because of the demonstrations done by an expert on how a model should behave. Baltean-Lugojan et al. [2018] introduces a neural network-based model to estimate the objective of a time-consuming semidefinite programming (SDP) problem to decide the most promising cutting planes. In the matter of branching policies in branch-and-bound trees of mixed integer linear programming, Gasse et al. [2019] introduces a neural network to learn strong branching [Cook et al., 2011] which was the well-performing heuristic-based approach. For the container pre-marshalling problem, Hottung et al. [2020] introduces convolutional neural networks (CNN) for learning branching policy and estimating the value of partial solutions. Going further beyond, for the traveling salesman problem, Khalil et al. [2017] leveraged Graph Neural Network (GNN) to learn the selection criteria for the next node. With AI- Copilot, we introduce a support structure for combinatorial optimisation problem formulation, which can be considered as shifting boundaries of work [Jarrahi et al., 2023], in the context of combinatorial optimisation using modern AI. ### 2.5 Summary Shifting boundaries using AI to improve the work environment in an organization [Jarrahi et al., 2023] is a popular topic that focuses on introducing virtual assistants to employees in an organization. In contrast to other domains, the business optimisation domain lacks such tools to ease the burden applied to people who are involved in business optimisation. Going further beyond, the use of chatbots makes a more comfortable working environment for employees [Wang et al., 2023]. Even though LLMs can be treated as perfect candidates for such requirements, using code-generating LLMs for problem formulation will introduce certain challenges such as training data collection, token limitations, evaluation metrics, LLM size, and resource use. By using the case study in production scheduling, we demonstrate that AI- Copilot can manage it. ## 3 Case Study Job Shop Scheduling (JSS) is one class of combinatorial optimisation problems that are common in manufacturing [Pinedo, 2005], and production scheduling is broader than JSS. However, if we can solve JSS problems, there is a high probability of solving other production scheduling problems. JSS is treated as one of the renowned NP-hard problems in literature. JSS is about scheduling a number of jobs over a number of machines, and each job consists of a set of operations that need to be executed in the given order on the allocated machine. In addition, the machine is allowed to process one operation at a time, and different objectives such as makespan, and weighted tardiness can be minimized. Methods such as integer programming [Ku and Beck, 2016], metaheuristics [Kreipl, 2000], and constraint programming [Beck et al., 2011, Watson and Beck, 2008] can be used to solve JSS. For the static JSS problem instance, the shop, such as, the working or manufacturing environment includes a set of $M$ machines and $N$ jobs that need to be scheduled. Each job $j$ has its own pre-determined route through a sequence of machines to follow and its own processing time at each machine it visits. The following notation is used to define the mathematical model for the JSS [Nguyen et al., 2021]. 
Parameters:

* • $J=\{1,\dots,j,\dots,N\}$: the set of all jobs
* • $n_{j}$: the number of operations of job $j$
* • $route_{j}=(m_{j1},\dots,m_{jn_{j}})$: the sequence of machines that job $j$ will visit, where $m_{ji}$ is the machine that processes the $i^{th}$ operation of job $j$
* • $time_{j}=(p_{j1},\dots,p_{jn_{j}})$: the processing times of all operations of job $j$, where $p_{ji}$ is the processing time of the $i^{th}$ operation of job $j$
* • $r_{j}$: the release time of job $j$
* • $d_{j}$: the due date of job $j$
* • $w_{j}$: the weight of job $j$

Variables:

* • $s_{ji}$: the starting time of the $i^{th}$ operation of job $j$
* • $e_{ji}$: the ending time of the $i^{th}$ operation of job $j$
* • $C_{j}$: the completion time of job $j$
* • $T_{j}$: the tardiness of job $j$, calculated by $T_{j}=\max(C_{j}-d_{j},0)$

The constraint programming formulation for the JSS is defined as follows.

$\forall j\in J: s_{j1}>r_{j}$ (1)

$\forall j\in J, i\in\{1,\dots,n_{j}\}: e_{ji}=s_{ji}+p_{ji}$ (2)

$\forall j\in J: C_{j}=e_{jn_{j}}$ (3)

$\forall j\in J: T_{j}=\max(C_{j}-d_{j},0)$ (4)

Here (1) states that the starting time of the first operation of a job must be greater than the release time of the job; (2) that the ending time of an operation equals the sum of its starting time and processing time; (3) that the completion time of a job equals the ending time of its last operation; and (4) that the tardiness of a job equals the difference between the job completion time and the due date if it is positive, and zero otherwise.

To ensure no overlap between operations (or disjunctive constraints) on the same machine:

$\forall j,k\in J, u\in\{1,\dots,n_{j}\}, v\in\{1,\dots,n_{k}\}, m\in route_{j}, o\in route_{k}: m_{ju}=o_{kv}\Rightarrow s_{ju}\geq e_{kv}\vee s_{kv}\geq e_{ju}$ (5)

That is, if operations $u$ and $v$ from different jobs are to execute on the same machine ($m_{ju}=o_{kv}$), the start time of one of these operations must be greater than or equal to the end time of the other.

There are a number of precedence constraints between the operations of a job:

$\forall j\in J, i\in\{1,\dots,n_{j}-1\}: s_{j,i+1}\geq e_{ji}$ (6)

The objective functions are defined as follows:

* • Makespan: We define a variable $C_{\max}$ which represents the latest completion time of any job. The objective is to minimise $C_{\max}$, subject also to constraint (8):

$\min C_{\max}$ (7)

$\forall j\in J: C_{\max}\geq e_{jn_{j}}$ (8)

* • Maximum tardiness: We define a variable $T_{\max}$ which represents the maximum tardiness of any job. The objective is to minimise $T_{\max}$, subject also to constraint (10):

$\min T_{\max}$ (9)

$\forall j\in J: T_{\max}\geq T_{j}$ (10)

* • Total weighted tardiness (TWT): The objective is to minimise the cumulative weighted tardiness across all jobs:

$\min\sum_{j\in J}w_{j}T_{j}$ (11)

## 4 Proposed Method

### 4.1 Overview

Figure 1: Solution overview

Our approach is conceptually represented in Figure 1, which comprises five main components: (a) Problem Description, which captures the business optimisation scenario that is going to be formulated; (b) Code-Generating LLM, which synthesizes the problem formulation from the problem description; (c) Problem Formulation, which is the generated problem formulation for the given problem description and can be solved using a solver; (d) Solution, which is the final result obtained after solving the problem formulation using the solver; (e) Presentation, which interprets the final result and suggests the best actions for the business optimisation scenario. The AI-Copilot covers the first four components of the conceptual framework. Additionally, in the future, we can translate problem formulations generated by AI-Copilot into mathematical models to be further verified by an optimisation expert if needed. Although verification from an optimisation expert introduces a human dependency, AI-Copilot reduces the human effort through automation. The remaining sections of this article focus on how AI-Copilot is developed based on our conceptual model.

### 4.2 Pre-Trained Model

We use CodeRL as the pre-trained model; its underlying unified encoder-decoder architecture ranges in size from 60M to 770M parameters [Le et al., 2022]. Moreover, CodeRL [Le et al., 2022] is an advanced version of CodeT5 [Wang et al., 2021] trained on GitHub data, which holds 48 layers, 16 attention heads, and 1024 hidden states. Though CodeRL is significantly smaller than Codex [Chen et al., 2021] and GPT-3 [Brown et al., 2020], both introduced by OpenAI, CodeRL has been able to perform well [Le et al., 2022]. The reason for such performance by CodeRL [Le et al., 2022] is that it considers the test case execution status of the generated code in its fine-tuning approach, using an actor-critic reinforcement learning technique. Since CodeRL performs the actor role, a separate critic network assesses the solutions generated by CodeRL against the test cases related to the problem descriptions and supplies a critic score for reinforcement learning [Le et al., 2022]. Despite CodeRL showing promising results with code generation, CodeRL is not capable of problem formulation [Le et al., 2022], and the same holds for GPT-4 [OpenAI, 2023] (Figure 2). Such generic code-generating LLMs may supply a problem formulation for a problem description, but there is no guarantee that the generated problem formulation is correct, because generic code-generating LLMs are not specifically trained on problem formulation. Furthermore, by using in-context learning available in generic code-generating LLMs, users might be able to generate problem formulations, but due to the complexity of the problem descriptions and the effort that needs to be put in by the users, generic code-generating LLMs may not be ideal for problem formulation.
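To make the target of this task concrete, the sketch below shows roughly what a correct CPMpy problem formulation for the Section 3 case study (makespan objective, constraints (1)-(3) and (5)-(8)) could look like. It is our own condensed illustration with a tiny assumed instance, not a formulation taken from the paper's dataset or one generated by AI-Copilot.

```python
# Condensed, illustrative CPMpy formulation of the Section 3 JSS model
# (release times, durations, precedence, no-overlap, makespan objective).
# The instance data below is a small assumed example, not from the dataset.
from cpmpy import Model, intvar

routes  = [[0, 1, 2], [1, 2, 0], [2, 0, 1]]   # machine visited by each operation
times   = [[4, 3, 2], [2, 5, 1], [3, 2, 4]]   # processing time of each operation
release = [0, 2, 1]                           # release time of each job
horizon = sum(sum(t) for t in times) + max(release)

n_jobs = len(routes)
s = [[intvar(0, horizon) for _ in job] for job in routes]   # start times
e = [[intvar(0, horizon) for _ in job] for job in routes]   # end times
makespan = intvar(0, horizon)

model = Model()
for j in range(n_jobs):
    model += s[j][0] >= release[j]                 # (1) jobs cannot start before release
    for i in range(len(routes[j])):
        model += e[j][i] == s[j][i] + times[j][i]  # (2) end = start + duration
    for i in range(len(routes[j]) - 1):
        model += s[j][i + 1] >= e[j][i]            # (6) precedence within a job
    model += makespan >= e[j][-1]                  # (3)+(8) makespan bounds last operation

# (5) no overlap of operations that share a machine (pairwise disjunctions)
for j in range(n_jobs):
    for k in range(j + 1, n_jobs):
        for u in range(len(routes[j])):
            for v in range(len(routes[k])):
                if routes[j][u] == routes[k][v]:
                    model += (s[j][u] >= e[k][v]) | (s[k][v] >= e[j][u])

model.minimize(makespan)                           # (7) objective
if model.solve():
    print("makespan:", makespan.value())
```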
CodeRL is trained on unlabelled GitHub data, GitHub does not hold a substantial amount of problem-formulation-related samples, and the token limitation of a maximum of six hundred tokens makes CodeRL incompetent at problem formulation [Le et al., 2022]. Such data and token limitations may be common for code-generating LLMs, but for problem formulation, those limitations have to be managed. By fine-tuning CodeRL [Le et al., 2022] with problem formulation data and applying prompt engineering techniques to manage token limitations, we make AI-Copilot capable of synthesizing complex problem formulations.

(a) Le et al. [2022] does not create a complete problem formulation due to the token limitation. (b) OpenAI [2023] does not properly formulate the constraint "Second task of Job two has to come before all the second tasks of other jobs".

Figure 2: Problem Formulation with Generic Code-Generating LLMs: For the same problem description, Figure 2(a) shows the problem formulation generated by Le et al. [2022], and Figure 2(b) shows the solution after solving the problem formulation generated by OpenAI [2023].

### 4.3 Dataset Development

Since it is rare to find publicly available problem formulation data, we manually created a hundred production scheduling problem descriptions with their problem formulations in Python (Problem Descriptions & Problem Formulations), using the CPMpy [Guns, 2019] library. However, the method followed by AI-Copilot can be used by businesses to develop their own AI-Copilot using their own data. Such applications will allow businesses to dynamically adapt optimisations based on business environment changes. Since problem formulations related to business optimisations are lengthier than normal programming code and are written in optimisation languages supported by the solvers, our dataset contributes towards filling the gap of datasets in business optimisation problem formulation. Furthermore, problem descriptions in our dataset have fewer than six hundred tokens, and the respective problem formulations have between 1200 and 1800 tokens. Since AI-Copilot uses prompt engineering techniques, formulations that exceed the token limits of the LLM are not a concern. A sample problem description is:

> Job shop scheduling model with 5 jobs and 5 machines. All jobs have random routes and their operations have random durations. The objective function is makespan. Maximum duration is 20. After solving the problem, solutions will be printed and visualised. Note: Second task of Job two has to come before all the second tasks of other jobs.

In addition, our dataset holds different scenarios related to production scheduling. Such scenarios mainly contain different kinds of requirements that are encountered in production scheduling (Table 1).

Scenario Type | Example Scenario
---|---
Task completion precedence | Second task of Job two has to come before all the second tasks of other jobs
Introduction of release times | Release time of a job is a random value from 0 to 50. Jobs cannot start before their release time
Minimize makespan | The objective function is makespan
Minimize maximum tardiness | The due dates are calculated based on a total processing time of each job multiplied a due date allowance of 1.3. The objective function is maximum tardiness
Minimize weighted tardiness | The due dates are calculated based on a total processing time of each job multiplied a due date allowance of 1.3. Release time of a job is a random value from 0 to 50. Jobs cannot start before their release times. Each job has a weight following a random distribution in which 20% will have weight of 1, 60% will have weight of 2, and 20% will have weight of 4. The objective function is total weighted tardiness
Minimize total flow time | The objective function is total flow time (completion time - release time)
Minimize total weighted flow time | The objective function is total weighted flowtime

Table 1: Problem Formulation Scenarios

Because there are scenarios where problem formulations must generate random dummy data to make them solvable via a solver, particular statements have been added to the problem descriptions to keep the generated random dummy data consistent. Furthermore, the dataset holds scenarios where metric conversion must be considered while generating problem formulations (Table 2).

Statement Type | Example Scenario
---|---
Maximum duration | Maximum duration is 20
Release time range | Release time of a job is a random value from 0 to 50
Metric types | Job two and four will have task durations in minutes. Other jobs will have task durations in seconds

Table 2: Data Consistency Statements

Finally, to get consistent outputs from the problem formulations, the dataset configures a random seed (random.seed(1)) for all the random generations.

### 4.4 Fine-Tuning

The pre-trained model CodeRL [Le et al., 2022] is fine-tuned using a trainer for problem formulation. When the trainer uses our dataset, the fine-tuning process follows a loss-based approach rather than an output-accuracy-based approach. While the loss-based approach preserves the qualitative aspects of the generated problem formulations, we also compare the final solution with the actual solution to measure accuracy. Training configurations are available in Table 3, and we fine-tuned the code-generating LLM with these configurations using the parameters listed in Table 4. We pick batch size and epoch count as the primary fine-tuning parameters, since they affect the learning frequency of the code-generating LLM. Since the results are promising, we did not proceed with changing parameters such as the learning rate.

Training Configuration | Value
---|---
GPU type | NVIDIA Tesla V100 SXM2 32 GB
Pre-trained model | Salesforce/codet5-large-ntp-py
Tokenizer | Salesforce/codet5-large-ntp-py
Learning rate | 5e-05
Gradient checkpointing | True
Evaluation strategy | steps
Evaluation steps | 10
Logging steps | 10
Do Evaluation | True

Table 3: Training Configurations

Figure 3: Instructions-based code modules

To fulfill scalability requirements, we modularize problem formulations using instructions. As instructions (Figure 3), we use nine prompts that allow the code-generating LLM to create problem formulations part by part. In the end, all the problem formulation modules related to a particular problem description are combined to create the final problem formulation. In the modularization process, we amend the first dataset by attaching instructions as suffixes to each problem description. Correspondingly, each problem description becomes nine different problem descriptions, and altogether we produce nine hundred problem descriptions. Furthermore, we modularize each problem formulation into nine different sub-problem formulations to align with the instructions. As an outcome of the modularization process, the original hundred problem formulations increase to nine hundred sub-problem formulations.
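A rough sketch of this instruction-based generate-and-recombine pipeline is given below. Only the [DEFINE_CONSTRAINTS] tag and the Salesforce/codet5-large-ntp-py checkpoint are taken from the paper; the remaining tag names, the decoding settings, and the use of the base rather than the fine-tuned weights are our illustrative assumptions.

```python
# Hedged sketch of the instruction-based modularization: each problem description
# is suffixed with one instruction tag per module, the model generates each module
# separately, and the modules are concatenated into one problem formulation.
# Only [DEFINE_CONSTRAINTS] appears in the paper; the other tag names and the
# decoding settings are our illustrative assumptions.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "Salesforce/codet5-large-ntp-py"   # base checkpoint from Table 3;
                                                # in practice the fine-tuned weights would be loaded
INSTRUCTIONS = [                                # nine module tags (mostly assumed names)
    "[ADD_IMPORTS]", "[DEFINE_DATA]", "[DEFINE_VARIABLES]", "[DEFINE_MODEL]",
    "[DEFINE_CONSTRAINTS]", "[DEFINE_OBJECTIVE]", "[SOLVE]", "[PRINT_SOLUTION]",
    "[VISUALISE]",
]

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def synthesize_formulation(description: str, max_new_tokens: int = 512) -> str:
    modules = []
    for tag in INSTRUCTIONS:
        prompt = description + tag              # instruction appended as a suffix
        inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
        output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
        modules.append(tokenizer.decode(output_ids[0], skip_special_tokens=True))
    return "\n\n".join(modules)                 # recombine modules into the final formulation
```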
Below is a sample modularized problem description, and some of its modularized problem formulation can be seen in Figure 3.

> Create job shop scheduling model with 6 jobs and 6 machines. All jobs have random routes and their operations have random durations. The due dates are calculated based on a total processing time of each job multiplied a due date allowance of 1.3. Release time of a job is a random value from 0 to 50. Jobs cannot start before their release times. Each job has a weight following a random distribution in which 20% will have weight of 1, 60% will have weight of 2, and 20% will have weight of 4. The objective function is total weighted flowtime. Maximum duration is 20. After solving the problem, solutions will be printed and visualised. Note: Job two and four will have task durations in minutes. Other jobs will have task durations in seconds.[DEFINE_CONSTRAINTS]

### 4.5 Performance Metrics

As performance metrics, we use training loss, training time, and problem formulation execution status. While the training loss is used during the fine-tuning process, the problem formulation execution status verifies that the problem formulations created by the code-generating LLM give correct solutions. The loss generated by the trainer, obtained by comparing the generated output with the target output for a particular problem description, is the cross-entropy loss (12) commonly used in LLMs.

$l(x,y)=\frac{\sum_{n=1}^{N}l_{n}}{N},$ (12)

$l_{n}=-\log\frac{\exp(x_{n,y_{n}})}{\sum_{c=1}^{C}\exp(x_{n,c})}\cdot\mathbf{1}\{y_{n}\neq ignore\_index\},$ (13)

where $x$ denotes the logits from the model for the given problem description (the generated problem formulation), $y$ the target ids (the target problem formulation), $ignore\_index$ the value $-100$, $C$ the number of classes, and $N$ the mini-batch dimension. Since the loss is not enough to ensure the executability and correctness of the generated problem formulations, we also solve the problem formulations using a solver and compare the final solution with the actual solution. For problem formulation executions, there are three possible outcomes: the first is giving the correct output, the second is giving an incorrect output, and the third is a problem formulation failure due to a technical error. A combination of such metrics allows us to pick the code-generating LLM that best suits problem formulation.

## 5 Experiments

### 5.1 Overview

We randomly allocate 70% of the dataset as training data, 10% as validation data, and 20% as test data. While the training and validation data are used in the fine-tuning process to avoid overfitting and underfitting, the test data is used to evaluate the performance of the fine-tuned code-generating LLM. The parameters and metrics used in the training, validation, and testing stages are listed in Tables 4 and 5.

Parameter | Definition
---|---
batch size | number of data points for a batch
epoch | number of training iterations

Table 4: Parameter Definitions

Metric | Definition
---|---
loss | generated by comparing the target problem formulation and the generated problem formulation
time | number of seconds to complete the training
success | rate of problem formulations successfully solved and giving correct output
failure | rate of problem formulations successfully solved and giving incorrect output
exception | rate of invalid problem formulations

Table 5: Metric Definitions

### 5.2 Training Performance

The training results are available in Table 6.
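The success, failure, and exception rates reported in Tables 6 and 7 follow the execution-status metric defined in Section 4.5. A hedged sketch of how such a classification could be computed is shown below; the sandboxing and output-comparison details are our own assumptions, as the paper does not specify its exact harness.

```python
# Hedged sketch of the execution-status metric from Section 4.5: each generated
# problem formulation is executed, its printed solution is compared with the
# reference solution, and the outcome is counted as success, failure, or exception.
# The subprocess-based runner and string comparison are assumptions for illustration.
import subprocess, sys

def execution_status(generated_code: str, expected_output: str, timeout: int = 60) -> str:
    try:
        run = subprocess.run(
            [sys.executable, "-c", generated_code],
            capture_output=True, text=True, timeout=timeout, check=True,
        )
    except Exception:
        return "exception"          # formulation failed due to a technical error
    return "success" if run.stdout.strip() == expected_output.strip() else "failure"

def evaluate(pairs):
    """pairs: iterable of (generated_code, expected_output); returns the rate per outcome."""
    counts = {"success": 0, "failure": 0, "exception": 0}
    for code, expected in pairs:
        counts[execution_status(code, expected)] += 1
    total = max(1, sum(counts.values()))
    return {k: v / total for k, v in counts.items()}
```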
Though training happens with different parameter settings, we see some common phenomena. One such phenomenon is that, although loss values are low for all batch sizes, there is a significant improvement in success rate with the epoch count. In addition, the success rates for one and two epoch counts are low. When looking into the generated problem formulations, we saw that for low epoch counts, minor errors in the generated problem formulations cause their execution to fail. Moreover, we see a slight increase in loss and a significant reduction in success rate with the increase in batch size. But when increasing the epoch count, we see lower loss values and higher success rates. Such a characteristic is valuable for reducing training time and obtaining correct problem formulations when the training dataset expands in the future.

batch size | epoch | loss | time (sec) | success | failure | exception
---|---|---|---|---|---|---
1 | 1 | 0.0056 | 1083.78 | 0.16 | 0.23 | 0.61
1 | 2 | 0.0068 | 2105.51 | 0.00 | 0.00 | 1.00
1 | 4 | 0.0038 | 4216.54 | 0.00 | 0.01 | 0.99
1 | 8 | 0.0008 | 8561.05 | 0.96 | 0.00 | 0.04
2 | 1 | 0.0363 | 675.19 | 0.00 | 0.00 | 1.00
2 | 2 | 0.0027 | 1256.11 | 0.44 | 0.20 | 0.36
2 | 4 | 0.0014 | 2554.93 | 1.00 | 0.00 | 0.00
2 | 8 | 0.0008 | 5153.64 | 1.00 | 0.00 | 0.00
4 | 1 | 1.4474 | 429.63 | 0.00 | 0.00 | 1.00
4 | 2 | 0.0043 | 849.15 | 0.24 | 0.00 | 0.76
4 | 4 | 0.0017 | 1733.71 | 0.97 | 0.00 | 0.03
4 | 8 | 0.0010 | 3446.56 | 0.97 | 0.00 | 0.03

Table 6: Training results of the metrics based on batch size and epoch count

### 5.3 Testing Performance

The testing results are available in Table 7. We see the same patterns that we saw in the training results. For instance, in a similar manner to the training results, the success rate on the test data increases with the epoch count, and the failure and exception rates reduce with the epoch count. Such observations convey that the fine-tuning process has not led the code-generating LLM to overfit.

batch size | epoch | success | failure | exception
---|---|---|---|---
1 | 1 | 0.25 | 0.30 | 0.45
1 | 2 | 0.00 | 0.00 | 1.00
1 | 4 | 0.00 | 0.00 | 1.00
1 | 8 | 0.95 | 0.00 | 0.05
2 | 1 | 0.00 | 0.00 | 1.00
2 | 2 | 0.45 | 0.15 | 0.40
2 | 4 | 1.00 | 0.00 | 0.00
2 | 8 | 1.00 | 0.00 | 0.00
4 | 1 | 0.00 | 0.00 | 1.00
4 | 2 | 0.15 | 0.00 | 0.85
4 | 4 | 0.90 | 0.00 | 0.10
4 | 8 | 0.95 | 0.00 | 0.05

Table 7: Testing results of the metrics based on batch size and epoch count

### 5.4 Convergence

We use the validation error in each parameter setting to investigate the loss convergence of the code-generating LLM. While we see in all parameter settings that the validation and training curves overlap at some point (Figure 4), for batch size one the overlap happens at early stages. We do not see such early overlap for batch sizes two and four. Since the learning frequency of the code-generating LLM reduces with the increase in batch size, the batch size may cause the initial gap between the training and validation curves. However, with the increase of the epoch count, even for larger batch sizes we see overlapping training and validation curves after enough learning iterations.
Relating the success rate to the overlap between training and validation curves, we see that perfectly overlapping curves correspond to a higher success rate. Since we generate problem formulations as modules for a particular problem description and combine them at the end, the code-generating LLM cannot afford even a minor mistake; otherwise, the combined problem formulation will not give the correct solution. Perfectly overlapping training and validation curves indicate thorough training, which helps the code-generating LLM avoid such minor mistakes and thus leads to a higher success rate.

Figure 4: The behavior of training & validation loss for different settings; panels (a)-(l) cover each combination of batch size (1, 2, 4) and epoch count (1, 2, 4, 8).

### 5.5 Loss Analysis

The loss contribution of the prompt-based code generation instructions can be seen in Figure 5(a). While the more complex problem formulation modules, such as those defining the model, constraints, objective function, and solution, contribute more towards the loss, the simpler problem formulation modules have a negligible loss. To further analyse the loss distribution across problem formulation modules, we apply GSOM [Alahakoon et al., 2000], an unsupervised dimensionality reduction and clustering technique, to the target problem formulations of each module. By doing so, we try to find the problem formulation modules that have more variety than the others. As we can see in Figure 5(b), the solution-, constraints-, objective-function-, and model-defining modules have either a higher number of clusters or a large deviation in the number of problem formulations per cluster, whereas the simpler modules have few clusters and are divided among them without much deviation. This behaviour aligns with the loss distribution shown in Figure 5(a), which indicates that defining the constraints, objective function, model, and solution are the most challenging problem formulation modules for the code-generating LLM.

Figure 5: Loss analysis. (a) The behavior of the training loss based on instructions; the diagram only shows the losses related to four instructions because the other losses are significantly smaller. (b) GSOM clusters of the target problem formulations per module; the x-axis shows the number of clusters and the y-axis the standard deviation of the number of data points per cluster. As shown in Figure 5(a), there are four major contributors that determine the final status of the problem formulation; as shown in Figure 5(b), while the other instructions are restricted to one region, these four instructions span two other regions of the diagram, owing to the complexity of the problem formulations for those instructions.

We also use Principal Component Analysis (PCA) to examine how the code-generating LLM responds to different problem descriptions. We apply PCA to vector embeddings of the code-generating LLM as shown in Figure 6, focusing on encoder embeddings and decoder embeddings.
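A minimal sketch of this projection step is shown below (our own illustration: it assumes the per-example encoder and decoder hidden states have already been extracted and mean-pooled into arrays, which is an assumption about the setup rather than a detail stated above).

```python
import numpy as np
from sklearn.decomposition import PCA

# Stand-in arrays for the mean-pooled encoder/decoder hidden states of the
# fine-tuned model (hypothetical shapes: n_examples x hidden_dim).
rng = np.random.default_rng(0)
enc_emb = rng.normal(size=(200, 768))
dec_emb = rng.normal(size=(200, 768))

enc_2d = PCA(n_components=2).fit_transform(enc_emb)  # points plotted in Fig. 6(a)
dec_2d = PCA(n_components=2).fit_transform(dec_emb)  # points plotted in Fig. 6(b)
print(enc_2d.shape, dec_2d.shape)
```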
Since the code-generating LLM is an encoder-decoder transformer model [Vaswani et al., 2017], the encoder embeddings show how the code-generating LLM interprets different problem descriptions and the decoder embeddings show how it generates problem formulations. Since PCA brings problem descriptions and generated problem formulations with similar characteristics together in the 2D space, and instruction-wise problem descriptions differ from each other only by a suffix attached to the original problem description, we do not see instruction-wise clusters for the encoder embeddings, as shown in Figure 6(a). Furthermore, as shown in Figure 6(b), we do see some clusters for the decoder embeddings; for instance, since imports and visualisations require similar problem formulation modules to be generated across all problem descriptions, they form single clusters. In contrast, the problem formulation modules for the solution, constraints, and utility functions have multiple clusters or isolated points, since they cover different scenarios depending on the problem description. We also see overlapping clusters for different problem formulation modules because they share variables.

Figure 6: Vector embeddings of the fine-tuned code-generating LLM. (a) PCA of the encoder embeddings, which depend on the problem descriptions; (b) PCA of the decoder embeddings, which depend on the generated problem formulations.

### 5.6 Examples

A sample generated problem formulation is shown in Figure 7(a). Note that due to space limitations we present only part of the generated code; the complete code is available in our GitHub repository (Generated Problem Formulations). Figure 7(b) shows the output of the generated problem formulation once it is executed. Looking into the generated problem formulations and their outputs, we observe that the code-generating LLM has been able to capture complex scenarios mentioned in the problem descriptions and generate correct problem formulations.

> Create job shop scheduling model with 6 jobs and 5 machines. All jobs have random routes and their operations have random durations. The objective function is makespan. Maximum duration is 20. After solving the problem, solutions will be printed and visualised. Note: The first task related to each job should be completed before the completion of any job.

Figure 7: Generated problem formulations. (a) Part of the generated problem formulation; (b) sample output of an executed problem formulation.
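For readers unfamiliar with CPMpy, the sketch below illustrates the kind of problem formulation such a description maps to. It is a hand-written, minimal illustration (random data, precedence and machine-disjunction constraints, makespan objective, with the extra note about first tasks omitted), not the model's actual output shown in Figure 7(a).

```python
import numpy as np
import cpmpy as cp

rng = np.random.default_rng(0)
n_jobs, n_machines, max_dur = 6, 5, 20
dur = rng.integers(1, max_dur + 1, size=(n_jobs, n_machines))            # random durations
route = np.array([rng.permutation(n_machines) for _ in range(n_jobs)])   # random machine routes
horizon = int(dur.sum())

start = cp.intvar(0, horizon, shape=(n_jobs, n_machines), name="start")
makespan = cp.intvar(0, horizon, name="makespan")
model = cp.Model()

# Operations of a job follow its route order.
for j in range(n_jobs):
    for o in range(n_machines - 1):
        model += start[j, o] + int(dur[j, o]) <= start[j, o + 1]

# Operations assigned to the same machine must not overlap (pairwise disjunctions).
for mach in range(n_machines):
    ops = [(j, o) for j in range(n_jobs) for o in range(n_machines) if route[j, o] == mach]
    for a in range(len(ops)):
        for b in range(a + 1, len(ops)):
            (j1, o1), (j2, o2) = ops[a], ops[b]
            model += (start[j1, o1] + int(dur[j1, o1]) <= start[j2, o2]) | \
                     (start[j2, o2] + int(dur[j2, o2]) <= start[j1, o1])

# The makespan is the latest completion time over all jobs.
for j in range(n_jobs):
    model += makespan >= start[j, n_machines - 1] + int(dur[j, n_machines - 1])

model.minimize(makespan)
if model.solve():
    print("makespan:", makespan.value())
```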
## 6 Conclusion

We introduce AI-Copilot, a code-generating-LLM-based problem formulation framework for business optimisation, demonstrated through a case study in production scheduling. AI-Copilot can generate large and complex problem formulations in a scalable manner while using models of significantly smaller size than generic LLMs. Additionally, the performance evaluation metrics introduced with AI-Copilot ensure that the generated problem formulations are executable and use the correct optimisation techniques. Moreover, the prompt-engineering-based problem formulation modularization technique introduced in AI-Copilot can overcome token limitation issues in code-generating LLMs for problem formulation. Although the case study is based on manually created production scheduling data, the method followed in AI-Copilot can be used by businesses to build their own AI-Copilots using their own data. More importantly, in the future our AI-Copilot could become a valuable support for the business optimisation process by reducing the amount of human expertise required. As further improvements, we will focus on developing the remaining layers of the framework while supporting multiple problem formulation types such as routing and assignment. With production scheduling, we have started with constraint programming problem formulations in combinatorial optimisation. In the next step, with vehicle routing problems, we will focus on mixed-integer programming, covering compact mixed-integer programming, column generation, and lazy-constraint problem formulations in combinatorial optimisation, which together cover a broad range of business optimisation case studies. Additionally, we will introduce a layer of mathematical models to our framework so that optimisation experts can verify the mathematical models used in the generated problem formulations. The AI-Copilot evaluated in this article is limited to production scheduling problem formulation and uses CPMpy [Guns, 2019] in Python; nevertheless, the conceptual method can be applied to other types of business optimisation use cases.

## References

* Alahakoon et al. [2000] Damminda Alahakoon, Saman K Halgamuge, and Bala Srinivasan. Dynamic self-organizing maps with controlled growth for knowledge discovery. _IEEE Transactions on neural networks_ , 11(3):601–614, 2000. * Antoniou and Lu [2007] Andreas Antoniou and Wu Sheng Lu. _Practical Optimization_. Springer, 2007. * Austin et al. [2021] Jacob Austin, Augustus Odena, Maxwell Nye, Maarten Bosma, Henryk Michalewski, David Dohan, Ellen Jiang, Carrie Cai, Michael Terry, Quoc Le, et al. Program synthesis with large language models. _arXiv preprint arXiv:2108.07732_ , 2021. * Avella et al. [2023] Pasquale Avella, Alice Calamita, and Laura Palagi. Off-the-shelf solvers for mixed-integer conic programming: insights from a computational study on congested capacitated facility location instances. _arXiv preprint arXiv:2303.04216_ , 2023. * Baltean-Lugojan et al. [2018] Radu Baltean-Lugojan, Pierre Bonami, Ruth Misener, and Andrea Tramontani. Selecting cutting planes for quadratic semidefinite outer-approximation via trained neural networks. URL: http://www.optimization-online.org/DB_HTML/2018/11/6943.html, 2018. * Beck et al. [2011] J Christopher Beck, TK Feng, and Jean-Paul Watson. Combining constraint programming and local search for job-shop scheduling. _INFORMS Journal on Computing_ , 23(1):1–14, 2011. * Bengio et al. [2021] Yoshua Bengio, Andrea Lodi, and Antoine Prouvost. Machine learning for combinatorial optimization: a methodological tour d'horizon. _European Journal of Operational Research_ , 290(2):405–421, 2021. * Bestuzheva et al. [2021] Ksenia Bestuzheva, Mathieu Besançon, Wei-Kun Chen, Antonia Chmiela, Tim Donkiewicz, Jasper van Doornmalen, Leon Eifler, Oliver Gaul, Gerald Gamrath, Ambros Gleixner, et al. The SCIP optimization suite 8.0. _arXiv preprint arXiv:2112.08872_ , 2021. * Boyd and Vandenberghe [2004] Stephen P Boyd and Lieven Vandenberghe. _Convex optimization_. Cambridge university press, 2004. * Brown et al. [2020] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. _Advances in neural information processing systems_ , 33:1877–1901, 2020. * Chen et al.
[2021] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. _arXiv preprint arXiv:2107.03374_ , 2021. * Cheng et al. [2023] Liying Cheng, Xingxuan Li, and Lidong Bing. Is GPT-4 a Good Data Analyst?, 2023. * Coffman et al. [1978] Edward G Coffman, Jr, Michael R Garey, and David S Johnson. An application of bin-packing to multiprocessor scheduling. _SIAM Journal on Computing_ , 7(1):1–17, 1978. * Cook et al. [2011] William J Cook, David L Applegate, Robert E Bixby, and Vasek Chvatal. _The traveling salesman problem: a computational study_. Princeton university press, 2011. * Cplex [2009] IBM ILOG Cplex. V12. 1: User’s Manual for CPLEX. _International Business Machines Corporation_ , 46(53):157, 2009. * Fourer et al. [1990] Robert Fourer, David M Gay, and Brian W Kernighan. AMPL: A mathematical programming language. _Management Science_ , 36(5):519–554, 1990. * Gasse et al. [2019] Maxime Gasse, Didier Chételat, Nicola Ferroni, Laurent Charlin, and Andrea Lodi. Exact combinatorial optimization with graph convolutional neural networks. _Advances in neural information processing systems_ , 32, 2019. * Google [2023] Google. CP-SAT Solver, 2023. URL https://developers.google.com/optimization/cp/cp_solver. * Guns [2019] Tias Guns. Increasing modeling language convenience with a universal n-dimensional array, cppy as python-embedded example. In _Proceedings of the 18th workshop on Constraint Modelling and Reformulation, Held with CP_ , volume 19, 2019. * Gurobi Optimization, LLC [2023] Gurobi Optimization, LLC. Gurobi Optimizer Reference Manual, 2023. URL https://www.gurobi.com. * Hendrycks et al. [2021] Dan Hendrycks, Steven Basart, Saurav Kadavath, Mantas Mazeika, Akul Arora, Ethan Guo, Collin Burns, Samir Puranik, Horace He, Dawn Song, et al. Measuring coding challenge competence with apps. _arXiv preprint arXiv:2105.09938_ , 2021. * Hottung et al. [2020] André Hottung, Shunji Tanaka, and Kevin Tierney. Deep learning assisted heuristic tree search for the container pre-marshalling problem. _Computers & Operations Research_, 113:104781, 2020. * Jarrahi et al. [2023] Mohammad Hossein Jarrahi, Christoph Lutz, Karen Boyd, Carsten Oesterlund, and Matthew Willis. Artificial intelligence in the work context. _Journal of the Association for Information Science and Technology_ , 74(3):303–310, 2023. * Kallrath and Wilson [1997] Josef Kallrath and John M Wilson. _Business optimisation using mathematical programming_. Springer, 1997. * Khalil et al. [2017] Elias Khalil, Hanjun Dai, Yuyu Zhang, Bistra Dilkina, and Le Song. Learning combinatorial optimization algorithms over graphs. _Advances in neural information processing systems_ , 30, 2017. * Kłosowski et al. [2018] Grzegorz Kłosowski, Edward Kozłowski, and Arkadiusz Gola. Integer linear programming in optimization of waste after cutting in the furniture manufacturing. In _Intelligent Systems in Production Engineering and Maintenance–ISPEM 2017: Proceedings of the First International Conference on Intelligent Systems in Production Engineering and Maintenance ISPEM 2017 1_ , pages 260–270. Springer, 2018. * Kreipl [2000] Stephan Kreipl. A large step random walk for minimizing total weighted tardiness in a job shop. _Journal of Scheduling_ , 3(3):125–138, 2000. * Ku and Beck [2016] Wen-Yang Ku and J Christopher Beck. 
Mixed integer programming models for job shop scheduling: A computational analysis. _Computers & Operations Research_, 73:165–173, 2016. * Kulal et al. [2019] Sumith Kulal, Panupong Pasupat, Kartik Chandra, Mina Lee, Oded Padon, Alex Aiken, and Percy S Liang. Spoc: Search-based pseudocode to code. _Advances in Neural Information Processing Systems_ , 32, 2019. * Le et al. [2022] Hung Le, Yue Wang, Akhilesh Deepak Gotmare, Silvio Savarese, and Steven Chu Hong Hoi. Coderl: Mastering code generation through pretrained models and deep reinforcement learning. _Advances in Neural Information Processing Systems_ , 35:21314–21328, 2022. * Li et al. [2023] Raymond Li, Loubna Ben Allal, Yangtian Zi, Niklas Muennighoff, Denis Kocetkov, Chenghao Mou, Marc Marone, Christopher Akiki, Jia Li, Jenny Chim, et al. Starcoder: may the source be with you! _arXiv preprint arXiv:2305.06161_ , 2023. * Lund et al. [2023] Brady D Lund, Ting Wang, Nishith Reddy Mannuru, Bing Nie, Somipam Shimray, and Ziang Wang. ChatGPT and a new academic reality: Artificial intelligence-written research papers and the ethics of the large language models in scholarly publishing. _Journal of the Association for Information Science and Technology_ , 74(5):570–581, 2023. * Mastropaolo et al. [2021] Antonio Mastropaolo, Simone Scalabrino, Nathan Cooper, David Nader Palacio, Denys Poshyvanyk, Rocco Oliveto, and Gabriele Bavota. Studying the usage of text-to-text transfer transformer to support code-related tasks. In _2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE)_ , pages 336–347. IEEE, 2021. * Nethercote et al. [2007] Nicholas Nethercote, Peter J Stuckey, Ralph Becket, Sebastian Brand, Gregory J Duck, and Guido Tack. Minizinc: Towards a standard CP modelling language. In _International Conference on Principles and Practice of Constraint Programming_ , pages 529–543. Springer, 2007. * Nguyen and Nadi [2022] Nhan Nguyen and Sarah Nadi. An Empirical Evaluation of Github Copilot’s Code Suggestions. in 2022 IEEE/ACM 19th International Conference on Mining Software Repositories (MSR). 1–5. _IEEE, Pittsburgh, US_ , pages 1–5, 2022. * Nguyen et al. [2021] Su Nguyen, Dhananjay Thiruvady, Mengjie Zhang, and Kay Chen Tan. A genetic programming approach for evolving variable selectors in constraint programming. _IEEE Transactions on Evolutionary Computation_ , 25(3):492–507, 2021. * OpenAI [2023] OpenAI. GPT-4 Technical Report, 2023. * Pinedo [2005] Michael Pinedo. _Planning and scheduling in manufacturing and services_. Springer, 2005. * Pochet and Wolsey [2006] Yves Pochet and Laurence A Wolsey. _Production planning by mixed integer programming_ , volume 149. Springer, 2006. * Ramamonjison et al. [2022] Rindranirina Ramamonjison, Haley Li, Timothy T Yu, Shiqi He, Vishnu Rengan, Amin Banitalebi-Dehkordi, Zirui Zhou, and Yong Zhang. Augmenting operations research with auto-formulation of optimization models from problem descriptions. _arXiv preprint arXiv:2209.15565_ , 2022. * Ramamonjison et al. [2023] Rindranirina Ramamonjison, Timothy T Yu, Raymond Li, Haley Li, Giuseppe Carenini, Bissan Ghaddar, Shiqi He, Mahdi Mostajabdaveh, Amin Banitalebi-Dehkordi, Zirui Zhou, et al. NL4Opt Competition: Formulating Optimization Problems Based on Their Natural Language Descriptions. _arXiv preprint arXiv:2303.08233_ , 2023. * Rivas and Zhao [2023] Pablo Rivas and Liang Zhao. Marketing with chatgpt: Navigating the ethical terrain of gpt-based chatbot technology. _AI_ , 4(2):375–384, 2023. 
* Singh and Singh [2009] Jagdeep Singh and Harwinder Singh. Kaizen philosophy: a review of literature. _IUP journal of operations management_ , 8(2):51, 2009. * Solaiman and Dennison [2021] Irene Solaiman and Christy Dennison. Process for adapting language models to society (palms) with values-targeted datasets. _Advances in Neural Information Processing Systems_ , 34:5861–5873, 2021. * Soroudi [2017] Alireza Soroudi. _Power system optimization modeling in GAMS_ , volume 78. Springer, 2017. * Toth and Vigo [2002] Paolo Toth and Daniele Vigo. _The vehicle routing problem_. SIAM, 2002. * Tsouros et al. [2023] Dimos Tsouros, Hélène Verhaeghe, Serdar Kadıoğlu, and Tias Guns. Holy grail 2.0: From Natural Language to Constraint Models. _arXiv preprint arXiv:2308.01589_ , 2023. * Vaswani et al. [2017] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. _Advances in neural information processing systems_ , 30, 2017. * Wang et al. [2023] Xuequn Wang, Xiaolin Lin, and Bin Shao. Artificial intelligence changes the way we work: A close look at innovating with chatbots. _Journal of the Association for Information Science and Technology_ , 74(3):339–353, 2023. * Wang et al. [2021] Yue Wang, Weishi Wang, Shafiq Joty, and Steven CH Hoi. Codet5: Identifier-aware unified pre-trained encoder-decoder models for code understanding and generation. _arXiv preprint arXiv:2109.00859_ , 2021. * Wang et al. [2019] Zhiguo Wang, Patrick Ng, Xiaofei Ma, Ramesh Nallapati, and Bing Xiang. Multi-passage bert: A globally normalized bert model for open-domain question answering. _arXiv preprint arXiv:1908.08167_ , 2019. * Watson and Beck [2008] Jean-Paul Watson and J Christopher Beck. A hybrid constraint programming/local search approach to the job-shop scheduling problem. In _International Conference on Integration of Artificial Intelligence (AI) and Operations Research (OR) Techniques in Constraint Programming_ , pages 263–277. Springer, 2008. * Xiong et al. [2022] Hegen Xiong, Shuangyuan Shi, Danni Ren, and Jinjin Hu. A survey of job shop scheduling problem: The types and models. _Computers & Operations Research_, 142:105731, 2022. * Yang et al. [2023] Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V Le, Denny Zhou, and Xinyun Chen. Large language models as optimizers. _arXiv preprint arXiv:2309.03409_ , 2023. * Yetiştiren et al. [2023] Burak Yetiştiren, Işık Özsoy, Miray Ayerdem, and Eray Tüzün. Evaluating the Code Quality of AI-Assisted Code Generation Tools: An Empirical Study on Github Copilot, Amazon CodeWhisperer, and ChatGPT. _arXiv preprint arXiv:2304.10778_ , 2023. * Yu [2013] Gang Yu. _Industrial applications of combinatorial optimization_ , volume 16. Springer Science & Business Media, 2013. * Zan et al. [2022] Daoguang Zan, Bei Chen, Fengji Zhang, Dianjie Lu, Bingchao Wu, Bei Guan, Yongji Wang, and Jian-Guang Lou. When neural model meets nl2code: A survey. _arXiv preprint arXiv:2212.09420_ , 2022. * Zhang et al. [2023] Shun Zhang, Zhenfang Chen, Yikang Shen, Mingyu Ding, Joshua B Tenenbaum, and Chuang Gan. Planning with large language models for code generation. _arXiv preprint arXiv:2303.05510_ , 2023.
Further author information: (Send correspondence to Jerry L. Prince <EMAIL_ADDRESS> # Segmenting thalamic nuclei from manifold projections of multi-contrast MRI Chang Yan Department of Information Technology and Electrical Engineering, Swiss Federal Institute of Technology (ETH) Zürich, Zürich 8092, Switzerland Muhan Shao Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA Zhangxing Bian Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA Anqi Feng Department of Biomedical Engineering, Johns Hopkins School of Medicine, Baltimore, MD 21205, USA Yuan Xue Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA Jiachen Zhuo Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, Baltimore, MD 21201, USA Rao P. Gullapalli Department of Diagnostic Radiology and Nuclear Medicine, University of Maryland School of Medicine, Baltimore, MD 21201, USA Aaron Carass Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA Jerry L. Prince Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA ###### Abstract The thalamus is a subcortical gray matter structure that plays a key role in relaying sensory and motor signals within the brain. Its nuclei can atrophy or otherwise be affected by neurological disease and injuries including mild traumatic brain injury. Segmenting both the thalamus and its nuclei is challenging because of the relatively low contrast within and around the thalamus in conventional magnetic resonance (MR) images. This paper explores imaging features to determine key tissue signatures that naturally cluster, from which we can parcellate thalamic nuclei. Tissue contrasts include T1-weighted and T2-weighted images, MR diffusion measurements including FA, mean diffusivity, Knutsson coefficients that represent fiber orientation, and synthetic multi-TI images derived from FGATIR and T1-weighted images. After registration of these contrasts and isolation of the thalamus, we use the uniform manifold approximation and projection (UMAP) method for dimensionality reduction to produce a low-dimensional representation of the data within the thalamus. Manual labeling of the thalamus provides labels for our UMAP embedding from which $k$ nearest neighbors can be used to label new unseen voxels in that same UMAP embedding. $N$-fold cross-validation of the method reveals comparable performance to state-of-the-art methods for thalamic parcellation. ###### keywords: thalamus, magnetic resonance imaging, dimensionality reduction, UMAP ## 1 INTRODUCTION The thalamus acts as a neuro-architectonic relay station which passes sensory and motor signals between various structures of the human brain [1]. It is a bilateral gray matter structure located in the forebrain, with its medial surface adjacent to the superior portion of the third ventricle. The thalamus can be divided into several clusters known as nuclei; numerous diseases are associated with these nuclei including Parkinson’s disease [2], multiple sclerosis [3], epilepsy [4], and mild traumatic brain injury (mTBI) [5]. As such it is useful to identify these thalamic nuclei. Existing methods fail to take advantage of the diverse sources of imaging data that are available. 
Magnetic resonance (MR) images (MRIs) provide a variety of tissue contrasts, each offering unique insights into the structure and nature of the human brain. The magnetization prepared rapid gradient echo (MPRAGE) image is a T1-weighted (T1-w) sequence offering excellent gray matter to white matter (WM) contrast. The fast gray matter acquisition T1 inversion recovery (FGATIR) [6] image is a sequence that suppresses signal from WM. MPRAGE and FGATIR images scanned contemporaneously can be used to estimate synthetic multi-TI images, i.e., T1-weightings with different inversion times (TIs). Diffusion tensor imaging (DTI) is an MR modality that non-invasively acquires the bulk motion of water in the brain, representing WM tracts by depicting the anisotropy of the underlying microstructure. The MPRAGE and DTI images allow identification of the thalamus boundaries, while the nuclei-highlighting ability of the multi-TI data allows for the identification of several individual thalamic nuclei. Previous MR work in this area has begun with extracting the thalamus [7, 8], which is itself a difficult task. This is followed by parcellation of the thalamus using various methods and MR modalities [9]. These include diffusion tensor tracking of DTI [10], multi-vector random forest analysis [11, 12], probabilistic tractography [13], and others [14, 15, 16, 17, 18, 19, 20, 21, 22]. These prior works have two key limitations. First, they are limited in the number of nuclei they examine. Most of these methods use six labels per hemisphere; Lambert et al. [13] use nine per hemisphere. Iglesias et al. [16] discuss using 12 labels per hemisphere; however, they only report global metrics for their analysis. Second, these methods do not use the full range of available MR contrasts. We use a state-of-the-art dimensionality reduction method to create a latent space (also called an embedding) that captures the intrinsic tissue parameters of the nuclei. We build high-dimensional vectors at each voxel location from the available MR data to create this latent space. We also have some manual labels that identify certain thalamic nuclei. To label a new image, we map its MRI data into our latent space and then label the voxels based on their neighbors in that latent space.

## 2 METHOD

Multi-contrast MRI Data Forty-four subjects were imaged for a study of mTBI. Acquired images include MPRAGE, 3D T2-weighted, DTI, FGATIR, related multi-TI synthetic images, and a T1 map. The DTI data is processed to generate fractional anisotropy (FA), mean diffusivity (MD), radial diffusivity (RD), axial diffusivity (AD), mode, trace (Tr), Westin Indices, eigenvectors, and eigenvalues. The latter two are used to generate a 5D Knutsson vector and edge map [23]. All images are registered to the MPRAGE, which has been resampled to be isotropic. Our MRI data are extracted at each voxel location, which results in a $19$D feature vector, including MPRAGE, T2-weighted, FGATIR, T1-map (2D), FA, MD, RD, AD, Tr, Westin Indices (3D), Knutsson vector (5D), and Knutsson edgemap; we refer to this $19$D vector as our Base vector. The MPRAGE and FGATIR images combined with multi-TI synthetic equations generate $41$ unique images (TI images with different TI values), which we use as a $41$D vector and refer to as the Multi-TI vector. Probabilistic tractography on the DTI gives us connectivity maps from thalamic voxels to the cortical mantle.
These combined with a SLANT [24] segmentation of the cortical surfaces give us two connectivity maps: 1) a $6$D vector called Conn6 corresponding to the SLANT lobe labels; 2) a $98$D vector called Conn98 corresponding to the connectivity map for all SLANT labels. We also have the $3$D coordinates of a voxel as an additional feature vector, using the center of the thalamus bounding box of each subject as the origin. We use combinations of these feature vectors in an ablation study to determine the most informative for thalamic parcellation.

Thalamic Nuclei Delineation Protocol Thirty-six subjects were semi-manually labeled as follows. The Morel atlas [25] (which has 19 labels per hemisphere) was registered to the MPRAGE. We reduced the 19 labels to 13 per hemisphere by merging some smaller labels (see Table 2 for the list of 13 labels). We manually corrected the labels using multi-TI images, which helped to better identify individual nuclei and to reduce registration errors. Because the nuclei were corrected separately, there can be thalamus voxels with no label as well as voxels with multiple labels. An example of manually-delineated thalamic nuclei is shown in Fig. 1(a). Because the thalamus boundary is not clearly visible on T1-weighted images, voxels that are outside the thalamus might have been manually labeled. Labels that are masked by the automatically-computed thalamus mask (see above) are shown in Fig. 1(b).

Preprocessing Six of our subjects have incomplete MR data, leaving us with 30 viable subjects, which we separated evenly into five folds. We performed a five-fold cross-validation with a $4$:$1$ training to testing split. The features were normalized to the range $[0,1]$ for non-directional features, while directional features were normalized to the range $[-1,1]$. For our normalization, we used a robust approach to reduce the effects of noise and outliers. To do this, we computed the 2.5% and 97.5% percentile values for each feature; values between the 2.5% and 97.5% percentiles are linearly scaled to the range $[0.025,0.975]$. Points outside that range are linearly scaled into $[0,0.025)$ for values below the 2.5% percentile and into $(0.975,1]$ for values above the 97.5% percentile. Due to the varying thalamus sizes, we had $30$–$40$k vectors per fold.
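A minimal sketch of this robust per-feature scaling, assuming a NumPy array holding one feature over all voxels, is shown below (an illustration of the stated rule, not the code actually used; directional features would use the analogous map onto $[-1,1]$).

```python
import numpy as np

def robust_scale(feature: np.ndarray) -> np.ndarray:
    """Map the 2.5%-97.5% percentile range linearly to [0.025, 0.975] and
    squeeze the two tails linearly into [0, 0.025) and (0.975, 1]."""
    lo, hi = np.percentile(feature, [2.5, 97.5])
    fmin, fmax = feature.min(), feature.max()
    out = np.empty_like(feature, dtype=float)

    mid = (feature >= lo) & (feature <= hi)
    out[mid] = 0.025 + 0.95 * (feature[mid] - lo) / (hi - lo)

    low = feature < lo
    if low.any():
        out[low] = 0.025 * (feature[low] - fmin) / (lo - fmin)
    high = feature > hi
    if high.any():
        out[high] = 0.975 + 0.025 * (feature[high] - hi) / (fmax - hi)
    return out

rng = np.random.default_rng(0)
scaled = robust_scale(rng.normal(size=1000))
print(scaled.min(), scaled.max())   # approximately 0.0 and 1.0
```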
Figure 1: In (a) we show the manual delineation and in (b) we show the manual delineation restricted to our thalamus mask. In (c) we see the visualization of our 2D embedding by UMAP from one fold of our five-fold cross-validation; the clustering comes from UMAP and the colors correspond to thalamic labels. In (d) we show the result of our parcellation on the same test case.

Dimensionality Reduction The uniform manifold approximation and projection (UMAP) [26] approach is a manifold learning technique which has been demonstrated to be a state-of-the-art dimensionality reduction method [26]. UMAP is based on a Riemannian geometry and algebraic topology framework that scales linearly with the input size. Unlike t-distributed stochastic neighbor embedding (t-SNE), UMAP produces consistent results on the same data and is generally insensitive to initialization and hyperparameter changes. Additionally, UMAP takes less than $10$ minutes to train on $50$k data points, in contrast to 8 hours for t-SNE. An example of a two-dimensional feature space generated by UMAP dimensionality reduction for one choice of features from our dataset is shown in Fig. 1(c). Note that this figure shows the thalamic labels as different colors, but these labels are not used as features for dimensionality reduction.

Training of the UMAP model The most important parameter of UMAP is n_neighbours; it is an initial guess of the size of each cluster that UMAP is trying to embed. If it is too small we get more clusters than we should, while too large a value can cause all points to be condensed into one cluster. We have done extensive testing of multiple values and found that a value of $2,000$ for n_neighbours offers the best performance (for our data). UMAP has a target dimension that is somewhat arbitrary, though theoretically the best separation of labels is achieved if the target dimension is close to the actual dimension of the underlying manifold. In Tables 2, 3, and 4, we include results for $2$D, $3$D, and $4$D latent spaces and corresponding visualizations, respectively. We see diminishing returns for embeddings with dimension higher than $4$D, as the embedding time increases considerably, yet the classification accuracy hardly increases. During training, the UMAP model runs completely unsupervised; in particular, the model does not see any of the held-out testing data ($5^{\text{th}}$ fold) or semi-manual labels in our cross-validation experiment. We note that we have not used a validation dataset to determine an early stopping criterion, and we use 1,000 epochs for all the data in the UMAP embedding. Thus, our UMAP embedding is data-driven and determined by the manifold that the input data exhibit.

$k$ Nearest Neighbor Completion We use $k$ nearest neighbors ($k$-NN) in the UMAP latent space to label new unseen data. Thus, at test time, a vector (voxel) is labeled based on its neighbors in our UMAP latent space. We use the $l_{2}$ distance and a $k$ of $100$ in the $2$D latent space, a $k$ of $75$ in the $3$D latent space, and a $k$ of $50$ in the $4$D latent space. In our semi-manual labels, it is possible for one voxel to have multiple labels as each nucleus was labeled separately in separate files. Thus, we wrote a special version of the $k$-NN algorithm that works with multiple labels; multiple labels in one voxel are evenly weighted in the final voting. A typical example of a $k$-NN segmentation result (defined only on the thalamus mask) is shown in Fig. 1(d). More results are presented below.
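Putting these pieces together, the embed-then-label pipeline can be sketched as follows (assuming the umap-learn and scikit-learn packages; the toy data, the smaller n_neighbors value, and the single-label voting are our own simplifications, so the multi-label voting variant described above is not reproduced).

```python
import numpy as np
import umap
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X_train = rng.random((5000, 63))      # stand-in for normalized 63D feature vectors
y_train = rng.integers(0, 13, 5000)   # stand-in for the 13 nucleus labels
X_test = rng.random((1000, 63))

# Unsupervised 2D embedding (the paper uses n_neighbours=2000; smaller here for the toy data).
reducer = umap.UMAP(n_neighbors=200, n_components=2, n_epochs=1000, random_state=0)
emb_train = reducer.fit_transform(X_train)
emb_test = reducer.transform(X_test)          # map unseen voxels into the same latent space

# Label unseen voxels by majority vote among their nearest training neighbours.
knn = KNeighborsClassifier(n_neighbors=100, metric="euclidean")
knn.fit(emb_train, y_train)
pred = knn.predict(emb_test)
print(pred[:10])
```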
The results reported in Table 1 are a volume-wise weighted mean and standard deviation for the thirteen thalamic nuclei across our five-folds. This ablation study uses a $2$D UMAP embedding, and a similar result is found in both $3$D and $4$D UMAP embeddings. The best thalamic nuclei parcellation is achieved with a $63$D feature vector that does not include any cortical surface connectivity information; which is counter to the prevailing ideas about how thalamus parcellation should be approached. However, we note that our semi-manual labels were created by observing multiple image contrasts within the thalamus without any knowledge of connectivity, which could be a potential explanation. In our study, adding connectivity information does not improve mean accuracy, but results in a slight decrease in mean accuracy which is not statistically significant as the standard deviation also increases. This result suggests another possible explanation that the additional information added by connectivity is not significant and it might be compromised by the “curse of dimension” as higher dimension data makes it harder to be embedded into a proper latent space that preserves useful information. We found both the spatial information and Multi-TI information useful as removing them significantly reduces performance; again, this may be a consequence of our manual delineation using the Multi-TI images. We also tested the performance using connectivity information only, and the performance was even worse than using only the 19D base vectors. In conclusion, we found connectivity information not useful in our parcellation approach, possibly due to the fact that they are not seen by the manual delineators. We next report on the best-performing feature vector in our ablation experiment (i.e., the $63$D feature vector made up of Base $+$ Coord $+$ Multi-TI) across the five-folds of our cross-validation and on all thirteen labels. The mean Dice scores across our labels are shown in Table 2, as well as an “Overall” label. The presented result in this table is for our $2$D UMAP embedding. Then, in Table 3 and Table 4, we report the results of both the $3$D and $4$D latent space embeddings. We observe a clear improvement in the overall Dice score when the dimension of the latent space is increased from $2$D to $3$D, and $3$D to $4$D in all 5 folds. The Dice score for individual thalamic nucleus also increases. The improvement is most significant for those small nuclei with an initially low Dice score, and for larger nuclei which can be easily segmented out (such as MD and PuI nuclei in our data) the improvement is relatively small. As we mentioned in the Sec. 2, theoretically we could use higher dimensions for our latent spaces to achieve better result, but we only achieve a slight improvement of accuracy at the cost of much longer embedding time; a $2$D latent space takes about 10 minutes to train, while $3$D takes about 30 minutes, $4$D takes about 2 hours, and $5$D takes over 10 hours. Thus, a $4$D latent space achieves a reasonable balance between time cost and performance. Table 2: We present the classification Dice scores in our cross-validation experiment for our 2D UMAP embedding. The “Overall” label is a volume-weighted combination of the other thirteen Dice scores. 
Table 2: We present the classification Dice scores in our cross-validation experiment for our 2D UMAP embedding. The "Overall" label is a volume-weighted combination of the other thirteen Dice scores. Key: AN - Anterior Nucleus; CL - Central Lateral Nucleus; CM - Center Median Nucleus; LD - Lateral Dorsal Nucleus; LP - Lateral Posterior Nucleus; MD - Mediodorsal; PuA - Anterior Pulvinar; PuI - Inferior Pulvinar; VA - Ventral Anterior Nucleus; VLa - Ventral Lateral Anterior Nucleus; VLP - Ventral Lateral Posterior Nucleus; VPL - Ventral Posterior Lateral Nucleus; VPM - Ventral Posterior Medial Nucleus.

Label Overall AN CL CM LD LP MD PuA PuI VA VLP VLa VPL VPM
Fold 1 0.66 0.18 0.44 0.31 0.22 0.57 0.81 0.20 0.92 0.73 0.81 0.43 0.49 0.50
Fold 2 0.66 0.21 0.44 0.20 0.28 0.62 0.78 0.20 0.90 0.76 0.77 0.43 0.52 0.49
Fold 3 0.62 0.28 0.38 0.31 0.21 0.53 0.83 0.18 0.77 0.81 0.74 0.42 0.34 0.49
Fold 4 0.64 0.35 0.48 0.36 0.18 0.57 0.89 0.10 0.84 0.58 0.74 0.43 0.40 0.45
Fold 5 0.64 0.27 0.57 0.23 0.35 0.59 0.73 0.26 0.87 0.58 0.81 0.54 0.53 0.37

Table 3: We present the classification Dice score in our cross-validation experiment for our 3D UMAP embedding. See Table 2 for the label key.

Label Overall AN CL CM LD LP MD PuA PuI VA VLP VLa VPL VPM
Fold 1 0.68 0.26 0.45 0.39 0.23 0.63 0.81 0.24 0.93 0.76 0.82 0.42 0.49 0.50
Fold 2 0.68 0.27 0.46 0.29 0.31 0.70 0.79 0.28 0.91 0.73 0.81 0.49 0.53 0.51
Fold 3 0.64 0.35 0.40 0.39 0.21 0.59 0.80 0.26 0.77 0.83 0.75 0.41 0.37 0.55
Fold 4 0.66 0.37 0.52 0.50 0.21 0.58 0.89 0.22 0.86 0.59 0.71 0.43 0.38 0.52
Fold 5 0.67 0.29 0.60 0.36 0.35 0.65 0.70 0.36 0.88 0.65 0.82 0.55 0.54 0.44

Table 4: We present the classification Dice score in our cross-validation experiment for our 4D UMAP embedding. See Table 2 for the label key.

Label Overall AN CL CM LD LP MD PuA PuI VA VLP VLa VPL VPM
Fold 1 0.71 0.31 0.48 0.49 0.29 0.64 0.83 0.28 0.93 0.78 0.84 0.45 0.58 0.51
Fold 2 0.70 0.28 0.49 0.37 0.41 0.73 0.82 0.31 0.91 0.80 0.80 0.51 0.58 0.51
Fold 3 0.65 0.35 0.44 0.44 0.30 0.62 0.81 0.27 0.77 0.87 0.76 0.43 0.40 0.50
Fold 4 0.68 0.40 0.53 0.60 0.27 0.60 0.92 0.24 0.86 0.63 0.74 0.45 0.43 0.56
Fold 5 0.69 0.33 0.61 0.47 0.37 0.67 0.77 0.37 0.88 0.69 0.82 0.57 0.56 0.48

Figure 2: We show a comparison of results using our 2D and 4D latent spaces for UMAP embedding from 3 different views (coronal, axial, & sagittal) for two subjects from our test cases. For each subject, in (a) we show the manual delineation, in (b) we show example results using our 2D latent space and in (c) we show example results using our 4D latent space.

Figure 3: We show visualizations of our whole thalamus parcellation result from three different views (coronal, axial, & sagittal) for one subject from our test cases. In (a) we show the manual delineation and in (b) we show the results from our whole thalamus parcellation. We ask the model to predict all thalamus voxels even if they are not labeled by manual delineation, so the automated labels cover the entire thalamus region while the manual labels have multiple empty regions in-between labels and overlaps.

We also present visualizations of the final thalamus parcellation results in Fig. 2, and compare the manual labels with our $2$D and $4$D UMAP latent space embeddings for two subjects showing three different anatomical views (coronal, axial, & sagittal). Note that in order to calculate the Dice score, we only predict voxels that have a manual label to compare with, but in principle, our approach can predict any voxel given the feature vector of that voxel.
Subject $1$ is a case with a relatively lower Dice score and Subject $2$ is a case with a higher Dice score. The central thalamic slices are used for each of the views. We observe that visually the result from our $4$D UMAP is closer to the manual labels than that from our $2$D UMAP, but the difference is only moderate. Thus using our $2$D UMAP latent space is already sufficient to place all labels at the correct spatial positions, and increasing to $4$D only refines some boundaries. As our approach largely depends on the natural clustering of feature vectors, with only weak spatial information, our result is evidence that feature vectors from different thalamic nuclei are naturally separated in that high-dimensional space, and that we can project that manifold to a latent space as low as $2$D and separate them without using supervised training and deep learning methods. Lastly, we show that our approach can also be used to label other thalamic voxels not labeled by human labelers, or an entirely new thalamus. In Fig. 3 we used our parcellation approach to label all voxels inside the thalamus region, and compare them with the manual labels, which have many overlaps and holes. We use a pre-generated thalamus mask to define the thalamus region, and label all voxels inside that region.

## 4 Conclusion

We have used a high-dimensional feature vector and an unsupervised dimensionality reduction technique to parcellate the thalamus. Our ablation study suggests that connectivity does not help in parcellating the thalamus; however, further research may support a more subtle relationship between intensities that indicate microstructure and connectivity to distal regions. We found evidence that feature vectors from multi-contrast MRI scans can be used to naturally separate different thalamic nuclei using unsupervised approaches, and that the high-dimensional feature space (maximum $161$D in our experiment) can be reduced using unsupervised UMAP to a latent space as low as 2D, which is still able to classify new voxels using methods as simple as a $k$-NN with an accuracy comparable to state-of-the-art results. Our performance on several nuclei is higher than in other works: see the MD performance of Stough et al. [11] and the PuI, VA, and VLP performance of Su et al. [19], for examples. A higher accuracy can be achieved using a higher-dimensional latent space, but the embedding time will increase.

###### Acknowledgements. This work was supported in part by the National Institutes of Health through the National Institute of Neurological Disorders and Stroke under grant R01-NS105503 (PI: R.P. Gullapalli) and grant R01-NS082347 (PI: P.A. Calabresi).

## References

* [1] Sherman, S. M. and Guillery, R., [Exploring the Thalamus ], Academic Press (2001). * [2] Halliday, G. H., "Thalamic changes in Parkinson's disease," Parkinsonism & related disorders 15, S152–S155 (2009). * [3] Glaister, J., Carass, A., NessAiver, T., Stough, J. V., Calabresi, S. S. P. A., and Prince, J. L., "Thalamus Segmentation using Multi-Modal Feature Classification: Validation and Pilot Study of an Age-Matched Cohort," NeuroImage 158, 430–440 (2017). * [4] Bertram, E. H., Mangan, P. S., Zhang, D., Scott, C., and Williamson, J. M., "The midline thalamus: alterations and a potential role in limbic epilepsy," Epilepsia 42(8), 967–978 (2001). * [5] Grossman, E. J. and Inglese, M., "The role of thalamic damage in mild traumatic brain injury," Journal of Neurotrauma 33(2), 163–167 (2016). * [6] Sudhyadhom, A., Haq, I. U., Foote, K. D., Okun, M. S., and Bova, F.
J., “A high resolution and high contrast MRI for differentiation of subcortical structures for DBS targeting: The Fast Gray Matter Acquisition T1 Inversion Recovery (FGATIR),” NeuroImage 47, T44–T52 (2009). * [7] Liu, L., Glaister, J., Sun, X., Carass, A., Tran, T. D., and Prince, J. L., “Segmentation of Thalamus from MR images via Task-Driven Dictionary Learning,” in [Proceedings of SPIE Medical Imaging (SPIE-MI 2016), San Diego, CA, February 27-March 3, 2016 ], 9784, 97843H–97843H–7 (2016). * [8] Shao, M., Zuo, L., Carass, A., Zhuo, J., Gullapalli, R. P., and Prince, J. L., “Evaluating the impact of MR image harmonization on thalamus deep network segmentation,” in [Proceedings of SPIE Medical Imaging (SPIE-MI 2022), San Diego, CA, February 20 – 24, 2022 ], 12032, 115–121 (2022). * [9] Tohidi, P., Han, S., Zuo, L., Zhuo, J., Roys, S. R., Carass, A., Gullapalli, R. P., and Prince, J. L., “Multiple Sclerosis brain lesion segmentation with different architecture ensembles,” in [Proceedings of SPIE Medical Imaging (SPIE-MI 2023), San Diego, CA, February 19 – 23, 2023 ], (2023). * [10] Wiegell, M. R., Tuch, D. S., Larsson, H. B., and Wedeen, V. J., “Automatic segmentation of thalamic nuclei from diffusion tensor magnetic resonance imaging,” NeuroImage 19(2), 391–401 (2003). * [11] Stough, J. V., Glaister, J., Ye, C., Ying, S. H., Prince, J. L., and Carass, A., “Automatic method for thalamus parcellation using multi-modal feature classification,” in [17${}^{\mbox{\tiny{th}}}$ International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2014) ], Lecture Notes in Computer Science 8675, 169–176, Springer Berlin Heidelberg (2014). * [12] Glaister, J., Carass, A., Stough, J. V., Calabresi, P. A., and Prince, J. L., “Thalamus parcellation using multi-modal feature classification and thalamic nuclei priors,” in [Proceedings of SPIE Medical Imaging (SPIE-MI 2016), San Diego, CA, February 27-March 3, 2016 ], 9784, 97843J–97843J–6 (2016). * [13] Lambert, C., Simon, H., Colman, J., and Barrick, T. R., “Defining thalamic nuclei and topographic connectivity gradients in vivo,” NeuroImage 158, 466–479 (2017). * [14] Yan, C., Shao, M., Bian, Z., Feng, A., Xue, Y., Zhuo, J., Gullapalli, R. P., Carass, A., and Prince, J. L., “Segmenting thalamic nuclei from manifold projections of multi-contrast MRI,” in [Proceedings of SPIE Medical Imaging (SPIE-MI 2023), San Diego, CA, February 19 – 23, 2023 ], (2023). * [15] Feng, A., Xue, Y., Wang, Y., Yan, C., Bian, Z., Shao, M., Zhuo, J., Gullapalli, R. P., Carass, A., and Prince, J. L., “Label propagation via random walk for training robust thalamus nuclei parcellation model from noisy annotations,” in [20${}^{\mbox{\tiny{th}}}$ International Symposium on Biomedical Imaging (ISBI 2023) ], (2023). * [16] Iglesias, J. E., Van Leemput, K., Golland, P., and Yendiki, A., “Joint inference on structural and diffusion MRI for sequence-adaptive Bayesian segmentation of thalamic nuclei with probabilistic atlases,” in [26${}^{\mbox{\tiny{th}}}$ Inf. Proc. in Med. Imaging (IPMI 2019) ], Lecture Notes in Computer Science 11492, 767–779 (2019). * [17] Jonasson, L., Hagmann, P., Pollo, C., Bresson, X., Wilson, C. R., Meuli, R., and Thiran, J.-P., “A level set method for segmentation of the thalamus and its nuclei in DT-MRI,” Signal Processing 87(2), 309–321 (2007). * [18] Stough, J. V., Ye, C., Ying, S. H., and Prince, J. 
L., “Thalamic Parcellation from Multi-Modal Data using Random Forest Learning,” in [10${}^{\mbox{\tiny{th}}}$ International Symposium on Biomedical Imaging (ISBI 2013) ], 852–855 (2013). * [19] Su, J. H., Thomas, F. T., Kasoff, W. S., Tourdias, T., Choi, E. Y., K.Rutt, B., and Saranathan, M., “Thalamus Optimized Multi Atlas Segmentation (THOMAS): fast, fully automated segmentation of thalamic nuclei from structural MRI,” NeuroImage 194, 272–282 (2019). * [20] Wang, S. L., Han, S., Carass, A., Zhuo, J., Roys, S., Gullapalli, R. P., Jiang, L., and Prince, J. L., “Thalamus segmentation using convolutional neural networks,” in [Proceedings of SPIE Medical Imaging (SPIE-MI 2021), San Diego, CA, February 14 – 18, 2021 ], 11596, 1159634 (2021). * [21] Ziyan, U., Tuch, D., and Westin, C. F., “Segmentation of Thalamic Nuclei from DTI Using Spectral Clustering,” in [9${}^{\mbox{\tiny{th}}}$ International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2006) ], 4191, 807–814 (2006). * [22] Ziyan, U., Tuch, D., and Westin, C. F., “Joint Segmentation of Thalamic Nuclei from a Population of Diffusion Tensor MR Images,” in [11${}^{\mbox{\tiny{th}}}$ International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2008) ], 5241, 279–286 (2008). * [23] Knutsson, H., “Producing a Continuous and Distance Preserving 5-D Vector Representation of 3-D Orientation,” in [IEEE Computer Society Workshop on Computer Architecture for Pattern Analysis and Image Database Management ], 175–182 (1985). * [24] Huo, Y., Xu, Z., Xiong, Y., Aboud, K., Parvathaneni, P., Bao, S., Bermudez, C., Resnick, S. M., Cutting, L. E., and Landman, B. A., “3D whole brain segmentation using spatially localized atlas network tiles,” NeuroImage 194, 105–119 (2019). * [25] Morel, A., Magnin, M., and Jeanmonod, D., “Multiarchitectonic and Stereotactic Atlas of the Human Thalamus,” J. Comp. Neurol. 387(4), 588–630 (1997). * [26] McInnes, L., Healy, J., and Melville, J., “UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction,” arXiv preprint arXiv:1802.03426 1802 (2018).
# Dark Energy from holographic theories with hyperscaling violation

Mariano Cadoni1,2 and Matteo Ciulu1 1 Dipartimento di Fisica, Università di Cagliari. 2 INFN, Sezione di Cagliari. Cittadella Universitaria, 09042 Monserrato, Italy.

###### Abstract

We show that analytical continuation maps scalar solitonic solutions of Einstein-scalar gravity, interpolating between a hyperscaling violating and an Anti de Sitter (AdS) region, into flat FLRW cosmological solutions sourced by a scalar field. In this way we generate exact FLRW solutions that can be used to model cosmological evolution driven by dark energy (a quintessence field) and ordinary matter. In the absence of matter, the flow from the hyperscaling violating regime to the conformal AdS fixed point in holographic models corresponds to cosmological evolution from power-law expansion at early cosmic times to a de Sitter (dS) stable fixed point at late times. In the presence of matter, we have a scaling regime at early times, followed by an intermediate regime in which dark energy tracks matter. At late times the solution exits the scaling regime with a sharp transition to a dS spacetime. The phase transition between hyperscaling violation and the conformal fixed point observed in holographic gravity has a cosmological counterpart in the transition between a scaling era and a dS era dominated by the energy of the vacuum.

###### Contents

1. I Introduction
2. II Dark energy, holographic theories and hyperscaling violation
3. III Exact cosmological solutions
4. IV Dark energy models
5. V Coupling to matter
  1. V.1 Power-law evolution
  2. V.2 Exponential evolution
  3. V.3 Intermediate regime
6. VI Conclusions

## I Introduction

Triggered by the anti-de Sitter/Conformal field theory (AdS/CFT) correspondence, we have recently seen several applications of the holographic principle aimed at describing the strongly coupled regime of quantum field theory (QFT) Hartnoll et al. (2008a, b); Horowitz and Roberts (2008); Charmousis et al. (2009); Cadoni et al. (2010); Goldstein et al. (2010); Gouteraux and Kiritsis (2012). The most interesting example of these applications is represented by the holographic description of quantum phase transitions, such as those leading to critical superconductivity and hyperscaling violation Hartnoll et al. (2008a, b); Charmousis et al. (2009); Gubser and Rocha (2010); Cadoni et al. (2010); Goldstein et al. (2010); Dong et al. (2012); Cadoni and Pani (2011); Huijse et al. (2012); Cadoni and Mignemi (2012); Cadoni and Serra (2012); Narayan (2012); Cadoni et al. (2013). A general question that can be asked in this context is whether these recent advances can be used to improve our understanding, not only of some holographic, strongly coupled dual QFT, but also of the gravitational interaction itself. After all, the holographic principle in general, and the AdS/CFT correspondence in particular, have often been used in this reversed direction. The most important example is without doubt the understanding of the statistical entropy of black holes by counting states in a dual CFT Strominger and Vafa (1996); Strominger (1998); Cadoni and Mignemi (1999). A challenge for any theory of gravity is surely cosmology, in particular the understanding of the present accelerated expansion of the universe and the related dark energy hypothesis Peebles and Ratra (2003); Padmanabhan (2003). It is not a priori self-evident that the recent developments on the holographic side may be useful for cosmology McFadden and Skenderis (2010).
However, closer scrutiny reveals that key concepts used in the holographic description can also be used in cosmology. First of all, the symmetries of the gravitational background. The AdS and de Sitter (dS) spacetimes in $d$ dimensions share the same isometry group (the conformal group in $d-1$ dimensions). This fact has been the main motivation for the formulation of the dS/CFT correspondence Strominger (2001). Although this correspondence is problematic Goheer et al. (2003), it may be very useful to relate different gravitational backgrounds if one sees dS/CFT as an analytical continuation $r\leftrightarrow it$ of AdS/CFT Cadoni and Carta (2004). Second, a domain wall/cosmology correspondence has been proposed Skenderis and Townsend (2006); Skenderis et al. (2007); Shaghoulian (2013). For every supersymmetric domain wall, which is a solution of some supergravity (SUGRA) model, there is a corresponding flat Friedmann-Lemaitre-Robertson-Walker (FLRW) cosmology (which can be obtained by analytical continuation) of the same model but with opposite-sign potential. This means that, although cosmologies in general cannot be supersymmetric, they may allow for the existence of pseudo-Killing spinors. Third, the spacelike radial coordinate $r$ of a static asymptotically AdS geometry can be interpreted as an energy scale and the corresponding dynamics as a renormalization group (RG) flow. This flow drives the dual QFT from an ultraviolet (UV) conformal fixed point (corresponding to the AdS geometry) to some nontrivial near-horizon, infrared (IR) point where only some scaling symmetries are preserved (for instance one can have hyperscaling violation in the IR Cadoni et al. (2013)). By means of the analytic continuation the RG flow becomes the cosmological dynamics of a time-dependent gravitational background, driving the universe from an early-time regime (corresponding to the IR) to a late-time regime (corresponding to the UV) Kiritsis (2013). Last but not least, scalar fields play a crucial role both for holographic models and for cosmology. In the first case they are seen as scalar condensates triggering symmetry breaking and/or phase transitions in the dual QFT Hartnoll et al. (2008a, b); Cadoni et al. (2010). They are dual to relevant operators that drive the RG flow from the UV fixed point to the IR critical point. Moreover, they are the sources of scalar solitons, which are the gravitational backgrounds bridging the asymptotic AdS region and the near-horizon region. On the cosmological side it is well known that scalar fields can be used to model dark energy (the so-called quintessence fields) Ford (1987); Wetterich (1988); Caldwell et al. (1998); Zlatev et al. (1999); Amendola and Tsujikawa (2010).

In this paper we will consider a wide class of Einstein-scalar gravity models (parametrized by a potential $V$) that have scalar solitonic solutions interpolating between a hyperscaling violating region and an AdS region. These models have been investigated for holographic applications Charmousis et al. (2009); Cadoni et al. (2010); Goldstein et al. (2010); Dong et al. (2012); Cadoni and Pani (2011); Cadoni and Mignemi (2012); Cadoni and Serra (2012); Narayan (2012); Cadoni et al. (2013). We show that an analytical continuation transforms the solitonic solution into a flat FLRW solution of a model with the opposite sign of $V$. If the soliton has the AdS region in the UV (IR), the FLRW solution will have a dS epoch at late (early) times.
Correspondingly, the FLRW solution will be characterized by power-law expansion at early (late) times ( Section II). Focusing on a particular Einstein-scalar model (parametrized by a parameter $\beta$) that has the AdS regime in the UV and for which exact solitonic solutions are known Cadoni et al. (2011), we generate (and characterize in detail) the corresponding flat FLRW exact solutions. For a broad range of $\beta$ the solutions describe a flat universe decelerating at early times but accelerating at late times (Section III). We proceed by showing that these solutions can be used as a model for dark energy, the scalar field playing the role of a quintessence field. The parameter of state describing dark energy decreases with cosmic time, from a positive value ($<1$) till $-1$ (Section IV). Finally, we discuss the cosmological dynamics in presence of matter in the form of a general perfect fluid. Although we are not able to solve exactly the coupled system, we give strong evidence that the universe naturally evolves from a scaling era at early times to a, cosmological constant dominated, de Sitter universe at late times. Moreover, the transition between the two regimes in not smooth and is the cosmological analogue of the hyperscaling violation/AdS spacetime phase transition of holographic models Cadoni et al. (2010, 2013); Gouteraux and Kiritsis (2012) (Section V). ## II Dark energy, holographic theories and hyperscaling violation We consider Einstein gravity coupled to a real scalar field $\phi$ in four dimensions: $I=\int d^{4}x\sqrt{-g}\left[{\cal{R}}-\frac{1}{2}\left(\partial\phi\right)^{2}-V(\phi)\right],$ (1) where ${\cal{R}}$ is the scalar curvature of the spacetime. The model is parametrized by the self-interaction potential $V(\phi)$ for the scalar field. For static, radially symmetric solutions with planar topology for the transverse space, one can use the following parametrization of the solution: $ds^{2}=-U(r)dt^{2}+\frac{dr^{2}}{U(r)}+R^{2}(r)(dx^{2}+dy^{2}),\quad\phi=\phi(r).$ (2) It is known that the theory (1) admits solutions (2) describing black branes with scalar hair, at least for specific choices of $V(\phi)$ Cadoni et al. (2011, 2012); Cadoni and Mignemi (2012); Cadoni et al. (2013). When the spacetime is asymptotically AdS $U=R^{2}=\left(\frac{r}{R_{0}}\right)^{2}$ (3) (where $R_{0}$ is the AdS length) or more generically scale-covariant $U=R^{2}=\left(\frac{r}{r_{-}}\right)^{\eta},$ (4) (where $r_{-}$ and $0\leq\eta\leq 2$ are parameters), usual no-hair theorems can be circumvented and regular, hairy black brane solutions of (1) are allowed Cadoni et al. (2011, 2012). Moreover, it has been shown that the zero-temperature extremal limit of these black brane solutions is necessarily characterized by $U=R^{2}$ in Eq. (2) Cadoni et al. (2011, 2013). The extremal limit describes a regular scalar soliton interpolating between an AdS spacetime and a scale-covariant metric. In particular, the behaviour of the potential at $r=\infty$ and in the near- horizon region determines the corresponding geometry. When the leading term of the potential is a constant $V(\phi)\sim-6/R_{0}^{2}$ the geometry is AdS. On the other hand if the potential behaves exponentially $V(\phi)\sim-e^{\lambda\phi}$ ($\lambda$ is some constant) we get a scale- covariant metric Cadoni et al. (2011). The AdS vacuum has isometries generated by the conformal group in three dimensions. 
In particular the AdS metric is invariant under scale transformations: $r\to\mu^{-1}r,\quad(t,x,y)\to\mu(t,x,y).$ (5) On the other hand the scale-covariant metric breaks some of the symmetries of the AdS metric. Under scale transformation the metric (4) is not invariant but only scale-covariant. For $\eta\neq 1$ we get $r\to\mu^{\frac{1}{1-\eta}}r,\quad(t,x,y)\to\mu(t,x,y),\quad ds^{2}\to\mu^{\frac{2-\eta}{1-\eta}}ds^{2}.$ (6) Depending on the form of the potential $V(\phi)$ we have two cases $1)$ AdS is the $r=\infty$ asymptotic geometry and the scale-covariant metric is obtained in the near-horizon region Cadoni et al. (2011, 2013). $2)$ The AdS spacetime appears in the near-horizon region whereas the scale- covariant metric is obtained as $r=\infty$ asymptotic geometry Cadoni et al. (2012); Cadoni and Mignemi (2012). This behaviour has a nice holographic interpretation and a wide range of application for describing dual strongly-coupled QFTs and quantum phase transitions Hartnoll et al. (2008a, b); Charmousis et al. (2009); Gubser and Rocha (2010); Cadoni et al. (2010); Cadoni and Pani (2011); Cadoni and Mignemi (2012); Cadoni and Serra (2012); Cadoni et al. (2013). In the dual QFT the two cases described on points $1)$ and $2)$ above correspond, respectively, to the following: $1)$ The dual QFT at zero temperature has an UV conformal fixed point. In the IR it flows to an hyperscaling violating phase where the conformal symmetry is broken, only the symmetry (6) is preserved and an IR mass-scale (the parameter $r_{-}$ in Eq.(4)) is generated Cadoni et al. (2010, 2011); Cadoni and Pani (2011); Cadoni et al. (2013). $2)$ The dual QFT at zero temperature has a conformal fixed point in the IR and flows in the UV to an hyperscaling violating phase Cadoni et al. (2012); Cadoni and Mignemi (2012); Cadoni and Serra (2012). When $U=R^{2}$ in Eq. (2) the field equations stemming from the action (1) become: $\frac{R^{\prime\prime}}{R}=-\frac{\phi^{\prime 2}}{4},\,\quad\quad\frac{d}{dr}(R^{4}\phi^{\prime})=R^{2}\frac{dV}{d\phi},\,\quad\quad(R^{4})^{\prime\prime}=-2R^{2}V(\phi),$ (7) where the prime denotes derivation with respect to $r$. Notice that only two of these equations are independent. In this paper we are interested in FLRW cosmological solutions with non trivial scalar field of the gravity theory (1). Such solutions have been widely used to describe the history of our universe. Depending on the model under consideration, the scalar field can be used to describe dark energy (quintessence models)Ford (1987); Wetterich (1988); Caldwell et al. (1998); Zlatev et al. (1999); Amendola and Tsujikawa (2010), the inflaton (inflationary models) and also dark matter Sahni and Wang (2000); Bertolami et al. (2012). Our main idea is to use the knowledge of effective holographic theories of gravity in the cosmological context. The key point is that once an exact static solution (2) with $U=R^{2}$ of the field equations (7) is known one can immediately generate a flat FLRW cosmological solution using the following transformation in (2) and (7), $r\to it,\quad t\to ir,\quad V(\phi)\to-V(\phi).$ (8) In fact this transformation maps the line element and the scalar field (2) into $ds^{2}=-R^{-2}(t)dt^{2}+R^{2}(t)(dr^{2}+dx^{2}+dy^{2}),\quad\phi=\phi(t).$ (9) describing a FLRW metric in which the curvature of the spatial sections is zero, i.e a flat universe with $R(t)$ playing the role of the scale factor. 
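The metric part of this statement can be checked mechanically. Below is a minimal symbolic sketch (assuming SymPy; the symbol names and the function $S$ used for the continued scale factor are ours): it substitutes $dt\to i\,dr$, $dr\to i\,dt$ and $R(r)\to S(t)$ in the static line element (2) with $U=R^{2}$ and recovers the flat FLRW form (9).

```python
# Symbolic check that the continuation (8) maps the static line element (2)
# (with U = R^2) into the flat FLRW form (9).  Names are ours.
import sympy as sp

t, r = sp.symbols('t r', real=True)
dt, dr, dx, dy = sp.symbols('dt dr dx dy')      # formal coordinate differentials
R, S = sp.Function('R'), sp.Function('S')       # S(t): scale factor after continuation

# static metric (2) with U = R(r)^2
ds2_static = -R(r)**2*dt**2 + dr**2/R(r)**2 + R(r)**2*(dx**2 + dy**2)

# continuation (8): r -> i t, t -> i r, hence dt -> i dr, dr -> i dt,
# and R(i t) is renamed to the real scale factor S(t)
ds2_cont = ds2_static.subs([(dt, sp.I*dr), (dr, sp.I*dt), (R(r), S(t))],
                           simultaneous=True)

# flat FLRW form (9)
ds2_flrw = -dt**2/S(t)**2 + S(t)**2*(dr**2 + dx**2 + dy**2)

print(sp.simplify(ds2_cont - ds2_flrw))          # -> 0
```

The sign flip $V\to-V_{c}$ does not enter here; it is only needed to map the field equations, as discussed next.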
The same transformation (8) maps the field equations (7) into $\frac{\ddot{R}}{R}=-\frac{\dot{\phi}^{2}}{4},\,\quad\quad\frac{d}{dt}(R^{4}\dot{\phi})=-R^{2}\frac{dV_{c}}{d\phi},\,\quad\quad\ddot{(R^{4})}=2R^{2}V_{c}(\phi),$ (10) where the dot means derivation with respect to the time $t$ and $V_{c}=-V$. One can easily see that Eqs. (7) and (10) have exactly the same form, simply with the prime replaced by the dot. This means that once a zero-temperature static solution, describing a scalar soliton, of the theory (1) with potential $V$ is known, one can immediately write down a cosmological solution of the theory (1) with potential $V_{c}=-V$. The flip of the sign of the potential when passing from the static scalar soliton to the cosmological solution has important consequences. The AdS vacuum corresponding to constant negative potential $V=-6/R_{0}^{2}$ (a negative cosmological constant) will be mapped in the de Sitter spacetime, corresponding to $V_{c}=6/R_{0}^{2}$ ( (a positive cosmological constant), which describes an exponentially expanding universe. Correspondingly, the scale covariant static metric (4) will be mapped into a cosmological power-law solution $R\sim t^{\eta}$. It follows immediately that the scalar solitons corresponding to the cases $1)$ and $2)$ above will generate after the transformation (8) FLRW cosmological solutions with, respectively, the following properties: $1)$ The cosmological solution describes an universe evolving from a power-law scaling solution at early times to a de Sitter spacetime at late times. $2)$ The cosmological solution describes an universe evolving from a de Sitter spacetime at early times to a power-law solution at late times. It is interesting to notice that a universe evolving from a power-law solution at early times to an exponentially expanding phase at late times has an holographic counterpart in a QFT flowing from hyperscaling violation in the IR to an UV fixed point. Conversely, universe evolving from de Sitter at early times to the power-law behaviour al late times corresponds to a QFT flowing from an IR fixed point to hyperscaling violation in the UV. The FLRW solutions described in point $1)$ above are good candidates to model an universe, which is dominated at late times by dark energy. On the other hand, the cosmological solutions described in point $2)$ above are very promising to describe inflation. In this paper we will investigate in detail solutions of type $1)$. We will leave the investigation of solution of type $2)$ to a successive publication. Transformations like (8) mapping solitons into FLRW cosmologies have been already considered in the context of SUGRA theories. Skenderis and Townsend (2006); Skenderis et al. (2007); Shaghoulian (2013). They are known under the name of domain wall (DW)/cosmology correspondence. For every supersymmetric domain-wall, which is solution of some SUGRA model, we can obtain, by analytical continuation, a flat FLRW cosmology, of the same model but with opposite sign potential Skenderis and Townsend (2006). When the model (1) is the truncation to the metric and scalar sector of some supergravity theory (or more generally when the potential $V$ can be derived from a superpotential, i.e when we are dealing with a “fake” SUGRA model DeWolfe et al. (2000)) the transformation (8) describes exactly the DW/cosmology correspondence. However, in this paper we consider the transformation in the same spirit of effective holographic theories. 
We do not require the action (1) to come from a SUGRA model and we consider the transformation (8) in its most general form as a mapping between a generic scalar DW solution, i.e. a spacetime (2) with $U=R^{2}$, and a cosmological solution (9) endowed with a nontrivial time-dependent scalar field. The cosmological solution (9) is not written in terms of the usual cosmic time $\tau$. Using this time variable, solution (9) takes the form: $ds^{2}=-d\tau^{2}+R^{2}(\tau)(dr^{2}+dx^{2}+dy^{2}),\quad\phi=\phi(\tau),$ (11) and the coordinate time $t$ and cosmic time $\tau$ are related by $\tau=\int{\frac{dt}{R(t)}}.$ (12) Written in terms of $\tau$ the field equations (10) become the usual ones ${\dot{H}}=-\frac{\dot{\phi}^{2}}{4},\,\quad\ddot{\phi}+3H\dot{\phi}=-\frac{dV_{c}}{d\phi},\,\quad 3H^{2}=\frac{\dot{\phi}^{2}}{4}+\frac{V_{c}}{2},$ (13) where now the dot means derivation with respect to the cosmic time $\tau$ and $H$ is the Hubble parameter $H=\dot{R}/R$. ## III Exact cosmological solutions In the previous section we have described a general method that allows us to write down a flat FLRW solution with a nontrivial scalar field once a static scalar solitonic solution is known. In the recent literature dealing with holographic applications of gravity one can find several scalar solitons describing the flow from a scale-covariant metric in the IR to an AdS solution in the UV Cadoni et al. (2011, 2013). However, many of them are numerical solutions. An interesting class of exact analytic solutions with the above features has been derived in Ref. Cadoni et al. (2011) using a generating method. This generating method essentially consists in fixing the form of the scalar field. The metric part of the solution and the potential $V$ are found by solving a Riccati equation and a first-order linear equation. This allows us to find a solution (2) of the theory (1) with potential $V=-V_{c}$, where Cadoni et al. (2011) $V_{c}(\phi)=\frac{2}{R_{0}^{2}}e^{2\gamma\beta\phi}\left[2-8\beta^{2}+(1+8\beta^{2})\cosh(\gamma\phi)-6\beta\sinh(\gamma\phi)\right]$ (14) with $|\beta|\leq\frac{1}{2},\quad\gamma^{-2}=1-4\beta^{2}.$ (15) (In this paper we use a normalization of the kinetic term for the scalar which differs from that used in Ref. Cadoni et al. (2011) by a factor of $4$; correspondingly, $\gamma$ differs by a factor of $2$.) The point $\phi=0$ is a maximum of the soliton potential $V$, i.e. we have $V^{\prime}(0)=0$ and $V^{\prime\prime}(0)=-2/R_{0}^{2}=m^{2}<0$, where $m$ is the mass of the scalar field. Notice that the squared mass of the scalar is negative and depends only on the value of the cosmological constant. The potential (14) contains as special cases models resulting from truncation to the abelian sector of $N=8$, $D=4$ gauged supergravity Cadoni et al. (2011). In fact, for $\beta=0$ and $\beta=\pm 1/4$ Eq. (14) becomes $V_{c}(\phi,\beta=0)=\frac{2}{R_{0}^{2}}\left(2+\cosh\phi\right),\quad V_{c}(\phi,\beta=\pm 1/4)=\frac{6}{R_{0}^{2}}\cosh\left(\frac{\phi}{\sqrt{3}}\right).$ (16) The static, solitonic solutions (2) of the theory (1) with this potential are given by Cadoni et al. (2011) $\gamma\phi=\log X,\quad R=\frac{r}{R_{0}}X^{\beta+\frac{1}{2}},\quad X=1-\frac{r_{-}}{r},$ (17) where $r_{-}$ is an integration constant. In the $r=\infty$ asymptotic region, corresponding to $\phi=0$, the potential approaches $-6/R_{0}^{2}$ and the solution becomes the AdS solution (3).
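As a consistency check of Eqs. (14)–(17), one can verify numerically that the soliton (17), with the static potential $V=-V_{c}$ and $V_{c}$ given by Eq. (14), satisfies the three field equations (7). A minimal sketch (assuming SymPy; the symbol names and the sample values $\beta=1/8$, $r_{-}=R_{0}=1$ are ours):

```python
# Check that the soliton (17) with static potential V = -V_c, Eq. (14),
# solves the static field equations (7).  Sample values are ours.
import sympy as sp

r, phi = sp.symbols('r phi', positive=True)
beta, R0, rm = sp.Rational(1, 8), 1, 1
gamma = 1/sp.sqrt(1 - 4*beta**2)                 # Eq. (15)

X = 1 - rm/r
phi_r = sp.log(X)/gamma                          # Eq. (17)
R = (r/R0)*X**(beta + sp.Rational(1, 2))         # Eq. (17)

Vc = (2/R0**2)*sp.exp(2*gamma*beta*phi)*(2 - 8*beta**2
     + (1 + 8*beta**2)*sp.cosh(gamma*phi) - 6*beta*sp.sinh(gamma*phi))  # Eq. (14)
V = -Vc                                          # static (soliton) potential
dVdphi = sp.diff(V, phi)

# residuals of the three equations in (7)
res1 = sp.diff(R, r, 2)/R + sp.diff(phi_r, r)**2/4
res2 = sp.diff(R**4*sp.diff(phi_r, r), r) - R**2*dVdphi.subs(phi, phi_r)
res3 = sp.diff(R**4, r, 2) + 2*R**2*V.subs(phi, phi_r)

for res in (res1, res2, res3):
    f = sp.lambdify(r, res)
    print(max(abs(f(rv)) for rv in (1.5, 2.0, 5.0)))   # each ~ 0 (float rounding)
```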
In the near-horizon region, $r=r_{-}$, corresponding to $\phi=\pm\infty$ (depending on the sign of $\gamma$), the potential behaves exponentially and the metric becomes, after translation of the $r$ coordinate, the scale covariant solution (4) with $\eta=2\beta+1$. A FLRW solution can be now obtained applying the transformation (8) to Eqs. (17). We simply get $R(t)=\frac{t}{R_{0}}\left(1-\frac{t_{-}}{t}\right)^{\beta+\frac{1}{2}},\quad\gamma\phi=\log\left(1-\frac{t_{-}}{t}\right).$ (18) Solutions (18) is not defined for every real $t$. Moreover, the range of variation of $t$ is disconnected. For $t_{-}>0$ we have either $-\infty<t\leq 0$ (corresponding to $\gamma\phi>0$) or $t_{-}\leq t<\infty$ (corresponding to $\gamma\phi<0$). Conversely, for $t_{-}<0$ we have either $-\infty<t\leq t_{-}$ (corresponding to $\gamma\phi<0$) or $0\leq t<\infty$ (corresponding to $\gamma\phi>0$). Apart from the parameter $R_{0}$, which sets the value of the cosmological constant, the solution (18) depends on the parameters $\beta$ and $t_{-}$. The parameter $\gamma$ is not an independent parameter but, apart from the sign, it is determined by Eq. (15). The potential (14), hence the action (1), is invariant under the two groups of discrete transformations $(\phi\to-\phi,\,\gamma\to-\gamma)$ and $(\gamma\to-\gamma,\,\beta\to-\beta)$. This symmetries allow to restrict the range of variations of $\gamma,\beta$ to $\\{\gamma>0,\,\phi<0,\,-\frac{1}{2}\leq\beta\leq\frac{1}{2}\\}$. In terms of the time coordinate $t$ we are left with only two branches : $a)\,\\{t_{-}>0,\,t_{-}\leq t<\infty\\}$ and $b)\,\\{t_{-}<0,\,-\infty\leq t<t_{-}\\}$. However, one can easily realize that these two branches are related by the time reversal symmetry $t\to-t,\,t_{-}\to-t_{-}$ and are therefore physically equivalent. We are therefore allowed to restrict our consideration to the branch $a)$. The potential $V_{c}$ has a minimum at $\phi=0$. Near the minimum the potential behaves quadratically $V_{c}=\frac{6}{R_{0}^{2}}+\frac{1}{2}m^{2}\phi^{2}.$ (19) The squared mass of the scalar field is therefore positive and depends only on the cosmological constant $m^{2}=\frac{2}{R_{0}^{2}}=\frac{\Lambda}{3}.$ (20) As expected, for $t=\infty$ ($\phi=0$) $V_{c}$ approaches to a positive cosmological constant $V_{c}=\Lambda=6/R_{0}^{2}$ and the solution becomes the de Sitter spacetime. For $t\approx t_{-}$ ( $\phi\to\pm\infty$) the scale factor has a power-law form, $R\propto(t-t_{-})^{\beta+1/2}$ and the potential behaves exponentially. We get, respectively for $\phi=\pm\infty$, the asymptotic behaviour $\displaystyle V_{c}(\phi)$ $\displaystyle=$ $\displaystyle R_{0}^{-2}(1+8\beta^{2}-6\beta)e^{\gamma\phi(2\beta+1)},$ $\displaystyle V_{c}(\phi)$ $\displaystyle=$ $\displaystyle R_{0}^{-2}(1+8\beta^{2}+6\beta)e^{\gamma\phi(2\beta-1)}.$ (21) The range of variation of the parameter $\beta$ can be further constrained by some physical requirements that must be fulfilled if solution (18) has to describe the late-time acceleration of our universe. The usual way to achieve this is to considers quintessence models characterized by a slow roll of the scalar field. As we will see later in this paper the potential (14) does not satisfy the slow roll conditions, which are sufficient, but not necessary, for having late-time acceleration. We will use here a much weaker condition on the slope of the potential $V_{c}(\phi)$. The scalar field $\phi$ in Eq. (18) is a monotonic function of the time $t$ in the branch under consideration. 
Being the function $\phi(t)$ of Eq. (18) monotonic for $t_{-}>0$ and $t\in(t_{-},\infty)$ the simplest way to have a well-defined physical model (i.e a one-to-one correspondence $t\leftrightarrow V_{c}$) is to require also the potential to be a monotonic function inside the branch. This requirement restricts the range of variation of the parameter $\beta$ to $-\frac{1}{4}<\beta\leq\frac{1}{4}.$ (22) In fact, for $\frac{1}{4}<|\beta|\leq\frac{1}{2}$ the potential $V_{c}$ has other extrema. From the range of $\beta$, we have excluded the point $\beta=-1/4$ because in this case the potential (14) becomes exactly the same as for $\beta=1/4$. It is interesting to notice that the two simple models (16), arising from SUGRA truncations, appear as the two limiting cases of this range of variation. In conclusion, the FLRW solution (18) represents a well-behaved cosmological solution in the following range of the parameters and of the time coordinate $t$ $-\frac{1}{4}<\beta\leq\frac{1}{4},\quad 1\leq\gamma\leq\frac{2}{\sqrt{3}},\quad t_{-}>0,\quad t_{-}\leq t<\infty,\quad\phi<0.$ (23) Other branches are either physically equivalent to it (by using the discrete symmetries of the potential (14) or time-reversal transformations) or can be excluded by physical arguments. Let us now consider the Hubble parameter $H$ and the acceleration parameter $A$. We have for $H$ and $A$: $\displaystyle H$ $\displaystyle=$ $\displaystyle\frac{1}{R}\frac{dR}{d\tau}=\frac{dR}{dt}=\frac{X^{\alpha}}{R_{0}}\left[1+\alpha X^{-1}\left(\frac{t_{-}}{t}\right)\right],$ $\displaystyle A$ $\displaystyle=$ $\displaystyle\frac{1}{R}\frac{d^{2}R}{d\tau^{2}}=\left(\frac{dR}{dt}\right)^{2}+R\frac{d^{2}R}{dt^{2}}=\frac{X^{2\alpha-2}}{R_{0}^{2}t^{2}}\left[\left(t+(\alpha-1)t_{-}\right)^{2}+\alpha(\alpha-1)t_{-}^{2}\right].$ (24) where $\alpha=\beta+\frac{1}{2}$. An important physical requirements are the positivity of the Hubble parameter $H$. Moreover, the acceleration parameter $A$ must be positive, at least at late times, to describe late-time acceleration. One can easily check that in the range of variation of the parameter $\beta$ (23) we have always $H>0$. The behaviour of the acceleration parameter $A$ is more involved. $A$ becomes zero for $t_{12}=\left[1-\alpha\pm\sqrt{\alpha(1-\alpha)}\right]t_{-}$. For $t_{-}>0$ we have $t_{1}>t_{-},\,t_{2}<t_{-}$ for $-1/4<\beta\leq 0$, whereas $t_{1}<t_{-},\,t_{2}<0$ for $0\leq\beta\leq 1/4$. This means that in the branch under consideration for $\beta$ positive, the universe is always accelerating. For $\beta$ negative the universe will have a deceleration at early times (for $t_{-}<t<t_{1}$), whereas it will accelerate for $t>t_{1}$. Until now we have always used in our discussion the coordinate time $t$. The cosmic time $\tau$ is defined implicitly in terms of $t$ by Eq. (12). The correspondence $\tau\leftrightarrow t$ defined by Eq. (12) must be one-to-one, i.e $\tau(t)$ must be monotonic in the range (23). Let us show that this is indeed the case. Inserting the expression for $R$ given in Eq. (18) into (12) we get $\frac{\tau}{R_{0}}=\int\frac{dt}{t}\left(\frac{t}{t-t_{-}}\right)^{\beta+\frac{1}{2}}=-B_{z}(0,\frac{3}{2}-\beta),$ (25) where $B_{z}(0,\frac{3}{2}-\beta)$ is the incomplete beta function $B_{z}(p,q)$ and $z=t_{-}/t$. From the previous equation we get the leading behaviour of $\tau(t)$ near $t=t_{-}$ and $t=\infty$. 
We have, respectively, $\tau\propto(t-t_{-})^{\frac{1}{2}-\beta},\quad\tau\propto\log t.$ (26) From this equation we learn that $t=t_{-}$ and $t=\infty$ are mapped, respectively into $\tau=0$ and $\tau=\infty$. Moreover, from Eq. (25) one easily realises that $d\tau/dt$ is always strictly positive for $t_{-}\leq t<\infty$. When $\beta$ is a generic real number in $(-\frac{1}{4},\frac{1}{4})$ the function $\tau(t)$ cannot be expressed in terms of elementary functions. However, the integral (25) can be explicitly computed when $\beta$ is a rational number. The simplest example is given by $\beta=0$. In this case we get for the function $t=t(\tau)$, the scale factor $R$ and the scalar field $\phi$, $\frac{t}{t_{-}}=\cosh^{2}\frac{\tau}{2R_{0}},\quad R(\tau)=\frac{t_{-}}{2R_{0}}\sinh\frac{\tau}{R_{0}},\quad\phi=2\log\tanh\left(\frac{\tau}{2R_{0}}\right).$ (27) An other simple example is obtained for $\beta=1/4$. We get $\frac{\tau}{R_{0}}=-2\arctan Y-\log\frac{Y-1}{Y+1}+\pi,\,\quad\quad Y^{4}=\frac{t}{t-t_{-}}.$ (28) Let us conclude this section by giving a short description of the evolution of our universe described by Eq. (18). The universe starts from a curvature singularity at $\tau=0$, where the scale factor vanishes, $R=0$, and the scalar field, the Hubble parameter and the acceleration diverge. For $\tau>0$ the potential $V_{c}(\phi$) rolls down to its minimum at $\phi=0$ first following the exponential behavior given by Eq. (III). In this early stage the scale factor evolves following a power-law behaviour whereas the scalar field evolves logarithmically: $R\sim\tau^{\frac{1+2\beta}{1-2\beta}},\quad H\sim\frac{1}{\tau},\quad A\sim\frac{1}{\tau^{2}},\quad\phi\sim\log\tau.$ (29) The acceleration $A$ is positive for $\beta>0$ and negative for $\beta<0$. After a time-scale determined by $t_{-}$ the universe enters, for $\beta$ negative, in an accelerating phase, whereas for $\beta$ positive continues to accelerate. At late times, independently of the value of $\beta$, the potential approaches the quadratic minimum at $\phi=0$ and the universe has an exponential expansion described by de Sitter spacetime and a constant scalar. Therefore at late times the universe forgets about its initial conditions (the parameter $t_{-}$) and all the physical parameters are determined completely in terms of the cosmological constant. We have for the mass of the scalar field and for $H,A$: $m^{2}=2H^{2}=2A=\frac{2}{R_{0}^{2}}=\frac{\Lambda}{3}.$ (30) This behaviour is the cosmological counterpart of the flowing to an UV conformal fixed point of solitonic solutions in effective holographic theories with an hyperscaling violating phase. The dS solution corresponds to AdS vacuum (3) and is invariant under the scale symmetries (5) (obviously exchanging the $r,t$ coordinates). The power-law solution (29) corresponds to the scale covariant solution (4), it shares with it the scale symmetries (6). Thus, both class of solutions (the scalar soliton and the cosmological solutions) are characterized by the emergence of a mass-scale. In the case of the scalar soliton (17) this mass-scale is described by the the parameter $r_{-}$ and emerges in the IR of the dual QFT. In the case of the cosmological solution the mass-scale is described by the the parameter $t_{-}$, which characterizes the early-times cosmology. When the dual QFT flows in the UV fixed point, the conformal symmetry washes out all the information about the IR length $r_{-}$ which, characterizes the hyperscaling violating phase Dong et al. 
(2012); Cadoni and Mignemi (2012). Similarly, the cosmological evolution washes out all the information about the initial parameter $t_{-}$ and all the physical parameters are completely determined by the cosmological constant. In the next sections we will show how our cosmological solutions can be used to model dark energy. ## IV Dark energy models It is well known that dark energy can be considered a modified form of matter. The simplest way to model it, is by means of a scalar field (usually called quintessence) coupled to usual Einstein gravity, i.e with a model given by (1) with properly chosen potential. Modelling dark energy with a scalar field has many advantages. Unlike the cosmological constant scenario, the energy density of the scalar field at early times does not necessarily need to be small with respect to the other forms of matter. Cosmological evolution can be described as a dynamical system. It allows for the existence of attractor-like solutions (the so called “trackers”) in which the energy density of the scalar field is comparable with the the usual matter-fluid density for a wide range of initial conditions. This helps to solve the so-called coincidence problem of dark energy (see e.g. Amendola and Tsujikawa (2010)). The model described by Eq. (1) with the potential (14) is a good candidate for realizing a tracking behaviour. In fact, at early times the potential behaves exponentially (see Eq. (III)) giving the power-law cosmological solution (29). This kind of solution have been widely used to produce tracking behavior at early times. Moreover, at late times our model flows in a dS solution (i.e a solution modelling dark energy as a cosmological constant). This could help to explain the present accelerated expansion of the universe characterized by the tiny energy scale $\Lambda\approx 10^{-123}m_{pl}^{2}$. Obviously, to be realistic our models must pass all the tests coming from cosmological observations. The most stringent coming from the above value of the cosmological constant. In this section we will address the issues sketched above for our cosmological model (14). Being dark energy described as an exotic form of matter, useful information comes from its equation of state $p_{\phi}=w_{\phi}\rho_{\phi}$. For a quintessence model described by the action (1) one has $w_{\phi}=\frac{p_{\phi}}{\rho_{\phi}}=\frac{T(\phi)-V_{c}(\phi)}{T(\phi)+V_{c}(\phi)}=\frac{1-K(\phi)}{1+K(\phi)}.$ (31) where $T(\phi)=\dot{\phi}^{2}/2$ (the dot means derivation with respect the cosmic time $\tau$) is the kinetic energy of the scalar field and we have defined $K(\phi)=V_{c}/T$ as the ratio between potential and kinetic energy. The expression of $T$ and $K$ as a function of $\phi$ can be easily computed using Eq. (18) and (12). We have $\frac{t}{t_{-}}=(1-e^{\gamma\phi})^{-1}$ and $T(\phi)=\frac{2}{(R_{0}\gamma)^{2}}e^{2\gamma\beta\phi}\sinh^{2}(\gamma\phi/2)$. Whereas for $K$ we obtain $K(\phi)=\gamma^{2}\frac{2-8\beta^{2}+(1+8\beta^{2})\cosh\gamma\phi-6\beta\sinh\gamma\phi}{\sinh^{2}\frac{\gamma\phi}{2}}.$ (32) From these equations one can easily derive the time evolution of the parameter of state $w_{\phi}$. At $\tau=0$, corresponding to $\phi=-\infty$, both the kinetic and potential energy, as a function of $\phi$, diverge exponentially but their ratio is constant. $w_{\phi}$ takes the $\beta$-dependent value $w_{0}(\beta)=-\frac{1+10\beta}{3(1+2\beta)}.$ (33) In the range of variation of $\beta$ we have $-7/9\leq w_{0}<1$. 
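These limits can be checked directly. A small numerical sketch (names and sample values ours): evaluating $w_{\phi}$ from Eqs. (31)–(32) deep in the early-time region ($\phi\to-\infty$) reproduces $w_{0}(\beta)$ of Eq. (33), while near $\phi=0$ the kinetic energy vanishes and $w_{\phi}$ approaches $-1$, as discussed below.

```python
# Limits of the state parameter (31): w_phi -> w_0(beta), Eq. (33), as
# phi -> -infinity, and w_phi -> -1 as phi -> 0.  Names/values are ours.
import numpy as np

def w_phi(phi, beta):
    gamma = 1.0/np.sqrt(1.0 - 4.0*beta**2)
    g = gamma*phi
    K = gamma**2*(2 - 8*beta**2 + (1 + 8*beta**2)*np.cosh(g)
                  - 6*beta*np.sinh(g))/np.sinh(g/2)**2      # Eq. (32)
    return (1.0 - K)/(1.0 + K)                              # Eq. (31)

for beta in (-0.2, -0.125, 0.0, 0.25):
    w0 = -(1 + 10*beta)/(3*(1 + 2*beta))                    # Eq. (33)
    print(beta, w_phi(-40.0, beta) - w0, w_phi(-1e-3, beta) + 1.0)  # both ~ 0
```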
In particular, for $0\leq\beta\leq 1/4$, $w_{0}(\beta)$ is always negative ($-7/9\leq w_{0}\leq-1/3$), whereas for $-1/4<\beta\leq 0$ , $w_{0}(\beta)$ goes from $-1/3$ to $1$. For $0<\tau<\infty$ (corresponding to $-\infty<\phi<0$) the ratio $K$ increases and, correspondingly, $w_{\phi}$ decreases, monotonically from $w_{\phi}=w_{0}(\beta)$ to $w_{\phi}=-1$. At $\tau=\infty$ ($\phi=0$) the potential energy goes to a minimum, the kinetic energy vanishes and the state parameter $w_{\phi}$ attains the value corresponding to a cosmological constant $w_{\phi}=-1$. As expected dark energy has an equation of state with $-1\leq w_{\phi}<1$ negative, but bigger than $-1$. The $w_{\phi}=-1$ value, corresponding to a cosmological constant, is attained when the potential rolls in its $\phi=0$ minimum at $\tau=\infty$. The behaviour of the parameter $w_{\phi}(t)$ is perfectly consistent with what we found for the acceleration parameter $A$. In fact, for $\beta$ positive $-1\leq w_{\phi}(t)<-1/3$ and the universe always accelerates. For $\beta$ negative, $-1\leq w_{\phi}(t)<1$ and we have a transition from early-times deceleration ($w_{\phi}(t)>-1/3$) to late-times acceleration ($w_{\phi}(t)<-1/3$). As we have mentioned in the previous section, in our model, late-time acceleration is not produced by the usual mechanism used in quintessence models, i.e by a slow-roll of the scalar field. Late-time acceleration requires $w_{\phi}<-1/3$ hence from Eq. (31), $K=V_{c}/T>2$. Sufficient conditions to satisfy the latter inequality is a slow evolution of the scalar field, which is guaranteed by the slow-roll conditions Bassett et al. (2006) $\epsilon=\left(\frac{1}{V_{c}}\frac{dV_{c}}{d\phi}\right)^{2}\ll 1,\quad|\mu|=2\left|\frac{1}{V_{c}}\frac{d^{2}V_{c}}{d\phi^{2}}\right|\ll 1.$ (34) In our model, the potential $V_{c}$ at late times behaves as Klein-Gordon potential (19), so that we have: $\epsilon=\mu=\frac{4}{\phi^{2}}$ (35) Obviously the slow-roll parameters 35 go to infinity at late time when $\phi$ approaches to $0$. However, the slow-roll conditions (34) are sufficient but not necessary for having late-time acceleration. In our model the condition $V_{c}>2T$ is satisfied by an alternative (freezing) mechanism: at late times the scalar field approaches its minimum at $\phi=0$ in which the potential energy $V_{c}$ is constant and non-vanishing whereas the kinetic energy $T$ is zero. ## V Coupling to matter Until now we have considered a quintessence model (1) with the potential (14) and shown that for a wide range of the parameter $\beta$ it can be consistently used to produce a late-time accelerating universe. The next step is to introduce matter fields in the action, in the form of a general perfect fluid (non-relativistic matter or radiation). Obviously, this is a crucial step because the key features of quintessence model (tracking behavior, stability etc.) are related to the presence of matter. In presence of matter the cosmological equations can be written as ${\dot{H}}=-\frac{1}{4}\left(\rho_{\phi}+\rho_{M}+p_{\phi}+p_{M}\right),\,\quad\ddot{\phi}+3H\dot{\phi}=-\frac{dV_{c}}{d\phi},\,\quad H^{2}=\frac{1}{6}\left(\rho_{\phi}+\rho_{M}\right),$ (36) where $\rho_{\phi}={\dot{\phi}}^{2}/2+V_{c},\,p_{\phi}={\dot{\phi}}^{2}/2-V_{c}$ are the density and pressure of the quintessence field, whereas $\rho_{M}$ and $p_{M}$ are those of matter, related by the equation of state $p_{M}=w_{M}\rho_{M}$. The cosmological dynamics following from Eqs. (36) can be recast in the form of a dynamical system. 
By defining $x={\dot{\phi}}/(\sqrt{12}H),\,y=\sqrt{V_{c}}/(\sqrt{6}H),\,N=\log R$, the cosmological equations (36) take the form (see e.g. Amendola and Tsujikawa (2010)): $\displaystyle\frac{dx}{dN}$ $\displaystyle=$ $\displaystyle-3x+\sqrt{\frac{3}{2}}\lambda y^{2}+\frac{3}{2}x\left[\left(1-w_{M}\right)x^{2}+\left(1+w_{M}\right)\left(1-y^{2}\right)\right]$ $\displaystyle\frac{dy}{dN}$ $\displaystyle=$ $\displaystyle-\sqrt{\frac{3}{2}}\lambda xy+\frac{3}{2}y\left[\left(1-w_{M}\right)x^{2}+\left(1+w_{M}\right)\left(1-y^{2}\right)\right]$ $\displaystyle\frac{d\lambda}{dN}$ $\displaystyle=$ $\displaystyle-\sqrt{6}\lambda^{2}\left(\Gamma-1\right)x,\quad\lambda=-\frac{\sqrt{2}}{V_{c}}\frac{dV_{c}}{d\phi},\quad\Gamma=V_{c}\frac{d^{2}V_{c}}{d\phi^{2}}\left(\frac{dV_{c}}{d\phi}\right)^{-2}.$ (37) This form of the dynamics is particularly useful for investigating the fixed points of the dynamics and their stability. In the case of a potential given by Eq. (14) neither $\lambda$ nor $\Gamma$ are constant and Eqs. (V) cannot be solved analytically. Even the characterization of the fixed points of the dynamical system is rather involved. To gain information about the cosmological dynamics we will use a simplified approach. We will first consider the dynamics in the two limiting regimes of small and large cosmic time, i.e $(1)\,\,\phi\to-\infty,\quad(2)\,\,\phi=0$ in which the potential behaves, respectively, exponentially (see Eq. (III) and quadratically (see Eq. (19)) and the scale factor evolves, respectively, as power-law and exponentially. After that we will describe qualitatively the cosmological evolution in the intermediate region $\phi\approx-1/\gamma$. ### V.1 Power-law evolution In the case of an exponential potential $\lambda=const$ in Eq. (V). Both the fixed points of the dynamical system (V) and their stability are well known Copeland et al. (1998); Neupane (2004); Amendola and Tsujikawa (2010)). Apart from fluid-dominated and quintessence-kinetic-energy-dominated fixed points, which are not interesting for our purposes, we have two fixed points in which the scale factor $R$ has a power-law behavior. The first fixed point is obtained for $x=\frac{\lambda}{\sqrt{6}},\quad y=\sqrt{1-\frac{\lambda^{2}}{6}},\quad\lambda=\sqrt{\frac{2(1-2\beta)}{1+2\beta}},$ (38) describes a quintessence-dominated solution with $\Omega_{\phi}=\frac{\rho_{\phi}}{6H^{2}}=1$ and a constant parameter of state $w_{\phi}=w_{0}{(\beta})$ with $w_{0}{(\beta})$ given by Eq. (33). This fixed point is stable for $\beta>\beta_{0}=-\frac{1+3w_{M}}{2(5+3w_{M})}.$ (39) Notice that if we take matter with $0\leq w_{M}<1$ we have $-1/4<\beta_{0}\leq-1/10$ so that the region of stability is inside the range of definition of the parameter $\beta$. One can easily realize that this solution is nothing but the previously found power-law solution (29) with the constant parameter of state $w_{0}(\beta)$ given by Eq. (33). Because $\Omega_{\phi}=1$ this solution cannot be obviously used to realize the radiation or matter-dominated epochs. Phenomenologically more interesting is the second fixed point of the dynamical system (V) with an exponential potential. This is the so-called scaling solution Copeland et al. (1998); Liddle and Scherrer (1999) and is given by $x=\sqrt{\frac{3}{2}}\frac{(1+w_{M})}{\lambda},\quad y=\sqrt{\frac{3(1-w_{M}^{2})}{2\lambda^{2}}},$ (40) where $\lambda$ is given as in Eq. (38). 
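Although Eqs. (V) cannot be solved analytically for the potential (14), in the constant-$\lambda$ (exponential) regime the attractor character of the scaling point (40) is easy to exhibit numerically. A minimal sketch (variable names and the sample values $w_{M}=0$, $\beta=-1/8$ are ours), assuming SciPy:

```python
# In the constant-lambda regime, trajectories of the dynamical system (V)
# are attracted to the scaling fixed point (40).  Sample values are ours.
import numpy as np
from scipy.integrate import solve_ivp

wM, beta = 0.0, -0.125
lam = np.sqrt(2.0*(1 - 2*beta)/(1 + 2*beta))       # lambda of Eq. (38)

def rhs(N, u):
    x, y = u
    s = 1.5*((1 - wM)*x**2 + (1 + wM)*(1 - y**2))
    return [-3*x + np.sqrt(1.5)*lam*y**2 + x*s,
            -np.sqrt(1.5)*lam*x*y + y*s]

sol = solve_ivp(rhs, (0.0, 40.0), [0.01, 0.1], rtol=1e-10, atol=1e-12)
x_fix = np.sqrt(1.5)*(1 + wM)/lam                  # Eq. (40)
y_fix = np.sqrt(1.5*(1 - wM**2))/lam
print(sol.y[0, -1] - x_fix, sol.y[1, -1] - y_fix)  # both ~ 0
print(sol.y[0, -1]**2 + sol.y[1, -1]**2)           # Omega_phi ~ 3(1+wM)/lam^2 = 0.9
```

Starting far from the fixed point, the trajectory settles onto (40) within a few tens of e-foldings.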
This scaling solution is characterized by a constant ratio $\Omega_{\phi}/\Omega_{M}$ and by the equality of the parameter of state for quintessence and matter $w_{\phi}=w_{M}$. Moreover we have $\Omega_{\phi}=x^{2}+y^{2}=\frac{3}{\lambda^{2}}(1+w_{M}).$ (41) The scale factor $R$ behaves also in this case as a power-law, with a $w_{M}$ dependent exponent, $R\propto\tau^{2/(3(1+w_{M}))}$. The scaling solution is a stable attractor for $\beta_{1}<\beta<\beta_{0}$, where $\beta_{0}$ is given as in Eq. (39) and $\beta_{1}=-\frac{12w_{M}^{2}+15w_{M}+5}{2(12w_{M}^{2}+33w_{M}+19)}.$ (42) Notice that for ordinary matter characterized $0\leq w_{M}<1$ we have $-1/4<\beta_{0}\leq-1/10$ and $-1/4<\beta_{1}\leq-5/38$. Hence, the range of stability of the scaling solution is well inside the range of definition of $\beta$. For $\beta>\beta_{0}$ the scaling solution is a saddle point, whereas for $\beta<\beta_{1}$ it is a stable spiral. The scaling solution has features that make it very appealing for describing the early-time universe. The ratio $\Omega_{\phi}/\Omega_{M}$ is constant and $\lambda$-dependent, in principle $\lambda$ can be chosen in such way that $\Omega_{\phi}$ and $\Omega_{M}$ have the same order of magnitude. Moreover the solution is an attractor making the dynamics largely independent of the initial conditions. These features allow to solve the coincidence problem. Cosmological evolution will be driven sooner or later to the scaling fixed point, allowing to have a value of density of the scalar field of the same order of magnitude of matter (or radiation) at the ending of inflation. Despite these nice features the scaling solution alone cannot be used to model the matter-dominated epoch of our universe for several reasons. Because $w_{\phi}=w_{M}$ it is not possible to realize cosmic acceleration using a scaling solution. The universe must therefore exit the scaling era, characterized by $\Omega_{\phi}=constant$, to connect to the accelerated epoch, but this is not possible if the parameters are within the range of stability of the solution. An other problem comes from nucleosynthesis constraints. They require $\Omega_{\phi}/\Omega_{M}<0.2$. However, in the range of the parameter $\beta$ where the scaling solution is a stable node the minimum value of the ratio is given by $\Omega_{\phi}/\Omega_{M}=(7+9w_{M})/(1-w_{M})$. In the most favourable case, $w_{M}=0$ (non-relativistic matter), we still have $\Omega_{\phi}=7\Omega_{M}$. The situation improves if we move in the region where the scaling solution is a stable spiral. Taking $w_{M}=0$ we find $1<\Omega_{\phi}/\Omega_{M}<7$, with $\Omega_{\phi}/\Omega_{M}\to 1$ for $\beta\to-1/4$. In the model under consideration some of these difficulties have the chance to be solved because the dynamics exits naturally the scaling era, at times when the exponential approximation $\lambda\approx const.$ is not anymore valid. ### V.2 Exponential evolution At late cosmic times the scalar field potential behaves as in Eq. (19) and the dynamics of the scalar field is governed by the equation: $\ddot{\phi}+3H\dot{\phi}+m^{2}\phi=0,$ (43) which is can be considered as describing a damped harmonic oscillator. In this analogy the scalar mass $m$ represents the pulsation of the oscillations and the Hubble parameter $H$ acts as a (Hubble) friction term. 
Two cases are possible Turner (1983); Dutta and Scherrer (2008): $(a)$ $r=3H/m>1$, the oscillations are suppressed by Hubble friction and $\phi$ goes to a constant value (overdamping); $(b)$ $r\ll 1$, the oscillating term dominates over Hubble friction and the scalar field oscillates around the minimum of the potential. Depending on the global dynamics of the system either case $(a)$ or case $(b)$ will be realized. Presently we do not have exact control of these global dynamics. By studying the intermediate regime, however, we will give in Sect. V.3 strong evidence that the cosmological evolution will be driven close to the de Sitter point where $\phi\approx 0$. In the limit $\phi\to 0$ we have $V_{c}/\dot{\phi}^{2}\gg 1$ and the scalar field is frozen to a constant value, so one can easily see that case $(a)$ is realized. This can also be checked directly. From Eq. (30) we can easily read out the ratio $r=\frac{3}{\sqrt{2}}>1$ for our model, so that we have overdamping. The absolute value of the scalar field decreases and approaches asymptotically the minimum of the potential, where we can approximate $V_{c}(\phi)$ by a constant. Moreover, the value of the ratio $r$ does not depend on the parameter $\beta$. The value of the scalar field is completely determined by Eq. (43) and, in particular, is independent of the early-time dynamics. This is again a manifestation of the conformal and scaling symmetries of the gravitational background: once the cosmological dynamics is driven near to the de Sitter vacuum any memory of the scaling regime is lost, the dynamics becomes universal and depends only on one mass-scale, which is set by the cosmological constant. This behaviour has to be compared with that pertinent to the previously discussed slow-roll conditions (34). They correspond to having $V_{c}\gg\dot{\phi}^{2}$ and $|\ddot{\phi}|\ll|3H\dot{\phi}|$ in (43). We can produce in this way late-time acceleration, but then the late-time dynamics is not universal and depends on the details of the model. Because of overdamping, the cosmological evolution will be driven near to the minimum of the potential $V_{c}(\phi)$. In this region the potential at leading order can be approximated by a cosmological constant, $V_{c}(\phi)=6/R_{0}^{2}$. For a constant potential we have $\lambda=0$ in Eq. (V) and we can easily find the fixed points of the dynamical system. We have three fixed points: $(1)$ $x=y=\Omega_{\phi}=w_{\phi}=0,\,\Omega_{M}=1,$ which represents a fluid-dominated solution; $(2)$ $y=0,\,x=\pm 1,\,\Omega_{\phi}=w_{\phi}=1,\,\Omega_{M}=0,$ which represents a solution dominated by the kinetic energy of the scalar field; $(3)$ $x=0,\,y=\pm 1,\,\Omega_{\phi}=1,\,w_{\phi}=-1,\,\Omega_{M}=0,$ which represents a solution dominated by the energy of the vacuum (cosmological constant). Obviously, the only physical candidate for describing the late-time evolution of our universe is fixed point $(3)$. Neglecting the solution with negative $y$ (representing an exponentially shrinking universe), the solution with $y=1$ gives the de Sitter spacetime, an exponentially expanding universe with $H=R_{0}^{-1}$, i.e. $R\propto{\rm e}^{\tau/R_{0}}$. By linearizing Eqs. (V) around the fixed point, one can easily find that the de Sitter solution is a stable node of the dynamical system. In fact the two eigenvalues of the matrix describing the linearized system are real and negative ($-3,-3(1+w_{M})$). Actually, for $\lambda=0$ one can go further and integrate exactly the dynamical system (V).
After some calculation one finds $y=\frac{1}{\sqrt{1+cR^{-3(1+w_{M})}-a^{2}R^{-6}}},\quad x=\frac{aR^{-3}}{\sqrt{1+cR^{-3(1+w_{M})}-a^{2}R^{-6}}},$ (44) where $a,c$ are integration constants. Eq. (44) confirms that the dS spacetime is an attractor of the dynamical system. In fact, the two-parameter family of solutions (44) has a node at $x=0,y=1$ to which every member of the family approaches as $R\to\infty$. The three terms in the square root in the denominator represent, respectively, the contribution of the energy of the vacuum, the contribution of matter, and the contribution of the kinetic energy of the scalar field. One can easily see that at late times ($R\to\infty$) the vacuum energy always dominates over the other two contributions. Moreover, the scalar field kinetic energy contribution is always subdominant with respect to the matter contribution. In the absence of matter ($c=0$) we have $HR_{0}=\sqrt{1-a^{2}R^{-6}},\quad\dot{\phi}^{2}\propto R^{-6}$, telling us that the kinetic energy of the scalar field falls off very rapidly as the scale factor $R$ increases. An explicit form of the time dependence of the scale factor can be derived from (44) only after fixing the parameter of state of matter. For dust ($w_{M}=0$) and radiation ($w_{M}=1/3$) we find $R_{dust}(\tau)=c_{1}\left[\cosh\frac{3}{2R_{0}}(\tau-\tau_{0})\right]^{\frac{2}{3}},\quad R_{rad}(\tau)=c_{2}\left[\cosh\frac{2}{R_{0}}(\tau-\tau_{0})\right]^{\frac{1}{2}},$ (45) where $c_{1,2},\tau_{0}$ are constants. Summarizing, if the cosmological evolution is such that the system is driven near to the minimum of the potential $V_{c}$, i.e. the region where the potential can be approximated by a cosmological constant, then the universe will necessarily enter the regime of exponential expansion described by the dS spacetime. Obviously, the crucial question is: will the system be driven to this near-minimum region? A definite answer to this question requires full control of the global dynamics of the system (V). In the next subsection, by analyzing the intermediate region of the potential $V_{c}$, we will give strong indications that this is indeed the case. ### V.3 Intermediate regime A key role in discussing cosmological evolution in presence of dark energy is played by the so-called tracker solutions Steinhardt et al. (1999). These solutions are special attractor trajectories in the phase space of the dynamical system (V) characterized by having approximately constant $\lambda,\Omega_{\phi},w_{\phi}$. If the time-scale of the variation of $\lambda$ is much less than $H^{-1}$, we can consider these trajectories as built up from instantaneous fixed points changing in time Steinhardt et al. (1999); Amendola and Tsujikawa (2010). Thus, tracker solutions are very useful to solve the coincidence problem. During the matter-dominated epoch dark energy tracks matter, the ratio $\Omega_{\phi}/\Omega_{M}$ remains almost constant and $w_{\phi}$ remains close to $w_{M}$ with $w_{\phi}<w_{M}$. Moreover, if the condition $\Gamma>1$ along the trajectory is satisfied, $\lambda$ decreases toward zero. Once the value $\lambda^{2}=3(1+w_{M})$ is reached, the fixed point (38) with $\Omega_{\phi}=1$ becomes stable and the universe exits the scaling phase to enter the accelerated phase. To check if our solutions behave as trackers, let us first calculate the parameter $\Gamma$ of Eq. (V) for our potential (14).
We get $\Gamma-1=\gamma^{2}\frac{1-16\beta^{2}+2(1+8\beta^{2})\cosh\gamma\phi-12\beta\sinh\gamma\phi}{\left(4\beta-4\beta\cosh\gamma\phi+\sinh\gamma\phi\right)^{2}}.$ (46) One can check analytically and numerically that for $-1/4<\beta\leq 1/4$ we have $\Gamma-1=0$ for $\phi=-\infty$. In the range $\phi\in(-\infty,0)$, $\Gamma-1$ monotonically increases and blows up to $\infty$ as $1/\phi^{2}$ for $\phi=0$. In Figs. 1 we show the plot of $\lambda$ and $\Gamma-1$ as a function of $\phi$ for selected values of the parameter $\beta$. The curves remain flat till the scalar field reaches values of order $-1/\gamma$. Moreover $\Gamma-1$ is exponentially suppressed as $\phi\to-\infty$ and stays flat, near to zero, till we reach values of $|\phi|$ of order $1/\gamma$. For instance for $-\infty<\phi<-10/\gamma$ we have $0<(\Gamma-1)<10^{-4}$. This shows that in the range $(-\infty,{\cal{O}}(1/\gamma))$, $\Gamma$ varies very slowly as a function of $\phi$. The same is true if we consider $\Gamma$ as a function of the number of e-foldings $N$. In fact we have $d\Gamma/dN=\sqrt{12}x(d\Gamma/d\phi)$ and because $x$ flows from the value $x=\sqrt{3/2}(1+w_{M})/\lambda$ at the scaling fixed point to $x=0$ at the dS fixed point we conclude that $\Gamma-1$ is also a slowly varying function of $N$. Notice that the previous features are not anymore true for $1/4<|\beta|<1/2$. This is because in these range of $\beta$ the denominator in Eq. (46) has a zero at finite negative values of $\phi$, namely for $\cosh\gamma\phi=-(1+16\beta^{2})/(1-16\beta^{2})$. Being $\Gamma$ nearly constant and $\Gamma>1$, we have a tracker behaviour of our solutions till the scalar field reaches values of order $1/\gamma$. In this region we have (see e.g. Amendola and Tsujikawa (2010)) $w_{\phi}=\frac{w_{M}-2(\Gamma-1)}{1+2(\Gamma-1)}.$ (47) Being $w_{\phi}<w_{M}$ dark energy evolves more slowly then matter. Also $\lambda$ and the ratio $\Omega_{\phi}/\Omega_{M}$ varies slowly. $\lambda$ decreases toward zero, whereas $\Omega_{\phi}/\Omega_{M}$ increases. The main difference between our model and the usual tracker solutions is the way in which the universe exits the scaling behaviour and produces the cosmic acceleration. In the usual scenario this happens when $\lambda$ reaches the lower bound for stability of the scaling solution, $\lambda^{2}=3(1+w_{M})$. One can easily check that for our models this happens instead when the system reaches the region where the approximation of slow varying $\lambda$ and $\Gamma$ does not hold anymore. The universe exits the scaling regime when it reaches the regions $\phi\sim-1/\gamma$ where $\Gamma$ and $\lambda$ vary very fast and we have a sharp transition to the dS phase. This transition is the cosmological counterpart of the hyperscaling violating/AdS phase transition in holographic theories of gravity Cadoni et al. (2010); Gouteraux and Kiritsis (2012). We are now in position of giving a detailed, albeit qualitative, description of the global behavior of our FLRW solutions. This behaviour depends on the range of variation of the parameter $\beta$. We have to distinguish three different cases: $(I):\beta<\beta_{1};\,(II):\beta_{1}<\beta<\beta_{0};\,(III):\beta>\beta_{0}$ with $\beta_{0,1}$ given by Eq. (39) and (42). In case $(I)$ the scaling solution, describing the universe at early times, is a stable spiral and $\Omega_{\phi}/\Omega_{M}\approx 1$. 
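For orientation, the sketch below (names ours; non-relativistic matter, $w_{M}=0$) evaluates the class boundaries $\beta_{0}$ and $\beta_{1}$ from Eqs. (39) and (42), and the tracker quantity $\Gamma-1$ of Eq. (46) for the three representative values of $\beta$ used in the figures below.

```python
# Class boundaries beta_0, beta_1 (Eqs. (39), (42)) and the tracker quantity
# Gamma - 1 of Eq. (46) for w_M = 0.  Names are ours.
import numpy as np

wM = 0.0
beta0 = -(1 + 3*wM)/(2*(5 + 3*wM))                           # Eq. (39): -1/10
beta1 = -(12*wM**2 + 15*wM + 5)/(2*(12*wM**2 + 33*wM + 19))  # Eq. (42): -5/38
print(beta1, beta0)

def Gamma_minus_1(phi, beta):
    gamma2 = 1.0/(1.0 - 4.0*beta**2)
    g = np.sqrt(gamma2)*phi
    num = 1 - 16*beta**2 + 2*(1 + 8*beta**2)*np.cosh(g) - 12*beta*np.sinh(g)
    den = (4*beta - 4*beta*np.cosh(g) + np.sinh(g))**2
    return gamma2*num/den                                    # Eq. (46)

for beta in (-15/64, -1/8, 0.0):          # representative of classes I, II, III
    print(beta, Gamma_minus_1(-12.0, beta), Gamma_minus_1(-0.05, beta))
# second column stays below ~1e-4 (flat tracker region);
# third column is ~1e3 and grows like 1/phi^2 as phi -> 0
```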
As the cosmic time increases, $\Omega_{\phi}/\Omega_{M}$ stays almost constant and $\lambda$ decreases toward the value $\lambda^{2}=24(1+w_{M})^{2}/(7+9w_{M})$ below which the scaling solution is a stable node. However, this value is not in the region of slowly varying $\lambda$. Cosmological evolution undergoes a sharp transition to the dS accelerating phase. The behaviour of $\lambda$, $\Gamma-1$, $\Omega_{\phi}$ and $w_{\phi}$ as a function of $\phi$ for this class of solutions is shown in Figs. 1 and 2, where we take as representative element $\beta=-15/64$ and non-relativistic matter, $w_{M}=0$. Notice that Fig. 2 has been produced using the expressions (41) and (47) for $\Omega_{\phi}$ and $w_{\phi}$, respectively, which are valid in the region of slow variation of $\lambda$ and $\Gamma$. Therefore, the plots can be trusted only in the region $\phi\ll-1/\gamma$. In case $(II)$ the scaling solution, describing the universe at early times, is a stable node and $\Omega_{\phi}/\Omega_{M}={\cal{O}}(1)$ but $\Omega_{\phi}>\Omega_{M}$. At early times $\lambda$ decreases very slowly. As explained above, there is no smooth transition to the accelerating, scalar-field-dominated phase (38) with $\Omega_{\phi}=1$, but a sharp transition to the de Sitter phase. The behaviour of $\lambda$, $\Gamma-1$, $\Omega_{\phi}$ and $w_{\phi}$ as a function of $\phi$ for this class of solutions is shown in Figs. 1 and 2, where we take as representative element $\beta=-1/8$ and $w_{M}=0$. In case $(III)$ the scaling solution is a saddle point and at early times the accelerating, scalar-field dominated solution (38) is stable. We have $w_{\phi}<-1/3$ and $\Omega_{\phi}=1$. Here we have a transition from a power-law, accelerating universe at early times to the de Sitter solution at late times. Obviously, this case is not realistic because it cannot describe the matter-dominated era. The behaviour of $\lambda$, $\Gamma-1$, $\Omega_{\phi}$ and $w_{\phi}$ as a function of $\phi$ for this class of solutions is shown in Figs. 1 and 2 for $\beta=0$ and $w_{M}=0$.

Figure 1: Plot of the function $\lambda(\phi)$ (left panel) and $\Gamma(\phi)-1$ (right panel) for selected values of the parameter $\beta$ representative of the three classes of solutions ($I,II$ and $III$) and for $w_{M}=0$. The thin, red lines are the plots for the model of class $I$ with $\beta=-15/64$. The thick blue lines give the plots of a model in class $II$ with $\beta=-1/8$. The green, dashed lines correspond to a model in class $III$ with $\beta=0$.

Figure 2: Plot of the function $w_{\phi}(\phi)$ (left panel) and $\Omega_{\phi}(\phi)$ (right panel) for selected values of the parameter $\beta$ representative of the three classes of solutions ($I,II$ and $III$) and for $w_{M}=0$. The thin, red lines are the plots for the model of class $I$ with $\beta=-15/64$. The thick blue lines give the plot of a model in class $II$ with $\beta=-1/8$. The green, dashed lines correspond to a model in class $III$ with $\beta=0$.

Let us conclude this section with a brief, general discussion about the parameters entering our model. Basically, apart from the Planck mass, the action (1) contains a dimensionless parameter $\beta$ and a length scale $R_{0}$ (notice that in Eq. (1) we have set $\kappa^{2}=8\pi G=1/2$). In addition we have the integration constants of the differential equations (V), which have to be determined by solving the Cauchy problem.
Some of these constants will be related to $t_{-}$ and to $a,c$, which characterize, respectively, the power-law regime (29) and the exponential regime (44). However, the scale symmetries of the gravitational background together with the attractor behaviour of the scaling solution and of the de Sitter fixed point make the cosmological dynamics largely, if not completely, independent of the initial conditions. Cosmological evolution can be seen as a flow from a scaling fixed point to a conformal dS fixed point, in which the system loses any memory of the initial conditions. The final state is therefore completely characterized by the length scale $R_{0}$, which determines everything (Hubble parameter, acceleration, cosmological constant and the mass of the scalar, see Eq. (30)). The length scale $R_{0}$ can be fixed by the dark energy density necessary to explain the present acceleration of the universe, $\rho_{de}\approx 10^{-123}m_{p}^{4}$. This gives a mass of the scalar $m\approx 10^{-33}\,{\rm eV}$. On the one hand this uniqueness gives a lot of predictive power to the model; on the other hand, the presence of an extremely light scalar excitation runs into the well-known problems in the framework of particle physics, SUGRA theories and cosmological constant scenarios Peebles and Ratra (2003); Padmanabhan (2003). ## VI Conclusions In this paper we have shown that scalar solitonic solutions of holographic models with hyperscaling violation have an interesting cosmological counterpart, which can be obtained by analytical continuation and by flipping the sign of the potential for the scalar field. The resulting flat FLRW solutions can be used to model cosmological evolution driven by dark energy and usual matter. In the absence of matter, the flow from the hyperscaling violating regime to the conformal AdS fixed point in holographic models corresponds to cosmological evolution from a power-law regime at early cosmic times to a dS fixed point at late times. In the presence of matter, we have a scaling regime at early times, followed by an intermediate regime with tracking behaviour. At late times the solution exits the scaling regime with a sharp transition to a de Sitter spacetime. The phase transition between hyperscaling violation and conformal fixed point observed in holographic gravity has a cosmological analogue in the transition between a scaling era and a dS era dominated by the energy of the vacuum. We have been able to solve exactly the dynamics only in the absence of matter. When matter is present we do not have full control of the global solutions. Nevertheless, by writing the cosmological equations as a dynamical system and by investigating three approximate regimes we have given strong evidence that the above picture is realized. At the present stage our model for dark energy cannot be completely realistic. In the matter-dominated epoch the ratio $\Omega_{\phi}/\Omega_{m}\approx 1$, so that we have a problem with nucleosynthesis. Moreover, the late-time cosmology shares the same problems as all cosmological constant scenarios. The vacuum energy is an unnaturally tiny free parameter of the model. The same is true for the mass of the scalar excitation associated with the quintessence field. There are several open questions, which are worth investigating in order to support the above picture. One should derive the exact full phase space description of the dynamical system in the presence of matter to check the correctness of our results.
In particular, having full control on the phase space would give a precise description of the sharp transition between the scaling and the dS regime. This would also help us to shed light on the analogy between the cosmological transition and the hyperscaling violation/ AdS holographic phase transition. Other key points that could improve our knowledge on the subject are: (1) Comparison between the cosmological dynamics and the RN group equations for the holographic gravity theory; (2) understanding of the analogy phase transition/cosmological transition in terms of the analytical continuation. ###### Acknowledgements. We thank O. Bertolami and S. Mignemi for discussions and valuable comments. ## References * Hartnoll et al. (2008a) S. A. Hartnoll, C. P. Herzog, and G. T. Horowitz, Phys. Rev. Lett. 101, 031601 (2008a), eprint 0803.3295. * Hartnoll et al. (2008b) S. A. Hartnoll, C. P. Herzog, and G. T. Horowitz, JHEP 12, 015 (2008b), eprint 0810.1563. * Horowitz and Roberts (2008) G. T. Horowitz and M. M. Roberts, Phys.Rev. D78, 126008 (2008), eprint arXiv:0810.1077. * Charmousis et al. (2009) C. Charmousis, B. Gouteraux, and J. Soda, Phys. Rev. D80, 024028 (2009), eprint 0905.3337. * Cadoni et al. (2010) M. Cadoni, G. D’Appollonio, and P. Pani, JHEP 03, 100 (2010), eprint 0912.3520. * Goldstein et al. (2010) K. Goldstein, S. Kachru, S. Prakash, and S. P. Trivedi, JHEP 08, 078 (2010), eprint 0911.3586. * Gouteraux and Kiritsis (2012) B. Gouteraux and E. Kiritsis (2012), eprint 1212.2625. * Gubser and Rocha (2010) S. S. Gubser and F. D. Rocha, Phys.Rev. D81, 046001 (2010), eprint 0911.2898. * Dong et al. (2012) X. Dong, S. Harrison, S. Kachru, G. Torroba, and H. Wang (2012), eprint arXiv:1201.1905. * Cadoni and Pani (2011) M. Cadoni and P. Pani, JHEP 1104, 049 (2011), eprint 1102.3820. * Cadoni and Mignemi (2012) M. Cadoni and S. Mignemi, JHEP 1206, 056 (2012), eprint arXiv:1205.0412. * Cadoni and Serra (2012) M. Cadoni and M. Serra (2012), eprint 1209.4484. * Narayan (2012) K. Narayan (2012), eprint arXiv:1202.5935. * Cadoni et al. (2013) M. Cadoni, P. Pani, and M. Serra, JHEP 1306, 029 (2013), eprint 1304.3279. * Huijse et al. (2012) L. Huijse, S. Sachdev, and B. Swingle, Phys.Rev. B85, 035121 (2012), eprint arXiv:1112.0573. * Strominger and Vafa (1996) A. Strominger and C. Vafa, Phys.Lett. B379, 99 (1996), eprint hep-th/9601029. * Strominger (1998) A. Strominger, JHEP 9802, 009 (1998), eprint hep-th/9712251. * Cadoni and Mignemi (1999) M. Cadoni and S. Mignemi, Phys.Rev. D59, 081501 (1999), eprint hep-th/9810251. * Peebles and Ratra (2003) P. Peebles and B. Ratra, Rev.Mod.Phys. 75, 559 (2003), eprint astro-ph/0207347. * Padmanabhan (2003) T. Padmanabhan, Phys.Rept. 380, 235 (2003), eprint hep-th/0212290. * McFadden and Skenderis (2010) P. McFadden and K. Skenderis, Phys.Rev. D81, 021301 (2010), eprint 0907.5542. * Strominger (2001) A. Strominger, JHEP 0110, 034 (2001), eprint hep-th/0106113. * Goheer et al. (2003) N. Goheer, M. Kleban, and L. Susskind, JHEP 0307, 056 (2003), eprint hep-th/0212209. * Cadoni and Carta (2004) M. Cadoni and P. Carta, Int.J.Mod.Phys. A19, 4985 (2004), eprint hep-th/0211018. * Skenderis and Townsend (2006) K. Skenderis and P. K. Townsend, Phys.Rev.Lett. 96, 191301 (2006), eprint hep-th/0602260. * Skenderis et al. (2007) K. Skenderis, P. K. Townsend, and A. Van Proeyen, JHEP 0708, 036 (2007), eprint 0704.3918. * Shaghoulian (2013) E. Shaghoulian (2013), eprint 1308.1095. * Kiritsis (2013) E. Kiritsis, JCAP 1311, 011 (2013), eprint 1307.5873. * Ford (1987) L. 
Ford, Phys.Rev. D35, 2339 (1987). * Wetterich (1988) C. Wetterich, Nucl.Phys. B302, 668 (1988). * Caldwell et al. (1998) R. Caldwell, R. Dave, and P. J. Steinhardt, Phys.Rev.Lett. 80, 1582 (1998), eprint astro-ph/9708069. * Zlatev et al. (1999) I. Zlatev, L.-M. Wang, and P. J. Steinhardt, Phys.Rev.Lett. 82, 896 (1999), eprint astro-ph/9807002. * Amendola and Tsujikawa (2010) L. Amendola and S. Tsujikawa, _Dark Energy: Theory and Observations_ (Cambridge University Press, 2010). * Cadoni et al. (2011) M. Cadoni, S. Mignemi, and M. Serra, Phys.Rev. D84, 084046 (2011), eprint arXiv:1107.5979. * Cadoni et al. (2012) M. Cadoni, S. Mignemi, and M. Serra, Phys.Rev. D85, 086001 (2012), eprint arXiv:1111.6581. * Sahni and Wang (2000) V. Sahni and L.-M. Wang, Phys.Rev. D62, 103517 (2000), eprint astro-ph/9910097. * Bertolami et al. (2012) O. Bertolami, P. Carrilho, and J. Paramos, Phys.Rev. D86, 103522 (2012), eprint 1206.2589. * DeWolfe et al. (2000) O. DeWolfe, D. Freedman, S. Gubser, and A. Karch, Phys.Rev. D62, 046008 (2000), eprint hep-th/9909134. * Bassett et al. (2006) B. A. Bassett, S. Tsujikawa, and D. Wands, Rev.Mod.Phys. 78, 537 (2006), eprint astro-ph/0507632. * Copeland et al. (1998) E. J. Copeland, A. R. Liddle, and D. Wands, Phys.Rev. D57, 4686 (1998), eprint gr-qc/9711068. * Neupane (2004) I. P. Neupane, Class.Quant.Grav. 21, 4383 (2004), eprint hep-th/0311071. * Liddle and Scherrer (1999) A. R. Liddle and R. J. Scherrer, Phys.Rev. D59, 023509 (1999), eprint astro-ph/9809272. * Turner (1983) M. S. Turner, Phys.Rev. D28, 1243 (1983). * Dutta and Scherrer (2008) S. Dutta and R. J. Scherrer, Phys.Rev. D78, 083512 (2008), eprint 0805.0763. * Steinhardt et al. (1999) P. J. Steinhardt, L.-M. Wang, and I. Zlatev, Phys.Rev. D59, 123504 (1999), eprint astro-ph/9812313.
# Bayesian improved cross entropy method for network reliability assessment

Jianpeng Chan Iason Papaioannou Daniel Straub Engineering Risk Analysis Group, Technische Universität München, Arcisstr. 21, 80290 München, Germany

###### Abstract

We propose a modification of the improved cross entropy (iCE) method to enhance its performance for network reliability assessment. The iCE method performs a transition from the nominal density to the optimal importance sampling (IS) density via a parametric distribution model whose cross entropy with the optimal IS density is minimized. The efficiency and accuracy of the iCE method are largely influenced by the choice of the parametric model. In the context of reliability of systems with independent multi-state components, the obvious choice of the parametric family is the categorical distribution. When updating this distribution model with standard iCE, the probability assigned to a certain category often converges to 0 due to the lack of occurrence of samples from this category during the adaptive sampling process, resulting in a poor IS estimator with a strong negative bias. To circumvent this issue, we propose an algorithm termed the Bayesian improved cross entropy method (BiCE). Therein, the posterior predictive distribution is employed to update the parametric model instead of the weighted maximum likelihood estimation approach employed in the original iCE method. A set of numerical examples illustrates the efficiency and accuracy of the proposed method.

###### keywords: Network reliability analysis, improved cross entropy method, categorical distribution

## 1 Introduction

Infrastructure networks, such as power transmission networks and water supply systems, operate as backbones of urban communities. Hence, it is essential to properly quantify the risk of network failure, which involves quantification of the failure probability of the network system. A network is considered as failed when it cannot deliver a specified level of performance. Mathematically, failure is described through a function $g(\cdot)$, known as the performance function, structure function, or limit state function (LSF). Let $\bm{X}$ be an $n$-dimensional vector of random variables with joint density function $p_{\bm{X}}(\bm{x})$. The failure event $F$ collects all system states $\bm{x}$ whose LSF $g(\bm{x})$ is less than or equal to 0, i.e., $F\triangleq\\{\bm{x}:g(\bm{x})\leq 0\\}$. The probability of failure is defined as $p_{f}\triangleq\mathbb{P}(g(\bm{X})\leq 0)=\mathbb{E}_{p}[\mathbb{I}\\{g(\bm{X})\leq 0\\}],$ (1) where $\mathbb{I}\\{\cdot\\}$ represents the indicator function, and $\mathbb{E}_{p}$ denotes the expectation with respect to the density $p_{\bm{X}}(\bm{x})$. The network performance is often measured through connectivity or flow. In connectivity-based problems, one evaluates the probability that a given set of nodes is disconnected [1, 2], and typically, both the network performance and the component states are modeled as binary random variables. In flow-based problems, one is interested in the flow that a network can deliver, e.g., the maximum number of passengers that can be transported from one city to another through the railway network. Multi-state or continuous random variables are often involved in this class of problems.
For water supply systems and power grids, the flow is driven by physical laws and operation strategies, and the initial failure of network components leads to a reconfiguration of the power flow that may trigger additional cascading failures. A set of methodologies has been proposed for evaluating the reliability in the above two classes of problems, among which sampling-based methods such as Monte Carlo simulation (MCS) and its different variants feature prominently [3, 4, 5, 6, 7, 8, 9, 10, 11]. For rare event simulation, i.e., when the failure probability $p_{f}$ is small, crude MCS is inefficient or even infeasible when the LSF is expensive to compute. In such cases, advanced sampling techniques such as subset simulation [12, 13, 14, 15, 16, 17] and importance sampling (IS) [18, 19, 20] should be employed to decrease the required number of LSF evaluations for obtaining an accurate estimate of $p_{f}$. Alternatively, Dehghani et al. [21] employ an actively trained surrogate model to approximate the computationally demanding LSF, resulting in an accurate and efficient estimator. In this paper, we focus on the IS technique for rare event estimation in static (or time-independent) network reliability problems. The performance of the IS method strongly depends on the choice of the IS density. The cross entropy (CE) method [22] determines the IS density as the member of a parametric family that has the minimal Kullback-Leibler (KL) divergence from the optimal IS density $p^{*}_{\bm{X}}(\bm{x})$. For rare event estimation, the KL divergence is minimized iteratively between the parametric family and a sequence of intermediate target densities that gradually approach $p^{*}_{\bm{X}}(\bm{x})$. The resulting density in the last iteration is then employed as the IS density for estimating $p_{f}$. Papaioannou et al. [23] further enhanced the performance of the CE method through modifying the intermediate target densities. The basic idea of this approach, termed the improved cross entropy (iCE) method, is to introduce a smooth transition from the input density to the optimal IS density to make better use of the intermediate samples. In the context of network reliability assessment with discrete multi-state components, the obvious choice of the parametric family is the categorical distribution. However, updating the categorical model with the CE or iCE method can perform poorly, especially when the sample size is small. This is because the probability assigned to a certain category often converges to 0 when no samples fall into this category during the adaptive sampling process. This is known in the literature as the zero count problem [24]. Neglecting a certain category in the IS distribution can lead to a bias in the IS estimate of $p_{f}$. To avoid this issue, one may think of transforming the discrete random variable space into a continuous one through, for example, the Rosenblatt transformation [17] and employing continuous parametric families in the iCE method. However, the network reliability problem becomes more challenging after this non-linear transformation and the iCE method often fails to converge. Hui et al. [25] combine the cross entropy method with the graph creation process [26] and efficiently estimate the connectivity reliability of networks using an independent exponential parametric model. Note that this method is computationally demanding and applies only to coherent binary systems.
In this paper, we employ the independent categorical distribution as the IS distribution and propose an approach for learning its parameters during the iCE sampling process that avoids the zero count problem. The proposed algorithm, termed the Bayesian improved cross entropy method (BiCE), employs the posterior predictive distribution to update the parametric family instead of the weighted maximum likelihood estimator used in the standard CE method. Compared with other non-sampling-based methods (e.g., [27, 28, 29, 30, 31]), the proposed BiCE method facilitates using advanced network analysis algorithms that account for complex network dynamics. However, BiCE may require a large number of samples to achieve acceptable results. The rest of the paper is organized as follows. A brief introduction to the IS approach is given in Section 2. In Section 3, we review the CE and iCE methods and provide some new insights into these two methods. In Section 4, we first illustrate the problem that occurs when updating the categorical distribution using CE or iCE, and then propose the BiCE method to circumvent this problem. A set of numerical examples is given in Section 5 to illustrate the efficiency and accuracy of the proposed approach.

## 2 Importance sampling

Estimation of $p_{f}$ in Eq.(1) using crude MCS is straightforward; one generates $N$ samples from the joint density function $p_{\bm{X}}(\bm{x})$ and then takes the sample mean of the indicator function as an unbiased estimator of $p_{f}$. The coefficient of variation (c.o.v.) of the MCS estimate equals $\sqrt{\frac{1-p_{f}}{N\cdot p_{f}}}$; therefore, for small $p_{f}$ the required number of samples for achieving an accurate result is large. For rare event estimation, acceleration techniques that speed up the occurrence of the failure events are necessary. IS is a widely utilized method for efficient simulation of rare events. The basic idea of IS is to sample from a proposal distribution, also known as the IS distribution, under which the rare event is more likely to occur, and to correct the resulting bias in the estimate by multiplying each sample in the IS estimator with an appropriate likelihood ratio $L$ [9]. Specifically, let $p_{IS}(\bm{x})$ denote the IS density and $\\{\bm{x}_{k}\\}_{k=1}^{N}$ be the $N$ samples generated from $p_{IS}(\bm{x})$. The IS estimator of the failure probability in Eq.(1) reads $\widehat{p}_{f}=\frac{1}{N}\sum_{k=1}^{N}\mathbb{I}\\{g(\bm{x}_{k})\leq 0\\}\frac{p_{\bm{X}}(\bm{x}_{k})}{p_{IS}(\bm{x}_{k})},$ (2) where the likelihood ratio (or IS weight) $L(\bm{x})\triangleq\frac{p_{\bm{X}}(\bm{x})}{p_{IS}(\bm{x})}$ can be interpreted as an adjustment factor that compensates for the fact that samples are generated from $p_{IS}(\bm{x})$ instead of $p_{\bm{X}}(\bm{x})$ [32]. The IS estimator in Eq.(2) is unbiased if the failure domain $F$ is included in the sample space of $p_{IS}(\bm{x})$ [32]. The variance of the estimator mainly depends on the choice of the IS distribution. A proper choice of the IS distribution can lead to a significantly smaller variance than that of crude MCS. Indeed, the theoretically optimal IS distribution $p^{*}_{\bm{X}}(\bm{x})$ that results in zero variance of the estimator is equal to the input distribution conditional on the occurrence of the failure event.
That is, $p^{*}_{\bm{X}}(\bm{x})=\frac{p_{\bm{X}}(\bm{x})\mathbb{I}\\{g(\bm{x})\leq 0\\}}{p_{f}}=p_{\bm{X}}(\bm{x}|F).$ (3) Unfortunately, $p^{*}_{\bm{X}}(\bm{x})$ cannot be directly used, since its analytical expression relies on prior knowledge of the sought failure probability $p_{f}$. Nevertheless, the optimal IS distribution $p^{*}_{\bm{X}}(\bm{x})$ still provides guidance for selecting an appropriate IS distribution. A common approach is to perform an initial first/second order reliability method analysis [33] or employ a Markov chain simulation algorithm [34] to form a distribution that resembles $p^{*}_{\bm{X}}(\bm{x})$. Alternatively, one can approximate $p^{*}_{\bm{X}}(\bm{x})$ in an adaptive manner through application of the CE or iCE methods, which are discussed in detail in Section 3.

## 3 Cross entropy and improved cross entropy method

### 3.1 Cross entropy method

The CE method determines the IS distribution in the estimator in Eq.(2) through minimizing the KL divergence between the theoretically optimal IS distribution $p^{*}_{\bm{X}}(\bm{x})$ and a predefined parametric family of distributions. The KL divergence, which is also known as relative entropy, is a measure of how one distribution differs from another. Specifically, let $h(\bm{x};\bm{v})$ denote a family of parametric distributions, where $\bm{v}\in\mathcal{V}$ is a parameter vector. The KL divergence between $p^{*}_{\bm{X}}(\bm{x})$ and $h(\bm{x};\bm{v})$ is defined as [35] $\displaystyle D(p^{*}_{\bm{X}},h)$ $\displaystyle=\mathbb{E}_{p^{*}_{\bm{X}}}\left[\ln\left(\frac{p^{*}_{\bm{X}}(\bm{X})}{h(\bm{X};\bm{v})}\right)\right]$ $\displaystyle=\mathbb{E}_{p^{*}_{\bm{X}}}[\ln(p^{*}_{\bm{X}}(\bm{X}))]-\mathbb{E}_{p^{*}_{\bm{X}}}[\ln(h(\bm{X};\bm{v}))].$ (4) In order to obtain a precise IS estimator, the KL divergence $D(p^{*}_{\bm{X}},h)$ needs to be small. In fact, one can prove [36] that the c.o.v. of the IS estimator, $\delta(\widehat{P}_{f})$, is bounded from below by $\delta(\widehat{P_{f}})\geq\sqrt{\frac{\text{exp}(D(p^{*}_{\bm{X}},h))-1}{N}}.$ (5) According to Eq.(5), if we require that $\delta(\widehat{P_{f}})\leq 0.1$, the KL divergence $D(p^{*}_{\bm{X}},h)$ should be less than or equal to $\ln(1+0.01N)$. Conversely, a large KL divergence leads to a high c.o.v. and hence an imprecise result. The CE method determines the optimal parameter vector $\bm{v}^{*}$ through minimizing the KL divergence of Eq.(4), i.e., through solving $\bm{v}^{*}=\operatorname*{arg\,min}\limits_{\bm{v}\in\mathcal{V}}D(p^{*}_{\bm{X}},h).$ (6) Since the first term on the right-hand side of Eq.(4) does not depend on $\bm{v}$, Eq.(6) is equivalent to $\bm{v}^{*}=\operatorname*{arg\,min}\limits_{\bm{v}\in\mathcal{V}}-\mathbb{E}_{p^{*}_{\bm{X}}}[\ln(h(\bm{X};\bm{v}))].$ (7) Typically, the optimization problem in Eq.(7) is convex and can be solved by the Lagrange multiplier method [37]. However, the objective function depends on the optimal IS distribution $p^{*}_{\bm{X}}(\bm{x})$, which is not known in closed form, and therefore Eq.(7) cannot be solved analytically. Instead, we estimate $\bm{v}^{*}$ through solving an alternative optimization problem, which is introduced in the following. Substituting $p^{*}_{\bm{X}}$ in Eq.(7) with the expression of Eq.(3), one gets $\bm{v}^{*}=\operatorname*{arg\,max}\limits_{\bm{v}\in\mathcal{V}}\mathbb{E}_{p}[\mathbb{I}\\{g(\bm{X})\leq 0\\}\ln(h(\bm{X};\bm{v}))]$ (8) The expectation in Eq.(8) can be approximated through IS, which gives the importance sampling counterpart of the CE optimization problem.
That is $\widehat{\bm{v}}=\operatorname*{arg\,max}\limits_{\bm{v}\in\mathcal{V}}\frac{1}{N}\sum\limits_{k=1}^{N}\frac{p_{\bm{X}}(\bm{x}_{k})\mathbb{I}\\{g(\bm{x}_{k})\leq 0\\}}{p_{ref}(\bm{x}_{k})}\ln(h(\bm{x}_{k};\bm{v})),\quad\quad\bm{x}_{k}\sim p_{ref}(\cdot).$ (9) Here, $p_{ref}(\bm{x})$ is the IS distribution used to estimate the expectation in Eq.(8) and is termed the reference distribution in the CE method [35]. Similarly to the original CE optimization problem, the optimization problem in Eq.(9) can also be solved by the Lagrange multiplier method. One should distinguish $h(\bm{x};\bm{v}^{*})$ from $h(\bm{x};\widehat{\bm{v}})$ in the CE method [38]. $h(\bm{x};\bm{v}^{*})$ represents the distribution that has the smallest KL divergence $D(p^{*}_{\bm{X}},h)$ among a set of distributions and hence is termed the sub-optimal IS distribution. $h(\bm{x};\widehat{\bm{v}})$ is the distribution we use as the IS distribution, i.e., the distribution resulting from solution of the optimization problem of Eq. (9). We term it the chosen IS distribution for the rest of the paper. Note that, as long as the parametric family is fixed, the ’distance’ between the optimal IS distribution and the sub-optimal IS distribution is also fixed. The objective of the CE method is finding a good estimator $\widehat{\bm{v}}$ that is close to the optimal but inaccessible CE parameter $\bm{v}^{*}$. ###### Remark 3.1. In general, if $h(\bm{x};\bm{v})$ is a properly parameterized exponential family, $\widehat{\bm{v}}$ can be interpreted as the self-normalized IS estimator of $\bm{v}^{*}$. The accuracy of the self-normalized IS estimator is measured by the effective sample size (ESS). For more details we refer to A. ###### Remark 3.2. $\widehat{\bm{v}}$ can also be interpreted as a weighted maximum likelihood estimation (MLE) of $\bm{v}$ [39] and therefore may suffer from the same drawbacks as MLE (e.g., overfitting). To circumvent the overfitting issue of $\widehat{\bm{v}}$, we propose a novel Bayesian estimator $\widetilde{\bm{v}}$ for the CE method in Section 4. The proposed estimator converges to $\bm{v}^{*}$ as the sample size goes to infinity. ### 3.2 Cross entropy method for rare events and improved cross entropy method The efficiency and accuracy of the CE method depend on the choice of the reference distribution $p_{ref}(\bm{x})$ in Eq.(9). A potential choice for $p_{ref}(\bm{x})$ is the input distribution $p_{\bm{X}}(\bm{x})$. However, for the case where $F=\\{\bm{x}:g(\bm{x})\leq 0\\}$ is a rare event, sampling directly from $p_{\bm{X}}(\bm{x})$ will lead to a large number of zero indicators in Eq.(9), and, hence, an inaccurate result. In such case, the reference distribution can be chosen adaptively. Let $p^{(t)}(\bm{x}),t=1,...,T$ denote a sequence of intermediate target distributions that gradually approach the optimal IS distribution $p^{*}_{\bm{X}}(\bm{x})$. The CE optimization problem is then solved iteratively by finding a good approximation to each intermediate target distribution, resulting in a sequence of CE parameter vectors $\\{\widehat{\bm{v}}^{(t)},t=1,...,T\\}$ and distributions $\\{h(\bm{x};\widehat{\bm{v}}^{(t)}),t=1,...,T\\}$. The distribution one obtains in the $t$-th iteration, $h(\bm{x};\widehat{\bm{v}}^{(t)})$, is used as the reference distribution $p_{ref}(\bm{x})$ for the CE procedure in iteration $t+1$. For the first iteration, the input distribution $p_{\bm{X}}(\bm{x})$ is used as the reference distribution. 
In this way, one takes $h(\bm{x};\widehat{\bm{v}}^{(T-1)})$ as the reference distribution $p_{ref}(\bm{x})$ for Eq.(9), and $h(\bm{x};\widehat{\bm{v}}^{(T)})$ as the final IS distribution. The goal is to make $\widehat{\bm{v}}^{(T)}$ a good estimator of $\bm{v}^{*}$. Typically, the intermediate target distributions $p^{(t)}(\bm{x})$ are not predefined but are chosen adaptively during the iterations. Depending on the way of adaptively selecting $p^{(t)}(\bm{x})$, one distinguishes the (multilevel) CE method and its improved version, the improved cross entropy (iCE) method. For the CE method, the intermediate target distributions are defined as: $p^{(t)}(\bm{x})\triangleq\frac{1}{Z^{(t)}}p_{\bm{X}}(\bm{x})\mathbb{I}\\{g(\bm{x})\leq\gamma^{(t)}\\},t=1,...,T$ (10) where $\\{\gamma^{(t)},t=1,....T\\}$ is a parameter vector that satisfies $\gamma^{(t)}\geq 0$, and $Z^{(t)}$ is a normalizing constant. The CE optimization problem for Eq.(10) reads $\bm{v}^{(t,*)}=\operatorname*{arg\,max}\limits_{\bm{v}\in\mathcal{V}}\mathbb{E}_{p}[\mathbb{I}\\{g(\bm{X})\leq\gamma^{(t)}\\}\ln(h(\bm{X};\bm{v}))].$ (11) The sample counterpart of the CE optimization problem for Eq.(11) reads as follows: $\widehat{\bm{v}}^{(t)}=\operatorname*{arg\,max}\limits_{\bm{v}\in\mathcal{V}}\frac{1}{N}\sum\limits_{k=1}^{N}\frac{p_{\bm{X}}(\bm{x}_{k})\mathbb{I}\\{g(\bm{x}_{k})\leq\gamma^{(t)}\\}}{p_{ref}(\bm{x}_{k})}\ln(h(\bm{x}_{k};\bm{v})),\quad\bm{x}_{k}\sim p_{ref}(\cdot).$ (12) In the $t$-th iteration, the CE method proceeds through the following three steps: (1) Generate a set of samples $\mathcal{P}^{(t)}\triangleq\\{\bm{x}_{k},k=1,...,N\\}$ from the reference distribution $p_{ref}(\bm{x})=p_{\bm{X}}(\bm{x})$ in the first iteration and $p_{ref}(\bm{x})=h(\bm{x},\widehat{\bm{v}}^{(t-1)})$ thereafter. (2) Calculate the LSF value $g(\cdot)$ for each $\bm{x}_{k}$. Set $\gamma^{t}$ as the sample $\rho$-quantile of $\\{g(\bm{x}_{k}),k=1,...,N\\}$. $\rho$ represents a hyperparameter of the CE method and is typically chosen between 0.01 and 0.1 [40]. (3) Solve the optimization problem of Eq.(12) with $\mathcal{P}^{(t)}$ to get a new parameter vector $\widehat{\bm{v}}^{(t)}$. The above three steps are iterated until for some iteration $T$, $\gamma^{(T)}\leq 0$. One then sets $\gamma^{(T)}=0$ and carries out step (3) one last time to get $\widehat{\bm{v}}^{(T)}$. In the iCE method, the intermediate target distributions are defined as: $p^{(t)}(\bm{x})\triangleq\frac{1}{Z^{(t)}}p_{\bm{X}}(\bm{x})\Phi\left(-\frac{g(\bm{x})}{\sigma^{(t)}}\right),t=1,...,T$ (13) where $\sigma^{(t)}>0$ and $\Phi$ is the cumulative distribution function (CDF) of the standard normal distribution. Note that $\lim\limits_{\sigma\rightarrow 0}(\Phi(-\frac{g(\bm{x})}{\sigma}))=\mathbb{I}\\{g(\bm{x})\leq 0\\}$, meaning that for a decreasing sequence $\sigma^{(1)}>\cdots>\sigma^{(T)}$, the sequence of distributions gradually approaches the optimal IS distribution $p^{*}_{\bm{X}}(\bm{x})$. We note that alternative smooth approximations of the indicator function could be used instead of $\Phi$ to define the intermediate target distributions [41]. 
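As an illustration of the multilevel CE scheme in steps (1)-(3), the following minimal Python sketch runs the adaptive loop for a hypothetical continuous test problem (standard normal inputs, a linear LSF, and an independent Gaussian parametric family). The test problem, the parametric family, and all function names are our own illustrative choices and are not part of the network setting considered in this paper.

```python
import numpy as np

# Hypothetical test problem: standard normal inputs, linear LSF, p_f = Phi(-beta).
rng = np.random.default_rng(0)
n_dim, beta = 10, 3.5
g = lambda x: beta - x.sum(axis=1) / np.sqrt(n_dim)

def log_ratio(x, mu, sig):
    """log p_X(x) - log h(x; mu, sig) for standard normal input and Gaussian family."""
    return (np.sum(-0.5 * x**2, axis=1)
            - np.sum(-0.5 * ((x - mu) / sig)**2 - np.log(sig), axis=1))

def ce_rare_event(N=2000, rho=0.1, max_iter=50):
    mu, sig = np.zeros(n_dim), np.ones(n_dim)            # start from the input density
    for _ in range(max_iter):
        x = rng.normal(mu, sig, size=(N, n_dim))         # step (1): sample from the reference
        gx = g(x)
        gamma = max(np.quantile(gx, rho), 0.0)           # step (2): rho-quantile, capped at 0
        w = np.exp(log_ratio(x, mu, sig)) * (gx <= gamma)    # weights of Eq. (12)
        # step (3): closed-form solution of Eq. (12) for the independent Gaussian family
        mu = (w[:, None] * x).sum(axis=0) / w.sum()
        sig = np.sqrt((w[:, None] * (x - mu)**2).sum(axis=0) / w.sum())
        if gamma <= 0.0:                                 # gamma^(T) reached 0: stop
            break
    x = rng.normal(mu, sig, size=(N, n_dim))             # final IS estimate, Eq. (2)
    return np.mean(np.exp(log_ratio(x, mu, sig)) * (g(x) <= 0.0))

print(ce_rare_event())   # should be close to Phi(-3.5) ≈ 2.3e-4
```

The same structure carries over to the iCE method; only the indicator with adaptive threshold $\gamma^{(t)}$ is replaced by the smooth function $\Phi(-g(\bm{x})/\sigma^{(t)})$ of Eq.(13).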
The CE optimization problem for Eq.(13) reads $\bm{v}^{(t,*)}=\operatorname*{arg\,max}\limits_{\bm{v}\in\mathcal{V}}\mathbb{E}_{p}[\Phi(-g(\bm{X})/\sigma^{(t)})\ln(h(\bm{X};\bm{v}))].$ (14) The sample counterpart of Eq.(14) can then be expressed as $\widehat{\bm{v}}^{(t)}=\operatorname*{arg\,max}\limits_{\bm{v}\in\mathcal{V}}\frac{1}{N}\sum\limits_{k=1}^{N}\frac{p_{\bm{X}}(\bm{x}_{k})\Phi(-g(\bm{x}_{k})/\sigma^{(t)})}{p_{ref}(\bm{x}_{k})}\ln(h(\bm{x}_{k};\bm{v})),\quad\bm{x}_{k}\sim p_{ref}(\cdot).$ (15) According to A, when $h(\bm{x};\bm{v})$ represents a properly parameterized exponential family, $\widehat{\bm{v}}^{(t)}$ is a self-normalized IS estimator of $\bm{v}^{(t,*)}$, independent of the choice of the intermediate target distributions. For the iCE method, the weight function of the self-normalized IS estimator of $\bm{v}^{(t,*)}$ equals $W(\bm{x};\sigma^{(t)})=\frac{p_{\bm{X}}(\bm{x})\Phi(-g(\bm{x})/\sigma^{(t)})}{p_{ref}(\bm{x})}.$ (16) A common choice for measuring the accuracy of a self-normalized IS estimator is the ESS, whose approximate expression is given in Eq.(43). With predefined sample size $N$, ESS is only a function of the c.o.v. of the weight, $\delta\left(W(\bm{X};\sigma^{(t)})\right),\bm{X}\sim p_{ref}(\bm{x})$, which further depends on the reference distribution $p_{ref}(\bm{x})$ and the parameter $\sigma^{(t)}$. In the $t$-th iteration of iCE, one fixes the reference distribution $p_{ref}(\bm{x})$ as $h(\bm{x};\widehat{\bm{v}}^{(t-1)})$ (as $p_{\bm{X}}(\bm{x})$ in the first iteration) and selects $\sigma^{(t)}$ such that the sample c.o.v. of the weights $\\{W(\bm{x}_{k};\sigma^{(t)})\\}_{k=1}^{N}$ equals a predefined target value $\delta_{tar}$, i.e., one solves the following optimization problem $\sigma^{(t)}=\operatorname*{arg\,min}\limits_{\sigma\in(0,\sigma^{(t-1)})}|\widehat{\delta}\left(\\{W(\bm{x}_{k};\sigma)\\}_{k=1}^{N}\right)-\delta_{tar}|,\quad\quad\bm{x}_{k}\sim p_{ref}(\bm{x}).$ (17) where $W(\bm{x}_{k};\sigma)$ represents the weight in Eq.(16) and is a function of the optimization variable $\sigma$. In this way, the sample ESS equals $\frac{N}{1+\delta^{2}_{tar}}$. Hence, the accuracy of the self -normalized IS estimator $\widehat{\bm{v}}^{(t)}$ is tuned by the hyperparameter $\delta_{tar}$. A large $\delta_{tar}$ leads to an inaccurate $\widehat{v}^{(t)}$, while a small $\delta_{tar}$ increases the number of the intermediate target distributions $p^{(t)}(\bm{x})$ required to approach the optimal IS distribution $p^{*}_{\bm{X}}(\bm{x})$, thereby reducing the overall efficiency of the iCE method. This will be illustrated in detail in Section 4. In general, 1.5 is a good choice for $\delta_{tar}$ in the iCE method [23]. Once $\sigma^{(t)}$ is fixed, the optimization problem of Eq.(15) can be solved for the parameter vector $\widehat{\bm{v}}^{(t)}$. The corresponding distribution $h(\bm{x};\widehat{\bm{v}}^{(t)})$ is then used as the reference distribution for the $(t+1)$-th iteration. The above procedure is iterated until the c.o.v. 
of the likelihood ratio [32] for sampling from $p^{(t)}(\bm{x})$ instead of $p^{*}_{\bm{X}}(\bm{x})$ is smaller than $\delta_{\epsilon}$, i.e., $\delta\left(\frac{p^{*}_{\bm{X}}(\bm{X})}{p^{(t)}(\bm{X})}\right)=\delta\left(\frac{\mathbb{I}\\{g(\bm{X})\leq 0\\}}{\Phi(-g(\bm{X})/\sigma^{(t)})}\right)\leq\delta_{\epsilon},\quad\quad\bm{X}\sim p^{(t)}(\bm{x}).$ (18) In practice, we sample $\mathcal{P}^{(t)}=\\{\bm{x}_{k}\\}_{k=1}^{N}$ from $h(\bm{x};\widehat{\bm{v}}^{(t)})$ rather than $p^{(t)}(\bm{x})$, and check whether the sample c.o.v. of $\frac{\mathbb{I}\\{g(\bm{x}_{k})\leq 0\\}}{\Phi(-g(\bm{x}_{k})/\sigma^{(t)})}$ is less or equal than $\delta_{\epsilon}$. Typically, $\delta_{\epsilon}$ is chosen the same as $\delta_{tar}$ [23]. The algorithm for the iCE method is shown in Algorithm 1. Input: $N$, $\delta_{tar}$, $\delta_{\epsilon}$ 1 2$t\leftarrow 1$, $t_{max}\leftarrow 50$, $\sigma_{0}\leftarrow\infty$ 3 $h(\bm{x};\widehat{\bm{v}}^{(t-1)})\leftarrow p_{\bm{X}}(\bm{x})$ 4 while _true_ do 5 Generate $N$ samples $\\{\bm{x}_{k}\\}_{k=1}^{N}$ from $h(\bm{x};\widehat{\bm{v}}^{(t-1)})$ and calculate the corresponding LSF values $\\{g(\bm{x}_{k})\\}_{k=1}^{N}$ 6 Compute the sample c.o.v. $\widehat{\delta}$ of $\left\\{\frac{\mathbb{I}\\{g(\bm{x}_{k})\leq 0\\}}{\Phi(-g(\bm{x}_{k})/\sigma^{(t-1)})}\right\\}_{k=1}^{N}$ 7 if _ $t>t_{max}$ or $\widehat{\delta}\leq\delta_{\epsilon}$_ then 8 Break 9 Determine $\sigma^{(t)}$ through solving Eq.(17) 10 Compute $\widehat{\bm{v}}^{(t)}$ through solving Eq.(15) 11 $t\leftarrow t+1$ 12$T\leftarrow t-1$ 13 Use $h(\bm{x};\widehat{\bm{v}}^{(T)})$ as the IS distribution and calculate the IS estimator $\widehat{p}_{f}$ through Eq.(2) Output: $\widehat{p}_{f}$ Algorithm 1 Improved cross entropy algorithm ## 4 Bayesian cross entropy method for the categorical parametric family In this section, we consider the iCE method for estimating a rare event with a discrete random input $\bm{X}$, which often occurs in network reliability assessment. For discrete inputs $\bm{X}$, the probability mass function of $\bm{X}$, $p_{\bm{X}}(\bm{x})$, defines the probability of the corresponding outcome, i.e., $p_{\bm{X}}(\bm{x})=\Pr(\bm{X}=\bm{x})$. We consider a slightly different definition of the intermediate target distribution in Eq.(13), which results from the definition of an auxiliary LSF $g_{a}(\bm{x})$: $g_{a}(\bm{x})\triangleq\begin{cases}g(\bm{x}),&\text{if}\quad g(\bm{x})>0\\\ 0,&\text{if}\quad g(\bm{x})\leqslant 0\end{cases}.$ (19) Note that the failure probability $p_{f}$ is unchanged if the original LSF in Eq.(1) is substituted with the auxiliary one, so we can equivalently estimate the probability that $g_{a}(\bm{X})\leq 0$ for $p_{f}$, i.e., $p_{f}=\mathbb{P}(g_{a}(\bm{X})\leq 0)=\sum_{\bm{x}\in\Omega_{\bm{X}}}p_{\bm{X}}(\bm{x})\mathbb{I}\\{g_{a}(\bm{x})\leq 0\\},$ (20) where $\Omega_{\bm{X}}$ is the sample space of the input random variables. In this way, the intermediate target distribution in Eq.(13) becomes $p^{(t)}(\bm{x})\triangleq\frac{1}{Z^{(t)}}p_{\bm{X}}(\bm{x})\Phi\left(-\frac{g_{a}(\bm{x})}{\sigma^{(t)}}\right),t=1,...,T.$ (21) In the following, we discuss the properties of the iCE method with the intermediate target distribution in Eq.(21). In particular, we examine the adaptation of the intermediate target distribution following Eq.(17) and formulate a theorem stating that, under certain assumptions, the resulting distribution sequence gradually approaches the optimal IS. 
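For reference, the following minimal sketch illustrates the two adaptive ingredients of Algorithm 1, namely the selection of $\sigma^{(t)}$ via Eq.(17) and the stopping check of Eq.(18); in the present setting $g$ is replaced by the auxiliary LSF $g_{a}$ of Eq.(19). The function names and the finite cap used in place of $\sigma^{(0)}=\infty$ are our own choices.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def choose_sigma(g_vals, log_dens_ratio, sigma_prev, delta_tar=1.5):
    """Select sigma^(t) by matching the sample c.o.v. of the weights of Eq. (16), cf. Eq. (17).

    g_vals         : LSF values g(x_k) of the current samples x_k ~ p_ref
    log_dens_ratio : log p_X(x_k) - log p_ref(x_k) (all zeros in the first iteration)
    sigma_prev     : sigma^(t-1); a large finite cap stands in for sigma^(0) = infinity
    """
    def cov_of_weights(sigma):
        log_w = log_dens_ratio + norm.logcdf(-g_vals / sigma)   # log of Eq. (16)
        w = np.exp(log_w - log_w.max())                         # the c.o.v. is scale invariant
        return np.std(w) / np.mean(w)
    upper = min(sigma_prev, 100.0 * (np.max(np.abs(g_vals)) + 1e-12))
    res = minimize_scalar(lambda s: abs(cov_of_weights(s) - delta_tar),
                          bounds=(1e-8, upper), method="bounded")
    return res.x

def ice_converged(g_vals, sigma, delta_eps=1.5):
    """Stopping criterion of Eq. (18), evaluated with samples from the current reference."""
    # I{g<=0}/Phi(-g/sigma) equals 1/Phi(-g/sigma) on failure samples and 0 elsewhere
    r = np.where(g_vals <= 0.0, 1.0 / norm.cdf(-np.minimum(g_vals, 0.0) / sigma), 0.0)
    return r.mean() > 0.0 and np.std(r) / r.mean() <= delta_eps
```

Wrapping these two functions in the loop of Algorithm 1, together with the parameter update of the chosen parametric family, reproduces the iCE procedure.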
For the independent categorical parametric family, i.e., the joint distribution consisting of independent components that follow the categorical distribution, we further illustrate the overfitting issue of the standard iCE method. Based on this observation, we introduce a novel approach called the Bayesian improved cross entropy (BiCE) method that circumvents this problem. ### 4.1 Adaptation of the intermediate target distribution The adaptation of the intermediate target distribution $p^{(t)}(\bm{x})$, or equivalently the parameter $\sigma^{(t)}$, plays an important role in achieving a balance between the efficiency and accuracy of the iCE method. Therefore, it is worth taking a closer look at the updating formula of $\sigma^{(t)}$ in Eq.(17) and (18). To simplify the problem, we make the following assumptions: ###### Assumption 1. The intermediate target distributions $p^{(t)}(\bm{x}),t=1,...,T$ are included in the parametric family $h(\bm{x};\bm{v})$ and therefore can be perfectly matched by $h(\bm{x};\bm{v}^{(t,*)}),t=1,...,T$. ###### Assumption 2. The sample size is infinite such that $\widehat{\bm{v}}^{(t)}$ is the same as $\bm{v}^{(t,*)}$. Under these two assumptions, the sample c.o.v. of the weight in Eq.(17) converges to the true c.o.v. $\delta\left(\frac{\Phi(-g_{a}(\bm{X})/\sigma)}{\Phi(-g(\bm{X})/\sigma^{(t-1)})}\right),\bm{X}\sim p^{(t-1)}(\bm{x})$, which is a function of $\sigma$. We write this function as $\delta^{(t)}(\sigma)$ for the rest of this paper. According to Eq.(18), the adaptive procedure of iCE method is stopped when $\delta^{(t)}(0)\leq\delta_{\epsilon}$. In [42], we introduce the following two theorems ###### Theorem 4.1. Under Assumptions 1 and 2, $\delta^{(t)}(\sigma)$ is a strictly decreasing function of $\sigma$ over $[0,\sigma^{(t-1)}]$. ###### Theorem 4.2. Under Assumptions 1 and 2, it holds $\delta^{(t)}(\sigma^{(t-1)})=0$ and $\delta^{(t)}(0)=\sqrt{\frac{Z^{(t-1)}}{0.5p_{f}}-1}>0$, where $Z^{(t-1)}$ is the normalizing constant of $p^{(t-1)}(\bm{x})$ and can be expressed as $Z^{(t-1)}=0.5p_{f}+\sum_{g_{a}(\bm{x})>0}p_{\bm{X}}(\bm{x})\Phi\left(-\frac{g_{a}(\bm{x})}{\sigma^{(t-1)}}\right).$ (22) As a corollary, the optimization problem of Eq.(17) has a unique solution $\sigma^{(t)}$ that is smaller than $\sigma^{(t-1)}$, resulting in a decreasing sequence of $\sigma$, i.e., $\sigma^{(1)}>\sigma^{(2)}>\cdots>\sigma^{(t)}$. Additionally, since $Z^{(t)}$ is a strictly increasing function of $\sigma^{(t)}$ according to Eq.(22), we have $Z^{(1)}>Z^{(2)}>\cdots>Z^{(t)}$, and this further leads to another decreasing sequence of $\delta^{(t)}(0)$, i.e., $\delta^{(1)}(0)>\delta^{(2)}(0)>\cdots>\delta^{(t)}(0)$. If for some iteration $T$ it holds $\delta^{(T)}(0)\leq\delta_{\epsilon}$, we terminate the adaptive procedure of iCE. The adaptation of $\sigma^{(t)}$ under Assumptions 1 and 2 is intuitively illustrated in Fig. 1. All symbols in the figure have the same meaning as before. Figure 1: Schematic diagram of the adaptation of the $\sigma^{(t)}$. In practice, the parametric family $h(\bm{x};\bm{v})$ has limited flexibility, and the sample size is also finite due to limited computational budgets. Nevertheless, we expect that the results given in Theorems 4.1 and 4.2 still apply when $h(\bm{x};\bm{v}^{(t)})$ forms a good approximation of $p^{(t)}(\bm{x})$, which is confirmed in the numerical experiments in Section 5. The adaptation of $p^{(t)}(\bm{x})$ in iCE is tuned by hyper-parameters $\delta_{tar}$ and $\delta_{\epsilon}$. 
A small $\delta_{\epsilon}$ indicates a strict convergence criterion for $p^{(t)}(\bm{x})$, which leads to a more accurate yet less efficient result. It should be stressed that if the capacity of the parametric family is insufficient, the algorithm often fails to converge with a small $\delta_{\epsilon}$. Similarly to $\delta_{\epsilon}$, a small $\delta_{tar}$ leads to a $\sigma^{(t)}$ close to $\sigma^{(t-1)}$, which lowers the speed at which the intermediate target distribution approaches the optimal IS distribution, thereby reducing the overall efficiency of the iCE algorithm. On the other hand, a small $\delta_{tar}$ implies a large ESS and hence a high accuracy of $\widehat{\bm{v}}^{(t)}$ in each iteration of the iCE method. In this paper, we suggest selecting $\delta_{tar}=\delta_{\epsilon}$ between 1 and 2, which is justified by the numerical examples in Sec. 5. We also find that, instead of the original weight function defined in Eq.(16), using the alternative weight function $W^{alt}(\bm{x};\sigma)\triangleq\frac{\Phi(-g_{a}(\bm{x})/\sigma)}{\Phi(-g_{a}(\bm{x})/\sigma^{(t-1)})}$ (23) when solving for $\sigma^{(t)}$ through Eq.(17) leads to better convergence of the iCE algorithm for network reliability assessment, especially when $\delta_{\epsilon}$ is small.

### 4.2 Parametric distribution family for discrete inputs

To form a good approximation of $p^{(t)}(\bm{x})$ in the iCE method, a proper choice of the parametric family is necessary. In the context of reliability of systems with multi-state components, the obvious choice of the parametric model is the multivariate categorical distribution, which assigns a probability to each system state of the network, i.e., to each possible state in the sample space of the input distribution. The multivariate categorical distribution has great flexibility, as it includes all possible distributions defined on the sample space of the network components. However, the number of parameters of this model grows exponentially with the input dimension (number of components), making this model impractical even for moderate dimensions. Therefore, we consider independent categorical distributions in the following. Suppose $\bm{X}$ is an $n$-dimensional input random vector with statistically independent components and each $d$-th component $X_{d}$ follows a categorical distribution taking values $\\{s_{d,1},\cdots,s_{d,n_{d}}\\}$ with probabilities $\\{p_{d,1},\cdots,p_{d,n_{d}}\\}$; $n_{d}$ is the number of sample states of $X_{d}$. The independent categorical family for $\bm{X}$ has the following general form: $h(\bm{x};\bm{v})=\prod\limits_{d=1}^{n}h_{d}(x_{d};\bm{v}_{d})=\prod\limits_{d=1}^{n}\prod\limits_{i=1}^{n_{d}}v_{d,i}^{\mathbb{I}\\{x_{d}=s_{d,i}\\}},\quad 0\leq v_{d,i}\leq 1,\quad\sum_{i=1}^{n_{d}}v_{d,i}=1,$ (24) where $h_{d}(x_{d};\bm{v}_{d})=\prod\limits_{i=1}^{n_{d}}v_{d,i}^{\mathbb{I}\\{x_{d}=s_{d,i}\\}}$ represents a univariate categorical distribution for $X_{d}$ that assigns a probability of $v_{d,i}$ to each $i$-th state $s_{d,i}$ of $X_{d}$, and $\bm{v}=\\{\bm{v}_{d}\\}_{d=1}^{n}$ gathers the parameters of all components of the independent categorical family $h(\bm{x};\bm{v})$. The optimal parameter $\bm{v}^{(t,*)}$ is obtained through solving Eq.(14), which gives [35] $v^{(t,*)}_{d,i}=\mathbb{E}_{p^{(t)}}[\mathbb{I}\\{X_{d}=s_{d,i}\\}].$ (25) The explicit expression of $\bm{v}^{(t,*)}$ requires knowledge of the normalizing constant of $p^{(t)}$ and hence cannot be directly used in the iCE method.
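As a small helper for the discussion that follows, the sketch below shows how samples can be drawn from, and evaluated under, the independent categorical family of Eq.(24); the state encoding (integers $0,\dots,n_{d}-1$) and the function names are our own conventions.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_categorical(V, N):
    """Draw N samples from the independent categorical family h(x; v) of Eq. (24).

    V is a list of length n; V[d] holds the probabilities (v_{d,1}, ..., v_{d,n_d})
    of the states of component d, encoded as the integers 0, ..., n_d - 1.
    """
    return np.column_stack([rng.choice(len(v), size=N, p=v) for v in V])

def log_pmf(x, V):
    """log h(x; v) for an array of samples x with shape (N, n)."""
    return sum(np.log(np.asarray(v)[x[:, d]]) for d, v in enumerate(V))

# e.g. three components with 2, 3 and 3 states (the probabilities are placeholders)
V = [np.array([0.999, 0.001]),
     np.array([0.899, 0.1, 0.001]),
     np.array([0.899, 0.1, 0.001])]
x = sample_categorical(V, 5)
print(x)
print(log_pmf(x, V))
```

The difference of two such log-probabilities gives the logarithm of the IS weight needed in Eqs.(16) and (26).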
Through optimizing the alternative objective function in Eq.(15), the near-optimal parameter $\widehat{\bm{v}}^{(t)}$ is explicitly given by $\widehat{v}^{(t)}_{d,i}=\frac{\sum_{k=1}^{N}W(\bm{x}_{k};\sigma^{(t)})\mathbb{I}\\{x_{k,d}=s_{d,i}\\}}{\sum_{k=1}^{N}W(\bm{x}_{k};\sigma^{(t)})},\quad d=1,...,n,\quad i=1,...,n_{d},$ (26) where samples $\\{\bm{x}_{k}\\}_{k=1}^{N}$ are generated from the reference distribution $p_{ref}(\bm{x})=h(\bm{x};\widehat{\bm{v}}^{(t-1)})$, and $W(\bm{x}_{k};\sigma^{(t)})=\frac{p_{\bm{X}}(\bm{x}_{k})\Phi(-g_{a}(\bm{x_{k}})/\sigma_{t})}{p_{ref}(\bm{x}_{k})}$. The expression of Eqs.(25) and (26) can also be obtained by considering that the independent categorical distribution is a member of the exponential family [24]. Note that $\widehat{\bm{v}}^{(t)}$ is the self-normalized estimator of $\bm{v}^{(t,*)}$. Additionally, $\widehat{\bm{v}}^{(t)}$ can be regarded as the weighted MLE of $\bm{v}$. This is because the objective function in Eq.(15) can be interpreted as a weighted log-likelihood function $\mathcal{LL}(\bm{v})$, with data set $\\{\bm{x}_{k}\\}_{k=1}^{N}$ and weights $\\{W(\bm{x}_{k})/N\\}_{k=1}^{N}$. Similarly to the MLE of an independent categorical distribution, $\widehat{\bm{v}}^{(t)}$ suffers from overfitting, which is also known as the zero count problem in the context of MLE with categorical data [24], and results in poor performance when the sample size is small. In particular, if there is no sample whose $d$-th component equals $s_{d,i}$, $s_{d,i}$ will be assigned a zero probability according to Eq.(26), i.e., $\hat{v}^{(t)}_{d,i}=0$. In the context of the iCE method, the parameter vector $\widehat{\bm{v}}^{(t)}$ is employed to generate samples at the $(t+1)$ iteration, and hence, $s_{d,i}$ will not occur in any of the new generated samples. In this way, we have $\hat{v}^{(t)}_{d,i}=\hat{v}^{(t+1)}_{d,i}=\cdots=\hat{v}^{(T)}_{d,i}=0$, resulting in a reduced sample space of the final IS distribution $h(\bm{x};\widehat{\bm{v}}^{(T)})$. However, for the optimal IS distribution, state $s_{d,i}$ is not necessarily negligible. In other words, the reduced sample space may only cover a part of the failure domain $F$, thereby underestimating the failure probability $p_{f}$. ### 4.3 Bayesian improved cross entropy method In this subsection, we propose an accurate yet efficient algorithm termed Bayesian improved cross entropy method (BiCE) that circumvents the zero count problem. In this approach, instead of employing a weighted MLE estimator $\widehat{\bm{v}}^{(t)}$, a prior distribution is imposed on $\bm{v}^{(t)}$, and the posterior predictive distribution is derived, which is then employed to update the independent categorical family in iCE. We insert the expression of the independent categorical parametric family of Eq.(24) into Eq.(15) and rewrite the objective function, or the weighted log- likelihood function $\mathcal{LL}(\bm{v})$, as follows $\displaystyle\mathcal{LL}(\bm{v})$ $\displaystyle=\sum_{k=1}^{N}\frac{W(\bm{x}_{k};\sigma^{(t)})}{N}\ln\left(\prod_{d=1}^{n}h_{d}(x_{k,d};\bm{v}_{d})\right)$ $\displaystyle=\sum_{d=1}^{n}\sum_{k=1}^{N}\frac{W(\bm{x}_{k};\sigma^{(t)})}{N}\ln\left(h_{d}(x_{k,d};\bm{v}_{d})\right)$ $\displaystyle\triangleq\sum_{d=1}^{n}\mathcal{LL}_{d}(\bm{v}_{d}),$ (27) where $\mathcal{LL}_{d}(\bm{v}_{d})$ is the weighted log-likelihood function of a one dimensional categorical family $h_{d}(x_{d};\bm{v}_{d})$, with data set $\\{x_{k,d}\\}_{k=1}^{N}$ and weights $\\{\frac{W(\bm{x}_{k};\sigma^{(t)})}{N}\\}_{k=1}^{N}$. 
From Eq.(27), we find that, once the sample set is fixed, the parameter vectors $\bm{v}_{d},d=1,\cdots,n$, are decoupled from each other in the expression of $\mathcal{LL}(\bm{v})$, that is, the influence of each $\bm{v}_{d}$ on the outcome of $\mathcal{LL}(\bm{v})$ is separated (or additive). Additionally, we note that the feasible regions of the parameter vectors $\bm{v}_{d}$, that is $\mathcal{V}_{d}:\\{0\leq v_{d,i}\leq 1;i=1,\cdots,n_{d}|\sum_{i=1}^{n_{d}}v_{d,i}=1\\}$, are independent of each other. Therefore, the original optimization problem can be decomposed into $n$ simpler subproblems, in which $\mathcal{LL}_{d}(\bm{v}_{d})$ is maximized with respect to $\bm{v}_{d}\in\mathcal{V}_{d}$. The solutions to the subproblems are then concatenated to give a solution to the original problem, i.e., $\widehat{\bm{v}}^{(t)}=[\widehat{\bm{v}}^{(t)}_{1};\cdots;\widehat{\bm{v}}^{(t)}_{n}]$. Therefore, it is sufficient to discuss the following subproblem: $\displaystyle\widehat{\bm{v}}_{d}^{(t)}$ $\displaystyle=\operatorname*{arg\,max}\limits_{\bm{v}_{d}\in\mathcal{V}_{d}}\sum_{k=1}^{N}\frac{W(\bm{x}_{k};\sigma^{(t)})}{N}\ln\left(h_{d}(x_{k,d};\bm{v}_{d})\right).$ Note that multiplying the objective function by a positive constant $\alpha$ or taking the exponential of the objective function does not change the solution to the optimization problem, so we have $\displaystyle\widehat{\bm{v}}_{d}^{(t)}$ $\displaystyle=\operatorname*{arg\,max}\limits_{\bm{v}_{d}\in\mathcal{V}_{d}}\sum_{k=1}^{N}\frac{\alpha W(\bm{x}_{k};\sigma^{(t)})}{N}\ln\left(h_{d}(x_{k,d};\bm{v}_{d})\right)$ $\displaystyle=\operatorname*{arg\,max}\limits_{\bm{v}_{d}\in\mathcal{V}_{d}}\prod_{k=1}^{N}\left(h_{d}(x_{k,d};\bm{v}_{d})\right)^{\frac{\alpha W(\bm{x}_{k};\sigma^{(t)})}{N}}.$ (28) The objective function in Eq.(28) can be regarded as a weighted likelihood function for $h_{d}(x_{d};\bm{v}_{d})$ with data set $\\{x_{k,d}\\}_{k=1}^{N}$ and weights $\\{\frac{\alpha W(\bm{x}_{k};\sigma^{(t)})}{N}\\}_{k=1}^{N}$. In this work, $\alpha$ is chosen such that the weighted likelihood function coincides with the standard likelihood function with unit weights when $W(\bm{x}_{k}),k=1,...,N$ are all equal, which gives $\alpha=\frac{N^{2}}{\sum_{k=1}^{N}W(\bm{x}_{k};\sigma^{(t)})}.$ (29) Inserting the above expression of $\alpha$ and substituting the parametric family $h_{d}(x_{k,d};\bm{v}_{d})$ with the expression in Eq.(24) into Eq.(28) gives $\widehat{\bm{v}}_{d}^{(t)}=\operatorname*{arg\,max}\limits_{\bm{v}_{d}\in\mathcal{V}_{d}}\prod\limits_{i=1}^{n_{d}}v_{d,i}^{\sum_{k=1}^{N}\left(\mathbb{I}\\{x_{k,d}=s_{d,i}\\}w_{k}\right)}\triangleq\operatorname*{arg\,max}\limits_{\bm{v}_{d}\in\mathcal{V}_{d}}\mathcal{L}_{d}(\bm{v}_{d}),$ (30) where $w_{k}\triangleq N\frac{W(\bm{x}_{k};\sigma^{(t)})}{\sum_{k=1}^{N}W(\bm{x}_{k};\sigma^{(t)})}$ is the weight for the $k$-th sample $\bm{x}_{k}$, and $\mathcal{L}_{d}(\bm{v}_{d})$ is the weighted likelihood function for the categorical distribution $h_{d}(x_{d};\bm{v}_{d})$ with data set $\\{x_{k,d}\\}_{k=1}^{N}$ and weights $\\{w_{k}\\}_{k=1}^{N}$. Note that when $W(\bm{x}_{k};\sigma^{(t)}),k=1,...,N$ are all equal, $\mathcal{L}_{d}(\bm{v}_{d})$ degenerates into a standard likelihood function with unit weights, i.e., $w_{k}=1,k=1,...,N$. The analytical solution to the optimization problem in Eq.(30) is obtained as $\widehat{v}_{d,i}^{(t)}=\frac{1}{N}\sum_{k=1}^{N}w_{k}\mathbb{I}\\{x_{k,d}=s_{d,i}\\},\quad i=1,...,n_{d},$ (31) which coincides with the expression of $\widehat{\bm{v}}^{(t)}$ in Eq.(26).
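The closed-form update of Eq.(31) and the zero count problem it can produce are illustrated by the following sketch (the function names are ours; the degenerate sample set is a deliberately extreme toy case):

```python
import numpy as np

def weighted_mle_update(x, w, n_states):
    """Weighted MLE v_hat of Eqs. (26)/(31) for the independent categorical model.

    x : samples, shape (N, n), with states encoded as 0, ..., n_states[d]-1
    w : IS weights W(x_k; sigma^(t)), shape (N,)
    """
    w = w / w.sum()                                   # self-normalized weights
    return [np.array([w[x[:, d] == i].sum() for i in range(n_d)])
            for d, n_d in enumerate(n_states)]

# Zero count problem: state 1 of the (single) component never occurs in the sample
# set, so it receives probability 0 and can never be sampled in later iterations.
x = np.zeros((20, 1), dtype=int)                      # 20 samples, all in state 0
w = np.ones(20)
print(weighted_mle_update(x, w, n_states=[2]))        # -> [array([1., 0.])]
```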
Note that $0\leq\widehat{v}_{d,i}^{(t)}\leq 1$, and $\sum_{i=1}^{n_{d}}\widehat{v}_{d,i}^{(t)}=1$. The basic idea of the proposed BiCE method is to employ a Bayesian approach for estimating the parameter vector $\bm{v}_{d}$ that aims at avoiding the overfitting problem through adding a regularization term to Eq.(28). In particular, the weighted likelihood function in the second line of Eq.(28) is multiplied by a prior distribution over the parameter vector $\bm{v}_{d}$. Instead of solving the regularized optimization problem, we derive the full posterior predictive distribution for the categorical distribution. The imposed prior distribution acts as an additional source of information, and through Bayes’ rule, we combine the two sources of information. If one source contains more information than the other, the posterior distribution will be pulled towards it; the relative 'strength' of the prior with respect to the data is adjusted by $\alpha$ and by the prior parameters. The prior $f^{\prime}(\bm{v}_{d})$ is chosen to be a Dirichlet distribution in this paper, which is the conjugate prior for the parameter vector $\bm{v}_{d}$ of a categorical distribution. The probability density function (PDF) of the Dirichlet distribution can be expressed as follows: $f^{\prime}(\bm{v}_{d})=\text{Dir}(\bm{v}_{d};\bm{\theta}_{d})=\frac{1}{B(\bm{\theta}_{d})}\prod_{i=1}^{n_{d}}(v_{d,i})^{\theta_{d,i}-1}\mathbb{I}\\{\bm{v}_{d}\in\mathcal{V}_{d}\\},$ (32) where $\bm{\theta}_{d}=\\{\theta_{d,i}>0\\}_{i=1}^{n_{d}}$ represents the parameter vector of the prior, and $B(\bm{\theta}_{d})$ is the normalizing constant, with $B(\cdot)$ being the multivariate Beta function. Combining Eq.(32) with the weighted likelihood in Eq.(30), the posterior distribution $f^{\prime\prime}(\bm{v}_{d})$ is also a Dirichlet distribution. In fact, according to Bayes’ rule, $f^{\prime\prime}(\bm{v}_{d})$ can be expressed as $\displaystyle f^{\prime\prime}(\bm{v}_{d})$ $\displaystyle\propto f^{\prime}(\bm{v}_{d})\mathcal{L}_{d}(\bm{v}_{d})$ $\displaystyle\propto\prod_{i=1}^{n_{d}}(v_{d,i})^{N\widehat{v}_{d,i}^{(t)}+\theta_{d,i}-1}\mathbb{I}\\{\bm{v}_{d}\in\mathcal{V}_{d}\\}$ $\displaystyle=\text{Dir}(\bm{v}_{d};N\widehat{\bm{v}}_{d}^{(t)}+\bm{\theta}_{d}).$ (33) We then utilize the full posterior distribution $f^{\prime\prime}(\bm{v}_{d})$ for updating the categorical parametric family $h_{d}(x_{d};\bm{v}_{d})$. That is, we calculate the probability $\widetilde{\mu}^{(t)}_{d,i}$ that each state $s_{d,i}$ of $h_{d}(x_{d};\bm{v}_{d})$ occurs under the Dirichlet posterior distribution in Eq.(33).
Based on the total probability theorem, $\widetilde{\mu}^{(t)}_{d,i}$ can be calculated through $\widetilde{\mu}^{(t)}_{d,i}=\int_{\mathcal{V}_{d}}h_{d}(s_{d,i};\bm{v}_{d})f^{\prime\prime}(\bm{v}_{d})d\bm{v}_{d}=\int_{\mathcal{V}_{d}}v_{d,i}\text{Dir}(\bm{v}_{d};N\widehat{\bm{v}}_{d}^{(t)}+\bm{\theta}_{d})d\bm{v}_{d}=\mathbb{E}_{\text{Dir}}[V_{d,i}].$ Since the mean value of a Dirichlet distribution $\text{Dir}(\bm{v}_{d};\bm{\theta}_{d})$ is explicitly given by $\frac{\theta_{d,i}}{\sum_{j=1}^{n_{d}}\theta_{d,j}},i=1,...,n_{d}$ [24], we further have $\displaystyle\widetilde{\mu}^{(t)}_{d,i}$ $\displaystyle=\frac{N\widehat{v}^{(t)}_{d,i}+\theta_{d,i}}{\sum_{j=1}^{n_{d}}\left(N\widehat{v}^{(t)}_{d,j}+\theta_{d,j}\right)}$ $\displaystyle=\frac{N\widehat{v}^{(t)}_{d,i}+\theta_{d,i}}{N+\sum_{j=1}^{n_{d}}\theta_{d,j}}$ $\displaystyle=\lambda_{d}\widehat{v}^{(t)}_{d,i}+(1-\lambda_{d})\frac{\theta_{d,i}}{\sum_{j=1}^{n_{d}}\theta_{d,j}},$ (34) where $\lambda_{d}\triangleq\frac{N}{N+\sum_{j=1}^{n_{d}}\theta_{d,j}}$ denotes the combination factor. According to Eq.(34), this estimator $\widetilde{\mu}^{(t)}_{d,i}$ can be written as a linear combination of the weighted MLE estimator $\widehat{v}_{d,i}^{(t)}$, which exploits the information of the weighted samples, and the prior estimator $\frac{\theta_{d,i}}{\sum_{j=1}^{n_{d}}\theta_{d,j}}$, which can explore a different range of the sample space. The relative 'strength' of $\widehat{v}^{(t)}_{d,i}$ is indicated by the combination factor $\lambda_{d}$ and is tuned by $\sum_{i=1}^{n_{d}}\theta_{d,i}$. Evidently, the smaller the $\sum_{i=1}^{n_{d}}\theta_{d,i}$, the more dominant the $\widehat{v}^{(t)}_{d,i}$, and vice versa. We therefore favour a balanced prior with an appropriately large $\sum_{i=1}^{n_{d}}\theta_{d,i}$, one that does not dominate but can still correct the potentially overfitted weighted MLE estimator $\widehat{v}^{(t)}_{d,i}$. A further investigation of the prior distribution is left for future work; in this paper, we simply employ a symmetric Dirichlet prior for each $\bm{v}_{d}$ and set $\theta_{d,j}=b;\quad\quad\quad j=1,...,n_{d},d=1,...,n,$ (35) where $b$ is a hyperparameter. We suggest choosing a moderate $b$, e.g., 5 or 10 when $N$ is around one thousand. Additionally, $\widetilde{\mu}^{(t)}_{d,i}$ converges to $\widehat{v}_{d,i}^{(t)}$ as the sample size $N$ approaches infinity. Considering that $\widehat{v}_{d,i}^{(t)}$ is the normalized IS estimator of the optimal CE parameter $v_{d,i}^{(t,*)}$ in Eq.(25), both $\widetilde{\mu}^{(t)}_{d,i}$ and $\widehat{v}_{d,i}^{(t)}$ will converge to $v_{d,i}^{(t,*)}$. Therefore, similar to $\widehat{v}_{d,i}^{(t)}$, the accuracy of $\widetilde{\mu}^{(t)}_{d,i}$ is guaranteed for a large sample size. On the other hand, $\widetilde{\mu}^{(t)}_{d,i}$ is positive even for a small sample size (it is at least $\frac{b}{N+b\cdot n_{d}}$), and hence it does not suffer from the zero count problem, unlike the weighted MLE. In this way, $\widetilde{\bm{\mu}}^{(t)}_{d}=(\widetilde{\mu}^{(t)}_{d,1},...,\widetilde{\mu}^{(t)}_{d,n_{d}})$ forms a new parameter vector for the one-dimensional categorical family $h_{d}(x_{d};\bm{v}_{d})$. $h_{d}(x_{d};\widetilde{\bm{\mu}}^{(t)}_{d})$ is also known as the posterior predictive distribution in Bayesian statistics, so we term $\widetilde{\bm{\mu}}^{(t)}_{d}$ the Bayesian estimator of $\bm{v}_{d}$.
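A minimal sketch of the resulting update (Eq.(34) with the symmetric prior of Eq.(35)) is given below; the function name is ours, and the degenerate sample set from the previous sketch is reused to show that the unseen state keeps a strictly positive probability:

```python
import numpy as np

def bice_update(x, w, n_states, b=5.0):
    """Posterior predictive update of Eq. (34) with a symmetric Dirichlet(b) prior.

    Per component d, mu_tilde_d = (N * v_hat_d + b) / (N + b * n_d), i.e. a convex
    combination of the weighted MLE and the flat prior mean, bounded away from zero.
    """
    N = len(w)
    w = N * w / w.sum()                               # weights w_k of Eq. (30), summing to N
    V_tilde = []
    for d, n_d in enumerate(n_states):
        counts = np.array([w[x[:, d] == i].sum() for i in range(n_d)])   # = N * v_hat_d
        V_tilde.append((counts + b) / (N + b * n_d))
    return V_tilde

x = np.zeros((20, 1), dtype=int)                      # all samples in state 0, as before
w = np.ones(20)
print(bice_update(x, w, n_states=[2], b=5.0))         # -> [array([0.8333..., 0.1666...])]
```

Replacing the weighted MLE update in Algorithm 1 by this estimator yields the BiCE procedure summarized in Algorithm 2 below.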
After obtaining the Bayesian estimator $\widetilde{\bm{\mu}}^{(t)}_{d}$ for each dimension $d=1,...,n$ through Eq.(4.3), we concatenate the results to get the Bayesian estimator $\widetilde{\bm{\mu}}^{(t)}$ for the independent categorical distribution $h(\bm{x};\bm{v})$, i.e., $\widetilde{\bm{\mu}}^{(t)}=[\widetilde{\bm{\mu}}^{(t)}_{1};...,\widetilde{\bm{\mu}}^{(t)}_{d}]$. The posterior predictive distribution $h(\bm{x};\widetilde{\bm{\mu}}^{(t)})$ is then employed as the reference distribution $p_{ref}(\bm{x})$ for the $(t+1)$-th iteration in iCE. The resulting algorithm is termed the Bayesian improved cross entropy method (BiCE) and is given in Algorithm 2. Input: $N$, $\delta_{tar}$, $\delta_{\epsilon}$, prior parameters $\\{\bm{\theta}_{1},...,\bm{\theta}_{n}\\}$ 1 2$t\leftarrow 1$, $t_{max}\leftarrow 50$, $\sigma_{0}\leftarrow\infty$ 3 $h(\bm{x};\widetilde{\bm{\mu}}^{(t-1)})\leftarrow p_{\bm{X}}(\bm{x})$ 4 while _true_ do 5 Generate $N$ samples $\\{\bm{x}_{k}\\}_{k=1}^{N}$ from $h(\bm{x};\widetilde{\bm{\mu}}^{(t-1)})$ and calculate the corresponding LSF values $\\{g_{a}(\bm{x}_{k})\\}_{k=1}^{N}$ 6 Compute the sample c.o.v. $\widehat{\delta}$ of $\left\\{\frac{\mathbb{I}\\{g_{a}(\bm{x}_{k})\leq 0\\}}{\Phi(-g_{a}(\bm{x}_{k})/\sigma^{(t-1)})}\right\\}_{k=1}^{N}$ 7 if _ $t>t_{max}$ or $\widehat{\delta}\leq\delta_{\epsilon}$_ then 8 Break 9 Determine $\sigma^{(t)}$ through solving Eq.(17) using the alternative weight function defined in Eq.(23). 10 Compute $\widetilde{\mu}^{(t)}_{d,i}$ through Eq.(4.3), for each $d$ and $i$ 11 $t\leftarrow t+1$ 12$T\leftarrow t-1$ 13 Use $h(\bm{x};\widehat{\bm{v}^{(T)}})$ as the IS distribution and calculate the IS estimator $\widehat{p}_{f}$ through Eq.(2) Output: $\widehat{p}_{f}$ Algorithm 2 Bayesian improved cross entropy algorithm ###### Remark 4.1. Instead of using the full posterior distribution, one can also utilize the mode of the posterior distribution $\widetilde{\bm{v}}^{(t)}_{d}$ as a point estimate of $\bm{v}_{d}$, which is known as the maximum a posteriori (MAP) estimator in Bayesian statistics. By definition, the MAP estimator can be expressed as $\widetilde{\bm{v}}_{d}^{(t)}=\operatorname*{arg\,max}\limits_{\bm{v}_{d}\in\mathcal{V}_{d}}f^{\prime\prime}(\bm{v}_{d}).$ (36) Substituting the posterior $f^{\prime\prime}(\bm{v}_{d})$ with the expression in Eq.(33) and then solving the optimization problem in Eq.(36) with a Lagrange multiplier gives us $\displaystyle\widetilde{v}_{d,i}^{(t)}$ $\displaystyle=\frac{N\widehat{v}_{d,i}^{(t)}+\theta_{d,i}-1}{N\sum_{i=1}^{n_{d}}\widehat{v}_{d,i}^{(t)}+\sum_{i=1}^{n_{d}}(\theta_{d,i}-1)}$ $\displaystyle=\frac{N}{N+\sum_{i=1}^{n_{d}}(\theta_{d,i}-1)}\widehat{v}_{d,i}^{(t)}+\frac{\sum_{i=1}^{n_{d}}(\theta_{d,i}-1)}{N+\sum_{i=1}^{n_{d}}(\theta_{d,i}-1)}\frac{\theta_{d,i}-1}{\sum_{i=1}^{n_{d}}(\theta_{d,i}-1)}.$ (37) Comparing Eq.(37) and Eq.(4.3), we find that the MAP estimator $\widetilde{v}_{d,i}^{(t)}$ with prior parameter $\theta_{d,i}>1$ is the same as the Bayesian estimator $\widetilde{\mu}_{d,i}^{(t)}$ with prior parameter $\theta_{d,i}-1$. When $\theta_{d,i}=1$, the MAP estimator $\widetilde{v}_{d,i}^{(t)}$ reduces to the weighted MLE $\widehat{v}_{d,i}^{(t)}$. ###### Remark 4.2. 
If the symmetric Dirichlet prior described in Eq.(35) is applied in the BiCE method, it holds that $\widetilde{\mu}^{(t)}_{d,i}=\frac{N}{N+b\cdot n_{d}}\widehat{v}^{(t)}_{d,i}+\frac{b}{N+b\cdot n_{d}}$ according to Eq.(34), which means that the probability that $X_{d}$ equals $s_{d,i}$ in the posterior predictive distribution is at least $\frac{b}{N+b\cdot n_{d}}$. To avoid this probability being close to zero, the number of states of $X_{d}$, $n_{d}$, cannot be too large. Otherwise, the zero count problem can still occur in the BiCE method.

## 5 Examples

### 5.1 Toy example: system with linear limit state functions

In this example, we consider an LSF $g_{1}(\bm{x})$ that is a linear combination of 50 random variables. The coefficients of the first and the last 10 random variables are set to 2 and 0, respectively, while for the remaining random variables the coefficients are fixed at 1. The LSF reads $g_{1}(\bm{x})=\sum_{d=1}^{10}2\cdot x_{d}+\sum_{d=11}^{40}1\cdot x_{d}+\sum_{d=41}^{50}0\cdot x_{d}.$ (38)

#### 5.1.1 Binary input

We first assume that $\\{X_{d}\\}_{d=1}^{50}$ are independent and identically distributed (i.i.d.) Bernoulli random variables with success probability $10^{-3}$, i.e., the probability that each $X_{d}$ takes the value 1 is $10^{-3}$. We estimate the probability that $g_{1}(\bm{X})\geq 6$ using the BiCE and compare the result with that of the standard iCE approach. The exact solution, obtained through the convolution of two binomial distributions, is $1.387\cdot 10^{-7}$. We fix the sample size $N$ at $500$ and $2,000$ for BiCE and iCE, respectively, and set $\delta_{tar}=\delta_{\epsilon}=1$ for both methods. $5,000$ repeated runs of each estimator are carried out to calculate the relative bias, the sample c.o.v., and the average computational cost (i.e., the average number of calls of the LSF) of the estimator. Also, the influence of different prior parameters on the performance of BiCE is investigated. We apply the symmetric Dirichlet prior defined in Eq.(35) and vary the parameter $b$ therein. We note that when $n_{d}=2$ the Dirichlet distribution degenerates into the Beta distribution. The results are summarized in Table 1.

Table 1: Performance of BiCE for Example 5.1.1.

method | BiCE | BiCE | BiCE | BiCE | BiCE | iCE
---|---|---|---|---|---|---
sample size, $N$ | $500$ | $500$ | $500$ | $500$ | $500$ | $2,000$
prior parm., $b$ | $1$ | $5$ | $25$ | $50$ | $500$ | $/$
relative bias | 0.012 | 0.011 | 0.007 | 0.006 | -0.745 | -0.375
sample c.o.v. | 0.196 | 0.122 | 0.287 | 0.787 | 21.209 | 0.372
comp. cost | $4,660$ | $4,127$ | $2,495$ | $1,500$ | $1,700$ | $19,967$

We can see from the table that the iCE method, even with $N=2,000$ samples per level (four times the number of samples used with BiCE), performs poorly and exhibits a strong negative bias. This is due to the zero count problem described in Subsection 4.2; the failure states of some of the input random variables are ignored throughout the sampling process, which leads to an under-representation of the failure domain. In contrast, BiCE with an uninformative prior, which is the case where $b=1$, works well, resulting in an efficient yet accurate estimator. The performance of the BiCE can be further enhanced through employing a larger $b$, e.g., the prior with $b=5$ outperforms the prior with $b=1$. Meanwhile, selecting an excessive value of $b$ leads to poor results. To further illustrate the zero count problem, Fig. 2 shows the influence of the number of samples per level, $N$, on the c.o.v.
and the relative bias of the BiCE and iCE estimates. In this figure, the blue solid line represents the BiCE method with uninformative prior and target c.o.v. $\delta_{tar}=\delta_{\epsilon}=1$; the red dashed line shows the BiCE method with uninformative prior and target c.o.v. $\delta_{tar}=\delta_{\epsilon}=1.5$. Both variants show a negligible relative bias for all considered $N$. In contrast, the relative bias of the standard iCE method is almost -100% when the number of samples is small, i.e., $N=500$, and gradually approaches 0 as $N$ increases. Obviously, the zero count problem is less likely to happen for larger number of samples per intermediate level. Figure 2: Parameter study for example 5.1. #### 5.1.2 Multi-state input Next we assume that $\\{X_{d}\\}_{d=1}^{50}$ follows the i.i.d categorical distribution with categories 0, 1 and 3. The probabilities assigned to these categories are 0.899, 0.1, and $10^{-3}$. In this subsection, we estimate the probability that $g_{1}(\bm{x})\geq 19$. The exact value of this probability is approximated through crude MCS with $10^{7}$ samples, resulting in $7.3\cdot 10^{-5}$. 5000 repeated runs of BiCE with hyperparameters $\delta_{tar}=\delta_{\epsilon}=1$ and $N=1,000$ are performed. The obtained results are then compared with the standard iCE approach with $\delta_{tar}=\delta_{\epsilon}=1$ and $N=2,000$, and are summarized in Table 2. Similarly to the binary case in Subsection 5.1.1, the BiCE with the uniform prior, that is the case where $b=1$, outperforms the standard iCE approach, and the performance can be further improved through applying a larger $b$. Fig. 3 shows the performance of the iCE and the BiCE methods for varying the number of samples per intermediate level for multi-state input. Similarly to Fig. 2, which is given for binary input, the relative bias of the iCE method approaches zero as $N$ goes from 500 to $5,000$. Table 2: Performance of BiCE for Example 5.1.2. method | BiCE | BiCE | iCE ---|---|---|--- sample size, $N$ | $1,000$ | $1,000$ | $2,000$ prior parm., $b$ | $1$ | $10$ | $/$ relative bias | -0.004 | -0.014 | -0.375 sample c.o.v. | 0.285 | 0.107 | 0.124 comp. cost | $7,484$ | $5,997$ | $15,966$ Figure 3: Parameter study for example 5.2. ### 5.2 Multi-state two-terminal reliability Fig.4 shows the topology of a multi-state two-terminal network with 11 nodes and 20 edges (or arcs), which is motivated by the third example of [43]. The capacity of each edge, that is, the maximum flow that can pass through the edge, takes the same three states 0, 3 and 5 with probability $10^{-3}$, 0.1, and 0.899, respectively. Also, the states of each edge are independent. We are required to estimate the probability that the maximum flow from the source node $s$ to the sink node $t$ is less or equal than a predefined demand $D_{tar}=6$. The true value of the probability is approximately $1.8\cdot 10^{-4}$ according to the crude MCS results (with sample size $2\cdot 10^{6}$), and this value is employed to assess the accuracy of the proposed BiCE approach. We choose $\delta_{tar}=\delta_{\epsilon}=1$ and $N=1,000$ and use an uninformative uniform prior, $\text{Dir}(\cdot;\bm{\theta}=[1,1,1]^{T})$ for each dimension in BiCE, i.e., we set $b=1$. The relative bias, sample c.o.v., and average computational cost of 200 repeated runs of the BiCE are -0.0043, 0.100 and $9,385$, respectively, indicating an efficient yet accurate estimator. In contrast, the theoretical c.o.v. 
of the crude MCS with the same computational cost, i.e., with $9,385$ samples, equals 0.723, which is significantly larger than that of BiCE, and the standard iCE with hyperparameters $\delta_{tar}=\delta_{\epsilon}=1$ and $N=2,000$ seriously underestimates the failure probability. On the other hand, the performance of the BiCE estimator can be improved by setting $b=10$. These results are summarized in Table 3. Table 4 shows the PMF of the final IS distribution in BiCE averaged over 200 repetitions. We can see from this table that the IS distribution of the 14th edge differs the most from the input distribution in BiCE, followed by edges 16/17 and edges 4/13/19. This indicates that these are the most important edges for the failure probability. For the remaining edges, the IS distribution differs only slightly from the input distribution. Figure 4: Topology of the network for example 5.2. Table 3: Performance of BiCE for Example 5.2. method | BiCE | BiCE | MCS | iCE ---|---|---|---|--- sample size, $N$ | $1,000$ | $1,000$ | $9,385$ | $2,000$ prior parm., $b$ | $1$ | $10$ | $/$ | $/$ relative bias | -0.004 | -0.021 | 0 | -0.253 sample c.o.v. | 0.100 | 0.089 | 0.723 | 0.168 comp. cost | $9,385$ | $8,650$ | $9,385$ | $20,400$ Table 4: The PMF of the IS distribution in BiCE for Example 5.2 (averaged over 200 repetitions). state | edge 14 | edge 16 | edge 17 | edge 4 | edge 13 | edge 19 | the rest ---|---|---|---|---|---|---|--- 0 | 0.334 | 0.170 | 0.179 | 0.128 | 0.123 | 0.127 | $\approx 0.002$ 3 | 0.628 | 0.355 | 0.348 | 0.254 | 0.254 | 0.250 | $\approx 0.10$ 5 | 0.037 | 0.475 | 0.473 | 0.618 | 0.623 | 0.624 | $\approx 0.898$ ### 5.3 Power transmission network with cascading failure In this example, we consider the IEEE39 benchmark system, a simplified model of the high-voltage transmission system in the northeast of the U.S.A. The model was first presented in 1970 [44] and has been extensively used as a benchmark model in power system analysis [45, 46, 47]. It consists of 39 buses, including 10 generators and 19 load buses, 34 transmission lines, and 12 transformers. The topology of the network is illustrated in Fig. 5, where all the buses are modeled as nodes and transmission lines together with transformers are modeled as edges, so there are in total 39 nodes and 46 edges in the model. In the figure, orange circles stand for the source nodes, representing the 10 generators, and grey circles represent the terminal nodes, the 19 load buses. Edges are weighted by their reactance values shown on the right-hand side of Fig. 5 and by their capacities shown on the left-hand side. By solving the direct current load flow (DCLF) problem described in the literature (e.g., [48] for the IEEE39 benchmark model), one can derive the actual direct current (DC) flow that passes through each edge of the network. An edge fails when the DC flow exceeds its capacity, and the initial edge failures change the topology of the network, resulting in a new configuration of the flow across the remaining components, which in turn may lead to further overloading of other edges. This phenomenon is also known as cascading failure and is modeled here based on [49]. The system finally reaches an equilibrium state in which no further edges are overloaded. In general, only a part of the original power demand at the load buses (the terminal nodes) can be matched in the equilibrium. We assume that nodes never fail, and that the state of each edge follows an i.i.d. Bernoulli distribution with component failure probability $10^{-3}$.
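The crude MCS reference values quoted in these examples can be reproduced in a few lines. The following is a minimal sketch, assuming i.i.d. discrete inputs and a generic black-box LSF `g`; the function names are illustrative and not part of the original implementation. For the present power-grid example, `g` would wrap the DCLF and cascading-failure simulation that defines the limit state.

```python
import numpy as np

def crude_mcs(g, states, probs, n_dim, n_samples, seed=0):
    """Crude Monte Carlo estimate of a failure probability P(g(X) <= 0) for
    i.i.d. discrete inputs taking values `states` with probabilities `probs`."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(states), size=(n_samples, n_dim), p=probs)
    x = np.asarray(states)[idx]                        # map category indices to values
    n_fail = sum(g(xi) <= 0 for xi in x)
    p_f = n_fail / n_samples
    cov = np.sqrt((1.0 - p_f) / max(p_f * n_samples, 1e-12))  # c.o.v. of the MCS estimator
    return p_f, cov
```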
The LSF is then defined as a function of the system state $\bm{x}$, which is a binary vector, as follows: $g_{3}(\bm{x})=30\%-L(\bm{x}),$ (39) where $L(\bm{x})$ denotes the percentage of the original power demand that cannot be matched when the system reaches the equilibrium after the cascading failure, and the failure probability is defined as the probability of this percentage loss being greater than or equal to the threshold of 30%. A crude MCS procedure with $10^{6}$ samples is used to validate the results of the proposed method, which gives a failure probability of $9.3\cdot 10^{-5}$. We then apply the BiCE algorithm and set the hyperparameters $N=1,000$ and $\delta_{tar}=\delta_{\epsilon}=1.5$. Two different kinds of prior distributions are considered here, the uninformative prior $\text{Beta}(\cdot;\bm{\theta}=[1,1]^{T})$ and an informative prior $\text{Beta}(\cdot;\bm{\theta}=[10,10]^{T})$, which correspond to setting $b=1$ and $b=10$, respectively. The second and third columns of Table 5 show the performance of the BiCE methods after 500 repeated runs. The BiCE estimator with the uninformative prior performs better than the crude MCS with the same computational cost and also better than the standard iCE with $N=2,000$. Although the standard iCE gives a smaller c.o.v. than BiCE with a uniform prior, the obtained estimate is strongly biased, which is likely due to the zero count problem. Moreover, the performance of the BiCE can be significantly enhanced by setting a larger $b$, e.g., $b=10$. The sample c.o.v. of the BiCE with this informative prior $\text{Beta}(\cdot;\bm{\theta}=[10,10]^{T})$ is 0.143, which is the smallest among all four cases. This setting also requires the lowest computational cost. Figure 5: IEEE39 bus system, with edge thicknesses proportional to their capacities (left) and reactances (right). Table 5: Performance of BiCE for Example 5.3. method | BiCE | BiCE | MCS | iCE ---|---|---|---|--- sample size, $N$ | $1,000$ | $1,000$ | $5,746$ | $2,000$ prior parm., $b$ | $1$ | $10$ | $/$ | $/$ relative bias | 0.0049 | -0.009 | 0 | -0.409 sample c.o.v. | 0.5836 | 0.143 | 1.367 | 0.305 comp. cost | $5,746$ | $5,000$ | $5,746$ | $11,870$ ## 6 Conclusions This paper studies the improved cross entropy (iCE) method in the context of network reliability assessment. We distinguish three distributions involved in the iCE procedure: the optimal importance sampling (IS) distribution $p^{*}_{\bm{X}}(\bm{x})$, the suboptimal IS distribution $h(\bm{x};\bm{v}^{*})$, and the chosen IS distribution $h(\bm{x};\widehat{\bm{v}})$. Given a certain parametric family, the ‘distance’ between $p^{*}_{\bm{X}}(\bm{x})$ and $h(\bm{x};\bm{v}^{*})$ is fixed, and the objective of the CE method is to find a good estimator $\widehat{\bm{v}}$ that is close to the optimal but inaccessible CE parameter $\bm{v}^{*}$. For parametric models that belong to the exponential family, $\widehat{\bm{v}}$ is the self-normalized IS estimator of the optimal CE parameter $\bm{v}^{*}$, and hence converges to $\bm{v}^{*}$ as the sample size goes to infinity. Moreover, we show that $\hat{\bm{v}}$ can be viewed as the solution of a weighted maximum likelihood estimation (MLE) problem given the samples obtained at a certain level of the adaptive iCE sampling process. In network reliability assessments with discrete multi-state inputs, the parametric family can be chosen as the independent categorical distribution.
In these approaches, the CE estimator $\widehat{\bm{v}}$ suffers from the ‘zero count problem’, which is essentially an overfitting issue, resulting in a poor IS estimator with a strong negative bias. This paper derives the posterior predictive distribution $h(\bm{x};\widetilde{\bm{v}})$ to update the categorical model instead of the original maximum likelihood estimate $\hat{\bm{v}}$. By introducing the symmetric Dirichlet prior shown in Eq.(35), the probability assigned to each category of the $d$-th component of the parametric model is at least $\frac{b}{b\cdot n_{d}+N}$, where $b$ is the hyperparameter. Hence, the ‘zero count problem’ is less likely to occur. The Bayesian estimator $\widetilde{\bm{v}}$ is consistent, i.e., $\widetilde{\bm{v}}$ converges to $\bm{v}^{*}$ as the sample size goes to infinity. Combining the Bayesian estimator $\widetilde{\bm{v}}$ with the standard iCE procedure, a modified iCE method, called the Bayesian improved cross entropy (BiCE) method, is proposed for network reliability analysis. The efficiency and accuracy of the proposed method are illustrated through a set of numerical examples, from which it is found that BiCE with an appropriately chosen informative prior can significantly enhance the performance of the iCE method. Our numerical investigations indicate that a uniform prior performs only suboptimally. In all examples, we observed significantly better performance with a symmetric, informative prior with $b>1$, e.g., 5 or 10. It should also be stressed that the BiCE estimator can be skewed, which is probably due to the limited capacity of the parametric model because of its assumption of independence. If the suboptimal IS distribution $h(\bm{x};\bm{v}^{*})$ itself is far from the optimal IS distribution $p^{*}_{\bm{X}}(\bm{x})$, then independently of how close $\widetilde{\bm{v}}$ is to $\bm{v}^{*}$, the resulting IS estimator is bound to perform poorly. ## 7 Acknowledgment The first author gratefully acknowledges the financial support of the China Scholarship Council. ## Appendix A Self-normalized importance sampling and cross entropy method In this Appendix, we first introduce the self-normalized IS estimator for estimating the expectation of a general function $H(\bm{x})$ and then prove that, in the CE (or iCE) method, the chosen parameter vector $\hat{\bm{v}}$ is the self-normalized IS estimator of the suboptimal parameter vector $\bm{v}^{*}$ for the exponential parametric family. ### A.1 Self-normalized importance sampling We consider the following expectation of a general function $H(\bm{x})$: $\mu=\mathbb{E}_{\pi}[H(\bm{X})].$ (40) The input distribution $\pi(\bm{x})$ is only known pointwise up to an unknown constant $Z$. That is, $\pi(\bm{x})=\frac{1}{Z}\pi_{u}(\bm{x}),$ (41) where $\pi_{u}(\bm{x})$ is the unnormalized form of $\pi$. In this case, the standard IS estimator of Eq.(2) cannot be applied, since the likelihood ratio $L$, which is defined as the ratio of the input distribution $\pi(\bm{x})$ to the IS distribution $p_{IS}(\bm{x})$, is intractable. Instead, the following self-normalized IS estimator can be applied: $\bar{\mu}=\sum_{k=1}^{N}\frac{W(\bm{x}_{k})}{\sum_{j=1}^{N}W(\bm{x}_{j})}H(\bm{x}_{k}),$ (42) where $W(\bm{x}_{k})\triangleq\frac{\pi_{u}(\bm{x}_{k})}{p_{IS}(\bm{x}_{k})}$. It can be proved that the self-normalized IS estimator is consistent, i.e., the estimator converges to the exact value as $N$ goes to infinity, under the condition that the sample space of the input distribution $\pi(\bm{x})$ is included in that of the IS distribution $p_{IS}(\bm{x})$ [32].
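As a concrete illustration of Eq.(42), the following self-contained sketch estimates a small tail probability of a standard normal target that is treated as known only up to its normalizing constant. The choice of target, IS density, and $H$ is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unnormalized target: standard normal known only up to the constant 1/sqrt(2*pi)
pi_u = lambda x: np.exp(-0.5 * x**2)

# Fully known IS density: normal with mean 1.5 and unit variance
mu_is, N = 1.5, 100_000
x = rng.normal(mu_is, 1.0, size=N)                        # x_k ~ p_IS
p_is = np.exp(-0.5 * (x - mu_is)**2) / np.sqrt(2 * np.pi)

H = (x > 2.0).astype(float)                               # here H(x) = I{x > 2}
W = pi_u(x) / p_is                                        # weights W(x_k) of Eq.(42)
mu_bar = np.sum(W * H) / np.sum(W)                        # self-normalized IS estimate

ess = N / (1.0 + (np.std(W) / np.mean(W)) ** 2)           # ESS approximation, Eq.(43)
print(mu_bar, ess)  # mu_bar is close to the exact P(X > 2) = 0.0228 for large N
```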
The estimator of Eq.(42) can be less efficient than the standard Monte Carlo estimator that samples directly from $\pi(\bm{x})$. The efficiency of the self-normalized estimator with respect to the crude Monte Carlo estimator can be measured by the so-called effective sample size (ESS) [50]. The ESS represents the number of samples that a crude MCS would need in order to yield the same variance as that of the self-normalized IS estimator of Eq.(42). The ESS can be approximated through the following expression [50]: $ESS\approx\frac{N}{1+\delta^{2}(W(\bm{X}))},\quad\bm{X}\sim p_{IS}(\bm{x}),$ (43) where $\delta(W(\bm{X}))$ represents the coefficient of variation of the weights $W(\bm{X})$ in Eq.(42), and $N$ is the sample size of the self-normalized IS estimator of Eq.(42). ### A.2 Cross entropy method with exponential parametric family In this subsection, we aim to find a distribution from the exponential family $h(\bm{x};\bm{v})$ that has the minimal KL divergence with respect to the distribution $\pi$ of Eq.(41). Note that the optimal IS distribution $p^{*}_{\bm{X}}(\bm{x})$ in Eq.(3) and the intermediate target distribution $p^{(t)}$ in Eq.(13) (or Eq.(10)) can be regarded as special cases of $\pi$, with $\pi_{u}$ set equal to $p_{\bm{X}}(\bm{x})\mathbb{I}\\{g(\bm{x})\leq 0\\}$ and $p_{\bm{X}}(\bm{x})\Phi(-g(\bm{x})/\sigma^{(t)})$ (or $p_{\bm{X}}(\bm{x})\mathbb{I}\\{g(\bm{x})\leq\gamma^{(t)}\\}$), respectively. The corresponding CE optimization problem is given as $\bm{v}^{*}=\operatorname*{arg\,max}\limits_{\bm{v}\in\mathcal{V}}\sum_{\bm{x}\in\Omega_{\bm{X}}}\pi_{u}(\bm{x})\ln(h(\bm{x};\bm{v})),$ (44) where $\Omega_{\bm{X}}$ is the sample space of $\bm{X}$. The summation in Eq.(44) is substituted with an integral for continuous $\bm{X}$. The sample counterpart of Eq.(44) reads $\widehat{\bm{v}}=\operatorname*{arg\,max}\limits_{\bm{v}\in\mathcal{V}}\frac{1}{N}\sum\limits_{i=1}^{N}\frac{\pi_{u}(\bm{x}_{i})}{p_{ref}(\bm{x}_{i})}\ln(h(\bm{x}_{i};\bm{v})),\quad\quad\bm{x}_{i}\sim p_{ref}(\cdot).$ (45) The exponential family of distributions is defined as the collection of distributions that have the following general form: $f(\bm{x};\bm{\eta})=a(\bm{x})\text{exp}(\bm{\eta}^{\text{T}}\bm{t}(\bm{x})-A(\bm{\eta})),$ (46) where $\bm{\eta}=(\eta_{1},...,\eta_{m})^{\text{T}}$ is often referred to as the canonical parameter. The statistic $\bm{t}(\bm{x})=(t_{1}(\bm{x}),...,t_{m}(\bm{x}))^{\text{T}}$ is referred to as the sufficient statistic. The function $A(\bm{\eta})$ is known as the cumulant function. In the following, we reparameterize the exponential family with $\bm{v}=\nabla_{\bm{\eta}}A(\bm{\eta})$. By inserting $h(\bm{x};\bm{v})$ into Eq.(44) and setting the gradient of the objective function equal to zero, we get $\bm{v}^{*}_{c}=\mathbb{E}_{\pi}[\bm{t}(\bm{X})]$ [40]. Typically, $\bm{v}^{*}_{c}$ satisfies the constraint $\bm{v}^{*}_{c}\in\mathcal{V}$, and we then have $\bm{v}^{*}=\bm{v}^{*}_{c}$. Note that the explicit expression of $\bm{v}^{*}$ depends on prior knowledge of the distribution $\pi(\bm{x})$, or equivalently on knowledge of the unknown constant $Z$, and therefore cannot be used directly. In this case, the sample counterpart Eq.(45) is solved instead, which gives us $\widehat{\bm{v}}=\frac{\sum_{k=1}^{N}W(\bm{x}_{k})\bm{t}(\bm{x}_{k})}{\sum_{k=1}^{N}W(\bm{x}_{k})}$, where $W(\bm{x}_{k})=\frac{\pi_{u}(\bm{x}_{k})}{p_{ref}(\bm{x}_{k})}$. According to Eq.(42), $\widehat{\bm{v}}$ is the self-normalized estimator of $\bm{v}^{*}$ with $H(\bm{x})$ being the sufficient statistic $\bm{t}(\bm{x})$.
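For the independent categorical family used in this paper, the weighted update above reduces to weighted relative frequencies per category, and the symmetric Dirichlet prior of Eq.(35) adds pseudo-counts to each category. The sketch below is a minimal illustration under these assumptions; the function name and the use of $N$ as the per-level sample size in the smoothing step are illustrative, not the reference implementation.

```python
import numpy as np

def bice_categorical_update(X, W, n_states, b=0.0):
    """Weighted CE update of an independent categorical model with optional
    symmetric-Dirichlet smoothing; b = 0 recovers the plain weighted estimate.

    X        : (N, D) integer samples, X[k, d] in {0, ..., n_states[d] - 1}
    W        : (N,)   unnormalized IS weights W(x_k)
    n_states : number of categories n_d per dimension
    b        : symmetric Dirichlet prior parameter of Eq.(35)
    """
    N, D = X.shape
    params = []
    for d in range(D):
        # weighted relative frequencies: the self-normalized estimate of v*_{d,i}
        v_hat = np.bincount(X[:, d], weights=W, minlength=n_states[d]) / W.sum()
        # posterior-predictive smoothing: each category keeps mass >= b / (N + b*n_d)
        v_tilde = (N * v_hat + b) / (N + b * n_states[d])
        params.append(v_tilde)
    return params
```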
The accuracy of the estimator $\widehat{\bm{v}}$ can be measured by the ESS defined in Eq.(43). ## References * [1] J. Li and W. Liu, _Lifeline engineering systems: Network reliability analysis and aseismic design_. Springer Nature, 2021. * [2] M. O. Ball, C. J. Colbourn, and J. S. Provan, “Network reliability,” _Handbooks in operations research and management science_ , vol. 7, pp. 673–762, 1995. * [3] H. Kumamoto, K. Tanaka, K. Inoue, and E. J. Henley, “Dagger-sampling Monte Carlo for system unavailability evaluation,” _IEEE Transactions on Reliability_ , vol. 29, no. 2, pp. 122–125, 1980. * [4] G. S. Fishman, “A Monte Carlo sampling plan for estimating network reliability,” _Operations Research_ , vol. 34, no. 4, pp. 581–594, 1986\. * [5] ——, “Monte Carlo estimation of the maximal flow distribution with discrete stochastic arc capacity levels,” _Naval Research Logistics (NRL)_ , vol. 36, no. 6, pp. 829–849, 1989. * [6] C. Alexopoulos and G. S. Fishman, “Characterizing stochastic flow networks using the Monte Carlo method,” _Networks_ , vol. 21, no. 7, pp. 775–798, 1991. * [7] T. Elperin, I. Gertsbakh, and M. Lomonosov, “An evolution model for Monte Carlo estimation of equilibrium network renewal parameters,” _Probability in the Engineering and Informational Sciences_ , vol. 6, no. 4, pp. 457–469, 1992. * [8] H. Cancela and M. El Khadiri, “The recursive variance-reduction simulation algorithm for network reliability evaluation,” _IEEE Transactions on Reliability_ , vol. 52, no. 2, pp. 207–212, 2003. * [9] G. Rubino and B. Tuffin, _Rare event simulation using Monte Carlo methods_. John Wiley & Sons, 2009. * [10] E. Zio, _Monte Carlo simulation: The method_. Springer, 2013. * [11] J. Behrensdorf, T.-E. Regenhardt, M. Broggi, and M. Beer, “Numerically efficient computation of the survival signature for the reliability analysis of large networks,” _Reliability Engineering & System Safety_, vol. 216, p. 107935, 2021. * [12] E. Zio and N. Pedroni, “Reliability analysis of discrete multi-state systems by means of subset simulation,” in _Proceedings of the 17th ESREL Conference_ , 2008, pp. 22–25. * [13] Z. I. Botev, P. L’Ecuyer, G. Rubino, R. Simard, and B. Tuffin, “Static network reliability estimation via generalized splitting,” _INFORMS Journal on Computing_ , vol. 25, no. 1, pp. 56–71, 2013. * [14] K. M. Zuev, S. Wu, and J. L. Beck, “General network reliability problem and its efficient solution by subset simulation,” _Probabilistic Engineering Mechanics_ , vol. 40, pp. 25–35, 2015. * [15] Z. I. Botev, P. l’Ecuyer, and B. Tuffin, “Reliability estimation for networks with minimal flow demand and random link capacities,” _arXiv preprint arXiv:1805.03326_ , 2018. * [16] H. A. Jensen and D. J. Jerez, “A stochastic framework for reliability and sensitivity analysis of large scale water distribution networks,” _Reliability Engineering & System Safety_, vol. 176, pp. 80–92, 2018. * [17] J. Chan, I. Papaioannou, and D. Straub, “An adaptive subset simulation algorithm for system reliability analysis with discontinuous limit states,” _Reliability Engineering & System Safety_, p. 108607, 2022. * [18] S. Bulteau and M. El Khadiri, “A new importance sampling Monte Carlo method for a flow network reliability problem,” _Naval Research Logistics (NRL)_ , vol. 49, no. 2, pp. 204–228, 2002. * [19] K.-P. Hui, N. Bean, M. Kraetzl, and D. 
Kroese, “Network reliability estimation using the tree cut and merge algorithm with importance sampling,” in _Proceedings of the 4th International Workshop on Design of Reliable Communication Networks_. IEEE, 2003, pp. 254–262. * [20] Z. Wang and J. Song, “Cross-entropy-based adaptive importance sampling using von mises-fisher mixture for high dimensional reliability analysis,” _Structural Safety_ , vol. 59, pp. 42–52, 2016. * [21] N. L. Dehghani, S. Zamanian, and A. Shafieezadeh, “Adaptive network reliability analysis: Methodology and applications to power grid,” _Reliability Engineering & System Safety_, vol. 216, p. 107973, 2021. * [22] R. Y. Rubinstein, “Optimization of computer simulation models with rare events,” _European Journal of Operational Research_ , vol. 99, no. 1, pp. 89–112, 1997. * [23] I. Papaioannou, S. Geyer, and D. Straub, “Improved cross entropy-based importance sampling with a flexible mixture model,” _Reliability Engineering & System Safety_, vol. 191, p. 106564, 2019. * [24] K. P. Murphy, _Machine learning: A probabilistic perspective_. MIT press, 2012. * [25] K.-P. Hui, N. Bean, M. Kraetzl, and D. P. Kroese, “The cross-entropy method for network reliability estimation,” _Annals of Operations Research_ , vol. 134, no. 1, p. 101, 2005. * [26] T. Elperin, I. Gertsbakh, and M. Lomonosov, “Estimation of network reliability using graph evolution models,” _IEEE Transactions on Reliability_ , vol. 40, no. 5, pp. 572–581, 1991. * [27] J. Li and J. He, “A recursive decomposition algorithm for network seismic reliability evaluation,” _Earthquake engineering & structural dynamics_, vol. 31, no. 8, pp. 1525–1539, 2002. * [28] G. Hardy, C. Lucet, and N. Limnios, “K-terminal network reliability measures with binary decision diagrams,” _IEEE Transactions on Reliability_ , vol. 56, no. 3, pp. 506–515, 2007. * [29] R. Paredes, L. Dueñas-Osorio, K. S. Meel, and M. Y. Vardi, “Principled network reliability approximation: A counting-based approach,” _Reliability Engineering & System Safety_, vol. 191, p. 106472, 2019. * [30] H. Miao, W. Liu, and J. Li, “Seismic reliability analysis of water distribution networks on the basis of the probability density evolution method,” _Structural Safety_ , vol. 86, p. 101960, 2020. * [31] J.-E. Byun and J. Song, “Generalized matrix-based bayesian network for multi-state systems,” _Reliability Engineering & System Safety_, vol. 211, p. 107468, 2021. * [32] A. B. Owen, _Monte Carlo theory, methods and examples_. Standford, 2013. * [33] H. O. Madsen, S. Krenk, and N. C. Lind, _Methods of structural safety_. Courier Corporation, 2006. * [34] S.-K. Au and J. L. Beck, “A new adaptive importance sampling scheme for reliability calculations,” _Structural safety_ , vol. 21, no. 2, pp. 135–158, 1999. * [35] R. Y. Rubinstein and D. P. Kroese, _Simulation and the Monte Carlo method_. John Wiley & Sons, 2016, vol. 10. * [36] S.-K. Au and J. Beck, “Important sampling in high dimensions,” _Structural safety_ , vol. 25, no. 2, pp. 139–163, 2003. * [37] S. Boyd and L. Vandenberghe, _Convex optimization_. Cambridge university press, 2004. * [38] J. C. Chan and D. P. Kroese, “Improved cross-entropy method for estimation,” _Statistics and computing_ , vol. 22, no. 5, pp. 1031–1040, 2012. * [39] S. Geyer, I. Papaioannou, and D. Straub, “Cross entropy-based importance sampling using gaussian densities revisited,” _Structural Safety_ , vol. 76, pp. 15–27, 2019. * [40] D. P. Kroese, T. Taimre, and Z. I. Botev, _Handbook of Monte Carlo methods_. 
John Wiley & Sons, 2013, vol. 706. * [41] F. Uribe, I. Papaioannou, Y. M. Marzouk, and D. Straub, “Cross-entropy-based importance sampling with failure-informed dimension reduction for rare event simulation,” _SIAM/ASA Journal on Uncertainty Quantification_ , vol. 9, no. 2, pp. 818–847, 2021. * [42] J. Chan, I. Papaioannou, and D. Straub, “Improved cross entropy-based importance sampling for network reliability assessment,” in _Proceedings of the 13th International Conference on Structural Safety & Reliability_. ICOSSAR, 2022. * [43] J. E. Ramirez-Marquez and D. W. Coit, “A monte-carlo simulation approach for approximating multi-state two-terminal reliability,” _Reliability Engineering & System Safety_, vol. 87, no. 2, pp. 253–264, 2005. * [44] G. Bills, “On-line stability analysis study, rp 90-1,” North American Rockwell Information Systems Co., Anaheim, CA (USA), Tech. Rep., 1970. * [45] T. Athay, R. Podmore, and S. Virmani, “A practical method for the direct analysis of transient stability,” _IEEE Transactions on Power Apparatus and Systems_ , no. 2, pp. 573–584, 1979. * [46] A. Scherb, L. Garrè, and D. Straub, “Reliability and component importance in networks subject to spatially distributed hazards followed by cascading failures,” _ASCE-ASME Journal of Risk and Uncertainty in Engineering Systems, Part B: Mechanical Engineering_ , vol. 3, no. 2, 2017. * [47] H. Rosero-Velásquez and D. Straub, “Representative natural hazard scenarios for risk assessment of spatially distributed infrastructure systems,” in _The 29th European Safety and Reliability Conference_ , 2019, pp. 1–7. * [48] J. J. Grainger, _Power system analysis_. McGraw-Hill, 1999. * [49] P. Crucitti, V. Latora, and M. Marchiori, “Model for cascading failures in complex networks,” _Physical Review E_ , vol. 69, no. 4, p. 045104, 2004\. * [50] A. Kong, “A note on importance sampling using standardized weights,” _University of Chicago, Dept. of Statistics, Tech. Rep_ , vol. 348, 1992.
# Global atmospheric data assimilation with multi-modal masked autoencoders Thomas J. Vandal Zeus AI Kate Duffy Zeus AI Daniel McDuff Zeus AI Yoni Nachmany Zeus AI Chris Hartshorn Zeus AI (July 2024) ###### Abstract Global data assimilation enables weather forecasting at all scales and provides valuable data for studying the Earth system. However, the computational demands of physics-based algorithms used in operational systems limit the volume and diversity of observations that are assimilated. Here, we present “EarthNet”, a multi-modal foundation model for data assimilation that learns to predict a global gap-filled atmospheric state solely from satellite observations. EarthNet is trained as a masked autoencoder that ingests a 12-hour sequence of observations and learns to fill missing data from other sensors. We show that EarthNet performs a form of data assimilation, producing a global 0.16∘ reanalysis dataset of 3D atmospheric temperature and humidity in a fraction of the time required by operational systems. It is shown that the resulting reanalysis dataset reproduces climatology by evaluating a 1 hour forecast background state against observations. We also show that our 3D humidity predictions outperform MERRA-2 and ERA5 reanalyses by 10% to 60% between the middle troposphere and lower stratosphere (5 to 20 km altitude), and that our 3D temperature and humidity are statistically equivalent to the Microwave integrated Retrieval System (MiRS) observations at nearly every level of the atmosphere. Our results indicate significant promise in using EarthNet for high-frequency data assimilation and global weather forecasting. ###### keywords: Data assimilation, multi-modal foundation models, microwave sounders, infrared imagers, weather forecasting ## Introduction Observations of the Earth’s atmosphere, land, and ocean are invaluable for studying the planet and predicting future conditions. These observations are ingested by data assimilation (DA) systems operated on some of the world’s largest supercomputers by organizations like the National Aeronautics and Space Administration (NASA), the National Oceanic and Atmospheric Administration (NOAA), and the European Centre for Medium-Range Weather Forecasts (ECMWF). DA in numerical weather prediction (NWP) merges observations with forecasts through statistical methods to produce an optimal initial state of the atmosphere. The quantity, quality, and diversity of observations are accelerating as a result of combined public and private sector investments and technological advancements [1]. While these data improvements provide us with more overall information, most of it is never used to inform weather forecasts. Operational DA systems ingest a wide variety of observations to produce analysis-ready data and to initialize NWP. These petabyte/exabyte scale data processing steps translate raw observations from satellites into geophysical variables that are then ingested by variational DA methods. Every six hours, DA systems at NOAA [2] and ECMWF [3] collect the most recent observations and perform this process to generate new forecasts. The process takes 3-5 hours to complete, with the majority of the computational cost coming from applying 3D/4D variational DA. Weather forecasts are then distributed many hours after the observations occurred, creating a large gap between reality and forecast. Forecast data produced by NOAA and ECMWF are used around the world for decision making and emergency response.
In recent years, numerous studies have shown impressive results replacing physics-based numerical weather prediction with AI models, both in terms of improved computational efficiency _and_ accuracy [4, 5, 6, 7, 8]. Improvements are found by learning model dynamics from reanalysis data that are not well represented in the physics-based NWP models. The efficiency and accuracy gains for short-term forecasting indicate substantial potential for long-term adoption of the technology. However, current operational AI forecasts depend on global DA systems that are not learned from data. This presents a large gap between observations and the weather forecasts released to users. A more efficient DA system that learns from observations and can be applied in near real-time would generate more accurate and higher resolution representations of the environment, leading to improved forecasts. As highlighted in the National Academy’s Decadal Survey [9, 10], modeling the 3D atmospheric structure of temperature, humidity, and winds is a key challenge in current NWP systems. A solution would be “transformative to weather and air quality forecasting” [10]. Improving atmospheric profile accuracy and resolution is important for both global scale energy balance and local short- term severe events. The key challenge is obtaining complete vertical profile observations, globally, in all-sky conditions, at resolutions applicable to forecasting and representing these well in a geophysical model. Here we present an approach to ingest general weather observations aimed at producing more timely and accurate 3D profiles of temperature and humidity using a pure AI approach. Modality / Sensor | Orbit | Channels / Variables ---|---|--- GOES-16 Advanced Baseline Imager (ABI) [24] | G | Thermal infrared 10 bands GOES-18 Advanced Baseline Imager (ABI) [24] | G | Thermal infrared 10 bands Geostationary Korea Multi-Purpose Satellite - 2A (GK2A) [25] | G | Thermal infrared 10 bands Spinning Enhanced Visible Infra-Red Imager (SEVIRI) [26] | G | Thermal infrared 8 bands Advanced Technology Microwave Sounder (ATMS) [14] | L | Brightness temperature 22 bands Visible Infrared Imaging Radiometer Suite (VIIRS) [15] | L | Thermal infrared 7 bands Shuttle Radar Topography Mission (SRTM) [16] | S | Elevation, land-sea mask Microwave integrated Retrieval System temperature [17] | L | 3D temperature (37 levels) Microwave integrated Retrieval System humidity [17] | L | 3D specific humidity (37 levels) Table 1: Sensor modalities assimilated in EarthNet comprise a diversity of spectra and orbital perspectives. Along with static topographical data, these sources provide complementary views of atmospheric and surface states and enable the model to learn complex relationships across space, time, and modality. G =Geostationary (GEO), L =Low Earth Orbit (LEO), S =Static. Figure 1: EarthNet ingests multi-dimensional Earth observations from varying orbits and spectra. Sensor modalities in the first four rows have 12 hours of input sequence with a number of channels. The last row is a static elevation variable defining the topography. Sub-images are extracted spatially of size $(144,144)$ with a token size of $(16,16)$. Tokens are encoded with a vision transformer and embedded per sensor modality. After tokens pass through the backbone transformer, each decoder sees all context tokens. This process is applied as a moving window across the image and reassembled using Hann windows. 
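The moving-window reassembly mentioned in the caption can be illustrated with a short, generic sketch. The tile size of 144 pixels follows the caption; the specific blending scheme shown here, weighted averaging of overlapping tiles with a 2D Hann window, is an assumption about the details and not the published EarthNet code.

```python
import numpy as np

def hann_blend(tiles, origins, out_shape, tile=144):
    """Blend overlapping tile predictions into one field using 2D Hann weights.

    tiles     : list of (tile, tile) prediction arrays
    origins   : list of (row, col) upper-left corners of each tile
    out_shape : (H, W) shape of the reassembled composite
    """
    w = np.outer(np.hanning(tile), np.hanning(tile)) + 1e-8  # keep weights nonzero
    acc = np.zeros(out_shape)
    norm = np.zeros(out_shape)
    for pred, (r, c) in zip(tiles, origins):
        acc[r:r + tile, c:c + tile] += pred * w
        norm[r:r + tile, c:c + tile] += w
    return acc / np.maximum(norm, 1e-12)  # pixels covered by no tile remain zero
```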
## EarthNet We propose Multi-modal Masked Autoencoders (MMAE) as an alternative approach to DA that includes end-to-end learning of observational and forward operators from historical observations. Masked Autoencoders (MAE) are widely used in large language and vision models to learn expressive representations applicable to downstream tasks [18]. MAEs define the learning problem such that a subset of input features predicts the remaining features, typically using a transformer architecture and trained to maximize the model’s marginal likelihood [30, 20]. Applied to Earth observations, MAEs can be used to gap- fill areas with sparse or missing observations if given sufficient context from different sensors and nearby in space-time. Recent work on MMAE presents a scalable encoder-decoder transformer methodology applicable to diverse modalities with steerable content generation [21, 22]. Here we extend the MMAE to Earth observations with temporal sequences, multi-spectral imagery, and data gaps. A key feature of our approach is the strict dependence on learning directly from observations, either from space or ground systems, without any reanalysis datasets. Nine datasets listed in Table 1 are used to train the model including geostationary (GEO), low earth orbit satellite (LEO) observations, and static topographical data. Data is interpolated and reprojected to hourly, 0.16∘ resolution composites which is higher resolution than current operational analysis products. LEO sensors observe every point on Earth approximately twice per day, giving us four ATMS and two VIIRS samples, while GEO is available every hour. A composited dataset of two years is collected and transformed from $\sim$500 terabytes to 2 terabytes of clean training data. Data volume is reduced as we downsample sensors to the lowest common spatial resolution of ATMS near 0.16∘. Each modality is a high-dimensional tensor with space, time, and channel dimensions, much like a video. The combination of four geostationary sensors covers 60% of the pixels at all times. Each LEO sensor captures approximately 10% of the pixels, with 20% from ATMS soundings. Given the range of modalities, these ratios are well within the capabilities of masked autoencoders. Figure 2: Background departures of EarthNet’s temperature (a-f) and specific humidity (g-l) against observations temporally averaged across the spatial and vertical dimensions. The top row shows MiRS average temperature and humidity values across February and March 2024. EarthNet’s 1 hour background state average temperature and humidity are shown in the second row. The third row shows error departures as computed from 1 hour background state predictions minus the MiRS observation. EarthNet, outlined in Figure 1, treats each sensor as a separate modality with a unified transformer architecture and modality specific encoder/decoders as applied in [21]. Modalities with a time dimension are tokenized frame by frame with vision transformers, enabling the model to ignore patches with missing data. Tokens from each modality are concatenated and passed through the multi- modal transformer. The predicted tokens are used as context to decode complete information from each modality. Within, each modality decoder, cross-modality embeddings are added to provide context for the output prediction. This process makes it possible to generate every sensor from cross sensor contextual information. 
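A minimal sketch of the masking objective implied above: a small random set of observed tokens serves as encoder context, the remaining observed tokens are reconstruction targets, and pixels with missing data are excluded from the loss. The value of 128 context tokens follows the training description given below; all function names are illustrative.

```python
import numpy as np

def sample_context(valid_token_mask, n_context=128, rng=None):
    """Split observed tokens into a small random context set and a target set."""
    rng = np.random.default_rng(rng)
    valid = np.flatnonzero(valid_token_mask)            # tokens that contain real data
    ctx = rng.choice(valid, size=min(n_context, valid.size), replace=False)
    tgt = np.setdiff1d(valid, ctx)                      # everything else is predicted
    return ctx, tgt

def masked_mse(pred, target, observed):
    """MSE over observed pixels only; gaps (observed == 0) do not contribute."""
    se = (pred - target) ** 2 * observed
    return se.sum() / max(observed.sum(), 1)
```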
It is expected that as more sensors and modalities are added, the context and decoded results will continue to improve. The model is trained by randomly selecting a small subset (128, or 1.4%) of tokens, predicting the remainder, and applying a simple mean square error (MSE) loss. This process naturally ignores areas of missing data and fills the pixels with context from previous timesteps and other satellites. Model training is performed in stages on 16 Nvidia V100 graphics processing units (GPUs) for $\sim$21 days. These stages included pretraining tokenizers, training on level 1 sensors, and adding MiRS data with less temporal coverage. Short-term 1 to 6 hour forecasts are directly available from EarthNet by filling future timesteps, which will enable us to evaluate the forward process. The learned dependencies across space, time, and modalities are a powerful feature that will enable seamless integration with additional sensors. Figure 3: Sensitivity analysis and sensor importance measured by relative mean absolute errors over the land and ocean. (Top row) Each sensor is dropped individually while reconstructing all modalities and comparing errors to the baseline of all modalities included. (Bottom row) One sensor is taken as input to reconstruct all, with errors computed as above. The analysis is split into the land (left) and ocean (right) regions to delineate surface types. ## Verification Methods EarthNet is evaluated for the ability to reconstruct missing information and generate 3D atmospheric temperature and humidity profiles. A test set of February and March 2024 is held out from training and used for each experiment. We present a series of analyses of the model’s capabilities. First, an evaluation of background state departure errors is presented. A sensor sensitivity analysis then tests the importance of each modality for reconstruction. Finally, comparisons with radiosonde observations unseen by EarthNet, alongside ERA5 [3] and MERRA-2 [23], indicate statistically significant global reanalysis skill. ### Background departures DA performance is commonly evaluated by comparing the model’s background state to observations. In traditional DA, this is done by computing the error between a forecast and observations before those observations are assimilated. This process increases the independence between inputs and analysis data for a fair evaluation. Here we perform this experiment by removing the last frame from each input modality and then forecasting that frame. This is applied for every time step in the test set and saved as the model background state. Background departures are computed as the mean absolute error (MAE) between the one hour forecast and observations unseen during inference. The results presented in Figure 2 show MiRS temperature and humidity statistics at 500 hectopascals (hPa) across the spatial and vertical dimensions. Mean climatologies of MiRS and EarthNet over the test period are computed and shown in the first two rows. It is seen that EarthNet reconstructs the spatial and vertical distribution of both temperature and humidity effectively. Due to the test set months, the highest temperatures are seen near the surface just below the equator. This also corresponds to higher humidity values in the tropics. Background departures are shown on the bottom row. Vertically, temperature errors largely follow the expected distribution, with the largest errors near the surface.
EarthNet is shown to capture the overall global distribution of specific humidity, both vertically and spatially, though the spatial distribution of humidity is smoother in EarthNet, indicating degraded resolution. Vertically, EarthNet captures the boundary layer height and distribution well. Background departures are mostly limited to 1 g/kg and are largest in high-humidity tropical regions. Overall, we see that EarthNet captures the global spatial and vertical distributions of temperature and humidity. Detailed statistics of bias and MAE for all modalities and channels can be found in the Supplement. ### Sensor sensitivity analysis By exploring the model’s ability to perform many-to-many and one-to-many prediction, we gain insight into the importance of each sensor for representing the atmosphere’s state. Here we perform a sensor sensitivity test by systematically removing one or more sensors and reconstructing all of them. In this way we are able to remove information at inference and test the model’s ability to reproduce that observation. In the first experiment, we remove one sensor at a time from the input and reconstruct all. This process is executed for every sensor and compared to the baseline of no sensors dropped. Mean absolute errors of observations minus reconstructions are computed over land and ocean regions. Results in Figure 3 (top row) show that as sensors are removed, the mean absolute error generally increases, indicating that more sensors would improve performance. We find that ATMS has a strong effect on predicting humidity and temperature, while infrared imagery is generally less important. Errors over the ocean regions are slightly larger than over land, most notably for the infrared data. It is also found that the overlapping GOES-16 and GOES-18 satellites lose minimal performance, suggesting duplicate information content. In contrast, GK2A and SEVIRI have little spatial overlap with other GEO sensors and have larger reconstruction errors. VIIRS has substantial overlap with GEO sensors outside of the polar regions and is found to have relatively low error when removed. Secondly, a one-to-many experiment is performed by predicting all sensors from a single input sensor. This experiment tests the total amount of information a given sensor provides for reconstructing all other sensors and the 3D structure. Results in Figure 3 (bottom row) highlight that ATMS contains a substantial amount of information for reconstructing all modalities, including all of the infrared sensors. This is found over both land and ocean regions, indicating that much of the information in infrared imagery can be represented by microwave sounders. Infrared sensors show the capability of reconstructing themselves but have limited skill in representing the 3D structure. The increased uncertainty in the upper troposphere for ATMS and decreased uncertainty for infrared is attributed to errors in ATMS limb correction. Overall, these experiments show that microwave sounders provide important information for modeling the atmosphere. Generally, however, infrared imagery is of higher resolution than sounders due to the sensing technique, and in this study the infrared observations are downsampled from 750 m/2 km to 16 km. While the utility of infrared imagery is not as apparent in this analysis, its increased temporal frequency is expected to aid in the temporal interpolation of ATMS.
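The leave-one-sensor-out protocol above can be summarized in a few lines. The sketch below assumes a hypothetical `model` callable that returns reconstructions for every modality given an arbitrary subset of inputs; the names and the ratio-to-baseline definition of "relative MAE" are illustrative assumptions, not the published EarthNet code.

```python
import numpy as np

def dropout_relative_mae(model, batch, modalities, region_mask):
    """Relative MAE (sensor withheld / baseline) per reconstructed modality.

    `batch` maps modality -> observation array with NaN marking missing data;
    `region_mask` is a boolean mask selecting land or ocean pixels.
    """
    def mae(pred, obs):
        keep = region_mask & ~np.isnan(obs)
        return np.abs(pred - obs)[keep].mean()

    baseline = model(batch)                             # all sensors provided
    rel = {}
    for drop in modalities:
        recon = model({m: v for m, v in batch.items() if m != drop})
        rel[drop] = {m: mae(recon[m], batch[m]) / mae(baseline[m], batch[m])
                     for m in modalities}
    return rel
```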
Figure 4: Comparison with radiosonde soundings shows that EarthNet’s performance is similar to MiRS observations in matching radiosonde observations of temperature (a) and humidity (b). EarthNet outperforms MERRA-2 and ERA5 reanalyses for humidity predictions between 50 and 500 hPa, without the benefit of having assimilated radiosonde data. Scatter plots show a strong linear relationship between ERA5/MERRA-2 reanalysis and radiosonde temperature observations (c). Relative humidity scatter plots show more normally distributed errors for EarthNet and MiRS at high humidity values (d). ### Global reanalysis verification We compared EarthNet’s reanalysis data to those of the operational models ERA5 (from ECMWF) and MERRA-2 (from NASA) and to Microwave integrated Retrieval System (MiRS) retrievals, using radiosonde soundings of temperature and relative humidity as ground truth (Figure 4; radiosonde results table). Radiosondes are weather balloons designed to directly measure the atmospheric vertical profile at a single location. We found that EarthNet errors are similar in magnitude and distribution to MiRS, indicating that EarthNet is successful at reconstructing MiRS. Upon inspection of the temperature and relative humidity error distributions over the height of the atmosphere, we find there is no statistically significant difference ($p\leq 0.05$) between EarthNet’s errors and MiRS errors at the majority of pressure levels. Like MiRS, EarthNet achieves 10-60% lower error for humidity between 500 and 50 hPa than ERA5 or MERRA-2. However, we found that EarthNet has a mean absolute error of temperature that is 1.28 K (1.11 K) greater than ERA5 (MERRA-2) over the full vertical distribution. We note that temperature and humidity errors are relatively consistent between ERA5 and MERRA-2, with ERA5 performing slightly better, perhaps due to the higher spatial resolution of ERA5 (0.25∘) compared to MERRA-2 (0.5∘ x 0.625∘). The scatter plots in Figure 4c and 4d show the correspondence between the reanalysis/satellite products and radiosonde observations. For temperature, MERRA-2 and ERA5 show a strong linear fit to radiosonde measurements ($R>0.99$), while EarthNet and MiRS have similar and slightly lower correlation ($R=0.94$). For relative humidity, ERA5 and MERRA-2 exhibit a bias toward higher humidity relative to radiosonde soundings, whereas EarthNet and MiRS show more normally distributed errors. EarthNet’s relative humidity has a mean absolute error of 8.337%, which is comparable to that of the numerical models (9.937% for MERRA-2 and 7.746% for ERA5), and a minimal bias of $<$1%, compared to a 6-7% bias from the numerical models. These results suggest that EarthNet is successful at reconstructing MiRS, while highlighting inconsistencies between reference datasets. Previous studies have noted the unreliability of radiosonde moisture soundings at the 100 hPa level and the variability between ERA5, the Global Data Assimilation System (GDAS), and MiRS, while concluding that MiRS, and therefore EarthNet, is “well within the uncertainty” of all of the data taken together [17]. It is also worthwhile to note that radiosonde data are assimilated into DA systems with low uncertainty assignments, potentially drawing ERA5 and MERRA-2 reanalysis disproportionately close to radiosonde observations. ## Discussion While incremental NWP improvements have driven slow and steady advancements for decades, recent progress in ML has disrupted this trend and, in some cases, achieved greater forecast accuracy than numerical models.
Until now, almost all ML achievements in weather modeling have focused on forecasting and retained a dependence on NWP reanalysis for training and NWP DA for forecast initialization. We present a versatile model capable of using a masked modeling objective across a range of strictly observational input and output modalities. Without any dependence on numerical models, EarthNet is a multi-modal generative model that can be conditioned on any arbitrary subset of input modalities to produce complete global reanalyses at 0.16∘. We show that the model can gap-fill missing observations, perform a one-hour forecast, and extend to more modalities. The methodology outlined has the potential to be extended to a wide range of applications, varying resolutions, and modality types. One of the main limitations of our current architecture is the high memory requirement in both training and inference. Without further optimization, the high dimensionality of remote sensing data will limit the number of potential modalities. This issue can be addressed by compressing the data into tokens prior to multi-modal training, as shown in [22]. Secondly, due to the encoding transformer token size, the current model does not retain all high-frequency information, leading to overly smooth predictions. EarthNet represents a significant departure from traditional DA techniques and showcases the potential of multimodal ML for simulation of Earth system processes. This advancement arrives as Earth science is experiencing rapid progress in acquiring scientific observations, in large part due to satellite-based observations that provide a global perspective. We believe that multi-modal foundation modeling approaches like EarthNet are poised to make DA possible in near real-time, leading to more timely and accurate short-term weather forecasts. ## Acknowledgements The computing used to train this model was provided by the NASA Center for Climate Simulation (NCCS) at the Goddard Space Flight Center, accessed through the SBIR award 80NSSC23CA169. This research also used resources of the National Energy Research Scientific Computing Center (NERSC), a DOE Office of Science User Facility using NERSC award SBIR-ERCAP0030185. We thank NOAA, NASA, European Space Agency (ESA), and the Korean Meteorology Agency (KMA) for data access. Software developed for this study leveraged open source projects including PyTorch and Pangeo. We thank our collaborators who provided comments on this work: Daniel Holdaway, Tsengdar Lee, Ramakrishna Nemani, and Max Wilson. ## References * [1] Lucas N Joppa “The case for technology investments in the environment” In _Nature_ 552.7685 Nature Publishing Group UK London, 2017, pp. 325–328 * [2] Matthew Rodell et al. “The global land data assimilation system” In _Bulletin of the American Meteorological society_ 85.3 American Meteorological Society, 2004, pp. 381–394 * [3] Hans Hersbach et al. “The ERA5 global reanalysis” In _Quarterly Journal of the Royal Meteorological Society_ 146.730 Wiley Online Library, 2020, pp. 1999–2049 * [4] Peter Bauer, Alan Thorpe and Gilbert Brunet “The quiet revolution of numerical weather prediction” In _Nature_ 525.7567 Nature Publishing Group UK London, 2015, pp. 47–55 * [5] Remi Lam et al. “Learning skillful medium-range global weather forecasting” In _Science_ 382.6677 American Association for the Advancement of Science, 2023, pp. 1416–1421 * [6] Kaifeng Bi et al.
* [7] Yuchen Zhang et al. “Skilful nowcasting of extreme precipitation with NowcastNet” In _Nature_ 619.7970 Nature Publishing Group UK London, 2023, pp. 526–532
* [8] Imme Ebert-Uphoff and Kyle Hilburn “The outlook for AI weather prediction” Nature Publishing Group UK London, 2023
* [9] National Research Council et al. “Earth science and applications from space: national imperatives for the next decade and beyond” National Academies Press, 2007
* [10] Space Studies Board, Engineering National Academies of Sciences and Medicine “Thriving on our changing planet: A decadal strategy for Earth observation from space” National Academies Press, 2019
* [11] Timothy J Schmit et al. “A closer look at the ABI on the GOES-R series” In _Bulletin of the American Meteorological Society_ 98.4 American Meteorological Society, 2017, pp. 681–698
* [12] Sung-Rae Chung et al. “Meteorological products of Geo-KOMPSAT 2A (GK2A) satellite” In _Asia-Pacific Journal of Atmospheric Sciences_ 56 Springer, 2020, pp. 185–185
* [13] DMA Aminou “MSG’s SEVIRI instrument” In _ESA Bulletin (0376-4265)_, 2002, pp. 15–17
* [14] F Weng et al. “Introduction to Suomi national polar-orbiting partnership advanced technology microwave sounder for numerical weather prediction and tropical cyclone applications” In _Journal of Geophysical Research: Atmospheres_ 117.D19 Wiley Online Library, 2012
* [15] RE Murphy et al. “The visible infrared imaging radiometer suite” In _Earth Science Satellite Remote Sensing: Vol. 1: Science and Instruments_ Springer, 2006, pp. 199–223
* [16] Tom G Farr et al. “The shuttle radar topography mission” In _Reviews of geophysics_ 45.2 Wiley Online Library, 2007
* [17] Sid-Ahmed Boukabara et al. “MiRS: An all-weather 1DVAR satellite data assimilation and retrieval system” In _IEEE Transactions on Geoscience and Remote Sensing_ 49.9 IEEE, 2011, pp. 3249–3272
* [18] Kaiming He et al. “Masked autoencoders are scalable vision learners” In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_, 2022, pp. 16000–16009
* [19] Pablo Moreno-Muñoz, Pol Garcia Recasens and Søren Hauberg “On masked pre-training and the marginal likelihood” In _Advances in Neural Information Processing Systems_ 36, 2023, pp. 79781–79791
* [20] Colorado J Reed et al. “Scale-mae: A scale-aware masked autoencoder for multiscale geospatial representation learning” In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, 2023, pp. 4088–4099
* [21] Roman Bachmann, David Mizrahi, Andrei Atanov and Amir Zamir “Multimae: Multi-modal multi-task masked autoencoders” In _European Conference on Computer Vision_, 2022, pp. 348–367 Springer
* [22] David Mizrahi et al. “4M: Massively Multimodal Masked Modeling” In _Advances in Neural Information Processing Systems_ 36, 2024
* [23] Ronald Gelaro et al. “The modern-era retrospective analysis for research and applications, version 2 (MERRA-2)” In _Journal of climate_ 30.14 American Meteorological Society, 2017, pp. 5419–5454
# Enhancing Cross-task Transferability of Adversarial Examples with Dispersion Reduction Yunhan Jia, Yantao Lu*†, Senem Velipasalar†, Zhenyu Zhong, Tao Wei Baidu X-Lab, †Syracuse University {jiayunhan, yantaolu, edwardzhong<EMAIL_ADDRESS><EMAIL_ADDRESS>Equal contribution ###### Abstract Neural networks are known to be vulnerable to carefully crafted adversarial examples, and these malicious samples often transfer, i.e., they maintain their effectiveness even against other models. With great efforts delved into the transferability of adversarial examples, surprisingly, less attention has been paid to its impact on real-world deep learning deployment. In this paper, we investigate the transferability of adversarial examples across a wide range of real-world computer vision tasks, including image classification, explicit content detection, optical character recognition (OCR), and object detection. It represents the cybercriminal’s situation where an ensemble of different detection mechanisms need to be evaded all at once. We propose practical attack that overcomes existing attacks’ limitation of requiring task-specific loss functions by targeting on the “dispersion” of internal feature map. We report evaluation on four different computer vision tasks provided by Google Cloud Vision APIs to show how our approach outperforms existing attacks by degrading performance of multiple CV tasks by a large margin with only modest perturbations ($l_{\infty}\leq 16$). ## 1 Introduction Recent research in adversarial learning has brought the weaknesses of deep neural networks (DNNs) to the spotlights of security and machine learning studies. Given a deep learning model, it is easy to generate adversarial examples (AEs), which are close to the original but are misclassified by the model [9, 24]. More importantly, their effectiveness sometimes _transfer_ , which may severely hinder DNN based applications especially in security critical scenarios [16, 10, 25]. While such vulnerabilities are alarming, little attention has been paid on the realistic threat model of commercial or proprietary vision-based detection systems against real-world cybercriminals, which turn out to be quite different from those intensively studied by aforementioned research. Figure 1: Real-world computer vision systems deployed in safety- and security- critical scenarios usually employ an ensemble of detection mechanisms that are opaque to attackers. Cybercriminals are required to generate adversarial examples that transfer across tasks to maximize their chances of evading the entire detection systems. Deployment. Computer vision (CV) based detection mechanisms have been extensively deployed in security-critical applications such as content censorship and authentication with facial biometrics, and readily available services are provided by cloud giants through APIs (e.g., Google Cloud Vision [3], Amazon Rekognition [1]). The detection systems have long been targeted by evasive attacks from cybercriminals, and it has resulted in an arm race between new attacks and more advanced defenses. Ensemble of different detection mechanisms. To overcome the weakness of deep learning in individual domain, real-world CV systems tend to employ an ensemble of different detection mechanisms to prevent evasions. As shown in Fig. 1, underground businesses embed promotional contents such as URLs into porn images with sexual content for illicit online advertising or phishing. 
A detection system combines Optical Character Recognition (OCR) and image-based explicit content detection can thus drop posted images containing either suspicious URLs or sexual content to mitigate evasion attacks. Similarly, a face recognition model that is known to be fragile [22] is usually protected by a liveness detector to defeat spoofed digital images when deployed for authentications. Such ensemble mechanisms are widely adopted in real-world CV deployment. To evade detections with uncertain mechanisms, attackers turn to generate adversarial examples that transfer across CV tasks. Many adversarial techniques on enhancing transferability have been proposed [26, 25, 16, 10]. However, most of them are designed for image classification tasks, and rely on task-specific loss function (e.g., cross-entropy loss), which limits their effectiveness when transferred to other CV tasks. In this paper, we propose a simple yet effective approach to generate adversarial examples that transfer across a broad class of CV tasks, including classification, object detection, explicit content detection and OCR. Our approach called _Dispersion Reduction_ (DR) as shown in Fig. 2, is inspired by the impact of “contrast” on an image’s perceptibility. As lowering the contrast of an image would make the objects indistinguishable, we presume that reducing the “contrast” of internal feature map would also degrade the recognizability of the subjects in the image, and thus could evade CV-based detections. We use _dispersion_ as a measure of “contrast” in feature space, which describes how scattered a set of data is. We empirically validate the impact of dispersion on model predictions, and find that reducing the dispersion of internal feature map would largely affect the activation of subsequent layers. Based on another observation that lower layers detect simple features [15], we hypothesis that the low level features extracted by early convolution layers share many similarities across CV models. Thus the distortions caused by dispersion reduction in feature space, are ideally suited to fool any CV models, whether designed for classification, object detection, OCR, or other vision tasks. We evaluate our proposed DR attack on both popular open source models and commercially deployed detection models. The results on four Google Cloud Vision APIs: classification, object detection, SafeSearch, and OCR (see §4) show that our attack causes larger drops on the model performance than state- of-the-art attacks ( MI-FGSM [10] and DIM [25]) by a big margin of 11% on average across different tasks. We hope that our finding to raise alarms for real-world CV deployment in security-critical applications, and our attacks to be used as benchmarks to evaluate the robustness of DNN-based detection mechanisms. Code is available at: https://github.com/jiayunhan/dispersion_reduction. Figure 2: DR attack targets on the dispersion of feature map at specific layer of feature extractors. The adversarial example generated by minimizing dispersion at conv3.3 of VGG-16 model also distorts feature space of subsequent layers (e.g., conv5.3), and its effectiveness transfers to commercially deployed GCV APIs. ## 2 Background & Related Work ### 2.1 Transferability of Adversarial Examples Since the seminal finding of Szegedy _et al_. [24], the transferability of adversarial examples between different models trained over same or disjoint datasets have been discovered. Followed by Goodfellow _et al_. 
[11], this phenomenon was attributed to adversarial perturbations being highly aligned with the weight vectors of the model. More recently, Papernot _et al_. [21] investigated attacks against black-box models by training substitute models. They also demonstrated attacks against machine learning services hosted by Amazon and Google. Our work differs from Papernot _et al_. [21] in two main aspects. First, the GCV APIs we attack in this work are not the same as the Cloud Prediction API [2] (now the Google Cloud Machine Learning Service) attacked in Papernot _et al_. [21]. Both systems are black-box, but the Prediction API is intended to be trained on a user's own data, while the GCV APIs are trained on Google's data and are provided “out-of-the-box”. Second, we study transferability over black-box commercial models assuming no feedback on testing samples. Our proposed DR attack does not query the systems to construct a substitute model [21, 20] or to run score- or decision-based attacks [8, 12, 19, 23], and, as Liu _et al_. [16] demonstrated, it is more difficult to transfer adversarial examples to commercial models that are trained on large datasets and are potentially ensembles.

### 2.2 Adversarial Attacks

Several methods have been proposed recently to find AEs and improve their transferability. A single-step attack, called the fast gradient sign method (FGSM), was proposed by Goodfellow _et al_. [11]. In a follow-up work, Kurakin _et al_. [13] proposed a multi-step attack, called the iterative fast gradient sign method (I-FGSM), that iteratively searches the loss surface. Generally, iterative attacks achieve higher success rates than single-step attacks in the white-box setting, but perform worse when transferred to other models [25]. Fueled by the NIPS 2017 adversary competition [14], several adversarial techniques that enhance transferability have been introduced; we give an overview of the most notable ones below.

MI-FGSM. The Momentum Iterative Fast Gradient Sign Method (MI-FGSM) proposed by Dong _et al_. [10] integrates a momentum term into the attack process to stabilize update directions and escape poor local maxima. The update procedure is as follows: $\displaystyle x^{\prime}_{t+1}=x^{\prime}_{t}+\alpha\cdot sign(g_{t+1})$ (1) $\displaystyle g_{t+1}=\mu\cdot g_{t}+\frac{\bigtriangledown_{x}J(x^{\prime}_{t},y)}{\parallel\bigtriangledown_{x}J(x^{\prime}_{t},y)\parallel_{1}}$ The strength of MI-FGSM can be controlled by the momentum and the number of iterations.

DIM. The Momentum Diverse Inputs Fast Gradient Sign Method (DIM) combines momentum and an input-diversity strategy to enhance transferability [25]. DIM applies an image transformation $T(\cdot)$ to the inputs with a probability $p$ at each iteration of iterative FGSM to alleviate the overfitting phenomenon. The updating procedure is similar to MI-FGSM, with Eq. 1 replaced by: $\displaystyle x^{\prime}_{t+1}=Clip^{\epsilon}_{x}\\{x^{\prime}_{t}+\alpha\cdot sign(\bigtriangledown_{x}L(T(x^{\prime}_{t};p),y^{true}))\\}$ (2) where $T(x^{\prime}_{t};p)$ is a stochastic transformation function that diversifies the input with probability $p$.

The major difference between dispersion reduction (DR) and existing attacks is that DR does not require a task-specific loss function (e.g., the cross-entropy loss used by the family of FGSM attacks). It targets a numerical property of low-level features that is task-independent and presumably similar across CV models.
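As a concrete reference point for this comparison, the MI-FGSM update of Eq. 1 can be sketched in a few lines of PyTorch. This is a hedged illustration rather than any released implementation: `model`, `x`, and `y` stand for a generic classifier, an input batch scaled to $[0,1]$, and its ground-truth labels, and the step size $\alpha=\epsilon/T$ is one common choice.

```python
import torch
import torch.nn.functional as F

def mi_fgsm(model, x, y, eps=16/255, steps=10, mu=1.0):
    """Momentum Iterative FGSM (Eq. 1): accumulate an L1-normalized gradient of the
    task-specific loss J with momentum mu, then take sign steps of size eps/steps."""
    alpha = eps / steps
    g = torch.zeros_like(x)
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)          # task-specific loss J(x', y)
        grad, = torch.autograd.grad(loss, x_adv)
        g = mu * g + grad / grad.abs().sum(dim=(1, 2, 3), keepdim=True)
        x_adv = (x_adv + alpha * g.sign()).detach()      # gradient-sign ascent step
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)  # l_inf projection
    return x_adv
```

DIM differs only in that a stochastic resize-and-pad transform $T(\cdot;p)$ is applied to `x_adv` with probability $p$ before the forward pass; both attacks still require the label `y` and a task-specific loss, which is precisely the dependence that DR removes.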
Our evaluation in §4 demonstrate good transferability of adversarial examples generated by DR across real-world CV tasks. ## 3 Methodology Algorithm 1 Dispersion reduction attack Input: A classifier $f$, original sample $x$, feature map at layer $k$; perturbation budget $\epsilon$ Input: Attack iterations $T$, learning rate $\ell$. Output: An adversarial example $x^{\prime}$ with ${\parallel x^{\prime}-x\parallel}_{\infty}\leqslant\epsilon$ 1:procedure Despersion reduction 2: $x^{\prime}_{0}\leftarrow x$ 3: for $t=0$ to $T-1$ do 4: Forward $x^{\prime}_{t}$ and obtain feature map at layer $k$: $\mathcal{F}_{k}=f(x^{\prime}_{t})|_{k}$ (3) 5: Compute standard deviation of $\mathcal{F}_{k}$: $g(\mathcal{F}_{k})$ 6: Compute its gradient $w.r.t$ the input: $\bigtriangledown_{x}g(\mathcal{F}_{k})$ 7: Update $x^{\prime}_{t}$ by applying Adam optimization: $x^{\prime}_{t}=x^{\prime}_{t}-Adam(\bigtriangledown_{x}g(\mathcal{F}_{k}),\ell)$ (4) 8: Project $x^{\prime}_{t}$ to the vicinity of $x$: $x^{\prime}_{t}=clip(x^{\prime}_{t},x-\epsilon,x+\epsilon)$ (5) 9: return $x^{\prime}_{t}$ Existing attacks perturb input images along gradient directions $\bigtriangledown_{x}J$ that depend on the ground-truth label $y$ and the definition of the task-specific loss function $J$, which limits their cross- task transferability. We propose _dispersion reduction_ (DR) attack that formally define the problem of finding an AE as an optimization problem: $\displaystyle\min_{x‘}g(f(x^{\prime},\theta))$ (6) $\displaystyle s.t.$ $\displaystyle{\parallel x^{\prime}-x\parallel}_{\infty}\leqslant\epsilon$ where $f(\cdot)$ is a DNN classifier with output of intermediate feature map and $g(\cdot)$ calculates the dispersion. Our proposed DR attack in Algorithm 1 takes a multi-step approach that creates an adversarial example by iteratively reducing the dispersion of intermediate feature map at layer $k$. Dispersion describes the extent to which a distribution is stretched or squeezed, and there can be different measures of dispersion such as the variance, standard deviation, and gini coefficient [18]. In this work, we choose standard deviation as the dispersion metric and denote it as $g(\cdot)$ due to its simplicity. Given a target feature map, DR applies Adam optimizer to iteratively perturb image $x^{\prime}$ along the direction of reducing standard deviation, and projects it to the vicinity of $x$ by clipping at $x\pm\epsilon$. Denoting the feature map at layer $k$ as $\mathcal{F}_{k}=f(x^{\prime}_{t})|_{k}$, DR attack solves the following formula: $\displaystyle x^{\prime}_{t+1}$ $\displaystyle=x^{\prime}_{t}-\bigtriangledown_{x^{\prime}}{g(\mathcal{F}_{k})}$ (7) $\displaystyle=x^{\prime}_{t}-\frac{dg(t)}{dt}\cdot\frac{df(x^{\prime}_{t})|_{k}}{dx^{\prime}}$ $\displaystyle=x^{\prime}_{t}-\frac{t-\bar{t}}{\sqrt{N-1}\cdot\sqrt{\sum_{i}{(t_{i}-\bar{t})}^{2}}}\cdot\frac{df(x^{\prime}_{t})|_{k}}{dx^{\prime}}$ | Target Layer | Classification - acc. | Detection - mAP(IoU=0.5) ---|---|---|--- Inception-v3 | DenseNet | RetinaNet | YOLOv3 VGG-16 | conv1.2 (shallow) | 52.5% | 29.3% | 31.8 | 42.3 conv3.3 (mid) | 28.7% | 34.6% | 18.3 | 33.8 conv5.1 (deep) | 35.5% | 44.8% | 34.0 | 41.5 Resnet-152 | conv1 (shallow) | 53.7% | 63.1% | 28.3 | 57.3 conv3.8.3 (mid) | 25.8% | 34.7% | 29.5 | 41.6 conv5.3.3 (deep) | 28.4% | 41.5% | 20.5 | 38.5 Table 1: The performance of classification and object detection models (columns) when attacked by adversarial examples generated on VGG-16 and Resnet-152. 
The profiling result suggests that AEs generated by targeting middle layers degrade performance of both classification and detection models by a larger margin. From Eq.7, we state that given the targeted intermediate feature map, the optimized adversarial example $x^{\prime}_{t}$ is achieved when all feature map elements $t_{j}\in t$ have the same value. Table 2 compares the transferability of AEs generate on different layers (shallow to deep) of off- the-shelf feature extractors across different classification and object detection models. The result on 1000 randomly chosen samples from ImageNet validation set shows that targeting on middle layers, i.e. conv3.3 of VGG-16 and conv3.8.3 of Resnet-152 provides better transferability. ## 4 Experiments In this section, we compare DR with state-of-the-art adversarial techniques to enhance transferability on commercially deployed Google Cloud Vision (GCV) tasks: * • Image Label Detection (Labels) 111https://cloud.google.com/vision/docs/detecting-labels classifies image into broad sets of categories. * • Object Detection (Objects) 222https://cloud.google.com/vision/docs/detecting- text detects multiple objects with their labels and bounding boxes in an image. * • Image Texts Recognition (Texts) 333https://cloud.google.com/vision/docs/detecting-safe-search detects and recognize text within an image, which returns their bounding boxes and transcript texts. * • Explicit Content Detection (SafeSearch) 444https://cloud.google.com/vision/docs/detecting-objects detects explicit content such as adult or violent content within an image, and returns the likelihood. Figure 3: Visualization of images chosen from testing set and their corresponding AEs generated by DR. All the AEs are generated on VGG-16 conv3.3 layer, with perturbations clipped by $l_{\infty}\leq 16$, and they effectively fool the four GCV APIs as indicated by their outputs. Datasets. We use ImageNet validation set for testing Labels and Objects, and the NSFW Data Scraper [7] and COCO-Text [4] dataset for evaluating against SafeSearch and Texts respectively. We randomly choose 100 images from each dataset for our evaluation, and Fig. 3 shows sample images in our testing set. [b] Model Attack Labels Objects SafeSearch Texts acc. mAP(IoU=0.5) acc. AP(IoU=0.5) C.R.W2 baseline (SOTA)1 82.5% 73.2 100% 69.2 76.1% VGG-16 MI-FGSM 41% 42.6 62% 38.2 15.9% DIM 39% 36.5 57% 29.9 16.1% DR (Ours) 23% 32.9 35% 20.9 4.1% Resnet-152 MI-FGSM 37% 41.0 61% 40.4 17.4% DIM 49% 46.7 60% 34.2 15.1% DR (Ours) 25% 33.3 31% 34.6 9.5% * 1 The baseline performance of GCV models cannot be measured due to the mismatch between original labels and labels used by Google. We use the GCV prediction results on original images as ground truth, thus the baseline performance should be 100% for all accuracy and 100.0 for mAP and AP. Here we provide state-of-the-art performance [5, 6, 4, 7] for reference. * 2 Correctly recognized words (C.R.W) [4]. Table 2: The degraded performance of four Google Cloud Vision models, where we attack a single model from the left column. Our proposed DR attack degrades the accuracy of Lables and SafeSearch to 23% and 35%, the mAP of Objects and Texts to 32.9 and 20.9, the word recognition accuracy of Texts to only 4.1%, which outperform existing attacks. Experiment setup. We choose normally trained VGG-16 and Resnet-152 as our target models, from which the AEs are generated, as Resnet-152 is commonly used by MI-FGSM and DIM for generation [25, 10]. 
As DR attack targets on specific layer, we choose conv3.3 for VGG-16 and conv3.8.3 for Resnet-152 as per the profiling result in Table 1. Attack parameters. We follow the default settings in [10] with the momentum decay factor $\mu=1$ when implementing the MI-FGSM attack. For the DIM attack, we set probability $p=0.5$ for the stochastic transformation function $T(x;p)$ as used in [25], and use the same decay factor $\mu=1$ and total iteration number $N=20$ as in the vanilla MI-FGSM. For our proposed DR attack, we don’t rely on FGSM method, and instead we use Adam optimizer ($\beta_{1}=0.98$, $\beta_{2}=0.99$) with learning rate $5e^{-2}$ to reduce the dispersion of target feature map. The maximum perturbation of all attacks in the experiment are limited by clipping at $l_{\infty}=16$, which is still considered less perceptible for human observers [17]. Evaluation metrics. We perform adversarial attacks only on single network and test them on the four black-box GCV models. The effectiveness of attacks are measured by the model performance under attacks. As the labels from original datasets are different from labels used by GCV, we use the prediction results of GCV APIs on the original data as the ground truth, which gives a baseline performance of 100% accuracy or 100.0 mAP and AP respectively. We also provide state-of-the-art results on each CV tasks as references (Table 2). Figure 3 shows example of each GCV model’s output for original and adversarial examples. The performance of Labels and SafeSearch are measured by the accuracy of classifications. More specifically, we use _top1_ accuracy for Labels, and use the accuracy for detecting our given porn images as LIKELY or VERY_LIKELY being adult for SafeSearch. The performance of Objects is given by the mean average precision (mAP) at $IoU=0.5$. For Texts, we follow the bi-fold evaluation method of ICDAR 2017 Challenge [4]. We measure text localization accuracy using average precision (AP) of bounding boxes at $IoU=0.5$, and evaluate the word recognition accuracy with correctly recognized words (C.R.W) that are case insensitive. Results. As shown in Table 2, DR outperforms other baseline attacks by degrading the target model performance by a larger margin. For example, the adversarial examples crafted by DR on VGG-16 model brings down the accuracy of Labels to only 23%, and SafeSearch to 35%. Adversarial examples created with the same technique also degrade mAP of Objects to 32.9% and AP of text localization to 20.9%, and with barely 4.1% accuracy in recognizing words. Strong baselines like MI-FGSM and DIM on the other hand, only obtains 38% and 43% success rate when attacking SafeSearch, and are less effective compared with DR when attacking all other GCV models. The results demonstrates the better cross-task transferability of dispersion reduction attack. When comparing the effectiveness of attacks on different generation models, the results that DR generates adversarial examples that transfer better across these four commercial APIs still hold. The visualization in Fig. 3 shows that the perturbed images with $l_{\infty}\leq 16$ well maintain their visual similarities with original images, but fools real-world computer vision systems. ## 5 Discussion & Conclusion One intuition behind DR attack is that by minimizing the dispersion of feature maps, we are making images “featureless”, as few features can be detected, if neuron activations are suppressed by perturbing the input (Fig. 2). 
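To make this intuition concrete, Algorithm 1 can be written down in a few lines of PyTorch. The sketch below is illustrative rather than the released code: it assumes torchvision's pretrained VGG-16, treats `features[:16]` as a rough stand-in for the conv3.3 activation, takes inputs in $[0,1]$, and omits ImageNet normalization for brevity.

```python
import torch
from torchvision import models

# Truncated VGG-16 whose output approximates the conv3.3 feature map (the index is an assumption).
extractor = models.vgg16(pretrained=True).features[:16].eval()
for p in extractor.parameters():
    p.requires_grad_(False)

def dispersion_reduction(x, eps=16/255, steps=100, lr=5e-2):
    """Algorithm 1: minimize the standard deviation (dispersion) of the target
    feature map with Adam, then project back into the l_inf ball around x."""
    x_adv = x.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([x_adv], lr=lr, betas=(0.98, 0.99))
    for _ in range(steps):
        dispersion = extractor(x_adv).std()   # g(F_k): std of the intermediate feature map
        opt.zero_grad()
        dispersion.backward()
        opt.step()
        with torch.no_grad():                 # clip to x +/- eps and to the valid pixel range
            x_adv.copy_(torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1))
    return x_adv.detach()
```

Because no label or task-specific loss appears anywhere in the loop, the same perturbation can be reused against classification, detection, OCR, and explicit-content models alike.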
Further, with the observation that low level features bear more similarities across CV models, we hypothesis that DR attack would produce transferable adversarial examples when targeted on intermediate convolution layers. Evaluation on four different CV tasks shows that this enhanced attack greatly degrades model performance, and thus would facilitate evasion attacks against even an ensemble of CV-based detection mechanisms. We hope that our proposed attack can serve as benchmark for evaluating robustness of future defense. ## References * [1] Amazon Rekognition. Link. * [2] Google Cloud Machine Learning Engine. Link. * [3] Google Cloud Vision. Link. * [4] ICDAR2017 Robust reading challenge on COCO-Text. Link. * [5] ImageNet Challenge 2017. Link. * [6] Keras Applications. Link. * [7] NSFW Data Scraper. Link. * [8] Wieland Brendel, Jonas Rauber, and Matthias Bethge. Decision-based adversarial attacks: Reliable attacks against black-box machine learning models. arXiv preprint arXiv:1712.04248, 2017. * [9] Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pages 39–57. IEEE, 2017. * [10] Yinpeng Dong, Fangzhou Liao, Tianyu Pang, Hang Su, Jun Zhu, Xiaolin Hu, and Jianguo Li. Boosting adversarial attacks with momentum. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9185–9193, 2018. * [11] Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014. * [12] Andrew Ilyas, Logan Engstrom, Anish Athalye, and Jessy Lin. Black-box adversarial attacks with limited queries and information. arXiv preprint arXiv:1804.08598, 2018. * [13] Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial machine learning at scale. arXiv preprint arXiv:1611.01236, 2016. * [14] Alexey Kurakin, Ian Goodfellow, Samy Bengio, Yinpeng Dong, Fangzhou Liao, Ming Liang, Tianyu Pang, Jun Zhu, Xiaolin Hu, Cihang Xie, et al. Adversarial attacks and defences competition. In The NIPS’17 Competition: Building Intelligent Systems, pages 195–231. Springer, 2018. * [15] Honglak Lee, Roger Grosse, Rajesh Ranganath, and Andrew Y Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In Proceedings of the 26th annual international conference on machine learning, pages 609–616. ACM, 2009. * [16] Yanpei Liu, Xinyun Chen, Chang Liu, and Dawn Song. Delving into transferable adversarial examples and black-box attacks. arXiv preprint arXiv:1611.02770, 2016. * [17] Yan Luo, Xavier Boix, Gemma Roig, Tomaso Poggio, and Qi Zhao. Foveation-based mechanisms alleviate adversarial examples. arXiv preprint arXiv:1511.06292, 2015. * [18] Chris A Mack. NIST,SEMATECH e-Handbook of Statistical Methods. 2007\. * [19] Nina Narodytska and Shiva Prasad Kasiviswanathan. Simple black-box adversarial perturbations for deep networks. arXiv preprint arXiv:1612.06299, 2016. * [20] Nicolas Papernot, Patrick McDaniel, and Ian Goodfellow. Transferability in machine learning: from phenomena to black-box attacks using adversarial samples. arXiv preprint arXiv:1605.07277, 2016. * [21] Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z Berkay Celik, and Ananthram Swami. Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia conference on computer and communications security, pages 506–519. ACM, 2017. 
* [22] Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, and Michael K Reiter. Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pages 1528–1540. ACM, 2016. * [23] Jiawei Su, Danilo Vasconcellos Vargas, and Kouichi Sakurai. One pixel attack for fooling deep neural networks. IEEE Transactions on Evolutionary Computation, 2019. * [24] Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013. * [25] Cihang Xie, Zhishuai Zhang, Jianyu Wang, Yuyin Zhou, Zhou Ren, and Alan Yuille. Improving transferability of adversarial examples with input diversity. arXiv preprint arXiv:1803.06978, 2018. * [26] Wen Zhou, Xin Hou, Yongjun Chen, Mengyun Tang, Xiangqi Huang, Xiang Gan, and Yong Yang. Transferable adversarial perturbations. In Proceedings of the European Conference on Computer Vision (ECCV), pages 452–467, 2018.
# Perception Prioritized Training of Diffusion Models Jooyoung Choi1 Jungbeom Lee1 Chaehun Shin1 Sungwon Kim1 Hyunwoo Kim3 Sungroh Yoon1,2,∗ 1 Data Science and AI Laboratory, Seoul National University 2 AIIS, ASRI, INMC, ISRC, and Interdisciplinary Program in AI, Seoul National University 3 LG AI Research ###### Abstract ††$*$Correspondence to: Sungroh Yoon<EMAIL_ADDRESS> Diffusion models learn to restore noisy data, which is corrupted with different levels of noise, by optimizing the weighted sum of the corresponding loss terms, i.e., denoising score matching loss. In this paper, we show that restoring data corrupted with certain noise levels offers a proper pretext task for the model to learn rich visual concepts. We propose to prioritize such noise levels over other levels during training, by redesigning the weighting scheme of the objective function. We show that our simple redesign of the weighting scheme significantly improves the performance of diffusion models regardless of the datasets, architectures, and sampling strategies. ## 1 Introduction Diffusion models [39, 14], a recent family of generative models, have achieved remarkable image generation performance. Diffusion models have been rapidly studied, as they offer several desirable properties for image synthesis, including stable training, easy model scaling, and good distribution coverage [29]. Starting from Ho et al. [14], recent works [29, 8, 43] have shown that the diffusion models can render high-fidelity images comparable to those generated by generative adversarial networks (GANs) [12], especially in class- conditional settings, by relying on additional efforts such as classifier guidance [8] and cascaded models [35]. However, the unconditional generation of single models still has considerable room for improvement, and performance has not been explored for various high-resolution datasets (e.g., FFHQ [20], MetFaces [18]) where other families of generative models [20, 44, 22, 11, 3] mainly compete. Starting from tractable noise distribution, a diffusion model generates images by progressively removing noise. To achieve this, a model learns the reverse of the predefined diffusion process, which sequentially corrupts the contents of an image with various levels of noise. A model is trained by optimizing the sum of denoising score matching losses [46] for various noise levels [42], which aims to learn the recovery of clean images from corrupted images. Instead of using a simple sum of losses, Ho et al. [14] observed that their empirically obtained weighted sum of losses was more beneficial to sample quality. Their weighted objective is the current de facto standard objective for training diffusion models [29, 8, 25, 35, 43]. However, surprisingly, it remains unknown why this performs well or whether it is optimal for sample quality. To the best of our knowledge, the design of a better weighting scheme to achieve better sample quality has not yet been explored. Given the success of diffusion models with the standard weighted objective, we aim to amplify this benefit by exploring a more appropriate weighting scheme for the objective function. However, designing a weighting scheme is difficult owing to two factors. First, there are thousands of noise levels; therefore, an exhaustive grid search is impossible. Second, it is not clear what information the model learns at each noise level during training, therefore hard to determine the priority of each level. 
In this paper, we first investigate what a diffusion model learns at each noise level. Our key intuition is that the diffusion model learns rich visual concepts by solving pretext tasks for each level, which is to recover the image from corrupted images. At the noise level where the images are slightly corrupted, images are already available for perceptually rich content and thus, recovering images does not require prior knowledge of image contexts. For example, the model can recover noisy pixels from neighboring clean pixels. Therefore, the model learns imperceptible details, rather than high-level contexts. In contrast, when images are highly corrupted so that the contents are unrecognizable, the model learns perceptually recognizable contents to solve the given pretext task. Our observations motivate us to propose P2 (perception prioritized) weighting, which aims to prioritize solving the pretext task of more important noise levels. We assign higher weights to the loss at levels where the model learns perceptually rich contents while minimal weights to which the model learns imperceptible details. To validate the effectiveness of the proposed P2 weighting, we first compare diffusion models trained with previous standard weighting scheme and P2 weighting on various datasets. Models trained with our objective are consistently superior to the previous standard objective by large margins. Moreover, we show that diffusion models trained with our objective achieve state-of-the-art performance on CelebA-HQ [17] and Oxford-flowers [30] datasets, and comparable performance on FFHQ [20] among various types of generative models, including generative adversarial networks (GANs) [12]. We further analyze whether P2 weighting is effective to various model configurations and sampling steps. Our main contributions are as follows: * • We introduce a simple and effective weighting scheme of training objectives to encourage the model to learn rich visual concepts. * • We investigate how the diffusion models learn visual concepts from each noise level. * • We show consistent improvement of diffusion models across various datasets, model configurations, and sampling steps. ## 2 Background ### 2.1 Definitions Diffusion models [39, 14] transform complex data distribution $p_{data}(x)$ into simple noise distribution $\mathcal{N}(0,\mathbf{I})$ and learn to recover data from noise. The diffusion process of diffusion models gradually corrupts data $x_{0}$ with predefined noise scales $0<\beta_{1},\beta_{2},...,\beta_{T}<1$, indexed by time step $t$. Corrupted data $x_{1},...,x_{T}$ are sampled from data $x_{0}\sim p_{data}(x)$, with a diffusion process, which is defined as Gaussian transition: $q(x_{t}|x_{t-1})=\mathcal{N}(x_{t};\sqrt{1-\beta_{t}}x_{t-1},\beta_{t}\mathbf{I}).$ (1) Noisy data $x_{t}$ can be sampled from $x_{0}$ directly: $x_{t}=\sqrt{\alpha_{t}}x_{0}+\sqrt{1-\alpha_{t}}\epsilon,$ (2) where $\epsilon\sim\mathcal{N}(0,\mathbf{I})$ and $\alpha_{t}:=\prod_{s=1}^{t}(1-\beta_{s})$. We note that data $x_{0}$, noisy data $x_{1},...,x_{T}$, and noise $\epsilon$ are of the same dimensionality. To ensure $p(x_{T})\sim\mathcal{N}(0,\mathbf{I})$ and the reversibility of the diffusion process [39], one should set $\beta_{t}$ to be small and $\alpha_{T}$ to be near zero. To this end, Ho et al. [14] and Dhariwal et al. [8] use a linear noise schedule where $\beta_{t}$ increases linearly from $\beta_{1}$ to $\beta_{T}$. Nichol et al. [29] use a cosine schedule where $\alpha_{t}$ resembles the cosine function. 
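Both schedules, together with the one-shot sampling of Eq. 2, can be written out directly. The snippet below is a schematic sketch; the linear range $10^{-4}$ to $0.02$ and the cosine offset $s=0.008$ are the values commonly used in the literature rather than quantities quoted from this paper, and the $\beta_{t}$ clipping of the cosine schedule is omitted.

```python
import math
import torch

T = 1000

# Linear schedule: beta_t increases linearly from beta_1 to beta_T.
betas = torch.linspace(1e-4, 0.02, T)
alphas_linear = torch.cumprod(1.0 - betas, dim=0)        # alpha_t = prod_s (1 - beta_s)

# Cosine schedule: alpha_t follows a squared cosine.
s = 0.008
grid = torch.arange(T + 1) / T
f = torch.cos((grid + s) / (1 + s) * math.pi / 2) ** 2
alphas_cosine = f[1:] / f[0]

def q_sample(x0, t, alphas):
    """Draw x_t ~ q(x_t | x_0) in one shot via Eq. 2."""
    a = alphas[t].view(-1, 1, 1, 1)
    noise = torch.randn_like(x0)
    return a.sqrt() * x0 + (1.0 - a).sqrt() * noise, noise
```

Either `alphas_linear` or `alphas_cosine` can be passed to `q_sample`; the choice matters because, as discussed below, the weighting of the training objective is expressed through the same quantities.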
Diffusion models generate data $x_{0}$ with the learned denoising process $p_{\theta}(x_{t-1}|x_{t})$ which reverses the diffusion process of Eq. 1. Starting from noise $x_{T}\sim\mathcal{N}(0,\mathbf{I})$, we iteratively subtract the noise predicted by noise predictor $\epsilon_{\theta}$: $x_{t-1}=\frac{1}{\sqrt{1-\beta_{t}}}(x_{t}-\frac{\beta_{t}}{\sqrt{1-\alpha_{t}}}\epsilon_{\theta}(x_{t},t))+\sigma_{t}z,$ (3) where $\sigma_{t}^{2}$ is a variance of the denoising process and $z\sim\mathcal{N}(0,\mathbf{I})$. Ho et al. [14] used $\beta_{t}$ as $\sigma_{t}^{2}$. Recent work Kingma et al. [23] simplified the noise schedules of diffusion models in terms of signal-to-noise ratio (SNR). SNR of corrupted data $x_{t}$ is a ratio of squares of mean and variance from Eq. 2, which can be written as: $\text{SNR}(t)=\alpha_{t}/(1-\alpha_{t}),$ (4) and thus the variance of noisy data $x_{t}$ can be written in terms of SNR: $\alpha_{t}=1-1/(1+\text{SNR}(t))$. We would like to note that SNR($t$) is a monotonically decreasing function. Figure 1: Information removal of a diffusion process. (Left) Perceptual distance of corrupted images as a function of signal-to-noise ratio (SNR). Distances are measured between two noisy images either corrupted from the same image (blue) or different images (orange). We averaged distances measured with 200 random triplets from CelebA-HQ. Perceptually recognizable contents are removed when SNR magnitude is between $10^{-2}$ and $10^{0}$. (Right) Illustration of the diffusion process. ### 2.2 Training Objectives The diffusion model is a type of variational auto-encoder (VAE); where the encoder is defined as a fixed diffusion process rather than a learnable neural network, and the decoder is defined as a learnable denoising process that generates data. Similar to VAE, we can train diffusion models by optimizing a variational lower bound (VLB), which is a sum of denoising score matching losses [46]: $L_{vlb}=\sum_{t}L_{t}$, where weights for each loss term are uniform. For each step $t$, denoising score matching loss $L_{t}$ is a distance between two Gaussian distributions, which can be rewritten in terms of noise predictor $\epsilon_{\theta}$ as: $\displaystyle L_{t}=~{}$ $\displaystyle D_{KL}(q(x_{t-1}|x_{t},x_{0})~{}||~{}p_{\theta}(x_{t-1}|x_{t}))$ $\displaystyle=~{}$ $\displaystyle\mathbb{E}_{x_{0},\epsilon}[\frac{\beta_{t}}{(1-\beta_{t})(1-\alpha_{t})}||\epsilon-\epsilon_{\theta}(x_{t},t)||^{2}].$ (5) Intuitively, we train a neural network $\epsilon_{\theta}$ to predict the noise $\epsilon$ added in noisy image $x_{t}$ for given time step $t$. Ho et al. [14] empirically observed that the following simplified objective is more beneficial to sample quality: $L_{simple}=\sum_{t}\mathbb{E}_{x_{0},\epsilon}[||\epsilon-\epsilon_{\theta}(x_{t},t)||^{2}].$ (6) In terms of VLB, their objective is $L_{simple}=\sum_{t}\lambda_{t}L_{t}$ with weighting scheme $\lambda_{t}=(1-\beta_{t})(1-\alpha_{t})/\beta_{t}$. In a continuous-time setting, this scheme can be expressed in terms of SNR: $\lambda_{t}=-1/\text{log- SNR}^{\prime}(t)=-\text{SNR}(t)/\text{SNR}^{\prime}(t),$ (7) where $\text{SNR}^{\prime}(t)=\frac{d\text{SNR}(t)}{dt}$. See appendix for derivations. While Ho et al. [14] use fixed values for the variance $\sigma_{t}$, Nichol et al. [29] propose to learn it with hybrid objective $L_{hybird}=L_{simple}+cL_{vlb}$, where $c=1e^{-3}$. They observed that learning $\sigma_{t}$ enables reducing sampling steps while maintaining the generation performance. 
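Generation itself is the iterative application of Eq. 3, starting from pure noise. A schematic sketch of this ancestral sampling loop, assuming `eps_model` is a trained noise predictor and using the fixed variance $\sigma_{t}^{2}=\beta_{t}$ of Ho et al. [14]:

```python
import torch

@torch.no_grad()
def p_sample_loop(eps_model, shape, betas):
    """Ancestral sampling: start from x_T ~ N(0, I) and repeatedly apply Eq. 3."""
    alphas = torch.cumprod(1.0 - betas, dim=0)
    x = torch.randn(shape)
    for t in range(len(betas) - 1, -1, -1):
        beta_t, a_t = betas[t], alphas[t]
        t_batch = torch.full((shape[0],), t, dtype=torch.long)
        eps = eps_model(x, t_batch)
        mean = (x - beta_t / (1.0 - a_t).sqrt() * eps) / (1.0 - beta_t).sqrt()
        z = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + beta_t.sqrt() * z           # sigma_t^2 = beta_t
    return x
```

With the learned $\sigma_{t}$ of Nichol et al. [29], only the final noise-scale line of this sketch would change.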
We inherit their hybrid objective for efficient sampling and modify $L_{simple}$ to improve performance. ### 2.3 Evaluation Metrics We use FID [13] and KID [2] for quantitative evaluations. FID is well-known to be analogous to human perception [13] and well-used as a default metric [18, 8, 20, 11, 32] for measuring generation performances. KID is a well-used metric to measure performance on small datasets [18, 19, 32]. However, since both metrics are sensitive to the preprocessing [32], we use a correctly implemented library [32]. We compute FID and KID between the generated samples and the entire training set. We measured final scores with 50k samples and conducted ablation studies with 10k samples for efficiency, following [8]. We denote them as FID-50k and FID-10k respectively. ## 3 Method We first investigate what the model learns at each diffusion step in Sec. 3.1. Then, we propose our weighting scheme in Sec. 3.2. We provide discussions on how our weighting scheme is effective in Sec. 3.3. Figure 2: Stochastic reconstruction. (Left) Illustration of reconstruction, where sample are obtained from full sampling chain. (Right) Reconstructions $\hat{x}_{0}$ with input images $x_{0}$ on the rightmost column and SNR of $x_{t}$ on the bottom. Samples in the 1st, 2nd columns share only the coarse attributes (e.g., global color structure) with the input. The 3rd, 4th columns share perceptually discriminative contents with the input. 5th column are almost identical to the input, suggesting that the model learns imperceptible details when SNRs are large. ### 3.1 Learning to Recover Signals from Noisy Data Diffusion models learn visual concepts by solving pretext task at each noise level, which is to recover signals from corrupted signals. More specifically, the model predicts the noise component $\epsilon$ of a noisy image $x_{t}$, where the time step $t$ is an index of the noise level. While the output of diffusion models is noise, other generative models (VAE, GAN) directly output images. Because noise does not contain any content or signals, it is difficult to understand how the noise predictions contribute to learning rich visual concepts. Such nature of diffusion models arises the following question: what information does the model learn at each step during training? Investigating diffusion process. We first investigate the predefined diffusion process to explore what the model can learn from each noise level. Let say we have two different clean images $x_{0}$, $x^{\prime}_{0}$ and three noisy images $x_{tA},~{}x_{tB}\sim q(x_{t}|x_{0})$, $x^{\prime}_{t}\sim q(x_{t}|x^{\prime}_{0})$, where $q$ is the diffusion process. In Fig. 1 (left), we measure perceptual distances (LPIPS [53]) in two cases: the distance between $x_{tA}$ and $x_{tB}$ (blue line), which share the same $x_{0}$, and the distance between $x_{tA}$ and $x^{\prime}_{t}$ (orange line), which were synthesized from different images $x_{0}$ and $x^{\prime}_{0}$. We present the distances of the two cases as functions of the signal-to-noise ratio (SNR) introduced in Eq. 4, which characterizes the noise level at each step. To briefly review, SNR decreases through the diffusion process, as shown in Fig. 1 (right), and increases through the denoising process. The early steps of the diffusion process have large SNRs, which indicates invisibly small noise; thus, noisy images $x_{t}$ retain a large amount of contents from the clean image $x_{0}$. 
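The measurement behind Fig. 1 (left) is straightforward to reproduce. A rough sketch, assuming the `lpips` package, images scaled to $[-1,1]$ with shape (1, 3, H, W), and a precomputed cumulative $\alpha_{t}$ schedule:

```python
import torch
import lpips

dist = lpips.LPIPS(net='alex')    # perceptual distance; expects inputs in [-1, 1]

def noisy(x0, a):
    """x_t ~ q(x_t | x_0) for a single alpha_t value (Eq. 2)."""
    return a.sqrt() * x0 + (1.0 - a).sqrt() * torch.randn_like(x0)

def fig1_curves(x0, x0_prime, alphas, ts):
    """LPIPS between two noisy copies of the same image (blue curve) and between
    noisy copies of two different images (orange curve), for each step in ts."""
    same, different = [], []
    for t in ts:
        a = alphas[t]
        x_a, x_b, x_p = noisy(x0, a), noisy(x0, a), noisy(x0_prime, a)
        same.append(dist(x_a, x_b).item())
        different.append(dist(x_a, x_p).item())
    return same, different
```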
Therefore, in the early steps, $x_{tA}$ and $x_{tB}$ are perceptually similar, while $x_{tA}$ and $x^{\prime}_{t}$ are perceptually different, as shown by the large SNR side in Fig. 1 (left). A model can recover signals without understanding holistic contexts, as perceptually rich signals are already prepared in the image. Thus the model will learn only imperceptible details by solving recovery tasks when SNR is large. In contrast, the late steps have small SNRs, indicating a sufficiently large noise to remove the contents of $x_{0}$. Therefore, distances of both cases start to converge to a constant value, as the noisy images become difficult to recognize the high-level contents. It is shown in the small SNR side in Fig. 1 (left). Here, a model needs prior knowledge to recover signals because the noisy images lack recognizable content. We argue that the model will learn perceptually rich contents by solving recovery tasks when SNR is small. Investigating a trained model. We would like to verify the aforementioned discussions with a trained model. Given an input image $x_{0}$, we first perturb it to $x_{t}$ using a diffusion process $q(x_{t}|x_{0})$ and reconstruct it with the learned denoising process $p_{\theta}(\hat{x}_{0}|x_{t})$, as illustrated in Fig. 2 (left). When $t$ is small, the reconstruction $\hat{x}_{0}$ will be highly similar to the input $x_{0}$ as the diffusion process removes a small amount of signals, while $\hat{x}_{0}$ will share less content with $x_{0}$ when $t$ is large. In Fig. 2 (right), we compare $x_{0}$ and $\hat{x}_{0}$ among various $t$ to show how each step contributes to the sample. Samples in the first two columns share only coarse features (e.g., global color scheme) with the input on the rightmost column, whereas samples in the third and fourth columns share perceptually discriminative contents. This suggests that the model learns coarse features when the SNR of step $t$ is smaller than $10^{-2}$ and the model learns the content when SNR is between $10^{-2}$ and $10^{0}$. When the SNR is larger than $10^{0}$ (fifth column), reconstructions are perceptually identical to the inputs, suggesting that the model learns imperceptible details that do not contribute to perceptually recognizable contents. Based on the above observations, we hypothesize that diffusion models learn coarse features (e.g., global color structure) at steps of small SNRs ($0\text{--}10^{-2}$), perceptually rich contents at medium SNRs ($10^{-2}\text{--}10^{0}$), and remove remaining noise at large SNRs ($10^{0}\text{--}10^{4}$). According to our hypothesis, we group noise levels into three stages, which we term coarse, content, and clean-up stages. ### 3.2 Perception Prioritized Weighting In the previous section, we explored what the diffusion model learns from each step in terms of SNR. We discussed that the model learns coarse features (e.g., global color structure), perceptually rich contents, and to clean up the remaining noise at three groups of noise levels. We pointed out that the model learns imperceptible details at the clean-up stage. In this section, we introduce Perception Prioritized (P2) weighting, a new weighting scheme for the training objective, which aims to prioritize learning from more important noise levels. We opt to assign minimal weights to the unnecessary clean-up stage thereby assigning relatively higher weights to the rest. In particular, we aim to emphasize training on the content stage to encourage the model to learn perceptually rich contexts. 
To this end, we construct the following weighting scheme: $\lambda^{\prime}_{t}=\frac{\lambda_{t}}{(k+\text{SNR}(t))^{\gamma}},$ (8) where $\lambda_{t}$ is the previous standard weighting scheme (Eq. 7) and $\gamma$ is a hyperparameter that controls the strength of down-weighting focus on learning imperceptible details. $k$ is a hyperparameter that prevents exploding weights for extremely small SNRs and determines sharpness of the weighting scheme. While multiple designs are possible, we show that even the simplest choice (P2) outperforms the standard scheme $\lambda_{t}$. Our method is applicable to existing diffusion models by replacing $\sum_{t}\lambda_{t}L_{t}$ with $\sum_{t}\lambda^{\prime}_{t}L_{t}$. In fact, our weighting scheme $\lambda^{\prime}_{t}$ is a generalization of the popularly used [29, 8, 25, 43] weighting scheme $\lambda_{t}$ of Ho et al. [14] (Eq. 7), where $\lambda^{\prime}_{t}$ arrives at $\lambda_{t}$ when $\gamma=0$. We refer to $\lambda_{t}$ as the baseline herein. Figure 3: Weighting schemes. (Left) Signal-to-noise ratio (SNR) of linear and cosine noise schedules for reference. (Middle) Weights of our P2 weighting and the baseline with a cosine schedule. (Right) Weights of P2 weighting and the baseline with a linear schedule. Compared to the baseline, P2 weighting suppresses weights for large SNRs, where the model learns imperceptible details. Figure 4: Comparison of FID-10k through training on FFHQ. P2 weighting consistently improves performance for both linear and cosine schedules. Training progress refers to the number of images seen by the model. Samples are generated with 250 steps. Figure 5: Samples generated by our models trained on several datasets (FFHQ, CelebA-HQ, MetFaces, AFHQ-Dogs, Oxford Flowers, CUB Bird) at 256$\times$256 resolution. See appendix for more samples. ### 3.3 Effectiveness of P2 Weighting Prior works [14, 29] empirically suggest that the baseline objective $\sum_{t}\lambda_{t}L_{t}$ offers a better inductive bias for sample quality than the VLB objective $\sum_{t}L_{t}$, which does not impose any inductive bias during training. Fig. 3 exhibits $\lambda^{\prime}_{t}$ and $\lambda_{t}$ for both linear [14] and cosine [29] noise schedules, which are explained in Sec. 2.1, indicating that both weighting schemes focus training on the content stage the most and the cleaning stage the least. The success of the baseline weighting is in line with our previous hypothesis that models learn perceptually rich content by solving pretext tasks at the content stage. However, despite the success of the baseline objective, we argue that the baseline objective still imposes an undeserved focus on learning imperceptible details and prevents from learning perceptually rich content. Fig. 3 shows that our $\lambda^{\prime}_{t}$ further suppresses the weights for the cleaning stage, which relatively uplifts the weights for the coarse and the content stages. To visualize relative changes of weights, we exhibit normalized weighting schemes. Fig. 4 supports our method in that FID of the diffusion model trained with our weighting scheme ($\gamma=1$) beats the baseline for both linear and cosine schedules throughout the training. Another notable result from Fig. 4 is that the cosine schedule is inferior to the linear schedule by a large margin, although our weighting scheme improves the FID by a large gap. Sec. 2.2 indicates that the weighting scheme is closely related to the noise schedule. As shown in Fig. 
3, the cosine schedule assigns smaller weights to the content stage compared with the linear schedule. We would like to note that designing weighting schemes and noise schedules are correlated but not equivalent, as the noise schedules affects both weights and MSE terms. To summarize, our P2 weighting provides a good inductive bias for learning rich visual concepts, by uplifting weights at the coarse and the content stages, and suppressing weights at the clean-up stage. ### 3.4 Implementation We set $k$ as 1 for easy deployment, because $1/(1+\text{SNR}(t))=1-\alpha_{t}$, as discussed in Sec. 2.1. We set $\gamma$ as either 0.5 and 1. We empirically observed that $\gamma$ over 2 suffers noise artifacts in the sample because it assigns almost zero weight to the clean-up stage. We set $T=1000$ for all experiments. We implemented the proposed approach on top of ADM [8], which offers well-designed architecture and efficient sampling. We use lighter version of ADM through our experiments. Our code and models are available111https://github.com/jychoi118/P2-weighting. ## 4 Experiment We start by exhibiting the effectiveness of our new training objective over the baseline objective in Sec. 4.1. Then, we compare with prior literature of various types of generative models in Sec. 4.2. Finally, we conduct analysis studies to further support our method in Sec. 4.3. Samples generated with our models are shown in Fig. 5. | | FID-50k$\downarrow$ | KID-50k$\downarrow$ ---|---|---|--- Dataset | Step | Base | Ours | Base | Ours FFHQ | 1000 | 7.86 | 6.92 | 3.85 | 3.46 500 | 8.41 | 6.97 | 4.48 | 3.56 CUB | 1000 | 9.60 | 6.95 | 3.49 | 2.38 250 | 10.26 | 6.32 | 4.06 | 1.93 AFHQ-D | 1000 | 12.47 | 11.55 | 4.79 | 4.10 250 | 12.95 | 11.66 | 5.25 | 4.20 Flowers | 250 | 20.01 | 17.29 | 16.8 | 14.8 MetFaces | 250 | 44.34 | 36.80 | 22.1 | 17.6 Table 1: Quantitative comparison. Diffusion models trained with our weighting scheme achieve consistent improvement over the baseline at various datasets and sampling steps, in terms of both FID and KID ($\times 10^{3}$). ### 4.1 Comparison to the Baseline Quantitative comparison. We trained diffusion models by optimizing training objectives with both baseline and our weighting scheme on FFHQ [20], AFHQ-dog [7], MetFaces [18], and CUB [47] datasets. These datasets contain approximately 70k, 50k, 1k, and 12k images respectively. We resized and center-cropped data to 256$\times$256 pixels, following the pre-processing performed by ADM [8]. Tab. 1 shows the results. Our method consistently exhibits superior performance to the baseline in terms of FID and KID. The results suggest that our weighting scheme imposes a good inductive bias for training diffusion models, regardless of the dataset. Our method outperforms the baseline by a large margin especially on MetFaces, which contains only 1k images. Hence, we assume that wasting the model capacity on learning imperceptible details is very harmful when training with limited data. Qualitative comparison. We observe that diffusion models trained with the baseline objective are likely to suffer color shift artifacts, as shown in Fig. 6. We assume that the baseline training objective unnecessarily focuses on the imperceptible details; therefore, it fails to learn global color schemes properly. In contrast, our objective encourages the model to learn global and holistic concepts in a dataset. 
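For completeness, the change behind the models trained with our objective is small in code: applying Eq. 8 on top of the standard objective amounts to multiplying each per-example MSE term by $1/(k+\text{SNR}(t))^{\gamma}$, which for $k=1$ equals $(1-\alpha_{t})^{\gamma}$ as noted in Sec. 3.4. The following training-step sketch is a minimal illustration under that reading, not the released implementation; `eps_model` and `alphas` are assumed to be the noise predictor and the cumulative schedule.

```python
import torch

def p2_training_loss(eps_model, x0, alphas, gamma=1.0, k=1.0):
    """One training step with P2 weighting: the simple objective of Eq. 6,
    re-weighted per time step by 1 / (k + SNR(t))^gamma (Eq. 8)."""
    b = x0.shape[0]
    t = torch.randint(0, len(alphas), (b,), device=x0.device)
    a = alphas[t].view(b, 1, 1, 1)
    noise = torch.randn_like(x0)
    x_t = a.sqrt() * x0 + (1.0 - a).sqrt() * noise       # Eq. 2
    snr = a / (1.0 - a)                                   # Eq. 4
    weight = 1.0 / (k + snr) ** gamma                     # equals (1 - alpha_t)^gamma when k = 1
    mse = (noise - eps_model(x_t, t)) ** 2
    return (weight * mse).mean()
```

Setting `gamma=0` recovers the baseline objective, so the same training loop produces both the baseline and the P2 models compared above.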
### 4.2 Comparison to the Prior Literature We compare diffusion models trained with our method to existing models on FFHQ [20], Oxford flowers [30], and CelebA-HQ [17] datasets, as shown in Tab. 2. We use 256$\times$256 resolutions for all datasets. We achieve state-of-the-art FIDs on the Oxford Flowers and CelebA-HQ datasets. While our models are trained with $T=1000$, we already achieve state-of-the-art with reduced sampling steps; 250 and 500 steps respectively. On FFHQ, we achieve a superior result to most models except StyleGAN2 [21], whose architecture was carefully designed for FFHQ. We note that our method brought the diffusion model closer to the state-of-the-art, and scaling model architectures and sampling steps will further improve the performance. Figure 6: Qualitative comparison. Uncurated samples generated during training. The number of images seen for training displayed on the top. We observed that the baseline suffer color shift problem at early iterations of the training (FFHQ) or even at the convergence (MetFaces). The baseline weighting scheme fails to focus on global coherence and waste model capacity on the imperceptible details. Dataset | Method | Type | FID$\downarrow$ ---|---|---|--- FFHQ | $\text{BigGAN}_{\text{~{}ICLR'19}}$ [3] | GAN | 12.4 $\text{UNet GAN}_{\text{~{}CVPR'20}}$ [37] | GAN | 10.9 $\text{StyleGAN2}_{\text{~{}CVPR'20}}$ [21] | GAN | 3.73 $\text{NVAE}_{\text{~{}NeurIPS'20}}$ [44] | VAE | 26.02 $\text{VDVAE}_{\text{~{}ICLR'21}}$ [5] | VAE | 33.5 $\text{VQGAN}_{\text{~{}CVPR'21}}$[11] | GAN+AR | 9.6 $\text{D2C}_{\text{~{}NeurIPS'21}}$ [38] | Diff | 13.04 Baseline (500 step) | Diff | 8.41 P2 (500 step) | Diff | 6.97 P2 (1000 step) | Diff | 6.92 Oxford Flower | $\text{PGGAN}_{\text{~{}ICLR'18}}$ [17] | GAN | 64.40 $\text{StyleGAN1}_{\text{~{}CVPR'19}}$ [20] | GAN | 64.70 $\text{MSG-GAN}_{\text{~{}CVPR'20}}$ [16] | GAN | 19.60 Baseline (250step) | Diff | 20.01 P2 (250step) | Diff | 17.29 CelebA -HQ | $\text{PGGAN}_{\text{~{}ICLR'18}}$ [17] | GAN | 8.03 $\text{GLOW}_{\text{~{}NeurIPS'18}}$ [22] | Flow | 68.93 $\text{ALAE}_{\text{~{}CVPR'20}}$ [33] | GAN | 19.21 $\text{NVAE}_{\text{~{}NeurIPS'20}}$ [44] | VAE | 29.76 $\text{VAEBM}_{\text{~{}ICLR'21}}$ [51] | VAE+EM | 20.38 $\text{VQGAN}_{\text{~{}CVPR'21}}$ [11] | GAN+AR | 10.70 $\text{LSGM}_{\text{~{}NeurIPS'21}}$ [45] | VAE+Diff | 7.22 P2 (500step) | Diff | 6.91 Table 2: Comparison to prior literature. FFHQ results reproduced from [32, 11, 38], Oxford Flower from [16], and CelebA-HQ from from [45]. Except for GANs, we achieve superior results. ### 4.3 Analysis In this section, we analyze whether our weighting scheme is robust to the model configurations, number of sampling steps, and sampling schedules. Model configuration matters? Previous experiments are conducted using our default model for fair comparisons. Here, we show that P2 weighting is effective regardless of the model configurations. Tab. 3 shows that our method achieves consistently superior performance to the baseline for various configurations. We investigated for following variations: replacing the BigGAN [3] residual block with the residual block of Ho et al. [14], removing self- attention at 16$\times$16, using two BigGAN [3] residual blocks, and training our default model with a learning rate of $2.5\times 10^{-5}$. Our default model contains a single BigGAN [3] residual block and is trained with a learning rate $2\times 10^{-5}$. 
Our weighting scheme consistently improves FID and KID by a large margin across various model configurations. Our method is especially effective when self-attention is removed (configuration (c)), indicating that P2 encourages learning global dependency.

Sampling step matters? We trained our models on 1000 diffusion steps following the convention of previous studies. However, it requires more than 10 min to generate a high-resolution image with a modern GPU. Nichol et al. [29] have shown that their sampling strategy maintains performance even when reducing the sampling steps. They also observed that using the DDIM [40] sampler was effective when using 50 or fewer sampling steps. Fig. 7 shows the FID scores for various sampling steps with models trained on FFHQ. A model trained with our weighting scheme consistently outperforms the baseline by considerable margins. It should be noted that our weighting scheme consistently achieves better performance with half the number of sampling steps required by the baseline.

Why not schedule sampling steps? In addition to the consistent improvement across various sampling steps, we sweep over the sampling steps in Tab. 4. Sweeping sampling schedules slightly improves FID and KID but does not reach our improvement. Our method is more effective than scheduling sampling steps, as we improve the model training, which benefits predictions at all steps.

| | Base FID-10k$\downarrow$ | Ours FID-10k$\downarrow$ | Base KID-10k$\downarrow$ | Ours KID-10k$\downarrow$ |
|---|---|---|---|---|
| (a) | 46.80 | 41.93 (-4.87) | 22.6 | 20.5 (-2.1) |
| (b) | 47.62 | 47.37 (-0.25) | 23.4 | 22.7 (-0.7) |
| (c) | 49.56 | 43.09 (-6.47) | 24.3 | 20.6 (-3.7) |
| (d) | 45.45 | 42.06 (-3.39) | 21.1 | 18.9 (-2.2) |
| (e) | 46.34 | 39.51 (-6.83) | 23.0 | 17.4 (-5.6) |

Table 3: Comparison among various model configurations; values in parentheses are changes relative to the baseline. (a) Our default configuration (b) No BigGAN block (c) Self-attention only at bottleneck (8$\times$8 resolution) (d) Two residual blocks (e) Learning rate $2.5e^{-5}$. Samples generated with 250 steps.
Figure 7: Reducing sampling steps. FID as a function of sampling steps. Our method is superior to the baseline regardless of the number of sampling steps. Samples generated following [29] and DDIM [40]. Models trained on the FFHQ dataset.

Method | Schedule | FID-10k$\downarrow$ | KID-10k$\downarrow$
---|---|---|---
Base | 250 uniform | 10.88 | 5.91
 | 130-60-60 | 10.62 | 5.14
 | 60-130-60 | 12.23 | 7.54
 | 500 uniform | 9.74 | 4.48
Ours | 250 uniform | 8.92 | 4.24

Table 4: Sweeping sampling schedule. Schedules are expressed as a sequence of integers giving the number of steps assigned to each third of the diffusion process; 130-60-60 implies consuming more steps near $t=0$. Modifying the sampling schedule can slightly improve performance, but does not exceed our improvement.

## 5 Related Work

### 5.1 Diffusion-Based Generative Models

Diffusion models [39, 14, 29, 8] and score-based models [42, 43] are two recent families of generative models that generate data using a learned denoising process. Song et al. [43] showed that both families can be expressed as stochastic differential equations (SDEs) with different noise schedules. We note that score-based models may require different P2 hyperparameters ($\gamma$ and $k$), as noise schedules are correlated with weighting schemes (Sec. 2.2). Recent studies [35, 8, 43] have achieved remarkable improvements in sample quality. However, they rely on heavy architectures, long training and sampling steps [43], classifier guidance [8], or a cascade of multiple models [35]. In contrast, we improve the performance by simply redesigning the training objective, without requiring heavy computation or additional models. Along with the success in the image domain, diffusion models have also shown effectiveness in speech synthesis [25, 4].

### 5.2 Advantages of Diffusion Models

Diffusion models have several advantages over other generative models. First, their sample quality is superior to that of likelihood-based methods such as autoregressive models [36, 31], flow models [9, 10], and variational autoencoders (VAEs) [24]. Second, because of their stable training, scaling and applying diffusion models to new domains and datasets is much easier than for generative adversarial networks (GANs) [12], which rely on unstable adversarial training. Moreover, pre-trained diffusion models are surprisingly easy to apply to downstream image synthesis tasks. Recent works [6, 28] have demonstrated that pre-trained diffusion models can easily adapt to image translation and image editing. Compared to GAN-based methods [15, 54, 1], they adapt a single diffusion model to various tasks without task-specific training and loss functions. They also show that diffusion models allow stochastic (one-to-many) generation in those tasks, while GAN-based methods are limited to deterministic (one-to-one) generation [15].

### 5.3 Redesigning Training Objectives

Recent works [23, 41, 45] introduced new training objectives to achieve state-of-the-art likelihood. However, their objectives suffer from degradation of sample quality and training instability, and therefore rely on importance sampling [41, 45] or sophisticated parameterization [23]. Because the likelihood focuses on fine-scale details, their objectives impede learning global consistency and high-level concepts of images. For this reason, [45] use different weighting schemes for likelihood training and FID training. Our P2 weighting provides a good inductive bias for perceptually rich content, allowing the model to achieve improved sample quality with stable training.
## 6 Conclusion We proposed perception prioritized weighting, a new weighting scheme for the training objective of the diffusion models. We investigated how the model learns visual concepts at each noise level during training, and divided diffusion steps into three groups. We showed that even the simplest choice (P2) improves diffusion models across datasets, model configurations, and sampling steps. Designing a more sophisticated weighting scheme may further improve the performance, which we leave as future work. We believe that our method will open new opportunities to boost the performance of diffusion models. Acknowledgements: This work was supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) [NO.2021-0-01343, Artificial Intelligence Graduate School Program (Seoul National University)], LG AI Research, Samsung SDS, AIRS Company in Hyundai Motor and Kia through HMC/KIA-SNU AI Consortium Fund, and the BK21 FOUR program of the Education and Research Program for Future ICT Pioneers, Seoul National University in 2022. ## References * [1] Rameen Abdal, Yipeng Qin, and Peter Wonka. Image2stylegan: How to embed images into the stylegan latent space? In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4432–4441, 2019. * [2] Mikołaj Bińkowski, Danica J Sutherland, Michael Arbel, and Arthur Gretton. Demystifying mmd gans. arXiv preprint arXiv:1801.01401, 2018. * [3] Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale gan training for high fidelity natural image synthesis. arXiv preprint arXiv:1809.11096, 2018. * [4] Nanxin Chen, Yu Zhang, Heiga Zen, Ron J Weiss, Mohammad Norouzi, and William Chan. Wavegrad: Estimating gradients for waveform generation. arXiv preprint arXiv:2009.00713, 2020. * [5] Rewon Child. Very deep vaes generalize autoregressive models and can outperform them on images. arXiv preprint arXiv:2011.10650, 2020. * [6] Jooyoung Choi, Sungwon Kim, Yonghyun Jeong, Youngjune Gwon, and Sungroh Yoon. Ilvr: Conditioning method for denoising diffusion probabilistic models. arXiv preprint arXiv:2108.02938, 2021. * [7] Yunjey Choi, Youngjung Uh, Jaejun Yoo, and Jung-Woo Ha. Stargan v2: Diverse image synthesis for multiple domains. In CVPR, 2020. * [8] Prafulla Dhariwal and Alex Nichol. Diffusion models beat gans on image synthesis. arXiv preprint arXiv:2105.05233, 2021. * [9] Laurent Dinh, David Krueger, and Yoshua Bengio. Nice: Non-linear independent components estimation. arXiv preprint arXiv:1410.8516, 2014. * [10] Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using real nvp. arXiv preprint arXiv:1605.08803, 2016. * [11] Patrick Esser, Robin Rombach, and Bjorn Ommer. Taming transformers for high-resolution image synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12873–12883, 2021. * [12] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In NeurIPS, 2014. * [13] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems, 30, 2017. * [14] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. arXiv preprint arXiv:2006.11239, 2020. 
* [15] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. In CVPR, 2017. * [16] Animesh Karnewar and Oliver Wang. Msg-gan: Multi-scale gradients for generative adversarial networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7799–7808, 2020. * [17] Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of gans for improved quality, stability, and variation. arXiv preprint arXiv:1710.10196, 2017. * [18] Tero Karras, Miika Aittala, Janne Hellsten, Samuli Laine, Jaakko Lehtinen, and Timo Aila. Training generative adversarial networks with limited data. NeurIPS, 2020. * [19] Tero Karras, Miika Aittala, Samuli Laine, Erik Härkönen, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Alias-free generative adversarial networks. arXiv preprint arXiv:2106.12423, 2021. * [20] Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In CVPR, 2019. * [21] Tero Karras, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. Analyzing and improving the image quality of stylegan. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8110–8119, 2020. * [22] Diederik P Kingma and Prafulla Dhariwal. Glow: Generative flow with invertible 1x1 convolutions. arXiv preprint arXiv:1807.03039, 2018. * [23] Diederik P Kingma, Tim Salimans, Ben Poole, and Jonathan Ho. Variational diffusion models. arXiv preprint arXiv:2107.00630, 2021. * [24] Diederik P Kingma and Max Welling. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114, 2013. * [25] Zhifeng Kong, Wei Ping, Jiaji Huang, Kexin Zhao, and Bryan Catanzaro. Diffwave: A versatile diffusion model for audio synthesis. arXiv preprint arXiv:2009.09761, 2020. * [26] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101, 2017. * [27] Eric Luhman and Troy Luhman. Knowledge distillation in iterative generative models for improved sampling speed. arXiv preprint arXiv:2101.02388, 2021. * [28] Chenlin Meng, Yang Song, Jiaming Song, Jiajun Wu, Jun-Yan Zhu, and Stefano Ermon. Sdedit: Image synthesis and editing with stochastic differential equations. arXiv preprint arXiv:2108.01073, 2021. * [29] Alex Nichol and Prafulla Dhariwal. Improved denoising diffusion probabilistic models. arXiv preprint arXiv:2102.09672, 2021. * [30] Maria-Elena Nilsback and Andrew Zisserman. Automated flower classification over a large number of classes. In 2008 Sixth Indian Conference on Computer Vision, Graphics & Image Processing, pages 722–729. IEEE, 2008. * [31] Aaron van den Oord, Nal Kalchbrenner, Oriol Vinyals, Lasse Espeholt, Alex Graves, and Koray Kavukcuoglu. Conditional image generation with pixelcnn decoders. arXiv preprint arXiv:1606.05328, 2016. * [32] Gaurav Parmar, Richard Zhang, and Jun-Yan Zhu. On buggy resizing libraries and surprising subtleties in fid calculation. arXiv preprint arXiv:2104.11222, 2021. * [33] Stanislav Pidhorskyi, Donald A Adjeroh, and Gianfranco Doretto. Adversarial latent autoencoders. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 14104–14113, 2020. * [34] Olaf Ronneberger, Philipp Fischer, and Thomas Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention, 2015. 
* [35] Chitwan Saharia, Jonathan Ho, William Chan, Tim Salimans, David J Fleet, and Mohammad Norouzi. Image super-resolution via iterative refinement. arXiv preprint arXiv:2104.07636, 2021. * [36] Tim Salimans, Andrej Karpathy, Xi Chen, and Diederik P Kingma. Pixelcnn++: Improving the pixelcnn with discretized logistic mixture likelihood and other modifications. arXiv preprint arXiv:1701.05517, 2017. * [37] Edgar Schonfeld, Bernt Schiele, and Anna Khoreva. A u-net based discriminator for generative adversarial networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8207–8216, 2020. * [38] Abhishek Sinha, Jiaming Song, Chenlin Meng, and Stefano Ermon. D2c: Diffusion-denoising models for few-shot conditional generation. arXiv preprint arXiv:2106.06819, 2021. * [39] Jascha Sohl-Dickstein, Eric A Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. arXiv preprint arXiv:1503.03585, 2015. * [40] Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. arXiv preprint arXiv:2010.02502, 2020. * [41] Yang Song, Conor Durkan, Iain Murray, and Stefano Ermon. Maximum likelihood training of score-based diffusion models. 2021\. * [42] Yang Song and Stefano Ermon. Generative modeling by estimating gradients of the data distribution. In NeurIPS, 2019. * [43] Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. arXiv preprint arXiv:2011.13456, 2020. * [44] Arash Vahdat and Jan Kautz. Nvae: A deep hierarchical variational autoencoder. NeurIPS, 2020. * [45] Arash Vahdat, Karsten Kreis, and Jan Kautz. Score-based generative modeling in latent space. arXiv preprint arXiv:2106.05931, 2021. * [46] Pascal Vincent. A connection between score matching and denoising autoencoders. Neural computation, 2011. * [47] Catherine Wah, Steve Branson, Peter Welinder, Pietro Perona, and Serge Belongie. The caltech-ucsd birds-200-2011 dataset. 2011\. * [48] Sheng-Yu Wang, Oliver Wang, Richard Zhang, Andrew Owens, and Alexei A Efros. Cnn-generated images are surprisingly easy to spot… for now. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 8695–8704, 2020. * [49] Daniel Watson, Jonathan Ho, Mohammad Norouzi, and William Chan. Learning to efficiently sample from diffusion probabilistic models. arXiv preprint arXiv:2106.03802, 2021. * [50] Yuxin Wu and Kaiming He. Group normalization. In Proceedings of the European conference on computer vision (ECCV), pages 3–19, 2018. * [51] Zhisheng Xiao, Karsten Kreis, Jan Kautz, and Arash Vahdat. Vaebm: A symbiosis between variational autoencoders and energy-based models. In International Conference on Learning Representations, 2020. * [52] Ning Yu, Vladislav Skripniuk, Sahar Abdelnabi, and Mario Fritz. Artificial fingerprinting for generative models: Rooting deepfake attribution in training data. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 14448–14457, 2021. * [53] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In CVPR, 2018. * [54] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. 
In Proceedings of the IEEE international conference on computer vision, pages 2223–2232, 2017. Figure A: Unnormalized or normalized weights as functions of diffusion steps or signal-to-noise ratio (SNR). Large $t$ and small SNR indicates noisy image $x_{t}$ near random noise $x_{T}$, whereas small $t$ and large SNR indicates $x_{t}$ near a clean image $x_{0}$. ## A Weighting Schemes ### A.1 Additional Visualizations In the main text, we showed weights of both our new weighting scheme and the baseline, as functions of signal-to-noise ratio (SNR). In Fig. A (left), we show weights as functions of time steps ($t$). To exhibit relative changes of weights, we show normalized weights as functions of both time steps (Fig. A (middle)) and SNR (Fig. A (right)). We normalized so that the sum of weights for all time steps become 1. Normalized weights suggest that larger $\gamma$ suppresses weights at steps near $t=0$ and uplifts weights at larger steps. Note that weights of VLB objective are equal to a constant, as such objective does not impose any inductive bias for training. In contrast, as discussed in the main text, our method encourages the model to learn rich content rather than imperceptible details. ### A.2 Derivations In the main text, we wrote the baseline weighting scheme $\lambda_{t}$ as a funtion of SNR, which characterizes the noise level at each step $t$. Below is the derivation: $\displaystyle\lambda_{t}=~{}$ $\displaystyle(1-\beta_{t})(1-\alpha_{t})/\beta_{t}$ $\displaystyle=~{}$ $\displaystyle(\alpha_{t}/\alpha_{t-1})(1-\alpha_{t})/(1-\alpha_{t}/\alpha_{t-1})$ $\displaystyle=~{}$ $\displaystyle\alpha_{t}(1-\alpha_{t})/(\alpha_{t-1}-\alpha_{t})$ $\displaystyle=~{}$ $\displaystyle\alpha_{t}(1-\alpha_{t})/((1-\alpha_{t})-(1-\alpha_{t-1}))$ $\displaystyle=~{}$ $\displaystyle\frac{\text{SNR}(t)}{(1+\text{SNR}(t))^{2}}/(\frac{1}{1+\text{SNR}(t)}-\frac{1}{1+\text{SNR}(t-1)})$ $\displaystyle=~{}$ $\displaystyle\frac{\text{SNR}(t)}{(1+\text{SNR}(t))^{2}}/\frac{\text{SNR}(t-1)-\text{SNR}(t)}{(1+\text{SNR}(t))(1+\text{SNR}(t-1))}$ $\displaystyle=~{}$ $\displaystyle\frac{\text{SNR}(t)(1+\text{SNR}(t-1))}{(1+\text{SNR}(t))(\text{SNR}(t-1)-\text{SNR}(t))}$ $\displaystyle\approx~{}$ $\displaystyle\frac{-\text{SNR}(t)}{\text{SNR}^{\prime}(t)}~{}(T\to\infty),$ (A) which is a differential of log-SNR($t$) regarding time-step $t$. ## B Discussions ### B.1 Limitations Despite the promising performances achieved by our method, diffusion models still need multiple sampling steps. Diffusion models require at least 25 feed- forwards with DDIM sampler, which makes it difficult to use diffusion models in real-time applications. Yet, they are faster than autoregressive models which generate a pixel at each step. In addition, we have observed in section 4.3 that our method enables better FID with half the number of steps required by the baseline. Along with our method, optimizing sampling schedules with dynamic programming [49] or distilling DDIM sampling into a single step model [27] might be promising future directions for faster sampling. ### B.2 Broader Impacts The proposed method in this work allows high-fidelity image generation with diffusion-based generative models. Improving the performance of generative models can enable multiple creative applications [6, 28]. However, such improvements have the potential to be exploited for deception. Works in deepfake detection [48] or watermarking [52] can alleviate the problems. 
Investigating invisible frequency artifacts [48] in samples of diffusion models might be promising approach to detect fake images. ## C Implementation Details For a given time-step $t$, the input noisy image $x_{t}$ and output noise prediction $\epsilon$ and variance $\sigma_{t}$ are images of the same resolution. Therefore, $\epsilon_{\theta}$ is parameterized with the U-Net [34]-style architecture of three input and six output channel dimensions. We inherit the architecture of ADM [8], which is a U-Net with large channel dimension, BigGAN [3] residual blocks, multi-resolution attention, and multi- head attention with fixed channels per head. Time-step $t$ is provided to the model by adaptive group normalization (AdaGN), which transforms $t$ embeddings to scales and biases of group normalizations [50]. However, for efficiency, we use fewer base channels, fewer residual blocks, and a self-attention at a single resolution (16$\times$16). Hyperparameters for training models are in Tab. A. We use $\gamma=0.5$ for FFHQ and CelebA-HQ as it achieve slightly better FIDs than $\gamma=1.0$ on those datasets. Models consist of one or two residual blocks per resolution and self-attention blocks at 16$\times$16 resolution or at bottleneck layers of 8$\times$8 resolution. Our default model has only 94M parameters, while recent works rely on large models (larger than 500M) [8]. While recent works use 2 or 4 blocks per resolution, we use only one block, which leads to speed- up of training and inference. We use dropout when training on limited data. We trained models using EMA rate of 0.9999, 32-bit precision, and AdamW optimizer [26]. | FFHQ, CelebA-HQ | AFHQ-D | CUB, Flowers, MetFaces | Tab. 3 (b) | Tab. 3 (c) | Tab. 3 (d) ---|---|---|---|---|---|--- $T$ | 1000 | 1000 | 1000 | 1000 | 1000 | 1000 $\beta_{t}$ | linear | linear | linear | linear | linear | linear Model Size | 94 | 94 | 94 | 81 | 90 | 132 Channels | 128 | 128 | 128 | 128 | 128 | 128 Blocks | 1 | 1 | 1 | 1 | 1 | 2 Self-attn | 16, bottle | 16, bottle | 16, bottle | 16, bottle | bottle | 16, bottle Heads Channels | 64 | 64 | 64 | 64 | 64 | 64 BigGAN Block | yes | yes | yes | no | yes | yes $\gamma$ | 0.5 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 Dropout | 0.0 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 Learning Rate | $\text{2e}^{\text{-5}}$ | $\text{2e}^{\text{-5}}$ | $\text{2e}^{\text{-5}}$ | $\text{2e}^{\text{-5}}$ | $\text{2e}^{\text{-5}}$ | $\text{2e}^{\text{-5}}$ Images (M) | 18, 4.4 | 2.4 | 4.8, 4.4, 1.6 | 0.8 | 0.8 | 0.8 Table A: Hyperparameters. ## D Additional Results Qualitative. Additional samples for all datasets mentioned in the paper are in Fig. D. Quantitative. In Fig. 1 of the main text, we measured perceptual distances to investigate how the diffusion process corrupts perceptual contents. In Fig. 2, we qualitatively explored what a trained model learned at each step (Fig. 2). Here, we reproduce Fig. 1 at various datasets and resolutions in Fig. B and show the quantitative result of Fig. 2 in Fig. C. These results indicate that our investigation in Sec. 3.1 holds for various datasets and resolutions. Figure B: Generalization of sec. 3. Results with CelebA-HQ, LSUN-Church, and CUB at $256^{2}$ and $64^{2}$ resolutions. Figure C: Stochastic reconstruction. Perceptual distance between input and reconstructed image as a function of signal-to-noise ratio, measured with random 200 images from FFHQ. Figure D: Additional samples generated with our models traind on various datasets.
# Photon regions, shadow observables and constraints from M87* of a charged rotating black hole Yuan Meng Xiao-Mei Kuang<EMAIL_ADDRESS>Center for Gravitation and Cosmology, College of Physical Science and Technology, Yangzhou University, Yangzhou, 225009, China Zi-Yu Tang School of Fundamental Physics and Mathematical Sciences, Hangzhou Institute for Advanced Study, UCAS, Hangzhou 310024, China School of Physical Sciences, University of Chinese Academy of Sciences, Beijing 100049, China ###### Abstract Inspired by the observations of supermassive black hole M87* in _Event Horizon Telescope_(EHT) experiment, a remarkable surge in black hole physics is to use the black hole shadow’s observables to distinguish general relativity (GR) and modified theories of gravity (MoG), which could also help to disclose the astrophysical nature of the center black hole in EHT observation. In this paper, we shall extensively carry out the study of a charged rotating black hole in conformal gravity, in which the term related with the charge has different falloffs from the usual Kerr-Newman (KN) black hole. We investigate the spacetime properties including the horizons, ergospheres and the photon regions; afterward, we show the boundary of black hole shadow and investigate its characterized observables. The features closely depend on the spin and charge parameters, which are compared with those in Kerr and KN black holes. Then presupposing the M87* a charged rotating black hole in conformal gravity, we also constrain the black hole parameters via the observation constraints from EHT experiment. We find that the constraints on the inferred circularity deviation, $\Delta C\lesssim 0.1$, and on the shadow axial ratio, $1<D_{x}\lesssim 4/3$, for the M87* black hole are satisfied for the entire parameter space of the charged rotating black hole in conformal gravity. However, the shadow angular diameter $\theta_{d}=42\pm 3\mu as$ will give upper bound on the parameter space. Our findings indicate that the current charged rotating black hole in conformal gravity could be a candidate for astrophysical black holes. Moreover, the EHT observation on the axial ratio $D_{x}$ may help us to distinguish Kerr black hole and the current charged rotating black hole in conformal gravity in some parameter space. ###### Contents 1. I Introduction 2. II The charged rotating black hole in conformal gravity 1. II.1 Black hole horizons 2. II.2 Static limit surface 3. III Null geodesics and photon regions 4. IV Black hole shadows 1. IV.1 Coordinates setup 2. IV.2 Shadow for observers at finite distance 5. V Shadow observables and parameter estimation 1. V.1 Shadow size and deformation 2. V.2 Energy emission rate 6. VI Constraints from EHT observations of M87* 7. VII Closing remarks ## I Introduction Since Bardeen addressed that the shadow of the Kerr black hole would be distorted by the spin Bardeen1973 in contrast to a perfect circle for the Schwarzschild black hole Synge:1966okc , the study on shadow of rotating black hole has been blooming with the motivation that the trajectories of light near black hole and shadow are closely connected with the essential properties of the background theory of gravity. 
Thus, physicists could use shadow to unreveal the near horizon features of black hole by analytical investigations or numerical simulation of their shadows Falcke:1999pj ; Virbhadra:1999nm ; Shen:2005cw ; Younsi:2016azx ; Atamurotov:2013sca ; Atamurotov:2015xfa ; Amir:2017slq ; Eiroa:2017uuq ; Vagnozzi:2019apd ; Long:2019nox ; Long:2020wqj ; Banerjee:2019nnj ; Mishra:2019trb ; Kumar:2020hgm ; Qian:2021qow ; Zeng:2020dco ; Zeng:2021dlj ; Lin:2022ksb ; Sun:2022wya ; Cimdiker:2021cpz ; Zhong:2021mty ; Hou:2021okc ; Cai:2021uov ; Gan:2021pwu ; Chang:2021ngy ; Wang:2021ara ; Shaikh:2021cvl ; Guo:2020blq and therein. Moreover, the size and distortion of shadow Hioki:2009na ; Kumar:2018ple , which could be calculated via the boundary of shadow, has been widely investigated to estimate the black hole parameters in both GR and MoG, with or without additional sources surrounding the black hole Wei:2013kza ; Allahyari:2019jqz ; Tsupko:2017rdo ; Cunha:2019dwb ; Kumar:2020owy ; Chen:2020aix ; Brahma:2020eos ; Belhaj:2020kwv ; Badia:2021kpk ; Lee:2021sws ; Badia:2020pnh ; Afrin:2021imp ; Kumar:2019pjp ; Ghosh:2020spb ; Bambi:2019tjh ; Afrin:2021wlj ; Jha:2021bue ; Khodadi:2021gbc ; Frion:2021jse ; Roy:2021uye . This direction could be seen as one aspect of black hole shadows to distinguish GR and other theories of gravity, or to acquire the information of the surrounding matter, though it was found that those theoretical features of shadow are usually not sufficient to distinguish black holes in different theories or confirm the details of the surrounding matter. More details about black hole shadows can be seen in the reviews Cunha:2018acu ; Perlick:2021aok . More recently, the EHT collaboration captured the first image of the supermassive black hole M87* which makes the black hole shadow become a physical reality beyond theory EventHorizonTelescope:2019dse ; EventHorizonTelescope:2019ths ; EventHorizonTelescope:2019pgp . The shadow of M87* from EHT observation has a derivation from circularity $\Delta C\lesssim 0.1$, a axis ratio $1<D_{x}\lesssim 4/3$ and the angular diameter $\theta_{d}=42\pm 3\mu as$. These observations are consistent with the image of Kerr black hole predicted from GR, but they cannot rule out Kerr or non- Kerr black holes in MoG. Thus, the EHT observations of shadow are then applied as an important tool to test black hole in strong gravitational field regime, as the observational data could be used to constrain the black hole parameters in MoG, and even to distinguish different theories of gravity Cunha:2019ikd ; EventHorizonTelescope:2021dqv ; Khodadi:2020jij ; Bambi:2019tjh ; Afrin:2021imp ; Kumar:2019pjp ; Ghosh:2020spb ; Afrin:2021wlj ; Jha:2021bue ; Khodadi:2021gbc . In this work, we shall mainly study the aspects of shadows for a charged rotating black hole in conformal gravity characterized by the spin and charge parameters, in which the charge-related term has different falloffs from the usual KN black hole. We will show more details about this black hole geometry later in next section. The charged rotating black hole here we consider was given in Liu:2012xn as a solution in conformal gravity with the Lagrangian $L=\frac{1}{2}\gamma C^{\mu\nu\rho\sigma}C_{\mu\nu\rho\sigma}+\frac{1}{3}\gamma F^{2}$ (1) which includes the Weyl-squared term minimally coupling to the Maxwell field. Here $C_{\mu\nu\rho\sigma}$ is the Weyl tensor and $F=dA$ is the strength of the Maxwell field. 
Conformal gravity was first introduced by Weyl as an extension of GR Weyl:1918pdp and was later considered extensively by ’t Hooft and others in Mannheim:1988dj ; Varieschi:2009vlp ; tHooft:2010xlr ; tHooft:2014swy and references therein. The ghost instability and unitarity of conformal gravity have been studied in Bender:2007wu ; Mannheim:2000ka . In contrast to GR, in conformal gravity dark matter or dark energy is not necessary for addressing several cosmological and astrophysical problems; readers can refer to Mannheim:2011ds for more details on this subject. In addition, Maldacena showed that conformal gravity reduces to Einstein gravity under a certain boundary condition, so that there could be a holographic connection between the two theories of gravity Maldacena:2011mk . Such advantageous features indicate that conformal gravity deserves further exploration. One natural direction is the black hole shadow, as the recent progress of the EHT experiment opens a new window to test the strong-field regime. The shadow boundary of a Kerr-like metric in conformal gravity has been investigated in Mureika:2016efo . Here, we consider the charged rotating black hole geometry and study various aspects of its shadow. Starting from the null geodesics, we study the photon regions and then determine the shadow boundary of the black hole. We also analyze the characteristic observables, i.e. the shape, size and distortion of the shadows, and discuss the estimation of the black hole parameters from given observables. We then treat M87* as the charged rotating black hole in conformal gravity and constrain the black hole parameters with the EHT observations.

The remaining parts of this paper are organized as follows. In section II, we study the horizons, static limit and other spacetime properties of the charged rotating black hole in conformal gravity. We obtain the photon region by analyzing the null geodesics in section III, and in section IV, with the use of Cartesian coordinates, we show the shadow boundary for various values of the parameters for observers at finite distance. In section V, we investigate the size and deformation of the black hole shadow for an observer at infinite distance and address the parameter estimation by the shadow observables, from which we also calculate the energy emission rate. In section VI, by presupposing that M87* is the charged rotating black hole in conformal gravity, we constrain the black hole parameters from the EHT observations. The last section contains our closing remarks.
This black hole differs from the usual KN black hole: there the charge term in $\Delta_{r}$ is simply a constant $\beta$, whereas here it is the cubic term $\beta r^{3}/6m$ arising in conformal gravity. Note that, compared with the expression of the rotating charged solution in Liu:2012xn , we focus here on the case where the integration constant $\Lambda$ vanishes (we thank Professor Hai-Shan Liu for reminding us of this point).

### II.1 Black hole horizons

The black hole horizons are determined by $g^{rr}=0$ with $\Sigma\neq 0$, i.e. they correspond to the positive roots of

$\Delta_{r}=r^{2}-2mr+a^{2}+\frac{\beta r^{3}}{6m}=0.$ (4)

The above equation has three roots. Depending on $m$, $a$ and $\beta$, there can be two real positive roots, one real positive root, or none. In these three cases the metric (2) describes, respectively, a non-extremal black hole with event horizon $r_{+}$ and Cauchy horizon $r_{-}$, an extremal black hole with event horizon $r_{ex}=r_{+}=r_{-}$, and no black hole at all. When $\beta$ is smaller than the critical value obtained from the extremal condition,

$\beta_{ex}=\frac{4\left(8m^{4}-9a^{2}m^{2}+\sqrt{m^{2}\left(4m^{2}-3a^{2}\right)^{3}}\right)}{9a^{4}}~{},$ (5)

the metric describes a non-extremal black hole with $0<r_{-}<r_{+}$. A naked singularity emerges when $\beta>\beta_{ex}$, because in this case none of the three roots is real and positive. Besides, for $\beta=0$ the horizons $r_{\pm}$ reduce to $m\pm\sqrt{m^{2}-a^{2}}$ with $|a|\leq m$ (the Kerr case). The extremal value $\beta_{ex}$ is different from that of the KN black hole ($\beta_{ex}^{KN}=m^{2}-a^{2}$). For $a\to 0$ we have $\beta_{ex}\to+\infty$, which indicates that the black hole is always non-extremal, in contrast to the finite value $\beta_{ex}^{KN}=m^{2}$ for the Reissner-Nordstrom (RN) black hole. The above scenarios in the $(a,\beta)$ parameter space are shown in FIG. 1, where the KN case is also shown for comparison. Note that all parameters can be rescaled to dimensionless quantities by appropriate powers of $m$; for example, $a/m$, $r/m$ and $\beta$ are dimensionless. All numerical values quoted in this work refer to these dimensionless quantities, and for simplicity we set $m=1$ in the calculations unless stated otherwise.

Figure 1: The parameter space $(a,\beta)$ of the charged rotating black hole in conformal gravity (left) and the KN black hole (right). The red curves correspond to the extremal case separating black holes (blue regions) from naked singularities (white regions).

The explicit dependence of the horizons on the parameters is shown in FIG. 2. It is obvious that as $\beta$ or $a$ increases, $r_{+}$ decreases while $r_{-}$ increases; when the extremal condition (5) is satisfied, $r_{+}$ and $r_{-}$ converge to $r_{ex}$, which decreases as $\beta$ increases but increases as $a$ increases (see the solid black curves). The effects of the charge and spin parameters on $r_{+}$ and $r_{-}$ are similar to those in the KN spacetime, where, however, the extremal horizon $r_{ex}^{KN}=m$ is independent of the charge and spin parameters.

Figure 2: The event horizon $r_{+}$ (solid curves) and Cauchy horizon $r_{-}$ (dashed curves) plotted for various values of $a$ and $\beta$. The solid black curve represents the extremal case where the event horizon and Cauchy horizon coincide.
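As a quick numerical illustration of the horizon structure described above (a minimal sketch, not part of the analysis; the function names are ours and we set $m=1$ by default), one can locate the horizons as the real positive roots of $\Delta_{r}$ and compare $\beta$ against $\beta_{ex}$ of Eq. (5):

```python
import numpy as np

def horizons(a, beta, m=1.0):
    """Real positive roots of Delta_r = beta r^3/(6m) + r^2 - 2 m r + a^2."""
    coeffs = [beta / (6.0 * m), 1.0, -2.0 * m, a**2]   # decreasing powers of r
    if beta == 0.0:
        coeffs = coeffs[1:]                            # Kerr limit: quadratic
    roots = np.roots(coeffs)
    return sorted(r.real for r in roots if abs(r.imag) < 1e-10 and r.real > 0)

def beta_ex(a, m=1.0):
    """Extremal charge parameter of Eq. (5)."""
    return 4.0 * (8*m**4 - 9*a**2*m**2
                  + np.sqrt(m**2 * (4*m**2 - 3*a**2)**3)) / (9*a**4)

a = 0.5
bex = beta_ex(a)
print(horizons(a, 0.3 * bex))   # two roots [r_-, r_+]: non-extremal black hole
print(horizons(a, 1.1 * bex))   # no real positive root: naked singularity
```

At $\beta=\beta_{ex}$ the two returned values become numerically coincident, reproducing the extremal case.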
### II.2 Static limit surface

For a rotating black hole, the event horizon does not coincide with the static limit surface, at which the asymptotic time-translation Killing vector becomes null, so that

$g_{tt}=-\frac{1}{\Sigma}\left(\Delta_{r}-a^{2}\sin^{2}\vartheta\right)=0.$ (6)

Depending on the values of $a,\beta$ and $\vartheta$, the roots of the above equation fall into three cases: no real positive root, a double real positive root, and two real positive roots. We denote the real positive roots by $r_{SL_{-}}$ and $r_{SL_{+}}$ with $r_{SL_{-}}<r_{SL_{+}}$. The explicit expressions of the solutions are so complicated that we do not show them here; instead, we plot their behavior in FIG. 3. From the figures we can see that there exists at least one border on which the two static limit surfaces coincide, $r_{SL_{-}}=r_{SL_{+}}$, i.e. the extremal case with one real positive root.

Figure 3: The static limit surfaces plotted with fixed $\vartheta$, $a$ and $\beta$, respectively, where the red surface denotes $r_{SL_{+}}$ while the blue surface represents $r_{SL_{-}}$.

We will not describe explicitly the dependence of $r_{SL_{\pm}}$ on the parameters $a,\beta$ and $\vartheta$. What we really want to show is that the ergoregion of this rotating black hole is bounded between $r_{+}<r<r_{SL+}$ and $r_{-}<r<r_{SL-}$, in which the timelike Killing vector becomes spacelike ($g_{tt}>0$). In particular, when $\Sigma=0$, which requires both $r=0$ and $a\cos\vartheta=0$, the spacetime has a true physical singularity. Apart from this ring singularity, the sphere $r=0$ is regular. Besides, for $g_{\varphi\varphi}<0$ the spacetime violates the causality condition because of the presence of closed timelike curves. More detailed exhibitions of the horizons, ergoregions, singularity and causality-violating regions will be presented later together with the photon regions.

## III Null geodesics and photon regions

The propagation of light near a black hole, particularly along circular orbits, is of great significance in both theoretical physics and astrophysics. For photons, the circular orbits outside the event horizon of a black hole are usually unstable. This means that a slight perturbation can make the photons fall into the black hole or escape to infinity; the latter constitute a photon ring that delimits the black hole image for observers at a distance. Therefore, we start from the geodesics of the photons in order to analyze the photon regions and the shadow images in the charged rotating black hole spacetime (2) in conformal gravity.

We first consider particles with mass $\mu$, whose Lagrangian reads $\mathscr{L}=\frac{1}{2}g_{\mu\nu}\dot{x}^{\mu}\dot{x}^{\nu}$. Here the dot represents the derivative with respect to the affine parameter $\lambda$, which is related to the proper time via $\tau=\lambda\mu$. Following Carter:1968rr , we introduce the Hamilton-Jacobi equation

$\mathscr{H}=-\frac{\partial S}{\partial\lambda}=\frac{1}{2}g^{\mu\nu}\frac{\partial S}{\partial x^{\mu}}\frac{\partial S}{\partial x^{\nu}}=-\frac{1}{2}\mu^{2},$ (7)

where $\mathscr{H}$ and $S$ are the canonical Hamiltonian and the Jacobi action.
With the conserved quantities $\displaystyle E:=-\frac{\partial S}{\partial t}=-g_{\varphi t}\dot{\varphi}-g_{tt}\dot{t},~{}~{}~{}\mathrm{and}~{}~{}~{}L_{z}:=\frac{\partial S}{\partial\varphi}=g_{\varphi\varphi}\dot{\varphi}+g_{\varphi t}\dot{t}~{},$ (8) the Jacobi action can be separated as $S=\frac{1}{2}\mu^{2}\lambda- Et+L_{z}\varphi+S_{r}(r)+S_{\vartheta}(\vartheta),$ (9) where $E$, $L_{z}$ are the constants of motion associated with the energy and angular momentum of the particle, respectively. Then focusing on the photons ($\mu=0$), we obtain four first-order differential equations for the geodesic motions $\displaystyle\dot{t}$ $\displaystyle=$ $\displaystyle\frac{\chi(L_{z}-E\chi)}{\Sigma\sin^{2}\vartheta}+\frac{(\Sigma+a\chi)((\Sigma+a\chi)E-aL_{z})}{\Sigma\Delta_{r}},$ (10) $\displaystyle\dot{\varphi}$ $\displaystyle=$ $\displaystyle\frac{(L_{z}-E\chi)}{\Sigma\sin^{2}\vartheta}+\frac{a(E(a\chi+\Sigma)-aL_{z})}{\Sigma\Delta_{r}},$ (11) $\displaystyle\Sigma^{2}\dot{\vartheta}^{2}$ $\displaystyle=$ $\displaystyle K-\frac{(E\chi- L_{z})^{2}}{\sin^{2}\vartheta}=:\Theta(\vartheta),$ (12) $\displaystyle\Sigma^{2}\dot{r}^{2}$ $\displaystyle=$ $\displaystyle((\Sigma+a\chi)E-aL_{z})^{2}-\Delta_{r}K=:R(r),$ (13) where $K$ is Carter constant. Comparing to the complete solution to the above equations, we are more interested in the photon region, which is filled by the null geodesics staying on a sphere. For convenience, we introduce the abbreviations $L_{E}\equiv\frac{L_{z}}{E},~{}~{}~{}~{}~{}K_{E}\equiv\frac{K}{E^{2}}.$ (14) The spherical orbits require $\dot{r}=0$ and $\ddot{r}=0$, which can be fulfilled by $R(r)=0$ and $R^{\prime}(r)=0$ according to (13). Subsequently, the constants of motion $K_{E}$ and $L_{E}$ are given as $K_{E}=\frac{16r^{2}\Delta_{r}}{(\Delta^{\prime}_{r})^{2}},~{}~{}~{}~{}aL_{E}=(\Sigma+a\chi)-\frac{4r\Delta_{r}}{\Delta^{\prime}_{r}},$ (15) where the prime denotes the derivative to $r$. Substituting the above expression into (12), we find that its non-negativity could give us the condition for the photon region $(4r\Delta_{r}-\Sigma\Delta^{\prime}_{r})^{2}\leq 16a^{2}r^{2}\Delta_{r}\sin^{2}\vartheta.$ (16) In this region, for each point with coordinates $(r_{p},\vartheta_{p})$, there is a null geodesic staying on the sphere $r=r_{p}$, along which $\vartheta$ can oscillate between the extremal values determined by the equality in (16), while the $\varphi$ is governed by (11). With respect to radial perturbations, the spherical null geodesic at $r=r_{p}$ could be either unstable or stable depending on the sign of $R^{\prime\prime}(r_{p})$ which can be derived from (13) and (15) as $\frac{R^{\prime\prime}(r)}{8E^{2}}(\Delta^{\prime}_{r})^{2}=2r\Delta_{r}\Delta^{\prime}_{r}+r^{2}(\Delta^{\prime}_{r})^{2}-2r^{2}\Delta_{r}\Delta^{\prime\prime}_{r}.$ (17) The condition $R^{\prime\prime}(r_{p})>0$ means it is unstable while $R^{\prime\prime}(r_{p})<0$ indicates the stability. The photon regions of the charged rotating black hole in conformal gravity are shown in $(r,\vartheta)$ plane, see FIG. 4 and FIG. 5, where the unstable photon orbits (the region) and stable photon orbits (the region) are distinguished. Here we plot the whole range of the spacetime in $r$ direction and use two different scales, following Grenzebach:2014fha . The radial coordinate has been scaled as $m\exp{(r/m)}$ in the region $r<0$, while scaled as $r+m$ in the region $r>0$, hence we use the black dashed circle to denote the throat at $r=0$. 
Moreover, one shaded region represents $\Delta_{r}\leq 0$ and its boundaries indicate the black hole horizons; two further shaded regions represent the ergosphere and the causality-violating regions, respectively, and a separate marker shows the singularity (the different regions are distinguished by color in the figures). In the figures, we fix $a=0.5$ and $0.95$, respectively, and change $\beta=\sharp\beta_{ex}$ where $\sharp\in\{0,0.3,0.6,1\}$ and $\beta_{ex}$ is the corresponding extremal value (5). As in the Kerr black hole Grenzebach:2014fha , we see an exterior photon region outside the outer horizon and an interior photon region inside the inner horizon, both symmetric with respect to the equatorial plane. All photon orbits are unstable in the exterior photon region, while there exist both stable and unstable orbits in the interior photon region. The exterior and interior photon regions enlarge as $a$ increases but shrink as $\beta$ increases. Moreover, the dependence of the unidirectional membrane region and the ergosphere region on the black hole parameters is also evident here and is consistent with the analysis in the previous section. Also, the causality-violating region lying on the side of negative $r$ always exists, and for small $a$ and large enough $\beta$ we see an additional causality-violating region which is symmetric and extends from the outer horizon to a finite region depending on $\beta$.

(a) $\beta=0$ (b) $\beta=0.3\beta_{ex}$ (c) $\beta=0.6\beta_{ex}$ (d) $\beta=\beta_{ex}$

Figure 4: The photon regions with $a=0.5$, together with the unidirectional membrane region, the ergosphere region and the causality-violating region. The plots at the bottom show a magnified inner part.

(a) $\beta=0$ (b) $\beta=0.3\beta_{ex}$ (c) $\beta=0.6\beta_{ex}$ (d) $\beta=\beta_{ex}$

Figure 5: The photon regions with $a=0.95$, together with the unidirectional membrane region, the ergosphere region and the causality-violating region.

## IV Black hole shadows

Since the photon region determines the boundary of the black hole shadow, we now go on to construct the shadow of the charged rotating black hole in conformal gravity.

### IV.1 Coordinates setup

For light rays issuing from the position of an observer into the past, the initial direction is determined by two angles in the observer’s sky, a colatitude angle and an azimuthal angle. We consider an observer at position $(r_{o},\vartheta_{o})$ in Boyer-Lindquist coordinates. To determine the boundary of the shadow, we choose an orthonormal tetrad Grenzebach:2014fha

$\begin{split}e_{0}=\frac{(\Sigma+a\chi)\partial_{t}+a\partial_{\varphi}}{\sqrt{\Sigma\Delta_{r}}}\Big{\mid}_{(r_{o},\vartheta_{o})},~{}~{}~{}~{}e_{1}=\sqrt{\frac{1}{\Sigma}}\partial_{\vartheta}\Big{\mid}_{(r_{o},\vartheta_{o})},\\\ e_{2}=-\frac{\partial_{\varphi}+\chi\partial_{t}}{\sqrt{\Sigma}\sin\vartheta}\Big{\mid}_{(r_{o},\vartheta_{o})},~{}~{}~{}~{}~{}e_{3}=-\sqrt{\frac{\Delta_{r}}{\Sigma}}\partial_{r}\Big{\mid}_{(r_{o},\vartheta_{o})},\end{split}$ (18)

at the observation event in the domain of outer communication. In this tetrad, $e_{0}$ is the four-velocity of the observer, $e_{3}$ represents the spatial direction towards the center of the black hole, and $e_{0}\pm e_{3}$ are tangent to the principal null congruences of the background metric.
In this way, a linear combination of $e_{i}$ is tangent to a light ray $s(\lambda)=(t(\lambda),r(\lambda),\vartheta(\lambda),\varphi(\lambda))$, such that we have $\partial_{\lambda}=\dot{r}\partial_{r}+\dot{\vartheta}\partial_{\vartheta}+\dot{\varphi}\partial_{\varphi}+\dot{t}\partial_{t}=\alpha_{s}(-e_{0}+\sin\theta\cos\psi e_{1}+\sin\theta\sin\psi e_{2}+\cos\theta e_{3})$ (19) where the scalar factor can be determined by inserting (18) into (19) as $\alpha_{s}=\frac{aL_{z}-(\Sigma+a\chi)E}{\sqrt{\Sigma\Delta_{r}}}\Big{\mid}_{(r_{o},\vartheta_{o})},$ (20) and it is easy to see that the direction $\theta=0$ points to the black hole. Moreover, here we have introduced $\theta$ and $\psi$ which are the aforementioned two angles, i.e. the celestial coordinates in the observer’s sky, see the left picture of FIG. 6. Further comparing the coefficients of $\partial_{t}$ and $\partial_{r}$, we find that $\displaystyle\sin\psi=\frac{L_{E}-\chi}{\sqrt{K_{E}}\sin\vartheta}\Big{\mid}_{\vartheta=\vartheta_{o}},~{}~{}~{}\sin\theta=\frac{\sqrt{\Delta_{r}K_{E}}}{\Sigma+a\chi- aL_{E}}\Big{\mid}_{r=r_{o}}.$ (21) Since the boundary of shadow could correspond to the light rays which infinity approach a spherical null geodesic, so such light ray must have the same $K_{E}$ and $L_{E}$ as the limiting spherical null geodesic, so in (21), we have $\displaystyle K_{E}=\frac{16r^{2}\Delta_{r}}{(\Delta^{\prime}_{r})^{2}}\Big{\mid}_{r=r_{p}},~{}~{}~{}aL_{E}=(\Sigma+a\chi)-\frac{4r\Delta_{r}}{\Delta^{\prime}_{r}}\Big{\mid}_{r=r_{p}}.$ (22) where $r_{p}$ is the radius coordinate of the limiting spherical null geodesic. Figure 6: This picture is taken from Perlick:2021aok . The left picture shows the definition of the celestial coordinates $\theta$ and $\psi$ on the observer’s sky. The right picture shows the stereographic projection of the celestial sphere onto a plane. Therefore, the boundary of the black hole shadow depends on $r_{p}$ in the form of $(\theta(r_{p}),\psi(r_{p}))$. Since the points $(\theta,\psi)$ and $(\theta,\pi-\psi)$ have the same $K_{E}$ and $L_{E}$, so the shadow is symmetric with respect to the horizontal axis. And for $a>0$, $\theta$ reaches its maximal and minimal value along the boundary curve at $\psi=-\pi/2$ and $\psi=\pi/2$, respectively, which could give us the corresponding $r_{p}^{max}$ and $r_{p}^{min}$. Putting (22) into (21) with $\psi=\mp\pi/2$, $r_{p}^{max/min}$ can be solved via $(4r\Delta_{r}-\Sigma\Delta^{\prime}_{r})\mp 4ar\sqrt{\Delta_{r}}\sin\vartheta\Big{\mid}_{(r=r_{p},\vartheta=\vartheta_{o})}=0.$ (23) Note that for $a=0$, the above method that parameterizes the shadow boundary by $r_{p}$ does not work. Then following Grenzebach:2014fha , one could apply the stereographic projection (see the right picture of FIG. 6) to transform the celestial coordinates $(\theta(r_{p}),\psi(r_{p}))$ into the standard Cartesian coordinates $(X(r_{p}),Y(r_{p}))$ $\begin{split}X{(r_{p})}=-2\tan\left(\frac{\theta{(r_{p})}}{2}\right)\sin\psi{(r_{p})},\\\ Y{(r_{p})}=-2\tan\left(\frac{\theta{(r_{p})}}{2}\right)\cos\psi{(r_{p})}.\end{split}$ (24) Then we can figure out the boundary of the shadow on a two-dimensional plane, observed by our chosen observer with four-velocity $e_{0}$. Note that the range of the inclination angle is $\vartheta_{o}\in[0,\pi]$, and $\vartheta_{o}=0(\pi)$ corresponds to the observer in north (south) direction while $\vartheta_{o}=\pi/2$ corresponds to the observer at equatorial plane of the black hole. 
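The construction of Eqs. (21)-(24) can be implemented numerically. The sketch below is only an illustration (not the code used for the figures): it scans candidate radii $r_{p}$ of spherical photon orbits over an assumed search window, evaluates $K_{E}$ and $L_{E}$ from Eq. (22), discards points falling outside the photon region (where $|\sin\psi|>1$), and projects the remaining celestial angles onto the $(X,Y)$ plane via Eq. (24); it assumes $a\neq 0$, $m=1$ and an observer off the rotation axis.

```python
import numpy as np

def Delta(r, a, beta, m=1.0):
    return r**2 - 2.0*m*r + a**2 + beta*r**3/(6.0*m)

def dDelta(r, a, beta, m=1.0):
    return 2.0*r - 2.0*m + beta*r**2/(2.0*m)

def shadow_boundary(a, beta, r_o, th_o, m=1.0, n=4000):
    """Upper half of the shadow boundary (X, Y); mirror (X, -Y) for the rest."""
    r_p = np.linspace(1.01*m, 4.0*m, n)              # assumed window for r_p
    D, dD = Delta(r_p, a, beta, m), dDelta(r_p, a, beta, m)
    keep = D > 0                                     # stay outside Delta_r <= 0
    r_p, D, dD = r_p[keep], D[keep], dD[keep]
    K_E = 16.0 * r_p**2 * D / dD**2                  # Eq. (22)
    L_E = (r_p**2 + a**2 - 4.0*r_p*D/dD) / a         # Eq. (22), requires a != 0
    chi_o = a * np.sin(th_o)**2
    sin_psi = (L_E - chi_o) / (np.sqrt(K_E) * np.sin(th_o))            # Eq. (21)
    sin_th = np.sqrt(Delta(r_o, a, beta, m) * K_E) / (r_o**2 + a**2 - a*L_E)
    ok = (np.abs(sin_psi) <= 1.0) & (sin_th > 0.0) & (sin_th <= 1.0)
    psi, th = np.arcsin(sin_psi[ok]), np.arcsin(sin_th[ok])
    X = -2.0 * np.tan(th/2.0) * np.sin(psi)                            # Eq. (24)
    Y = -2.0 * np.tan(th/2.0) * np.cos(psi)
    return X, Y

X, Y = shadow_boundary(a=0.95, beta=0.5, r_o=5.0, th_o=np.pi/2)
```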
Due to the symmetry, we shall consider $\vartheta_{o}\in[0,\pi/2]$ in the following study. ### IV.2 Shadow for observers at finite distance Firstly, we consider the observer located at finite distance with position $(r_{o},\vartheta_{o})$. We know for non-rotating black hole, the shape of the shadow is a perfect circle due to the spherically symmetrical system, and the rotation will lead to the shape deformation. In FIG. 7 and FIG. 8, we show the boundary of the shadow for the charged rotating black hole in conformal gravity. The effects of parameter $\beta$ with different values of $a$ are shown in FIG. 7. It is clear that the existence of $a$ and $\beta$ both enhances the deformation of shadow. This means that the shadow of the charged rotating black hole in conformal gravity with parameters $(a,\beta)$ and that of the Kerr black hole with a certain spin may be coincident. While the influence of the charge parameter $\beta$ in conformal gravity on the shadow is qualitatively similar to that of the KN case Grenzebach:2014fha ; Perlick:2021aok ; Tsukamoto:2017fxq ; Xavier:2020egv . In FIG. 8, we fix $a=0.95$ and $\beta=0.999\beta_{ex}$. The left plot shows the influence of the viewing angle of the observer, which indicates that the shadow remains circular for a polar observer with $\vartheta_{o}=0$, while the shadow is maximally deformed for an observer in the equatorial plane with $\vartheta_{o}=\pi/2$. The right plot shows the influence of the distance between the observer and the black hole on the shadow, where the shadow is smaller for the farther observer as expected. Figure 7: Black hole shadows seen by an equatorial observer $(\vartheta_{o}=\frac{\pi}{2})$ at $r_{o}=5$. Plots from left to right correspond to $a=0.1,a=0.5$ and $a=0.95$. In each plot, the black, blue, red and green curves correspond to $\beta=(0,0.3,0.6,0.999)\beta_{ex}$. Figure 8: LEFT: Black hole shadows with $r_{o}=5$ and different viewing angles. The shadow boundaries from left to right have $\vartheta_{o}=0,\pi/8,\pi/4$, and $\pi/2$, respectively. RIGHT: Black hole shadows with $\vartheta_{o}=\pi/2$, and $r_{o}=5,10,20,50$ for boundaries from outer to inner. In both figures we have fixed $a=0.95$ and $\beta=0.999\beta_{ex}$. ## V Shadow observables and parameter estimation To carefully study how the shadow observables are affected by the model parameters, we consider the black hole shadows observed at spatial infinity, i.e. $r_{o}\gg m$. In this case, as addressed in Perlick:2021aok , the coordinates in (24) can be transformed to $\bar{\alpha}=r_{o}X-a\sin\vartheta_{o}$ and $\bar{\beta}=r_{o}Y$, which are finally reduced as $\displaystyle\bar{\alpha}{(r_{p})}=-\frac{\xi{(r_{p})}}{\sin\vartheta_{o}},~{}~{}~{}~{}\bar{\beta}{(r_{p})}=\pm\sqrt{\eta{(r_{p})}+a^{2}\cos^{2}\vartheta_{o}-\xi{(r_{p})}^{2}\cot^{2}\vartheta_{o}}$ (25) where $\xi(r_{p})=L_{E}\mid_{r_{p}}$ and $\eta=K_{E}-(L_{E}-a)^{2}\mid_{r_{p}}$. Here $(\bar{\alpha},\bar{\beta})$ are the Bardeen’s two impact parameters with length dimension describing the celestial sphere Cunningham:cgh . Subsequently, we show the boundary of shadow for the observer at spatial infinity in FIG. 9 and FIG. 10 in which the axes labels $(X,Y)$ represent $(\bar{\alpha}/m,\bar{\beta}/m)$. We see that the boundary of black hole shadow closely depends on the parameters $a,\beta$ and $\vartheta_{o}$, and the tendencies are similar as that for the observer at finite distance. It is noticed that the black hole parameters are expected to be associated and estimated from observations. 
Though the image of M87* is mostly associated with a Kerr black hole, the interesting point is that, if the central object is instead a black hole from MoG, the distortion of the shadow for a given spin parameter also depends on the additional parameter, as we show in our figures. Thus, instead of describing these similar properties further, here we shall study how to estimate the parameters from observables such as the size and distortion of the black hole shadow.

Figure 9: Black hole shadow seen by an observer at infinite distance with $\vartheta_{o}=\pi/2$. We fix $a=0.1$, $0.5$ and $0.95$ from left to right. In each plot, the black, blue, red and green curves correspond to $\beta=(0,0.3,0.6,0.999)\beta_{ex}$.

Figure 10: Black hole shadows seen by an observer at infinite distance for different inclination angles: $\vartheta_{o}=0$ (black), $\pi/8$ (blue), $\pi/4$ (red), and $\pi/2$ (green). We have fixed $a=0.95$ and $\beta=0.999\beta_{ex}$.

### V.1 Shadow size and deformation

To describe the distortion and size of the shadow of the charged rotating black hole in conformal gravity, we first study two characteristic observables, $R_{s}$ and $\delta_{s}$, which were proposed by Hioki and Maeda Hioki:2009na . Here $R_{s}$ is the radius of the reference circle for the distorted shadow and $\delta_{s}$ is the deviation of the left edge of the shadow from the reference circle boundary. For convenience, we denote the top, bottom, right and left of the reference circle as $(X_{t},Y_{t})$, $(X_{b},Y_{b})$, $(X_{r},0)$ and $(X_{l}^{\prime},0)$, respectively, and $(X_{l},0)$ as the leftmost edge of the shadow Ghosh:2020ece . The characteristic observables are then defined as Hioki:2009na

$R_{s}=\frac{(X_{t}-X_{r})^{2}+Y_{t}^{2}}{2\mid X_{r}-X_{t}\mid},~{}~{}~{}~{}~{}\delta_{s}=\frac{\mid X_{l}-X_{l}^{\prime}\mid}{R_{s}}.$ (26)

From the density plots of $R_{s}$ and $\delta_{s}$ in FIG. 11 and FIG. 12, we see that the black hole parameters in conformal gravity leave imprints on the shadow size and shape. FIG. 11 shows that as the charge parameter $\beta$ increases, the radius $R_{s}$ decreases rapidly. It is only slightly affected by the spin parameter $a$ and the inclination angle $\vartheta_{o}$; their effects are enlarged in the left plot of FIG. 13, from which we find that $R_{s}$ slightly decreases as $a$ increases while it increases as $\vartheta_{o}$ increases. On the other hand, FIG. 12 shows that as $a$ or $\vartheta_{o}$ increases, the distortion $\delta_{s}$ increases, which means that the shadow is more distorted, as expected. Moreover, when $a$ or $\vartheta_{o}$ is small, the effect of $\beta$ on $\delta_{s}$ is slight, but when they are large enough, increasing $\beta$ enhances $\delta_{s}$ significantly. The above analysis further implies that, compared to the Kerr black hole, the shadow of this charged rotating black hole in conformal gravity is always smaller in radius but more distorted, which is similar to the behavior of the KN black hole Kumar:2019pjp .

Figure 11: The density plots for the radius of the reference circle $R_{s}$ as a function of $a$ and $\beta$. Here we fix $\vartheta_{o}=\pi/2$ in the left plot and $a=0.4$ in the right plot.

Figure 12: The density plots of the distortion $\delta_{s}$. Here we fix $\vartheta_{o}=\pi/2$ in the left plot and $a=0.4$ in the right plot.

Figure 13: The 3D plots of the radius $R_{s}$ (left) and area $A$ (right) of the black hole shadow. Here we fix $\beta=0.1$.
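As a concrete illustration (a minimal sketch with illustrative names, not the code used for the figures), the observables of Eq. (26) can be computed directly from sampled boundary points. The sketch assumes the boundary is supplied as arrays of the dimensionless celestial coordinates in units of $m$ (e.g. the upper half of the curve obtained from Eq. (25)) and that the distorted edge lies on the left, as in the definition above:

```python
import numpy as np

def hioki_maeda_observables(X, Y):
    """Shadow radius R_s and distortion delta_s of Eq. (26)."""
    X_r, X_l = X.max(), X.min()      # rightmost / leftmost points of the shadow
    i_t = np.argmax(Y)               # topmost point (X_t, Y_t)
    X_t, Y_t = X[i_t], Y[i_t]
    R_s = ((X_t - X_r)**2 + Y_t**2) / (2.0 * abs(X_r - X_t))
    X_l_prime = X_r - 2.0 * R_s      # leftmost point of the reference circle
    delta_s = abs(X_l - X_l_prime) / R_s
    return R_s, delta_s
```

Here the reference circle is fixed by the top, bottom and right edges of the shadow, so its leftmost point is simply $X_{r}-2R_{s}$.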
$R_{s}$ and $\delta_{s}$ may not accurately describe the shadow of some irregular black holes, as they require the shadow to have a certain symmetry. To characterize a shadow of arbitrary shape, Kumar and Ghosh proposed another two observables, the shadow area $A$ and the oblateness $D$, which are defined as Kumar:2018ple

$\displaystyle A=2\int Y{(r_{p})}dX{(r_{p})}=2\int^{r_{p}^{max}}_{r_{p}^{min}}\left(Y{(r_{p})}\frac{dX{(r_{p})}}{d{r_{p}}}\right)d{r_{p}},~{}~{}~{}~{}~{}D=\frac{X_{r}-X_{l}}{Y_{t}-Y_{b}}.$ (27)

It was found in Tsupko:2017rdo that $D=1$ for the Schwarzschild black hole and $\sqrt{3}/{2}\leq D<1$ for the Kerr black hole viewed by an equatorial observer, where $D=\sqrt{3}/{2}$ corresponds to the extremal case. In FIGs. 14 and 15, we show the density plots of $A$ and $D$ for the shadow of the charged rotating black hole in conformal gravity. The area $A$ decreases monotonically as $\beta$ increases. The influence of $a$ and $\vartheta_{o}$ is enlarged in the right plot of FIG. 13, which shows that the area slightly decreases as the spin increases, while the effect of $\vartheta_{o}$ is negligible. As $\beta$ increases, the oblateness $D$ becomes smaller, and this effect is significant near the extremal case. In addition, as $a$ or $\vartheta_{o}$ increases with the other fixed, $D$ also tends to decrease. The above analysis also implies that the shadow of the charged rotating black hole in conformal gravity is smaller and more distorted than that of the Kerr black hole, which matches our aforementioned finding.

Figure 14: The density plots of the shadow area $A$. Here we fix $\vartheta_{o}=\pi/2$ in the left plot and $a=0.4$ in the right plot.

Figure 15: The density plots of the oblateness $D$. Here we fix $\vartheta_{o}=\pi/2$ in the left plot and $a=0.4$ in the right plot.

So far, we have explored how the black hole parameters leave imprints on the two pairs of shadow observables, i.e. $(R_{s},\delta_{s})$ and $(A,D)$. With given values of $(R_{s},\delta_{s})$ or $(A,D)$, we can then find their contour intersection in the $(a,\beta)$ parameter plane to estimate the parameters of the charged rotating black hole in conformal gravity. This method of black hole parameter estimation from shadow observables has been implemented in Hioki:2009na ; Kumar:2018ple ; Afrin:2021imp ; Afrin:2021wlj . Here, we fix $\vartheta_{o}=\pi/2$ and show the contour plots of $R_{s}$ and $\delta_{s}$ as well as $A$ and $D$ in FIG. 16, in which the intersection point of the $R_{s}$ ($A$) and $\delta_{s}$ ($D$) contours uniquely determines the black hole parameters $a$ and $\beta$.

Figure 16: LEFT: the contour plot for the shadow observables $R_{s}$ (red) and $\delta_{s}$ (black) in the parameter plane $(a,\beta)$ of the charged rotating black hole in conformal gravity. RIGHT: the contour plot for the shadow observables area $A$ (red) and oblateness $D$ (black).

### V.2 Energy emission rate

Apart from being used to estimate the model parameters, the shadow observables are also helpful in predicting various interesting astronomical phenomena Kumar:2018ple ; Kumar:2020owy ; Afrin:2021imp . In this subsection, we shall analyze the energy emission rate for the charged rotating black hole in conformal gravity using the shadow observables. For an observer at infinite distance, the shadow of a spherically symmetric black hole coincides with the high-energy absorption cross section, which oscillates around a constant limiting value $\delta_{lim}$.
It was shown in Wei:2013kza that $\delta_{lim}$ is connected with the black hole shadow via $\delta_{lim}\approx\pi R_{s}^{2}$ (28) with $R_{s}$ defined in (26), hence the energy emission rate for a rotating black hole can be calculated as $\frac{d^{2}E(\varpi)}{d\varpi dt}=\frac{2\pi^{2}R_{s}^{2}}{e^{\varpi/T}-1}\varpi^{3}$ (29) where $\varpi$ is the photon frequency and $T$ is the Hawking temperature at the event horizon of the black hole. The energy emission rate in this proposal has been widely studied in GR and MoG. We now discuss the energy emission rate for the charged rotating black hole (2), whose Hawking temperature is $T=\frac{-3a^{4}+r_{+}^{4}(3+4\beta)+\sqrt{3}B(r_{+}^{2}-a^{2})}{4\pi r_{+}^{2}(a^{2}+r_{+}^{2})(3a^{2}+3r_{+}^{2}+\sqrt{3}B)},$ (30) with $B=3a^{4}+6a^{2}r_{+}^{2}+r_{+}^{4}(3+4\beta)$. In FIG. 17, we present the behavior of the energy emission rate as a function of photon frequency. The left and middle plots show that the peak of the emission rate decreases as either $\beta$ or $a$ increases and that the peak shifts to lower frequencies, while the right plot shows that the inclination angle has the opposite effect on the emission rate. Figure 17: The distribution of the energy emission rate in terms of the photon frequency $\varpi$ with various values of the parameters $\beta$, $\vartheta_{o}$ and $a$. ## VI Constraints from EHT observations of M87* The black hole image of M87* photographed by the EHT is crescent shaped, and its deviation from circularity, in terms of the root-mean-square distance from the average radius of the shadow, is $\Delta C\lesssim 0.1$. The axis ratio is $1<D_{x}\lesssim 4/3$, while the angular diameter is $\theta_{d}=42\pm 3\mu as$ EventHorizonTelescope:2019dse ; EventHorizonTelescope:2019ths ; EventHorizonTelescope:2019pgp . The preliminary analysis of the image of M87* by the EHT collaboration assumes a Kerr black hole whose parameters are constrained by the above observations, but the results cannot rule out alternative black holes in GR or rotating black holes in MoG. Thus, the shadow observables $\Delta C$, $D_{x}$ and $\theta_{d}$ could also be used to constrain the parameters of black holes in MoGs, and some attempts can be seen in Cunha:2019ikd ; EventHorizonTelescope:2021dqv ; Khodadi:2020jij ; Bambi:2019tjh ; Afrin:2021imp ; Kumar:2019pjp ; Ghosh:2020spb ; Afrin:2021wlj ; Jha:2021bue ; Khodadi:2021gbc . In this section, we suppose that M87* is a charged rotating black hole in conformal gravity and use the EHT observations to constrain the parameters $a$ and $\beta$. To this end, we shall first review the definitions of $\Delta C$, $D_{x}$ and $\theta_{d}$, and show their density plots in the parameter space $(a,\beta)$. To describe the circularity deviation $\Delta C$, we recall from subsection V.1 that the distorted black hole shadow is always compared with a reference circle. 
The geometric center of the shadow $(X_{c},Y_{c})$ is related to the edges of the shadow boundary via $({X_{c}=\frac{X_{r}+X_{l}}{2}},Y_{c}=0)$, and with this point as the origin, the boundary of a black hole shadow can be described by the polar coordinates $(\phi,R(\phi))$ where $\begin{split}\phi=\tan^{-1}\left(\frac{Y-Y_{C}}{X-X_{c}}\right),~{}~{}~{}~{}R(\phi)=\sqrt{(X-X_{c})^{2}+(Y-Y_{c})^{2}},\end{split}$ (31) while the average radius of the shadow is $\bar{R}=\frac{1}{2\pi}\int^{2\pi}_{0}R(\phi)d\phi.$ (32) Then the circularity deviation $\Delta C$, which measures the deviation from a perfect circle, is defined by Afrin:2021imp ${\Delta C=\frac{1}{\bar{R}}\sqrt{\frac{1}{2\pi}\int^{2\pi}_{0}(R(\phi)-\bar{R})^{2}d\phi}.}$ (33) The axis ratio is given by Banerjee:2019nnj $D_{x}=\frac{1}{D}=\frac{Y_{t}-Y_{b}}{X_{r}-X_{l}},$ (34) where the oblateness $D$ has been defined in (27). In fact, $D_{x}$ can be seen as another way of quantifying the circularity deviation, since the emission ring reconstructed in the EHT images is close to circular with an axial ratio $\lesssim 4:3$, which indeed also corresponds to $\Delta C\lesssim 0.1$ EventHorizonTelescope:2019dse . Another observable from the EHT collaboration is the angular diameter of the shadow, which is defined as Kumar:2020owy $\theta_{d}=2\frac{R_{a}}{d}\,$ (35) where $R_{a}=\sqrt{\frac{A}{\pi}}$, with $A$ defined in (27), is known as the shadow areal radius and $d$ is the distance of M87* from the Earth. It is obvious from the formulas (33), (34) and (35) that $\Delta C$, $D_{x}$ and $\theta_{d}$ depend on the black hole parameters. Assuming that M87* is the present charged rotating black hole in conformal gravity, we can evaluate these observables for the metric (2) and use the EHT observations $\Delta C\lesssim 0.1$, $D_{x}\in(1,4/3]$ and $\theta_{d}\in[39,45]\mu as$ to constrain the parameters $a$ and $\beta$. In addition, we know that the shadow is maximally deformed at the large inclination angle $\vartheta_{o}=\pi/2=90^{\circ}$, while the inclination angle (with respect to the line of sight) is estimated to be $\vartheta_{o}=17^{\circ}$ for the M87* image when the orientation of the relativistic jets is taken into account CraigWalker:2018vam . We shall therefore show our computational results for both $\vartheta_{o}=90^{\circ}$ and $\vartheta_{o}=17^{\circ}$. We give the density plots of the circularity deviation $\Delta C$ in FIG. 18, which shows that the shadows of the charged rotating black hole in conformal gravity satisfy $\Delta C\lesssim 0.1$ for all theoretically allowed parameters. Moreover, we also show the density plots of $D_{x}$ in FIG. 19. We see that for the entire parameter space, the axial ratio is within the observational constraint $D_{x}\in(1,4/3]$, which is consistent with the conclusion from $\Delta C\lesssim 0.1$, as we expect. In addition, in order to compare with $D_{x}\in(1,2\sqrt{3}/3]$ for the Kerr black hole, we also show the contour $D_{x}=2\sqrt{3}/3$ in the calculation. For $\vartheta_{o}=90^{\circ}$, we see that in the current background, although all parameters satisfy $D_{x}<4/3$, there is still some parameter space with $D_{x}>2\sqrt{3}/3$. This means that if the EHT measurements are improved in the future, an observation of $2\sqrt{3}/3<D_{x}<4/3$ could even rule out a Kerr black hole at the center, while the present charged rotating black hole in conformal gravity would remain a candidate. Nevertheless, for $\vartheta_{o}=17^{\circ}$, all the parameters give $1<D_{x}\leq 2\sqrt{3}/3$, so one cannot distinguish GR from conformal gravity in this case. 
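To make Eqs. (31)-(35) concrete, the following Python/NumPy sketch evaluates $\Delta C$, $D_{x}$ and the areal radius $R_{a}$ (and hence $\theta_{d}=2R_{a}/d$) from a sampled shadow boundary. The angular averages are done with a simple trapezoidal rule and the area with a shoelace formula; the function name, sample curve and units are illustrative assumptions rather than the code used for our figures.

```python
import numpy as np

def eht_observables(X, Y):
    """Delta C (Eq. 33), axis ratio D_x (Eq. 34), and areal radius R_a = sqrt(A/pi)
    (used in theta_d = 2 R_a / d, Eq. 35) from an ordered, closed shadow boundary."""
    X_c = 0.5 * (X.max() + X.min())                   # geometric center, Y_c = 0
    phi = np.arctan2(Y, X - X_c)
    R = np.hypot(X - X_c, Y)
    order = np.argsort(phi)
    phi_ord, R_ord = phi[order], R[order]
    phi_ord = np.append(phi_ord, phi_ord[0] + 2.0 * np.pi)  # close the angular integral
    R_ord = np.append(R_ord, R_ord[0])
    w = np.diff(phi_ord)
    R_mid = 0.5 * (R_ord[1:] + R_ord[:-1])
    R_bar = np.sum(R_mid * w) / (2.0 * np.pi)                           # Eq. (32)
    dC = np.sqrt(np.sum((R_mid - R_bar) ** 2 * w) / (2.0 * np.pi)) / R_bar
    D_x = (Y.max() - Y.min()) / (X.max() - X.min())
    A = 0.5 * abs(np.sum(X * np.roll(Y, -1) - Y * np.roll(X, -1)))      # shoelace area
    return dC, D_x, np.sqrt(A / np.pi)

# Example on a slightly flattened closed curve (lengths in units of the BH mass).
t = np.linspace(0.0, 2.0 * np.pi, 4000, endpoint=False)
print(eht_observables(4.8 * np.cos(t), 5.0 * np.sin(t)))
```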
Figure 18: The density plots of the circularity deviation $\Delta C$. The left plot is for $\vartheta_{o}=90^{\circ}$ while the right plot is for $\vartheta_{o}=17^{\circ}$. Figure 19: The density plots of the axial ratio $D_{x}$. The left plot is for $\vartheta_{o}=90^{\circ}$ while the right plot is for $\vartheta_{o}=17^{\circ}$. The black curve in the left plot denotes the $D_{x}=2\sqrt{3}/3$ contour, which is the upper bound for the Kerr black hole. In FIG. 20, we present the density plots of $\theta_{d}$ for the charged rotating black hole in conformal gravity. In the calculation, we set $d=16.8\,Mpc$ and the black hole mass $m=6.5\times 10^{9}M_{\odot}$, as estimated by the EHT collaboration. The enlarged plots in the right panel clearly show that only the parameter space at the left corner enclosed by the $\theta_{d}=39\mu as$ contour (the black curve) is consistent with the EHT observations of M87*, indicating that $\theta_{d}$ gives upper limits on both $a$ and $\beta$ for the charged rotating black hole in conformal gravity (2). Moreover, it is not difficult to find that the constraint on $a$ at $\vartheta_{o}=17^{\circ}$ is stricter than that at $\vartheta_{o}=90^{\circ}$, but the difference in their effects on $\beta$ is slight. Figure 20: The density plots of the angular diameter $\theta_{d}$. The upper panels are for $\vartheta_{o}=90^{\circ}$ while the bottom panels are for $\vartheta_{o}=17^{\circ}$. The black curves in the right enlarged plots correspond to $\theta_{d}=39~{}\mu as$. ## VII Closing remarks The published EHT observations of the black hole image are consistent with those for the Kerr black hole predicted by GR, but the current experimental results cannot rule out alternatives to the Kerr black hole or other theories of gravity. In this paper, we considered a charged rotating black hole in conformal gravity, which has remarkable applications in cosmological and holographic frameworks. The charge-related term in the present black hole has a different falloff from that in the KN black hole, so it exhibits different configurations. The charge parameter $\beta$ decreases the size of both the Cauchy and event horizons, a tendency similar to that of the KN black hole but with a different slope. Also, the size of the event horizon in the extremal case decreases as the charge parameter increases, in contrast to the KN black hole, where it is independent of the charge. Moreover, the falloff term also influences the static limit surfaces, the ergoregions, the causality violating regions and the photon regions, as we explicitly presented in FIG. 4-5. We then determined the shadow boundary of the black hole in various cases for observers at both finite and infinite distances. The effects of the spin parameter, the charge parameter, the inclination angle and the distance on the shadow shape can be clearly seen in FIG. 7-10, and they are qualitatively similar to those of the Kerr or KN black hole Perlick:2004tq ; Hasse:2005vu ; Grenzebach:2014fha . Focusing on the shadow cast for an observer at infinity, we then systematically analyzed the shadow observables that characterize the shadow size and shape, namely the shadow radius $R_{s}$, the distortion $\delta_{s}$, the shadow area $A$ and the oblateness $D$. It was found that, compared with the Kerr black hole, the black hole shadow becomes smaller and more distorted as the charge parameter increases. Our analysis also indicates that the shadow observables could be used to estimate the parameters $(a,\beta)$ of the charged rotating black hole in conformal gravity. 
Finally, we treated M87* observed in the EHT experiment as the present charged rotating black hole in conformal gravity, and used the EHT constraints on the circularity deviation $\Delta C$, the axial ratio $D_{x}$ and the angular diameter $\theta_{d}$ to constrain the black hole parameters. For both inclination angles $\vartheta_{o}=90^{\circ}$ and $\vartheta_{o}=17^{\circ}$, the entire $(a,\beta)$ space satisfies $\Delta C\lesssim 0.1$ and $1<D_{x}\lesssim 4/3$. It is worthwhile to point out that for $\vartheta_{o}=90^{\circ}$, some of the parameter space gives $2\sqrt{3}/3<D_{x}<4/3$, where $D_{x}=2\sqrt{3}/3$ is the upper bound for the Kerr black hole, while for $\vartheta_{o}=17^{\circ}$, all the parameters give $1<D_{x}<2\sqrt{3}/3$, so one cannot distinguish GR from conformal gravity in this case. The constraint $39~{}\mu as\leq\theta_{d}\leq 45~{}\mu as$ gives upper bounds on both $a$ and $\beta$ and restricts the parameter space to a small region. To conclude, for a wide range of parameter points $(a,\beta)$, the shadows of the charged rotating black hole are consistent with the EHT observations of M87*. Our findings indicate that charged rotating black holes in conformal gravity with those parameters could be candidates for astrophysical black holes. Moreover, for the equatorial observer, the EHT constraint on the axial ratio $D_{x}$ could help distinguish the Kerr black hole from the present charged rotating black hole in conformal gravity in some of the parameter space. ###### Acknowledgements. We thank Xi-Jing Wang for helpful discussions. This work is partly supported by Fok Ying Tung Education Foundation under Grant No. 171006 and Natural Science Foundation of Jiangsu Province under Grant No. BK20211601. ## References * (1) Bardeen, J. M., “Les Houches Summer School of Theoretical Physics: Black Holes,” Gordon and Breach Science Publishers, Inc., United States. (1973): 215-240 * (2) J. L. Synge, “The Escape of Photons from Gravitationally Intense Stars,” Mon. Not. Roy. Astron. Soc. 131 (1966) no.3, 463-466 * (3) H. Falcke, F. Melia and E. Agol, “Viewing the shadow of the black hole at the galactic center,” Astrophys. J. Lett. 528 (2000), L13 [arXiv:astro-ph/9912263 [astro-ph]]. * (4) K. S. Virbhadra and G. F. R. Ellis, “Schwarzschild black hole lensing,” Phys. Rev. D 62 (2000), 084003 [arXiv:astro-ph/9904193 [astro-ph]]. * (5) Z. Q. Shen, K. Y. Lo, M. C. Liang, P. T. P. Ho and J. H. Zhao, “A size of ~1 au for the radio source sgr a* at the centre of the milky way,” Nature 438 (2005), 62 [arXiv:astro-ph/0512515 [astro-ph]]. * (6) Z. Younsi, A. Zhidenko, L. Rezzolla, R. Konoplya and Y. Mizuno, “New method for shadow calculations: Application to parametrized axisymmetric black holes,” Phys. Rev. D 94 (2016) no.8, 084025 [arXiv:1607.05767 [gr-qc]]. * (7) F. Atamurotov, A. Abdujabbarov and B. Ahmedov, “Shadow of rotating non-Kerr black hole,” Phys. Rev. D 88 (2013) no.6, 064004 * (8) F. Atamurotov, S. G. Ghosh and B. Ahmedov, “Horizon structure of rotating Einstein–Born–Infeld black holes and shadow,” Eur. Phys. J. C 76 (2016) no.5, 273 [arXiv:1506.03690 [gr-qc]]. * (9) M. Amir, B. P. Singh and S. G. Ghosh, “Shadows of rotating five-dimensional charged EMCS black holes,” Eur. Phys. J. C 78 (2018) no.5, 399 [arXiv:1707.09521 [gr-qc]]. * (10) E. F. Eiroa and C. M. Sendra, “Shadow cast by rotating braneworld black holes with a cosmological constant,” Eur. Phys. J. C 78 (2018) no.2, 91 [arXiv:1711.08380 [gr-qc]]. * (11) S. Vagnozzi and L. Visinelli, “Hunting for extra dimensions in the shadow of M87*,” Phys. Rev. 
D 100 (2019) no.2, 024020 [arXiv:1905.12421 [gr-qc]]. * (12) F. Long, J. Wang, S. Chen and J. Jing, “Shadow of a rotating squashed Kaluza-Klein black hole,” JHEP 10 (2019), 269 [arXiv:1906.04456 [gr-qc]]. * (13) F. Long, S. Chen, M. Wang and J. Jing, “Shadow of a disformal Kerr black hole in quadratic degenerate higher-order scalar–tensor theories,” Eur. Phys. J. C 80 (2020) no.12, 1180 [arXiv:2009.07508 [gr-qc]]. * (14) I. Banerjee, S. Chakraborty and S. SenGupta, “Silhouette of M87*: A New Window to Peek into the World of Hidden Dimensions,” Phys. Rev. D 101 (2020) no.4, 041301 [arXiv:1909.09385 [gr-qc]]. * (15) A. K. Mishra, S. Chakraborty and S. Sarkar, “Understanding photon sphere and black hole shadow in dynamically evolving spacetimes,” Phys. Rev. D 99 (2019) no.10, 104080 [arXiv:1903.06376 [gr-qc]]. * (16) R. Kumar, S. G. Ghosh and A. Wang, “Gravitational deflection of light and shadow cast by rotating Kalb-Ramond black holes,” Phys. Rev. D 101 (2020) no.10, 104001 [arXiv:2001.00460 [gr-qc]]. * (17) W. L. Qian, S. Chen, C. G. Shao, B. Wang and R. H. Yue, “Cuspy and fractured black hole shadows in a toy model with axisymmetry,” Eur. Phys. J. C 82 (2022) no.1, 91 [arXiv:2102.03820 [gr-qc]]. * (18) X. X. Zeng, H. Q. Zhang and H. Zhang, “Shadows and photon spheres with spherical accretions in the four-dimensional Gauss–Bonnet black hole,” Eur. Phys. J. C 80 (2020) no.9, 872 [arXiv:2004.12074 [gr-qc]]. * (19) X. X. Zeng, G. P. Li and K. J. He, “The shadows and observational appearance of a noncommutative black hole surrounded by various profiles of accretions,” Nucl. Phys. B 974 (2022), 115639 [arXiv:2106.14478 [hep-th]]. * (20) F. L. Lin, A. Patel and H. Y. Pu, “Black Hole Shadow with Soft Hair,” [arXiv:2202.13559 [gr-qc]]. * (21) C. Sun, Y. Liu, W. L. Qian and R. Yue, “Shadows of magnetically charged rotating black holes surrounded by quintessence,” [arXiv:2201.01890 [gr-qc]]. * (22) İ. Çimdiker, D. Demir and A. Övgün, “Black hole shadow in symmergent gravity,” Phys. Dark Univ. 34 (2021), 100900 [arXiv:2110.11904 [gr-qc]]. * (23) Z. Zhong, Z. Hu, H. Yan, M. Guo and B. Chen, “QED effects on Kerr black hole shadows immersed in uniform magnetic fields,” Phys. Rev. D 104 (2021) no.10, 104028 [arXiv:2108.06140 [gr-qc]]. * (24) Y. Hou, M. Guo and B. Chen, “Revisiting the shadow of braneworld black holes,” Phys. Rev. D 104 (2021) no.2, 024001 [arXiv:2103.04369 [gr-qc]]. * (25) X. C. Cai and Y. G. Miao, “Can we know about black hole thermodynamics through shadows?,” [arXiv:2107.08352 [gr-qc]]. * (26) Q. Gan, P. Wang, H. Wu and H. Yang, “Photon spheres and spherical accretion image of a hairy black hole,” Phys. Rev. D 104 (2021) no.2, 024003 [arXiv:2104.08703 [gr-qc]]. * (27) Z. Chang and Q. H. Zhu, “The observer-dependent shadow of the Kerr black hole,” JCAP 09 (2021), 003 [arXiv:2104.14221 [gr-qc]]. * (28) M. Wang, S. Chen and J. Jing, “Kerr black hole shadows in Melvin magnetic field with stable photon orbits,” Phys. Rev. D 104 (2021) no.8, 084021 [arXiv:2104.12304 [gr-qc]]. * (29) R. Shaikh, S. Paul, P. Banerjee and T. Sarkar, “Shadows and thin accretion disk images of the $\gamma$-metric,” [arXiv:2105.12057 [gr-qc]]. * (30) H. Guo, H. Liu, X. M. Kuang and B. Wang, “Acoustic black hole in Schwarzschild spacetime: quasi-normal modes, analogous Hawking radiation and shadows,” Phys. Rev. D 102 (2020), 124019 [arXiv:2007.04197 [gr-qc]]; “Shadow and near-horizon characteristics of the acoustic charged black hole in curved spacetime, ”Phys. Rev. 
D 104 (2021) no.10, 104003 [arXiv:2107.05171 [gr-qc]] * (31) K. Hioki and K. i. Maeda, “Measurement of the Kerr Spin Parameter by Observation of a Compact Object’s Shadow,” Phys. Rev. D 80 (2009), 024042 [arXiv:0904.3575 [astro-ph.HE]]. * (32) R. Kumar and S. G. Ghosh, “Black Hole Parameter Estimation from Its Shadow,” Astrophys. J. 892 (2020), 78 [arXiv:1811.01260 [gr-qc]]. * (33) J. Bada and E. F. Eiroa, “Shadow of axisymmetric, stationary, and asymptotically flat black holes in the presence of plasma,” Phys. Rev. D 104 (2021) no.8, 084055 [arXiv:2106.07601 [gr-qc]]. * (34) S. W. Wei and Y. X. Liu, “Observing the shadow of Einstein-Maxwell-Dilaton-Axion black hole,” JCAP 11 (2013), 063 [arXiv:1311.4251 [gr-qc]]. * (35) A. Allahyari, M. Khodadi, S. Vagnozzi and D. F. Mota, “Magnetically charged black holes from non-linear electrodynamics and the Event Horizon Telescope,” JCAP 02 (2020), 003 [arXiv:1912.08231 [gr-qc]]. * (36) O. Y. Tsupko, “Analytical calculation of black hole spin using deformation of the shadow,” Phys. Rev. D 95 (2017) no.10, 104058 [arXiv:1702.04005 [gr-qc]]. * (37) P. V. P. Cunha, C. A. R. Herdeiro and E. Radu, “Spontaneously Scalarized Kerr Black Holes in Extended Scalar-Tensor–Gauss-Bonnet Gravity,” Phys. Rev. Lett. 123 (2019) no.1, 011101 [arXiv:1904.09997 [gr-qc]]. * (38) R. Kumar and S. G. Ghosh, “Rotating black holes in $4D$ Einstein-Gauss-Bonnet gravity and its shadow,” JCAP 07 (2020), 053 [arXiv:2003.08927 [gr-qc]]. * (39) C. Y. Chen, “Rotating black holes without $\mathbb{Z}_{2}$ symmetry and their shadow images,” JCAP 05 (2020), 040 [arXiv:2004.01440 [gr-qc]]. * (40) S. Brahma, C. Y. Chen and D. h. Yeom, “Testing Loop Quantum Gravity from Observational Consequences of Nonsingular Rotating Black Holes,” Phys. Rev. Lett. 126 (2021) no.18, 181301 [arXiv:2012.08785 [gr-qc]]. * (41) A. Belhaj, M. Benali, A. E. Balali, W. E. Hadri and H. El Moumni, “Shadows of Charged and Rotating Black Holes with a Cosmological Constant,” [arXiv:2007.09058 [gr-qc]]. * (42) B. H. Lee, W. Lee and Y. S. Myung, “Shadow cast by a rotating black hole with anisotropic matter,” Phys. Rev. D 103 (2021) no.6, 064026 [arXiv:2101.04862 [gr-qc]]. * (43) J. Badía and E. F. Eiroa, “Influence of an anisotropic matter field on the shadow of a rotating black hole,” Phys. Rev. D 102 (2020) no.2, 024066 [arXiv:2005.03690 [gr-qc]]. * (44) E. Frion, L. Giani and T. Miranda, “Black Hole Shadow Drift and Photon Ring Frequency Drift,” [arXiv:2107.13536 [gr-qc]]. * (45) R. Roy, S. Vagnozzi and L. Visinelli, “Superradiance evolution of black hole shadows revisited,” [arXiv:2112.06932 [astro-ph.HE]]. * (46) M. Afrin, R. Kumar and S. G. Ghosh, “Parameter estimation of hairy Kerr black holes from its shadow and constraints from M87*,” Mon. Not. Roy. Astron. Soc. 504 (2021), 5927-5940 [arXiv:2103.11417 [gr-qc]]. * (47) R. Kumar, S. G. Ghosh and A. Wang, “Shadow cast and deflection of light by charged rotating regular black holes,” Phys. Rev. D 100 (2019) no.12, 124024 [arXiv:1912.05154 [gr-qc]]. * (48) S. K. Jha and A. Rahaman, “Study of shadow and parameter estimation of non-commutative Kerr-like Lorentz violating black holes,” [arXiv:2111.02817 [gr-qc]]. * (49) S. G. Ghosh, R. Kumar and S. U. Islam, “Parameters estimation and strong gravitational lensing of nonsingular Kerr-Sen black holes,” JCAP 03 (2021), 056 [arXiv:2011.08023 [gr-qc]]. * (50) C. Bambi, K. Freese, S. Vagnozzi and L. 
Visinelli, “Testing the rotational nature of the supermassive object M87* from the circularity and size of its first image,” Phys. Rev. D 100 (2019) no.4, 044057 [arXiv:1904.12983 [gr-qc]]. * (51) M. Khodadi, G. Lambiase and D. F. Mota, “No-hair theorem in the wake of Event Horizon Telescope,” JCAP 09 (2021), 028 [arXiv:2107.00834 [gr-qc]]. * (52) M. Afrin and S. G. Ghosh, “Constraining rotating black holes in Horndeski theory with EHT observations of M87*,” [arXiv:2110.05258 [gr-qc]]. * (53) P. V. P. Cunha and C. A. R. Herdeiro, “Shadows and strong gravitational lensing: a brief review,” Gen. Rel. Grav. 50 (2018) no.4, 42 [arXiv:1801.00860 [gr-qc]]. * (54) V. Perlick and O. Y. Tsupko, “Calculating black hole shadows: Review of analytical studies,” Phys. Rept. 947 (2022), 1-39 [arXiv:2105.07101 [gr-qc]]. * (55) K. Akiyama et al. [Event Horizon Telescope], “First M87 Event Horizon Telescope Results. I. The Shadow of the Supermassive Black Hole,” Astrophys. J. Lett. 875 (2019), L1 [arXiv:1906.11238 [astro-ph.GA]]. * (56) K. Akiyama et al. [Event Horizon Telescope], “First M87 Event Horizon Telescope Results. IV. Imaging the Central Supermassive Black Hole,” Astrophys. J. Lett. 875 (2019) no.1, L4 [arXiv:1906.11241 [astro-ph.GA]]. * (57) K. Akiyama et al. [Event Horizon Telescope], “First M87 Event Horizon Telescope Results. V. Physical Origin of the Asymmetric Ring,” Astrophys. J. Lett. 875 (2019) no.1, L5 [arXiv:1906.11242 [astro-ph.GA]]. * (58) P. Kocherlakota et al. [Event Horizon Telescope], “Constraints on black-hole charges with the 2017 EHT observations of M87*,” Phys. Rev. D 103 (2021) no.10, 104047 doi:10.1103/PhysRevD.103.104047 [arXiv:2105.09343 [gr-qc]]. * (59) P. V. P. Cunha, C. A. R. Herdeiro and E. Radu, “EHT constraint on the ultralight scalar hair of the M87 supermassive black hole,” Universe 5 (2019) no.12, 220 [arXiv:1909.08039 [gr-qc]]. * (60) M. Khodadi, A. Allahyari, S. Vagnozzi and D. F. Mota, “Black holes with scalar hair in light of the Event Horizon Telescope,” JCAP 09 (2020), 026 [arXiv:2005.05992 [gr-qc]]. * (61) H. S. Liu and H. Lu, “Charged Rotating AdS Black Hole and Its Thermodynamics in Conformal Gravity,” JHEP 02 (2013), 139 [arXiv:1212.6264 [hep-th]]. * (62) H. Weyl, “Reine Infinitesimalgeometrie,” Math. Z. 2 (1918) no.3-4, 384-411 * (63) P. D. Mannheim and D. Kazanas, “Exact Vacuum Solution to Conformal Weyl Gravity and Galactic Rotation Curves,” Astrophys. J. 342 (1989), 635-638 * (64) G. U. Varieschi, “A Kinematical Approach to Conformal Cosmology,” Gen. Rel. Grav. 42 (2010), 929-974 [arXiv:0809.4729 [gr-qc]]. * (65) G. ’t Hooft, “Probing the small distance structure of canonical quantum gravity using the conformal group,” [arXiv:1009.0669 [gr-qc]]. * (66) G. ’t Hooft, “Local Conformal Symmetry: the Missing Symmetry Component for Space and Time,” [arXiv:1410.6675 [gr-qc]]. * (67) C. M. Bender and P. D. Mannheim, “No-ghost theorem for the fourth-order derivative Pais-Uhlenbeck oscillator model,” Phys. Rev. Lett. 100 (2008), 110402 [arXiv:0706.0207 [hep-th]]. * (68) P. D. Mannheim and A. Davidson, “Fourth order theories without ghosts,” [arXiv:hep-th/0001115 [hep-th]]. * (69) P. D. Mannheim, “Making the Case for Conformal Gravity,” Found. Phys. 42 (2012), 388-420 [arXiv:1101.2186 [hep-th]]. * (70) J. Maldacena, “Einstein Gravity from Conformal Gravity,” [arXiv:1105.5632 [hep-th]]. * (71) J. R. Mureika and G. U. Varieschi, “Black hole shadows in fourth-order conformal Weyl gravity,” Can. J. Phys. 95 (2017) no.12, 1299-1306 [arXiv:1611.00399 [gr-qc]]. 
* (72) B. Carter, “Global structure of the Kerr family of gravitational fields,” Phys. Rev. 174 (1968), 1559-1571 * (73) A. Grenzebach, V. Perlick and C. Lämmerzahl, “Photon Regions and Shadows of Kerr-Newman-NUT Black Holes with a Cosmological Constant,” Phys. Rev. D 89 (2014) no.12, 124004 [arXiv:1403.5234 [gr-qc]]. * (74) N. Tsukamoto, “Black hole shadow in an asymptotically-flat, stationary, and axisymmetric spacetime: The Kerr-Newman and rotating regular black holes,” Phys. Rev. D 97 (2018) no.6, 064021 [arXiv:1708.07427 [gr-qc]]. * (75) S. V. M. C. B. Xavier, P. Cunha, V.P., L. C. B. Crispino and C. A. R. Herdeiro, “Shadows of charged rotating black holes: Kerr–Newman versus Kerr–Sen,” Int. J. Mod. Phys. D 29 (2020) no.11, 2041005 [arXiv:2003.14349 [gr-qc]]. * (76) C. T. Cunningham, and J. M. Bardeen, “The optical appearance of a star orbiting an extreme Kerr black hole,” Astrophy. J. Lett. 173 (1972) L137 * (77) P. V. P. Cunha, C. A. R. Herdeiro, E. Radu and H. F. Runarsson, “Shadows of Kerr black holes with scalar hair,” Phys. Rev. Lett. 115 (2015) no.21, 211102 [arXiv:1509.00021 [gr-qc]]. * (78) M. Fathi, M. Olivares and J. R. Villanueva, “Ergosphere, Photon Region Structure, and the Shadow of a Rotating Charged Weyl Black Hole,” Galaxies 9 (2021) no.2, 43 [arXiv:2011.04508 [gr-qc]]. * (79) S. G. Ghosh, M. Amir and S. D. Maharaj, “Ergosphere and shadow of a rotating regular black hole,” Nucl. Phys. B 957 (2020), 115088 [arXiv:2006.07570 [gr-qc]]. * (80) R. Craig Walker, P. E. Hardee, F. B. Davies, C. Ly and W. Junor, “The Structure and Dynamics of the Subparsec Jet in M87 Based on 50 VLBA Observations over 17 Years at 43 GHz,” Astrophys. J. 855 (2018) no.2, 128 [arXiv:1802.06166 [astro-ph.HE]]. * (81) V. Perlick, “Gravitational lensing from a spacetime perspective,” Living Rev. Rel. 7 (2004), 9 * (82) W. Hasse and V. Perlick, “A Morse-theoretical analysis of gravitational lensing by a Kerr-Newman black hole,” J. Math. Phys. 47 (2006), 042503 [arXiv:gr-qc/0511135 [gr-qc]].
# The Impact of Initialization on LoRA Finetuning Dynamics Soufiane Hayou Simons Institute UC Berkeley <EMAIL_ADDRESS> &Nikhil Ghosh Dept of Statistics UC Berkeley <EMAIL_ADDRESS> &Bin Yu Dept of Statistics UC Berkeley <EMAIL_ADDRESS> ###### Abstract In this paper, we study the role of initialization in Low Rank Adaptation (LoRA) as originally introduced in [19]. Essentially, to start from the pretrained model as initialization for finetuning, one can either initialize $B$ to zero and $A$ to random (the default initialization in the PEFT package), or vice-versa. In both cases, the product $BA$ is equal to zero at initialization, which ensures that finetuning _starts_ from the pretrained model. These two initialization schemes are seemingly similar. They should in principle yield the same performance and share the same optimal learning rate. We demonstrate that this is an _incorrect intuition_ and that the first scheme (initializing $B$ to zero and $A$ to random) on average yields better performance compared to the other scheme. Our theoretical analysis shows that the reason behind this might be that the first initialization allows the use of larger learning rates (without causing output instability) compared to the second initialization, resulting in more efficient learning with the first scheme. We validate our results with extensive experiments on LLMs. ## 1 Introduction One of the most important paradigm shifts in deep learning has been to embrace the pretrain-finetune paradigm (e.g., [7, 9]) in order to solve many real world tasks. Previously, to solve a specific task, a custom model would typically be trained from scratch on purely task-relevant data. Nowadays, however, it is standard to instead finetune an already pretrained base model on the specific task required. The base pretrained model is trained on a generic unsupervised objective in order to learn powerful and general features which can be rapidly adapted to the downstream task, greatly accelerating the speed of learning and reducing the number of training samples needed compared to training from scratch. In this paradigm, one of the clearest empirical trends has been that the most performant models are obtained at the largest scales [14, 25], with state-of-the-art models of hundreds of billions of parameters. Due to the immense cost of training such models, only a few industry labs can pretrain large models from scratch. Many of these pretrained models are accessible through open-source platforms (e.g., Llama by [38]) and practitioners are interested in finetuning such models for specific tasks. However, due to their size, adapting such models to downstream tasks with full finetuning (updating all model parameters) is computationally infeasible for most practitioners who lack considerable computational resources. However, since pretrained models already learn representations useful for finetuning, in principle a significant adaptation of all parameters should not usually be required. To realize this intuition, researchers have proposed a variety of parameter-efficient finetuning methods that typically freeze a bulk of the pretrained weights and tune only a small set of (possibly newly initialized) parameters. Such methods include the adapters method [11], where lightweight “adapter” layers are inserted and trained, prompt tuning [20], where a “soft prompt” is learned and appended to the input, and $(IA)^{3}$ [24], where activation vectors are modified with learned scalings. 
One of the most popular and effective such parameter-efficient finetuning methods is known as _Low Rank Adaptation_ [19], abbreviated as LoRA. In LoRA finetuning, for a given layer, only a low-rank matrix called an adapter, which is added to the pretrained weights, is trainable. The training can be done with any optimizer, but the common choice in practice is Adam [3]. Since the trained adapter is low-rank, LoRA significantly reduces the number of trainable parameters in the finetuning process compared with full finetuning. On many tasks such as instruction finetuning, LoRA has been shown to achieve comparable or better performance than full finetuning [39, 35], although there are cases, such as complicated and long-form generation tasks, where it is not always as performant. The generally high performance level and the computational savings of LoRA have contributed to it becoming a standard finetuning method. Just as in all neural network training scenarios, efficient use of LoRA requires a careful choice of multiple hyperparameters such as the rank, the learning rate, and the choice of initialization. Although there has been prior work investigating the rank [31] and learning rate [44] hyperparameters, there has been limited investigation into the initialization scheme used for vanilla LoRA. In this work we focus on the question of initialization. Through experimental verification and theoretical insights, we justify the use of a particular initialization choice over the a priori equally natural alternative. ##### Related Work. In standard LoRA training, one of the two LoRA matrices is initialized with random values and the other is initialized to zero (see Section 2.1). Recently, in [48] the authors proposed an alternative initialization scheme for LoRA which uses the top singular vectors of the pretrained weights as opposed to a random initialization and showed improved training on several tasks. To further improve LoRA training with quantization, [34] introduced a new method called LoftQ for computing a better initialization for quantized training [27]. However, to the best of our knowledge, there has not been any study concerning the random initialization in vanilla LoRA. Specifically, it is not clear from prior work _which of the two LoRA matrices should be initialized to be zero_. Empirical results by [50] suggested that the two initialization schemes mentioned above yield similar performance, but it is not clear if the learning rate was well-tuned for each initialization scheme. Our findings suggest that these two initialization schemes lead to fundamentally different finetuning dynamics, and that one of these schemes generally yields better results compared to the other. ##### LoRA Variations. We remark that beyond altering the LoRA initialization scheme, there has been a series of works which try to address limitations of vanilla LoRA using different variations. To further reduce the number of trainable parameters, LoRA-FA [42] freezes the $A$ matrix, which leads to a small performance loss while reducing memory consumption by up to 1.4$\times$. The performance of this training scheme is also investigated in [50]. VeRA [33] freezes random weight-tied adapters and learns vector scalings of the internal adapter activations. LoRA-XS [43] initializes the $A$ and $B$ matrices using the SVD of the pretrained weights and trains a low-rank update of the form $BRA$, where $R$ is a trainable $r\times r$ matrix and $B$, $A$ are fixed. 
NOLA [32] parametrizes the adapter matrices as linear combinations of frozen random matrices and optimizes the linear coefficients of the mixtures. VB-LORA [46] shares adapter parameters using a global vector bank. In order to improve the learning ability for more challenging finetuning tasks, [31] proposes a scaling rule for the scalar adapter multiplier to unlock increased gains with higher adapter ranks. MoRA [45] learns high-rank updates while still preserving parameter efficiency by applying hand-designed compression and decompression operations before and after a trainable adapter matrix. DoRA [47] decomposes the pretrained weight into magnitude and direction components to allow for better training dynamics. Figure 1: Summary of our contributions in this paper: a description of the difference between the finetuning dynamics when the LoRA weights $A$ and $B$ are initialized with Init[A] or Init[B]. ##### Contributions. In this paper, we study the impact of different random initialization schemes for LoRA adapters through a theory of large width for neural networks. There is a large literature on the scaling of neural networks from the infinite-width perspective. The core approach is to take the width of a neural network to infinity and determine how the behavior of the limit depends on the choice of hyperparameters such as the learning rate and initialization variance. This approach allows one to derive principled scaling choices for these hyperparameters such that desired goals (e.g. stable feature learning) are achieved as the network size approaches the limit (see Section A.2 for more details). Examples in the infinite-width limit include works on initialization schemes such as [4] and on training dynamics [21]. Examples for the depth limit include initialization strategies [6, 30, 10] and depth scaling (see e.g. [18, 28, 29, 37, 41, 23]). A similar strategy was used to derive scaling rules for the LoRA learning rate in [44] (LoRA$+$), which concluded that the learning rates for the different LoRA matrices should be scaled differently to ensure optimal feature learning. In this work we use the same approach to provide a systematic comparison between two different random initialization schemes for vanilla LoRA finetuning (using the same learning rate for the $A$ and $B$ matrices). Using the notation Init[A] to refer to the case where $A$ is initialized to random and $B$ to zero (as in [19]) and Init[B] for the opposite, we show that Init[A] and Init[B] lead to fundamentally different training dynamics (as shown in Figure 1): 1. 1. Init[A] allows the use of larger learning rates compared to Init[B]. 2. 2. Init[A] can lead to a form of ‘internal instability’ where the features $Az$ (for some input $z$) are large but the LoRA output $BAz$ is small. This form of instability allows more efficient feature learning. We identify a _feature learning / stability tradeoff_ in this case and support it with empirical results. 3. 3. Init[B] does not cause any instabilities, but training is suboptimal in this case (the matrix $B$ is undertrained). 4. 4. Empirical results confirm the theory and show that Init[A] generally leads to better performance than Init[B]. 
## 2 Setup and Definitions We consider a general neural network model of the form $\begin{cases}Y_{in}(x)=W_{in}x,\\\ Y_{l}(x)=\mathcal{F}_{l}(W_{l},Y_{l-1}(x)),\;l\in[L],\\\ Y_{out}(x)=W_{out}Y_{L}(x),\end{cases}$ (1) where $x\in\mathbb{R}^{d}$ is the input, $L\geq 1$ is the network depth, $(\mathcal{F}_{l})_{l\in[L]}$ are mappings that define the layers, and $W_{l}\in\mathbb{R}^{n\times n}$ are the hidden weights, where $n$ is the network width, and $W_{in},W_{out}$ are input and output embedding weights (we use the same notation as in [44]). This model will represent the pretrained model that will later be finetuned on some new task. To finetune a (large) pretrained model with a limited amount of computational resources, a popular resource-efficient approach is to use the LoRA finetuning method defined below. ###### Definition 1 (Low Rank Adapters (LoRA) from [19]). To apply LoRA to a weight matrix $W\in\mathbb{R}^{n_{1}\times n_{2}}$ in the model, we constrain its update in the fine-tuning process by representing the latter with a low-rank decomposition $W=W^{*}+\frac{\alpha}{r}BA$. Here, only the weight matrices $B\in\mathbb{R}^{n_{1}\times r}$, $A\in\mathbb{R}^{r\times n_{2}}$ are trainable and the original pretrained weights $W^{*}$ remain frozen. The rank $r\ll\min(n_{1},n_{2})$ and $\alpha\in\mathbb{R}$ are tunable constants. As the width $n$ grows (the width in SOTA models is typically large, i.e. $n>10^{3}$), the network initialization scheme and the learning rate should be adapted to avoid numerical instabilities and ensure efficient learning. For instance, the variance of the initialization weights (in hidden layers) should scale like $1/n$ to prevent the pre-activations from blowing up as we increase the model width $n$ (e.g., He initialization [4]). To derive proper scaling rules, a principled approach consists of analyzing the statistical properties of key quantities in the model (e.g. the second moment of the pre-activations) as $n$ grows and then adjusting the initialization variance, the learning rate, and the architecture to achieve desirable properties in the limit $n\to\infty$ [10, 5, 13, 40]. We use this approach to study the effect of initialization on the feature learning dynamics of LoRA in the infinite-width limit. For more details about the theory of scaling of neural networks, see Section A.2. Throughout the paper, we will be using asymptotic notation to describe the behaviour of several quantities as the width $n$ grows. Note that the width $n$ will be the only scaling dimension of neural network training which grows; all other scaling dimensions such as the LoRA rank $r$, number of layers $L$, sequence length, number of training steps, etc., will be considered as fixed. We use the following notation for the asymptotic analysis. ##### Notation. Given sequences $c_{n}\in\mathbb{R}$ and $d_{n}\in\mathbb{R}^{+}$, we write $c_{n}=\mathcal{O}(d_{n})$, resp. $c_{n}=\Omega(d_{n})$, to refer to $c_{n}<\kappa d_{n}$, resp. $c_{n}>\kappa d_{n}$, for some constant $\kappa>0$. We write $c_{n}=\Theta(d_{n})$ if both $c_{n}=\mathcal{O}(d_{n})$ and $c_{n}=\Omega(d_{n})$ are satisfied. For vector sequences $c_{n}=(c_{n}^{i})_{1\leq i\leq k}\in\mathbb{R}^{k}$ (for some $k>0$), we write $c_{n}=\mathcal{O}(d_{n})$ when $c_{n}^{i}=\mathcal{O}(d_{n}^{i})$ for all $i\in[k]$, and the same holds for the other asymptotic notations. Finally, when the sequence $c_{n}$ is a vector of random variables, convergence is understood to be convergence in second moment ($L_{2}$ norm). 
### 2.1 Initialization of LoRA Adapters The standard way to initialize trainable weights is to take an iid initialization of the entries $A_{ij}\sim\mathcal{N}(0,\sigma_{A}^{2}),B_{ij}\sim\mathcal{N}(0,\sigma_{B}^{2})$ for some $\sigma_{A},\sigma_{B}\geq 0$ (this includes initialization with zeros if $\sigma_{B}$ or $\sigma_{A}$ are set to $0$; Gaussianity is not important and can be replaced by any zero-mean distribution with finite variance for our purposes). Due to the additive update structure of LoRA, we want to initialize the product $BA$ to be $0$ so that finetuning starts from the pretrained model [19]. This can be achieved by initializing one of the weights $A$ and $B$ to $0$. If both are initialized to $0$, no learning occurs since this is a saddle point and the parameter gradients will remain zero. Thus, we should initialize one of the parameters $A$ and $B$ to be non-zero and the other to be zero. If we choose a non-zero initialization for $A$, then following standard initialization schemes (e.g., He Init [4], LeCun Init [1]), one should set $\sigma_{A}^{2}=\Theta(n^{-1})$ to ensure $Ax$ does not explode for large $n$. This is justified by the Central Limit Theorem (CLT). On the other hand, if we choose a non-zero initialization for $B$, one should make sure that $\sigma_{B}^{2}=\Theta(r^{-1})=\Theta(1)$. This leaves us with two possible initialization schemes: * • Init[A]: $\sigma_{B}^{2}=0,\sigma_{A}^{2}=\Theta(n^{-1})$ (the default initialization in LoRA [19]). * • Init[B]: $\sigma_{B}^{2}=\Theta(r^{-1})=\Theta(1),\sigma_{A}^{2}=0$ (here, we assumed that $r=\Theta(1)$ in width, i.e. it does not grow with width; in general, the right scaling for Init[B] is $\sigma_{B}^{2}=\Theta(r^{-1})$). Both initializations achieve the goal of starting finetuning from the pretrained model. A priori, it is unclear if there is a material difference between the two initialization schemes. Surprisingly, as we will show later in this paper, these two initialization schemes lead to fundamentally _different training dynamics_ when the model width is large. ### 2.2 LoRA Features ##### Notation. For a given LoRA layer in the network, we use $\underline{Z}$ to denote the input to that layer and $\bar{Z}$ for the output after adding the pretrained weights. More precisely, we can write the layer operation as $\bar{Z}=W^{*}\underline{Z}+\frac{\alpha}{r}BA\,\underline{Z}$. Our main analysis relies on a careful estimation of the magnitude of several quantities involving _LoRA features_. Let us first give a formal definition. ###### Definition 2 (LoRA Features). Given a general neural architecture and a LoRA layer (1), we define LoRA features $(Z_{A},Z_{B})$ as $\begin{cases}Z_{A}=A\underline{Z}\\\ Z_{B}=BZ_{A}=BA\underline{Z},\end{cases}$ At fine-tuning step $t$, we use the superscript $t$ to denote the value of the LoRA features $Z_{A}^{t},Z_{B}^{t}$, and the subscript $t$ to denote the weights $A_{t},B_{t}$. ## 3 LoRA Finetuning Dynamics in the Large Width Limit We fix the LoRA rank $r$ throughout the analysis and examine the finetuning dynamics in the limit of large width. This setup aligns well with practical scenarios where the rank is much smaller than the width (i.e., $r\ll n$). Typically, for Llama models the rank $r$ is of order $2^{k}$ for $k\in\\{2,\dots,6\\}$, and the model width $n$ is generally larger than $2^{12}$. We will refer to a layer of the network to which LoRA is applied (see Definition 1) as a _LoRA layer_. 
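To make the two schemes concrete, below is a minimal PyTorch sketch of a LoRA layer in the sense of Definition 1 that supports both Init[A] and Init[B]; the class, argument names and the `init_scheme` flag are illustrative and not taken from any particular LoRA implementation. In both cases $BA=0$ at initialization, so the forward pass initially coincides with the pretrained layer.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA layer: W* is frozen, only B and A are trained."""

    def __init__(self, n_out, n_in, r=4, alpha=8, init_scheme="A"):
        super().__init__()
        self.W = nn.Linear(n_in, n_out, bias=False)
        self.W.weight.requires_grad_(False)           # frozen pretrained weight W*
        self.A = nn.Parameter(torch.empty(r, n_in))   # A in R^{r x n_in}
        self.B = nn.Parameter(torch.empty(n_out, r))  # B in R^{n_out x r}
        self.scaling = alpha / r
        if init_scheme == "A":    # Init[A]: A ~ N(0, 1/n), B = 0 (default in [19])
            nn.init.normal_(self.A, std=n_in ** -0.5)
            nn.init.zeros_(self.B)
        else:                     # Init[B]: B ~ N(0, 1/r), A = 0
            nn.init.zeros_(self.A)
            nn.init.normal_(self.B, std=r ** -0.5)

    def forward(self, z):
        # \bar{Z} = W* \underline{Z} + (alpha / r) B A \underline{Z}
        return self.W(z) + self.scaling * (z @ self.A.T) @ self.B.T

layer_a = LoRALinear(n_out=64, n_in=64, init_scheme="A")   # Init[A]
layer_b = LoRALinear(n_out=64, n_in=64, init_scheme="B")   # Init[B]
```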
For the theoretical analysis, we adopt a simplified setting that facilitates rigorous yet intuitive derivations of the results. ### 3.1 Simplified Setting The following simplified setup was considered in [44] to derive asymptotic results concerning the learning rates in LoRA. We use the same setup in our analysis to investigate the impact of initialization. ##### Finetuning Dataset. We assume that the dataset used for finetuning consists of a single datapoint $(x,y)$ (although this is a simplifying assumption for our analysis, the results can be extended to mini-batched gradients without affecting the conclusions; such results would require additional assumptions to be fully rigorous), and the goal is to minimize the loss calculated with the model with adjusted weights $W^{*}+BA$ for all LoRA layers (here $\bm{\theta}=\\{A,B,\textrm{for all LoRA layers in the model}\\}$). $\underline{Z}^{t}$ is the input to the LoRA layer, computed with data input $x$. Similarly, we write $d\bar{Z}^{t}$ to denote the gradient of the loss function with respect to the layer output features $\bar{Z}$ evaluated at the data point $(x,y)$. ##### Single LoRA Module. Given a LoRA layer, LoRA feature updates are not only driven by the change in the $A,B$ weights, but also by the changes in $\underline{Z},d\bar{Z}$, which are updated as we finetune the model (assuming there are multiple LoRA layers). To isolate the contribution of individual LoRA layers to feature learning, we assume that only a _single LoRA layer is trainable_ and all other LoRA layers are frozen (this is equivalent to having only a single LoRA layer in the model since LoRA layers are initialized to zero). For this LoRA layer the layer input $\underline{Z}$ is fixed and does not change with $t$, whereas $d\bar{Z}$ changes with step $t$ (because $\bar{Z}^{t}=(W^{*}+\frac{\alpha}{r}B_{t}A_{t})\underline{Z}$). After step $t$, $Z_{B}$ is updated as follows $\Delta Z_{B}^{t}=\underset{\delta_{t}^{1}}{\underbrace{B_{t-1}\Delta Z_{A}^{t}}}+\underset{\delta_{t}^{2}}{\underbrace{\Delta B_{t}Z_{A}^{t-1}}}+\underset{\delta^{3}_{t}}{\underbrace{\Delta B_{t}\Delta Z_{A}^{t}}}.$ (2) As discussed in [44], the terms $\delta^{1}_{t},\delta^{2}_{t}$ represent ‘linear’ feature updates that we obtain if we fix one weight matrix and only train the other. The third term $\delta^{3}_{t}$ represents the ‘multiplicative’ feature update which captures the compounded update due to updating both $A$ and $B$. ### 3.2 Stability and Feature Learning [44] introduced the notion of stability of LoRA features as width grows. We introduce here a slightly more relaxed notion of stability. ###### Definition 3 (Feature Stability). We say that LoRA finetuning is stable if for all LoRA layers in the model, and all training steps $t$, we have $\underline{Z},Z_{B}=\mathcal{O}(1),$ as the width $n$ goes to infinity. Here, feature stability means that the LoRA output $Z_{B}$ remains bounded (in $L^{2}$ norm) as the width grows. To achieve such stability, the hyperparameters (initialization, learning rate) should be scaled as $n$ grows. We will show that the dependence of the optimal learning rate on $n$ is highly sensitive to the choice of initialization (Init[A] or Init[B]). Note that feature stability also requires that $\underline{Z}=\mathcal{O}(1)$, which is directly related to the pretraining dynamics since it depends on some pretrained weights $W^{*}$. 
We assume that the pretraining parameterization (how the initialization and learning rate are parametrized w.r.t. the width) ensures this kind of stability (see Appendix A for more details); when taking the infinite-width limit, we can for instance assume that the pretraining parameterization is $\mu$P [26]. This is a technicality for the infinite-width limit and does not have any implications for practical scenarios where the width is finite. The most important implication of this assumption is that in the pretrained network (before introducing LoRA layers), we have $\underline{Z}=\Theta(1),\bar{Z}=\Theta(1)$, which holds for a general input-output pair $(x,y)$. As discussed above, feature updates are driven by the terms $(\delta_{t}^{i})_{i\in\\{1,2,3\\}}$. As $n$ grows, these feature updates might become trivial (i.e. vanish as $n\to\infty$) or unstable (i.e. grow unbounded). To avoid such scenarios, we want to ensure that $\Delta Z_{B}=\Theta(1)$. Such conditions are the main ideas behind $\mu$P [26] and Depth-$\mu$P [41], which are network parametrizations that ensure stability and feature learning in the large width and depth limits for pretraining. We recall this definition from [44]. ###### Definition 4 (Feature Learning). We say that LoRA finetuning induces stable feature learning in the limit of large width if the dynamics are stable (Definition 3), and for all finetuning steps $t$, we have $\Delta Z_{B}^{t}\overset{def}{=}Z_{B}^{t+1}-Z_{B}^{t}=\Theta(1)$. $\Delta Z_{B}$ is the sum of the terms $\delta_{t}^{i}$ (Equation 2). To achieve optimal feature learning, we want to ensure that $\delta_{t}^{1}=\Theta(1)$ and $\delta_{t}^{2}=\Theta(1)$, which means that both weight matrices $A$ and $B$ are efficiently updated and contribute to the update in $Z_{B}$. An intuitive explanation is provided in Section A.1. This leads us to the following definition of efficient learning with LoRA. ###### Definition 5 (Efficient Learning with LoRA). We say that LoRA fine-tuning is efficient if it is stable (Definition 3), and for all LoRA layers in the model, and all fine-tuning steps $t>1$, we have $\delta_{t}^{i}=\Theta(1),\quad i\in\\{1,2\\}.$ Next, we introduce the $\gamma$-operator, an essential tool in our analysis of the large width dynamics of LoRA. ### 3.3 Introduction to the $\gamma$-operator In the theory of scaling, one usually tracks the asymptotic behaviour of key quantities as we scale some model ingredient. For instance, if we scale the width $n$ of a neural network, we are interested in quantifying how certain quantities in the network behave as $n$ grows. This is a standard approach for (principled) model scaling and it has so far been used to derive scaling rules for initialization [5], activation functions [10], and network parametrization [41], among other things. With Init[A] and Init[B], the initialization weights are of order $\Theta(n^{-\beta})$ for some $\beta\geq 0$. Assuming that the learning rate also scales polynomially with $n$, it is straightforward to see that the preactivations, gradients, and weight updates are all asymptotically polynomial in $n$. Note that this is only possible because all neural computations consist of sums of $\Theta(n^{\alpha})$ terms, where typically $\alpha\in\\{0,1\\}$. For instance, when calculating the features $A\underline{Z}$, each entry is a sum of $n$ terms, while when calculating $BZ_{A}$, each entry is a sum of $r$ terms ($r$ fixed as $n$ goes to infinity). This is true for general neural computation that can be expressed as Tensor Programs [15]. 
Consequently, for some quantity $v$ in the computation graph, it is natural to track the exponent that determines the asymptotic behaviour of $v$ with respect to $n$. We write $v=\Theta(n^{\gamma[v]})$ to capture this polynomial dependence. Formally, the $\gamma$-operator is a mapping from the set $\\{v,\textrm{ s.t. }v=\Theta(n^{\beta})\textrm{ for }\beta\in\mathbb{R}\cup\\{-\infty\\}\\}$ to the set $\mathbb{R}\cup\\{-\infty\\}$. Elementary operations with the $\gamma$-operator include: ##### Zero. When $v=0$, we write $\gamma[v]=-\infty$ (as a limit of $\gamma[n^{-\beta}]$ when $\beta\rightarrow\infty$). ##### Multiplication. Given two real-valued variables $v,v^{\prime}$, we have $\gamma[v\times v^{\prime}]=\gamma[v]+\gamma[v^{\prime}]$. ##### Addition. Given two real-valued variables $v,v^{\prime}$, we _generally_ have $\gamma[v+v^{\prime}]=\max(\gamma[v],\gamma[v^{\prime}])$. The only case where this is violated is when $v^{\prime}=-v$. This is generally a zero probability event if $v$ and $v^{\prime}$ are random variables that are not perfectly (negatively) correlated, which is the case in most situations where we make use of this formula. ##### When does the $\gamma$-operator fail to capture asymptotic behaviour? When non-polynomial dependencies (in terms of $n$) appear in neural computations, the $\gamma$-operator cannot capture the asymptotic behaviour of the learning dynamics. For instance, if one of the layers has embedding dimension $e^{n}$ or $n\times\log(n)$, polynomial exponents are no longer sufficient to capture the asymptotic dynamics. Fortunately, such cases are generally not considered in practice. We have now introduced all the required notions for the subsequent analysis. For better readability, we defer all the proofs to the appendix. ### 3.4 Recursive formulas Using the $\gamma$-operator, we can track the asymptotic behaviour of the finetuning dynamics as the model width $n$ grows. At finetuning step $t$, the gradients are given by $\displaystyle\frac{\partial\mathcal{L}_{t}}{\partial B}$ $\displaystyle=\frac{\alpha}{r}d\bar{Z}^{t-1}\otimes A_{t-1}\underline{Z}$ $\displaystyle\frac{\partial\mathcal{L}_{t}}{\partial A}$ $\displaystyle=dZ_{A}^{t-1}\otimes\underline{Z}=\frac{\alpha}{r}B^{\top}_{t-1}d\bar{Z}^{t-1}\otimes\underline{Z},$ where $\mathcal{L}_{t}$ is the loss at step $t$. The weights are updated as follows $A_{t}=A_{t-1}-\eta g_{A}^{t-1},\quad B_{t}=B_{t-1}-\eta g_{B}^{t-1},$ where $g_{A},g_{B}$ are processed gradients (e.g. normalized gradients with momentum, as in AdamW). We assume that the gradients are processed in a way that makes their entries $\Theta(1)$. This is generally satisfied in practice (with Adam for instance) and has been considered in [40] to derive the $\mu$-parametrization for general gradient processing functions. From this, we obtain the following recursive formulas for $\gamma[Z_{A}^{t}]$ and $\gamma[B_{t}]$, which characterize their behaviour in the large width limit. ###### Lemma 1 (Informal). For $t$ fixed, the asymptotic dynamics of $Z_{A}^{t}$ and $B_{t}$ follow the recursive formula $\displaystyle\gamma[Z_{A}^{t}]$ $\displaystyle=\max(\gamma[Z_{A}^{t-1}],\gamma[\eta]+1)$ (3) $\displaystyle\gamma[B_{t}]$ $\displaystyle=\max(\gamma[B_{t-1}],\gamma[\eta]).$ The formal proof of Lemma 1 is provided in Appendix A and relies on Assumption 1, which fairly represents practical scenarios (see Appendix A for a detailed discussion). Lemma 1 captures the change in the asymptotic behaviour of the quantities $Z_{A}^{t}$ and $B_{t}$ as the width grows. 
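As an illustration, here is a small, self-contained Python sketch of the $\gamma$-bookkeeping and the recursion in Lemma 1. The initial exponents for the two schemes ($\gamma[Z_{A}^{0}]=0$, $\gamma[B_{0}]=-\infty$ for Init[A] and the reverse for Init[B]) are our reading of Section 2.1, and the printed values simply trace how $\gamma[Z_{B}^{t}]=\gamma[B_{t}]+\gamma[Z_{A}^{t}]$ evolves for a given learning-rate exponent $\gamma[\eta]$.

```python
# Toy bookkeeping for the gamma-operator and the Lemma 1 recursion.
# gamma values are exponents of n; float("-inf") encodes an exactly-zero quantity.
NEG_INF = float("-inf")

def gamma_recursion(gamma_eta, gamma_ZA0, gamma_B0, steps=5):
    """Iterate Eq. (3): gamma[Z_A^t] = max(gamma[Z_A^{t-1}], gamma[eta] + 1),
    gamma[B_t] = max(gamma[B_{t-1}], gamma[eta]); also return gamma[Z_B^t]."""
    g_ZA, g_B = gamma_ZA0, gamma_B0
    for _ in range(steps):
        g_ZA = max(g_ZA, gamma_eta + 1)
        g_B = max(g_B, gamma_eta)
    return g_ZA, g_B, g_B + g_ZA      # multiplication rule: exponents add

# Init[A]: Z_A^0 = Theta(1), B_0 = 0.  With gamma[eta] = -1/2, Z_A grows like
# n^{1/2} while Z_B stays Theta(1).
print(gamma_recursion(-0.5, gamma_ZA0=0.0, gamma_B0=NEG_INF))   # (0.5, -0.5, 0.0)
# Init[B]: Z_A^0 = 0, B_0 = Theta(1).  With gamma[eta] = -1, Z_B stays Theta(1).
print(gamma_recursion(-1.0, gamma_ZA0=NEG_INF, gamma_B0=0.0))   # (0.0, 0.0, 0.0)
```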
Naturally, the dynamics depend on the initialization scheme, which leads to completely different behaviours, as we show in the next two results. ### 3.5 Init[A] leads to more efficient feature learning but suffers “internal” instability In the next result, we provide a precise characterization of stability and feature learning when using Init[A]. ###### Theorem 1 (Informal). For $t$ fixed, with Init[A] and learning rate $\eta$, we have * • Stability: $Z_{B}^{t}=\mathcal{O}(1)$ if and only if $\gamma[\eta]\leq-1/2$. * • Feature Learning: $\Delta Z_{B}^{t}=\Theta(1)$ if and only if $\gamma[\eta]=-1/2$. In this case, we also have $\delta_{t}^{1},\delta_{t}^{2}=\Theta(1)$ (efficient feature learning, Definition 5). Moreover, “internal” instability ($Z_{A}^{t}=\Omega(1)$) occurs when $\gamma[\eta]\in(-1,-1/2]$. With Init[A], the maximal learning rate that does not lead to instability in $Z_{B}$ (i.e. the largest $\gamma[\eta]$ that does not cause instability in $Z_{B}$) scales as $\Theta(n^{-1/2})$. This can be seen as an asymptotic form of the edge of stability phenomenon [17], where if we increase the learning rate beyond some level, instability occurs. Interestingly, in this case (i.e. with a $\Theta(n^{-1/2})$ learning rate) the features are efficiently updated (Definition 5). However, this comes with a caveat: the features $Z_{A}^{t}$ grow as $\Theta(n^{1/2})$, which can potentially cause numerical instabilities. We call this phenomenon _internal instability_: only the features $Z_{A}$ (internal LoRA features) grow, while the LoRA output $Z_{B}$ remains $\Theta(1)$ in this case. The fact that $\Theta(n^{-1/2})$ is the maximal learning rate that does not cause instability in $Z_{B}$ does not mean it is the _optimal_ learning rate. As the width $n$ grows, this internal instability in $Z_{A}$ will become more and more problematic. Intuitively, we expect a trade-off to appear in this case: the optimal learning rate (found by grid search) should be larger than $\Theta(n^{-1})$ but smaller than $\Theta(n^{-1/2})$, i.e. the network will try to achieve a balance between optimal feature learning ($\gamma[\eta]=-1/2$) and internal stability $Z_{A}^{t}=\Theta(1)$ ($\gamma[\eta]=-1$). We verify this empirically in the next section. ### 3.6 Init[B] leads to suboptimal feature learning with internal stability In the next result, we show that the maximal learning rate allowed with Init[B] is different from that with Init[A], leading to completely different dynamics. ###### Theorem 2 (Informal). For $t$ fixed, with Init[B], we have * • Stability: $Z_{B}^{t}=\mathcal{O}(1)$ if and only if $\gamma[\eta]\leq-1$. * • Feature Learning: $\Delta Z_{B}^{t}=\Theta(1)$ if and only if $\gamma[\eta]=-1$. Moreover, efficient feature learning cannot be achieved with Init[B] for any choice of learning rate scaling $\gamma[\eta]$ (that does not violate the stability condition). More precisely, with a $\Theta(n^{-1})$ learning rate, the limiting dynamics (when $n\to\infty$) are the same as if $B$ were not trained and only $A$ were trained. With Init[B], the maximal learning rate (that does not violate stability) scales as $\Theta(n^{-1})$ (for any $\epsilon>0$, a learning rate of $\Theta(n^{-1+\epsilon})$ leads to $Z_{B}=\Omega(1)$). Because of this bound on the maximal learning rate, no internal instability occurs with Init[B]. In this case, feature learning is suboptimal since the $B$ weight matrix is undertrained in the large width limit ($\delta_{t}^{2}\to 0$). ##### Conclusions from Sections 3.5 and 3.6. 
The results of Theorems 1 and 2 suggest that Init[A] allows the use of _larger learning rates_ compared to Init[B], which might lead to better feature learning and hence better performance at the expense of some internal instability. Here, ‘larger’ learning rate should be interpreted in asymptotic terms: with Init[A] the maximal learning rate that does not cause instability satisfies $\gamma[\eta]=-1/2$. With Init[B], we have $\gamma[\eta]=-1$ instead. Note that because of the constants hidden in $\Theta(n^{\beta})$ learning rates, the optimal learning rate with Init[A] is not systematically larger than with Init[B] at _finite width_. However, as width grows, we will see that this is the case.

Figure 2: Optimal learning rate for the finetuning of the synthetic model in Equation 4 with Init[A] and Init[B] as initialization. The optimal LRs are shown as a function of width $n$. Theoretical lines $n^{-1}$ and $n^{-1/2}$ are shown as well (constants $C_{1},C_{2}$ are chosen to provide suitable trend visualization). As model width $n$ grows, the optimal learning rate with Init[A] becomes larger than the optimal learning rate with Init[B]. This is in agreement with the theoretical results.

Another important finding from this analysis is that with both initialization schemes, the dynamics are suboptimal in the limit: internal instability with Init[A] and undertraining of $B$ with Init[B].101010More precisely, one can show that with Init[B], for fixed $t$, in the limit $n\to\infty$, $B_{t}$ converges to $B_{0}$, i.e. $B$ is effectively not trained in this limit. We will later discuss possible solutions to this behaviour.

### 3.7 Experiments with a Teacher-Student Model

To validate our theory in a controlled setting, we consider the following simple model: $\begin{cases}Y_{in}=W_{in}x,\\ Y_{h}=Y_{in}+(W_{h}+BA)\phi(Y_{in})\\ Y_{out}=W_{out}\phi(Y_{h})\end{cases}$ (4) where $W_{in}\in\mathbb{R}^{n\times d},W_{h}\in\mathbb{R}^{n\times n},W_{out}\in\mathbb{R}^{1\times n}$, and $A,B^{\top}\in\mathbb{R}^{r\times n}$. We generate synthetic data from the teacher model using the following config: $d=5,r_{teacher}=20,n=1000,N=1000$ (train data size), and $N_{test}=100$ (test data size). The weights $W_{in}^{teacher},W_{out}^{teacher},A^{teacher},$ and $B^{teacher}$ are randomly initialized, and $W_{h}^{teacher}=0$.111111Here, the pretrained model is effectively given by $Y_{out}=W_{out}^{teacher}\phi(W_{in}^{teacher}x)$, and the finetuning dataset is simulated by injecting the LoRA weights $A^{teacher},B^{teacher}$. We train student models with $d=5,r=4,$ and varying widths $n\in\\{2^{k},\ k=7,\dots,13\\}$.121212In this setup, a student model can have larger width $n$ than the teacher model.

Figure 3: Evolution of the norms of the $Z_{A},Z_{B}$ features, averaged over training data. We compute the average $\widehat{\|Z_{A}\|}\overset{def}{=}N^{-1}\sum_{i=1}^{N}\|Z_{A}(x_{i})\|$ (and similarly for $Z_{B}$), where the $x_{i}$’s are the training data. The dynamics are shown for widths $n=128$ and $n=8192$, two seeds, and for both Init[A] and Init[B]. Train loss and the (optimal) learning rate are shown on top of each plot. We observe that the magnitude of $Z_{A}$ is significantly higher with Init[A] compared to Init[B] at large width ($n=8192$). Interestingly, the train loss is smaller with Init[A], as compared to Init[B]. Results with other seeds and widths are shown in Appendix B.

##### Optimal Learning Rate. We finetune model (4) on synthetic data generated from the teacher model.
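For concreteness, here is a minimal PyTorch sketch of model (4) and of the two initialization schemes; this is our own reconstruction, and the $1/\textrm{fan-in}$ initialization scales, as well as the reading of Init[A] as "$B=0$, $A$ Gaussian" and Init[B] as the reverse, are assumptions consistent with Appendix A and Appendix B.1.1.

```python
import torch
import torch.nn as nn

class ToyLoRAModel(nn.Module):
    """Illustrative reconstruction of model (4): frozen backbone + trainable LoRA pair (A, B)."""
    def __init__(self, d=5, n=1000, r=4, init="A"):
        super().__init__()
        # Frozen "pretrained" weights, Gaussian with 1/fan-in variance.
        self.W_in  = nn.Parameter(torch.randn(n, d) / d**0.5, requires_grad=False)
        self.W_h   = nn.Parameter(torch.randn(n, n) / n**0.5, requires_grad=False)
        self.W_out = nn.Parameter(torch.randn(1, n) / n**0.5, requires_grad=False)
        # Trainable LoRA weights: A in R^{r x n}, B in R^{n x r}.
        self.A = nn.Parameter(torch.zeros(r, n))
        self.B = nn.Parameter(torch.zeros(n, r))
        if init == "A":      # Init[A]: A Gaussian (1/fan-in), B = 0
            nn.init.normal_(self.A, std=n**-0.5)
        elif init == "B":    # Init[B]: B Gaussian (1/fan-in), A = 0
            nn.init.normal_(self.B, std=r**-0.5)

    def forward(self, x):
        y_in = self.W_in @ x                      # (n,)
        z_a = self.A @ torch.relu(y_in)           # internal LoRA features Z_A
        z_b = self.B @ z_a                        # LoRA output features  Z_B
        y_h = y_in + self.W_h @ torch.relu(y_in) + z_b
        return self.W_out @ torch.relu(y_h), z_a, z_b
```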
In Figure 2, we show the optimal learning rate when using either Init[A] or Init[B] to initialize the finetuning, as a function of width $n$. For $n\gg 1$ (typically $n\geq 2^{9}$), the optimal learning rate with Init[A] is larger than the optimal learning rate with Init[B]. This is in agreement with the theoretical results obtained in Theorems 1 and 2, which predict asymptotic maximal learning rates (that satisfy the stability condition) of $\Theta(n^{-1/2})$ and $\Theta(n^{-1})$ respectively. With Init[A], we observe the stability/feature learning trade-off for large $n$. The optimal learning rate with Init[A] in this regime (e.g. $n=2^{13}$) is smaller than the maximal theoretical learning rate $n^{-1/2}$ that achieves optimal feature learning (Theorem 1). Here, the model seems to balance the internal instability that occurs in the $Z_{A}$ features with feature learning and thus favors smaller learning rates: the optimal learning rate is smaller than $\Theta(n^{-1/2})$ and larger than $\Theta(n^{-1})$.

##### Internal Instability and Feature Learning. Figure 3 shows the (average) magnitude of $Z_{A}$ and $Z_{B}$ for Init[A] and Init[B] for widths $n=128$ and $n=8192$. With Init[A], the magnitude of the $Z_{A}$ features seems to grow with width, hence trading internal stability for more efficient feature learning. This behaviour is consistent across random seeds as shown in the figure, and as further confirmed by experiments in Appendix B. The train loss is consistently smaller with Init[A], which can be explained by the fact that Init[A] allows more efficient feature learning at the cost of some internal instability. This flexibility cannot be achieved with Init[B]. Note also that the $Z_{B}$ features tend to get smaller with $n$ under Init[A], as predicted by the theory: the trade-off between internal instability and feature learning implies that $\eta^{*}=o(n^{-1/2})$, which implies that $Z_{B}^{t}=o(1)$, i.e. the $Z_{B}$ features vanish as width grows. While this might be problematic, it only becomes an issue at extremely large width: for instance, if the optimal learning rate scales as $\Theta(n^{-\beta})$ for some $\beta\in(1/2,1)$ (so that the learning rate is between $\Theta(n^{-1})$ and $\Theta(n^{-1/2})$, balancing internal instability and efficient feature learning), the LoRA output feature scales as $Z_{B}=B_{t}A_{t}\underline{Z}=\Theta(n^{1-2\beta})$, since $\gamma[Z_{B}]=\gamma[B_{t}]+\gamma[Z_{A}^{t}]=-\beta+(1-\beta)$ by Lemma 1. Therefore, if $\beta\approx 0.7$ for instance, the vanishing rate of the LoRA output feature is $Z_{B}\approx\Theta(n^{-0.4})$, which is slow given the order of magnitude of width in practice (for $n=2^{12}$, we have $n^{-0.4}\approx 0.04$).

## 4 Experiments with Language Models

Our theoretical results from earlier provide a detailed asymptotic analysis of the finetuning dynamics when LoRA modules are initialized with Init[A] or Init[B]. The main conclusions are that Init[A] generally leads to more efficient feature learning (which can be justified by the fact that the optimal learning rate is larger with Init[A] than with Init[B]). To provide evidence of this claim on real-world tasks, we use LoRA to finetune a set of language models on different benchmarks. Details about the experimental setup and more empirical results are provided in Appendix B. We use LoRA$+$ code [44] for our experiments (available at https://github.com/nikhil-ghosh-berkeley/loraplus).
### 4.1 GLUE tasks with RoBERTa

The GLUE benchmark (General Language Understanding Evaluation) consists of several language tasks that evaluate the understanding capabilities of language models [8]. Using LoRA, we finetune RoBERTa-Large from the RoBERTa family [12] on the MNLI, SST2, and QNLI tasks with varying learning rates $\eta$ and initialization schemes (Init[A] or Init[B]). We use the same experimental setup as [19] for RoBERTa-Large to compare our results with theirs (see Appendix B for more details).

Figure 4: Test Accuracy for RoBERTa-Large finetuned on GLUE tasks. The results are shown after convergence of finetuning with LoRA, initialized with either Init[A] or Init[B]. Models were finetuned using LoRA rank $r=8$ and FP16 precision. Optimal learning rate and corresponding accuracy are shown on top of each panel for both initializations. The experimental setup is provided in Appendix B.

The results in Figure 4 are aligned with our theory: we observe that Init[A] generally leads to better performance, and the optimal learning rate with Init[A] is generally larger than with Init[B]. Models initialized with Init[A] match the performances reported in [19], while those initialized with Init[B] generally underperform that baseline. For the MNLI task (the hardest of the three), we observe a significant difference in the best test accuracy (over 3 random seeds): $90.69$ with Init[A] versus $89.47$ with Init[B]. We also observe that for MNLI, the optimal learning rate with Init[A] ($\eta^{*}=8$e-5) is much larger than the optimal learning rate with Init[B] ($\eta^{*}=1$e-5), which aligns with our theoretical predictions. However, note that for QNLI for instance (an easier task), while the optimal test accuracy is significantly better with Init[A], the optimal learning rate (from the grid search) is the same for Init[A] and Init[B]. There are several possible explanations for this: 1) the width is not large enough in this case to reveal the gap between optimal learning rates (for RoBERTa-Large, the width is $n=2^{10}$); 2) the constants in $\Theta(n^{-1})$ and $\Theta(n^{-1/2})$ are significantly different in magnitude due to their dependence on the finetuning task. We notice similar behaviour in the Llama experiments below. A precise analysis of this observation is beyond the scope of this paper, and we leave it for future work.

### 4.2 Llama

Figure 5: (Left) Test perplexity (lower is better) of TinyLlama LoRA on WikiText-2 with Init[A] and Init[B]. (Center) MMLU accuracy of Llama-7b LoRA finetuned on the Flan-v2 dataset. (Right) GSM8k test accuracy of Llama-7b LoRA finetuned on the GSM8k dataset. More experimental details are provided in Appendix B.

To further validate our theoretical findings on more modern models and datasets, we report the results of finetuning the Llama-7b model [38] on the Flan-v2 dataset [36] and the GSM8k dataset [16], and finetuning the TinyLlama model [49] on WikiText-2 using LoRA. Each trial is averaged over two seeds and the shaded region indicates one standard error. In the left panel of Figure 5, we see that when finetuning TinyLlama using LoRA, the optimal learning rate with Init[A] is larger than with Init[B], and the corresponding test perplexity is lower. Similarly, in the center panel of Figure 5, when finetuning the Llama-7b model on Flan-v2, the optimal learning rates for Init[A] and Init[B] are the same (for the learning rate grid we used), but the optimal MMLU accuracy for Init[A] is slightly higher than for Init[B].
For learning rates close to the optimal choice, the accuracy using Init[A] is generally higher than for Init[B]. An analogous result holds for the GSM8k dataset, as shown in the rightmost panel of Figure 5. More details about this setting are provided in Appendix B.

## 5 Conclusion and Limitations

We showed that finetuning dynamics are highly sensitive to the way LoRA weights are initialized. Init[A] is associated with larger optimal learning rates, compared to Init[B]. Larger learning rates typically result in better performance, as confirmed by our empirical results. Note that this is a zero-cost adjustment with LoRA finetuning: _we simply recommend using Init[A] instead of Init[B]_. One limitation of our work is that we only define feature learning via the magnitude of feature updates in the limit of large width. In this way, our definition of feature learning is data-agnostic and therefore no conclusion about generalization can be obtained with this analysis. The constants in the $\Theta(.)$ asymptotic notation naturally depend on the data (the finetuning task), and therefore such a data-agnostic approach does not allow us to infer any information about the impact of the data on the finetuning dynamics. _More importantly_, our results indicate that both initialization schemes lead to suboptimal scenarios, although Init[A] has an advantage over Init[B] as it allows more efficient feature learning. In both cases, instability and/or suboptimal feature learning present fundamental issues, which can potentially be mitigated by approaches such as LoRA$+$ [44]. Understanding the interaction of LoRA$+$ and related efficiency methods with the initialization scheme is an important question for future work.

## 6 Acknowledgements

We thank Gradient AI for cloud credits under the Gradient AI fellowship awarded to SH and thank AWS for cloud credits under an Amazon Research Grant awarded to the Yu Group. We also gratefully acknowledge partial support from NSF grants DMS-2209975, 2015341, 20241842, NSF grant 2023505 on Collaborative Research: Foundations of Data Science Institute (FODSI), the NSF and the Simons Foundation for the Collaboration on the Theoretical Foundations of Deep Learning through awards DMS-2031883 and 814639, and NSF grant MC2378 to the Institute for Artificial CyberThreat Intelligence and OperatioN (ACTION).

## References

* [1] Yann LeCun, Léon Bottou, Genevieve B Orr and Klaus-Robert Müller “Efficient backprop” In _Neural networks: Tricks of the trade_ Springer, 2002, pp. 9–50 * [2] Liu Yang, Steve Hanneke and Jaime Carbonell “A theory of transfer learning with applications to active learning” In _Machine learning_ 90 Springer, 2013, pp. 161–189 * [3] Diederik P Kingma and Jimmy Ba “Adam: A method for stochastic optimization” In _arXiv preprint arXiv:1412.6980_ , 2014 * [4] Kaiming He, Xiangyu Zhang, Shaoqing Ren and Jian Sun “Deep residual learning for image recognition” In _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2016, pp. 770–778 * [5] S.S. Schoenholz, J. Gilmer, S. Ganguli and J. Sohl-Dickstein “Deep Information Propagation” In _International Conference on Learning Representations_ , 2017 * [6] Samuel S. Schoenholz, Justin Gilmer, Surya Ganguli and Jascha Sohl-Dickstein “Deep Information Propagation”, 2017 arXiv:1611.01232 [stat.ML] * [7] Alec Radford, Karthik Narasimhan, Tim Salimans and Ilya Sutskever “Improving language understanding by generative pre-training” OpenAI, 2018 * [8] Alex Wang et al.
“GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding”, 2018 arXiv:1804.07461 [cs.CL] * [9] Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova “BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding” In _arXiv preprint arXiv:1810.04805_ , 2019 * [10] Soufiane Hayou, Arnaud Doucet and Judith Rousseau “On the Impact of the Activation function on Deep Neural Networks Training” In _Proceedings of the 36th International Conference on Machine Learning_ 97, Proceedings of Machine Learning Research PMLR, 2019, pp. 2672–2680 URL: https://proceedings.mlr.press/v97/hayou19a.html * [11] Neil Houlsby et al. “Parameter-efficient transfer learning for NLP” In _International Conference on Machine Learning_ , 2019, pp. 2790–2799 PMLR * [12] Yinhan Liu et al. “RoBERTa: A Robustly Optimized BERT Pretraining Approach”, 2019 arXiv:1907.11692 [cs.CL] * [13] G. Yang “Scaling limits of wide neural networks with weight sharing: Gaussian process behavior, gradient independence, and neural tangent kernel derivation” In _arXiv preprint arXiv:1902.04760_ , 2019 * [14] Jared Kaplan et al. “Scaling laws for neural language models” In _arXiv preprint arXiv:2001.08361_ , 2020 * [15] Greg Yang “Tensor programs iii: Neural matrix laws” In _arXiv preprint arXiv:2009.10685_ , 2020 * [16] Karl Cobbe et al. “Training verifiers to solve math word problems” In _arXiv preprint arXiv:2110.14168_ , 2021 * [17] Jeremy Cohen et al. “Gradient Descent on Neural Networks Typically Occurs at the Edge of Stability” In _International Conference on Learning Representations_ , 2021 URL: https://openreview.net/forum?id=jh-rTtvkGeM * [18] Soufiane Hayou et al. “Stable ResNet” In _Proceedings of The 24th International Conference on Artificial Intelligence and Statistics_ 130, Proceedings of Machine Learning Research PMLR, 2021, pp. 1324–1332 URL: https://proceedings.mlr.press/v130/hayou21a.html * [19] Edward J. Hu et al. “LoRA: Low-Rank Adaptation of Large Language Models” In _arXiv preprint arXiv:2106.09685_ , 2021 * [20] Brian Lester, Rami Al-Rfou and Noah Constant “The power of scale for parameter-efficient prompt tuning” In _arXiv preprint arXiv:2104.08691_ , 2021 * [21] Greg Yang and Edward J Hu “Tensor programs iv: Feature learning in infinite-width neural networks” In _International Conference on Machine Learning_ , 2021, pp. 11727–11737 PMLR * [22] Jordan Hoffmann et al. “Training Compute-Optimal Large Language Models”, 2022 arXiv:2203.15556 [cs.CL] * [23] Mufan Li, Mihai Nica and Dan Roy “The Neural Covariance SDE: Shaped Infinite Depth-and-Width Networks at Initialization” In _Advances in Neural Information Processing Systems_ 35 Curran Associates, Inc., 2022, pp. 10795–10808 URL: https://proceedings.neurips.cc/paper_files/paper/2022/file/45fc4a0da7e7f6fbabaabe2d20a441d1-Paper-Conference.pdf * [24] Haokun Liu et al. “Few-shot parameter-efficient fine-tuning is better and cheaper than in-context learning” In _Advances in Neural Information Processing Systems_ 35, 2022, pp. 1950–1965 * [25] Jason Wei et al. “Emergent abilities of large language models” In _arXiv preprint arXiv:2206.07682_ , 2022 * [26] Greg Yang et al. 
“Tensor programs v: Tuning large neural networks via zero-shot hyperparameter transfer” In _arXiv preprint arXiv:2203.03466_ , 2022 * [27] Tim Dettmers, Artidoro Pagnoni, Ari Holtzman and Luke Zettlemoyer “QLoRA: Efficient Finetuning of Quantized LLMs” In _arXiv preprint arXiv:2305.14314_ , 2023 * [28] Soufiane Hayou “On the infinite-depth limit of finite-width neural networks” In _Transactions on Machine Learning Research_ , 2023 URL: https://openreview.net/forum?id=RbLsYz1Az9 * [29] Soufiane Hayou and Greg Yang “Width and Depth Limits Commute in Residual Networks” In _Proceedings of the 40th International Conference on Machine Learning_ 202, Proceedings of Machine Learning Research PMLR, 2023, pp. 12700–12723 URL: https://proceedings.mlr.press/v202/hayou23a.html * [30] Bobby He et al. “Deep Transformers without Shortcuts: Modifying Self-attention for Faithful Signal Propagation”, 2023 arXiv:2302.10322 [cs.LG] * [31] Damjan Kalajdzievski “A Rank Stabilization Scaling Factor for Fine-Tuning with LoRA” In _arXiv preprint arXiv:2312.03732_ , 2023 * [32] Soroush Abbasi Koohpayegani et al. “NOLA: Networks as linear combination of low rank random basis” In _arXiv preprint arXiv:2310.02556_ , 2023 * [33] Dawid Jan Kopiczko, Tijmen Blankevoort and Yuki Markus Asano “VeRA: Vector-based Random Matrix Adaptation” In _arXiv preprint arXiv:2310.11454_ , 2023 * [34] Yixiao Li et al. “Loftq: Lora-fine-tuning-aware quantization for large language models” In _arXiv preprint arXiv:2310.08659_ , 2023 * [35] Haotian Liu, Chunyuan Li, Yuheng Li and Yong Jae Lee “Improved baselines with visual instruction tuning” In _arXiv preprint arXiv:2310.03744_ , 2023 * [36] Shayne Longpre et al. “The flan collection: Designing data and methods for effective instruction tuning” In _arXiv preprint arXiv:2301.13688_ , 2023 * [37] Lorenzo Noci et al. “The Shaped Transformer: Attention Models in the Infinite Depth-and-Width Limit”, 2023 arXiv:2306.17759 [stat.ML] * [38] Hugo Touvron et al. “Llama 2: Open Foundation and Fine-Tuned Chat Models” In _arXiv preprint arXiv:2307.09288_ , 2023 * [39] Yizhong Wang et al. “How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources” In _arXiv preprint arXiv:2306.04751_ , 2023 * [40] Greg Yang and Etai Littwin “Tensor programs ivb: Adaptive optimization in the infinite-width limit” In _arXiv preprint arXiv:2308.01814_ , 2023 * [41] Greg Yang, Dingli Yu, Chen Zhu and Soufiane Hayou “Tensor Programs VI: Feature Learning in Infinite-Depth Neural Networks” In _arXiv preprint arXiv:2310.02244_ , 2023 * [42] Longteng Zhang et al. “Lora-fa: Memory-efficient low-rank adaptation for large language models fine-tuning” In _arXiv preprint arXiv:2308.03303_ , 2023 * [43] Klaudia Bałazy, Mohammadreza Banaei, Karl Aberer and Jacek Tabor “LoRA-XS: Low-Rank Adaptation with Extremely Small Number of Parameters” In _arXiv preprint arXiv:2405.17604_ , 2024 * [44] Soufiane Hayou, Nikhil Ghosh and Bin Yu “LoRA+: Efficient Low Rank Adaptation of Large Models”, 2024 arXiv:2402.12354 [cs.LG] * [45] Ting Jiang et al. “MoRA: High-Rank Updating for Parameter-Efficient Fine-Tuning” In _arXiv preprint arXiv:2405.12130_ , 2024 * [46] Yang Li, Shaobo Han and Shihao Ji “VB-LoRA: Extreme Parameter Efficient Fine-Tuning with Vector Banks” In _arXiv preprint arXiv:2405.15179_ , 2024 * [47] Shih-Yang Liu et al. 
“DoRA: Weight-Decomposed Low-Rank Adaptation” In _arXiv preprint arXiv:2402.09353_ , 2024 * [48] Fanxu Meng, Zhaohui Wang and Muhan Zhang “PiSSA: Principal Singular Values and Singular Vectors Adaptation of Large Language Models” In _arXiv preprint arXiv:2404.02948_ , 2024 * [49] Peiyuan Zhang, Guangtao Zeng, Tianduo Wang and Wei Lu “Tinyllama: An open-source small language model” In _arXiv preprint arXiv:2401.02385_ , 2024 * [50] Jiacheng Zhu et al. “Asymmetry in Low-Rank Adapters of Foundation Models”, 2024 arXiv:2402.16842 [cs.LG]

## Appendix A Theory and Proofs

### A.1 Role of A and B weight matrices

Recall the feature update decomposition $\Delta Z_{B}^{t}=\underset{\delta_{t}^{1}}{\underbrace{B_{t-1}\Delta Z_{A}^{t}}}+\underset{\delta_{t}^{2}}{\underbrace{\Delta B_{t}Z_{A}^{t-1}}}+\underset{\delta^{3}_{t}}{\underbrace{\Delta B_{t}\Delta Z_{A}^{t}}}.$ (5) To achieve optimal feature learning, we want to ensure that $\delta_{t}^{1}=\Theta(1)$ and $\delta_{t}^{2}=\Theta(1)$, which means that both weight matrices $A$ and $B$ are efficiently updated and contribute to the update in $Z_{B}$. To justify why this is a desirable property, let us analyze how changes in the matrices $A$ and $B$ affect the LoRA feature $Z_{B}=BA\,\underline{Z}$. Let $(B_{:,i})_{1\leq i\leq r}$ denote the columns of $B$. We have the following decomposition of $Z_{B}$: $Z_{B}=\sum_{i=1}^{r}(A\,\underline{Z})_{i}B_{:,i},$ where $(A\underline{Z})_{i}$ is the $i^{th}$ coordinate of $A\underline{Z}$. This decomposition suggests that the _direction_ of $Z_{B}$ is a weighted sum of the columns of $B$, and $A$ modulates the _weights_. With this, we can also write $\begin{cases}\delta^{1}_{t}=\sum_{i=1}^{r}(\Delta A_{t}\underline{Z})_{i}(B_{:,i})_{t-1}\\ \delta^{2}_{t}=\sum_{i=1}^{r}(A_{t-1}\underline{Z})_{i}(\Delta B_{:,i})_{t-1},\end{cases}$ where $(B_{:,i})_{t}$ refers to the columns of $B$ at time step $t$. Having both $\delta_{t}^{1}$ and $\delta_{t}^{2}$ of order $\Theta(1)$ means that both $A$ and $B$ are ‘sufficiently’ updated to induce a change in the weights $(A\underline{Z})_{i}$ and the directions $B_{:,i}$. If one of the matrices $A,B$ is not efficiently updated, we might end up with suboptimal finetuning, with either the directions $B_{:,i}$ or the direction weights $(A\underline{Z})_{i}$ left essentially not updated. For instance, assuming that the model is initialized with Init[B], and that $B$ is not efficiently updated, the direction of $Z_{B}$ will be mostly determined by the vector (sub)space of dimension $r$ generated by the columns of $B$ at initialization. This intuition was discussed in detail in [44].

### A.2 Scaling of Neural Networks

Scaling refers to the process of increasing the size of one of the ingredients in the model to improve performance (see e.g. [22]). This includes model capacity, which can be increased via width (embedding dimension), depth (number of layers), or both, as well as compute (training data) and the number of training steps. In this paper, we are interested in scaling model capacity via the width $n$. This is motivated by the fact that most state-of-the-art language and vision models have large width. It is well known that as the width $n$ grows, the network initialization scheme and the learning rate should be adapted to avoid numerical instabilities and ensure efficient learning. For instance, the initialization variance should scale as $1/n$ to prevent arbitrarily large pre-activations as we increase the model width $n$ (e.g. He init [4]).
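As a quick numerical illustration of this point (our own check, not taken from the paper), one can compare pre-activation magnitudes with and without the $1/n$ initialization variance:

```python
import numpy as np

rng = np.random.default_rng(0)
for n in (2**8, 2**10, 2**12):
    h = rng.standard_normal(n)                         # Theta(1) hidden features
    W_scaled = rng.standard_normal((n, n)) * n**-0.5   # Var(W_ij) = 1/n (1/fan-in)
    W_unscaled = rng.standard_normal((n, n))           # Var(W_ij) = 1 (no scaling)
    print(n, np.abs(W_scaled @ h).mean(), np.abs(W_unscaled @ h).mean())
# Pre-activations stay Theta(1) with 1/n variance, but grow like sqrt(n) without it.
```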
To derive such scaling rules, a principled approach consists of analyzing statistical properties of key quantities in the model (e.g. pre-activations) as $n$ grows and then adjusting the initialization, the learning rate, and the architecture itself to achieve desirable properties in the limit $n\to\infty$ [10, 5, 13]. In this context, [26] introduces the Maximal Update Parameterization (or $\mu$P), a set of scaling rules for the initialization scheme, the learning rate, and the network architecture that ensure stability and maximal feature learning in the infinite-width limit. Stability is defined by $Y_{l}^{i}=\Theta(1)$ for all $l$ and $i$, where the asymptotic notation ‘$\Theta(.)$’ is with respect to width $n$ (see next paragraph for a formal definition), and feature learning is defined by $\Delta Y_{l}=\Theta(1)$, where $\Delta$ refers to the feature update after taking a gradient step. $\mu$P guarantees that these two conditions are satisfied at any training step $t$. Roughly speaking, $\mu$P specifies that hidden weights should be initialized with $\Theta(n^{-1/2})$ random weights, and weight updates should be of order $\Theta(n^{-1})$. Input weights should be initialized at $\Theta(1)$ and their updates should be $\Theta(1)$ as well, while the output weights should be initialized at $\Theta(n^{-1})$ and updated with $\Theta(n^{-1})$. These rules ensure both stability and feature learning in the infinite-width limit, in contrast to standard parameterization (exploding features if the learning rate is well tuned), and kernel parameterizations (e.g. Neural Tangent Kernel parameterization where $\Delta Y_{l}=\Theta(n^{-1/2})$, i.e. no feature learning in the limit).

### A.3 Proof of Lemma 1

In this section, we provide the formal proof of Lemma 1. The proof relies on the following assumption on the processed gradient $g_{A}$. This assumption was used in [44] to derive scaling rules for the optimal learning rates for the $A$ and $B$ weight matrices. Here, we use it to study the sensitivity of LoRA dynamics to initialization. We provide an intuitive discussion that shows why this assumption is realistic.

###### Assumption 1. With the same setup as Section 3, at training step $t$, we have $\underline{Z},d\bar{Z}=\Theta(1)$ and $g_{A}^{t}\underline{Z}=\Theta(n)$.

Assumption 1 consists of two parts: 1) $\underline{Z},d\bar{Z}=\Theta(1)$ and 2) $g_{A}^{t}\underline{Z}=\Theta(n)$. The first condition is mainly related to the pretraining parametrization, which we assume satisfies such conditions.131313There is a technical intricacy on this point. While $\underline{Z}$ depends only on pretraining, the Jacobian $d\bar{Z}$ depends on finetuning. However, under the stability conditions mentioned in Section 3, if $d\bar{Z}=\Theta(1)$, it should remain so during finetuning as well. The second condition is less intuitive, so let us provide an argument to justify why it is sound in practice. Let us study the product $g_{A}^{t}\underline{Z}$ in the simple case of Adam with no momentum, a.k.a. SignSGD, which is given by $g_{A}=\textrm{sign}\left(\frac{\partial\mathcal{L}}{\partial A}\right),$ where the sign function is applied element-wise. At training step $t$, we have $\frac{\partial\mathcal{L}_{t}}{\partial A}=\frac{\alpha}{r}B^{\top}_{t-1}d\bar{Z}^{t-1}\otimes\underline{Z}.$ Let $S^{t}=\frac{\alpha}{r}B^{\top}_{t-1}d\bar{Z}^{t-1}$.
Therefore we have $g_{A}=\textrm{sign}(S^{t}\otimes\underline{Z})=(\textrm{sign}(S^{t}_{i}\underline{Z}_{j}))_{1\leq i\leq r,1\leq j\leq n}.$ However, note that we also have $\textrm{sign}(S^{t}_{i}\underline{Z}_{j})=\textrm{sign}(S^{t}_{i})\textrm{sign}(\underline{Z}_{j}),$ and as a result $g_{A}^{t}=\textrm{sign}(S^{t})\otimes\textrm{sign}(\underline{Z}).$ Hence, we obtain $g_{A}^{t}\underline{Z}=(\textrm{sign}(\underline{Z})^{\top}\underline{Z})\textrm{sign}(S^{t})=\Theta(n),$ where we used the fact that $\textrm{sign}(\underline{Z})^{\top}\underline{Z}=\Theta(n)$. This intuition should in principle hold for the general variant of Adam with momentum as long as the gradient processing function (a notion introduced in [2]) roughly preserves the $\textrm{sign}(\underline{Z})$ direction. This reasoning can be made rigorous for general gradient processing functions using the Tensor Programs framework and taking the infinite-width limit, where the components of $g_{A},\underline{Z},d\bar{Z}$ all become iid. However, this necessitates an intricate treatment of several quantities in the process, which we believe is an unnecessary complication and does not serve the main purpose of this paper.

Lemma 1. _Under Assumption 1, the asymptotic behaviour of $Z_{A}^{t}$ and $B_{t}$ follows the recursive formulas_ $\displaystyle\gamma[Z_{A}^{t}]$ $\displaystyle=\max(\gamma[Z_{A}^{t-1}],\gamma[\eta]+1)$ $\displaystyle\gamma[B_{t}]$ $\displaystyle=\max(\gamma[B_{t-1}],\gamma[\eta]).$

###### Proof. At finetuning step $t$, the weights are updated as follows $A_{t}=A_{t-1}-\eta g_{A}^{t-1},\quad B_{t}=B_{t-1}-\eta g_{B}^{t-1}.$ Using the elementary operations with the $\gamma$-operator, we obtain $\gamma[Z_{A}^{t}]=\max(\gamma[Z_{A}^{t-1}],\gamma[\eta g_{A}^{t-1}\underline{Z}])=\max(\gamma[Z_{A}^{t-1}],\gamma[\eta]+\gamma[g_{A}^{t-1}\underline{Z}]).$ We conclude for $Z_{A}^{t}$ using Assumption 1. The formula for $\gamma[B_{t}]$ follows using the same techniques. ∎

### A.4 Proof of Theorem 1

Theorem 1. _Under Assumption 1, for $t$ fixed, with Init[A] and learning rate $\eta$, we have_ * • Stability _:_ $Z_{B}^{t}=\mathcal{O}(1)$ if and only if $\gamma[\eta]\leq-1/2$. * • Feature Learning _:_ $\Delta Z_{B}^{t}=\Theta(1)$ if and only if $\gamma[\eta]=-1/2$. In this case, we also have $\delta_{t}^{1},\delta_{t}^{2}=\Theta(1)$ (efficient feature learning, Equation 5). _Moreover, “internal” instability ( $Z_{A}^{t}=\Omega(1)$) occurs when $\gamma[\eta]\in(-1,-1/2]$. _

###### Proof. With Init[A], we have $\gamma[B_{0}]=-\infty$ and $\gamma[A_{0}\underline{Z}]=0$. As a result, we have for all $t$ $\displaystyle\gamma[A_{t}\underline{Z}]$ $\displaystyle=\max(0,\gamma[\eta]+1)$ $\displaystyle\gamma[B_{t}]$ $\displaystyle=\gamma[\eta].$ To achieve $Z_{B}=\mathcal{O}(1)$, we should therefore have $\gamma[\eta]+\max(0,\gamma[\eta]+1)\leq 0,$ which is equivalent to $\gamma[\eta]\leq-1/2$. This implies that the maximum learning rate that does not cause instability is $\Theta(n^{-1/2})$. Such a learning rate causes internal instability, i.e. the feature $Z_{A}$ explodes with width. Why? Because, with this learning rate, we have $\gamma[A_{t}\underline{Z}]=1/2$, i.e. $A_{t}\underline{Z}=\Theta(n^{1/2})$, which diverges as $n$ grows. However, this growth is compensated by the fact that $\gamma[B_{t}]=-1/2$, i.e. $B_{t}=\Theta(n^{-1/2})$. This internal-instability analysis is valid for any $\gamma[\eta]\in(-1,-1/2]$. With $\gamma[\eta]=-1/2$, feature learning is moreover efficient in the sense of Equation 5: $\delta_{t}^{1}=\Theta(1)$ and $\delta_{t}^{2}=\Theta(1)$.
To see this, recall that $\delta_{t}^{1}=B_{t-1}\Delta Z_{A}^{t}$, which yields $\gamma[\delta_{t}^{1}]=\gamma[B_{t-1}]+\gamma[\Delta Z_{A}^{t}]=\gamma[\eta]+(\gamma[\eta]+1)=0$ and $\gamma[\delta_{t}^{2}]=\gamma[\Delta B_{t}]+\gamma[Z_{A}^{t-1}]=\gamma[\eta]+\max(\gamma[\eta]+1,0)=0$. So both weights contribute significantly to feature updates, at the expense of a benign blow-up in $Z_{A}^{t}=A_{t}\underline{Z}$. ∎

### A.5 Proof of Theorem 2

Theorem 2. _Under Assumption 1, for $t$ fixed, with Init[B] and learning rate $\eta$, we have_ * • Stability _:_ $Z_{B}^{t}=\mathcal{O}(1)$ if and only if $\gamma[\eta]\leq-1$. * • Feature Learning _:_ $\Delta Z_{B}^{t}=\Theta(1)$ if and only if $\gamma[\eta]=-1$. _Moreover, efficient feature learning cannot be achieved with Init[B] for any choice of learning rate scaling $\gamma[\eta]$ (that does not violate the stability condition). More precisely, with $\Theta(n^{-1})$ learning rate, the limiting dynamics (when $n\to\infty$) are the same as if $B$ were not trained and only $A$ were trained._

###### Proof. Here, we show that the maximal learning rate that does not cause instability in the LoRA output features $Z_{B}$ is $\Theta(n^{-1})$, and that no internal instability occurs in this scenario. With Init[B], we have that $\gamma[B_{0}]=0$ and $\gamma[A_{0}\underline{Z}]=-\infty$. From Equation 3, we obtain that $\displaystyle\gamma[A_{t}\underline{Z}]$ $\displaystyle=\gamma[\eta]+1$ $\displaystyle\gamma[B_{t}]$ $\displaystyle=\max(0,\gamma[\eta]).$ As a result, LoRA output stability is achieved if and only if $\gamma[\eta]+1+\max(0,\gamma[\eta])\leq 0,$ which is equivalent to having $\gamma[\eta]\leq-1$. Moreover, with $\eta=\Theta(n^{-1})$ we have that $\gamma[\delta^{1}_{t}]=\gamma[B_{t-1}]+\gamma[\Delta Z^{t}_{A}]=0+\gamma[\eta]+1=0$ and $\gamma[\delta^{2}_{t}]=\gamma[\Delta B_{t}]+\gamma[Z^{t-1}_{A}]=\gamma[\eta]+0=-1$. As a result, feature learning is not efficient in this case, and the learning dynamics are asymptotically equivalent to not training matrix $B$ (because $\delta^{2}_{t}\rightarrow 0$). ∎

## Appendix B Additional Experiments

This section complements the empirical results reported in the main text. We provide the details of our experimental setup, and show the accuracy/loss heatmaps for several configurations.

### B.1 Empirical Details

#### B.1.1 Toy Example

In Figure 2, we trained a simple model with LoRA layers to verify the results of the analysis in Section 3.7. Here we provide the empirical details for these experiments.

##### Model. We consider a simple model given by $f(x)=W_{out}\phi(W_{in}x+(W_{h}+BA)\phi(W_{in}x)),$ where $W_{in}\in\mathbb{R}^{n\times d},W_{h}\in\mathbb{R}^{n\times n},W_{out}\in\mathbb{R}^{1\times n},A\in\mathbb{R}^{r\times n},B\in\mathbb{R}^{n\times r}$ are the weights, and $\phi$ is the ReLU activation function.

##### Dataset. Here, we used $d=5$, $n=1000$, and $r=20$ to simulate synthetic data (the teacher model). The synthetic dataset is generated by $X\sim\mathcal{N}(0,I_{d}),\ Y=f(X)$. The number of training examples is $N_{train}=1000$, and the number of test examples is $N_{test}=100$. The weights $W_{in},W_{h},W_{out},B,A$ are randomly sampled from a Gaussian distribution with normalized variance (1/fan-in).

##### Training. We train the model with AdamW with $\beta_{1}=0.9$ and $\beta_{2}=0.99$ for a range of values of $\eta$. The weights are initialized as follows: $W_{in}\sim\mathcal{N}(0,1/d),W_{h}\sim\mathcal{N}(0,1/n),W_{out}\sim\mathcal{N}(0,1/n)$ and fixed. Only the weight matrices $A,B$ are trainable.
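A condensed sketch of this protocol is given below, reusing the illustrative `ToyLoRAModel` module sketched in Section 3.7 (assumed importable from a local file `toy_lora.py`). The data generation and the learning-rate grid search are our reading of the setup above, not verbatim code from the paper; in particular, the teacher construction follows the main-text convention $W_{h}^{teacher}=0$ and the number of epochs is illustrative.

```python
import torch
from toy_lora import ToyLoRAModel  # the illustrative module sketched in Section 3.7

def make_teacher_data(d=5, n=1000, r_teacher=20, n_train=1000, n_test=100, seed=0):
    torch.manual_seed(seed)
    teacher = ToyLoRAModel(d=d, n=n, r=r_teacher, init="A")
    with torch.no_grad():
        teacher.W_h.zero_()                                      # W_h^teacher = 0
        torch.nn.init.normal_(teacher.B, std=r_teacher ** -0.5)  # inject teacher LoRA weights
        X = torch.randn(n_train + n_test, d)
        Y = torch.stack([teacher(x)[0] for x in X])
    return (X[:n_train], Y[:n_train]), (X[n_train:], Y[n_train:])

def finetune(width, init, lr, data, r=4, epochs=3):
    (X_tr, Y_tr), (X_te, Y_te) = data
    model = ToyLoRAModel(d=X_tr.shape[1], n=width, r=r, init=init)
    opt = torch.optim.AdamW([model.A, model.B], lr=lr, betas=(0.9, 0.99))
    for _ in range(epochs):
        for x, y in zip(X_tr, Y_tr):
            loss = (model(x)[0] - y).pow(2).mean()
            opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        return (sum((model(x)[0] - y).pow(2).mean() for x, y in zip(X_te, Y_te)) / len(X_te)).item()

# Learning-rate grid search per width and initialization, as in Figure 2.
data = make_teacher_data()
for init in ("A", "B"):
    for width in (128, 512):
        best_loss, best_lr = min((finetune(width, init, lr, data), lr) for lr in (1e-4, 1e-3, 1e-2))
        print(init, width, best_lr, best_loss)
```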
#### B.1.2 GLUE tasks with RoBERTa

For our experiments with RoBERTa models finetuned on GLUE tasks, we use the following setup.

Training Algorithm Details

Model | RoBERTa-Large
---|---
Learning Rates | $\\{2^{k}\times 10^{-5},\textrm{ for }k=0,1,2,\dots,10\\}$
$\beta_{1}$ | 0.9
$\beta_{2}$ | 0.999
$\varepsilon$ | $1\times 10^{-8}$
LR Schedule | Linear with Warmup Ratio 0.06
Weight Decay | 0.0
Train Batch Size | 4
Number of Epochs | 10

LoRA Hyperparameters

LoRA Rank | $8$
---|---
LoRA $\alpha$ | $16$
LoRA Dropout | $0.1$
Target Modules | ‘query, value’

Other Hyperparameters

Sequence Length | $T_{\text{target}}=128$
---|---
Random Seeds | 3
Precision | FP16

##### GPUs. Nvidia A10 with 24GB VRAM.

#### B.1.3 TinyLlama WikiText-2

For our experiments using the TinyLlama model finetuned on WikiText-2, we use the following setup, training with AdamW.

Training Algorithm Details

Learning Rates | $1\times 10^{-5},\,5\times 10^{-5},\,1\times 10^{-4},\,2\times 10^{-4},\,4\times 10^{-4},\,7\times 10^{-4},\,1\times 10^{-3},\,2\times 10^{-3}$
---|---
$\beta_{1}$ | 0.9
$\beta_{2}$ | 0.999
$\varepsilon$ | $1\times 10^{-6}$
LR Schedule | Linear with Warmup Ratio 0.03
Weight Decay | 0.0
Train Batch Size | 8
Number of Epochs | 1

LoRA Hyperparameters

LoRA Rank | $64$
---|---
LoRA $\alpha$ | $16$
LoRA Dropout | $0.0$
Target Modules | ‘q_proj, k_proj, v_proj, o_proj, up_proj, down_proj, gate_proj’

Other Hyperparameters

Sequence Length | $1024$
---|---
Random Seeds | $2$
Precision | BF16

##### GPUs. Nvidia A10 with 24GB VRAM.

#### B.1.4 Llama-7b Flan-v2

For our experiments using the Llama-7b model finetuned on a size-100k random subset of Flan-v2, we use the following setup, training with AdamW.

Training Algorithm Details

Learning Rates | $1\times 10^{-5},\,5\times 10^{-5},\,1\times 10^{-4},\,2\times 10^{-4},\,4\times 10^{-4},\,7\times 10^{-4},\,1\times 10^{-3}$
---|---
$\beta_{1}$ | 0.9
$\beta_{2}$ | 0.999
$\varepsilon$ | $1\times 10^{-6}$
LR Schedule | Linear with Warmup Ratio 0.03
Weight Decay | 0.0
Train Batch Size | 16
Number of Epochs | 1

LoRA Hyperparameters

LoRA Rank | $64$
---|---
LoRA $\alpha$ | $16$
LoRA Dropout | $0.0$
Target Modules | ‘q_proj, k_proj, v_proj, o_proj, up_proj, down_proj, gate_proj’

Other Hyperparameters

Sequence Length | $T_{\text{source}}=1536$, $T_{\text{target}}=512$
---|---
Random Seeds | 2
Precision | BF16

##### MMLU Evaluation. We evaluate average accuracy on MMLU using 5-shot prompting.

##### GPUs. Nvidia A10 with 24GB VRAM.

#### B.1.5 Llama-7b GSM8k

For our experiments using the Llama-7b model finetuned on the GSM8k training dataset, we use the following setup, training with AdamW.

Training Algorithm Details

Learning Rates | $1\times 10^{-5},\,5\times 10^{-5},\,1\times 10^{-4},\,2\times 10^{-4},\,4\times 10^{-4},\,7\times 10^{-4},\,1\times 10^{-3}$
---|---
$\beta_{1}$ | 0.9
$\beta_{2}$ | 0.999
$\varepsilon$ | $1\times 10^{-6}$
LR Schedule | Linear with Warmup Ratio 0.03
Weight Decay | 0.0
Train Batch Size | 16
Number of Epochs | 1

LoRA Hyperparameters

LoRA Rank | $64$
---|---
LoRA $\alpha$ | $16$
LoRA Dropout | $0.0$
Target Modules | ‘q_proj, k_proj, v_proj, o_proj, up_proj, down_proj, gate_proj’

Other Hyperparameters

Sequence Length | $T_{\text{source}}=1536$, $T_{\text{target}}=512$
---|---
Random Seeds | 2
Precision | BF16

##### GPUs. Nvidia A10 with 24GB VRAM.

### B.2 Additional Experiments

Figure 6: Same as Figure 3 with different widths.
# Stress testing $\Lambda$CDM with high-redshift galaxy candidates

Michael Boylan-Kolchin Department of Astronomy, The University of Texas at Austin, 2515 Speedway, Stop C1400, Austin, TX 78712-1205, USA<EMAIL_ADDRESS>

###### Abstract Early data from JWST have revealed a bevy of high-redshift galaxy candidates with unexpectedly high stellar masses. An immediate concern is the consistency of these candidates with galaxy formation in the standard cosmological model. In the $\Lambda$CDM paradigm, the stellar mass ($M_{\star}$) of a galaxy is limited by the available baryonic reservoir of its host dark matter halo. The mass function of dark matter halos therefore imposes an absolute upper limit on the number density $n(>M_{\star},z)$ and stellar mass density $\rho_{\star}(>M_{\star},z)$ of galaxies more massive than $M_{\star}$ at any epoch $z$. Here I show that the most massive galaxy candidates in JWST observations at $z\sim 7-10$ lie at the very edge of these limits, indicating an important unresolved issue with the properties of galaxies derived from the observations, how galaxies form at early times in $\Lambda$CDM, or within this standard cosmology itself.

###### keywords: cosmology: theory – galaxies: abundances – galaxies: high-redshift

## 1 Introduction

$\Lambda$CDM-like cosmological models share a similar basic assumption: baryons and dark matter are well-mixed at very early times, and as baryons collapse into dark matter halos, the maximum amount of baryonic material within a halo will be equal to $M_{\rm b}=f_{\rm{b}}\,M_{\rm halo}$, where $f_{\rm{b}}\equiv\Omega_{\rm b}/\Omega_{\rm m}$ is the cosmic baryon fraction. This, in turn, bounds the total stellar content of a dark matter halo: $M_{\star}(M_{\rm halo})\leq M_{\rm b}(M_{\rm halo})$. I show how this simple relation can be used as a stringent test of either cosmological models with minimal assumptions about galaxy formation or the reliability of photometric selection and physical characterization of high-redshift galaxy candidates. My analysis is in many ways similar to Behroozi & Silk (2018), who connected cumulative number densities of dark matter halos to high-redshift galaxy stellar mass functions (see also Steinhardt et al. 2016), though I also consider the maximal cumulative stellar mass density allowed in $\Lambda$CDM. The question of the consistency of stellar mass functions and the underlying cosmological dark matter halo mass functions has become considerably more urgent with the release of the first data from JWST, and with it, a swarm of high-redshift galaxy candidates (Adams et al., 2023; Atek et al., 2023; Castellano et al., 2022; Donnan et al., 2023; Finkelstein et al., 2022; Labbé et al., 2022; Morishita & Stiavelli, 2022; Naidu et al., 2022; Yan et al., 2023).

Figure 1: Limits on the abundance of galaxies as a function of redshift. Curves show the relationship between $M_{\star}$ and $z$ at fixed cumulative halo abundance (left) and fixed $\rho_{\rm b}(>M_{\rm halo})$, or equivalently fixed peak height $\nu$ (right). The most extreme L22 galaxy candidates are shown as blue stars, with uncertainties indicating 68% intervals (symmetric about the median) of the posterior probability distribution.
The existence of a galaxy with $M_{\star}$ at redshift $z$ requires that such galaxies have a cumulative comoving number density that is at most the number density shown in the left panel, as those galaxies must reside in a host halo of mass $M_{\rm halo}=M_{\star}/(f_{\rm{b}}\,\epsilon)$. The cumulative comoving number density corresponding to an observed $M_{\star}$ will likely be (much) smaller than is indicated here, as the curves are placed on the plot by assuming the physically maximal $\epsilon=1$. For smaller values of $\epsilon$, the curves in each panel move down relative to the points by a factor of $\epsilon$ (as indicated by black downward-facing arrows). The right panel demonstrates that even for the most conservative assumption of $\epsilon=1$, the data points correspond to very rare peaks in the density field, implying a limited baryonic reservoir that is in tension with the measured stellar masses of the galaxies.

## 2 Assumptions

I adopt the base $\Lambda$CDM model of Planck Collaboration et al. (2020), which assumes no spatial curvature and initial conditions that are Gaussian and adiabatic, as the standard cosmological model. I use best-fit values for cosmological parameters based on the Plik TT,TE,EE+lowE+lensing likelihood applied to the full-mission data. The relevant parameters and values for this work are the present-day Hubble constant, $H_{0}=67.32\,{\rm km\,s}^{-1}\,{\rm Mpc}^{-1}$; the $z=0$ density parameter for matter, $\Omega_{\rm m}=0.3158$ (which includes baryons, dark matter, and non-relativistic neutrinos); the slope of the primordial power spectrum of density fluctuations, $n_{\rm s}=0.96605$; the rms amplitude of the linear matter power spectrum at $z=0$ as measured in spheres of radius $8\,h^{-1}\,{\rm Mpc}$, $\sigma_{8}=0.8120$; and the cosmic baryon fraction, $f_{\rm{b}}\equiv\Omega_{\rm b}/\Omega_{\rm m}=0.156$ (Planck Collaboration et al., 2020). With these values, the linear matter power spectrum is specified at all times relevant for structure formation. The non-linear density field, home to the dark matter halos that host galaxies, must be computed numerically. However, a long line of research starting with Press & Schechter (1974) has been devoted to connecting the abundance of dark matter halos as a function of redshift and mass to the underlying linear matter power spectrum. In what follows, I use the Sheth & Tormen (1999) dark matter halo mass function $dn(M,z)/dM$ — the number of dark matter halos of mass $M$ per unit mass per unit comoving volume at redshift $z$ — to compute the comoving number density of halos above a given halo mass threshold, $n(>M_{\rm halo},z)=\int_{M_{\rm halo}}^{\infty}dM\,\frac{dn(M,z)}{dM}\,,$ (1) and the comoving mass density in halos more massive than $M_{\rm halo}$, $\rho_{\rm m}(>M_{\rm halo},z)=\int_{M_{\rm halo}}^{\infty}dM\,M\,\frac{dn(M,z)}{dM}\,.$ (2) These translate directly to upper limits on the statistics of galaxies through the straightforward assumption that the largest stellar content a halo can have given its cosmic allotment of baryons is $M_{\star,\rm{max}}=f_{\rm{b}}\,M_{\rm halo}$. More generally, we may write $M_{\star}=\epsilon\,f_{\rm{b}}\,M_{\rm halo}$, with $\epsilon\leq 1$ being the efficiency of converting baryons into stars.
The cumulative comoving number density of dark matter halos more massive than $M_{\rm halo}$ thus sets an upper limit on the comoving number density of galaxies more massive than $M_{\star}$, $n_{\rm gal}(>M_{\star})\leq n_{\rm halo}(>M_{\star}/f_{\rm{b}})\,.$ (3) Similarly, the cumulative comoving density of collapsed mass sets an upper limit on the density of collapsed baryons, $\rho_{\rm b}(>M_{\rm halo})=f_{\rm{b}}\,\rho_{\rm m}(>M_{\rm halo})$, which in turn strictly bounds the comoving mass density of stars contained in halos more massive than $M_{\rm halo}$, $\rho_{\star}(>M_{\rm halo})\leq f_{\rm{b}}\,\rho_{\rm m}(>M_{\rm halo})\,,$ (4) and the density of stars contained in galaxies above a given $M_{\star}$, $\rho_{\star}(>M_{\star})\leq f_{\rm{b}}\,\rho_{\rm m}(>M_{\star}/f_{\rm{b}})\,.$ (5)

## 3 Results

Figure 2: Stellar mass density limits. The comoving stellar mass density contained within galaxies more massive than $M_{\star}$ at $z\approx 9.1$ (left) and $z\approx 7.5$ (right) for three values of the assumed conversion efficiency $\epsilon$ of a halo’s cosmic allotment of baryons into stars. Only if all available baryons in all halos with enough baryons to form the galaxies reported by L22 have indeed been converted into stars by that point — an unrealistic limit — is it possible to produce the stellar mass density in the highest $M_{\star}$ bin at $z\approx 9$ measured by L22 in a typical volume of a $\Lambda$CDM Universe with the Planck 2020 cosmology. Results are similar at $z\approx 7.5$. For more realistic values of $\epsilon$, the required baryon reservoir is substantially larger than the theoretical maximum in this cosmology. When considering shot noise and sample variance errors (which comprise the plotted uncertainties on the L22 data points in each panel), the measurements are consistent with the base $\Lambda$CDM model if $\epsilon>0.57$, which would still imply incredibly efficient star formation in the high-redshift Universe.

The left panel of Figure 1 shows the relationship between the maximal inferred stellar mass for a given $M_{\rm halo}$, $M_{\star}=f_{\rm{b}}\,M_{\rm halo}$ (i.e., assuming the maximal $\epsilon=1$), and redshift $z$ for fixed cumulative comoving halo number densities ranging from $10^{-10}\,{\rm Mpc}^{-3}$ (light gray) to $10^{-2}\,{\rm Mpc}^{-3}$ (yellow). The curves evolve rapidly with redshift, with the maximal stellar mass corresponding to a fixed cumulative comoving halo number density increasing by two orders of magnitude from $z=20$ to $z=8$. This rapid rise indicates that the mass reservoir available for the most massive galaxies increases quickly with redshift at fixed halo number density. The two most massive high-redshift galaxy candidates from the Labbé et al. (2022, hereafter L22) sample, at $z\approx 7.5$ ($M_{\star}\approx 10^{11}\,M_{\odot}$) and $z\approx 9.1$ ($M_{\star}\approx 10^{10.5}\,M_{\odot}$), are shown as blue stars. These objects are unexpectedly massive, with stellar content reflective of halos that have cumulative comoving number densities no higher than $\approx 10^{-5.2}\,{\rm Mpc}^{-3}$ (if $\epsilon=1$); for $\epsilon=0.32$ (0.1), the implied number density is $\approx 10^{-7}$ ($10^{-9.3}$) ${\rm Mpc}^{-3}$. By comparison, the candidates were found in a survey of 38 arcmin$^{2}$, a volume of $V\approx 10^{5}\,{\rm Mpc}^{3}$ at each of the redshift bins — $7<z<8.5$ and $8.5<z<10$ — considered by L22.
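These limits can be reproduced, up to details of the transfer function and halo definition, with the hmf package acknowledged at the end of this paper. The sketch below is only an illustration of equations (3) and (5) under the parameters of Section 2; the keyword names and unit conventions (masses in $M_{\odot}/h$, cumulative densities in $h^{3}\,{\rm Mpc}^{-3}$ and $h^{2}\,M_{\odot}\,{\rm Mpc}^{-3}$) are assumptions based on hmf's documented interface rather than code used for the figures.

```python
import numpy as np
from hmf import MassFunction  # halo mass function calculator acknowledged in this paper

f_b, h = 0.156, 0.6732        # cosmic baryon fraction and H0/100 used in the text
eps = 1.0                     # star-formation efficiency; eps = 1 gives the absolute limit

def limits(m_star, z):
    """Upper limits on n_gal(>M_star) and rho_star(>M_star) from equations (3) and (5)."""
    m_halo = m_star / (f_b * eps)                     # M_sun
    # Keyword names below are assumed from hmf's documented interface.
    mf = MassFunction(z=z, Mmin=8, Mmax=16, hmf_model="SMT",
                      sigma_8=0.8120, n=0.96605,
                      cosmo_params={"H0": 67.32, "Om0": 0.3158, "Ob0": 0.3158 * f_b})
    i = np.searchsorted(mf.m, m_halo * h)             # hmf masses are in M_sun/h
    n_max = mf.ngtm[i] * h**3                         # comoving Mpc^-3
    rho_star_max = f_b * mf.rho_gtm[i] * h**2         # M_sun Mpc^-3
    return n_max, rho_star_max

print(limits(10**10.5, z=9.1))   # most massive z ~ 9.1 candidate in L22
print(limits(10**11.0, z=7.5))   # most massive z ~ 7.5 candidate in L22
```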
The right panel of Figure 1 recasts the issue in terms of the scarcity of systems as measured by cumulative mass density. In extended Press-Schechter models, the peak height $\nu(M_{\rm halo},z)=\delta_{\rm c}/\sigma(M_{\rm halo},z)$ of an object — where $\delta_{\rm c}\approx 1.7$ is the collapse threshold and $\sigma^{2}(M_{\rm halo},z)$ is the variance of the linear density field at redshift $z$ smoothed on a scale containing an average mass of $M_{\rm halo}$ — is a measure of the fraction of mass in the Universe contained in virialized objects more massive than $M_{\rm halo}$ at redshift $z$. Typical halos at $z$ have $\nu=1$, which corresponds to 24% of the mass in the Universe residing in halos at least that massive; larger values of $\nu$ indicate increasingly massive and therefore rare peaks in the density field at that epoch. The comoving baryon density for each peak height in the figure is given in the legend; multiplying this number by the volume of a survey gives the total amount of baryons contained above the mass corresponding to that peak height and redshift. The L22 galaxies have peak heights of at least $\nu=4.5$ (assuming $\epsilon=1)$, meaning that at most a fraction $6.2\times 10^{-5}$ of the baryons in the Universe are contained in halos massive enough to host these galaxies. For reference, $\nu=4.5$ at $z=0$ corresponds to $M_{\rm halo}\approx 5\times 10^{15}\,M_{\odot}$. Adopting more reasonable efficiencies of $\epsilon=0.32$ or $0.1$ results in rarer peaks with $\nu\approx 5.4$ or $6.4$. Figure 2 shows the cumulative stellar mass density reported by L22 at $z\approx 9$ (left) and $z\approx 7.5$. The data, which come from individual massive objects, lie at the extreme of $\Lambda$CDM expectations even in the most optimistic scenario: at both redshifts, the measurements lie at the theoretical limit of $\rho_{\star}(>M_{\star})=f_{\rm{b}}\,\rho_{\rm m}(>M_{\star}/f_{\rm{b}})$, implying physically implausible values of $\epsilon(z\approx 9)=0.99$ and $\epsilon(z\approx 7.5)=0.84$. When considering the $1\,\sigma$ error (which incorporates uncertainties in the stellar mass estimation, Poisson fluctuations, and sample variance), the data become marginally consistent with the available baryon reservoirs for an efficiency of $\epsilon(z\approx 9)\geq 0.57$, which is likely an unrealistically high value. Assuming a more plausible value of $\epsilon=0.1$ or $0.32$ yields a strong discrepancy with $\Lambda$CDM expectations at both redshifts even when considering observational uncertainties. ## 4 Discussion The first glimpse of high-redshift galaxy formation with JWST has revealed surprisingly massive galaxy candidates at early cosmic times. These systems provide a way to test a bedrock property of the $\Lambda$CDM model (or, e.g., assumptions in derivations of stellar masses or the viability of high-redshift galaxy candidates): the stellar content of halos should not exceed the available baryonic material in those halos. This requirement does not rely on assumptions such as abundance matching but rather is simply a statement about the distribution of virialized mass in the Universe as a function of redshift and the baryonic reservoirs associated with those virialized halos: galaxies of mass $M_{\star}$ can only form if halos of mass $M_{\star}/(\epsilon\,f_{\rm{b}})$ have formed. 
It is also more stringent than the requirement that the observed galaxy UV luminosity function not exceed the theoretical maximum coming from a nearly instantaneous (10 Myr) conversion of a halo’s full baryonic reservoir into stars (Mason et al., 2022), as it is an integral constraint as opposed to a differential one. The massive, high-redshift galaxy candidates cataloged in L22 lie at or just beyond the stellar mass density constraint in $\Lambda$CDM. There are several sources of observational uncertainty that enter these results. The flux calibration of NIRCam is continually being updated; L22 use calibrations that take into account updated detector offsets that are not yet part of the official JWST reduction pipeline (see, e.g., Boyer et al. 2022 for examples of this effect and Nardiello et al. 2022 for related discussions of empirical point spread function modeling for JWST). With NIRCam photometry, a Balmer or $4000$ Å break at $z\sim 5$ can be mistaken for a Lyman-$\alpha$ break at $z\gtrsim 12$ (Zavala et al., 2023); the L22 sample was selected to contain both Lyman and Balmer breaks, however, and is at low enough redshift (relative to $z\sim 15$ sources) that NIRCam filters can typically exclude $z\sim 5$ photometric solutions. The resulting photometric redshift estimates have single, narrow ($\sigma_{z}\approx 0.25$) peaks. The masses of the galaxies are computed using the median of four methods for fitting the photometry (see L22 for details) and assume a Salpeter (1955) IMF. Different assumptions about the photometry (in particular, properties of nebular emission lines) or IMF could affect the derived stellar masses, with the latter being a particularly intriguing possibility. The mass of the candidate at $z\sim 7.5$ was also corrected for the possibility of amplification by mild gravitational lensing; this effect is estimated by L22 to be $0.15$ dex, and the reported mass (and stellar mass density) of this object are therefore reduced by this amount to compensate. The error bars in Figure 2 include errors in the volume estimates coming from both sample variance and Poisson noise, with the latter always being dominant in the regime considered here (Trenti & Stiavelli, 2008; Behroozi & Silk, 2018). The discrepancy between the observed high-redshift galaxy candidates and $\Lambda$CDM expectations is robust to uncertainties in cosmological parameters in the base $\Lambda$CDM model: the precision on each of the relevant parameters is at the $\lesssim 1\%$ level (Planck Collaboration et al., 2020). Intriguingly, extensions to the base $\Lambda$CDM with enhanced values of $\sigma_{8}$ and the physical matter density $\Omega_{\rm m}h^{2}$ — such as some Early Dark Energy (EDE) models whose aim is to resolve the Hubble Tension — predict earlier structure formation and a higher abundance of halos at fixed mass at high redshift (Klypin et al., 2021), which would enhance the baryonic reservoirs available for forming early massive galaxies. Taking the best-fit EDE parameters from Smith et al. (2022), the cumulative comoving baryonic density contained in halos more massive than $M_{\rm halo}=M_{\star}/f_{\rm{b}}$ for the most massive L22 galaxy candidate at $z\approx 9.1$ is a factor of 3.3 larger in EDE than in base $\Lambda$CDM, which is non-negligible; the L22 data points would then lie at $\epsilon=0.72$ instead of $\epsilon=0.99$.
However, this EDE cosmology is in stronger tension with values of $S_{8}=\sigma_{8}\,\sqrt{\Omega_{\rm m}/0.3}$ measured at low redshift and predicts that the Universe is $\approx 13$ billion years old (as opposed to $13.8$ billion years in the base $\Lambda$CDM model), which is in moderate tension with the measured ages of ultra-faint galaxies and globular clusters (Boylan-Kolchin & Weisz, 2021). At the redshifts studied here, $z\approx 7-10$, the Sheth-Tormen mass function overestimates the abundance of massive halos by $20-50\%$ relative to numerical simulations (Reed et al., 2003; Despali et al., 2016; Shirasaki et al., 2021; Wang et al., 2022), meaning their true abundance at high redshift is likely lower than the Sheth-Tormen prediction and the constraints derived here are conservative. However, the lack of detailed comparisons between theory and simulations at high redshifts and high masses points to the importance of continued theoretical work in understanding the universality and applicability of halo mass function parameterizations in regimes relevant for JWST observations (and other forthcoming observatories). The tension discussed in this paper is straightforward: the masses measured by L22 are only consistent with expectations from the standard cosmological model at the reported redshifts if star formation in the earliest phases of galaxy formation is incredibly efficient ($\epsilon\geq 0.57$). In the low-redshift Universe, such efficiencies are never seen, with $\epsilon\lesssim 0.2$ for all galaxies. The theoretical expectation is that efficiencies do indeed increase at high redshift (Tacchella et al., 2018), though $\epsilon\gtrsim 0.57$ is still highly extreme and likely implausibly high. If the explanation of the L22 galaxies is indeed a very high star formation efficiency, it implies that the star formation histories of such systems must rise steeply with time, following the behavior of the baryon reservoirs inside of virialized structures in $\Lambda$CDM. The results presented here could also be explained if the stellar IMF differs substantially from the assumed Salpeter form, the flux calibration of NIRCam changes from the latest post-flight determinations, or the volumes currently surveyed turn out to be highly atypical. If none of these explanations holds up and these massive galaxies are spectroscopically confirmed, they will pose a serious challenge for $\Lambda$CDM structure formation with parameters given by Planck Collaboration et al. (2020) because they signify the existence of a larger reservoir of collapsed baryons than is possible in this model. Forthcoming wider-field JWST surveys, along with JWST spectroscopy of massive galaxy candidates, should be able to quickly confirm or refute the existence of this tension. Furthermore, the compatibility of any additional high-redshift galaxies or galaxy candidates discovered in JWST observations with $\Lambda$CDM expectations can be assessed in a straightforward way via Fig. 1. If analysis of JWST data continues to reveal the presence of strikingly massive galaxies at very early cosmic epochs, more exciting surprises lie ahead for the fields of galaxy formation and cosmology. ## Data Availability Data from L22, including $M_{\star}$ estimates and photometric redshifts, are available at https://github.com/ivolabbe/red-massive-candidates; this paper uses data from sample_revision3_2207.12446.ecsv, commit 59fbbfa (from 2023.01.02).
All calculations that go into the figures in this paper will be made publicly available at https://github.com/mrbk/JWST_MstarDensity. ## Acknowledgments This paper is dedicated to the memory of Steven Weinberg, who would have been thrilled to see how well JWST is working and excited to learn what it will reveal about cosmology and galaxy formation across a variety of cosmic epochs. I thank Pieter van Dokkum and Ivo Labbé for sharing data from L22, Steve Finkelstein, Pawan Kumar, and Dan Weisz for helpful discussions, and the referees of the paper for providing helpful comments that improved the clarity of this paper. I acknowledge support from the University of Texas at Austin through the Faculty Research Assignment program, NSF CAREER award AST-1752913, NSF grants AST-1910346 and AST-2108962, NASA grant 80NSSC22K0827, and HST- AR-15809, HST-GO-15658, HST-GO-15901, HST-GO-15902, HST-AR-16159, and HST- GO-16226 from the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS5-26555. I am very grateful to the developers of the python packages that I used in preparing this paper: numpy (Harris et al., 2020), scipy (Virtanen et al., 2020), matplotlib (Hunter, 2007), hmf (Murray et al., 2013; Murray, 2014), and ipython (Pérez & Granger, 2007). This research has made extensive use of NASA’s Astrophysics Data System (http://adsabs.harvard.edu/) and the arXiv e-Print service (http://arxiv.org). ## References * Adams et al. (2023) Adams N. J., et al., 2023, MNRAS, 518, 4755 * Atek et al. (2023) Atek H., et al., 2023, MNRAS, 519, 1201 * Behroozi & Silk (2018) Behroozi P., Silk J., 2018, MNRAS, 477, 5382 * Boyer et al. (2022) Boyer M. L., et al., 2022, Research Notes of the American Astronomical Society, 6, 191 * Boylan-Kolchin & Weisz (2021) Boylan-Kolchin M., Weisz D. R., 2021, MNRAS, 505, 2764 * Castellano et al. (2022) Castellano M., et al., 2022, ApJ, 938, L15 * Despali et al. (2016) Despali G., Giocoli C., Angulo R. E., Tormen G., Sheth R. K., Baso G., Moscardini L., 2016, MNRAS, 456, 2486 * Donnan et al. (2023) Donnan C. T., et al., 2023, MNRAS, 518, 6011 * Finkelstein et al. (2022) Finkelstein S. L., et al., 2022, ApJ, 940, L55 * Harris et al. (2020) Harris C. R., et al., 2020, Nature, 585, 357 * Hunter (2007) Hunter J. D., 2007, Computing In Science & Engineering, 9, 90 * Klypin et al. (2021) Klypin A., et al., 2021, MNRAS, 504, 769 * Labbé et al. (2022) Labbé I., et al., 2022, arXiv:2207.12446 [astro-ph], p. arXiv:2207.12446 * Mason et al. (2022) Mason C. A., Trenti M., Treu T., 2022, arXiv:2207.14808 [astro-ph], p. arXiv:2207.14808 * Morishita & Stiavelli (2022) Morishita T., Stiavelli M., 2022, arXiv:2207.11671 [astro-ph], p. arXiv:2207.11671 * Murray (2014) Murray S., 2014, HMF: Halo Mass Function calculator, Astrophysics Source Code Library, record ascl:1412.006 (ascl:1412.006) * Murray et al. (2013) Murray S. G., Power C., Robotham A. S. G., 2013, Astronomy and Computing, 3, 23 * Naidu et al. (2022) Naidu R. P., et al., 2022, ApJ, 940, L14 * Nardiello et al. (2022) Nardiello D., Bedin L. R., Burgasser A., Salaris M., Cassisi S., Griggio M., Scalco M., 2022, MNRAS, 517, 484 * Pérez & Granger (2007) Pérez F., Granger B. E., 2007, Computing in Science and Engineering, 9, 21 * Planck Collaboration et al. (2020) Planck Collaboration et al., 2020, A&A, 641, A6 * Press & Schechter (1974) Press W. H., Schechter P., 1974, ApJ, 187, 425 * Reed et al. 
(2003) Reed D., Gardner J., Quinn T., Stadel J., Fardal M., Lake G., Governato F., 2003, MNRAS, 346, 565 * Salpeter (1955) Salpeter E. E., 1955, ApJ, 121, 161 * Sheth & Tormen (1999) Sheth R. K., Tormen G., 1999, MNRAS, 308, 119 * Shirasaki et al. (2021) Shirasaki M., Ishiyama T., Ando S., 2021, ApJ, 922, 89 * Smith et al. (2022) Smith T. L., Lucca M., Poulin V., Abellan G. F., Balkenhol L., Benabed K., Galli S., Murgia R., 2022, Phys. Rev. D, 106, 043526 * Steinhardt et al. (2016) Steinhardt C. L., Capak P., Masters D., Speagle J. S., 2016, ApJ, 824, 21 * Tacchella et al. (2018) Tacchella S., Bose S., Conroy C., Eisenstein D. J., Johnson B. D., 2018, ApJ, 868, 92 * Trenti & Stiavelli (2008) Trenti M., Stiavelli M., 2008, ApJ, 676, 767 * Virtanen et al. (2020) Virtanen P., et al., 2020, Nature Methods, 17, 261 * Wang et al. (2022) Wang Q., Gao L., Meng C., 2022, MNRAS, 517, 6004 * Yan et al. (2023) Yan H., Ma Z., Ling C., Cheng C., Huang J.-S., 2023, ApJ, 942, L9 * Zavala et al. (2023) Zavala J. A., et al., 2023, ApJ, 943, L9
# For how many iterations should we run Markov chain Monte Carlo?

Charles C. Margossian, Center for Computational Mathematics, Flatiron Institute, New York, NY; Andrew Gelman, Department of Statistics and Political Science, Columbia University, New York, NY

###### Abstract

Standard Markov chain Monte Carlo (MCMC) admits three fundamental control parameters: the number of chains, the length of the warmup phase, and the length of the sampling phase. These control parameters play a large role in determining the amount of computation we deploy. In practice, we need to walk a line between achieving sufficient precision and not wasting precious computational resources and time. We review general strategies to check the length of the warmup and sampling phases, and examine the three control parameters of MCMC in the contexts of CPU- and GPU-based hardware. Our discussion centers around three tasks: (1) inference about a latent variable, (2) computation of expectation values and quantiles, and (3) diagnostics to check the reliability of the estimators. This chapter begins with general recommendations on the control parameters of MCMC, which have been battle-tested over the years and often motivate defaults in Bayesian statistical software. Usually we do not know ahead of time how a sampler will interact with a target distribution, and so the choice of MCMC algorithm and its control parameters tends to be based on experience and re-evaluated after simulations have been obtained and analyzed. The second part of this chapter provides a theoretical motivation for our recommended approach, with pointers to some concerns and open problems. We also examine recent developments on the algorithmic and hardware fronts, which motivate new computational approaches to MCMC.

“All human wisdom is contained in these two words: wait and hope.” — Alexandre Dumas, The Count of Monte Cristo

## Part I: Practical considerations

If well constructed, an MCMC sampler converges to a stationary distribution $p$ of interest and, given enough compute time, the resulting Monte Carlo estimators can achieve any precision with high probability. In practice, the asymptotic properties of MCMC hold only approximately, since we always work under a finite computational budget. Hence we must carefully reason about the behavior of finite nonstationary Markov chains and understand how approximate samples—ultimately not drawn from the stationary distribution $p$—can still produce useful Monte Carlo estimators. Approximate convergence is a means of controlling the bias of our estimators, and so our goal is not to generate samples from $p$ but rather from an approximation $\hat{p}$ that incurs an acceptable bias. We must then generate enough samples to reduce the variance of our estimators. These objectives respectively inform the warmup and the sampling phases of MCMC.

## 1 Recommendations for an MCMC workflow

Figure 1: Standard MCMC workflow. We initialize at least three Markov chains from a starting distribution $p_{0}$. We then run a warmup phase, during which the Markov chains move from $p_{0}$ to a warm distribution $\hat{p}$, close to the target distribution $p$. The warmup is primarily used to reduce bias and tune the sampler. We then discard the warmup iterations and run a sampling phase, during which we collect samples to reduce variance.

We begin by reviewing a typical MCMC workflow (Figure 1).
Many of these recommendations are implemented in state-of-the-art statistical software for Bayesian analysis, including Stan (Carpenter et al., 2017), PyMC (Salvatier et al., 2016), and others, and have been applied across a range of statistical problems.

1. Simulate three or more chains with distinct initializations, for instance based on a crude estimate of the target distribution $p$. In a Bayesian context, we may elect to use the prior on the model parameters to generate starting points. We can typically run 4 or 8 chains in parallel on a modern laptop and more when using a computer cluster. Having multiple chains with different initializations and seeds can help diagnose issues with convergence, and reduces the variance of our Monte Carlo estimator.

2. Each Markov chain should be split into a warmup phase, during which we allow the MCMC sampler to learn its tuning parameters (Andrieu and Thoms, 2008), and a sampling phase during which we freeze the tuning parameters. Only use the sampling iterations to construct Monte Carlo estimates. The warmup iterations may be saved to diagnose issues with our Markov chains. As a starting point, we recommend splitting the chains roughly half-way into a warmup and a sampling phase. The warmup phase allows us to learn the tuning parameters of the sampler and reduce the bias due to an imperfect initialization; the sampling phase's primary role, on the other hand, is to reduce the variance of our estimators.

3. Check that, despite the distinct seeds and initializations, the Markov chains all produce estimates which are in reasonable agreement. This can be assessed using visual tools, such as trace and density plots, or convergence diagnostics such as $\widehat{R}$ (Gelman and Rubin, 1992; Vehtari et al., 2021). In addition, it can be useful to split each chain into subchains to identify transient behaviors (Gelman et al., 2014). Some samplers, such as Hamiltonian Monte Carlo (HMC) (Neal, 2011; Betancourt, 2018), provide additional diagnostics such as divergent transitions, which can indicate poor mixing. These diagnostics are complementary, in the sense that they may identify failures even when $\widehat{R}$ looks fine. In general, it is a good idea to use multiple diagnostics, since no diagnostic alone is completely reliable (Brooks et al., 2003).

4. If approximate convergence is diagnosed, examine the variance of the Monte Carlo estimators using the Monte Carlo standard error (MCSE) and the effective sample size (ESS) for any quantity of interest. Be mindful that the ESS is specific to a Monte Carlo estimator and so can vary between two estimators, even when they are computed using samples from the same Markov chains. You should have a rough understanding of the precision required for your problem: a target number of significant digits can be expressed as a target ESS (Vats et al., 2019; Vehtari, 2022).

5. For a complicated model, consider first fitting a simpler model and building up. The simpler models can serve as a baseline. A short run of MCMC can also help us spot obvious errors and get a better sense of how much computation may be required.

A difficult question is what to do when the Markov chains fail to converge (reduce bias) or mix (reduce variance). Throwing more computation at the problem—either by running more iterations, changing the tuning parameters of the sampler, or changing the sampler altogether—may help but it is not always the best approach.
In our experience, computational problems can indicate modeling problems (the “folk theorem of computational statistics”) and we may try choosing better starting points, debugging the model, reparameterizing, improving identification (simplifying or changing the model or adding prior information if available), etc.; see Gelman et al. (2020) for a review. Beyond that, since the development of Stan and more broadly of popular Bayesian inference software based on HMC (Štrumbelj et al., 2023), there have been several methodological and hardware developments which can motivate departures from the current defaults.

## 2 Inferential goals

Sound use of computation requires understanding how much precision is useful at various stages of an analysis and estimating this precision. Following Gelman and Shirley (2011), we distinguish two sorts of inference or computation tasks:

* Task 1. Inference about an unknown quantity $\theta$, or more generally any quantity of interest $f(\theta)$. Such inference uses samples to summarize $p(f(\theta))$ (in a Bayesian context, the measure of interest is the posterior $p(f(\theta)\mid y)$, but MCMC is more generally applicable to any distribution of $f(\theta)$; to keep our presentation general, we write $p(f(\theta))$, with the understanding that it might be conditional on data) and typically combines a measure of central tendency, such as the mean and median, with a measure of uncertainty, such as the variance and 90% interval.

* Task 2. Computation of $\mathbb{E}(f(\theta))$ or quantiles of $f(\theta)$ with respect to the measure $p$.

There is overlap between the two tasks: summarizing $p(f(\theta))$ with the mean and variance essentially requires constructing Monte Carlo estimates of $\mathbb{E}f(\theta)$ and $\mathbb{E}\left(f(\theta)^{2}\right)$. This is what is commonly done in Bayesian statistics, where our goal is inferential. How much we learn about $f(\theta)$ is limited by both the Monte Carlo standard error (MCSE) and the posterior standard deviation. It is useful for the MCSE to be small relative to the posterior standard deviation only to some extent: at some point, further decreasing the MCSE no longer improves our understanding of $f(\theta)$, because the uncertainty is dominated by the posterior standard deviation, which in turn reflects the limited information we have from our model and data. In other problems, the goal is not to learn about $f(\theta)$ but to actually compute $\mathbb{E}\left(f(\theta)\right)$, as is the case in statistical physics, where expectations correspond to physically meaningful quantities (Landau and Binder, 2009). Then the uncertainty in our results stems from the MCSE only, which we can work harder to reduce.

#### Example from epidemiology: the number of iterations depends on the inferential goal.

To illustrate this point, we provide an example from epidemiology. Our goal is to characterize the dynamics of an influenza A (H1N1) outbreak at a British boarding school. The data consists of the daily number of students in bed over two weeks and is available in the R package outbreaks (Campbell et al., 2023). We fit a negative binomial distribution parameterized by a susceptible-infected-recovered (SIR) model. This pedagogical example confronts us with several practical questions and serves as the foundation for many models used to study the Covid-19 outbreak; see Grinsztajn et al. (2021) for more details. An epidemiologist can learn many things from such an analysis.
Here we focus on the recovery time $T_{\text{R}}$, which is derived as a function of the model parameters. With what precision should we estimate this quantity? How much computation should we invest in our Bayesian inference? We fit the model using the out-of-the-box HMC sampler provided by Stan, which runs 1000 warmup iterations, which we discard, and 1000 sampling iterations. We run 4 chains in parallel. Table 1 reports the posterior median and 90% posterior interval for $T_{\text{R}}$, obtained across 5 runs of MCMC. Each run entails 4 Markov chains with 1000 warmup and 1000 sampling iterations, but each time we use a different seed. All 5 runs produce consistent estimates of the posterior median to at least 2 significant digits. From one run to the other we get slight disagreements for estimates of the 5${}^{\text{th}}$ and 95${}^{\text{th}}$ posterior quantiles, which suggests that, even after generating thousands of samples, we only have about one significant digit of precision. This is not surprising: tail quantities, such as extreme quantiles, are more difficult to estimate than central quantities such as the median. This is not a concern for this particular application, since we do not need a precise estimate of the recovery time. Medical staff only need to know that symptoms will be experienced most likely for 2 days, perhaps 1 or 3 days in some unlikely cases.

| Using 4000 draws | Using 40 draws |
| --- | --- |
| 1.85 [1.61, 2.12] | 1.79 [1.45, 2.18] |
| 1.85 [1.60, 2.11] | 1.82 [1.59, 2.09] |
| 1.85 [1.62, 2.11] | 1.80 [1.62, 2.04] |
| 1.85 [1.62, 2.11] | 1.86 [1.66, 2.25] |
| 1.85 [1.62, 2.12] | 1.84 [1.65, 2.10] |

Table 1: Estimated posterior median and 90% interval for the recovery time $T_{\text{R}}$ from an influenza infection. Results from multiple MCMC runs with (left) a long sampling phase and (right) a short sampling phase.

If we wanted a better understanding of a patient's recovery time, running more simulations would not be helpful. The uncertainty in our inference comes not from the MCSE but from the posterior standard deviation, which is in turn driven by the limited information in our data and the natural variability among patients. A next step might be to construct a more sophisticated model with patient-level predictors, and collect more data to fit this model. With regard to the quality of our Monte Carlo estimator, we may wonder if Stan's defaults were not overkill for this problem. What if the sampling phase only entailed 10 iterations rather than 1000? With 4 chains, this would mean generating a mere 40 samples, which in our academic zeitgeist seems unacceptable. Looking at the Monte Carlo estimates thus obtained (Table 1), we see that the results exhibit more variance. Yet this much cruder estimate provides just as much medical insight as the more precise calculation, and all that for much less computation. Accounting for the chains' autocorrelation, this cheap estimate has an effective sample size (ESS) of $\sim$20, which agrees with MacKay's (in)famous comment that an ESS as small as a dozen can be sufficient (Mackay, 2003, chapter 29), at least for the purpose of doing inference (task 1). In the next section, we will continue this example and discuss how much computation we need to assess the quality of our Monte Carlo estimators. $||$

This example is somewhat provocative (but only somewhat). Depending on the application, we may require more precise estimates which warrant a long sampling phase.
Drawing from our own experience, we once generated 12,000 sampling iterations for a genomic study (Piironen and Vehtari, 2017; Margossian et al., 2020). One of our goals was to identify explanatory predictors, which required characterizing a high-dimensional and multimodal posterior distribution, and estimating extreme quantiles. (Admittedly we did not conduct a careful analysis to see how much our scientific conclusions would change if we used a shorter sampling phase. We did, however, find that the results produced by a fast variational inference approximation were less accurate and did change our insights in a meaningful manner.) Studies in statistical physics, where task 2 (computation of expectation values) is the goal rather than a means to task 1 (inference), can also justify deploying a large amount of computation. Another example to muse about is Buffon's needle experiment to estimate $\pi$ (Laplace, 1812). (Buffon himself did not try to estimate $\pi$; it was Laplace who proposed to use a Monte Carlo estimator based on Buffon's study of a dropped needle; see Badger, 1994.) Granted, this is direct Monte Carlo, without any Markov chain, but the point still stands: the number of times we drop the needle may vary greatly depending on the number of digits with which we want to estimate $\pi$. All in all, understanding the precision needed from our Monte Carlo estimators should lead to more computationally sensible deployments of MCMC, which may depart from general recommendations and software defaults.

When MCMC is used to fit a model, there can be many quantities of interest. The same reasoning as above applies, with the added complication that we should now have a sense of how the inferences will be used. For example, when training a Bayesian neural network, our goal is not to do inference on the weights of the network but rather to make accurate predictions. Then our effort should go into summarizing the posterior distribution of the predictions themselves, or perhaps even some decision based on those predictions, and not on the weights. The required accuracy of MCMC also depends on the stage of data analysis we find ourselves in. In Bayesian workflow, model development is an iterative process during which we loop between model building, inference, and model criticism (Box and Draper, 1987; Blei, 2014; Gelman et al., 2020). In the early stages of model development, we are trying to identify shortcomings in our model—anything ranging from an implementation error to more substantial flaws in our modeling assumptions. Many limitations can be identified using a short run of MCMC, rather than waiting hours, days, or weeks to construct long Markov chains. As we refine our model, we use more precise inference, and additional computational effort can be expended on the polished models.

## 3 Reliability goals

The accuracy of a Monte Carlo estimator is in general unknown and must be estimated from the simulations themselves. Returning to our example from epidemiology, 40 sampling iterations provide more than sufficiently accurate estimators, but we would not be able to confirm this without either running more chains or running longer chains. Tacitly, we took comfort in how similar our results were to what we obtained with 4000 samples. Determining if our estimators are reliable entails

* (i) assessing approximate convergence of the Markov chains, which is an indirect way of checking that the bias of our estimators is small,

* (ii) estimating the variance of the estimators.
For instance, we may use $\widehat{R}$ (Gelman and Rubin, 1992; Vehtari et al., 2021) and an estimator of ESS—which we denote $\widehat{\text{ESS}}$—based on autocorrelation functions (Vehtari et al., 2021; Gelman et al., 2014; Geyer, 1992), respectively, for items (i) and (ii). These estimators measure properties of the MCMC sampler, based on one realization (or seed) of the stochastic algorithm, and so it is instructive to compare the estimates between runs (Table 2 and Figure 2). When building long chains, $\widehat{R}$ is systematically below 1.01, the threshold recommended by Vehtari et al. (2021) to establish approximate convergence. On the other hand, with short chains, $\widehat{R}$ varies between runs and we do not consistently detect convergence. From more extensive studies, we know that for this problem the Markov chains are nearly stationary after a warmup phase of 1000 iterations, even if $\widehat{R}$ fails to detect it, and that running a longer warmup phase would not improve our results. When computed using 40 samples, $\widehat{R}$ is noisy and furthermore biased towards reporting a lack of convergence (Margossian et al., 2023). Taking $\widehat{R}$ at face value and concluding the Markov chains have not converged, we would then question the relevance of the ESS as a measure of the Monte Carlo error, since the ESS is a property of stationary Markov chains and does not account for bias.

| ${\bf N=4000}$ | $\widehat{R}$ | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |
| --- | --- | --- | --- | --- | --- | --- |
| | $\widehat{\text{ESS}}$ | 2659 | 2818 | 2698 | 2457 | 2613 |
| ${\bf N=40}$ | $\widehat{R}$ | 1.22 | 1.08 | 0.95 | 1.06 | 0.98 |
| | $\widehat{\text{ESS}}$ | 20 | 20 | 20 | 20 | 20 |

Table 2: $\widehat{R}$ and estimated ESS for the recovery time $T_{\text{R}}$ from an influenza infection. We report the results from multiple MCMC runs with (top) a long sampling phase and (bottom) a short sampling phase. When using $N=40$, the $\widehat{R}$ and ESS estimators are noisy.

Figure 2: $\widehat{R}$ for the recovery time $T_{\text{R}}$ from an influenza infection. We report the results from multiple MCMC runs with (left) a short sampling phase and (right) a long sampling phase. With a short sampling phase, $\widehat{R}$ is noisy and for most seeds incorrectly reports non-convergence, i.e. $\widehat{R}>1.01$.

The variance and inaccuracy of our error estimators add a substantial difficulty, which we should see as a third and distinct challenge:

* Task 3. Build diagnostics to check the accuracy of our Monte Carlo estimates.

This task is particularly tricky, in that it may provide endless justification for skepticism and using more computation. A common example is the fear of a hidden mode which the chains have failed to find, and so we should keep running the sampler in the off chance this mode exists and that one of the Markov chains stumbles into it. Setting aside the possibility of dragons in unexplored regions of probability space, Vehtari et al. (2021) provide a general recommendation to aim for a measured ESS of 100 to ensure good estimates of $\widehat{R}$ and the ESS, and accordingly Stan warns you that its diagnostics may not be reliable if $\widehat{\text{ESS}}<100$. All this does not necessarily contradict the advice by Mackay (2003) that a sample size of a dozen may suffice, which applies to the true ESS, assumed to be measured perfectly, rather than the measured ESS.
Here it is essential to clearly distinguish the three tasks and recognize that, in general, there is a gap between the number of iterations required to do well in task 2 (computation) and in task 3 (diagnostics), with the latter being more expensive. Reducing this gap constitutes an important step towards decreasing the run time and increasing the reliability of MCMC. ## Part II: Theoretical motivation, warnings, and open problems We now examine the question of MCMC computation from a more theoretical perspective. We provide justification for our general recommendations (Section 1), as well as reasons to sometimes depart from these in light of recent methodological and hardware advances. ## 4 Error in Monte Carlo estimators Consider a state space $\Theta$ over which the target distribution $p$ is defined. Moving on, we focus on estimating expectations $\mathbb{E}\left(f(\theta)\right)$, where $\theta\in\Theta$ and $f$ maps $\theta$ to a univariate scalar. Usually, we are interested in multiple such scalar summaries. Let $\bar{f}$ be our Monte Carlo estimator, obtained by averaging the MCMC samples ${\bf f}=\left(f(\theta^{(1)}),f(\theta^{(2)}),\cdots,f(\theta^{(N)})\right)$, $\bar{f}=\frac{1}{N}\sum_{n=1}^{N}f\left(\theta^{(n)}\right),$ (4.1) and let $\Gamma$ be the distribution from which this estimator is sampled, $\bar{f}\sim\Gamma.$ (4.2) When determining the control parameters of MCMC, we aim to control the expected squared error, which decomposes into a squared bias and a variance component $\mathbb{E}_{\Gamma}\left(\bar{f}-\mathbb{E}_{p}f(\theta)\right)^{2}=\underbrace{\left(\mathbb{E}_{\Gamma}\bar{f}-\mathbb{E}_{p}f(\theta)\right)^{2}}_{\text{Squared Bias}}+\underbrace{\text{Var}_{\Gamma}\bar{f}}_{\text{Variance}},$ (4.3) where the subscripts denote, for clarity, the distribution with respect to which the expectation value is taken. The bias is due to the fact we start the Markov chain at $p_{0}$ rather than $p$, and the variance arises because we compute sample estimates with a finite number of samples. #### Example: Langevin diffusion approximation of MCMC. Consider a Langevin diffusion that evolves from the univariate distributions $p_{0}=\text{normal}(\mu_{0},\sigma_{0})$ to $p=\text{normal}(\mu,\sigma)$, and suppose $f$ is the identity function, $f(\theta)\equiv\theta$. The Langevin diffusion provides a continuous approximation of MCMC and the discrete number of iterations is now replaced by the continuous time $T$ (Gelman et al.,, 1997; Roberts and Rosenthal,, 1998). The Monte Carlo estimator is given by the integral, $\bar{\theta}=\frac{1}{T}\int_{0}^{T}\theta^{(t)}\text{d}t.$ (4.4) What makes this example particularly enlightening is that the distribution of the sample $\theta^{(t)}$ can be written analytically, $\theta^{(t)}\sim\text{normal}\left(\mu+(\mu_{0}-\mu)e^{-t},\sqrt{\sigma^{2}+(\sigma_{0}^{2}-\sigma^{2})e^{-2t}}\right).$ (4.5) In this example the bias decays at an exponential rate, although for $t<\infty$ it never hits 0. Similarly standard MCMC algorithms cannot produce unbiased estimators, nor will finite computation return a single sample from the target distribution $p$. This is fine: all we need is a small enough bias, just as achieving approximate stationarity suffices. Remark. On a computer with finite precision, we may not be able to distinguish the bias from 0 for a sufficiently large $T$, that is the bias may be 0 to floating point precision. $||$ As we generate longer Markov chains, the bias produced by the initial samples is averaged out. 
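To make the bias-variance decomposition of Eq. (4.3) and the decay rates implied by Eq. (4.5) concrete, here is a minimal simulation sketch (not taken from the chapter: the values of $\mu_0$, $\sigma_0$, $\mu$, $\sigma$ and the Euler-Maruyama discretization are illustrative assumptions). It simulates a Langevin (Ornstein-Uhlenbeck) diffusion whose marginal at time $t$ matches Eq. (4.5), forms the time-average estimator of Eq. (4.4) over a post-warmup window, and estimates the squared bias and variance across many independent paths.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (assumed) initial and target Gaussians; these values are not from the chapter.
mu0, sigma0 = 5.0, 3.0   # p_0 = normal(mu0, sigma0)
mu, sigma = 0.0, 1.0     # p   = normal(mu, sigma)

def time_average_estimates(T, warmup_frac=0.5, n_paths=2000, dt=0.01):
    """Euler-Maruyama simulation of the Langevin (Ornstein-Uhlenbeck) diffusion
        d theta_t = -(theta_t - mu) dt + sigma * sqrt(2) dW_t,
    whose marginal at time t matches Eq. (4.5). Returns the per-path
    time averages (Eq. 4.4) computed over the post-warmup window."""
    n_steps = int(T / dt)
    n_warm = int(warmup_frac * n_steps)
    theta = rng.normal(mu0, sigma0, size=n_paths)   # theta^(0) ~ p_0
    running_sum = np.zeros(n_paths)
    for step in range(n_steps):
        noise = rng.normal(size=n_paths)
        theta = theta - (theta - mu) * dt + sigma * np.sqrt(2.0 * dt) * noise
        if step >= n_warm:
            running_sum += theta
    return running_sum / (n_steps - n_warm)

for T in (2.0, 5.0, 20.0):
    est = time_average_estimates(T)
    sq_bias = (est.mean() - mu) ** 2   # squared bias term of Eq. (4.3)
    var = est.var()                    # variance term of Eq. (4.3)
    print(f"T = {T:5.1f}: squared bias = {sq_bias:.4f}, variance = {var:.4f}, "
          f"expected squared error = {sq_bias + var:.4f}")
```

Increasing $T$ shrinks both terms, but in this example the squared bias falls off exponentially while the variance decays only roughly as $1/T$, in line with the discussion above.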
Under the constraint of finite computation, we discard the first iterations as part of a warmup phase (also termed “burn-in”). Moreover the primary task of the warmup is to allow the bias of the Markov chain to decay before we start sampling. The primary goal of the sampling phase is to then reduce the variance, which would still exist even in the ideal circumstance where the Markov chains are stationary. This dichotomy suggests a bias-variance tradeoff, when choosing the number of samples to discard in warmup, a problem which has been tackled in the context of Stein methods; see South et al., (2021) and references therein. But the story admits some nuances. Typically the variance also decays during warmup, especially if $\sigma_{0}>\sigma$ and also because of the Markov chain’s drift (averaging samples with different means increases the variance of $\bar{f}$). This additional variability, which we later define as the nonstationary variance, plays a key role in diagnosing whether the Markov chains are close to their stationary distribution. Similarly, the bias continues to decay during the sampling phase. However, once the drift becomes minor and the initial variance less influential, there is a net benefit, in terms of expected squared error, to including still biased samples rather than discarding them. While we ultimately care about the expected squared error, we tend to tolerate variance more than bias because of task 3 (diagnostics). That is, we can estimate the variance of our estimator, usually reported via the Monte Carlo standard error (MCSE) or the ESS, while no good estimator of the bias exists. The hope is then that the bias is negligible next to the measured variance and the squared error well characterized by the estimated variance. This will be the case for nearly stationary Markov chains, and so a common strategy is to first assess approximate convergence as a proxy for bias decay. In some cases, it is possible to construct unbiased estimators. In our view, the main limitation of MCMC is not its bias but rather that we cannot guarantee that this bias has vanished. Therefore techniques which entirely remove the bias are primarily useful for task 3 (diagnostics). The unbiased MCMC framework (Jacob et al.,, 2020) is in that respect promising, although the construction of the unbiased estimator can, for certain problems, require a long warmup phase (or coupling time). This ties back to the question of how much additional computation can we afford to pay, on top of what might be required to achieve an acceptable precision, to assess the reliability of our estimators. ## 5 Variance of Monte Carlo estimator Beyond the standard advice to run the chain longer, there exist several techniques to reduce the variance of our Monte Carlo estimators. These techniques can help us achieve our goals with fewer sampling iterations. We review recent work on the use of massive parallelization to reduce variance (Lao et al.,, 2020). Another variance reduction technique of interest is control variates. We do not discuss this method here, and direct the readers to some references (e.g South et al.,, 2023; Mira et al.,, 2013). ### Effective sample size The length of the chain is a poor proxy for how precise an estimator is, given the chain’s autocorrelation drastically impacts how much the variance decreases with each iteration. A useful measure is the ESS, which can be estimated using autocorrelation functions. 
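As a rough illustration of how an autocorrelation-based ESS estimate works, here is a simplified single-chain sketch (it is not the multi-chain, rank-normalized estimator of Vehtari et al. (2021); the AR(1) test chain and the pairwise truncation rule are assumptions made for this example).

```python
import numpy as np

def autocorr(x):
    """Normalized autocorrelation function of a 1-d chain, computed via FFT."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    x = x - x.mean()
    f = np.fft.rfft(x, n=2 * n)                  # zero-padded FFT
    acov = np.fft.irfft(f * np.conj(f))[:n] / n  # biased autocovariances
    return acov / acov[0]

def ess_simple(x):
    """Single-chain ESS using an initial-positive-sequence truncation:
    accumulate pairs rho[k] + rho[k+1] while the pair sums stay positive."""
    rho = autocorr(x)
    n = len(x)
    tau = 1.0                                    # integrated autocorrelation time
    k = 1
    while k + 1 < n:
        pair = rho[k] + rho[k + 1]
        if pair < 0:
            break
        tau += 2.0 * pair
        k += 2
    return n / tau

# Demo on an AR(1) chain with known autocorrelation phi (assumed test case).
rng = np.random.default_rng(1)
phi, n = 0.9, 20_000
x = np.empty(n)
x[0] = rng.normal()
for t in range(1, n):
    x[t] = phi * x[t - 1] + np.sqrt(1 - phi**2) * rng.normal()
print("estimated ESS:  ", round(ess_simple(x)))
print("theoretical ESS:", round(n * (1 - phi) / (1 + phi)))
```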
There are several ways to interpret the ESS for stationary Markov chains (and, for our purpose, approximately stationary Markov chains):

* It is the number of independent samples from $p(\theta)$ we would require to construct an estimator with the same standard deviation as our Monte Carlo estimator $\bar{f}$.

* It is a ratio of variances, $\text{ESS}=\text{Var}_{p}f(\theta)/\text{Var}_{\Gamma}\bar{f}$. For independent samples, this ratio becomes the number of samples $N$.

Understanding the ESS required for an application can help us determine if the sampling phase is sufficiently long. The ESS can be tied to the number of significant digits in our Monte Carlo estimator, after scaling by the standard deviation. Precise recommendations require determining with what probability we would allow a significant digit to vary between MCMC runs, and we refer the reader to Vats et al. (2019) and Vehtari (2022). As a rule of thumb, achieving an MCSE which is 0.1 times the posterior standard deviation requires an ESS of 100; adding another significant digit requires an ESS of 10,000. The ESS provides a scale-free measure of our Monte Carlo estimator's precision and complements the non-scale-free MCSE. Crucially, the ESS can vary between quantities of interest, even when these are evaluated using the same MCMC samples: for example, anticorrelated samples produce precise estimates of the first moment but poor estimates of the second moment. Therefore the ESS should be examined for all quantities of interest.

### Choosing a sampler and tuning it to reduce variance

Given a fixed number $N$ of sampling iterations, the ESS can fluctuate dramatically depending on the MCMC algorithm and the target distribution $p$. It is critical to choose an appropriate sampler to handle the pathologies that can arise in a target distribution, including high dimension, correlation, uneven curvature, multimodality, and more. How to choose a sampler is a broad topic. We usually start with HMC, which is general purpose and benefits from several high performance implementations, for example in Stan, PyMC, and more. HMC is not universal—it cannot directly handle discrete parameters and will not fare well with well separated modes and multiscale distributions—and so other samplers can be invoked to tackle a particular problem. Even within a class of MCMC algorithms, the chain's autocorrelation can be sensitive to certain tuning parameters, which we must either set manually or learn during the warmup phase. Self-tuning algorithms are a critical component of Bayesian workflow, since they allow us to move from one model to the other without having to revise our inference algorithm, and are also much more accessible to the broader scientific community. In Stan, we employ dynamic HMC (Betancourt, 2018), based on the no-U-turn sampler (NUTS) (Hoffman and Gelman, 2014), which learns HMC's tuning parameters to maximize the expected squared jump distance. After warmup, we freeze the tuning parameters of HMC because continuously adapting the tuning parameters can produce the wrong stationary distribution (Andrieu and Thoms, 2008). Certain adaptation strategies are not susceptible to this phenomenon and continuously target the right stationary distribution while adapting (e.g., Gilks et al., 1994; Hoffman and Sountsov, 2022).

### The many-short-chains regime

Figure 3: The many-short-chains regime of MCMC.
Increasing the number of Markov chains reduces the variance of our Monte Carlo estimators, meaning we can achieve a desired precision with a short sampling phase. However, the bias cannot be averaged out and so each chain must still be properly warmed up.

A natural way to increase the number of samples is to run more chains. It has long been recognized that we can trade the length of the sampling phase for the number of chains when controlling the variance of $\bar{f}$, and furthermore that the additional compute time can be mitigated by running all chains in parallel (Rosenthal, 2000). There are some difficulties we need to be mindful of and, in general, increasing the number of chains still increases the run time, even with parallelization. Some of these difficulties include potentially failing machines (Rosenthal, 2000), algorithms with a heavy control flow (Lao et al., 2020; Hoffman et al., 2021), and Markov chains with wildly varying run times (du Ché and Margossian, 2023). Over the past decade, the balance of computing power has shifted from CPUs to GPUs. A GPU boasts thousands of cores (a GPU core has different qualities than its CPU counterpart, which can be problematic for certain operations: for example, sequential calculations, such as numerically solving a differential equation, work much better on a CPU core than on a GPU core), which raises the possibility of running this many chains in parallel. To do this efficiently, hardware alone is not enough, and several challenges must be overcome on the algorithmic front. Here we refer the reader to the recent literature on GPU-friendly MCMC (e.g., Lao et al., 2020; Hoffman et al., 2021). Programs such as TensorFlow Probability in Python are expressly designed to run hundreds or even thousands of Markov chains on a GPU (TensorFlow Probability Development Team, 2023). With each additional Markov chain, we further reduce the variance of $\bar{f}$. Equivalently, we reduce the number of sampling iterations required to achieve a target variance (Figure 3). In the next section, we will discuss how running multiple chains can also affect the warmup phase, but for now we assume the warmup of each chain is sufficiently long and that the Markov chains are (approximately) stationary. Given enough Markov chains, the sampling phase can be arbitrarily short, potentially containing a single iteration per chain. For example, if our target ESS is, say, 1000, then warming up 1000 Markov chains and generating one sampling iteration per chain produces the target ESS. Furthermore, this target ESS is achieved for every variable $f(\theta)$, and if the chains are independent, we also have a central limit theorem (CLT) along the number of chains, rather than along the number of sampling iterations. However this CLT, unlike the MCMC CLT, does not ensure that the bias vanishes—that is why we still need a warmup phase—rather it allows us to characterize the Monte Carlo error due to variance. If we keep running the chains longer, the Monte Carlo variance further decreases for a relatively small computational cost, but this might be overkill if we have already achieved the wanted precision. Similarly, running more chains would not help us attain our target precision faster. In this sense, the requisite ESS sets an upper bound on how many Markov chains it is useful to run for the purpose of reducing variance. If we only require an ESS of a few dozen, then there is no use exploiting all 10,000 cores available on a GPU to run that many chains.
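As a back-of-the-envelope check of this accounting, the following toy sketch (assumed numbers, not the chapter's experiment) compares the Monte Carlo error of a pooled mean from a few long, autocorrelated chains against that from many already-stationary chains contributing a single draw each; in both cases the error tracks $1/\sqrt{\text{ESS}}$.

```python
import numpy as np

rng = np.random.default_rng(2)
phi = 0.9                      # assumed autocorrelation of an AR(1) stand-in for a sampler
tau = (1 + phi) / (1 - phi)    # integrated autocorrelation time of that chain

def pooled_means(n_reps, n_chains, n_draws):
    """For n_reps independent replications, run n_chains stationary AR(1)
    chains of length n_draws and pool all draws into one mean estimate
    (the true mean is 0, so the spread of the output is pure Monte Carlo error)."""
    x = rng.normal(size=(n_reps, n_chains))      # stationary start, i.e. already warmed up
    total = x.copy()
    for _ in range(n_draws - 1):
        x = phi * x + np.sqrt(1 - phi**2) * rng.normal(size=(n_reps, n_chains))
        total += x
    return total.sum(axis=1) / (n_chains * n_draws)

for n_chains, n_draws in [(4, 1000), (1000, 1)]:
    est = pooled_means(2000, n_chains, n_draws)
    ess = n_chains * n_draws / tau if n_draws > 1 else n_chains
    print(f"{n_chains:4d} chains x {n_draws:4d} draws: empirical sd = {est.std():.4f}, "
          f"1/sqrt(ESS) = {1 / np.sqrt(ess):.4f}")
```

With these assumed settings, 1000 single-draw stationary chains give a smaller error than 4 chains of 1000 correlated draws, despite the same total number of draws, because every chain contributes an independent sample.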
These cores can always be put to work elsewhere, perhaps to do within-chain parallelization (e.g., Lee et al., 2010; Češnovar et al., 2020). One appeal of running many short chains is that this can sidestep the problem of choosing the length of the sampling phase, since the variance reduction for every variable can be entirely handled by the number of chains. The warmup phase then dominates the computation, and algorithmic choices can solely focus on building a sampler with a fast decaying bias. We will see in the next section that the use of cross-chain adaptation can also lead to faster bias decay during the warmup phase as we increase the number of chains. In addition to bias and variance considerations, which relate to task 1 (inference) and task 2 (computation), running many chains can also improve task 3 (diagnostics), for instance by increasing the probability of finding multiple modes and reducing the variance of statistics such as $\widehat{R}$.

## 6 Bias of MCMC and approximate convergence

Bias reduction has played a less prominent role than variance reduction in the MCMC literature. For many problems, we can expect the bias to decay faster than the variance as the length of the Markov chain increases. In the earlier example of the Langevin diffusion (Eq. 4.5), the bias decreases at an exponential rate. On the other hand, the variance decreases only at the rate $1/T$, where $T$ is the running time of the process. These types of examples may explain why the empirical performance of MCMC algorithms is mostly reported in terms of ESS per operation, which, above all, tells us how well an algorithm performs once approximate stationarity is achieved. More recent papers, notably on the topic of GPU-friendly samplers, experimentally examine the rate at which the bias decreases (e.g., Hoffman and Sountsov, 2022), perhaps foreshadowing a shift in paradigm. Indeed, the recent development of general-purpose variance reduction techniques, such as control variates or plainly running many chains, means our target variance can be attained with a much shorter sampling phase than what would traditionally be prescribed. In general these techniques do not reduce the bias of our estimators. Hence, for problems amenable to variance reduction techniques, the bias is the primary computational bottleneck and the main requirement for an MCMC algorithm is to converge quickly to its stationary distribution.

### Choosing a sampler and tuning it to reduce bias

Many tuning criteria, such as maximizing the expected squared jump distance (Pasarica and Gelman, 2010), tacitly focus on the asymptotic variance of MCMC, once the Markov chains are (approximately) stationary. A complementary measure of efficiency is the speed of convergence, usually expressed in terms of the total variation distance ($D_{\text{TV}}$) between the approximate distribution $\hat{p}^{(t)}$ of $\theta^{(t)}$ and the stationary distribution $p$, $D_{\text{TV}}=\sup_{A\subseteq\Theta}\left|\int_{\theta\in A}\hat{p}^{(t)}(\theta)-p(\theta)\text{d}\theta\right|.$ (6.1) We then define the relaxation time as the number of iterations required to move from an initial distribution $p_{0}$ to $\hat{p}^{(t)}$ such that $D_{\text{TV}}\left(\hat{p}^{(t)},p\right)\leq\epsilon$ for some pre-defined $\epsilon>0$. The total variation distance is not a perfect proxy for the bias, but it provides an upper bound for the absolute bias when estimating the expectation value of a bounded function $f$ (e.g., Roberts and Rosenthal, 2004, Proposition 3).
Specifically, given two scalars $a<b$, $D_{\text{TV}}=\frac{1}{b-a}\sup_{f:\Theta\to[a,b]}\left|\mathbb{E}_{\hat{p}^{(t)}}\left(f(\theta)\right)-\mathbb{E}_{p}\left(f(\theta)\right)\right|.$ (6.2) We might argue that all functions $f$ are bounded, given finite machine precision. In this sense, the speed of convergence and the relaxation time provide a conservative indication of how efficient a sampler is during the warmup phase, i.e., the rate per iteration at which we reduce the bias for all estimators of interest. The asymptotic variance, on the other hand, tells us how efficient the MCMC sampler is during the sampling phase. Asymptotic efficiency and speed of convergence may not prescribe the same optimal tuning of an MCMC algorithm (Besag and Green, 1993; Mira, 2001). We may in theory favor one efficiency goal over the other, depending on whether we expect bias or variance reduction to be the main computational bottleneck.

#### Example: the transition kernel that minimizes the relaxation time does not minimize the asymptotic variance.

Consider a finite discrete space. The choice that minimizes $D_{\text{TV}}$ is trivially the transition kernel that generates an independent sample from $p$. With such an “oracle” choice, the relaxation time is a single iteration. Now suppose we are at a point $\theta^{(t)}$. With nonzero probability, the oracle generates $\theta^{(t+1)}=\theta^{(t)}$, but when estimating $\mathbb{E}(\theta)$, it is often preferable to pick a transition kernel that always moves to another point, such that $\theta^{(t+1)}\neq\theta^{(t)}$ and the samples are anti-correlated. Therefore, the choice that optimizes speed of convergence differs from the one that would minimize the asymptotic variance. $||$

In practice, we find that algorithms which are asymptotically efficient also perform well in terms of speed of convergence. We are also not aware of practical tuning criteria that do not involve oracle knowledge and still explicitly target speed of convergence. Developing such methods—and whether it is useful to do so—constitutes an open research question. Adaptive algorithms which learn a sampler's tuning parameters in fewer iterations may also converge faster, even if the tuning criterion is based on optimizing the asymptotic variance. We illustrate this with ChEES-HMC (Hoffman et al., 2021), a tuning-free HMC which targets the expected squared jump distance and uses cross-chain adaptation. In cross-chain adaptation, samples across multiple chains are pooled to tune the sampler's parameters, and so increasing the number of chains can lead to faster adaptation and in turn faster bias decay. We demonstrate this phenomenon when applying ChEES-HMC to an ill-conditioned Gaussian with dimension $d=501$. In ChEES-HMC, the computation cost is dominated by gradient evaluations of $\log p(\theta)$. We run 1000 warmup iterations using 2, 4, and 8 communicating chains, and report the number of gradient evaluations of $\log p(\theta)$ per chain required to achieve a target bias (Figure 4). As we increase the number of chains, we attain an acceptable bias with fewer gradient evaluations per chain. With parallelization, this result translates into an optimal warmup phase with a shorter computation time. A handful of algorithms now use cross-chain adaptation (Sountsov and Hoffman, 2021; Hoffman and Sountsov, 2022; Garbrié et al., 2022; Riou-Durand et al., 2023) and for these, running more chains can justify a shorter warmup phase (Figure 5).
Figure 4: Number of gradient evaluations per chain required for ChEES-HMC to achieve a target bias across all $d=501$ dimensions of an ill-conditioned Gaussian. Gradient evaluation is the dominant computational operation for ChEES-HMC. Due to cross-chain adaptation, increasing the number of chains can lead to a faster bias decay. When using 2 chains, we do not achieve a bias below $10^{-2}$ after 1000 warmup iterations.

Figure 5: Many-short-chains regime using cross-chain adaptation. Using many chains reduces the length of the sampling phase required to achieve a target precision. Cross-chain adaptation pools information between the chains to tune the sampler during the warmup phase. For some problems, the improved adaptation means we achieve an acceptable bias with fewer warmup iterations per chain.

### Assessing approximate convergence with multiple chains

A general framework to assess convergence is to compare multiple chains (Gelman and Rubin, 1992). Each chain is distinguished by its initialization and its seed, but neither should have a strong influence on our estimate of $\mathbb{E}f(\theta)$. It is therefore natural to check that, subject to varying initializations and seeds, our algorithm still returns the same result. This idea underlies many standard diagnostics, including visual checks such as comparing trace and density plots generated by different chains, as well as numerical diagnostics such as $\widehat{R}$ and its variants (Gelman and Rubin, 1992; Brooks and Gelman, 1998; Vehtari et al., 2021; Moins et al., 2023; Vats and Knudson, 2021; Lambert and Vehtari, 2022; Margossian et al., 2023). A quantity of particular interest is the sample variance of the per-chain Monte Carlo estimator, obtained by averaging the samples within a single chain. Let $\theta^{(nm)}$ denote the $n^{\text{th}}$ sampling iteration in the $m^{\text{th}}$ chain. Then the per-chain Monte Carlo estimator is $\bar{f}^{(m)}=\frac{1}{N}\sum_{n=1}^{N}f\left(\theta^{(nm)}\right),$ (6.3) and the final Monte Carlo estimator is obtained by averaging the means of all chains, $\bar{f}=\frac{1}{M}\sum_{m=1}^{M}\bar{f}^{(m)}.$ (6.4) We then study the empirical variance, $\widehat{B}=\frac{1}{M-1}\sum_{m=1}^{M}\left(\bar{f}^{(m)}-\bar{f}\right)^{2},$ (6.5) and check that it is smaller than a predetermined threshold. Doing so is analogous to the graphical comparisons between chains with trace and density plots (although more information can be inferred from these plots), and it is exactly what happens when we use the $\widehat{R}$ convergence diagnostic. To see this, consider the within-chain variance, defined as the average per-chain sample variance, $\widehat{W}=\frac{1}{M}\sum_{m=1}^{M}\frac{1}{N-1}\sum_{n=1}^{N}\left(f\left(\theta^{(nm)}\right)-\bar{f}^{(m)}\right)^{2}.$ (6.6) Then we define $\widehat{R}$ as $\widehat{R}=\sqrt{\frac{N-1}{N}+\frac{\widehat{B}}{\widehat{W}}}\ ,$ (6.7) and so checking that $\widehat{R}\leq 1+\epsilon$ is equivalent to setting a scaled tolerance on $\widehat{B}$, $\widehat{B}\lessapprox 2\epsilon\widehat{W}+\mathcal{O}(\epsilon^{2}).$ (6.8) This is not the original motivation of $\widehat{R}$, and there are other useful perspectives on the subject (see Gelman and Rubin, 1992; Vehtari et al., 2021, and discussion). Here we follow the argument of Margossian et al. (2023).
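The quantities in Eqs. (6.3)-(6.7) translate directly into a few lines of code. The sketch below implements exactly those formulas on an array of sampling-phase draws (it omits the rank normalization and chain splitting used in practice by Vehtari et al. (2021), and the synthetic chains are an assumed example rather than the influenza model).

```python
import numpy as np

def rhat_between_within(draws):
    """draws: array of shape (M, N), M chains by N sampling iterations of f(theta).
    Returns (R_hat, B_hat, W_hat) following Eqs. (6.3)-(6.7)."""
    M, N = draws.shape
    chain_means = draws.mean(axis=1)                           # Eq. (6.3)
    grand_mean = chain_means.mean()                            # Eq. (6.4)
    B_hat = ((chain_means - grand_mean) ** 2).sum() / (M - 1)  # Eq. (6.5)
    W_hat = draws.var(axis=1, ddof=1).mean()                   # Eq. (6.6)
    R_hat = np.sqrt((N - 1) / N + B_hat / W_hat)               # Eq. (6.7)
    return R_hat, B_hat, W_hat

# Illustration with synthetic chains (assumed example):
rng = np.random.default_rng(3)
stationary = rng.normal(size=(4, 1000))                        # four well-mixed chains
shifted = stationary + np.array([[0.0], [0.0], [0.0], [1.0]])  # one chain stuck elsewhere
for name, d in [("stationary", stationary), ("one shifted chain", shifted)]:
    r, b, w = rhat_between_within(d)
    print(f"{name:18s}  R_hat = {r:.3f}  B_hat = {b:.4f}  W_hat = {w:.4f}")
```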
Convergence diagnostics based on multiple chains largely rely on the variance of $\bar{f}^{(m)}$, which raises a paradox: how can a variance-based diagnostic tell us whether the warmup phase, whose primary role is to reduce bias, is sufficiently long? To answer this question, we decompose the MCMC process $\Gamma$ into an initial draw $\theta^{(0)}\sim p_{0}$ and then a stochastic process $\gamma$, which is the subsequent application of the transition kernels (warmup and sampling included). Then applying the law of total variance, $\text{Var}_{\Gamma}\ \bar{f}^{(m)}=\underbrace{\text{Var}_{p_{0}}\left[\mathbb{E}_{\gamma}(\bar{f}^{(m)}\mid\theta_{0})\right]}_{\text{nonstationary variance}}+\underbrace{\mathbb{E}_{p_{0}}\left[\text{Var}_{\gamma}(\bar{f}^{(m)}\mid\theta_{0})\right]}_{\text{persistent variance}}.$ (6.9) The nonstationary variance tells us how much the expectation value of our estimator varies with the initialization and provides an intuitive measure of how well the Markov chains forget their starting points. This term is related to other notions of convergence: as the total variation distance $D_{\text{TV}}$ and the bias go to 0, so does the nonstationary variance. Margossian et al., (2023) show that in the example of a Langevin diffusion evolving from one Gaussian to the other, the squared bias and the nonstationary variance both decay at the same rate $\sim e^{-2t}$, and so one can be used as a “proxy clock” of the other. The persistent variance, named so because it does not go to zero after the Markov chains converge, is of interest when determining whether the sampling phase is sufficiently long. For stationary Markov chains, the ratio $\widehat{W}/\widehat{B}$ provides an estimator of the ESS (Gelman et al.,, 2003; Vats and Knudson,, 2021), albeit a less stable one than estimators based on autocorrelation functions. In some sense, $\widehat{R}$ serves as a diagnostic for both the lengths of the warmup and the sampling phases, and this can be seen as a desirable feature: it is easier to reason about a single summary quantity than multiple ones. But if $\widehat{R}$ is large, it is unclear how to disentangle whether our warmup or our sampling phase is too short. Another potential limitation is that $\widehat{R}$ is always large when we use a short sampling phase, even when the error of our final estimator $\bar{f}$ is small, for example when we run many short chains. This and other considerations led to the development of a nested $\widehat{R}$, which compares groups of chains initialized at the same starting point and, in doing so, can provide a direct measure of the nonstationary variance, rather than of the total variance (Margossian et al.,, 2023). The role of the convergence diagnostic is then clearly confined to checking the length of the warmup phase; once we establish that the warmup is long enough, we then look at measures of the ESS and the MCSE, and decide whether the sampling phase has a satisfactory length. A limitation with diagnostics based on multiple chains is that they essentially rely on problems in our MCMC manifesting as disagreements between chains. A common concern is that all the chains may be in good agreement but still all be wrong. For example, they may all gravitate towards the same mode and ignore a hidden mode. 
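A small simulation can make the decomposition in Eq. (6.9) tangible. The sketch below uses assumed toy settings (a Gaussian target, AR(1) chain dynamics, and a modest grid of initializations standing in for draws from $p_0$); grouping chains that share an initialization yields separate estimates of the nonstationary and persistent terms, which is the idea exploited by nested $\widehat{R}$.

```python
import numpy as np

rng = np.random.default_rng(4)
phi = 0.9                                           # assumed AR(1) dynamics, stationary around 0
n_init, n_chains_per_init, n_iters = 20, 200, 50    # assumed toy settings

def chain_means_from(theta0):
    """Per-chain means of n_chains_per_init AR(1) chains, all started at the point theta0."""
    x = np.full(n_chains_per_init, theta0, dtype=float)
    total = np.zeros_like(x)
    for _ in range(n_iters):
        x = phi * x + np.sqrt(1 - phi**2) * rng.normal(size=x.shape)
        total += x
    return total / n_iters

inits = rng.normal(loc=3.0, scale=2.0, size=n_init)            # theta^(0) ~ p_0 (overdispersed)
per_chain_means = np.stack([chain_means_from(t0) for t0 in inits])   # shape (n_init, n_chains)

group_means = per_chain_means.mean(axis=1)                     # estimates of E_gamma(fbar | theta_0)
nonstationary = group_means.var(ddof=1)                        # Var_{p0}[ E_gamma(fbar | theta_0) ]
persistent = per_chain_means.var(axis=1, ddof=1).mean()        # E_{p0}[ Var_gamma(fbar | theta_0) ]
total = per_chain_means.reshape(-1).var(ddof=1)

print(f"nonstationary variance ~ {nonstationary:.4f}")
print(f"persistent variance    ~ {persistent:.4f}")
print(f"sum = {nonstationary + persistent:.4f}  vs  total variance = {total:.4f}")
```

Lengthening the chains (or the warmup window) drives the nonstationary term toward zero while the persistent term remains, mirroring the roles the two terms play in the discussion above.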
Here we must reason about our diagnostics as estimators (of some relevant quantity), vulnerable to bias and variance, with steps that can be taken to reduce the error: for instance, running many chains for more iterations increases our ability to detect the hidden mode. ## 7 Initial distribution Another decision when running MCMC is the choice of initial values. Here we must balance a few considerations that will in turn accommodate task 2 (computation) or task 3 (diagnostics). With respect to task 2 (computation), starting with a distribution which is close to the stationary distribution $p$ means we can achieve a target bias with less iterations. In the Langevin diffusion example, it is clearly desirable to have $(\mu_{0},\sigma_{0})$ as close as possible to $(\mu,\sigma)$. We may resort to fast approximations of the target distribution $p$, for example using a Laplace approximation or variational inference. Recently, Zhang et al., (2022) showed that Pathfinder variational inference can initially produce better approximations faster than HMC-NUTS across a range of Bayesian inference problems. For certain problems, a poor initialization can completely kill our computation. We have encountered this when dealing with likelihoods whose evaluation and differentiation involves a non-trivial numerical operation, such as solving a differential equation. In some regions of $\theta$’s space, the differential equation may be extremely difficult to solve, requiring a great deal of computation per iteration, even though it is better behaved, with high probability, once we approximately sample from the stationary distribution (e.g., Margossian and Gelman,, 2020; Margossian et al.,, 2021). This problem manifests as an MCMC run with an extremely slow warmup start before the compute time per iteration improves. In such a case, a good initialization can bypass pathological regions that frustrate our computation. For certain problems, values of $\theta$ that pose numerical difficulty are unavoidable, even in the ideal circumstance where we sample from $p$, and the numerical problem cannot be alleviated by any choice of starting distribution. Task 3 (diagnostics), in conjunction with diagnostics using multiple chains, encourages a starting distribution with a large variance, also termed an overdispersed initialization (Gelman and Rubin,, 1992). This is a valid strategy for identifying multiple modes and making sure all our Markov chains are not trapped inside the same basin of attraction. A large initial variance also implies the expected squared error of $\bar{f}$ is dominated by the nonstationary variance, which we can monitor, rather than the unknown squared bias. This choice of initialization increases the variance of our estimator, but this may be acceptable, since the additional variance decays with the bias and the Markov chains eventually forget their starting point. A compromise must be struck. A large variance makes our diagnostics more reliable, however it increases the relaxation time and makes us vulnerable to slow computation in potentially irrelevant regions. In a Bayesian inference context, we recommend drawing the start of the Markov chain from the prior distribution. When the data is informative, the posterior tends to be more compact than the prior, and so the prior variance is large relative to the posterior variance. 
At the same time, a well-constructed prior often—though not always—excludes the patently absurd values of $\theta$ that can make evaluation and differentiation of a likelihood challenging. We are not saying that model choices should be determined by computational constraints. Rather, if a prior allows for values that make a likelihood numerically unstable, it is worth investigating whether we have accounted for all available information when constructing the prior.

## 8 Summary and discussion

Given the challenges of exploring arbitrary distributions in high dimension, it is natural to feel skepticism, demand conservative thresholds for diagnostics, and encourage researchers to continuously increase the length of their Markov chains. In our view, such skepticism is mostly justified with regard to task 3 (estimating accuracy of Monte Carlo estimates), rather than task 1 (inference about parameters and quantities of interest) and task 2 (computation of posterior expectations). But the computation we do not spend on one MCMC run can almost always be put to work elsewhere, for example on a more sophisticated model. What might be a prudent choice from an inferential perspective can involve a risk from a modeling standpoint. The field of pharmacometrics offers models with a wide range of sophistication: from the standard model of the human body as a handful of communicating compartments (Gilbaldi and Perrier, 1982) to quantitative systems pharmacology models such as that of Peterson and Riggs (2010). In our experience, practical use of Bayesian inference lies somewhere between those two extremes, with so-called semi-mechanistic models taking hours or days to fit with MCMC. In other words, precise Bayesian inference currently precludes the use of certain models, which we might tackle with cheap and approximate inference, even though this is not ideal. To take another example, this time from machine learning, there are many compelling reasons to fit a Bayesian neural network (Neal, 1996) and the benefits of doing so with MCMC, in terms of accuracy, have been demonstrated (Izmailov et al., 2021). But under the constraint of a computational budget, we should wonder whether it is better to fit a small neural network with long MCMC, or a large neural network with a fast approximation. Ongoing research can, in the longer run, push back against this compromise. Current MCMC samplers require several conscious decisions from the practitioner, who needs to understand not only their scientific problem but also certain mechanics of MCMC. A more black-box Bayesian inference would leave the user to specify a target ESS, while choices such as the lengths of the warmup and sampling phases, and even the number of Markov chains, are handled automatically under the hood. This would lead to a completely tuning-free MCMC. We see many promising directions to implement such a black-box MCMC—adaptive warmup and sampling lengths based on online diagnostics (Geweke, 1992; Cowles et al., 1998; Jones et al., 2006), unbiased MCMC using coupling chains (Jacob et al., 2020), the many-short-chains regime (Lao et al., 2020; Margossian et al., 2023; and Section 5 of this chapter), and more—and we expect in the future to see some of these ideas implemented in probabilistic programming languages.

## 9 Acknowledgments

We thank Bob Carpenter, Manny Mokel, Mitzi Morris, Aki Vehtari, and Brian Ward for reading the manuscript and providing feedback, and we thank the U.S. Office of Naval Research for partial support of this work.
The authors are also indebted to the Bayesian computation reading group at the Flatiron Institute and its participants for many fruitful conversations which inspired several passages in this article. ## References * Andrieu and Thoms, (2008) Andrieu, C. and Thoms, J. (2008). A tutorial on adaptive MCMC. Statistics and Computing, 18:343–376. * Badger, (1994) Badger, L. (1994). Lazzarini’s lucky approximation of $\pi$. Mathematics Magazine, 67:83–91. * Besag and Green, (1993) Besag, J. and Green, P. J. (1993). Spatial statistics and Bayesian computation. Journal of the Royal Statistical Society, Series B, 55:25–37. * Betancourt, (2018) Betancourt, M. (2018). A conceptual introduction to Hamiltonian Monte Carlo. arXiv:1701.02434v1. * Blei, (2014) Blei, D. M. (2014). Build, compute, critique, repeat: Data analysis with latent variable models. Annual Review of Statistics and Its Application, 1. * Box and Draper, (1987) Box, G. E. P. and Draper, N. (1987). Empirical Model-Building and Response Surfaces. Wiley. * Brooks et al., (2003) Brooks, S., Giudici, P., and Phillipe, A. (2003). Nonparametric convergence assessment for MCMC model selection. Journal of Computational and Graphical Statistics, 12:1–22. * Brooks and Gelman, (1998) Brooks, S. P. and Gelman, A. (1998). General methods for monitoring convergence of iterative simulations. Journal of Computational and Graphical Statistics, 7:434–455. * Campbell et al., (2023) Campbell, F., Frost, S., Jombart, T., Nouvellet, P., Park, S. W., Pulliam, J. R., Schumacher, J., and Sudre, B. (2023). outbreaks: a compilation of disease outbreak data. * Carpenter et al., (2017) Carpenter, B., Gelman, A., Hoffman, M., Lee, D., Goodrich, B., Betancourt, M., Brubaker, M. A., Guo, J., Li, P., and Riddel, A. (2017). Stan: A probabilistic programming language. Journal of Statistical Software, 76:1–32. * Cowles et al., (1998) Cowles, M. K., Roberts, G. O., and Rosenthal, J. S. (1998). Possible biases induced by MCMC convergence diagnostics. Journal of Statistical Computation and Simulation, 64:87–104. * du Ché and Margossian, (2023) du Ché, S. and Margossian, C. C. (2023). Parallelization for Markov chain Monte Carlo with heterogeneous runtimes. BayesComp. * Garbrié et al., (2022) Garbrié, M., Rotskoff, G. M., and Vanden-Eijnden, E. (2022). Adaptive Monte Carlo augmented with normalizing flows. Proceedings of the National Academy of Sciences, 119:e2109420119. * Gelman et al., (2014) Gelman, A., Carlin, J. B., Stern, H. S., Dunson, D. B., Vehtari, A., and Rubin, D. B. (2014). Bayesian Data Analysis, third edition. CRC Press, London. * Gelman et al., (2003) Gelman, A., Carlin, J. B., Stern, H. S., and Rubin, D. B. (2003). Bayesian Data Analysis, second edition. CRC Press. * Gelman et al., (1997) Gelman, A., Gilks, W. R., and Roberts, G. O. (1997). Weak convergence and optimal scaling of random walk Metropolis algorithms. Annals of Applied Probability, 7:110–120. * Gelman and Rubin, (1992) Gelman, A. and Rubin, D. B. (1992). Inference from iterative simulation using multiple sequences (with discussion). Statistical Science, 7:457–511. * Gelman and Shirley, (2011) Gelman, A. and Shirley, K. (2011). Inference from simulations and monitoring convergence. In Brooks, S., Gelman, A., Jones, G. L., and Meng, X.-L., editors, Handbook of Markov Chain Monte Carlo, chapter 6. CRC Press. * Gelman et al., (2020) Gelman, A., Vehtari, A., Simpson, D., Margossian, C. C., Carpenter, B., Yao, Y., Kennedy, L., Gabry, J., Bürkner, P.-C., and Modrák, M. (2020). Bayesian workflow. 
arXiv:2011.01808. * Geweke, (1992) Geweke, J. (1992). Evaluating the accuracy of sampling-based approaches to the calculation of posterior moments. In Bayesian Statistics 4, pages 169–193. Oxford University Press. * Geyer, (1992) Geyer, C. (1992). Practical Markov chain Monte Carlo. Statistical Science, 7:473–483. * Gilbaldi and Perrier, (1982) Gilbaldi, M. and Perrier, D. (1982). Pharmacokinetics, second edition. CRC Press. * Gilks et al., (1994) Gilks, W. R., Roberts, G. O., and George, E. I. (1994). Adaptive direction sampling. Journal of the Royal Statistical Society: Series D, 43:179–189. * Grinsztajn et al., (2021) Grinsztajn, L., Semenova, E., Margossian, C. C., and Riou, J. (2021). Bayesian workflow for disease transmission modeling in Stan. Statistics in Medicine, 40:6209–6234. * Hoffman and Sountsov, (2022) Hoffman, M. and Sountsov, P. (2022). Tuning-free generalized Hamiltonian Monte Carlo. Artificial Intelligence and Statistics. * Hoffman and Gelman, (2014) Hoffman, M. D. and Gelman, A. (2014). The no-U-turn sampler: Adaptively setting path lengths in Hamiltonian Monte Carlo. Journal of Machine Learning Research, 15:1593–1623. * Hoffman et al., (2021) Hoffman, M. D., Radul, A., and Sountsov, P. (2021). An adaptive MCMC scheme for setting trajectory lengths in Hamiltonian Monte Carlo. Artificial Intelligence and Statistics. * Izmailov et al., (2021) Izmailov, P., Vikram, S., Hoffman, M. D., and Wilson, A. G. (2021). What are Bayesian neural network posteriors really like? International Conference on Machine Learning. * Jacob et al., (2020) Jacob, P. E., O’Leary, J., and Atchadé, Y. F. (2020). Unbiased Markov chain Monte Carlo methods with couplings. Journal of the Royal Statistical Society, Series B, 82:543–600. * Jones et al., (2006) Jones, G. L., Haran, M., Caffo, B. S., and Neath, R. (2006). Fixed-width output analysis for Markov chain Monte Carlo. Journal of the American Statistical Association, 101:1537–1547. * Lambert and Vehtari, (2022) Lambert, B. and Vehtari, A. (2022). $R^{*}$: A robust MCMC convergence diagnostic with uncertainty using decision tree classifiers. Bayesian Analysis, 17:353–379. * Landau and Binder, (2009) Landau, D. and Binder, K. (2009). A Guide to Monte Carlo Simulations in Statistical Physics. Cambridge University Press. * Lao et al., (2020) Lao, J., Suter, C., Langmore, I., Chimisov, C., Saxena, A., Sountsov, P., Moore, D., Saurous, R. A., Hoffman, M. D., and Dillon, J. V. (2020). tfp.mcmc: Modern Markov chain Monte Carlo tools built for modern hardware. arXiv:2002.01184. * Laplace, (1812) Laplace, P.-S. (1812). Théorie Analytique des Probabilités. * Lee et al., (2010) Lee, A., Yau, C., Giles, M. B., Doucet, A., and Holmes, C. C. (2010). On the utility of graphics cards to perform massively parallel simulation of advanced Monte Carlo methods. Journal of Computational and Graphical Statistics, 19:769–789. * Mackay, (2003) Mackay, D. J. (2003). Information Theory, Inference, and Learning Algorithms. Cambridge University Press. * Margossian and Gelman, (2020) Margossian, C. C. and Gelman, A. (2020). Bayesian model of planetary motion: Exploring ideas for a modeling workflow when dealing with ordinary differential equations and multimodality. In Stan Case Studies, volume 7. * Margossian et al., (2023) Margossian, C. C., Hoffman, M. D., Sountsov, P., Riou-Durand, L., Vehtari, A., and Gelman, A. (2023). Nested $\widehat{R}$: Assessing the convergence of Markov chain Monte Carlo when running many short chains. arXiv:2110.13017. 
* Margossian et al., (2020) Margossian, C. C., Vehtari, A., Simpson, D., and Agrawal, R. (2020). Hamiltonian Monte Carlo using an adjoint-differentiated Laplace approximation: Bayesian inference for latent Gaussian models and beyond. Neural Information Processing Systems. * Margossian et al., (2021) Margossian, C. C., Zhang, L., Weber, S., and Gelman, A. (2021). Solving ODEs in a bayesian context: challenges and opportunities. Population Approach Group in Europe. * Mira, (2001) Mira, A. (2001). Ordering and improving the performance of Monte Carlo Markov chains. Statistical Science, 16:340–350. * Mira et al., (2013) Mira, A., Solgi, R., and Imparato, D. (2013). Zero-variance principle for monte carlo algorithms. Statistics and Computing, 23. * Moins et al., (2023) Moins, T., Arbel, J., Dutfoy, A., and Girard, S. (2023). On the use of a local $\widehat{R}$ to improve MCMC convergence diagnostic. Bayesian Analysis. * Neal, (1996) Neal, R. M. (1996). Bayesian Learning for Neural Networks. Lecture Notes in Statistics. Springer. * Neal, (2011) Neal, R. M. (2011). MCMC using Hamiltonian dynamics. In Handbook of Markov Chain Monte Carlo. CRC Press. * Pasarica and Gelman, (2010) Pasarica, C. and Gelman, A. (2010). Adaptively scaling the Metropolis algorithm using expected squared jumped distance. Statistica Sinica, 20:343–364. * Peterson and Riggs, (2010) Peterson, M. C. and Riggs, M. M. (2010). A physiologically based mathematical model of integrated calcium homeostasis and bone remodeling. Bone, 46:49–63. * Piironen and Vehtari, (2017) Piironen, J. and Vehtari, A. (2017). Sparsity information and regularization in the horseshoe and other shrinkage priors. Electronic Journal of Statistics, 11:5018–5051. * Riou-Durand et al., (2023) Riou-Durand, L., Sountsov, P., Vogrinc, J., Margossian, C. C., and Power, S. (2023). Adaptive tuning for Metropolis adjusted Langevin trajectories. Artificial Intelligence and Statistics. * Roberts and Rosenthal, (1998) Roberts, G. O. and Rosenthal, J. S. (1998). Optimal scaling of discrete approximations to Langevin diffusions. Journal of the Royal Statistical Society, Series B, 60:255–268. * Roberts and Rosenthal, (2004) Roberts, G. O. and Rosenthal, J. S. (2004). General state space Markov chains and MCMC algorithms. Probability Surveys, 1:20–71. * Rosenthal, (2000) Rosenthal, J. S. (2000). Parallel computing and Monte Carlo algorithms. Far East Journal of Theoretical Statistics, 4:207–236. * Salvatier et al., (2016) Salvatier, J., Wiecki, T. V., and Fonnesbeck, C. (2016). Probabilistic programming in Python using PyMC3. PeerJ Computer Science, 2. * Sountsov and Hoffman, (2021) Sountsov, P. and Hoffman, M. D. (2021). Focusing on difficult directions for learning HMC trajectory lengths. arXiv:2110.11576. * South et al., (2023) South, L. F., Oates, C. J., Mira, A., and Drovandi, C. (2023). Regularized zero-variance control variates. Bayesian Analysis, 18:865–888. * South et al., (2021) South, L. F., Riabiz, M., Teymur, O., and Oates, C. J. (2021). Post-processing of MCMC. Annual Review of Statistics and Its Application, 9:1–30. * TensorFlow Probability Development Team, (2023) TensorFlow Probability Development Team (2023). Tensorflow probability. * Vats et al., (2019) Vats, D., Flegal, J. M., and Jones, G. L. (2019). Multivariate output analysis for Markov chain Monte Carlo. Biometrika, 106:321–337. * Vats and Knudson, (2021) Vats, D. and Knudson, D. (2021). Revisiting the Gelman-Rubin diagnostic. Statistical Science, 36:518–529. 
* Češnovar et al., (2020) Češnovar, R., Bronder, S., Sluga, D., Demšar, Ciglarič, T., Talts, S., and Štrumbelj, E. (2020). GPU-based parallel computation support for Stan. arXiv:1907.01063v2. * Vehtari, (2022) Vehtari, A. (2022). Bayesian workflow book - digits. * Vehtari et al., (2021) Vehtari, A., Gelman, A., Simpson, D., Carpenter, B., and Bürkner, P.-C. (2021). Rank-normalization, folding, and localization: An improved $\widehat{R}$ for assessing convergence of MCMC (with discussion). Bayesian Analysis, 16:667–718. * Štrumbelj et al., (2023) Štrumbelj, E., Bouchard-Côté, A., Corander, J., Gelman, A., Hȧvard, R., Murray, L., Pesonen, H., Plummer, M., and Vehtari, A. (2023). Past, present, and future of software for Bayesian inference. Statistical Science. * Zhang et al., (2022) Zhang, L., Carpenter, B., Gelman, A., and Vehtari, A. (2022). Pathfinder: Parallel quasi-Newton variational inference. Journal of Machine Learning Research, 23(306):1–49.
# Disentangling Learning Representations with Density Estimation Eric Yeats$^{1}$ Frank Liu$^{2}$ Hai Li$^{1}$ $^{1}$Department of Electrical and Computer Engineering, Duke University $^{2}$Computer Science and Mathematics Division, Oak Ridge National Laboratory {eric.yeats<EMAIL_ADDRESS><EMAIL_ADDRESS>###### Abstract Disentangled learning representations have promising utility in many applications, but they currently suffer from serious reliability issues. We present Gaussian Channel Autoencoder (GCAE), a method which achieves reliable disentanglement via flexible density estimation of the latent space. GCAE avoids the curse of dimensionality of density estimation by disentangling subsets of its latent space with the Dual Total Correlation (DTC) metric, thereby representing its high-dimensional latent joint distribution as a collection of many low-dimensional conditional distributions. In our experiments, GCAE achieves highly competitive and reliable disentanglement scores compared with state-of-the-art baselines. ## 1 Introduction The notion of disentangled learning representations was introduced by Bengio et al. (2013); it is meant to be a robust approach to feature learning when trying to learn more about a distribution of data $X$ or when downstream tasks for learned features are unknown. Since then, disentangled learning representations have been proven to be extremely useful in the applications of natural language processing Jain et al. (2018), content and style separation John et al. (2018), drug discovery Polykovskiy et al. (2018); Du et al. (2020), fairness Sarhan et al. (2020), and more. Density estimation of learned representations is an important ingredient of competitive disentanglement methods. Bengio et al. (2013) state that representations ${\mathbf{z}}\sim Z$ which are disentangled should maintain as much information of the input as possible while having components which are mutually invariant to one another. Mutual invariance motivates seeking representations of $Z$ which have independent components extracted from the data, necessitating some notion of ${p_{Z}({\mathbf{z}})}$. Leading unsupervised disentanglement methods, namely $\beta$-VAE Higgins et al. (2016), FactorVAE Kim & Mnih (2018), and $\beta$-TCVAE Chen et al. (2018) all learn ${p_{Z}({\mathbf{z}})}$ via the same variational Bayesian framework Kingma & Welling (2013), but they approach making ${p_{Z}({\mathbf{z}})}$ independent from different angles. $\beta$-VAE indirectly promotes independence in ${p_{Z}({\mathbf{z}})}$ via enforcing low $D_{\mathrm{KL}}$ between the representation and a factorized Gaussian prior, $\beta$-TCVAE encourages representations to have low Total Correlation (TC) via an ELBO decomposition and importance weighted sampling technique, and FactorVAE reduces TC with help from a monolithic neural network estimate. Other well-known unsupervised methods are Annealed $\beta$-VAE Burgess et al. (2018), which imposes careful relaxation of the information bottleneck through the VAE $D_{\mathrm{KL}}$ term during training, and DIP-VAE I & II Kumar et al. (2017), which directly regularize the covariance of the learned representation. For a more in-depth description of related work, please see Appendix D. While these VAE-based disentanglement methods have been the most successful in the field, Locatello et al. (2019) point out serious reliability issues shared by all.
In particular, increasing disentanglement pressure during training doesn’t tend to lead to more independent representations, there currently aren’t good unsupervised indicators of disentanglement, and no method consistently dominates the others across all datasets. Locatello et al. (2019) stress the need to find the right inductive biases in order for unsupervised disentanglement to truly deliver. We seek to make disentanglement more reliable and high-performing by incorporating new inductive biases into our proposed method, Gaussian Channel Autoencoder (GCAE). We shall explain them in more detail in the following sections, but to summarize: GCAE avoids the challenge of representing high-dimensional ${p_{Z}({\mathbf{z}})}$ via disentanglement with Dual Total Correlation (rather than TC), and the DTC criterion is augmented with a scale-dependent latent variable arbitration mechanism. This work makes the following contributions: * • Analysis of the TC and DTC metrics with regard to the curse of dimensionality which motivates use of DTC and a new feature-stabilizing arbitration mechanism * • GCAE, a new form of noisy autoencoder (AE) inspired by the Gaussian Channel problem, which permits application of flexible density estimation methods in the latent space * • Experiments (code available at https://github.com/ericyeats/gcae-disentanglement) which demonstrate competitive performance of GCAE against leading disentanglement baselines on multiple datasets using existing metrics ## 2 Background and Initial Findings To estimate ${p_{Z}({\mathbf{z}})}$, we introduce a discriminator-based method which applies the density-ratio trick and the Radon-Nikodym theorem to estimate the density of samples from an unknown distribution. We shall demonstrate in this section the curse of dimensionality in density estimation and the necessity for representing ${p_{Z}({\mathbf{z}})}$ as a collection of conditional distributions. The optimal discriminator neural network introduced by Goodfellow et al. (2014a) satisfies: $\operatorname*{arg\,max}_{D}\mathbb{E}_{{\mathbf{x}}_{r}\sim X_{real}}\left[\log D({\mathbf{x}}_{r})\right]+\mathbb{E}_{{\mathbf{x}}_{f}\sim X_{fake}}\left[\log\left(1-D({\mathbf{x}}_{f})\right)\right]\triangleq D^{*}({\mathbf{x}})=\frac{p_{real}({\mathbf{x}})}{p_{real}({\mathbf{x}})+p_{fake}({\mathbf{x}})}$ where $D({\mathbf{x}})$ is a discriminator network trained to differentiate between “real” samples ${\mathbf{x}}_{r}$ and “fake” samples ${\mathbf{x}}_{f}$. Given the optimal discriminator $D^{*}({\mathbf{x}})$, the density-ratio trick can be applied to yield $\frac{p_{real}({\mathbf{x}})}{p_{fake}({\mathbf{x}})}=\frac{D^{*}({\mathbf{x}})}{1-D^{*}({\mathbf{x}})}$. Furthermore, the discriminator can be supplied conditioning variables to represent a ratio of conditional distributions Goodfellow et al. (2014b); Makhzani et al. (2015). Consider the case where the “real” samples come from an unknown distribution ${\mathbf{z}}\sim Z$ and the “fake” samples come from a known distribution ${\mathbf{u}}\sim U$. Provided that both ${p_{Z}({\mathbf{z}})}$ and ${p_{U}({\mathbf{u}})}$ are finite and ${p_{U}({\mathbf{u}})}$ is nonzero on the sample space of ${p_{Z}({\mathbf{z}})}$, the optimal discriminator can be used to retrieve the unknown density ${p_{Z}({\mathbf{z}})}=\frac{D^{*}({\mathbf{z}})}{1-D^{*}({\mathbf{z}})}p_{U}({\mathbf{z}})$.
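The sketch below (PyTorch, a toy one-dimensional illustration with made-up numbers; it is not the paper's actual implementation) trains a small discriminator to separate samples of an unknown Gaussian from a uniform reference on $(a,b)$ and then reads off a density estimate via this ratio:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

a, b = -4.0, 4.0                      # support of the uniform reference
p_u = 1.0 / (b - a)                   # reference density p_U

# Small discriminator: "real" samples from the unknown p_Z vs. "fake" uniform samples
disc = nn.Sequential(nn.Linear(1, 64), nn.SELU(),
                     nn.Linear(64, 64), nn.SELU(),
                     nn.Linear(64, 1), nn.Sigmoid())
opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(3000):
    z_real = 0.7 * torch.randn(256, 1) + 1.0     # unknown distribution: N(1, 0.7^2)
    u_fake = a + (b - a) * torch.rand(256, 1)    # reference: Unif(a, b)
    preds = disc(torch.cat([z_real, u_fake]))
    labels = torch.cat([torch.ones(256, 1), torch.zeros(256, 1)])
    loss = bce(preds, labels)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Density-ratio trick: p_Z(z) ~= D(z) / (1 - D(z)) * p_U(z)
with torch.no_grad():
    grid = torch.linspace(a, b, 9).unsqueeze(1)
    d = disc(grid)
    p_hat = d / (1.0 - d) * p_u
    p_true = torch.exp(-0.5 * ((grid - 1.0) / 0.7) ** 2) / (0.7 * (2 * torch.pi) ** 0.5)
    print(torch.cat([grid, p_hat, p_true], dim=1))
```

The same construction carries over to conditional densities by concatenating the conditioning variables to the discriminator input, which is the route described next for conditional distributions.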
In our case where ${\mathbf{u}}$ is a uniformly distributed variable, this “transfer” of density through the optimal discriminator can be seen as an application of the Radon-Nikodym derivative of ${p_{Z}({\mathbf{z}})}$ with respect to the Lebesgue measure. Throughout the rest of this work, we employ discriminators with uniform noise and the density-ratio trick in this way to recover unknown distributions. Figure 1: Empirical KL divergence between the true and estimated distributions as training iteration and distribution dimensionality increase; (a) joint distributions, (b) conditional distributions. Training parameters are kept the same between both experiments. We employ Monte-Carlo estimators of KL divergence, leading to transient negative values when KL is near zero. This technique can be employed to recover the probability density of an $m$-dimensional isotropic Gaussian distribution. While it works well in low dimensions ($m\leq 8$), the method inevitably fails as $m$ increases. Figure 1(a) depicts several experiments of increasing $m$ in which the KL divergence between the true and estimated distributions is plotted against training iteration. When the number of data samples is finite and the dimension $m$ exceeds a certain threshold, the probability of there being any uniform samples in the neighborhood of the Gaussian samples swiftly approaches zero, causing the density-ratio trick to fail. This is a well-known phenomenon called the curse of dimensionality of density estimation. In essence, as the dimensionality of a joint distribution increases, concentrated joint data quickly become isolated within an extremely large space. The limit $m\leq 8$ is consistent with the limits of other methods such as kernel density estimation (Parzen-Rosenblatt window). Fortunately, the same limitation does not apply to conditional distributions of many jointly distributed variables. Figure 1(b) depicts a similar experiment to the first in which $m-1$ variables are independent and Gaussian distributed, but the last variable ${\mathbf{z}}_{m}$ follows the distribution ${\mathbf{z}}_{m}\sim\mathcal{N}(\mu=(m-1)^{-\frac{1}{2}}\sum_{i=1}^{m-1}{\mathbf{z}}_{i},\ \sigma^{2}=\frac{1}{m})$ (i.e., the last variable is Gaussian distributed with its mean a scaled sum of the observations of the other variables). The marginal distribution of each component is Gaussian, just like the previous example. While it takes more iterations to bring the KL divergence between the true and estimated conditional distribution to zero, it is not limited by the curse of dimensionality. Hence, we assert that conditional distributions can capture complex relationships between subsets of many jointly distributed variables while avoiding the curse of dimensionality. ## 3 Methodology ### Analysis of Dual Total Correlation Recent works encourage disentanglement of the latent space by penalizing the Total Correlation (TC), either indirectly Higgins et al. (2016); Kumar et al. (2017) or explicitly Kim & Mnih (2018); Chen et al. (2018). TC is a metric of multivariate statistical independence that is non-negative and zero if and only if all elements of ${\mathbf{z}}$ are independent. ${\text{TC}(Z)}=\mathbb{E}_{\mathbf{z}}\log\frac{{p_{Z}({\mathbf{z}})}}{\prod_{i}{p_{Z_{i}}({\mathbf{z}}_{i})}}=\sum_{i}{h(Z_{i})}-{h(Z)}$ Locatello et al. (2019) evaluate many TC-based methods and conclude that minimizing their measures of TC during training often does not lead to VAE $\mu$ (used for representation) with low TC.
We note that computing ${\text{TC}(Z)}$ requires knowledge of the joint distribution ${p_{Z}({\mathbf{z}})}$, which can be very challenging to model in high dimensions. We hypothesize that the need for a model of ${p_{Z}({\mathbf{z}})}$ is what leads to the observed reliability issues of these TC-based methods. Consider another metric for multivariate statistical independence, Dual Total Correlation (DTC). Like TC, DTC is non-negative and zero if and only if all elements of ${\mathbf{z}}$ are independent. $\text{DTC}({\mathbf{z}})=\mathbb{E}_{\mathbf{z}}\log\frac{\prod_{i}{p_{Z_{i}}({\mathbf{z}}_{i}|{\mathbf{z}}_{\setminus i})}}{{p_{Z}({\mathbf{z}})}}={h(Z)}-\sum_{i}{h(Z_{i}|Z_{\setminus i})}$ We use ${\mathbf{z}}_{\setminus i}$ to denote all elements of ${\mathbf{z}}$ except the $i$-th element. At first glance, it appears that $\text{DTC}({\mathbf{z}})$ also requires knowledge of the joint density $p({\mathbf{z}})$. However, observe an equivalent form of DTC manipulated for the $i$-th variable: ${\text{DTC}(Z)}={h(Z)}-{h(Z_{i}|Z_{\setminus i})}-\sum_{j\neq i}{h(Z_{j}|Z_{\setminus j})}={h(Z_{\setminus i})}-\sum_{j\neq i}{h(Z_{j}|Z_{\setminus j})}.$ (1) Here, the $i$-th variable only contributes to DTC through each set of conditioning variables ${\mathbf{z}}_{\setminus j}$. Hence, when computing the derivative $\partial\,{\text{DTC}(Z)}/\partial{\mathbf{z}}_{i}$, no representation of ${p_{Z}({\mathbf{z}})}$ is required - only the conditional entropies ${h(Z_{j}|Z_{\setminus j})}$ are necessary. Hence, we observe that the curse of dimensionality can be avoided through gradient descent on the DTC metric, making it more attractive for disentanglement than TC. However, while one only needs the conditional entropies to compute gradient for DTC, the conditional entropies alone don’t measure how close ${\mathbf{z}}$ is to having independent elements. To overcome this, we define the summed information loss ${\mathcal{L}_{\Sigma I}}$: ${\mathcal{L}_{\Sigma I}}(Z)\triangleq\sum_{i}{I(Z_{i};Z_{\setminus i})}=\left[\sum_{i}{h(Z_{i})}-{h(Z_{i}|Z_{\setminus i})}\right]+{h(Z)}-{h(Z)}={\text{TC}(Z)}+{\text{DTC}(Z)}.$ (2) If gradients of each ${I(Z_{i};Z_{\setminus i})}$ are taken only with respect to ${\mathbf{z}}_{\setminus i}$, then the gradients are equal to $\frac{\partial{\text{DTC}(Z)}}{\partial{\mathbf{z}}}$, avoiding use of any derivatives of estimates of ${p_{Z}({\mathbf{z}})}$. Furthermore, minimizing one metric is equivalent to minimizing the other: ${\text{DTC}(Z)}=0\Leftrightarrow{\text{TC}(Z)}=0\Leftrightarrow{\mathcal{L}_{\Sigma I}}=0$. In our experiments, we estimate ${h(Z_{i})}$ with batch estimates $\mathbb{E}_{{{\mathbf{z}}_{\setminus i}}}{p_{Z_{i}}({\mathbf{z}}_{i}|{\mathbf{z}}_{\setminus i})}$, requiring no further hyperparameters. Details on the information functional implementation are available in Appendix A.1. #### Excess Entropy Power Loss We found it very helpful to “stabilize” disentangled features by attaching a feature-scale dependent term to each ${I(Z_{i};Z_{\setminus i})}$. The entropy power of a latent variable ${\mathbf{z}}_{i}$ is non-negative and grows analogously with the variance of ${\mathbf{z}}_{i}$. Hence, we define the Excess Entropy Power loss: ${\mathcal{L}_{\text{EEP}}}(Z)\triangleq\frac{1}{2\pi e}\sum_{i}\left[{I(Z_{i};Z_{\setminus i})}\cdot e^{2{h(Z_{i})}}\right],$ (3) which weighs each component of the ${\mathcal{L}_{\Sigma I}}$ loss with the marginal entropy power of each $i$-th latent variable. 
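As a quick numerical check of the quantities above, the sketch below evaluates the identity ${\mathcal{L}_{\Sigma I}}=\text{TC}+\text{DTC}$ of equation (2) and the entropy-power weights of equation (3) in closed form for a multivariate Gaussian; this is only an illustration, not part of the training procedure, and it relies on the standard facts that the conditional variance of $Z_i$ given the rest is $1/(\Sigma^{-1})_{ii}$ and that the entropy power of a Gaussian marginal equals its variance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random positive-definite covariance for a 5-dimensional Gaussian Z
m = 5
A = rng.normal(size=(m, m))
Sigma = A @ A.T + m * np.eye(m)
prec = np.linalg.inv(Sigma)
two_pi_e = 2.0 * np.pi * np.e

# Closed-form Gaussian entropies
h_joint = 0.5 * np.log((two_pi_e ** m) * np.linalg.det(Sigma))   # h(Z)
h_marg = 0.5 * np.log(two_pi_e * np.diag(Sigma))                 # h(Z_i)
h_cond = 0.5 * np.log(two_pi_e / np.diag(prec))                  # h(Z_i | Z_\i)

TC = h_marg.sum() - h_joint
DTC = h_joint - h_cond.sum()
sum_I = np.sum(h_marg - h_cond)      # sum_i I(Z_i ; Z_\i)

print(f"TC = {TC:.4f}  DTC = {DTC:.4f}  TC + DTC = {TC + DTC:.4f}  sum_i I = {sum_I:.4f}")

# Entropy power of each marginal, exp(2 h(Z_i)) / (2 pi e), equals its variance here
print(np.allclose(np.exp(2.0 * h_marg) / two_pi_e, np.diag(Sigma)))  # True
```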
Partial derivatives are taken with respect to the ${{\mathbf{z}}_{\setminus i}}$ subset only, so the marginal entropy power only weighs each component. While $\nabla_{\phi}{\mathcal{L}_{\text{EEP}}}\neq\nabla_{\phi}{\mathcal{L}_{\Sigma I}}$ in most situations ($\phi$ is the set of encoder parameters), this inductive bias has been extremely helpful in consistently yielding high disentanglement scores. An ablation study with ${\mathcal{L}_{\text{EEP}}}$ can be found in Appendix C. The name “Excess Entropy Power” is inspired by DTC’s alternative name, excess entropy. ### Gaussian Channel Autoencoder Figure 2: Depiction of the proposed method, GCAE. Gaussian noise with variance $\sigma^{2}$ is added to the latent space, smoothing the representations for gradient-based disentanglement with ${\mathcal{L}_{\text{EEP}}}$. Discriminators use the density-ratio trick to represent the conditional distributions of each latent element given observations of all other elements, capturing complex dependencies between subsets of the variables whilst avoiding the curse of dimensionality. We propose Gaussian Channel Autoencoder (GCAE), composed of a coupled encoder $\phi:X\rightarrow Z_{\phi}$ and decoder $\psi:Z\rightarrow\hat{X}$, which extracts a representation of the data ${\mathbf{x}}\in\mathbb{R}^{n}$ in the latent space ${\mathbf{z}}\in\mathbb{R}^{m}$. We assume $m\ll n$, as is typical with autoencoder models. The output of the encoder has a bounded activation function, restricting ${\mathbf{z}}_{\phi}\in(-3,3)^{m}$ in our experiments. The latent space is subjected to Gaussian noise of the form ${\mathbf{z}}={\mathbf{z}}_{\phi}+\nu_{\sigma}$, where each $\nu_{\sigma}\sim\mathcal{N}(0,\sigma^{2}I)$ and $\sigma$ is a controllable hyperparameter. The Gaussian noise has the effect of “smoothing” the latent space, ensuring that ${p_{Z}({\mathbf{z}})}$ is continuous and finite, and it guarantees the existence of the Radon-Nikodym derivative. Our reference noise for all experiments is ${\mathbf{u}}\sim\text{Unif}(-4,4)$. The loss function for training GCAE is: $\mathcal{L}_{\text{GCAE}}=\mathbb{E}_{{\mathbf{x}},\nu_{\sigma}}\left[\frac{1}{n}\|\hat{{\mathbf{x}}}-{\mathbf{x}}\|^{2}_{2}\right]+\lambda\,{\mathcal{L}_{\text{EEP}}}(Z),$ (4) where $\lambda$ is a hyperparameter to control the strength of regularization, and $\nu_{\sigma}$ is the Gaussian noise injected in the latent space with the scale hyperparameter $\sigma$. The two terms have the following intuitions: the mean squared error (MSE) of reconstructions ensures ${\mathbf{z}}$ captures information of the input while ${\mathcal{L}_{\text{EEP}}}$ encourages representations to be mutually independent. ## 4 Experiments We evaluate the performance of GCAE against the leading unsupervised disentanglement baselines $\beta$-VAE Higgins et al. (2016), FactorVAE Kim & Mnih (2018), $\beta$-TCVAE Chen et al. (2018), and DIP-VAE-II Kumar et al. (2017). We measure disentanglement using four popular supervised disentanglement metrics: Mutual Information Gap (MIG) Chen et al. (2018), Factor Score Kim & Mnih (2018), DCI Disentanglement Eastwood & Williams (2018), and Separated Attribute Predictability (SAP) Kumar et al. (2017). The four metrics cover the three major types of disentanglement metrics identified by Carbonneau et al. (2020) in order to provide a complete comparison of the quantitative disentanglement capabilities of the latest methods. We consider two datasets which cover different data modalities. The Beamsynthesis dataset Yeats et al. 
(2022) is a collection of $360$ time series from a linear particle accelerator beamforming simulation. The waveforms are $1000$ values long and are made of two independent data generating factors: duty cycle (continuous) and frequency (categorical). The dSprites dataset Matthey et al. (2017) is a collection of $737280$ synthetic images of simple white shapes on a black background. Each $64\times 64$ pixel image consists of a single shape generated from the following independent factors: shape (categorical), scale (continuous), orientation (continuous), x-position (continuous), and y-position (continuous). All experiments are run with the PyTorch framework Paszke et al. (2019) on 4 NVIDIA Tesla V100 GPUs, and all methods are trained with the same number of iterations. Hyperparameters such as network architecture and optimizer are held constant across all models in each experiment (with the exception of the dual latent parameters required by VAE models). Latent space dimension is fixed at $m=10$ for all experiments, unless otherwise noted. More details are in Appendix B. Figure 3: Scatter plots of $\log({\mathcal{L}_{\Sigma I}})$ vs MSE and MIG, respectively, as $\sigma$ is increased. (a) Scatter plot of $\log({\mathcal{L}_{\Sigma I}})$ vs. MSE for GCAE on Beamsynthesis and dSprites. Higher $\sigma$ and lower $\log({\mathcal{L}_{\Sigma I}})$ (through increased disentanglement pressure) tend to increase MSE. However, the increase in MSE subsides as the model becomes disentangled. (b) Scatter plot of $\log({\mathcal{L}_{\Sigma I}})$ vs. MIG for GCAE on Beamsynthesis and dSprites (both marked with dots). There is a moderate relationship between $\log({\mathcal{L}_{\Sigma I}})$ and MIG ($r=-0.823$), suggesting $\log({\mathcal{L}_{\Sigma I}})$ is a promising indicator of (MIG) disentanglement. In general, increasing $\lambda$ and $\sigma$ led to lower ${\mathcal{L}_{\Sigma I}}$ but higher MSE at the end of training. Figure 3(a) depicts this relationship for Beamsynthesis and dSprites. Increasing $\sigma$ shifts final loss values towards increased independence (according to ${\mathcal{L}_{\Sigma I}}$) but slightly worse reconstruction error. This is consistent with the well-known Gaussian channel: as the relative noise level increases, the information capacity of a power-constrained channel decreases. The tightly grouped samples in the lower right of the plot correspond to $\lambda=0$, and incorporating any $\lambda>0$ leads to a decrease in ${\mathcal{L}_{\Sigma I}}$ and an increase in MSE. As $\lambda$ is increased further, the MSE increases slightly while the average ${\mathcal{L}_{\Sigma I}}$ decreases. Figure 3(b) plots the relationship between final ${\mathcal{L}_{\Sigma I}}$ values and MIG evaluation scores for both Beamsynthesis and dSprites. Our experiments depict a moderate negative relationship with correlation coefficient $-0.823$. These results suggest that ${\mathcal{L}_{\Sigma I}}$ is a promising unsupervised indicator of successful disentanglement, which is helpful if one does not have access to the ground truth data factors. ### Effect of $\lambda$ and $\sigma$ on Disentanglement Figure 4: Effect of $\lambda$ and $\sigma$ on different disentanglement metrics; (a) Beamsynthesis, (b) dSprites. $\lambda$ is varied along the $x$-axis. Starting from the top left of each subfigure and moving clockwise within each subfigure, we report MIG, FactorScore, SAP, and DCI Disentanglement. Noise levels $\sigma=\\{0.2,0.3\\}$ are preferable for reliable disentanglement performance.
KEY: Dark lines - average scores. Shaded areas - one standard deviation. In this experiment, we plot the disentanglement scores (average and standard deviation) of GCAE as the latent space noise level $\sigma$ and disentanglement strength $\lambda$ vary on Beamsynthesis and dSprites. In each figure, each dark line plots the average disentanglement score while the shaded area fills one standard deviation of reported scores around the average. Figure 4(a) depicts the disentanglement scores of GCAE on the Beamsynthesis dataset. All $\sigma$ levels exhibit relatively low scores when $\lambda$ is set to zero (with the exception of FactorScore). In this situation, the model is well-fit to the data, but the representation is highly redundant and entangled, causing the “gap” or “separatedness” (in SAP) for each factor to be low. However, whenever $\lambda>0$ the disentanglement performance increases significantly, especially for MIG, DCI Disentanglement, and SAP with $\lambda\in[0.1,0.2]$. There is a clear preference for higher noise levels, as $\sigma=0.1$ generally has higher variance and lower disentanglement scores. FactorScore starts out very high on Beamsynthesis because there are just two factors of variation, making the task easy. Figure 4(b) depicts the disentanglement scores of GCAE on the dSprites dataset. Similar to the previous experiment with Beamsynthesis, no disentanglement pressure leads to relatively low scores on all considered metrics ($\sim 0.03$ MIG, $\sim 0.47$ FactorScore, $\sim 0.03$ DCI, $\sim 0.08$ SAP), but introducing $\lambda>0$ significantly boosts performance across the metrics ($\sim 0.35$ MIG, $\sim 0.6$ FactorScore, $\sim 0.37$ SAP, and $\sim 0.45$ DCI for $\sigma=\\{0.2,0.3\\}$). Here, there is a clear preference for larger $\sigma$; $\sigma=\\{0.2,0.3\\}$ reliably lead to high scores with little variance. ### Comparison of GCAE with Leading Disentanglement Methods We run experiments with leading VAE-based baselines and compare them with GCAE at $\sigma=0.2$. Each solid line represents the average disentanglement scores for each method and the shaded areas represent one standard deviation around the mean. Figure 5: Disentanglement metric comparison of GCAE with VAE baselines on Beamsynthesis. GCAE $\lambda$ is plotted on the lower axis, and VAE-based method regularization strength $\beta$ is plotted on the upper axis. KEY: Dark lines - average scores. Shaded areas - one standard deviation. Figure 5 depicts the distributional performance of all considered methods and metrics on Beamsynthesis. When no disentanglement pressure is applied, disentanglement scores for all methods are relatively low. When disentanglement pressure is applied ($\lambda,\beta>0$), the scores of all methods increase. GCAE scores highest or second-highest on each metric, with low relative variance over a large range of $\lambda$. $\beta$-TCVAE consistently scores second-highest on average, with moderate variance. FactorVAE and $\beta$-VAE tend to perform relatively similarly, but the performance of $\beta$-VAE appears highly sensitive to hyperparameter selection. DIP-VAE-II performs the worst on average. Figure 6 shows a similar experiment for dSprites. Applying disentanglement pressure significantly increases disentanglement scores, and GCAE performs very well with relatively little variance when $\lambda\in[0.1,0.5]$. $\beta$-VAE achieves high top scores with extremely little variance but only for a very narrow range of $\beta$.
$\beta$-TCVAE scores very high on average for a wide range of $\beta$ but with large variance in scores. FactorVAE consistently scores highest on FactorScore, and it is competitive on SAP. DIP-VAE-II tends to underperform compared to the other methods. Figure 6: Disentanglement metric comparison of GCAE with VAE baselines on dSprites. GCAE $\lambda$ is plotted on the lower axis, and VAE-based method regularization strength $\beta$ is plotted on the upper axis. KEY: Dark lines - mean scores. Shaded areas - one standard deviation. ### Disentanglement Performance as Z Dimensionality Increases Figure 7: Comparison of GCAE with FactorVAE on dSprites as $m$ increases. $\lambda$ is plotted below, and $\beta$ is plotted above. KEY: Dark lines - mean scores. Shaded areas - one standard deviation. We report the disentanglement performance of GCAE and FactorVAE on the dSprites dataset as $m$ is increased. FactorVAE Kim & Mnih (2018) is the closest TC-based method: it uses a single monolithic discriminator and the density-ratio trick to explicitly approximate ${\text{TC}(Z)}$. Computing ${\text{TC}(Z)}$ requires knowledge of the joint density ${p_{Z}({\mathbf{z}})}$, which is challenging to compute as $m$ increases. Figure 7 depicts an experiment comparing GCAE and FactorVAE when $m=20$. The results for $m=10$ are included for comparison. The average disentanglement scores for GCAE $m=10$ and $m=20$ are very close, indicating that its performance is robust in $m$. This is not the case for FactorVAE: it performs worse on all metrics when $m$ increases. Interestingly, FactorVAE $m=20$ seems to recover its performance on most metrics with higher $\beta$ than is beneficial for FactorVAE $m=10$. Despite this, the difference suggests that FactorVAE is not robust to changes in $m$. ## 5 Discussion Overall, the results indicate that GCAE is a highly competitive disentanglement method. It achieves the highest average disentanglement scores on the Beamsynthesis and dSprites datasets, and it has relatively low variance in its scores when $\sigma=\\{0.2,0.3\\}$, indicating it is reliable. The hyperparameters are highly transferable, as $\lambda\in[0.1,0.5]$ works well on multiple datasets and metrics, and the performance does not change with $m$, contrary to the TC-based method FactorVAE. GCAE also used the same data preprocessing (mean and standard deviation normalization) across the two datasets. We also find that ${\mathcal{L}_{\Sigma I}}$ is a promising indicator of disentanglement performance. While GCAE performs well, it has several limitations. In contrast to the VAE optimization process, which is very robust Kingma & Welling (2013), the optimization of $m$ discriminators is sensitive to choices of learning rate and optimizer. Training $m$ discriminators requires a lot of computation, and the quality of the learned representation depends heavily on the quality of the conditional densities stored in the discriminators. Increasing the latent space noise $\sigma$ seems to make learning more robust and generally leads to improved disentanglement outcomes, but it limits the corresponding information capacity of the latent space. ## 6 Conclusion We have presented Gaussian Channel Autoencoder (GCAE), a new disentanglement method which employs Gaussian noise and flexible density estimation in the latent space to achieve reliable, high-performing disentanglement scores.
GCAE avoids the curse of dimensionality of density estimation by minimizing the Dual Total Correlation (DTC) metric with a weighted information functional to capture disentangled data generating factors. The method is shown to consistently outcompete existing SOTA baselines on many popular disentanglement metrics on Beamsynthesis and dSprites. ## Acknowledgements This research is supported by grants from U.S. Army Research W911NF2220025 and U.S. Air Force Research Lab FA8750-21-1-1015. We would like to thank Cameron Darwin for our helpful conversations regarding this work. This research is supported, in part, by the U.S. Department of Energy, through the Office of Advanced Scientific Computing Research’s “Data-Driven Decision Control for Complex Systems (DnC2S)” project. This research used resources of the Experimental Computing Laboratory (ExCL) at ORNL. This manuscript has been authored by UT-Battelle, LLC, under contract DE-AC05-00OR22725 with the US Department of Energy (DOE). The US government retains and the publisher, by accepting the article for publication, acknowledges that the US government retains a nonexclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this manuscript, or allow others to do so, for US government purposes. DOE will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (http://energy.gov/downloads/doe- public-access-plan). ## References * Bengio et al. (2013) Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. _IEEE transactions on pattern analysis and machine intelligence_ , 35(8):1798–1828, 2013. * Burgess et al. (2018) Christopher P Burgess, Irina Higgins, Arka Pal, Loic Matthey, Nick Watters, Guillaume Desjardins, and Alexander Lerchner. Understanding disentangling in $\beta$-vae. _arXiv preprint arXiv:1804.03599_ , 2018. * Carbonneau et al. (2020) Marc-André Carbonneau, Julian Zaidi, Jonathan Boilard, and Ghyslain Gagnon. Measuring disentanglement: A review of metrics. _arXiv preprint arXiv:2012.09276_ , 2020. * Chen et al. (2018) Ricky TQ Chen, Xuechen Li, Roger B Grosse, and David K Duvenaud. Isolating sources of disentanglement in variational autoencoders. _Advances in neural information processing systems_ , 31, 2018. * Du et al. (2020) Yuanqi Du, Xiaojie Guo, Amarda Shehu, and Liang Zhao. Interpretable molecule generation via disentanglement learning. In _Proceedings of the 11th ACM International Conference on Bioinformatics, Computational Biology and Health Informatics_ , pp. 1–8, 2020\. * Eastwood & Williams (2018) Cian Eastwood and Christopher KI Williams. A framework for the quantitative evaluation of disentangled representations. In _International Conference on Learning Representations_ , 2018. * Goodfellow et al. (2014a) Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. _Advances in neural information processing systems_ , 27, 2014a. * Goodfellow et al. (2014b) Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. _arXiv preprint arXiv:1412.6572_ , 2014b. * Higgins et al. (2016) Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. beta-vae: Learning basic visual concepts with a constrained variational framework. 2016\. * Jain et al. 
(2018) Sarthak Jain, Edward Banner, Jan-Willem van de Meent, Iain J Marshall, and Byron C Wallace. Learning disentangled representations of texts with application to biomedical abstracts. In _Proceedings of the Conference on Empirical Methods in Natural Language Processing. Conference on Empirical Methods in Natural Language Processing_ , volume 2018, pp. 4683. NIH Public Access, 2018. * John et al. (2018) Vineet John, Lili Mou, Hareesh Bahuleyan, and Olga Vechtomova. Disentangled representation learning for non-parallel text style transfer. _arXiv preprint arXiv:1808.04339_ , 2018. * Kim & Mnih (2018) Hyunjik Kim and Andriy Mnih. Disentangling by factorising. In _International Conference on Machine Learning_ , pp. 2649–2658. PMLR, 2018. * Kingma & Welling (2013) Diederik P Kingma and Max Welling. Auto-encoding variational bayes. _arXiv preprint arXiv:1312.6114_ , 2013. * Klambauer et al. (2017) Günter Klambauer, Thomas Unterthiner, Andreas Mayr, and Sepp Hochreiter. Self-normalizing neural networks. In _Proceedings of the 31st international conference on neural information processing systems_ , pp. 972–981, 2017. * Kumar et al. (2017) Abhishek Kumar, Prasanna Sattigeri, and Avinash Balakrishnan. Variational inference of disentangled latent concepts from unlabeled observations. _arXiv preprint arXiv:1711.00848_ , 2017. * Locatello et al. (2019) Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Raetsch, Sylvain Gelly, Bernhard Schölkopf, and Olivier Bachem. Challenging common assumptions in the unsupervised learning of disentangled representations. In _international conference on machine learning_ , pp. 4114–4124. PMLR, 2019. * Makhzani et al. (2015) Alireza Makhzani, Jonathon Shlens, Navdeep Jaitly, Ian Goodfellow, and Brendan Frey. Adversarial autoencoders. _arXiv preprint arXiv:1511.05644_ , 2015. * Matthey et al. (2017) Loic Matthey, Irina Higgins, Demis Hassabis, and Alexander Lerchner. dsprites: Disentanglement testing sprites dataset. https://github.com/deepmind/dsprites-dataset/, 2017. * Paszke et al. (2019) Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. _Advances in neural information processing systems_ , 32:8026–8037, 2019. * Polykovskiy et al. (2018) Daniil Polykovskiy, Alexander Zhebrak, Dmitry Vetrov, Yan Ivanenkov, Vladimir Aladinskiy, Polina Mamoshina, Marine Bozdaganyan, Alexander Aliper, Alex Zhavoronkov, and Artur Kadurin. Entangled conditional adversarial autoencoder for de novo drug discovery. _Molecular pharmaceutics_ , 15(10):4398–4405, 2018. * Sarhan et al. (2020) Mhd Hasan Sarhan, Nassir Navab, Abouzar Eslami, and Shadi Albarqouni. Fairness by learning orthogonal disentangled representations. In _European Conference on Computer Vision_ , pp. 746–761. Springer, 2020. * Tishby et al. (2000) Naftali Tishby, Fernando C Pereira, and William Bialek. The information bottleneck method. _arXiv preprint physics/0004057_ , 2000. * Yeats et al. (2022) Eric Yeats, Frank Liu, David Womble, and Hai Li. Nashae: Disentangling representations through adversarial covariance minimization. In _Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXVII_ , pp. 36–51. Springer, 2022. 
## Appendix A Implementation ### A.1 Information Functional We estimate the information between each latent element and the remaining variables, $I(Z_{i};Z_{\setminus i})$, used in ${\mathcal{L}_{\Sigma I}}$ with a uniform estimate of the information functional: ${I(Z_{i};Z_{\setminus i})}\approx(b-a)\mathbb{E}_{{{\mathbf{z}}_{\setminus i}}\sim\phi_{\sigma}(X)}\left[\mathbb{E}_{{\mathbf{u}}_{i}\sim\text{Unif}(a,b)}\ p_{Z_{i}}({\mathbf{u}}_{i}|{{\mathbf{z}}_{\setminus i}})\left[\log p_{Z_{i}}({\mathbf{u}}_{i}|{{\mathbf{z}}_{\setminus i}})-\log p_{Z_{i}}({\mathbf{u}}_{i})\right]\right],$ where $(a,b)$ are the bounds of the Uniform distribution ($-4$ and $4$ in our experiments), and $p_{Z_{i}}({\mathbf{u}}_{i}|{\mathbf{z}}_{\setminus i})$ is the conditional density of the $i$-th discriminator evaluated with noise from the Uniform distribution. 50 uniform samples are taken per batch to estimate the functional in all experiments. Furthermore, we found it beneficial (in terms of disentanglement performance) to estimate the functional using ${\mathbf{z}}_{\phi}$ (i.e., the noiseless form of ${\mathbf{z}}$); our intuition is that each ${{\mathbf{z}}_{\setminus i}}$ then comes from one of the “modes” of the corresponding Gaussian-blurred distribution, ensuring that the loss is defined, and this avoids the case where the learned conditional distribution is not defined when given a novel ${{\mathbf{z}}_{\setminus i}}$. The gradient is only taken through the $p_{Z_{i}}({\mathbf{u}}_{i}|{\mathbf{z}}_{\setminus i})$ term with respect to the ${{\mathbf{z}}_{\setminus i}}$ variables. The marginal entropy ${h(Z_{i})}$ upper bounds the conditional entropy ${h(Z_{i}|Z_{\setminus i})}$ with respect to the conditioning variables, so the information functional is a natural path to maximizing ${h(Z_{i}|Z_{\setminus i})}$ and thereby minimizing DTC. ## Appendix B Main Experiment Details Each method uses the same architecture (besides the $\mu$, $\log\sigma^{2}$ heads for the VAE) and receives the same amount of data during training. In all experiments, the GCAE AE and discriminator learning rates are $5e-5$ and $2e-4$, respectively. The VAE learning rate is $1e-4$ and the FactorVAE discriminator learning rate is $2e-4$. All methods use the Adam optimizer with $(\beta_{1},\beta_{2})=(0.9,0.999)$ for the AE subset of parameters and $(\beta_{1},\beta_{2})=(0.5,0.9)$ for the discriminator(s) subset of parameters (if applicable). The number of discriminator updates per AE update $k$ is set to 5 when $m=10$ and $10$ when $m=20$. All discriminators are warmed up with 500 batches before training begins to ensure they approximate a valid density. VAE architectures are equipped with a Gaussian decoder for Beamsynthesis and a Bernoulli decoder for dSprites. SELU refers to the SELU activation function Klambauer et al. (2017).

Table 1: MLP Architecture

Dataset | GCAE Architecture | VAE Architecture
---|---|---
Beamsynthesis | Linear($n$, 1024), SELU | Linear($n$, 1024), SELU
BatchSize=64 | Linear(1024, 1024), SELU | Linear(1024, 1024), SELU
Mean/STD Norm | Linear(1024, 512), SELU | Linear(1024, 512), SELU
2000 Iterations | Linear(512, $m$), SoftSign | 2 $\times$ Linear(512, $m$)
 | Linear($m$, 512), SELU | Linear($m$, 512), SELU
dSprites | Linear(512, 1024), SELU | Linear(512, 1024), SELU
BatchSize=256 | Linear(1024, 1024), SELU | Linear(1024, 1024), SELU
Mean/STD Norm (GCAE) | Linear(1024, $n$) | Linear(1024, $n$)
20000 Iterations | |

Table 2: Discriminator Architectures. The FactorVAE architecture follows the suggestion of Kim & Mnih (2018).
The GCAE discriminator is much smaller, but there are $m$ of them compared to just one FactorVAE discriminator.

GCAE Discriminator Architecture | FactorVAE Discriminator Architecture
---|---
Linear($m$, 256), SELU | Linear($m$, 1024), SELU
Linear(256, 256), SELU | Linear(1024, 1024), SELU
Linear(256, 1), Sigmoid | Linear(1024, 1024), SELU
- | Linear(1024, 1024), SELU
- | Linear(1024, 1024), SELU
- | Linear(1024, 1), Sigmoid

## Appendix C Ablation Study Figure 8: Ablation study: Comparison of MIG scores with and without ${\mathcal{L}_{\text{EEP}}}$. ${\mathcal{L}_{\Sigma I}}$ corresponds to direct gradient descent on ${\mathcal{L}_{\Sigma I}}$. Figure 8 depicts an ablation study for training with ${\mathcal{L}_{\text{EEP}}}$ vs. directly with ${\mathcal{L}_{\Sigma I}}$. We found that training directly with ${\mathcal{L}_{\Sigma I}}$ promotes independence between the latent variables, but the learned variables were not stable (i.e., their variance fluctuated significantly in training). The results indicate that ${\mathcal{L}_{\text{EEP}}}$ is a helpful inductive bias for aligning representations with interpretable data generating factors in a way that is stable throughout training. ## Appendix D Related Work ### D.1 Disentanglement Methods GCAE is an unsupervised method for disentangling learning representations; hence, the most closely related works are the state-of-the-art unsupervised VAE baselines: $\beta$-VAE Higgins et al. (2016), FactorVAE Kim & Mnih (2018), $\beta$-TCVAE Chen et al. (2018), and DIP-VAE-II Kumar et al. (2017). All methods rely on promoting some form of independence in ${p_{Z}({\mathbf{z}})}$, and we shall cover them in more detail in the following sections. #### $\beta$-VAE The disentanglement approach of $\beta$-VAE Higgins et al. (2016) is to promote independent codes in $Z$ by constraining the information capacity of $Z$. This is done with a VAE model by maximizing the expectation (over ${\mathbf{x}}$) of the following loss: $\mathcal{L}_{\beta\text{-VAE}}=\mathbb{E}_{q_{\phi}({\mathbf{z}}|{\mathbf{x}})}\left[p_{\theta}({\mathbf{x}}|{\mathbf{z}})\right]-\beta D_{\mathrm{KL}}\left(q_{\phi}({\mathbf{z}}|{\mathbf{x}})\big{|}\big{|}p_{\theta}({\mathbf{z}})\right),$ where $q_{\phi}({\mathbf{z}}|{\mathbf{x}})$ is the approximate posterior (inferential distribution of the encoder), $p_{\theta}({\mathbf{x}}|{\mathbf{z}})$ is the decoder distribution, $p_{\theta}({\mathbf{z}})$ is the prior distribution (typically spherical Gaussian), and $\beta$ is a hyperparameter controlling the strength of the “Information Bottleneck” Tishby et al. (2000) induced on $Z$. Higher values of $\beta$ are associated with improved disentanglement performance. #### FactorVAE The authors of FactorVAE Kim & Mnih (2018) assert that the information bottleneck of $\beta$-VAE is too restrictive, and seek to improve the reconstruction error vs. disentanglement performance tradeoff by isolating the Total Correlation (TC) component of the $D_{\mathrm{KL}}\left(q_{\phi}({\mathbf{z}}|{\mathbf{x}})\big{|}\big{|}p_{\theta}({\mathbf{z}})\right)$ term. They employ a large discriminator neural network, the density-ratio trick, and a data shuffling strategy to estimate the TC.
FactorVAE maximizes the following loss: $\mathcal{L}_{\text{FactorVAE}}=\mathbb{E}_{q_{\phi}({\mathbf{z}}|{\mathbf{x}})}\left[p_{\theta}({\mathbf{x}}|{\mathbf{z}})\right]-D_{\mathrm{KL}}\left(q_{\phi}({\mathbf{z}}|{\mathbf{x}})\big{|}\big{|}p_{\theta}({\mathbf{z}})\right)-\text{TC}_{\rho}(Z),$ where $\text{TC}_{\rho}(Z)$ is the discriminator’s estimate of ${\text{TC}(Z)}$. The discriminator is trained to differentiate between “real” jointly distributed ${\mathbf{z}}$ and “fake” ${\mathbf{z}}$ in which all the elements have been shuffled across a batch. #### $\beta$-TCVAE $\beta$-TCVAE Chen et al. (2018) seeks to isolate ${\text{TC}(Z)}$ via a batch estimate. They avoid significantly underestimating ${p_{Z}({\mathbf{z}})}$ by constructing an importance-weighted estimate of ${h(Z)}$: $\mathbb{E}_{q({\mathbf{z}})}\left[\log q({\mathbf{z}})\right]\approx\frac{1}{B}\sum_{i=1}^{B}\left[\log\frac{1}{BC}\sum_{j=1}^{B}q(\phi(x_{i})|x_{j})\right]$ where $q({\mathbf{z}})$ is an estimate of ${p_{Z}({\mathbf{z}})}$, $B$ is the minibatch size, $C$ is the size of the dataset, $\phi(x_{i})$ is a stochastic sample from the $i$-th $x$, and $q(\phi(x_{i})|x_{j})$ is the density of the posterior at $\phi(x_{i})$ when ${\mathbf{x}}=x_{j}$. This estimate is used to compute an estimate of ${\text{TC}(Z)}$, and the following loss is maximized: $\mathcal{L}_{\beta\text{-TCVAE}}=\mathbb{E}_{q_{\phi}({\mathbf{z}}|{\mathbf{x}})}\left[p_{\theta}({\mathbf{x}}|{\mathbf{z}})\right]-I_{q}(Z;X)-\beta\text{TC}_{\rho}(Z)-\sum_{j=1}^{m}D_{\mathrm{KL}}\left(q({\mathbf{z}}_{j})\big{|}\big{|}p({\mathbf{z}}_{j})\right),$ where $I_{q}(Z;X)$ is the “index-code” mutual information, $\text{TC}_{\rho}(Z)$ is an estimate of ${\text{TC}(Z)}$ computed with their estimate of $q({\mathbf{z}})$, $\beta$ is a hyperparameter controlling ${\text{TC}(Z)}$ regularization, and $\sum_{j=1}^{m}D_{\mathrm{KL}}\left(q({\mathbf{z}}_{j})\big{|}\big{|}p({\mathbf{z}}_{j})\right)$ is a dimension-wise Kullback-Leibler divergence. #### DIP-VAE-II The approach of DIP-VAE-II is that the aggregate posterior of a VAE model should be factorized in order to promote disentanglement Kumar et al. (2017). This is done efficiently using batch estimates of the covariance matrix. The loss to be maximized for DIP-VAE-II is: $\mathcal{L}_{\text{DIP-VAE-II}}=\mathbb{E}_{q_{\phi}({\mathbf{z}}|{\mathbf{x}})}\left[p_{\theta}({\mathbf{x}}|{\mathbf{z}})\right]-D_{\mathrm{KL}}\left(q_{\phi}({\mathbf{z}}|{\mathbf{x}})\big{|}\big{|}p_{\theta}({\mathbf{z}})\right)-\beta\left(\sum_{i=1}^{m}\left[\text{Cov}({\mathbf{z}}_{ii})-1\right]^{2}+\sum_{i=1}^{m}\sum_{j\neq i}\left[\text{Cov}({\mathbf{z}}_{ij})\right]^{2}\right).$ Hence, the covariance matrix of the sampled representation ${\mathbf{z}}$ should be equal to the identity matrix. $\beta$ is a hyperparameter controlling regularization strength. We did not consider DIP-VAE-I since it implicitly assumes knowledge of how many data generating factors there are. ### D.2 Disentanglement Metrics We evaluate GCAE and the leading VAE baselines with four metrics: Mutual Information Gap (MIG), FactorScore, Separated Attribute Predictability (SAP), and DCI Disentanglement. #### Mutual Information Gap MIG is introduced by Chen et al. (2018) as an axis-aligned, unbiased, and general detector for disentanglement. In essence, MIG measures the average gap in information between the latent feature which is most selective for a unique data generating factor and the latent feature which is the runner-up.
MIG is a normalized metric on $[0,1]$, and higher scores indicate better capturing and disentanglement of the data generating factors. MIG is defined as follows: $\text{MIG}(Z,V)\triangleq\frac{1}{K}\sum^{K}_{k=1}\frac{1}{H(V_{k})}\left(I(Z_{a};V_{k})-I(Z_{b};V_{k})\right),$ where $K$ is the number of data generating factors, $H(V_{k})$ is the discrete entropy of the $k$-th data generating factor, and ${\mathbf{z}}_{a}\sim Z_{a}$ and ${\mathbf{z}}_{b}\sim Z_{b}$ (where $a\neq b$) are the latent elements which share the most and next-most information with ${\mathbf{v}}_{k}\sim V_{k}$, respectively. For Beamsynthesis, we calculate MIG on the full dataset using a histogram estimate of the latent space with 50 bins (evenly spaced maximum to minimum). For dSprites, we calculate MIG using 10000 samples, and we use 20 histogram bins following Locatello et al. (2019). #### Factor Score FactorScore is introduced by Kim & Mnih (2018). The intuition is that a change in one dimension of $Z$ should result in a change of at most one factor of variation. It starts off by generating many batches of data in which one factor of variation is fixed for all samples in a batch. Then the variance of each dimension on each batch is calculated and normalized by its standard deviation (without interventions). The index of the latent dimension with the smallest variance and the index of the fixed factor of variation for the given batch are used as a training point for a majority-vote classifier. The score is the accuracy of the classifier on a test set of data. For Beamsynthesis, we train the majority-vote classifier on 1000 training points and evaluate on 200 separate points. For dSprites, we train the majority-vote classifier on 5000 training points and evaluate on 1000 separate points. #### Separated Attribute Predictability Separated Attribute Predictability (SAP) is introduced by Kumar et al. (2017). SAP involves creating an $m\times k$ score matrix, where the $ij$-th entry is the “predictability” of factor $j$ from latent element $i$. For discrete factors, the score is the balanced classification accuracy of predicting the factor given knowledge of the $i$-th latent, and for continuous factors, the score is the $R$-squared value of the $i$-th latent in (linearly) predicting the factor. The resulting score is the difference in predictability of the most-predictive and second-most predictive latents for a given factor, averaged over all factors. For Beamsynthesis, we use a training size of 240 and a test size of 120. For dSprites, we use a training size of 5000 and a test size of 1000. #### DCI Disentanglement DCI Disentanglement is introduced by Eastwood & Williams (2018). It complements other metrics introduced by the paper: completeness and informativeness. The intuition is that each latent variable should capture at most one factor. $k$ decision tree regressors are trained to predict each factor given the latent codes ${\mathbf{z}}$. The absolute importance weights of each decision tree regressor are extracted and inserted as columns in an $m\times k$ importance matrix. The rows of the importance matrix are normalized, and the (discrete) $k$-entropy of each row is computed. The difference between one and each row’s $k$-entropy is weighted by the relative importance of each row to compute the final score. For Beamsynthesis, we use 240 training points and 120 testing points. For dSprites, we use 5000 training points and 1000 testing points.
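Returning to the MIG definition above, the following is a minimal sketch (Python; the mutual-information matrix and factor entropies are assumed to be precomputed, e.g. with the histogram estimates described earlier, and the numbers are purely illustrative) of how the score is assembled once those estimates are available:

```python
import numpy as np

def mig(mi, h_factors):
    """Mutual Information Gap.

    mi        : array of shape (m, K), mi[i, k] = I(Z_i; V_k)
    h_factors : array of shape (K,),   h_factors[k] = H(V_k)
    """
    mi_sorted = np.sort(mi, axis=0)              # sort latents per factor
    gaps = mi_sorted[-1, :] - mi_sorted[-2, :]   # top-1 minus top-2 information
    return np.mean(gaps / h_factors)

# Tiny example: 3 latents, 2 factors (values in nats, made up for illustration)
mi_example = np.array([[1.2, 0.1],
                       [0.2, 0.9],
                       [0.1, 0.8]])
h_example = np.array([1.4, 1.1])
print(round(mig(mi_example, h_example), 3))
```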
## Appendix E Training Time Comparison

Table 3: Comparison of training times of the discriminator-based disentanglement algorithms on Beamsynthesis. Latent space size is fixed to $m=10$ and discriminator training iterations is fixed to $k=5$.

Method | Average (s) | Standard Deviation (s)
---|---|---
GCAE | 955.0 | 13.6
FactorVAE | 1024.4 | 5.8
1 Sorbonne Université, LERMA, Observatoire de Paris, PSL University, CNRS, F-75014, Paris, France (e-mail: <EMAIL_ADDRESS>, <EMAIL_ADDRESS>)
2 Steward Observatory, University of Arizona, 933 N Cherry Ave, Tucson, AZ 85721, USA
3 Collège de France, 11, Place Marcelin Berthelot, F-75005, Paris, France
4 Center for Astrophysics – Harvard and Smithsonian, 60 Garden St. MS09, Cambridge, MA, 02138, USA
5 Sternberg Astronomical Institute, M.V. Lomonosov Moscow State University, 13 Universitetsky prospect, Moscow, 119991, Russia

# The origin of double-peak emission-line galaxies: rotating discs, bars or galaxy mergers?

Daniel Maschmann 1,2, Anaëlle Halle 1, Anne-Laure Melchior 1, Françoise Combes 1,3, Igor V. Chilingarian 4,5

(Received 26 July 2022 / accepted )

Emission lines with a double-peak (DP) shape, detected in the centre of galaxies, have been extensively used in the past to identify peculiar kinematics such as dual active galactic nuclei (AGN), outflows or mergers. With a more general approach considering a large DP galaxy sample selected from the SDSS, a connection to minor-merger galaxies with ongoing star formation was suggested. To gain a better understanding of the different mechanisms creating a DP signature, we here explore synthetic SDSS spectroscopic observations computed from disc models and simulations. We show how a DP signature is connected to the central part of the rotation curve of galaxies, which is mostly shaped by the stellar bulge. We furthermore find that bars can create strong DP emission-line signatures when viewed along their major axis. Major mergers can form a central rotating disc in late post-coalescence merger stages (1 Gyr after the final coalescence), which creates a DP signature. Minor mergers tend to show a DP feature with no correlation to the galaxy inclination within 350 Myr after the final coalescence. Comparisons of these scenarios with observations disfavour major mergers, since their remnants show predominantly elliptical and only a few S0 morphologies. Furthermore, at such a late merger stage the enhanced star formation has most likely faded. Bars and minor mergers, on the other hand, compare quite well with the observations. Both scenarios are consistent with the increased star formation found in observations, and minor mergers in particular do not show any dependence on the observation direction. However, observations resolving the galaxy kinematics spatially are needed to distinguish between the discussed possibilities. More insight into the origin of DP emission lines will be gained by a broader comparison with cosmological simulations. Understanding the DP origin can provide important tools to study the mass growth of galaxies in future high-redshift surveys.

###### Key Words.: galaxies: kinematics and dynamics, galaxies: interactions, galaxies: evolution, Methods: numerical, techniques: spectroscopic

## 1 Introduction

The evolution of galaxies involves dynamical processes such as galaxy mergers, whose frequency remains difficult to measure over cosmic time. Studies based on photometry, for example, may not always be efficient at identifying these processes, while kinematics may be misleading. Mergers have been extensively studied using simulations (e.g. Toomre & Toomre, 1972; Athanassoula & Bosma, 1985; Hernquist & Mihos, 1995; Bournaud et al., 2005b; Di Matteo et al., 2007; Lotz et al., 2010) and observations (e.g.
Combes et al., 1994; Bergvall et al., 2003; Lotz et al., 2004; De Propris et al., 2005; Ellison et al., 2008, 2013), resulting in a good understanding on how galaxy mergers can fuel star formation, trigger active galactic nuclei (AGN) and transform the morphology of galaxies. Especially studies dealing with different stages of galaxy merger rely on an accurate identification of mergers. Interacting galaxies can be identified through their projected separation (De Propris et al., 2005; Ellison et al., 2008; Patton et al., 2011). Major mergers in an early phase of their coalescence show strong tidal features and can be identified though their perturbed morphology (e.g. Lotz et al., 2004). After the final coalescence, tidal features and perturbations will gradually fade and it becomes increasingly difficult to correctly distinguish between post-merger galaxies and isolated galaxies. From hydrodynamical simulations, major (resp. minor) mergers can be identified after $\sim 200-400$ Myr (resp. 60 Myr) using photometric diagnostics (Lotz et al., 2010). Using a combination of several photometric classifiers to a linear discriminant analysis, Nevin et al. (2019) succeeded in identifying galaxy mergers over a merger timescale of 2 Gyr. Including stellar kinematics measured with integrated field spectroscopic observations, Nevin et al. (2021) increased the detection sensitivity for post-coalescence mergers. However, it remains challenging to apply these techniques to observations and identify post-coalescence mergers. As predicted in Begelman et al. (1980), the two super-massive black holes of the progenitors of a merger will eventually merge in the course of the coalescence. Previous to this event, the two nuclei will stay at a separation $>1\,{\rm kpc}$ for $\sim\,100\,{\rm Myr}$. When both nuclei are AGNs, it is possible to observe this phenomenon using telescope providing high enough resolution. Such dual AGNs were observed using X-ray observation (Komossa et al., 2003), radio observations (Maness et al., 2004; Rodriguez et al., 2006) and long-slit spectroscopy, revealing a double-peak (DP) signature (Gerke et al., 2007). The connection between the kinematic footprint and a dual AGN was further discussed in Comerford et al. (2009a). Systematic studies on DP emission-line AGNs using additional high resolution observations were able to distinguish between dual AGNs, AGN driven outflows or rotating discs (Comerford et al., 2011; Comerford & Greene, 2014; Comerford et al., 2015, 2018; Müller-Sánchez et al., 2015; Nevin et al., 2016). In general a DP emission-line profile traces multiple line-of-sight velocities. AGNs are compact and bright sources and therefore dual AGNs, moving at two different velocities, are particularly interesting to study late stages of mergers. Ge et al. (2012) built up a DP-galaxy sample, including also non-AGNs and gathered 3 030 galaxies, of which only 30 % are classified as AGNs. These DP emission-line signatures can have various causes: a compact rotating disc, gas outflow or inflow, two nuclei or the alignment of two galaxies inside the line-of-sight. In Maschmann et al. (2020) (hereafter M20) 5 663 DP emission-line galaxies were selected using an automated selection procedure. Interestingly, only 14 % were found to be AGNs. Different scenarii were discussed to explain the origin of DP emission lines and a recent minor merger was favoured as the underlying process. 
As these results are particularly relevant for this work, the main findings are explained in detail in Sect. 2.1. On the one hand, it is still challenging to conclude on the origin of DP emission lines for an individual galaxy, relying only on one optical spectrum and a snapshot. On the other hand, a merger scenario becomes increasingly likely if one finds different characteristics in the two emission-line components (Maschmann & Melchior, 2019). Using integrated field spectroscopy, Mazzilli Ciraulo et al. (2021) detected two galaxies aligned inside the line-of-sight, creating a DP emission line. In a recent study, the molecular gas content of DP galaxies selected from above the main star-forming sequence was studied in Maschmann et al. (2021). 20 % of the DP galaxies show the same kinematic feature in the CO emission line distribution which traces the molecular gas, indicating highly concentrated gas reservoir. Furthermore, in nearly all galaxies, a central star formation enhancement was found, and 50 % of the sample were identified as visual mergers or showed tidal features. Taking into account that the observed galaxies have a significantly larger molecular gas reservoir than expected for galaxies situated above the main sequence, the most plausible explanation of the DP emission line profile was found to be a recent minor merger which funnelled gas into the central regions and fuels a compact star-formation region. To better understand the observed DP emission-lines, we here use models and simulations of galaxies. We investigate possible origins of DP emission lines in this work and determine under which conditions a DP signature may be detected in isolated galaxies, ongoing mergers and post-mergers. More precisely, we seek to identify DP emission lines in the conditions of observations with a SDSS-like $3^{\prime\prime}$ spectroscopic fibre observations centred on the brightest region of the targeted system. We study the connection between identified DP signatures in the line-of-sight and the kinematic processes inside the observed systems. In Sect. 3, we describe axisymmetric models of disc galaxies and then study numerical simulations of such galaxies in which non-axisymmetric patterns, especially bars (in the central regions of interest), form. In Sect. 4, we characterise major and minor-merger simulations and identify under which circumstances a DP emission line can be detected. We then discuss in Sect. 5 the found results in the context of past work on DP emission line galaxies and conclude in Sect. 6. In this work, a cosmology of $\Omega_{m}=0.3$, $\Omega_{\Lambda}=0.7$ and $h=0.7$ is assumed. ## 2 Observations of double-peak emission-line galaxies in the SDSS The focus of this work is to determine the origin of DP emission-line profiles. To accomplish this, we analyse synthetic emission-line spectra from galaxy models and galaxy simulations. To frame this analysis in the context of observations, we here recapitulate the results of M20 and summarise the most important sample characteristics of their assembled DP galaxy sample. We then select three redshift values in order to represent the redshift distribution of the DP sample found in M20 and describe how to detect DP profiles in synthetic emission-line spectra. ### 2.1 Double-peak detection in M20 The selection procedure of M20 is divided into multiple stages which make use of emission-line parameters provided by the Reference Catalogue of Spectral Energy Distribution (RCSED) (Chilingarian et al., 2017). 
In a first step, galaxies with a high enough signal-to-noise ratio of ${\rm S/N}>10$ in either the H$\alpha$ or the [OIII]$\lambda 5008$ emission lines were selected. Then, galaxies with emission-lines which are better described by a non-parametric fit than by a single-Gaussian fit were selected and all emission-lines with a ${\rm S/N}>5$ were stacked. The resulting emission-line profile was fitted by both a single and a double-Gaussian function. Relying on an F-test of the two fits, an amplitude ratio threshold of the two double-Gaussian components, and a minimal threshold in velocity difference $\Delta v_{\rm DP}>3\delta v$, with $\delta v$ the SDSS bin-width of $69\,{\rm km\,s^{-1}}$, 7 479 DP-candidates were selected. In a second stage, each emission line was individually fitted with a single and a double-Gaussian fit. The double-Gaussian fit is restrained to the parameters found from the stacked emission line, however, the parameters can still vary within their uncertainties. All emission lines with a ${\rm S/N}>5$ were flagged as a DP emission line if they satisfy the following conditions: (1) the reduced chi-square value of the double Gaussian fit must be smaller than the value for the single Gaussian fit, (2) the double-Gaussian amplitude ratio $A_{1}/A_{2}$ must fulfill the condition ${1}/{3}<A_{1}/A_{2}<3$, and (3) each of the double-Gaussian emission-line component must be detected with at least ${\rm S/N}>3$. In a third stage, galaxies were selected with a DP in their strongest emission lines, resulting in a final sample of 5 663 DP galaxies. In order to compare the selected DP sample to galaxies with only a single peaked (SP) emission-line profile, a no-bias-control-sample was selected with the same emission-line S/N properties, redshift distribution and stellar mass distribution as the DP sample. Analysing the morphology of these two samples, the same visual merger rate was found between DP and SP galaxy. However, DP galaxies are more likely to be classified as S0 galaxies (36 %) in comparison to SP galaxies (20 %). Furthermore, DP galaxies classified as spiral galaxies tend to have larger bulges and are more likely classified as Sa or Sb galaxies whereas SP galaxies tend to be classified as Sc and Sd. A detailed analysis of the spectroscopic kinematics revealed a significant higher stellar velocity dispersion in DP galaxies in comparison to SP galaxies. A correlation between the galaxy inclination and the gas kinematics was found for SP galaxies, but not for DP galaxies. DP galaxies also deviate from the Tully-Fisher relation in contrast to SP galaxies. When considering each individual fit component of the DP sample, however, a good agreement with the Tully-Fisher relation is found. Considering star-forming galaxies, a central star-formation enhancement was found for DP galaxies but not for SP galaxies. Conclusively, these observations agree in particular with a model of repetitive minor mergers which effectively transport gas into the central regions and drive bulge growth as described in Bournaud et al. (2007). ### 2.2 SDSS spectroscopic measurements at different redshifts Figure 1: Redshift distribution of the DP galaxy sample (M20) (top panel) and the conversion curve between redshift and the fibre diameter in kpc of the SDSS 3′′ (red curve in the bottom panel). We mark with blue dashed line the three representative redshifts and corresponding fibre diameters: $z=0.05$, $z=0.1$ and $z=0.17$ corresponding to a fibre diameter of 3, 6 and 10 kpc, respectively. 
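The conversion curve of Fig. 1 between redshift and the physical diameter covered by the 3″ fibre can be reproduced with a standard cosmology calculator. The short sketch below uses astropy with the cosmology adopted in this work ($\Omega_{m}=0.3$, $\Omega_{\Lambda}=0.7$, $h=0.7$); whether the curve uses a proper or a comoving transverse scale is not stated explicitly, and the comoving scale is chosen here as an assumption because it matches the quoted reference value of 10 kpc at $z=0.17$ more closely.

```python
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

# Cosmology adopted in this work (Sect. 1): Omega_m = 0.3, Omega_L = 0.7, h = 0.7.
cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)

def fibre_diameter_kpc(z, fibre_arcsec=3.0):
    """Transverse size (kpc) covered by the 3'' SDSS fibre at redshift z.
    A comoving transverse scale is assumed here (see the note above);
    use cosmo.kpc_proper_per_arcmin for a proper-distance scale instead."""
    scale = cosmo.kpc_comoving_per_arcmin(z)
    return (fibre_arcsec * u.arcsec * scale).to(u.kpc).value

for z in (0.05, 0.10, 0.17):
    print(f"z = {z:.2f}: {fibre_diameter_kpc(z):4.1f} kpc")
# Gives approximately 3, 6 and 10 kpc, the representative values marked in Fig. 1.
```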
The spectroscopic observation in the SDSS is taken within a 3′′ region centred on the brightest spot of a galaxy (Abazajian et al., 2009). Hence, this spectrum probes the central 0.6 kpc in low redshift galaxies at $z=0.01$ and 30 kpc for the most distant spectroscopic observations in the SDSS at about $z=0.55$. In the latter case, the SDSS spectrum probes roughly the entire galaxy, whereas for a nearby galaxy the spectroscopic measurement probes only the very centre. In Fig. 1, we show the redshift distribution of the DP galaxy sample of M20 and a conversion curve between the fibre diameter and the redshift. The DP sample has a median redshift of $z=0.11$ and 99 % of the sample has a redshift of $z<0.22$. Only 57 galaxies are situated at higher redshift up to a value of $z=0.34$. In order to represent this distribution, we select three representative redshift values: $z=0.05$, $z=0.1$, and $z=0.17$, corresponding to a fibre diameter of 3, 6, and 10 kpc, respectively. In the following, we will analyse simulated SDSS spectral observations of analytical models and galaxy simulations with these three fibre diameters. ### 2.3 Double-peak detection in synthetic emission-line spectra In order to test whether a computed emission-line profile from an axisymmetric model or a galaxy simulation shows a double-peak feature, we develop a detection algorithm similar to M20. In a first step, we convolve the produced line-of-sight velocity profiles with the mean instrumental broadening of $61\,{\rm km\,s^{-1}}$ (M20) from the SDSS spectral detector, and compute the resulting signal with the SDSS bin-width of $\delta v=69\,{\rm km\,s^{-1}}$. We then fit a single and a double Gaussian function to the velocity profile and select DP galaxies satisfying the following criteria: 1. 1. $\chi^{2}_{\nu}({\rm single})>\chi^{2}_{\nu}({\rm double})$ 2. 2. ${1}/{3}<A_{1}/A_{2}<3$ 3. 3. $\Delta v_{\rm DP}=|\mu_{2}-\mu_{2}|>3\,\delta v$ where $\chi^{2}_{\nu}({\rm single})$ (resp. $\chi^{2}_{\nu}({\rm double})$) is the reduced chi-square computed for the single (resp. double) Gaussian fit, $A_{1}$ and $A_{2}$ are the amplitudes of the two Gaussian functions in the double Gaussian fit and $\Delta v_{\rm DP}$ is the velocity difference between the blue and redshifted component. In a first step of selection of DP candidates in M20, an F-test was used. However, this was mostly motivated to distinguish a DP from a SP profile in the case of a noisy spectra. Since we do not include noise in our synthetic emission-line profiles, we only use the chi-square ratio as such a selection criterion. ## 3 Rotating discs Double-peaked emission lines can be due to the rotation of discs. In order to investigate when such a detection of DP is possible, we first construct an idealised galaxy model with an axisymmetric rotating gas disc. We modify the rotation curve of the model by varying the mass concentration of a stellar bulge and study the resulting gas line-of-sight velocity distribution. We also study the effect of a change in the concentration of the gas density profile. Using simulations of isolated galaxies, we then investigate how the presence of a bar may impact the detection of a DP signature. ### 3.1 Axisymmetric models Table 1: Mass and length parameters for the Sa galaxy. 
$M_{\rm gas}$ | $M_{\rm*\,disc}$ | $M_{\rm*\,bulge}$ | $M_{\rm DM}$ | $[2.3\times 10^{9}M_{\odot}]$ ---|---|---|---|--- 4 | 40 | 10 | 50 | $a_{\rm gas}$ | $h_{\rm gas}$ | $a_{\rm*\,disc}$ | $h_{\rm*\,disc}$ | $b_{\rm*\,bulge}$ | $b_{\rm DM}$ | [kpc] ---|---|---|---|---|---|--- 5 | 0.2 | 4 | 0.5 | 0.2-3 | 10 | Figure 2: Rotation curves of a disc galaxy for different characteristic radii of the stellar bulge. We show with coloured lines the contribution of each component, and in black the total rotation curve. In blue (resp. orange), we show the contributions of the stellar (resp. gaseous) disc, described by Miyamoto-Nagai density profiles. With a green (resp. red) line, we show the contributions of the stellar bulge (resp. dark-matter halo), described by Plummer density profiles. We show with different line styles the contribution of the bulge and the total rotation curve for bulges with different characteristic radii $b$. A characteristic bulge radius $b=2$ (thick solid green and black lines) corresponds to the fiducial Sa galaxy. Vertical grey lines are plotted at the radii of the simulated fibre for the redshifts $z=0.05$, 0.1 and 0.17. Figure 3: Gaseous disc model at two different inclinations. We show 2D projections of a Miyamoto-Nagai density profile as described in Equation 1 at an inclination of $i=30^{\circ}$ (top panel) and $i=60^{\circ}$ (bottom panel). Both discs are turned by a position angle of $20^{\circ}$. The colour-bar indicates the surface density. With black lines, we indicate iso-velocity curves, with velocity values separated by 30 km s-1 (with a value of 0 km s-1 on the minor axes). With an orange (resp. green) circle, we show the area observed by a $3^{\prime\prime}$ spectroscopic fibre at a redshift of $z=0.05$ (resp. $z=0.17$). The more inclined the disc, the smaller the distance of equidistant velocity lines and thus the steeper the velocity gradient probed by the spectroscopic fibre. The model, for its fiducial set of parameters, reproduces an Sa galaxy. Potential-density pairs are used for all four components. The gas and stellar discs each have a Miyamoto-Nagai density profile (Miyamoto & Nagai, 1975): $\rho_{d}(R,z)=\left(\dfrac{h^{2}M}{4\pi}\right)\dfrac{aR^{2}+(a+3\sqrt{z^{2}+h^{2}})(a+\sqrt{z^{2}+h^{2}})^{2}}{\left[a^{2}+(a+\sqrt{z^{2}+h^{2}})^{2}\right]^{\frac{5}{2}}(z^{2}+h^{2})^{\frac{3}{2}}},$ (1) where $M$ is the total mass of the disc, $a$ is a radial scale length, and $h$ is a vertical scale length. The stellar bulge and the dark-matter halo each have a Plummer profile (Binney & Tremaine, 1987, pag. 42): $\rho_{s}(r)=\left(\frac{3M}{4\pi r^{3}}\right)\left(1+\frac{r^{2}}{b^{2}}\right)^{-\frac{5}{3}},$ (2) where $M$ is the total mass of the component and $b$ a characteristic radius. The profile parameters for the four components are given in Table 1, for an Sa galaxy. The rotation curve is shown on Fig. 2 (thick black curve for an Sa), with the detail of the contributions of the different components. 
The individual contribution of each disc component is $\sqrt{R\dfrac{\partial\Phi_{d}}{\partial R}\bigg{\rvert}_{z=0}}$ with $\Phi_{d}$ the gravitational potential of the disc component: $\Phi_{d}(R,z)=-\dfrac{GM}{\sqrt{R^{2}+(a+\sqrt{z^{2}+h^{2}})^{2}}},$ (3) and the individual contribution of each spherical component (stellar bulge or dark-matter halo) is $\sqrt{r\dfrac{\partial\Phi_{s}}{\partial r}}$ with $r$ the spherical radius and $\Phi_{s}$ the gravitational potential of the spherical component: $\Phi_{s}(r)=-\dfrac{GM}{\sqrt{r^{2}+b^{2}}}.$ (4) The rotation curve is then obtained as the square root of the quadratic sum of the four contributions. For such an Sa galaxy, the bulge dominates the rotation curve in the central parts, creating a steep rise of the rotation curve at small galactocentric radii (see the thick green curve representing the bulge contribution on Fig. 2). #### 3.1.1 Emission-lines of a fiducial Sa galaxy Figure 4: Emission-line profiles of gaseous-disc model observed at different inclinations. On the top left, we show a rotation curve calculated by a model including a stellar and gaseous disc, a stellar bulge and a dark-matter halo, parametrised as summarised in Table 1 with $b=2.0\,{\rm kpc}$. We compute the emission-line profiles observed within a $3^{\prime\prime}$ spectroscopic fibre for two redshift values: $z=0.05$ and $z=0.17$. The region probed by these observations are marked by orange and green line, respectively. For the emission-line profiles we also show the two spectroscopic observations in green and orange, with an off-set to the observation of $z=0.05$ (orange spectra) to show them above the observation of $z=0.17$ (green spectra). In the second column from the left, we show the measured line of sight velocity as described by Equation 6. In the third column from the left, we show the observed spectra convolved by the SDSS mean instrumental broadening of 61 km s-1. On the rightmost column, we show this signal binned to the detector resolution of the SDSS with a bin-width of 69 km s-1. We fit a single and a double Gaussian function to the observations presented by yellow and black lines, respectively. For the double Gaussian function we show its blueshifted (resp. redshifted) component by dotted blue (resp. dashed red) lines. The different rows show different values of inclination as indicated in the titles. Figure 3 shows the mass surface density of the gas disc of this Sa galaxy for two different disc inclinations. Iso-velocity curves with values spaced by 30 km/s are over-plotted, starting at 0 km/s on the minor axes. The line-of-sight velocity $V$ is such that: $V=V_{\rm rot}\cos\phi\sin i,$ (5) with $V_{\rm rot}$ the rotation velocity (obtained from the rotation curve, assuming a zero gas velocity dispersion), $\phi$ is the azimutal angle in the disc plane ($\phi=0\,[\pi]$ on the major axis), and $i$ is the inclination of the disc with respect to the line-of-sight ($i=0$ for a face-on disc). The larger the inclination of the disc, the larger the amplitude in line-of-sight velocity: for $i=30^{\circ}$ the iso-velocity contours have extreme values of -120 and 120 km/s (closed iso-contours near the major axis) while for $i=60^{\circ}$, the smallest and largest values are -210 and 210 km/s. The distribution of line-of-sight velocities is thus wider for larger inclinations, and the number of iso-velocity curves encompassed by a given fibre size is larger, as can be seen from the two represented fibres, of diameters 3 and 10 kpc. 
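Before turning to the resulting line-of-sight velocity distribution, the rotation curve just described can be illustrated with a short numerical sketch: each circular-velocity contribution follows directly from the potentials of Eqs. 3 and 4, and the total curve is the quadratic sum of the four components, using the fiducial Sa parameters of Table 1 (masses in units of $2.3\times 10^{9}\,M_{\odot}$, fiducial bulge radius $b=2$ kpc). The code below is only an illustration of Eqs. 1-5, not the implementation used for the figures.

```python
import numpy as np

G = 4.301e-6          # gravitational constant in kpc (km/s)^2 / Msun
M_UNIT = 2.3e9        # mass unit of Table 1 in Msun

def v_miyamoto_nagai(R, M, a, h, z=0.0):
    """Circular-velocity contribution of a Miyamoto-Nagai disc,
    v = sqrt(R dPhi/dR), from the potential of Eq. 3, evaluated at z = 0."""
    s = a + np.sqrt(z**2 + h**2)
    return np.sqrt(G * M * R**2 / (R**2 + s**2)**1.5)

def v_plummer(r, M, b):
    """Circular-velocity contribution of a Plummer sphere,
    v = sqrt(r dPhi/dr), from the potential of Eq. 4."""
    return np.sqrt(G * M * r**2 / (r**2 + b**2)**1.5)

def v_rot(R, b_bulge=2.0):
    """Total rotation curve: quadratic sum of the four components
    with the fiducial Sa parameters of Table 1."""
    v2 = (v_miyamoto_nagai(R, 4 * M_UNIT, 5.0, 0.2)**2      # gaseous disc
          + v_miyamoto_nagai(R, 40 * M_UNIT, 4.0, 0.5)**2   # stellar disc
          + v_plummer(R, 10 * M_UNIT, b_bulge)**2           # stellar bulge
          + v_plummer(R, 50 * M_UNIT, 10.0)**2)             # dark-matter halo
    return np.sqrt(v2)

def v_los(R, phi, i, b_bulge=2.0):
    """Line-of-sight velocity of gas at (R, phi) in a disc inclined by i (Eq. 5).
    phi and i are in radians; phi = 0 on the major axis."""
    return v_rot(R, b_bulge) * np.cos(phi) * np.sin(i)

R = np.linspace(0.1, 20.0, 200)   # galactocentric radius in kpc
print(v_rot(R)[:5])               # steep central rise for the fiducial b = 2 kpc bulge
```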
The fraction of gas observed with a line-of-sight velocity $V$, i.e. the line-of-sight-velocity spectrum, can be computed following Wiklind et al. (1997) as: $\dfrac{\mathrm{d}M}{\mathrm{d}v}(V)=\int_{0}^{R_{\rm max}}\dfrac{\Sigma_{\rm gas}(R)R\mathrm{d}R}{V_{\rm rot}(R)\sqrt{1-\left(\dfrac{V}{V_{\rm rot}(R)\sin i}\right)^{2}}\sin i}$ (6) where the integration goes from $R=0$ to a maximum galactocentric radius $R_{\rm max}$ (corresponding to the simulated SDSS fibre, for example), and $\Sigma_{\rm gas}(R)$ is the gas surface density. In particular, a double-horn profile is found for a constant $V_{\rm rot}$ (see Wiklind et al., 1997). However, the formula is only approximate when applied for a radius $R_{\rm max}$ smaller than the disc size, with an error increasing with inclination. We thus use simulated models of gas discs with Miyamoto-Nagai density profiles, setting the rotation velocity from the modelled rotation curve, and we measure the line-of-sight velocity of the gas inside the different fibres for different inclinations. In Fig. 4, we simulate the detection of double peaks for this Sa galaxy with the fibres of 3 and 10 kpc diameter for four different inclinations of the disc. The spectra obtained with Eq. 6 and shown in the second column from the left are, as explained in Sect. 2.3, convolved with the mean instrumental broadening of $61\,{\rm km\,s^{-1}}$ of the SDSS spectral detector (the result of this convolution is shown in the third column from the left), and then binned with the SDSS bin-width of $\delta v=69\,{\rm km\,s^{-1}}$. The spectra are broader for higher inclinations because of the $\sin i$ term in Eq. 5. For a fixed gas density profile and rotation curve, the shape of the spectra depends on the fibre size. For the small fibre, encompassing only the beginning of the rise of the rotation curve, the spectra are single-peaked. However, a ”double-horn” structure, with a central dip and sharp vertical limits at the terminal velocities, appears for the larger fibre size. When viewing the disc edge-on, the double-horn shape changes to a box-like shape. This is due to the fraction of the disc moving perpendicular to the observer, which is only covered from an edge-on perspective. The instrumental broadening significantly alters the emission-line shape: e.g. the maxima of the horns are moved closer to the centre of the spectrum, making the spectra single-peaked for low inclinations, and the steepness of the edges is reduced. The result of the binning is shown in the rightmost column, with both a single-Gaussian fit and a double-Gaussian fit. For the latter fit, the two components are shown in blue and red. Using the three criteria of Sect. 2.3, a double peak is identified only for $z=0.17$ (10 kpc diameter fibre) at an inclination of $90^{\circ}$. The difference of velocities of the two peaks $\Delta v_{\rm DP}$ is too small in the other cases for a double peak to be identified according to our criteria. The signal from each gas particle decreases depending on the amount of dust within the line of sight. As shown by Baes & Dejonghe (2000) and Baes et al. (2000), this can cause a significant decrease in the intensity at 0 km s-1 and alter the emission-line shape. This effect would favour a DP structure and might lead to a higher DP detection rate. However, the inclusion of this effect is not straightforward.
The estimation of dust extinction is strongly depending on the wavelength (Fitzpatrick, 1999) and factors like the dust-to- gas mass ratio (Bohlin et al., 1978) and the metallicity (Salim & Narayanan, 2020). In practice this means that for the simple galaxy models chosen in this work it would be difficult to select a certain set of extinction models. In addition, since we are interested in the qualitative question of how different mechanisms can cause DP signatures, we will not include dust extinction in this work. #### 3.1.2 Effect of total mass concentration on the emission-lines Figure 5: Scans of DP detections for different inclinations and bulge concentrations of a modelled galaxy. Following Eq. 6, we computed the line-of- sight velocity profile as a function of inclination and the characteristic radius of the bulge $b$. We perform the DP selection procedure described in Sect. 2.3. We show from left to right the results for a redshift $z=0.05$, $z=0.1$ and $z=0.17$, respectively. In each of the three panels, we show the $\Delta v$ resulting from the double-Gaussian fit with the colour coding. We mark the parameter combinations where we do not detect a DP profile with black hatches. On the left side of each panel, we show the ratio between the maximal velocity value inside the spectroscopic fibre and the maximal velocity found in the entire rotation curve. For a given (non-zero) disc inclination and a given fibre size, the detection of a double-peak is favoured by a combination of a gas density profile and a rotation curve such that more gas is probed at large line-of-sight-velocities than at small velocities (corresponding to gas on the minor axis of the disc). In order to show the effect of the shape of the rotation curve, which depends on the total mass concentration, we now keep a constant gas density profile, constant stellar disc and dark-matter halo profiles, but change the steepness of rising of the rotation curve by varying the concentration of the stellar bulge. The effect of this change is visible on Fig. 2, in which the scale length of the bulge spans from 0.3 kpc to 3 kpc (decreasing the mass concentration of the bulge and hence also of the galaxy). The rotation curve rises monotonously in the first 5 kpc for large scale lengths (low mass concentrations) while it peaks very near the centre of the galaxy for small scale lengths (high mass concentrations). The difference of velocity of the two peaks obtained by the fitting procedure of Sect. 2.3, $\Delta v_{\rm DP}$, is represented (colour-coded) for different bulge scale-lengths and disc inclinations on the three panels of Fig. 5, with one panel per redshift (fibre size). The part of the rotation curves encompassed by the fibres can be seen on Fig. 2, while on the sub-panels of Fig. 5 at the left of each main panel, we represent the ratio of the maximal velocity value inside the fibre to the maximal velocity in the rotation curve. Double peaks are identified with our criteria in the non-hatched regions of the panels of Fig. 5. At a given bulge scale-length, $\Delta v_{\rm DP}$ increases with inclination because of the broadening of the velocity distribution. At fixed inclination, $\Delta v_{\rm DP}$ increases with the concentration of the bulge (with decreasing bulge scale-length), with a steepness of the increase more pronounced for a small fibre. 
For the most mass concentrated galaxy models with a high rotation curve peak close to the centre of the galaxy, a double-peak is thus detected at small inclinations $40^{\circ}$ for all redshifts. At a given mass concentration (bulge scale- length), the threshold inclination for the double-peak detection generally increases with decreasing redshift (fibre size), with no detection for scale- lengths $>0.7$ kpc for the smallest redshift and for scale-lengths $>1.1$ kpc (resp. $>2.7$ kpc) for the intermediate (resp. highest) redshift. #### 3.1.3 Effect of gas-disc concentration on the emission-lines Figure 6: Emission-line profiles of gaseous-disc models with different scale- lengths, with a disc inclination of $80^{\circ}$. On the left columns, we show the total rotation curves (black) and gas disc contribution (blue) in solid lines and the reference $a_{\rm gas}=5$ kpc (middle panel) in dashed lines for the top and bottom plots. For a description of the other panels, see the caption of Fig. 4. The shape of the spectra and the double-peak detection depend on the gas density-profile, which we qualitatively show on Fig. 6, varying only the scale-length of the gas density profile. Because of the relative small mass of the gas component in this galaxy model with respect to the other components, changing the gas profile concentration alters very little the total rotation curve, as can be seen on the left column of the figure. For a less concentrated profile (a larger scale-length), the spectra indicate steeper horn features but also a higher intensity in the centre since at an inclination of $80^{\circ}$ more gas is probed close to a zero line-of-sight velocity. Using the three criteria of Sect. 2.3, a DP is identified for $z=0.17$ (10 kpc diameter fibre) for scale lengths of $5$ kpc (Sa fiducial model), while the gas of the profile with a scale length of $2$ kpc is too concentrated for a DP detection. At a scale length of $12$ kpc, we do not detect a DP as the concentration at 0 km s-1 is leading to a more single- Gaussian shape. With the smallest fibre size, we do not detect any DP signatures. However, we observe the largest $\Delta v$ value for the smallest scale-length of $2$ kpc with $\Delta v=117$ km s-1. For a scale length of $5$ kpc (resp. $12$ kpc), we find $\Delta v=106$ km s-1 (resp. $\Delta v=103$ km s-1). This is not a strong trend but it shows that for higher central gas concentrations we can see a larger contribution of the rotation in small fibres. ### 3.2 N-body simulations of isolated disc galaxies The kinematic signature of emission lines is a direct probe of the gas distribution inside the spectroscopic observation area. In reality, gas is found in clumps, discs, rings, spiral arms and bars. Such structures deviate significantly from a model of an axisymmetric disc with a simple density profile such as described in Sect. 3.1. In order to explore how DP signatures can be found in more realistic isolated galaxies, we will here analyse simulated isolated disc galaxies. We make use of the simulations database GalMer, which is described in detail in Chilingarian et al. (2010). This database is designed to systematically explore galaxy mergers with various initial orbital parameters, galaxy inclinations and galaxy types. To understand how galaxies evolve in isolation in comparison to the interactions, this database provides isolated galaxy simulations for each morphological type. 
The reading and analysis of the outputs of the simulations is based on the visualisation software GalaXimView111https://vm- weblerma.obspm.fr/~ahalle/galaximview/. #### 3.2.1 Simulation design We here explore the evolution of isolated Sa and Sb galaxies. In Sect. 4, we will further explore major-merger (giant + giant) and minor-merger (giant + dwarf) systems. The simulated isolated galaxies are giant galaxies and we thus refer to them as gSa and gSb. DP emission lines are mostly found in S0 and spiral galaxies of the type Sa and Sb. In the GalMer database, S0 galaxies are designed without a gaseous disc since this galaxy type is usually observed with an exhausted gas content (e.g. Somerville & Davé, 2015). The gSa and gSb galaxies considered here consist of rotating gas and stellar discs, a non- rotating stellar bulge and a non-rotating dark-matter halo. The initial conditions of simulations are modelled with the same density profiles as the axisymmetric models described in Sect. 3.1: disc components are described by a Miyamoto-Nagei density profile and the stellar bulge and dark-matter halo by a Plummer density profile. Velocities are set by the method of Hernquist (1993). The discs components have initial Toomre parameters of $Q=1.2$. The simulation code is described in detail in Di Matteo et al. (2007). It uses a Tree algorithm for the computation of the gravitational forces (Barnes & Hut, 1986) and smoothed particle hydrodynamics (Lucy, 1977; Gingold & Monaghan, 1982) for the gas with individual smoothing lengths. The gas is considered as isothermal with a temperature ${\rm T_{gas}=10^{4}K}$. To emulate star formation, hybrid particles, corresponding initially to pure gas particles with a stellar fraction of 0, are gradually changed into stellar particles following a star formation law described in Mihos & Hernquist (1994). Once the gas fraction drops below 5 %, a hybrid particle is converted into a stellar particle. During the star-formation process, the total mass of the hybrid particle is constant. There is no feedback from AGN, but there is stellar mass loss, and energy re-injected in the ISM by supernovae, cf Chilingarian et al. (2010). Time integration is performed with a leapfrog integrator with a time-step $\Delta t=5\times 10^{5}{\rm yr}$ and snapshots are output every $5\times 10^{7}{\rm yr}$. The simulations are carried out for a time-span of 3 or 3.5 Gyr. The initial parameters for gSa and gSb galaxies are given in the Appendix A, Table 2. Isolated galaxies are simulated with a total number of 480 000 particles and a softening length of $\epsilon=200$ pc. The same softening length is used for giant-dwarf interaction simulations while a softening length of $\epsilon=280$ pc is used for giant-giant interactions (See Sect. 4). #### 3.2.2 Characterisation of the structure of the galaxies In order to conduct a systematic analysis of simulated galaxies, we compute at each simulation step the following characteristic values: the position and the velocity of the centre of baryonic mass (COM), the half-mass radius $r_{1/2}$, and the spin vector of the stellar particles. We calculate the COM from the baryonic particles (gas + stars). Therefore, we compute a 3D histogram with a bin-width of 1 kpc and select the bin containing the highest mass. We then calculate the position and the velocity of the COM of the particles inside this bin. For each COM, we calculate the $r_{1/2}$, describing the radius containing half of the baryonic mass of a galaxy. 
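A schematic version of the centre-of-mass and half-mass-radius estimates described above is sketched below: the densest cell of a 3D histogram with a 1 kpc bin-width is located, the COM (and, analogously, its velocity) is computed from the particles in that cell, and $r_{1/2}$ is the radius around the COM enclosing half of the baryonic mass. This is an illustrative reimplementation, not the analysis code of the paper.

```python
import numpy as np

def densest_cell_com(pos, mass, cell=1.0):
    """Centre of baryonic mass estimated from the densest cell of a 3D
    histogram with bin-width `cell` (kpc). `pos` is an (N, 3) array of
    particle positions in kpc and `mass` the corresponding masses."""
    edges = [np.arange(pos[:, k].min(), pos[:, k].max() + cell, cell) for k in range(3)]
    hist, edges = np.histogramdd(pos, bins=edges, weights=mass)
    i, j, l = np.unravel_index(np.argmax(hist), hist.shape)
    lo = np.array([edges[0][i], edges[1][j], edges[2][l]])
    inside = np.all((pos >= lo) & (pos < lo + cell), axis=1)
    # The COM velocity would be obtained the same way from the particle velocities.
    return np.average(pos[inside], axis=0, weights=mass[inside])

def half_mass_radius(pos, mass, com):
    """Radius around `com` containing half of the baryonic mass."""
    r = np.linalg.norm(pos - com, axis=1)
    order = np.argsort(r)
    cum = np.cumsum(mass[order])
    return r[order][np.searchsorted(cum, 0.5 * mass.sum())]

# Usage (positions in kpc):
# com = densest_cell_com(positions, masses)
# r_half = half_mass_radius(positions, masses, com)
```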
The spin vector of each galaxy is estimated by calculating the angular-momentum vector of the stellar particles which are outside the $r_{1/2}$ but within a radius $<15$ kpc. In bulge-dominated galaxies with a large central velocity dispersion, the spin vector, computed with all particles, would not weigh sufficiently the rotation of the outer disc. Hence, a spin vector, calculated only with the outer particles, provides a better approximation of the disc orientation. As it will be discussed in Sect. 4, during a violent merger with complex geometry and kinematics, this vector does not have any meaningful direction and will only be considered as a point of reference. In the following, spectroscopic observations are computed from an observer perspective, orientated with a polar angle $\theta$ and an azimuthal angle $\phi$ defined with respect to the spin vector and to a reference vector in the plane orthogonal to it for $\phi$. When the spin vector truly defines a disc plane, $\theta=0^{\circ}$ (resp $\theta=90^{\circ}$) corresponds to a face-on (resp. edge-on) observation. The inclination angle $i$ is thus $i=\theta$ for $\theta\in[0,90^{\circ}]$ and $180^{\circ}-\theta$ for $\theta\in[90^{\circ},180^{\circ}]$. ### 3.3 Double-peak signatures from bars The initial conditions of the simulated galaxies are computed with the exact same models as discussed in Sect. 3.1. However, one important aspect is a velocity dispersion which is not included in the line-of-sight velocity distribution with the previous models. Comparisons between a simulated gSa galaxy and an axisymmetric model lead to the same DP detection dependencies. For low inclinations (nearly face-on), we find larger emission-line profiles than in the axisymmetric model, which is due to the contribution of the velocity dispersion. The additional velocity dispersion broadens the emission- line profile and we can therefore detect a DP signature at lower inclinations. As visualised in Fig. 5, we detect a DP signature for inclinations larger than $70^{\circ}$ using the axisymmetric model with a parametrisation of the fiducial gSa galaxy. For the initial conditions of a simulated gSa galaxy, we detect a DP signature for inclinations larger than $50^{\circ}$, due to the contribution of the velocity dispersion. Figure 7: Observation of an isolated barred galaxy. On the top panel, we show the gas distribution in the 3D space and define the definition of the observation angles $\phi$ and $\theta$. On the middle panel, two 2D projections are shown for an inclination of $\theta=40^{\circ}$. On the left (resp. right ) we show an azimuth of $\phi=0^{\circ}$ (resp. $\phi=90^{\circ}$) which corresponds to an observation parallel (resp. perpendicular) to the bar. With red circles, we mark a $3^{\prime\prime}$ spectral fibre observation situated at a redshift of $z=0.05$. On the bottom panels, we show the gas emission line line-of-sight distribution inside the fibre. We fit a double and single Gaussian function to the emission lines. The simulated galaxies undergo a rapid evolution in the first 0.5-1 Gyr. Gas condensates into thin and dense structures and clumps, spiral arms and a stellar bar are formed. These features however vanish after at least 1 Gyr. We observe a homogenisation of the disc with no arm structure while most of the gas has fallen into the centre. This high central gas concentration is then dominated by velocity dispersion and no DP emission-line structure can be observed any more. 
This likely unrealistic evolution stage is favoured by the low supernovae feedback and absence of AGN feedback in the simulations. From observations we know that about two thirds of disc galaxies are barred (e.g. Eskridge et al., 2000; Menéndez-Delmestre et al., 2007). However, this does not imply that bars have a long life time. In fact, bars can be weakened or destroyed (Bournaud et al., 2005a), but with a high gas fraction they can be re-formed (Bournaud & Combes, 2002). Relying on cosmological simulations, the bar fraction is expected to be constant at about 66% for massive spiral galaxies (${\rm M_{*}\geq 10^{10.6}M_{\odot}}$) over a redshift range of $z=0-1$ (Zhao et al., 2020). Gas clumps, spiral arm structures and turbulence in the simulations lead to some minor fluctuations of the DP detection. A stellar bar, however, is significantly changing the DP detection: we find strong $\Delta v$ values of more than 300 km s-1 when observing parallel to the bar at an inclination of $\theta=60^{\circ}$. Observations of a gSa galaxy with a characteristic bar structure is shown in Fig. 7, after an evolution of 250 Myr from the initial axisymmetric condition. We define the observation angles in the top and show the 2D-projection of the observed gas in the middle panels: on the left, the disc is seen parallel to the bar and on the right, perpendicular to the bar. On the bottom panels, we show the spectroscopic observation of the gas for the two cases. We find a strong DP feature in the observation taken parallel to the bar but no DP signature in the one observed perpendicular to the bar. This is due to the fact that when observing perpendicular to the bar, the majority of the gas is moving also perpendicular to the line-of sight. Hence, we do not probe a large velocity gradient. In comparison to that, when observing parallel to the bar, we measure gas moving alongside the line of sight due to its streaming motion along the bar. Figure 8: $\Delta v$ values measured at different observation angles with a double Gaussian fit at a redshift $z=0.05$. Each individual measurement covers a solid angle of 0.013 sr and the colour code indicates the measured $\Delta v$ value. We choose the Hammer-projection to represent the observation points on the surface. The longitudes represent the azimuth angle $\phi$ which is measuring the observation angle relative to the central bar of the galaxy. $\phi=0^{\circ}$ and $\phi=\pm 180^{\circ}$ correspond to an observation parallel to the bar and an azimuth angle $\phi=\pm 90^{\circ}$ to an observation perpendicular to the bar. The latitudes represent the inclination of the observer. At an inclination of $\theta=90^{\circ}$ the galaxy is observed edge-on whereas at $\theta=0^{\circ}$ and $\theta=180^{\circ}$ one sees the galaxy face-on. In order to compute from which observation angles one can find a DP signature, we systematically place the observer on a sphere around the galaxy with the COM as its centre. We choose a uniform sampling of the sphere so that each observation covers a solid angle of 0.013 sr. In Fig. 8, we show a scan of all observation angles observed at $z=0.05$ for the gSa galaxy exhibiting a bar which is visualised in Fig. 7. We indicate the $\Delta v$ computed from the double Gaussian fit with a colour code and mark the angular positions that do not exhibit a DP signature with white hatches. 
We show the full map (here and in other figures) but note that in the absence of any attenuation, the map contains redundant information: the value at $\theta$ and $\phi$ is the same as the value at $180^{\circ}-\theta$ and $\phi+180^{\circ}$ (modulo $360^{\circ}$). If DP signatures originated from uniform rotation, a DP would be observed at all azimuth angles with a strong inclination of $60^{\circ}<\theta<120^{\circ}$ as we found in Sect. 3.1. However, this is not the case: we see a strong DP signature when observing parallel to the bar ($\phi\sim 0^{\circ}$ and $\phi\sim\pm 180^{\circ}$) and single-peak signatures when observed perpendicular to it. Furthermore, we do not see the highest $\Delta v$ values when observing fully edge-on ($\theta=90^{\circ}$) but at a lower inclination of $\theta\sim 75^{\circ}$. This is due to the fact that when observing fully edge-on along the bar direction, the spectroscopic measurement probes as well gas, at the ends of the bar or elsewhere in the disc, moving perpendicular to the observer and contributing to the line-of- sight velocity distribution at $v=0\,{\rm km\,s^{-1}}$. This makes the two Gaussian functions of the double Gaussian fit shift closer together and the $\Delta v$ become smaller. In contrast to that, when observing at a smaller inclination, the observation fibre of 3 kpc in diameter (seen at a redshift of $z=0.05$) will mostly probe gas with a motion along the bar direction. This gas moves at the highest velocity parallel to the line-of-sight and only a small contribution of gas moving perpendicular is measured. This leads to a strong DP feature. Figure 9: $\Delta v$ values from a double Gaussian fit at different azimuth angles $\phi$. We define $\phi$ as the azimuth angle with respect to the bar as visualised in Fig. 7. We computed spectroscopic observations at a fixed inclination of $\theta=60^{\circ}$ and at a redshift of $z=0.05$ . We show all snapshots of gSa and gSb simulations, which indicate a bar. With a red line we show the value of three times the bin-width of the SDSS. A $\Delta v$ larger than this value is one criteria for a DP detection (see Sect. 2.3). This effect can be seen on further galaxy examples. In Fig. 9, we have included all snapshots of gSa and gSb simulations which have a bar. We have determined the $\Delta v$ value for a constant inclination of $\theta=60^{\circ}$ and for azimuth values between $\phi=0^{\circ}$ (parallel to the bar) and $\phi=90^{\circ}$ (perpendicular to the bar). The spectroscopic observations are evaluated within a spectroscopic fibre of a diameter of 3 kpc, corresponding to a SDSS fibre at redshift $z=0.05$. For observations with an angle $\phi$ up to $\sim 40^{\circ}$, we find $\Delta v$ values exceeding three times the SDSS bin-width ($3\times\delta v_{\rm sdss}$), the definition threshold for a DP signature (see Sect. 2.3). We find higher $\Delta v$ values for snapshots of the gSa galaxy in comparison to the gSb galaxy, because of the more massive stellar bulge of the gSa galaxy, resulting in a deeper gravitational potential and thus in faster rotation in the centre. For all observations, the $\Delta v$ value drops below the threshold of $3\times\delta v_{\rm sdss}$ when observing perpendicular to the bar. This means that gas motion created by bars can indeed be at the origin of a strong DP feature. 
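The all-sky scans of observation angles shown in Fig. 8 require an approximately uniform sampling of viewing directions, each covering a solid angle of about 0.013 sr, i.e. roughly 970 directions in total. A simple way to generate such a sampling is a Fibonacci lattice, sketched below; the actual sampling scheme used for the figures is not specified here, so this particular construction is an assumption.

```python
import numpy as np

def uniform_directions(n=966):
    """Approximately uniform sampling of viewing directions on the sphere
    (Fibonacci lattice). Returns the polar angle theta and azimuth phi in
    degrees, with theta measured from the galaxy spin axis as in Sect. 3.2.2."""
    k = np.arange(n)
    golden = (1 + 5**0.5) / 2
    cos_theta = 1 - 2 * (k + 0.5) / n            # uniform in cos(theta)
    theta = np.degrees(np.arccos(cos_theta))
    phi = np.degrees((2 * np.pi * k / golden) % (2 * np.pi)) - 180.0
    return theta, phi

theta, phi = uniform_directions()
print(f"{len(theta)} directions, {4 * np.pi / len(theta):.3f} sr each")  # ~0.013 sr
# Each (theta, phi) pair defines one synthetic-fibre observation of a snapshot;
# the resulting Delta v values can then be mapped in a Hammer projection as in Fig. 8.
```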
Furthermore, as a bar seen parallel to the line-of-sight is difficult to identify as such in galaxy images, the fraction of DP galaxies in observational studies may show a deficit of bars while bars are in fact the origin of a part of the double peaks. We can compute the DP fraction ${\rm f_{DP}}$ as the fraction of directions from which a DP signature is observed. When observing at a redshift $z=0.05$, the mean DP fraction is ${\rm f_{DP}}=0.24$. This fraction drops to a mean value of ${\rm f_{DP}}=0.2$ at redshift $z=0.1$. As the fibre covers a larger part of the galaxy, more gas at lower velocities is included in the line-of- sight measurements, diluting the DP signature. However, at $z=0.17$ we detect a mean DP fraction ${\rm f_{DP}}=0.24$. This DP feature is partly originating from the bar and partly from a rotating disc. The latter effect becomes significant only at higher redshift as a larger part of the rotating disc is included in the line-of-sight velocity measurement. ## 4 Mergers and post-mergers In the previous section, we showed that a DP signature can be the result of a rotating disc or a bar. However, in the course of a galaxy merger, two gas components can fall into the gravitational potential well of the interacting system with different line-of-sight velocities. This, in turn, can be observed as DP emission lines in a central spectroscopic observation. Late stages of post-coalescence major mergers are known to mostly form elliptical galaxies (e.g. Steinmetz & Navarro, 2002). However, the expelled gas during a merger can be re-accreted and form a disc (e.g. Barnes, 2002; Robertson et al., 2006; Lotz et al., 2008; Puech et al., 2009). Merger events can cause a contraction of a gas disc which then forms a central rotating star-formation site (Dekel & Burkert, 2014). Such a nuclear disc can have a DP emission-line signature. Since a single minor merger is not expected to cause radical morphological transformations, we examine, besides major mergers, also the possibility of how a minor merger can funnel gas into the central region and create a DP emission-line signature. In order to explore a DP signature which is related to galaxy mergers, we here explore major mergers with a mass-ratio of 1:1 (giant + giant) and minor mergers with a mass ratio of 1:10 (giant + dwarf). As discussed in M20, DP signatures are mostly associated with spiral galaxies of type Sa and Sb and S0 galaxies. At high redshift, it is difficult to distinguish an elliptical galaxy of e.g. Hubble type E6 from a S0 or Sa galaxy. This motivates merger scenarii leading to earlier Hubble types. We thus select from the GalMer database the major-merger simulations gSa + gSa and gSb + gSb. For minor- merger simulations, we explore gSa + dSb and gSa + dSd. We evaluate possible DP signatures of the selected merger simulations from all directions, in the same way as in Sect. 3.3, for all three representative SDSS spectroscopic fibre diameters at redshift $z=0.05$, $z=0.1$, and $z=0.17$ (see Sect. 2.2). We consider major-merger simulations between two galaxies of the same type, leading to an equal contribution of gas in the resulting system. Even though we selected dwarf galaxies for the minor-merger simulations with the highest gas fraction compared to the giant gSa galaxy, the resulting gas mass ratio is still of 1:10 to 1:5 (see Table 2). In order to identify two Gaussian components in an emission line as a DP signature, an amplitude ratio of at least three is necessary (See Sect. 2.3). 
However, dwarf galaxies have a significant lower metallicity than giant galaxies (Tremonti et al., 2004), which in fact leads to a stronger emission-line signal (e.g. Wolfire et al., 2010; Bolatto et al., 2013; Kewley et al., 2019). Since we aim to clarify quantitatively how a minor merger can generate a DP signature through internal kinematic processes, we multiply the signal from the giant galaxy by a factor of 0.5. The choice of this factor is purely empirical, since it results in a DP detection with two Gaussian components inside the line-of-sight. However, if one aims to obtain a more complete picture of the contributions of different gas populations in galaxy mergers, an accurate calibration of a radiative transfer would be necessary. ### 4.1 Merger simulation parameters Figure 10: Top: Initial configuration of a direct-direct merger between two giant galaxies. In the retrograde-retrograde configuration, the orbital spin is flipped. The inclination $i_{2}$ is set to 0, 45, 75 or $90^{\circ}$. Galaxy 1 has a spin of coordinates $(0,0,1)$, and galaxy 2, $(0,\sin i_{2},\cos i_{2})$. Bottom: Initial configuration of a direct-retrograde merger between a giant and a dwarf. In the retrograde-direct configuration, the orbital spin is flipped. The giant galaxy has an inclination $i_{1}=33^{\circ}$ and the dwarf galaxy, $i_{2}=130^{\circ}$. Galaxy 1 has a spin of coordinates $(0,\sin i_{1},\cos i_{1})$, and galaxy 2, $(0,\sin i_{2},\cos i_{2})$. In both panels, dashed lines indicate that patterns are below the orbital plane or behind the discs. The GalMer database provides major and minor galaxy merger simulations. Major mergers are simulated with a total particle number of ${\rm N_{tot}=240\,000}$ (120 000 + 120 000) particles, while minor mergers have ${\rm N_{tot}=528\,000}$ (480 000 + 48 000), i.e. 4 times more particles for a giant galaxy than in a major merger, in order to resolve the dwarf galaxy (Chilingarian et al., 2010). Thus, the softening length for major mergers is $\epsilon=280$ pc, and $\epsilon=200$ pc for minor mergers. The initial conditions for each galaxy are set in the same way as for isolated galaxies described in Sect.3.2.1 with initial parameters for the different galaxy types summarised in Table 2. The two galaxies are initially set at a distance of 100 kpc with an orbit characterised by the orbital angular momentum $\mathbf{L}$. Giant-giant mergers simulations are carried out either with a direct-direct configuration in which the spins of both galaxies have a positive projection on the orbital spin (unit vector aligned with the orbital angular momentum), or with a retrograde-retrograde configuration, where the orbital spin is flipped. For the giant-dwarf mergers, simulations are carried out either with a direct-retrograde configuration as shown in Fig. 10, or with a retrograde- direct configuration. In the giant-giant mergers, the disc plane of one galaxy is always initially in the orbital plane while the other one has an inclination $i_{2}$ with respect to the plane (see Fig. 10). The giant-dwarf configuration is more generic: both discs are inclined with respect to the orbital plane (see Fig. 10) A detailed description of the orbital parameters for the GalMer database is given in Chilingarian et al. (2010). In the Tables 3, we summarise the orbital parameters used in this work. We are interested in DP emission-line signatures during mergers and after coalescence. 
We, therefore, sort out all fly-by simulations, in which the two galaxies only move away from each other after one single encounter, and retrograde minor mergers (with a retrograde configuration for the giant galaxy), whose final coalescence does not happen during the simulated period. Galaxy mergers with a retrograde orbit last longer in comparison to direct ones (Villalobos et al., 2012; Solanes et al., 2018). As mentioned in Sect. 3.2.1, the simulation design of the GalMer database does not include AGN feedback. This leads to high concentration in the very centre at the end, where almost no rotation is visible in the central gas. Depending on the merger process, the central in-fall of gas can happen with no gas being expelled. In such a situation, we do not see any DP signature and thus such scenarii are not interesting for this work and are sorted out. This happens more frequently in gSa + gSa mergers which is most probably related to the deeper gravitational potential from the final stellar bulge in comparison to gSb + gSb mergers. These selection criteria lead us to a final simulation sample of 16 major-merger simulations (5 gSa + gSa and 11 gSb + gSb) and 11 minor-merger simulations (6 gSa + dSd and 5 gSa + dSb). ### 4.2 Characterisation of the merger process and DP fraction measurement Figure 11: Visualisation of a major merger process. We show characteristic parameters of the galaxy merger simulation of gSb + gSb with an orbit id 02dir and a merger inclination of $0^{\circ}$ (See Sect. 4.1 and Table 3). On the top panel, we show with a black line the distance between the COM of the two galaxies which corresponds to the distance between the two galaxies. The red line represents the half mass radius r1/2 of the first galaxy. In order to illustrate the merger process, we show snapshots of only the gas at different merger stages. Black arrows indicate the exact stage of the merger process. On the second panel, we show the velocity difference between the two COM. On the third panel, we show the DP fraction which corresponds to the fraction of observation angles from which one detects a DP. On the bottom panels, we show zoomed-in observations of the central kiloparsecs of the first galaxy, we display, on the top panels, the 2D projection of the gas surface density $\Sigma$ observed from a face-on view and the measured velocity dispersion $\sigma$ on the panels beneath. To illustrate the gas dynamics, we show arrows representing the 2D projected in-plane velocity of the particles. To describe the orbit of a galaxy merger, we compute the COM and r1/2 of each individual galaxy as described in Sect. 3.2.2. This allows us to compute the distance between the two galaxies at each simulation step, shown as the black line on the top panel of Fig. 11. The r1/2 value of the first galaxy is shown with a red line. In order to visualise the morphology of the gas during the merger, we show snapshots of the gas distribution of some simulation steps and use arrows to mark their position on the evolution of the simulation. In the second panel, we show the velocity difference between the two COMs. With these parameters, we can characterise the merger simulations: we clearly see the first peri-passage after about 250 Myr. This is the point where the velocity difference between the two galaxies is the highest. The two galaxies then recede from each other until the point at about 500 Myr where we see a maximum of their distance and a minimum of their velocity difference. 
The two galaxies then fall back onto each other and finally merge. We estimate the coalescence time as the time after which the distance between the two galaxies no longer exceeds the half-mass radius $r_{1/2}$ of the first galaxy. The velocity difference then also drops to 0. In order to understand at what merger stage a DP emission-line signature can be observed, we scan each simulation step from all directions as described in Sect. 3.3, with a uniform sampling of the sphere. For the major mergers, the origin of this scan is set to the COM of the galaxy whose disc is initially in the orbital plane. For the minor mergers, the origin is set to the COM of the giant galaxy. We also orientate the viewing angle with the spin of these reference galaxies. This provides us with a DP fraction at each simulation step, which we show in the third panel from the top in Figs. 11 and 19. For the gSb galaxies in major-merger simulations, we do not observe any DP emission-line signature in the initial conditions. However, we find for a redshift of $z=0.17$ a DP fraction of about 0.6 for gSa galaxies, which is in good agreement with the DP signatures found for an Sa galaxy with the same parameters using an axisymmetric model in Sect. 3.1, where we find a DP signature for inclinations larger than $\theta=55^{\circ}$, which covers about 60 % of a sphere. While the galaxies in major-merger simulations start with the initial parameters described in Sect. 3.1, minor-merger simulations start with already evolved galaxies. Therefore, the initial DP fraction in gSa galaxies is quite different in the initial snapshot of minor-merger simulations. During the merger process, we always observe a peak of DP fraction during 50-100 Myr after a peri-passage. This phenomenon of two galaxies observed in the act of merging was analysed by Mazzilli Ciraulo et al. (2021) and will be discussed systematically in Halle et al. (in prep.). As we know from observations, DP emission-line signatures are not significantly more common in visually identified galaxy merger systems (Maschmann et al., 2020). Therefore, we here focus on DP signatures which appear in the post-coalescence phase of major and minor mergers.

### 4.3 Double-peak signatures in major mergers

Here we discuss systematically at what merger stage we can observe a DP signature. We furthermore discuss the significance of the observation angle and the morphology of the resulting galaxy.

#### 4.3.1 Central discs in post major mergers

Major mergers are known to show strong morphological perturbations during the merger. In Lotz et al. (2008), the timescale during which a merger is observable from the photometry of equal-mass galaxy mergers was estimated to be of the order of $1.1-1.9$ Gyr. This timescale can vary due to different orbital parameters which determine when the final coalescence happens. Looking at the exemplary gSb + gSb major merger shown in Fig. 11 and the gSa + gSa major merger in Fig. 19, there is no DP signature directly after the final coalescence. However, at about 1 Gyr after the final coalescence, an increasing DP fraction is detected. On the bottom panels of Figs. 11 and 19, we display 10 snapshots of the central parts of the first galaxy at different simulation steps, marked with black dots in the galaxy separation diagram. We show for each selected time the gas surface brightness and the velocity dispersion. The line of sight is parallel to the spin vector so that discs are seen face-on.
Gas motion in the plane is illustrated with orange velocity arrows. During the simulation of the gSb + gSb merger (Fig. 11), we observe a peak in DP fraction in an early phase at 400 Myr, shortly after the first encounter. A second peak is observed at 800 Myr, at the moment of post coalescence. In the snapshot of the central region of the first galaxy at 400 Myr, we identify a bar structure as the origin of the increase in the DP fraction. As discussed in Sect. 3.3, a central bar structure in the gas distribution can create strong DP signatures, especially for small spectroscopic fibre diameters. For the second peak in DP fraction at 800 Myr, we can identify the two galaxies at a separation less than 4 kpc and with a velocity difference of 300 km s-1, creating a DP signature as two gas populations with high $\Delta v$ are captured inside the spectroscopic fibres. The two galaxies are no longer moving away from each other and this moment marks the final coalescence. Shortly after this final coalescence, the detection of DP stops abruptly. We observe in these stages a high concentration of gas in the very centre with a strong velocity dispersion which dominates in the observed region. In fact, the strong velocity dispersion is not sufficient to produce a broad emission- line profile which can be identified as a DP. About a few 100 Myr after the final coalescence, a gaseous central disc with a radius smaller than 5 kpc starts to form. In contrast to the strong perturbations during the coalescence the gas starts to settle in the disc and the velocity dispersion decreases. The in-falling gas originates from parts of the tidal tails which gradually fall back onto the galaxy. As the stellar bulge of the post-merger galaxy gradually grows, the rotation curve becomes increasingly steep in the centre. As we know from Sect. 3.1, a steep rotation curve is needed in order to detect a DP signature at lower redshift because e.g. at $z=0.05$, only the central 3 kpc of the rotation curve is measured. This gradual steepening of the rotation curve explains why we start detecting a DP signature later during post- coalescence in low redshift observations than high redshift ones. The detected DP signature eventually disappears at about 2500 Myr. At this point, the gas contracts drastically to the very centre and the central part of the disc is dominated by random motion which can be seen as the velocity dispersion increases. As mentioned in Sect. 3.2.1, this is due to low feedback efficiency. It is therefore difficult to say whether such a rapid collapse is realistic or whether a central disc can fall so quickly into the centre. Therefore, in the following, we will only consider simulation snapshots up to the moment when we also see a gas distribution that is not contracted below he resolution. Figure 12: Evolution of the DP fraction of major-merger simulations after the final coalescence observed at $z=0.05$. We identify the snapshot in each simulation where the distance between the two galaxies remains below the half mass radius of the first galaxy for the rest of the simulation. We use this snapshot as reference point and show the time starting 250 Myr before this snapshot on the $x$-axis. On the $y$-axis, we show the DP fraction ${\rm f_{DP}}$. On the top (resp. bottom) panel, we show gSb + gSb (resp. gSa + gSa) simulations. We mark the time before the final coalescence in grey. We mark simulations with an inclination of $0^{\circ}$, $45^{\circ}$ and $90^{\circ}$ of the second disc (see Fig. 
10) with blue, orange and pink lines, respectively. Mergers with a direct (resp. retrograde) orbit are presented by solid (resp. dashed) lines.

We computed the DP fraction for all selected major-merger simulations and observe a recurring pattern: strong DP detection is observed at close interactions, and a gradually increasing DP fraction emerges between 500 and 1000 Myr after the final coalescence. In Fig. 12, we show the DP fraction observed at $z=0.05$ after the final coalescence for all selected major-merger simulations. We see that, for gSb + gSb mergers, a DP can be detected between 500 and 1000 Myr after the final coalescence, and in some cases we see a DP continuously from the final coalescence onwards. We identify in all these cases a gaseous disc which is progressively formed from gas of the tidal tails falling into the central kiloparsecs of the galaxy. In the post-coalescence phase of the gSa + gSa simulations, we also observe an increase in DP fraction, but starting only 1000 Myr after the final coalescence. This delay is mostly due to the fact that, in comparison to gSb + gSb mergers, gSa + gSa mergers stabilise the gas due to a deeper gravitational potential, and therefore the gas takes longer to migrate towards the central region. As can be seen in the appendix in Fig. 19, the central discs found in post-merger gSa + gSa galaxies are significantly smaller, with a radius below 3 kpc, in comparison to the discs observed in gSb + gSb simulations. However, the simulation shown in Fig. 19 stands out among the gSa + gSa simulations as it shows the smallest disc we observed in any post-coalescence merger. This strong concentration leads to a high DP fraction of about 0.8 at the end of the simulation. The post-coalescence behaviour of the DP fraction is only shown for observations at $z=0.05$ in Fig. 12. For the observations at $z=0.1$ and $z=0.17$, we find a similar evolution of the DP fractions, but with higher values, as can be seen for the example shown in Fig. 11. In Fig. 19, however, we see in the late development of the central disc a larger DP fraction for the observation at $z=0.05$, which only covers the central 3 kpc, than for the other redshifts. This is due to the fact that observations at higher redshift include gas located outside the central disc and with more random motion, diluting the DP signal. Considering the different merger orbits we discussed here, we do not find any dependence of the resulting DP feature on the orbital geometry.

#### 4.3.2 Angle-dependent double-peak emission lines in the post-coalescence phase

Figure 13: Direction maps of DP detection probability ${\rm p_{DP}}$ in post-coalescence major-merger simulations. We present maps of ${\rm p_{DP}}$ for the three different evaluated redshifts ($z=0.05,0.1$ and 0.17) and for the two discussed merger simulations gSb + gSb and gSa + gSa separately. We calculate the probability for each scanned direction from all simulation steps 1000 Myr after the final coalescence that have a DP fraction $>0.1$.

To visualise the observation angles from where we mostly observe a DP signature in the post-coalescence phase, we calculate an observation-angle-dependent DP probability ${\rm p_{DP}}$. We select all simulation snapshots 1000 Myr after the final coalescence which show a DP fraction of at least 0.1. We then calculate a DP probability for each viewing angle as the ratio between the number of DP detections and the number of included snapshots (a minimal sketch of this bookkeeping is given below).
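The bookkeeping behind ${\rm f_{DP}}$ and ${\rm p_{DP}}$ can be summarised in a short sketch. This is a minimal illustration rather than the actual pipeline: the quasi-uniform direction grid, the boolean detection array `dp_detected[snapshot, direction]`, the snapshot times and the estimated coalescence time are all assumed inputs, and the function names are ours.

```python
import numpy as np

def fibonacci_sphere(n_dir):
    """Quasi-uniformly distributed viewing directions on the unit sphere."""
    i = np.arange(n_dir)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * i            # golden-angle spacing
    z = 1.0 - 2.0 * (i + 0.5) / n_dir
    r = np.sqrt(1.0 - z ** 2)
    return np.column_stack((r * np.cos(phi), r * np.sin(phi), z))

def dp_probability_map(dp_detected, t_snap, t_coal,
                       t_after=1000.0, min_fraction=0.1):
    """p_DP per viewing direction.

    dp_detected : bool array of shape (n_snapshots, n_directions),
                  True where a DP is detected for that snapshot/direction
    t_snap      : snapshot times [Myr]
    t_coal      : estimated final-coalescence time [Myr]
    Only snapshots at least t_after Myr after coalescence and with an
    overall DP fraction above min_fraction are included.
    """
    dp_detected = np.asarray(dp_detected, dtype=bool)
    t_snap = np.asarray(t_snap, dtype=float)
    f_dp = dp_detected.mean(axis=1)                    # DP fraction per snapshot
    keep = (t_snap >= t_coal + t_after) & (f_dp > min_fraction)
    if not np.any(keep):
        return np.zeros(dp_detected.shape[1])
    # ratio of DP detections to included snapshots, for each direction
    return dp_detected[keep].mean(axis=0)
```

A map such as Fig. 13 is then simply ${\rm p_{DP}}$ plotted over the directions returned by `fibonacci_sphere`; the text only specifies a uniform sampling of the sphere, so this particular sampling scheme is one reasonable choice rather than the one actually used.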
We do this separately for gSb + gSb and gSa + gSa simulations and further divide them into the three observed redshifts $z=0.05,0.1$ and 0.17. This provides maps, presented in Fig. 13, indicating the most favourable observation direction for DP signatures or rule out specific observation angles. We do not find any DP detections when observing face-on which is expected, as we observe rotating discs in all post-coalescence phases. However, for gSa + gSa simulations we find in some cases a DP signature up to an angle of $20^{\circ}$. For the observations at low redshift ($z=0.05$), we see for gSb + gSb galaxies that edge-on observations are less favourable to detect a DP than observations at an inclination of about $\theta\sim 60^{\circ}$. This is due to the same reason as we discussed for DP observations in galaxies with bar signatures in Sect. 3.3: when seen perfectly edge-on, more gas moves perpendicular to the observer. Since we observe only a small part of the velocity gradient in the central 3 kpc at a redshift of $z=0.05$, this gas moving at a null projected velocity dominates the emission-line profile, and the DP signature produced by the rotation is not detectable anymore. However, at a smaller inclination, significantly less gas moving perpendicular to the observer contributes to the observed spectrum and the rotation footprint dominates. This effect gets weaker at a redshift $z=0.1$ and disappears at $z=0.17$ because the spectroscopic fibre probes a larger part of the rotation curve, and a broader emission line profile is observed. For gSa + gSa merger simulations, we find an even stronger direction dependency. In fact, we see for all three different redshifts a strong DP fraction at inclinations of $30^{\circ}<\theta<60^{\circ}$. This is due to the fact that the central disc is more concentrated in comparison to what we see in gSb + gSb simulations. In such a case, a DP signature gets more diluted when viewed edge-on due to gas moving perpendicular to the line-of-sight. At redshift $z=0.05$ and $z=0.1$, it is furthermore very unlikely to observe a DP signature from an edge-on perspective. Only for a redshift of $z=0.17$, we start to detect DP signatures from the edge-on view. Since these observations cover a larger surface, gas that is just about to fall back to the central regions is included in the line-of-sight velocity distribution and broadens the emission line. #### 4.3.3 The morphology of post-coalescence major mergers Figure 14: Mock $rgb$ snapshots created with $g^{\prime}$, $r^{\prime}$ and $i^{\prime}$ bands of gSb + gSb merger simulations. The images were produced using the radiative transfer software PEGASE-HR (Le Borgne et al., 2004). We precise the orbital parameters in the title and show below each simulation from the face-on ($40\times 40\,{kpc}$, $200\times 200$ pixels) and edge-on ($20\times 40\,{kpc}$, $100\times 200$ pixels) perspective at 1000 Myr and 1500 Myr after the final coalescence. Figure 15: Same as Fig. 14, but for gSa + gSa simulations. One of the central results of M20 is that DP emission-line signatures are more likely found in S0 galaxies and in bulge-dominated disc galaxies. Furthermore, no higher merger rate was found in comparison with single-peak emission-line galaxies at the same redshift and with the same stellar mass distribution. 
In order to discuss how relevant major mergers are for the discussion on the origin of DP signatures, two aspects are of particular interest when looking at the morphology: (1) do the post-coalescence mergers still show disc components and (2) can tidal features and merger remnants still be identified with photometric observations? In order to address these questions, we computed mock $rgb$-images created with the $g^{\prime}$, $r^{\prime}$ and $i^{\prime}$-band filters of the galaxies 1000 Myr and 1500 Myr after the final coalescence. The broadband colours are computed from stellar population PEGASE-HR models (Le Borgne et al., 2004), which are implemented in the GalMer database access (http://galmer.obspm.fr/, Chilingarian et al., 2010). In order to estimate the intensity in each band, light rays are traced along the line-of-sight and attenuation through dust is included. The dust was modelled as explained in Chilingarian et al. (2010). In Fig. 14 (resp. 15), we show the $rgb$-images for the face-on and edge-on perspective for the gSb + gSb (resp. gSa + gSa) merger simulations. In only some snapshots, we are able to identify small tidal features or a misaligned dust lane that can indicate a recent merger. For the majority of snapshots, we observe a smooth morphology. In the two cases of gSb + gSb galaxies with a collision angle of $0^{\circ}$, we even observe a prominent disc. However, as discussed in Chilingarian et al. (2010), these kinds of orbits are unlikely to happen. For all other simulations, we observe an elliptical galaxy which in some cases still has a disc or has a high ellipticity and is of Hubble type E6 or can be identified as S0, as discussed in Eliche-Moral et al. (2018).

### 4.4 Double-peak signatures in minor mergers

Since minor mergers have been discussed as being responsible for a large fraction of observed DP emission-line galaxies in the literature (e.g. M20), we here discuss how such a kinematic signature can originate from a minor merger event. We will explore the merger orbits in the same manner as for the major mergers (see Sect. 4.3) and discuss their morphology.

#### 4.4.1 Two gas populations detected in one spectrum

Figure 16: Visualisation of a minor merger between a gSa and a dSd, simulated with the orbit-id 3 and a direct collision. We show the top panels and two 2D-projected gas maps in the same manner as described in Fig. 11. We show two specific snapshots: one close encounter and the moment of final coalescence. The two 2D-projected snapshots are indicated with numbers to better assign them to the orbit, shown on the top. On the two bottom panels, we show direction maps and indicate, with colour maps, the $\Delta v$ measured with the double-Gaussian fit to each observed spectrum.

In order to explore how a minor merger can produce a central DP signature, we compute the direction-dependent DP fraction of all minor-merger simulations selected in Sect. 4.1. In Fig. 16, we visualise the merger process of a direct (for the giant galaxy) merger encounter between a gSa and a dSd with the orbit-id 3. In comparison to the orbits observed for major-merger simulations (see Sect. 4.3), we observe longer merger timescales for minor mergers until the final coalescence. In fact, retrograde (for the giant galaxy) orbits take, for minor-merger simulations, longer than the simulated time span to reach coalescence. Since we are interested in the post-coalescence behaviour of galaxies, we selected 6 gSa + dSd and 5 gSa + dSb simulations with direct orbits.
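The $\Delta v$ values mapped in Fig. 16, and the DP classification used throughout, rely on a double-Gaussian fit to the line-of-sight velocity profile of the fibre spectrum. The snippet below is a minimal, self-contained sketch of such a fit; the function names and initial guesses are ours, and the full selection criteria of Sect. 3.3 (amplitude ratios, peak separation versus line width, signal-to-noise) are not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

def double_gaussian(v, a1, mu1, s1, a2, mu2, s2):
    """Sum of two Gaussian components in velocity space."""
    return (a1 * np.exp(-0.5 * ((v - mu1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((v - mu2) / s2) ** 2))

def fit_delta_v(v, flux):
    """Fit a double Gaussian to an emission-line profile and return |mu2 - mu1|.

    v    : velocity grid [km/s]
    flux : line profile sampled on v
    """
    v = np.asarray(v, dtype=float)
    flux = np.asarray(flux, dtype=float)
    a0 = flux.max()
    # crude initial guess: two components on either side of the flux-weighted mean
    vbar = np.average(v, weights=np.clip(flux, 1e-12, None))
    p0 = [a0, vbar - 100.0, 80.0, a0, vbar + 100.0, 80.0]
    popt, _ = curve_fit(double_gaussian, v, flux, p0=p0, maxfev=10000)
    return abs(popt[4] - popt[1])
```

In practice one would also compare this fit against a single-Gaussian fit and apply the significance criteria of Sect. 3.3 before calling a profile a DP.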
During the merger process, we can clearly identify the two nuclei of the giant and the dwarf galaxies. During close encounters of the two nuclei, we can observe a DP signature. However, only at the final coalescence, when the nucleus of the dwarf galaxy migrates to within the half-mass radius of the giant galaxy, do we clearly see a DP signature with observations of the very centre. In this simulation step, the two nuclei are inside the spectroscopic fibre measurements of the $z=0.05$ observation. In Fig. 16, we present the 2D projection of two close encounters and the direction maps indicating from which direction one can observe the highest $\Delta v$ with a double-Gaussian fit to the line-of-sight velocity distribution. In the first encounter at 1400 Myr, the two nuclei are separated by a distance of less than 5 kpc. We observe a DP signature in more than 50 % of the directions at a redshift of $z=0.17$. This is due to the fact that the $3^{\prime\prime}$ spectroscopic fibre covers the central region of 10 kpc and therefore covers the two nuclei. This is not the case for observations at a redshift of $z=0.05$ and $z=0.1$, and the DP fraction for these observations is significantly smaller. In fact, for smaller redshifts, one can only detect a DP when observing from an angle where both nuclei are covered by the fibre. This is shown in the bottom-left panels of Fig. 16, where we only detect a signal of the dwarf galaxy for a small set of observation angles. On the bottom-right panels of Fig. 16, we show the measured $\Delta v$ for the snapshot where the two nuclei are separated by about 1.5 kpc before finally merging into one nucleus. A spectroscopic observation at a redshift of $z=0.05$ covers both nuclei, and a large $\Delta v$ value of up to 400 km s-1 can be observed. For this specific case we also observe a DP for observations nearly face-on, with an inclination of $\theta\sim 10^{\circ}$.

Figure 17: Double-peak fraction observed at different redshifts after the final coalescence of minor-merger simulations. On the $x$-axis, we show the post-coalescence time ${\rm t_{pc}}$, starting at the moment of final coalescence. The moment of final coalescence is marked by a black line and the values of the DP fraction are indicated on the $y$-axis. The line colour represents the orbit-id specified in Table 3.

Taking all minor-merger observations into account, we can see a clear pattern: at close encounters, we find higher DP fractions. However, for the redshift of $z=0.05$, the value is largest in the closest configuration directly before the two nuclei finally merge. After the final coalescence, no DP can be detected and no rotating disc as seen in major mergers is formed. In Fig. 17, we show the DP fraction of all minor-merger simulations at the final coalescence observed at different redshifts. As described in Sect. 4.2, the final coalescence is defined as the moment when the COMs of the two galaxies approach each other to within the half-mass radius of the giant galaxy without subsequently moving away from each other. For the redshift $z=0.17$, we observe DP signatures between 50 and 350 Myr after the final coalescence. This is in all cases the moment when the two nuclei are closer than the spectroscopic fibre size. Since the fibre covers a larger physical region at the redshifts $z=0.1$ and $z=0.17$, we see in these observations a DP signature earlier in the final coalescence. However, the moment of the last detected signature does not depend on the redshift. This is due to the fact that the two nuclei finally merge.
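The final-coalescence criterion used here and in Sect. 4.2 — the first time after which the COM separation never again exceeds the half-mass radius of the (giant) reference galaxy — reduces to a few lines of code. This is a sketch on assumed inputs (arrays of snapshot times, COM separations and half-mass radii); it is not taken from the GalMer tools.

```python
import numpy as np

def coalescence_time(t_snap, separation, r_half):
    """First snapshot time after which the COM separation stays below r_1/2.

    t_snap     : snapshot times [Myr]
    separation : distance between the two centres of mass at each snapshot
    r_half     : half-mass radius of the reference (giant) galaxy at each snapshot
    Returns np.nan if the two galaxies do not remain merged within the
    simulated time span.
    """
    below = np.asarray(separation) < np.asarray(r_half)
    for i in range(len(t_snap)):
        if below[i:].all():        # separation never exceeds r_1/2 again
            return t_snap[i]
    return np.nan
```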
Looking at the directions from which a DP is detected at the time of the last detection, we do not see any preferred observation direction, and in some cases a DP can be seen from a face-on view.

#### 4.4.2 The morphology of minor mergers at the final coalescence

Figure 18: Mock $rgb$-images of gSa + dSd and gSa + dSb merger simulations, computed in the same manner as in Fig. 14.

At the moment of final coalescence, one can find high DP fractions in major mergers similar to those in minor mergers. However, there is a big difference: major mergers show very strong perturbations at this moment, which are easy to identify even at higher redshift. Minor mergers, on the other hand, are not known to have such a strong impact on the morphology. In Fig. 18, we present the morphology of all the minor-merger simulations after the final coalescence, at the time when the largest DP fraction at $z=0.05$ is measured. In only one case can the two nuclei be clearly identified in the face-on view, although this would only be visible with high-resolution images. In all post-coalescence minor mergers, we see a nearly undisturbed disc structure, and it would be difficult to distinguish such a galaxy from an isolated galaxy.

## 5 Discussion

### 5.1 Double-peak signatures from rotating discs: isolated galaxy vs. post-merger

A spectroscopic observation of an entire rotating gaseous disc is known to show a double-horn profile when observed inclined (e.g. Westmeier et al., 2014). However, this is well known for the HI line, measured for an entire galaxy. Ionised gas kinematics in the centre of a galaxy, on the other hand, traces only a small inner part of the rotation curve. Massive bulges in disc galaxies are known to create a strong velocity gradient in the central region (Sofue & Rubin, 2001). Using axisymmetric models of discs with pure rotation, we find that the DP signature primarily depends on the angle of observation: the higher the inclination, the larger the separation of a double-Gaussian fit. Furthermore, we find the strongest DP signatures for high bulge concentrations when only observing the central 3 kpc. This analytical view points out one aspect quite clearly: DP emission lines have a strong connection to the bulges of galaxies. Accordingly, massive or highly concentrated bulges in galaxies can create a sufficiently deep gravitational potential to cause high velocity gradients at the centre. In Sect. 4.3, we find that a centralised disc can be formed in a late, post-coalescence stage of a major galaxy merger. Major mergers generally destroy the disc morphology of the two progenitors and result in an elliptical galaxy, as demonstrated in Farouki & Shapiro (1982) and Negroponte & White (1983). These findings were further confirmed for dry major mergers (Peschken et al., 2020). For gas-rich major mergers, however, a disc can be formed in the post-merger phase from a gaseous disc that subsequently re-settles (Governato et al., 2009; Hopkins et al., 2009). In violent major mergers which undergo a phase of ultra-luminous infra-red emission, a centralised molecular gas disc was detected by Downes & Solomon (1998). Puech et al. (2009) reported a gas-rich disc which might be the result of a collapse of a larger disc or a major merger. After the final coalescence of a major merger, in-falling gas from tidal tails can form a rotating gaseous disc over long periods of time (Barnes, 2002).
In the major-merger simulations which we consider in this work, we do see this behaviour: at about 1000 Myr after the final coalescence, gas which was slung far outside the merging system in tidal tails has formed a central disc, which we observe as a double-peak emission line. We do see a stronger DP signature for the galaxy resulting from a gSa + gSa merger in comparison to a gSb + gSb one. This is most likely due to the progenitors of the latter having less massive bulges, so that the resulting rotation curve shows smaller velocities in the centre. Regarding the morphology of the late stages of major mergers, we indeed observe mostly early-type morphologies. Only in mergers with the two discs in the orbital plane do we observe a prominent disc structure in the resulting galaxy. However, besides such mergers, we do not find any dependence on the orbital geometry of the merger, which was further discussed in Mihos & Hernquist (1996). At the observed stage of post-coalescence, we do not observe strong tidal features in the central kiloparsecs, which is in line with the findings from Lotz et al. (2010). In this work, we only consider visual merger identification, similar to e.g. Domínguez Sánchez et al. (2018) or Willett et al. (2013). This is only sensitive to prominent tidal features and perturbations which are detectable in early merger stages. Post-coalescence galaxy mergers are difficult to detect; they are often accompanied by large bulges (e.g. Barnes & Hernquist, 1991; Barnes, 1992) or dual nuclei (e.g. Komossa et al., 2003). However, the presence of large bulges does not give any insight concerning the merger time-scale, and in order to identify dual nuclei, high-resolution observations are needed. By combining multiple imaging predictors in a linear discriminant analysis, it is possible to correctly identify post-coalescence galaxy mergers, as shown with simulated galaxies by Nevin et al. (2019). When including stellar kinematics, observed with integrated-field spectroscopy, the post-coalescence mergers can be even better identified (Nevin et al., 2021). In a similar work on galaxy mergers from cosmological simulations, Bottrell et al. (2022) have shown that accurate identification can also be achieved with neural networks, even though they find that the kinematic input has a less significant contribution compared to the imaging input. For the post-coalescence major mergers in this work, we find morphologies indicating a strong ellipticity, so that the galaxies can be identified as lenticular galaxies. This was discussed for the same merger simulations in Eliche-Moral et al. (2018). These galaxies can correspond to the excess of DP S0 galaxies found in M20. However, these configurations of post-major mergers form around 1 Gyr after the final coalescence. The increased star-formation rate associated with a merger has already faded away at this merger stage (Mihos & Hernquist, 1996; Di Matteo et al., 2007). Hence, this is in conflict with the increased star-formation rates found for DP galaxies (Maschmann & Melchior, 2019; Maschmann et al., 2020, 2021).

### 5.2 Strong double-peak features in disc galaxies: bars or minor mergers?

In contrast to major mergers, minor mergers are less violent, and the merger morphology is detectable only up to 100 Myr after the final coalescence using photometry (e.g. Lotz et al., 2010). In addition to that, within this timescale an enhanced star-formation rate can be induced by the merger (Di Matteo et al., 2007).
Considering the excess of S0 and Sa galaxies and the central star-formation enhancement found in star-forming DP galaxies (M20), a minor merger can explain the observed characteristics of DP emission-line galaxies. During a close encounter in a minor merger, one can observe a DP signature which is similar to the case discussed in Mazzilli Ciraulo et al. (2021). However, this is not necessarily the final stage of the merger but a superposition of two galaxies aligned with the line-of-sight. This phenomenon will be addressed in greater detail in Halle et al. (in prep.). In this work, however, we set the focus on how to create a DP signature which cannot be identified through visual inspection. Depending on the merger orbit, the dwarf galaxy can enter the central region from any direction, and we therefore do not detect any directional dependence. In some cases, we even observe a strong DP signature from a face-on perspective. Within 350 Myr, the two nuclei finally merge and no DP signature can be detected anymore. In the merger stage of final coalescence, when we observe the highest DP fraction, we only see weak tidal features in the central kiloparsecs. The two nuclei would only be visible with high-resolution imaging or in very nearby galaxies. Furthermore, minor mergers are considered to happen more frequently than major mergers in the late Universe (Conselice et al., 2005; Noeske et al., 2007). As mentioned in Sect. 5.1, a bar feature in isolated galaxies can also create a DP emission line. In Sect. 3.3, we explore bars in simulated Sa and Sb galaxies and find a DP when viewing from a perspective parallel to the bar. Barred galaxies are considered to be effective in transporting cold gas inwards, leading to central growth and rejuvenation of SF in the central region (Chown et al., 2019). On the one hand, minor mergers can trigger a central star-formation enhancement (e.g. Dekel & Burkert, 2014); on the other hand, bars are considered to trigger central starbursts more effectively than galaxy-galaxy interactions (Ellison et al., 2011). Therefore, barred galaxies would also be a plausible mechanism for producing a strong DP emission-line signature accompanied by a central star-formation enhancement. Observing a bar parallel to its major axis can lead to a false classification as a disc galaxy with a symmetric bulge. In fact, M20 find a bar fraction of only 3 % among DP emission-line galaxies. However, the identification used for this galaxy type favours less inclined and face-on galaxies, as bars are detected with a machine-learning algorithm described in Domínguez Sánchez et al. (2018). Thus, a large part of the more inclined barred galaxies might not be detected, the bar being hidden due to the viewing angle. In principle, bars can occur in spiral and S0 galaxies. These types make up half of the M20-DP galaxy sample (16 % spiral and 36 % S0 galaxies). By combining a bar fraction from observations and the estimated DP fraction due to a bar, we can estimate whether bars alone can be responsible for the significant increase in DP S0 galaxies observed by M20. We adopt a bar fraction of spiral galaxies of ${\rm P(bar|spiral)}=0.66$ (Eskridge et al., 2000) and for S0 galaxies of ${\rm P(bar|S0)}=0.46$ (Laurikainen et al., 2009). In order to estimate the frequency of S0 and spiral galaxies, we use the same morphological selection of these galaxy types as performed in M20, based on Domínguez Sánchez et al. (2018), for SDSS galaxies.
We further restrict the selection to a similar stellar mass and redshift distribution by applying a stellar-mass cut of ${\rm M_{*}}\geq 10^{10.5}\,{\rm M_{\odot}}$ and a redshift cut of $z\leq 0.2$. This selection results in fractions of P(spiral) = 0.131 and P(S0) = 0.255. The fact that we find more S0 galaxies is due to the selection of galaxies with high stellar masses, which is similar to the selection in M20. If bars were solely responsible for the DP signatures in these two morphological classes, the expected ratio of barred S0 to barred spiral galaxies would be roughly ${\rm P(S0)\,P(bar|S0)}/[{\rm P(spiral)\,P(bar|spiral)}]=0.255\times 0.46/(0.131\times 0.66)\approx 1.4$. Because the assumed bar probability is lower for S0 than for spiral galaxies, this expected ratio falls short of the factor $\sim 2$ we see in the ratio between S0 and spiral galaxies in the M20-DP sample, and it is therefore rather unlikely that this excess can be explained purely by bars. However, this estimation is based on two simplified assumptions: first, that bars in S0 galaxies produce the same DP fraction as in spiral galaxies and, second, that the bar fraction is constant for all stellar masses $\geq 10^{10.5}\,{\rm M_{\odot}}$, which is not the case (Zhou et al., 2020; Zhao et al., 2020; Roshan et al., 2021). In this paper we address the fundamental question of which mechanisms can cause DP signatures. However, it is difficult to estimate which of these effects is more likely based on idealised simulations, and we therefore plan to address this question in future work.

### 5.3 Resolving double-peak emission lines and the importance of future surveys

Here, we discussed multiple mechanisms which can lead to a DP signature observed in a central spectroscopic observation of a galaxy. However, considering only a central spectrum and a snapshot in the optical light, one cannot conclusively determine the origin of the DP emission line. In order to distinguish between the different mechanisms discussed here, additional information about the spatial distribution of the kinematic signatures is needed. As shown in Mazzilli Ciraulo et al. (2021), relying on integrated field spectroscopy with the Mapping Nearby Galaxies at APO (MaNGA, Bundy et al., 2015) survey, one can spatially disentangle two different gas components. In this very case, the central DP signature found in the central $3^{\prime\prime}$ SDSS spectrum originates from two superposed discs. Long-slit spectroscopic observations provide spatial resolution, and Comerford et al. (2009b, 2011), Müller-Sánchez et al. (2015), Nevin et al. (2016) and Comerford et al. (2018) succeeded in resolving dual AGN as the underlying mechanism of detected DPs and distinguished them from other mechanisms such as gas outflows or rotating discs. Therefore, the mechanisms discussed in this work should be studied in greater detail by means of surveys such as MaNGA, but at the same time the basic understanding of these phenomena should be investigated with further simulations. Cosmological simulations, in particular, offer a special opportunity as they provide a much greater diversity of different merger scenarii, and galaxies are in constant interaction with their environment. Furthermore, the inclusion of AGN feedback allows one to discuss further how DP emission lines are connected to physical processes (Somerville & Davé, 2015; Vogelsberger et al., 2020). A complete analysis of SDSS-like spectroscopic observations in cosmological simulations may also provide insight into which underlying process is more likely (e.g. bar signatures or minor mergers). By aiming for a better understanding of the kinematic footprint of gas in galaxies, we might also be able to apply such insights to upcoming surveys. The SDSS only observed galaxies in the late Universe.
By using DP emission-line signatures as a tracer to study gaseous discs and galaxy mergers, we can better estimate the merger rate over larger ranges of redshift. This would help us to understand, for example, how galaxies evolve through mergers and to quantify how the star-formation rate is connected to such phenomena. Two upcoming surveys are of special interest for this very task: the VLT 4MOST survey, as it probes emission-line galaxies up to a redshift of $z=1.1$ (Richard et al., 2019), and the EUCLID mission, which will provide spectroscopic data for galaxies up to $z\sim 2$ (Laureijs et al., 2011). Spectroscopic observations from the EUCLID mission will not be able to resolve DP signatures due to the insufficient spectral resolution of R=250 at a pixel size of $0.3^{\prime\prime}$; however, the high imaging resolution of $0.1^{\prime\prime}$ will enable probing earlier merger stages with an unprecedented sample size. Visually identified galaxy mergers and DP emission-line galaxies can be used as a tool to select promising candidates in the high-redshift Universe and to compare the measured kinematic footprint and merger rate to the ones we know from the late Universe.

## 6 Conclusions

A double-peak (DP) emission line observed in the centre of a galaxy is a peculiar feature, as it offers insights into the central kinematic processes. This kinematic footprint has been used to find dual active galactic nuclei (AGN) or AGN-driven gas outflows. In recent studies, a broader search for DP galaxies has been conducted in order to shed light on this phenomenon from a more general perspective. The resulting DP sample showed that AGNs represent only a small subgroup and that the majority shows only moderate or no AGN activity. Furthermore, DP galaxies are predominantly S0 or disc galaxies with large bulges, and no increased merger rate was observed. Taking into account that star-forming DP galaxies exhibit a central star-formation enhancement, the most plausible explanation would be the observation of a minor merger. However, without follow-up observations one cannot conclusively determine the underlying mechanism for an individual galaxy. In order to get a better understanding of the internal kinematic processes creating a DP signature, we investigated different possibilities in this work. We, therefore, computed synthetic SDSS spectroscopic emission-line observations from disc models and simulations and searched for DP signatures from all directions using a grid of observation angles. With axisymmetric models, we explored from which observation angle and for which rotation curves one can see a DP. To get a more realistic view, we searched in simulations of isolated galaxies from where we can observe a DP signature and found, besides a rotation pattern, that bars can create a strong DP when observed parallel to the major axis of the bar. We also observed minor and major-merger simulations over the course of their merger process. We found DP signatures during close encounters of two galaxies, as two gas components are present inside the spectroscopic observation. Furthermore, about 1 Gyr after the final coalescence, we see a central rotating disc in post-major mergers which creates a distinct DP fraction. This phenomenon, however, is not detected in minor-merger simulations. However, a strong DP signature is observed there within 350 Myr after the final coalescence. For the discussed stages of major and minor merger simulations, the morphology does not give a direct indication of a recent merger.
Using axisymmetric models, we have gained a clear understanding of how the connection between the stellar bulge and the rotation curve can lead to a DP. Massive or highly concentrated bulges can create a strong central velocity gradient such that a DP can be observed at inclinations as low as $\theta=40^{\circ}$ ($\theta=0^{\circ}$ would be face-on). In the context of observed DP galaxies in the SDSS, we must clearly say that late stages of major mergers are unlikely, as they tend to produce S0 and mainly elliptical morphologies. Moreover, at the merger stage we discuss here, they have already consumed the majority of their gas through star formation, and an enhanced star-formation rate is close to impossible. Minor mergers and bars as mechanisms for DP signatures show good agreement with observations. On the one hand, both are known for central star-formation activity and, on the other hand, both phenomena occur frequently. Although the time range in which we can observe a DP in minor mergers is relatively short (about 350 Myr), this footprint can be seen from a large range of angles and shows no correlation with the galaxy inclination. These findings show further possibilities for interpreting an observed DP emission line. At the same time, they are in line with the observations for which minor mergers were discussed as the most plausible explanation. In the context of future work on DP emission-line galaxies, we further discussed that integrated-field spectroscopy can disentangle the underlying mechanisms. Furthermore, the understanding of DP emission lines is a crucial tool for upcoming spectroscopic surveys at high redshift, as these signatures can help to identify galaxy mergers. ###### Acknowledgements. This work was supported by the Programme National Cosmology et Galaxies (PNCG) of CNRS/INSU with INP and IN2P3, co-funded by CEA and CNES. IC’s research is supported by the SAO Telescope Data Center. IC acknowledges support from the RScF grant 19-12-00281. ## References * Abazajian et al. (2009) Abazajian, K. N., Adelman-McCarthy, J. K., Agüeros, M. A., et al. 2009, ApJS, 182, 543 * Athanassoula & Bosma (1985) Athanassoula, E. & Bosma, A. 1985, ARA&A, 23, 147 * Baes & Dejonghe (2000) Baes, M. & Dejonghe, H. 2000, MNRAS, 313, 153 * Baes et al. (2000) Baes, M., Dejonghe, H., & De Rijcke, S. 2000, MNRAS, 318, 798 * Barnes & Hut (1986) Barnes, J. & Hut, P. 1986, Nature, 324, 446 * Barnes (1992) Barnes, J. E. 1992, ApJ, 393, 484 * Barnes (2002) Barnes, J. E. 2002, MNRAS, 333, 481 * Barnes & Hernquist (1991) Barnes, J. E. & Hernquist, L. E. 1991, ApJ, 370, L65 * Begelman et al. (1980) Begelman, M. C., Blandford, R. D., & Rees, M. J. 1980, Nature, 287, 307 * Bergvall et al. (2003) Bergvall, N., Laurikainen, E., & Aalto, S. 2003, A&A, 405, 31 * Binney & Tremaine (1987) Binney, J. & Tremaine, S. 1987, Galactic dynamics * Bohlin et al. (1978) Bohlin, R. C., Savage, B. D., & Drake, J. F. 1978, ApJ, 224, 132 * Bolatto et al. (2013) Bolatto, A. D., Wolfire, M., & Leroy, A. K. 2013, ARA&A, 51, 207 * Bottrell et al. (2022) Bottrell, C., Hani, M. H., Teimoorinia, H., Patton, D. R., & Ellison, S. L. 2022, MNRAS, 511, 100 * Bournaud & Combes (2002) Bournaud, F. & Combes, F. 2002, A&A, 392, 83 * Bournaud et al. (2005a) Bournaud, F., Combes, F., & Semelin, B. 2005a, MNRAS, 364, L18 * Bournaud et al. (2005b) Bournaud, F., Jog, C. J., & Combes, F. 2005b, A&A, 437, 69 * Bournaud et al. (2007) Bournaud, F., Jog, C. J., & Combes, F. 2007, A&A, 476, 1179 * Bundy et al. (2015) Bundy, K., Bershady, M.
A., Law, D. R., et al. 2015, ApJ, 798, 7 * Chilingarian et al. (2010) Chilingarian, I. V., Di Matteo, P., Combes, F., Melchior, A. L., & Semelin, B. 2010, A&A, 518, A61 * Chilingarian et al. (2017) Chilingarian, I. V., Zolotukhin, I. Y., Katkov, I. Y., et al. 2017, ApJS, 228, 14 * Chown et al. (2019) Chown, R., Li, C., Athanassoula, E., et al. 2019, MNRAS, 484, 5192 * Combes et al. (1994) Combes, F., Prugniel, P., Rampazzo, R., & Sulentic, J. W. 1994, A&A, 281, 725 * Comerford et al. (2009a) Comerford, J. M., Gerke, B. F., Newman, J. A., et al. 2009a, ApJ, 698, 956 * Comerford & Greene (2014) Comerford, J. M. & Greene, J. E. 2014, ApJ, 789, 112 * Comerford et al. (2009b) Comerford, J. M., Griffith, R. L., Gerke, B. F., et al. 2009b, ApJ, 702, L82 * Comerford et al. (2018) Comerford, J. M., Nevin, R., Stemo, A., et al. 2018, ApJ, 867, 66 * Comerford et al. (2015) Comerford, J. M., Pooley, D., Barrows, R. S., et al. 2015, ApJ, 806, 219 * Comerford et al. (2011) Comerford, J. M., Pooley, D., Gerke, B. F., & Madejski, G. M. 2011, ApJ, 737, L19 * Conselice et al. (2005) Conselice, C. J., Blackburne, J. A., & Papovich, C. 2005, ApJ, 620, 564 * De Propris et al. (2005) De Propris, R., Liske, J., Driver, S. P., Allen, P. D., & Cross, N. J. G. 2005, AJ, 130, 1516 * Dekel & Burkert (2014) Dekel, A. & Burkert, A. 2014, MNRAS, 438, 1870 * Di Matteo et al. (2007) Di Matteo, P., Combes, F., Melchior, A. L., & Semelin, B. 2007, A&A, 468, 61 * Domínguez Sánchez et al. (2018) Domínguez Sánchez, H., Huertas-Company, M., Bernardi, M., Tuccillo, D., & Fischer, J. L. 2018, MNRAS, 476, 3661 * Downes & Solomon (1998) Downes, D. & Solomon, P. M. 1998, ApJ, 507, 615 * Eliche-Moral et al. (2018) Eliche-Moral, M. C., Rodríguez-Pérez, C., Borlaff, A., Querejeta, M., & Tapia, T. 2018, A&A, 617, A113 * Ellison et al. (2013) Ellison, S. L., Mendel, J. T., Patton, D. R., & Scudder, J. M. 2013, MNRAS, 435, 3627 * Ellison et al. (2011) Ellison, S. L., Nair, P., Patton, D. R., et al. 2011, MNRAS, 416, 2182 * Ellison et al. (2008) Ellison, S. L., Patton, D. R., Simard, L., & McConnachie, A. W. 2008, AJ, 135, 1877 * Eskridge et al. (2000) Eskridge, P. B., Frogel, J. A., Pogge, R. W., et al. 2000, AJ, 119, 536 * Farouki & Shapiro (1982) Farouki, R. T. & Shapiro, S. L. 1982, ApJ, 259, 103 * Fitzpatrick (1999) Fitzpatrick, E. L. 1999, PASP, 111, 63 * Ge et al. (2012) Ge, J.-Q., Hu, C., Wang, J.-M., Bai, J.-M., & Zhang, S. 2012, ApJS, 201, 31 * Gerke et al. (2007) Gerke, B. F., Newman, J. A., Lotz, J., et al. 2007, ApJ, 660, L23 * Gingold & Monaghan (1982) Gingold, R. A. & Monaghan, J. J. 1982, Journal of Computational Physics, 46, 429 * Governato et al. (2009) Governato, F., Brook, C. B., Brooks, A. M., et al. 2009, MNRAS, 398, 312 * Hernquist (1993) Hernquist, L. 1993, ApJS, 86, 389 * Hernquist & Mihos (1995) Hernquist, L. & Mihos, J. C. 1995, ApJ, 448, 41 * Hopkins et al. (2009) Hopkins, P. F., Cox, T. J., Younger, J. D., & Hernquist, L. 2009, ApJ, 691, 1168 * Kewley et al. (2019) Kewley, L. J., Nicholls, D. C., & Sutherland, R. S. 2019, ARA&A, 57, 511 * Komossa et al. (2003) Komossa, S., Burwitz, V., Hasinger, G., et al. 2003, ApJ, 582, L15 * Laureijs et al. (2011) Laureijs, R., Amiaux, J., Arduini, S., et al. 2011, arXiv e-prints, arXiv:1110.3193 * Laurikainen et al. (2009) Laurikainen, E., Salo, H., Buta, R., & Knapen, J. H. 2009, ApJ, 692, L34 * Le Borgne et al. (2004) Le Borgne, D., Rocca-Volmerange, B., Prugniel, P., et al. 2004, A&A, 425, 881 * Lotz et al. (2008) Lotz, J. M., Jonsson, P., Cox, T. J., & Primack, J. R. 
2008, MNRAS, 391, 1137 * Lotz et al. (2010) Lotz, J. M., Jonsson, P., Cox, T. J., & Primack, J. R. 2010, MNRAS, 404, 575 * Lotz et al. (2004) Lotz, J. M., Primack, J., & Madau, P. 2004, AJ, 128, 163 * Lucy (1977) Lucy, L. B. 1977, AJ, 82, 1013 * Maness et al. (2004) Maness, H. L., Taylor, G. B., Zavala, R. T., Peck, A. B., & Pollack, L. K. 2004, ApJ, 602, 123 * Maschmann & Melchior (2019) Maschmann, D. & Melchior, A.-L. 2019, A&A, 627, L3 * Maschmann et al. (2021) Maschmann, D., Melchior, A.-L., Combes, F., et al. 2021, arXiv e-prints, arXiv:2112.12796 * Maschmann et al. (2020) Maschmann, D., Melchior, A.-L., Mamon, G. A., Chilingarian, I. V., & Katkov, I. Y. 2020, A&A, 641, A171 * Mazzilli Ciraulo et al. (2021) Mazzilli Ciraulo, B., Melchior, A.-L., Maschmann, D., et al. 2021, A&A, 653, A47 * Menéndez-Delmestre et al. (2007) Menéndez-Delmestre, K., Sheth, K., Schinnerer, E., Jarrett, T. H., & Scoville, N. Z. 2007, ApJ, 657, 790 * Mihos & Hernquist (1994) Mihos, J. C. & Hernquist, L. 1994, ApJ, 437, 611 * Mihos & Hernquist (1996) Mihos, J. C. & Hernquist, L. 1996, ApJ, 464, 641 * Miyamoto & Nagai (1975) Miyamoto, M. & Nagai, R. 1975, PASJ, 27, 533 * Müller-Sánchez et al. (2015) Müller-Sánchez, F., Comerford, J. M., Nevin, R., et al. 2015, ApJ, 813, 103 * Negroponte & White (1983) Negroponte, J. & White, S. D. M. 1983, MNRAS, 205, 1009 * Nevin et al. (2019) Nevin, R., Blecha, L., Comerford, J., & Greene, J. 2019, ApJ, 872, 76 * Nevin et al. (2021) Nevin, R., Blecha, L., Comerford, J., et al. 2021, ApJ, 912, 45 * Nevin et al. (2016) Nevin, R., Comerford, J., Müller-Sánchez, F., Barrows, R., & Cooper, M. 2016, ApJ, 832, 67 * Noeske et al. (2007) Noeske, K. G., Weiner, B. J., Faber, S. M., et al. 2007, ApJ, 660, L43 * Patton et al. (2011) Patton, D. R., Ellison, S. L., Simard, L., McConnachie, A. W., & Mendel, J. T. 2011, MNRAS, 412, 591 * Peschken et al. (2020) Peschken, N., Łokas, E. L., & Athanassoula, E. 2020, MNRAS, 493, 1375 * Puech et al. (2009) Puech, M., Hammer, F., Flores, H., Neichel, B., & Yang, Y. 2009, A&A, 493, 899 * Richard et al. (2019) Richard, J., Kneib, J. P., Blake, C., et al. 2019, The Messenger, 175, 50 * Robertson et al. (2006) Robertson, B., Bullock, J. S., Cox, T. J., et al. 2006, ApJ, 645, 986 * Rodriguez et al. (2006) Rodriguez, C., Taylor, G. B., Zavala, R. T., et al. 2006, ApJ, 646, 49 * Roshan et al. (2021) Roshan, M., Ghafourian, N., Kashfi, T., et al. 2021, MNRAS, 508, 926 * Salim & Narayanan (2020) Salim, S. & Narayanan, D. 2020, ARA&A, 58, 529 * Sofue & Rubin (2001) Sofue, Y. & Rubin, V. 2001, ARA&A, 39, 137 * Solanes et al. (2018) Solanes, J. M., Perea, J. D., & Valentí-Rojas, G. 2018, A&A, 614, A66 * Somerville & Davé (2015) Somerville, R. S. & Davé, R. 2015, ARA&A, 53, 51 * Steinmetz & Navarro (2002) Steinmetz, M. & Navarro, J. F. 2002, New A, 7, 155 * Toomre & Toomre (1972) Toomre, A. & Toomre, J. 1972, ApJ, 178, 623 * Tremonti et al. (2004) Tremonti, C. A., Heckman, T. M., Kauffmann, G., et al. 2004, ApJ, 613, 898 * Villalobos et al. (2012) Villalobos, Á., De Lucia, G., Borgani, S., & Murante, G. 2012, MNRAS, 424, 2401 * Vogelsberger et al. (2020) Vogelsberger, M., Marinacci, F., Torrey, P., & Puchwein, E. 2020, Nature Reviews Physics, 2, 42 * Westmeier et al. (2014) Westmeier, T., Jurek, R., Obreschkow, D., Koribalski, B. S., & Staveley-Smith, L. 2014, MNRAS, 438, 1176 * Wiklind et al. (1997) Wiklind, T., Combes, F., Henkel, C., & Wyrowski, F. 1997, A&A, 323, 727 * Willett et al. (2013) Willett, K. W., Lintott, C. J., Bamford, S. P., et al. 
2013, MNRAS, 435, 2835 * Wolfire et al. (2010) Wolfire, M. G., Hollenbach, D., & McKee, C. F. 2010, ApJ, 716, 1191 * Zhao et al. (2020) Zhao, D., Du, M., Ho, L. C., Debattista, V. P., & Shi, J. 2020, ApJ, 904, 170 * Zhou et al. (2020) Zhou, Z.-B., Zhu, W., Wang, Y., & Feng, L.-L. 2020, ApJ, 895, 92

## Appendix A Initial galaxy parameters

In this section we provide detailed parameters of the galaxy simulations of the GalMer project, described in Sects. 3.2.1 and 4.1. Table 2 summarises the initial parameters of all individual galaxy types used in this work and Table 3 summarises the orbital parameters of the merger simulations.

Table 2: Initial parameters of simulated galaxies in the GalMer database.

| | gSa | gSb | dSb | dSd |
|---|---|---|---|---|
| $M_{\rm gas}$ $[2.3\times 10^{9}M_{\odot}]$ | 4 | 4 | 0.4 | 0.75 |
| $M_{\rm *\,disc}$ $[2.3\times 10^{9}M_{\odot}]$ | 40 | 20 | 2 | 2.5 |
| $M_{\rm *\,bulge}$ $[2.3\times 10^{9}M_{\odot}]$ | 10 | 5 | 0.5 | 0 |
| $M_{\rm DM}$ $[2.3\times 10^{9}M_{\odot}]$ | 50 | 75 | 7.5 | 7.5 |
| $a_{\rm gas}$ [kpc] | 5 | 6 | 1.6 | 2.2 |
| $h_{\rm gas}$ [kpc] | 0.2 | 0.2 | 0.06 | 0.06 |
| $a_{\rm *,disc}$ [kpc] | 4 | 5 | 1.6 | 1.9 |
| $h_{\rm *,disc}$ [kpc] | 0.5 | 0.5 | 0.16 | 0.16 |
| $b_{\rm *,bulge}$ [kpc] | 2 | 1 | 0.3 | - |
| $b_{\rm DM}$ [kpc] | 10 | 12 | 3.8 | 4.7 |

Note: the values are taken from Chilingarian et al. (2010).

Table 3: Orbital parameters for major and minor mergers used in the GalMer database.

| orb. id | $r_{\rm ini}$ [kpc] | $v_{\rm ini}$ [$10^{2}$ km s-1] | $L$ [$10^{2}$ km s-1 kpc] | spin |
|---|---|---|---|---|
| Major mergers | | | | |
| 01dir | 100 | 2.0 | 56.6 | up |
| 01ret | 100 | 2.0 | 56.6 | down |
| 02dir | 100 | 3.0 | 59.3 | up |
| 02ret | 100 | 3.0 | 59.3 | down |
| 03dir | 100 | 3.7 | 62.0 | up |
| 03ret | 100 | 3.7 | 62.0 | down |
| 04dir | 100 | 5.8 | 71.5 | up |
| 04ret | 100 | 5.8 | 71.5 | down |
| 05dir | 100 | 2.0 | 80.0 | up |
| 05ret | 100 | 2.0 | 80.0 | down |
| Minor mergers | | | | |
| 01dir | 100 | 1.48 | 29.66 | up |
| 01ret | 100 | 1.48 | 29.66 | down |
| 02dir | 100 | 1.52 | 29.69 | up |
| 02ret | 100 | 1.52 | 29.69 | down |
| 03dir | 100 | 1.55 | 29.72 | up |
| 03ret | 100 | 1.55 | 29.72 | down |
| 04dir | 100 | 1.48 | 36.33 | up |
| 04ret | 100 | 1.48 | 36.33 | down |
| 05dir | 100 | 1.52 | 36.38 | up |
| 05ret | 100 | 1.52 | 36.38 | down |

Note: the values are taken from Chilingarian et al. (2010).

## Appendix B Merger orbit of major merger galaxies

In this section, an additional figure of a major merger simulation of two gSa galaxies is presented in Fig. 19. This is supplementary to Fig. 11, which is used to discuss a major merger simulation.

Figure 19: Visualisation of a gSa + gSa galaxy merger. The presentation is the same as described in Fig. 11. However, in the bottom panels, presenting the 2D projection of different snapshots, we display the central 4 kpc to better visualise the central disc. The merger process is characterised by a retrograde orbit with the orbit-id 5 and an inclination of $45^{\circ}$.
$\displaystyle\tau_{j+1}\cdot\Big{[}|\mbox{Cov}(U_{j+1},V_{j+1})|+(\tau_{1}\cdots\tau_{j})\cdot\sum_{i=1}^{j}|\mbox{Cov}(U_{i},V_{i})|\Big{]}$ $\displaystyle\leq$ $\displaystyle(\tau_{1}\cdots\tau_{j+1})\cdot\sum_{i=1}^{j+1}|\mbox{Cov}(U_{i},V_{i})|.$ This confirms (87). The proof is completed by taking $C=\tau_{k}$ and $K=C^{k}.$ $\square$ #### 6.2.3 Combinatorics In this section we will work on some combinatorics problems. They will be used to evaluate covariances between squared sample correlations coefficients in Section 6.2.4. We always assume $\alpha_{1},\cdots,\alpha_{m},\beta_{1},\cdots,\beta_{m},\gamma_{1},\cdots,\gamma_{m},\delta_{1},\cdots,\delta_{m}$ are non-negative integers. Set $\boldsymbol{\alpha}=(\alpha_{1},\cdots,\alpha_{m})$, $\boldsymbol{\beta}=(\beta_{1},\cdots,\beta_{m})$, $\boldsymbol{\gamma}=(\gamma_{1},\cdots,\gamma_{m})$ and $\boldsymbol{\delta}=(\delta_{1},\cdots,\delta_{m})$. ###### LEMMA 6.18 Let $m\geq 2$, $a\geq 0$ and $b\geq 1$ be integers. Then the following hold with constant $K$ depending on $a$ and $b$ but not $m$. (i) Let $N_{1}$ be the total number of non-negative integer solutions $(x_{1},\cdots,x_{m})$ of $x_{1}+\cdots+x_{m}=a$, then $N_{1}\leq Km^{a}$. (ii) Let $N_{2}$ be the total number of non-negative integer solutions $(x_{1},\cdots,x_{m})$ of $x_{1}+\cdots+x_{m}=a$ with $x_{1}\geq b$. Then $N_{2}\leq Km^{a-b}$. (iii) Given $1\leq n<m$ and $c_{1}\geq 1,\cdots,c_{n}\geq 1$ with $c_{1}+\cdots+c_{n}\leq a$, let $N_{3}$ the total number of non-negative integer solutions $(x_{1},\cdots,x_{m})$ of $x_{1}+\cdots+x_{m}=a$ with $x_{i}\geq c_{i}$ for $1\leq i\leq n$. Then $N_{3}\leq Km^{a-c_{1}-\cdots- c_{n}}$. A quick comment is that (ii) is a special case of (iii). We single it out because $N_{2}$ has a much neater statement and it will be used very frequently. Proof of Lemma 6.18. (i) If $a=0$, the only non-negative integer solution of $x_{1}+\cdots+x_{m}=a$ is $(0,\cdots,0)$. Then $N_{1}=1$ and the conclusion follows with any constant $K\geq 1.$ We assume next that $a\geq 1$. It is well-known that $\displaystyle N_{1}=\binom{m+a-1}{a}\leq(m+a-1)^{a}\leq(1+a)^{a}m^{a}.$ (91) (iii) Set $y_{i}=x_{i}-c_{i}$ for $i=1,\cdots,n$ and $y_{i}=x_{i}$ for $n+1\leq i\leq m$. Then $N_{3}$ is equal to the total number of non-negative integer solutions $(y_{1},\cdots,y_{m})$ of $y_{1}+\cdots+y_{m}=a-c_{1}-\cdots-c_{n}.$ From (i) we see $N_{3}\leq Km^{a-c_{1}-\cdots-c_{n}}$. The proof is completed. The statement (ii) follows because it is a special case of (iii). $\square$ In the following when we say a non-negative integer solution $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})$ of a certain equation, we mean $(\alpha_{1},\cdots,\alpha_{m},\beta_{1},\cdots,\beta_{m},\gamma_{1},\cdots,\gamma_{m},\delta_{1},\cdots,\delta_{m})$ satisfies that equation with each of $\\{\alpha_{i},\beta_{i},\gamma_{i},\delta_{i};\,1\leq i\leq m\\}$ being a non-negative integer. ###### LEMMA 6.19 Let $m\geq 4$ and $\alpha,\beta,\gamma,\delta$ be non-negative integers. 
Let $N_{1}$ be the total number of non-negative integer solutions of $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})$ satisfying $\displaystyle\sum_{i=1}^{m}\alpha_{i}=\alpha,\ \ \sum_{i=1}^{m}\beta_{i}=\beta,\ \ \sum_{i=1}^{m}\gamma_{i}=\gamma,\ \ \sum_{i=1}^{m}\delta_{i}=\delta.$ (92) Set $I_{1}:=\\{1\\}\cup\\{2\leq i\leq m;\,(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})\ \mbox{satisfies}\ \eqref{husoq922}\ \mbox{and}\ \alpha_{i}+\beta_{i}\geq 1\\}$ and $\displaystyle I_{2}:=\\{2,3\\}\cup\big{\\{}i\in\\{1,4,5,\cdots,m\\};\,(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})\ \mbox{satisfies}\ \eqref{husoq922}\ \mbox{and}\ \gamma_{i}+\delta_{i}\geq 1\big{\\}}.$ Let $N_{2}$ be the total number of non-negative integer solutions of $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})$ satisfying (92) and $I_{1}\cap I_{2}\neq\emptyset$. Then, there exists a constant $K$ depending on $\alpha,\beta,\gamma,\delta$ but not $m$ such that (i) $N_{1}\leq K\cdot m^{\alpha+\beta+\gamma+\delta}$; (ii) $N_{2}\leq K\cdot m^{\alpha+\beta+\gamma+\delta-1}.$ Proof of Lemma 6.19. First, recall the fact that the total number of non- negative integer solutions of $x_{1}+\cdots+x_{m}=k$ for any non-negative integer $k$ is $\binom{m+k-1}{k}.$ Therefore, considering the four equations in (92) separately, the total numbers of non-negative integer solutions are $\displaystyle\binom{m+\alpha-1}{\alpha},\ \binom{m+\beta-1}{\beta},\ \binom{m+\gamma-1}{\gamma}\ \ \mbox{and}\ \ \binom{m+\delta-1}{\delta},$ (93) respectively. (i) By (91) and (93), $\displaystyle N_{1}$ $\displaystyle\leq$ $\displaystyle\binom{m+\alpha-1}{\alpha}\binom{m+\beta-1}{\beta}\binom{m+\gamma-1}{\gamma}\binom{m+\delta-1}{\delta}$ (94) $\displaystyle\leq$ $\displaystyle(1+\alpha)^{\alpha}(1+\beta)^{\beta}(1+\gamma)^{\gamma}(1+\delta)^{\delta}\cdot m^{\alpha+\beta+\gamma+\delta}.$ The conclusion follows by taking $K=(1+\alpha)^{\alpha}(1+\beta)^{\beta}(1+\gamma)^{\gamma}(1+\delta)^{\delta}$. (ii) If $\alpha+\beta=0$ and $\gamma+\delta=0$, then $I_{1}=\\{1\\}$ and $I_{2}=\\{2,3\\}$, and hence $I_{1}\cap I_{2}=\emptyset$. Thus $N=0$. The conclusion holds. So we assume next that either $\alpha+\beta\geq 1$ or $\gamma+\delta\geq 1$. Notice $I_{1}\cap I_{2}=A_{1}\cup A_{2}\cup A_{3}$, where $\displaystyle A_{1}$ $\displaystyle=$ $\displaystyle\\{i\in\\{1\\};\,(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})\ \mbox{satisfies}\ \eqref{husoq922}\ \mbox{and}\ \gamma_{i}+\delta_{i}\neq 0\big{\\}};$ $\displaystyle A_{2}$ $\displaystyle=$ $\displaystyle\\{\\{i\in\\{2,3\\};\,(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})\ \mbox{satisfies}\ \eqref{husoq922}\ \mbox{and}\ \alpha_{i}+\beta_{i}\neq 0\\};$ $\displaystyle A_{3}$ $\displaystyle=$ $\displaystyle\big{\\{}i\in\\{4,5,\cdots,m\\};\,(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})\ \mbox{satisfies}\ \eqref{husoq922}\ \mbox{and}\ \alpha_{i}+\beta_{i}\neq 0\ \mbox{and}\ \gamma_{i}+\delta_{i}\neq 0\big{\\}}.$ Hence, if $I_{1}\cap I_{2}\neq\emptyset$, then either $A_{1}\neq\emptyset$, $A_{2}\neq\emptyset$ or $A_{3}\neq\emptyset$. Let us consider the three scenarios one by one next. Scenario 1: $A_{1}\neq\emptyset$. In this situation, $\gamma_{1}+\delta_{1}\geq 1$. Consequently, either $\gamma_{1}\geq 1$ or $\delta_{1}\geq 1$. 
Thus, taking $b=1$ in Lemma 6.18(i) and (ii), we know the total number of non-negative integer solutions of $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})$ satisfying (92) and $A_{1}\neq\emptyset$ is bounded by $\displaystyle K_{1}m^{\alpha}\cdot K_{1}m^{\beta}\cdot K_{1}m^{\gamma-1}\cdot K_{1}m^{\delta}+K_{1}m^{\alpha}\cdot K_{1}m^{\beta}\cdot K_{1}m^{\gamma}\cdot K_{1}m^{\delta-1}$ $\displaystyle=$ $\displaystyle 2(K_{1})^{4}\cdot m^{\alpha+\beta+\gamma+\delta-1}$ where $K_{1}$ here and below is a constant depending on $\alpha,\beta,\gamma,\delta$ but not $m$. Scenario 2: $A_{2}\neq\emptyset$. In this situation, either $\alpha_{2}+\beta_{2}\geq 1$ or $\alpha_{3}+\beta_{3}\geq 1$. Similar to the first case, the total number of non-negative integer solutions of $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})$ satisfying (92) and $A_{2}\neq\emptyset$ is bounded by $2(K_{1})^{4}\cdot m^{\alpha+\beta+\gamma+\delta-1}+2(K_{1})^{4}\cdot m^{\alpha+\beta+\gamma+\delta-1}=4(K_{1})^{4}\cdot m^{\alpha+\beta+\gamma+\delta-1}$. Scenario 3: $A_{3}\neq\emptyset$. In this situation, there exists $i\in\\{4,5,\cdots,m\\}$ such that $\alpha_{i}+\beta_{i}\geq 1$ and $\gamma_{i}+\delta_{i}\geq 1$. For fixed $i$, if $\alpha_{i}+\beta_{i}\geq 1$ and $\gamma_{i}+\delta_{i}\geq 1$, then one of the four cases must be true: (a) $\alpha_{i}\geq 1$ and $\gamma_{i}\geq 1$; (b) $\alpha_{i}\geq 1$ and $\delta_{i}\geq 1$; (c) $\beta_{i}\geq 1$ and $\gamma_{i}\geq 1$; (d) $\beta_{i}\geq 1$ and $\delta_{i}\geq 1$. From Lemma 6.18(i) and (ii) again, we have that the total number of non-negative integer solutions of $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})$ satisfying (92) and (a) is dominated by $K_{1}m^{\alpha-1}\cdot K_{1}m^{\beta}\cdot K_{1}m^{\gamma-1}\cdot K_{1}m^{\delta}=(K_{1})^{4}\cdot m^{\alpha+\beta+\gamma+\delta-2}$. By symmetry, the same inequality holds if “(a)” is replaced by (b), (c) and (d), respectively. In conclusion, for fixed $i$, the total number of non-negative integer solutions of $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})$ satisfying (92) and $\alpha_{i}+\beta_{i}\geq 1$ and $\gamma_{i}+\delta_{i}\geq 1$ is controlled by $4(K_{1})^{4}\cdot m^{\alpha+\beta+\gamma+\delta-2}$. Now, $i\in\\{4,5,\cdots,m\\}$ has at most $m$ choices. Then the total number of non-negative integer solutions of $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})$ satisfying (92) and $A_{3}\neq\emptyset$ is bounded by $m\cdot 4(K_{1})^{4}\cdot m^{\alpha+\beta+\gamma+\delta-2}=4(K_{1})^{4}\cdot m^{\alpha+\beta+\gamma+\delta-1}.$ Finally, adding up the bounds in the above three scenarios, we get $N_{2}\leq 10(K_{1})^{4}\cdot m^{\alpha+\beta+\gamma+\delta-1}$. The proof is completed by taking $K=10(K_{1})^{4}$. $\square$ ###### LEMMA 6.20 Assume $m\geq 5$ and $\alpha,\beta,\gamma,\delta$ are non-negative integers. Define $\displaystyle S=\\{5\leq i\leq m;(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})\ \mbox{satisfies}\ \eqref{husoq922},\,\alpha_{i}+\beta_{i}\geq 1\ \mbox{and}\ \gamma_{i}+\delta_{i}\geq 1\\}.$ Then the following statements hold with a constant $K$ depending on $\alpha,\beta,\gamma,\delta$ but not $m$.
(i) The total number of solutions of $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})$ satisfying (92) and $S\neq\emptyset$ is bounded by $Km^{\alpha+\beta+\gamma+\delta-1}.$ (ii) The total number of solutions of $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})$ satisfying (92) with $\gamma_{1}+\delta_{1}\geq 2$ is bounded by $K\cdot m^{\alpha+\beta+\gamma+\delta-2}$. Proof of Lemma 6.20. (i) Since $S\neq\emptyset$, there exists some $5\leq i\leq m$ such that $\alpha_{i}+\beta_{i}\geq 1$ and $\gamma_{i}+\delta_{i}\geq 1$. According to Scenario 3 in the proof of Lemma 6.19, the total number of solutions of $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})$ satisfying (92), $\alpha_{i}+\beta_{i}\geq 1$ and $\gamma_{i}+\delta_{i}\geq 1$ is dominated by $K\cdot m^{\alpha+\beta+\gamma+\delta-2}$, where $K$ is a constant depending on $\alpha,\beta,\gamma,\delta$ but not $m$. Noticing $5\leq i\leq m$, we see the desired number is bounded by $(m-4)\cdot Km^{\alpha+\beta+\gamma+\delta-2}\leq Km^{\alpha+\beta+\gamma+\delta-1}$. (ii) Let $K$ be a constant that works in Lemma 6.18(i) with $a=\alpha$ or $a=\beta$, and that also works in Lemma 6.18(ii) with $a\in\\{\gamma,\delta\\}$ and $b\in\\{1,2\\}$. Since $\gamma_{1}+\delta_{1}\geq 2$, one of the three cases must be true: (a) $\gamma_{1}\geq 2$, (b) $\delta_{1}\geq 2$ or (c) $\gamma_{1}\geq 1$ and $\delta_{1}\geq 1$ simultaneously. By Lemma 6.18(ii), the total number of solutions $(\boldsymbol{\gamma},\boldsymbol{\delta})$ of the last two equations from (92) with $\gamma_{1}\geq 2$ is no more than $Km^{\gamma+\delta-2}$. The same holds if “$\gamma_{1}\geq 2$” is replaced by “$\delta_{1}\geq 2$”. Similarly, by Lemma 6.18(ii) again, the total number of solutions $(\boldsymbol{\gamma},\boldsymbol{\delta})$ of the last two equations from (92) satisfying $\gamma_{1}\geq 1$ and $\delta_{1}\geq 1$ is bounded by $Km^{\gamma-1}\cdot Km^{\delta-1}=K^{2}m^{\gamma+\delta-2}$. Consequently, the total number of solutions of $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})$ satisfying (92) with $\gamma_{1}+\delta_{1}\geq 2$ is bounded by $\displaystyle Km^{\alpha}\cdot Km^{\beta}\cdot(Km^{\gamma+\delta-2}+Km^{\gamma+\delta-2}+K^{2}m^{\gamma+\delta-2})=(K^{4}+2K^{3})m^{\alpha+\beta+\gamma+\delta-2}.$ Therefore, the desired conclusion follows by regarding $K^{4}+2K^{3}$ as the new constant $K$. $\square$ ###### LEMMA 6.21 Assume $m\geq 5$ and $\alpha,\beta,\gamma,\delta$ are non-negative integers. Define $\displaystyle S$ $\displaystyle=$ $\displaystyle\\{i\in\\{3,4\\};(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})\ \mbox{satisfies}\ \eqref{husoq922}\ \mbox{and}\ \alpha_{i}+\beta_{i}\geq 1\\}\cup$ $\displaystyle\\{j\in\\{1,2\\};(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})\ \mbox{satisfies}\ \eqref{husoq922}\ \mbox{and}\ \gamma_{j}+\delta_{j}\geq 1\\}.$ Let $T_{m,1}$ be the set of $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})$ satisfying (92) and $|S|=1$. Let $T_{m,2}$ be the set of $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})$ satisfying (92) and $|S|\geq 2$.
Let $T_{m,3}$ be the set of $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})$ satisfying (92), $|S|=1$ and one of the following: (1) $\alpha_{i}+\beta_{i}\geq 1$ for some $i\in\\{1,2\\}$; (2) $\gamma_{j}+\delta_{j}\geq 1$ for some $j\in\\{3,4\\}$; (3) $\alpha_{k}+\beta_{k}\geq 2$ for some $5\leq k\leq m$; (4) $\gamma_{l}+\delta_{l}\geq 2$ for some $5\leq l\leq m$; (5) $\alpha_{t}+\beta_{t}=1$ and $\gamma_{t}+\delta_{t}=1$ simultaneously for some $5\leq t\leq m$. Then, there exists a constant $K$ depending on $\alpha,\beta,\gamma,\delta$ but not $m$ such that $|T_{m,1}|\leq Km^{\alpha+\beta+\gamma+\delta-1}$, $|T_{m,2}|\leq Km^{\alpha+\beta+\gamma+\delta-2}$ and $|T_{m,3}|\leq K\cdot m^{\alpha+\beta+\gamma+\delta-2}.$ Proof of Lemma 6.21. If $\alpha=\beta=\gamma=\delta=0$, then $T_{m,1}=T_{m,2}=T_{m,3}=\emptyset$, and the conclusion obviously holds. So we assume next that at least one of the four numbers is positive. Note that the bounds in the conclusions are $Km^{\alpha+\beta+\gamma+\delta-1}$ and $Km^{\alpha+\beta+\gamma+\delta-2}$. So, in case one of $\\{\alpha,\beta,\gamma,\delta\\}$ is zero, say, $\alpha=0$, any discussions below related to $\alpha$ will disappear by convention. In the following we will prove the three conclusions one by one. The bound for $T_{m,1}$. If $|S|=1$, then one of the following four situations must occur: (a) $\alpha_{i}\geq 1$ for some $i\in\\{3,4\\}$; (b) $\beta_{i}\geq 1$ for some $i\in\\{3,4\\}$; (c) $\gamma_{j}\geq 1$ for some $j\in\\{1,2\\}$; (d) $\delta_{j}\geq 1$ for some $j\in\\{1,2\\}$. If $\alpha_{i}\geq 1$, by Lemma 6.18(ii), the total number of non-negative integer solutions $\boldsymbol{\alpha}$ of $\alpha_{1}+\cdots+\alpha_{m}=\alpha$ with $\alpha_{i}\geq 1$ is no more than $K_{1}m^{\alpha-1}$. Here and later $K_{1}$ represents a constant depending on $\alpha,\beta,\gamma,\delta$ but not $m$, and could be different from line to line. By Lemma 6.18(i), the total number of points $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})$ satisfying (92) and (a) is controlled by $\displaystyle K_{1}m^{\alpha-1}\cdot K_{1}m^{\beta}\cdot K_{1}m^{\gamma}\cdot K_{1}m^{\delta}=(K_{1})^{4}\cdot m^{\alpha+\beta+\gamma+\delta-1}.$ Likewise the same bound holds if “(a)” is replaced by “(b)”, “(c)” or “(d)”. This implies $|T_{m,1}|$ is dominated by the sum of the four bounds, that is, $4(K_{1})^{4}\cdot m^{\alpha+\beta+\gamma+\delta-1}.$ The bound for $T_{m,2}$. The assumption $|S|\geq 2$ implies that one of the following three statements must be true: (e) $\alpha_{3}+\beta_{3}\geq 1$ and $\alpha_{4}+\beta_{4}\geq 1$; (f) $\gamma_{1}+\delta_{1}\geq 1$ and $\gamma_{2}+\delta_{2}\geq 1$; (g) $\alpha_{i}+\beta_{i}\geq 1$ for some $i\in\\{3,4\\}$ and $\gamma_{j}+\delta_{j}\geq 1$ for some $j\in\\{1,2\\}$. Under (e), one of the next four cases has to be true: (e1) $\alpha_{3}\geq 1$ and $\alpha_{4}\geq 1$; (e2) $\alpha_{3}\geq 1$ and $\beta_{4}\geq 1$; (e3) $\alpha_{4}\geq 1$ and $\beta_{3}\geq 1$; (e4) $\beta_{3}\geq 1$ and $\beta_{4}\geq 1$.
By Lemma 6.18(i) and (ii), the total number of points $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})$ satisfying (92) and (e1) is no more than $\displaystyle K_{1}m^{\alpha-2}\cdot K_{1}m^{\beta}\cdot K_{1}m^{\gamma}\cdot K_{1}m^{\delta}=(K_{1})^{4}\cdot m^{\alpha+\beta+\gamma+\delta-2}.$ In the same spirit, the total number of points $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})$ satisfying (92) and (e2) is controlled by $\displaystyle K_{1}m^{\alpha-1}\cdot K_{1}m^{\beta-1}\cdot K_{1}m^{\gamma}\cdot K_{1}m^{\delta}=(K_{1})^{4}\cdot m^{\alpha+\beta+\gamma+\delta-2}.$ By similar discussions, the same conclusion above also holds if “(e2)” is replaced by “(e3)” and “(e4)”, respectively, and “$(K_{1})^{4}$” is replaced by another polynomial of $K_{1}$. In conclusion, by summing the four bounds corresponding to $(e1)-(e4)$, we see that the total number of points $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})$ satisfying (92) and (e) is no more than $K_{1}\cdot m^{\alpha+\beta+\gamma+\delta-2}.$ Similarly, the same conclusion holds if “(e)” is replaced by “(f)” and “(g)”, respectively. The desired conclusion is then yielded by adding up the three bounds corresponding to (e), (f) and (g). The bound for $T_{m,3}$. Let $A_{1}$ be the set of $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})$ satisfying (92), $|S|=1$ and (1). Similarly, we define $A_{2},A_{3},A_{4},A_{5}$ with “(1)” replaced by “(2)”, “(3)”, “(4)”, “(5)”, respectively. It suffices to show $\displaystyle|A_{i}|\leq C_{i}\cdot m^{\alpha+\beta+\gamma+\delta-2}$ (95) for $i=1,2,3,4,5$, where $C_{i}$ is a constant depending on $\alpha,\beta,\gamma,\delta$ but not $m$. We first look into $A_{1}$ and $A_{2}$. Assuming $|S|=1$ and (1), there are two possibilities: $\alpha_{i}+\beta_{i}\geq 1$ and $\alpha_{j}+\beta_{j}\geq 1$ for a pair $(i,j)$ with $i,j\in\\{1,2,3,4\\}$ and $i\neq j$; $\alpha_{i}+\beta_{i}\geq 1$ and $\gamma_{j}+\delta_{j}\geq 1$ for a pair $(i,j)$ with $i,j\in\\{1,2\\}$. Reviewing the analysis of cases (e) and (g) above, together with the conclusion that the total number of points $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})$ satisfying (92) and (e) is no more than $K_{1}\cdot m^{\alpha+\beta+\gamma+\delta-2}$, we know (95) is true for $i=1$. By symmetry, (95) is also true for $i=2$. Now we work on $A_{3}$ and $A_{4}$. For fixed $k\in\\{5,\cdots,m\\}$, the assumption $\alpha_{k}+\beta_{k}\geq 2$ implies that either $\alpha_{k}\geq 2$, $\beta_{k}\geq 2$ or the third possibility that $\alpha_{k}\geq 1$ and $\beta_{k}\geq 1$. On the other hand, the condition $|S|=1$ implies that either $\alpha_{i}+\beta_{i}\geq 1$ for some $i\in\\{3,4\\}$ or $\gamma_{j}+\delta_{j}\geq 1$ for some $j\in\\{1,2\\}$. In total we see $3\times 2=6$ scenarios. The only scenario we have not encountered so far comes from the combination $\alpha_{k}\geq 2$ and $\alpha_{i}+\beta_{i}\geq 1$ for some $i\in\\{3,4\\}$. In this case, either $\alpha_{k}\geq 2$ and $\alpha_{i}\geq 1$ for some $i\in\\{3,4\\}$ or the second possibility $\alpha_{k}\geq 2$ and $\beta_{i}\geq 1$ for some $i\in\\{3,4\\}$.
By Lemma 6.18(ii) and (iii), the total number of points $(\boldsymbol{\alpha},\boldsymbol{\beta})$ satisfying (92) and this combination is bounded by $K_{1}\big{(}2m^{\alpha-3}\cdot m^{\beta}+m^{\alpha-2}\cdot(2m^{\beta-1})\big{)}=(4K_{1})m^{\alpha+\beta-3}.$ By using this and the earlier arguments, we have the same bound for any of the six scenarios. Adding them up and noting $k$ has $m-4$ choices, we obtain (95) for $i=3.$ Similarly, (95) also holds for $i=4.$ Finally we study $A_{5}$. Fix $5\leq t\leq m$. Then the assumptions that $\alpha_{t}+\beta_{t}=1$ and $\gamma_{t}+\delta_{t}=1$ leave four possibilities: $\alpha_{t}=1$ and $\gamma_{t}=1$; $\alpha_{t}=1$ and $\delta_{t}=1$; $\beta_{t}=1$ and $\gamma_{t}=1$; $\beta_{t}=1$ and $\delta_{t}=1$. As aforementioned, the condition $|S|=1$ implies that either $\alpha_{i}+\beta_{i}\geq 1$ for some $i\in\\{3,4\\}$ or $\gamma_{j}+\delta_{j}\geq 1$ for some $j\in\\{1,2\\}$. So there are eight combinations with a common feature that the values of three different members of $\\{\alpha_{i},\beta_{i},\gamma_{i},\delta_{i};\,1\leq i\leq m\\}$ are required to be at least $1$. By Lemma 6.18 and the assumption that $t$ has no more than $m$ choices, we know (95) is true for $i=5.$ After the verification of (95) for $i=1,2,3,4,5$, we obtain the bound for $T_{m,3}.$ Observe that the three upper bounds for $T_{m,1}$, $T_{m,2}$ and $T_{m,3}$ involve polynomials of $K_{1}$. We choose $K$ to be the maximum of the three polynomials. The whole proof is completed. $\square$ ###### LEMMA 6.22 Assume $m\geq 5$ and $\alpha,\beta,\gamma,\delta$ are non-negative integers. Let $S$ be the set of $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})$ satisfying (92) and one of the following holds: (i) $\alpha_{i}+\beta_{i}\geq 1$ for some $i\in\\{1,2,3\\};$ (ii) $\gamma_{i}+\delta_{i}\geq 1$ for some $i\in\\{1,2,3\\};$ (iii) $\alpha_{i}+\beta_{i}\geq 2$ or $\gamma_{i}+\delta_{i}\geq 2$ for some $4\leq i\leq m$; (iv) $\alpha_{i}+\beta_{i}\geq 1$ and $\gamma_{i}+\delta_{i}\geq 1$ simultaneously for some $1\leq i\leq m.$ Then $|S|\leq Km^{\alpha+\beta+\gamma+\delta-1}$ for some constant $K$ depending on $\alpha,\beta,\gamma,\delta$ but not $m$. Proof of Lemma 6.22. The proof is very similar to that of Lemma 6.21 and is even easier. We omit the details. $\square$ #### 6.2.4 Evaluation of Covariances between Polynomials of Gaussian Random Variables With the previous preparation, we are now ready to study covariances between polynomials of Gaussian random variables. The basic setting is that $\displaystyle\mbox{Let}\ m\geq 5\ \mbox{and}\ \\{(X_{1j},X_{2j},X_{3j},X_{4j})^{T}\in\mathbb{R}^{4};\,1\leq j\leq m\\}\ \mbox{be i.i.d. random vectors}$ $\displaystyle\mbox{with distribution}\ N_{4}(\mathbb{0},\mathbb{R}),\ \mbox{where}\ \mathbb{R}=(r_{ij})_{4\times 4}\ \mbox{and}\ r_{ii}=1\ \mbox{for each}\ i.\ \ \ $ (96) In this section, $K$ and $K_{1}$ always represent constants depending on $\alpha,\beta,\gamma,\delta$ but not $m$ or $\mathbb{R}$, and can be different from line to line. Review the notation $\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta}$ before the statement of Lemma 6.18. ###### LEMMA 6.23 Assume (6.2.4) holds. Let $\\{a,b,c,d,a_{i},b_{i},c_{i},d_{i};\,1\leq i\leq m\\}$ be non-negative integers with $a=\sum_{i=1}^{m}a_{i}$, $b=\sum_{i=1}^{m}b_{i}$, $c=\sum_{i=1}^{m}c_{i}$, $d=\sum_{i=1}^{m}d_{i}$. Define $U_{i}=X_{1i}^{a_{i}}X_{2i}^{b_{i}}$ and $V_{i}=X_{3i}^{c_{i}}X_{4i}^{d_{i}}$ for $1\leq i\leq m$.
If $a_{i}+b_{i}$ and $c_{i}+d_{i}$ are both even for each $1\leq i\leq m$, then $\displaystyle\Big{|}\mbox{Cov}\Big{(}\prod_{i=1}^{m}U_{i},\prod_{i=1}^{m}V_{i}\Big{)}\Big{|}\leq K\sum_{1\leq i<j\leq 4}r_{ij}^{2}.$ Proof of Lemma 6.23. If $a+b=0$, then $U_{i}=1$ for each $i$, and the conclusion trivially holds. If $c+d=0$, then $V_{i}=1$ for each $i$, and the conclusion is still valid. Now we assume that both $a+b\geq 1$ and $c+d\geq 1$. For any random variable $\xi$, its $L_{s}$-norm $\|\xi\|_{s}=(E|\xi|^{s})^{1/s}$ is non-decreasing in $s\geq 1$ by Hölder’s inequality. Furthermore, by the same inequality, since $X_{ij}\sim N(0,1)$ for each $i,j$, we have $E(|X_{11}|^{2a_{i}s})\leq\big{[}E(|X_{11}|^{(2a+1)s})\big{]}^{2a_{i}/(2a+1)}\leq 1+E(|X_{11}|^{(2a+1)s}).$ A similar conclusion holds for $E(|X_{11}|^{2b_{i}s})$. Consequently, $\displaystyle\|U_{i}\|_{s}^{s}=E\big{(}|X_{1i}|^{sa_{i}}\cdot|X_{2i}|^{sb_{i}}\big{)}$ $\displaystyle\leq$ $\displaystyle\big{[}E\big{(}|X_{11}|^{2a_{i}s}\big{)}\big{]}^{1/2}\cdot\big{[}E\big{(}|X_{11}|^{2b_{i}s}\big{)}\big{]}^{1/2}$ $\displaystyle\leq$ $\displaystyle\big{[}1+E(|X_{11}|^{(2a+1)s})\big{]}\cdot\big{[}1+E(|X_{11}|^{(2b+1)s})\big{]}.$ Hence, $\displaystyle\max\\{\|U_{i}\|_{s},\|V_{i}\|_{s};\,1\leq i\leq m\\}\leq K_{1},$ (97) where $K_{1}$ depends on $a,b,c,d$ and $s\geq 1$. By definition, $a=\sum_{i=1}^{m}a_{i}$, hence $|\\{1\leq i\leq m;\,a_{i}\geq 1\\}|\leq a$. The same is also true for the analogue of $b$, $c$ and $d$, respectively. Set $\Psi=\\{1\leq i\leq m;\,a_{i}+b_{i}\geq 1\,\mbox{or}\ c_{i}+d_{i}\geq 1\\}$. For any $i\in\Psi$, either $a_{i}\geq 1$, $b_{i}\geq 1$, $c_{i}\geq 1$ or $d_{i}\geq 1$; it follows that $|\Psi|\leq a+b+c+d$. On the other hand, if $i\notin\Psi$ then $U_{i}=V_{i}=1$, and therefore $\displaystyle\mbox{Cov}\Big{(}\prod_{i=1}^{m}U_{i},\prod_{i=1}^{m}V_{i}\Big{)}=\mbox{Cov}\Big{(}\prod_{i\in\Psi}U_{i},\prod_{i\in\Psi}V_{i}\Big{)}.$ Set $k=|\Psi|\geq 1.$ Then $C(k):=3\prod_{i=1}^{k}(1+\|U_{i}\|_{k})(1+\|V_{i}\|_{k})\leq C(l)$ with $l=a+b+c+d$ since $\|\cdot\|_{s}$ is non-decreasing in $s$. By Lemma 6.17, there exists a constant $K>0$ depending on $a,b,c,d$ but not $m$ such that $\displaystyle\Big{|}\mbox{Cov}\Big{(}\prod_{i\in\Psi}U_{i},\prod_{i\in\Psi}V_{i}\Big{)}\Big{|}\leq K\cdot\sum_{i\in\Psi}\big{|}\mbox{Cov}(U_{i},V_{i})\big{|}\leq K\cdot|\Psi|\cdot\max_{1\leq i\leq m}|\mbox{Cov}(U_{i},V_{i})|.$ Using the fact that $|\Psi|\leq a+b+c+d$, we see $\displaystyle\Big{|}\mbox{Cov}\Big{(}\prod_{i=1}^{m}U_{i},\prod_{i=1}^{m}V_{i}\Big{)}\Big{|}\leq(a+b+c+d)K\cdot\max_{1\leq i\leq m}|\mbox{Cov}(U_{i},V_{i})|.$ For non-negative integers $x$ and $y$ with $x+y$ being even, we know that $x$ and $y$ have to be both even or both odd. The conclusion then follows from Lemmas 6.14-6.16. $\square$ ###### LEMMA 6.24 Assume (6.2.4) holds. Define $\displaystyle A_{i}=\frac{1}{m}\sum_{j=1}^{m}X_{ij}^{2},\ i=1,2,3,4.$ (98) For given non-negative integers $\alpha,\beta,\gamma,\delta$ and $q\in\\{1,2\\}$, we have $\displaystyle\big{|}\mbox{Cov}\big{(}(X_{11}X_{21})^{2}A_{1}^{\alpha}A_{2}^{\beta},(X_{3q}X_{4q})^{2}A_{3}^{\gamma}A_{4}^{\delta}\big{)}\big{|}\leq K\sum_{1\leq i<j\leq 4}r_{ij}^{2}.$ (99) Proof of Lemma 6.24.
Write $\displaystyle(mA_{1})^{\alpha}=\Big{(}\sum_{j=1}^{m}X_{1j}^{2}\Big{)}^{\alpha}=\sum\frac{\alpha!}{\alpha_{1}!\cdots\alpha_{m}!}X_{11}^{2\alpha_{1}}\cdots X_{1m}^{2\alpha_{m}};$ (100) $\displaystyle(mA_{2})^{\beta}=\Big{(}\sum_{j=1}^{m}X_{2j}^{2}\Big{)}^{\beta}=\sum\frac{\beta!}{\beta_{1}!\cdots\beta_{m}!}X_{21}^{2\beta_{1}}\cdots X_{2m}^{2\beta_{m}};$ (101) $\displaystyle(mA_{3})^{\gamma}=\Big{(}\sum_{j=1}^{m}X_{3j}^{2}\Big{)}^{\gamma}=\sum\frac{\gamma!}{\gamma_{1}!\cdots\gamma_{m}!}X_{31}^{2\gamma_{1}}\cdots X_{3m}^{2\gamma_{m}};$ (102) $\displaystyle(mA_{4})^{\delta}=\Big{(}\sum_{j=1}^{m}X_{4j}^{2}\Big{)}^{\delta}=\sum\frac{\delta!}{\delta_{1}!\cdots\delta_{m}!}X_{41}^{2\delta_{1}}\cdots X_{4m}^{2\delta_{m}},$ (103) where $\alpha_{i}$, $\beta_{i}$, $\gamma_{i}$ and $\delta_{i}$ are non- negative integers for each $i$ satisfying $\displaystyle\sum_{i=1}^{m}\alpha_{i}=\alpha,\ \ \sum_{i=1}^{m}\beta_{i}=\beta,\ \ \sum_{i=1}^{m}\gamma_{i}=\gamma,\ \ \sum_{i=1}^{m}\delta_{i}=\delta,$ respectively. This restriction is exactly the same as (92). To avoid repetition in the future, once this restriction is used, we will always quote (92). First, we consider the case $q=1$. Notice $\displaystyle m^{\alpha+\beta+\gamma+\delta}\cdot\mbox{Cov}\big{(}(X_{11}X_{21})^{2}A_{1}^{\alpha}A_{2}^{\beta},(X_{3q}X_{4q})^{2}A_{3}^{\gamma}A_{4}^{\delta}\big{)}$ (104) is a linear combination of $N_{1}$ terms of the form $\displaystyle\mbox{Cov}\Big{(}\prod_{i=1}^{m}U_{i},\prod_{i=1}^{m}V_{i}\Big{)}$ with positive coefficients no more than $\alpha!\beta!\gamma!\delta!$, where $\displaystyle U_{1}$ $\displaystyle=$ $\displaystyle X_{11}^{2\alpha_{1}+2}X_{21}^{2\beta_{1}+2}\ \ \mbox{and}\ \ U_{i}=X_{1i}^{2\alpha_{i}}X_{2i}^{2\beta_{i}};$ $\displaystyle V_{1}$ $\displaystyle=$ $\displaystyle X_{31}^{2\gamma_{1}+2}X_{41}^{2\delta_{1}+2}\ \ \mbox{and}\ \ V_{i}=X_{3i}^{2\gamma_{i}}X_{4i}^{2\delta_{i}}$ (105) for $2\leq i\leq m$. Here $N_{1}$ is the total number of non-negative integer solutions of $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})$ satisfying the set of equations from (92). By Lemma 6.19(i), $\displaystyle N_{1}\leq K_{1}\cdot m^{\alpha+\beta+\gamma+\delta}.$ (106) $\displaystyle m^{\alpha+\beta+\gamma+\delta}\cdot\big{|}\mbox{Cov}\big{(}(X_{11}X_{21})^{2}A_{1}^{\alpha}A_{2}^{\beta},(X_{3q}X_{4q})^{2}A_{3}^{\gamma}A_{4}^{\delta}\big{)}\big{|}$ (107) $\displaystyle\leq$ $\displaystyle(K_{1}\alpha!\beta!\gamma!\delta!)\cdot m^{\alpha+\beta+\gamma+\delta}\cdot\max\Big{|}\mbox{Cov}\Big{(}\prod_{i=1}^{m}U_{i},\prod_{i=1}^{m}V_{i}\Big{)}\Big{|},$ where the maximum is taken over all $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})$ satisfying (92). 
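As a quick aside (not needed for the argument), the multinomial expansions (100)-(103) and the term count behind (106) are easy to check numerically for small parameters. The sketch below is a hedged illustration only; it assumes Python with sympy is available, and the values $m=5$ and $\alpha=3$ are arbitrary illustrative choices rather than anything taken from the proof.

```python
# Sanity check (illustration only): the expansion of (x_1^2 + ... + x_m^2)^alpha has
# exactly C(m+alpha-1, alpha) monomials, i.e. the stars-and-bars count used in (91)
# and (106), and each monomial carries the multinomial coefficient appearing in (100).
from math import comb, factorial
import sympy as sp

m, alpha = 5, 3                               # illustrative choices, not from the paper
xs = sp.symbols(f"x1:{m + 1}")                # x1, ..., xm
poly = sp.Poly(sum(x**2 for x in xs)**alpha, *xs)

terms = poly.terms()                          # [(exponent tuple, coefficient), ...]
assert len(terms) == comb(m + alpha - 1, alpha)

for exponents, coeff in terms:
    a = [e // 2 for e in exponents]           # exponents of x_i^2 are 2*alpha_i
    multinomial = factorial(alpha)
    for ai in a:
        multinomial //= factorial(ai)
    assert sum(a) == alpha and coeff == multinomial

print(len(terms), "monomials =", comb(m + alpha - 1, alpha))
```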
Note that $\displaystyle(2\alpha_{1}+2)+\sum_{i=2}^{m}2\alpha_{i}=2\alpha+2;~{}~{}(2\beta_{1}+2)+\sum_{i=2}^{m}2\beta_{i}=2\beta+2;$ $\displaystyle(2\gamma_{1}+2)+\sum_{i=2}^{m}2\gamma_{i}=2\gamma+2;~{}~{}(2\delta_{1}+2)+\sum_{i=2}^{m}2\delta_{i}=2\delta+2.$ (108) By Lemma 6.23, $\displaystyle\Big{|}\mbox{Cov}\Big{(}\prod_{i=1}^{m}U_{i},\prod_{i=1}^{m}V_{i}\Big{)}\Big{|}\leq K_{1}\sum_{1\leq i<j\leq 4}r_{ij}^{2}.$ (109) Combining this with (107), we arrive at $\displaystyle m^{\alpha+\beta+\gamma+\delta}\cdot\big{|}\mbox{Cov}\big{(}(X_{11}X_{21})^{2}A_{1}^{\alpha}A_{2}^{\beta},(X_{3q}X_{4q})^{2}A_{3}^{\gamma}A_{4}^{\delta}\big{)}\big{|}$ (110) $\displaystyle\leq$ $\displaystyle K\cdot m^{\alpha+\beta+\gamma+\delta}\cdot\sum_{1\leq i<j\leq 4}r_{ij}^{2},$ where $K=K_{1}^{2}\alpha!\beta!\gamma!\delta!$. So (99) follows for the case $q=1$. For the case $q=2$, we keep $U_{i}$ in (105) unchanged but modify $V_{i}$ such that $V_{2}=X_{32}^{2\gamma_{2}+2}X_{42}^{2\delta_{2}+2}$ and $V_{i}=X_{3i}^{2\gamma_{i}}X_{4i}^{2\delta_{i}}$ for all $i=1,3,\cdots,m.$ By Lemma 6.23, (109) still holds. From (107) we then get (99) for the case $q=2$. The proof is completed. $\square$ ###### LEMMA 6.25 Assume the setting in (6.2.4). Let $A_{i}$ be defined as in (98). Given non- negative integers $\alpha,\beta,\gamma,\delta$, set $\displaystyle I_{m}(a,b)=\mbox{Cov}\big{(}(X_{11}X_{21})^{2}A_{1}^{\alpha}A_{2}^{\beta},(X_{3a}X_{4a})(X_{3b}X_{4b})A_{3}^{\gamma}A_{4}^{\delta}\big{)}$ for integers $a\geq 1$ and $b\geq 1$. Then $\displaystyle|I_{m}(a,b)|\leq\begin{cases}K\sum_{1\leq i<j\leq 4}r_{ij}^{2}&\text{if $(a,b)=(1,2)$};\\\ \frac{K}{m}\sum_{1\leq i<j\leq 4}r_{ij}^{2}&\text{if $(a,b)=(2,3)$}.\end{cases}$ Proof of Lemma 6.25. We will use the same notation as in the proof of Lemma 6.24. Review (92) and (100). We will consider the two cases for $(a,b)$ separately, that is, $(a,b)=(1,2)$ or $(a,b)=(2,3)$. Case 1: $(a,b)=(1,2)$. Set $\displaystyle U_{1}$ $\displaystyle=$ $\displaystyle X_{11}^{2(\alpha_{1}+1)}X_{21}^{2(\beta_{1}+1)}\ \ \mbox{and}\ \ U_{i}=X_{1i}^{2\alpha_{i}}X_{2i}^{2\beta_{i}}\ \mbox{for}\ 2\leq i\leq m;$ (111) $\displaystyle V_{1}$ $\displaystyle=$ $\displaystyle X_{31}^{2\gamma_{1}+1}X_{41}^{2\delta_{1}+1},\ \ V_{2}=X_{32}^{2\gamma_{2}+1}X_{42}^{2\delta_{2}+1}\ \ \mbox{and}\ \ V_{i}=X_{3i}^{2\gamma_{i}}X_{4i}^{2\delta_{i}}$ (112) for $3\leq i\leq m$. As before, let $N_{1}$ be the total number of solutions $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})$ of (92) with a bound provided in (106). Then $\displaystyle m^{\alpha+\beta+\gamma+\delta}\cdot\mbox{Cov}\big{(}(X_{11}X_{21})^{2}A_{1}^{\alpha}A_{2}^{\beta},(X_{31}X_{41})(X_{32}X_{42})A_{3}^{\gamma}A_{4}^{\delta}\big{)}$ is a linear combination of $N_{1}$ terms of the form $\mbox{Cov}(\prod_{i=1}^{m}U_{i},\prod_{i=1}^{m}V_{i})$ with positive coefficients no more than $\alpha!\beta!\gamma!\delta!$. From the restriction in (92), we know $\alpha_{i}\in\\{0,\cdots,\alpha\\}$, $\beta_{i}\in\\{0,\cdots,\beta\\}$, $\gamma_{i}\in\\{0,\cdots,\gamma\\}$ and $\delta_{i}\in\\{0,\cdots,\delta\\}$ for each $i$. By Lemma 6.23 and a discussion similar to (6.2.4), we obtain $\displaystyle\max\Big{|}\mbox{Cov}(\prod_{i=1}^{m}U_{i},\prod_{i=1}^{m}V_{i})\Big{|}\leq K_{1}\sum_{1\leq i<j\leq 4}r_{ij}^{2},$ where the maximum is taken over all $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})$ satisfying (92). By (107), we obtain the bound for $I_{m}(1,2).$ Case 2: $(a,b)=(2,3)$. 
Again, $\displaystyle m^{\alpha+\beta+\gamma+\delta}\cdot\mbox{Cov}\big{(}(X_{11}X_{21})^{2}A_{1}^{\alpha}A_{2}^{\beta},(X_{32}X_{42})(X_{33}X_{43})A_{3}^{\gamma}A_{4}^{\delta}\big{)}$ (113) is a linear combination of $N_{1}$ terms of the form $\mbox{Cov}(\prod_{i=1}^{m}U_{i},\prod_{i=1}^{m}V_{i})$ with positive coefficients no more than $\alpha!\beta!\gamma!\delta!$, where $U_{i}$ is as in (111) and $\displaystyle V_{1}^{\prime}=X_{31}^{2\gamma_{1}}X_{41}^{2\delta_{1}},\ \ V_{2}^{\prime}=X_{32}^{2\gamma_{2}+1}X_{42}^{2\delta_{2}+1},\ \ V_{3}^{\prime}=X_{33}^{2\gamma_{3}+1}X_{43}^{2\delta_{3}+1}\ \mbox{and}\ \ V_{i}^{\prime}=X_{3i}^{2\gamma_{i}}X_{4i}^{2\delta_{i}}\ \ \ \ \ \ $ (114) for $4\leq i\leq m$. Set $\displaystyle I_{1}$ $\displaystyle=$ $\displaystyle\\{1\\}\cup\\{2\leq i\leq m;\,(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})\ \mbox{satisfies}\ \eqref{husoq922}\ \mbox{and}\ \alpha_{i}+\beta_{i}\geq 1\\};$ $\displaystyle I_{2}$ $\displaystyle=$ $\displaystyle\\{2,3\\}\cup\big{\\{}i\in\\{1,4,5,\cdots,m\\};\,(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})\ \mbox{satisfies}\ \eqref{husoq922}\ \mbox{and}\ \gamma_{i}+\delta_{i}\geq 1\big{\\}}.$ Recalling (6.2.4), $\\{(U_{i},V_{i})^{T};\,1\leq i\leq m\\}$ are independent and $m\geq 5$. Reviewing the form of $U_{i}$ from (111) and $V_{i}^{\prime}$ from (114), we see $U_{i}=1$ if $\alpha_{i}+\beta_{i}=0$ for $2\leq i\leq m$ and $V_{i}^{\prime}=1$ if $\gamma_{i}+\delta_{i}=0$ for $i\in\\{1,4,5,\cdots,m\\}.$ Consequently, if $I_{1}\cap I_{2}=\emptyset$, Then $\alpha_{2}=\beta_{2}=\alpha_{3}=\beta_{3}=0$ and $\gamma_{1}=\delta_{1}=0$. This says that $U_{2}=U_{3}=V_{1}^{\prime}=1$. Also, for each $4\leq i\leq m$, the following have to be true: $\alpha_{i}+\beta_{i}=0$ if $\gamma_{i}+\delta_{i}\geq 1$ and $\gamma_{i}+\delta_{i}=0$ if $\alpha_{i}+\beta_{i}\geq 1$. These imply $\\{U_{i},V_{i};\,1\leq i\leq m\\}$ are independent, and hence $\mbox{Cov}(\prod_{i=1}^{m}U_{i},\prod_{i=1}^{m}V_{i}^{\prime})=0$. Thus, we only need to study the situation $I_{1}\cap I_{2}\neq\emptyset$. Let $N_{2}$ be defined as in Lemma 6.19. Thus, the quantity from (113) is a linear combination of $N_{2}$ terms of the form $\mbox{Cov}(\prod_{i=1}^{m}U_{i},\prod_{i=1}^{m}V_{i})$. From Lemma 6.19, $N_{2}\leq K_{1}\cdot m^{\alpha+\beta+\gamma+\delta-1}.$ By Lemma 6.23 and applying the same argument of (6.2.4) to (114), we have $\displaystyle\max\Big{|}\mbox{Cov}(\prod_{i=1}^{m}U_{i},\prod_{i=1}^{m}V_{i})\Big{|}\leq K_{1}\sum_{1\leq i<j\leq 4}r_{ij}^{2}$ where the maximum is taken over all $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})$ satisfying (92). Combining all of these we get $\displaystyle m^{\alpha+\beta+\gamma+\delta}\cdot\big{|}\mbox{Cov}\big{(}(X_{11}X_{21})^{2}A_{1}^{\alpha}A_{2}^{\beta},(X_{32}X_{42})(X_{33}X_{43})A_{3}^{\gamma}A_{4}^{\delta}\big{)}\big{|}$ $\displaystyle\leq$ $\displaystyle K_{1}^{2}(1+\alpha+\beta)\alpha!\beta!\gamma!\delta!\cdot m^{\alpha+\beta+\gamma+\delta-1}\cdot\sum_{1\leq i<j\leq 4}r_{ij}^{2}.$ This gives the bound for $I_{m}(2,3)$. $\square$ ###### LEMMA 6.26 Assume the setting in (6.2.4). Let $A_{i}$ be defined as in (98). Given non- negative integers $\alpha,\beta,\gamma,\delta$, set $\displaystyle J_{m}(a,b)=\mbox{Cov}\big{(}(X_{11}X_{21})(X_{12}X_{22})A_{1}^{\alpha}A_{2}^{\beta},(X_{3a}X_{4a})(X_{3b}X_{4b})A_{3}^{\gamma}A_{4}^{\delta}\big{)}$ (115) for integers $a\geq 1$ and $b\geq 1$. 
Then $\displaystyle|J_{m}(1,2)|\leq K\sum_{1\leq i<j\leq 4}r_{ij}^{2}.$ Proof of Lemma 6.26. Set $\displaystyle U_{1}$ $\displaystyle=$ $\displaystyle X_{11}^{2\alpha_{1}+1}X_{21}^{2\beta_{1}+1},\ \ U_{2}=X_{12}^{2\alpha_{2}+1}X_{22}^{2\beta_{2}+1}\ \mbox{and}\ \ U_{i}=X_{1i}^{2\alpha_{i}}X_{2i}^{2\beta_{i}};$ $\displaystyle V_{1}$ $\displaystyle=$ $\displaystyle X_{31}^{2\gamma_{1}+1}X_{41}^{2\delta_{1}+1},\ \ V_{2}=X_{32}^{2\gamma_{2}+1}X_{42}^{2\delta_{2}+1}\ \mbox{and}\ \ V_{i}=X_{3i}^{2\gamma_{i}}X_{4i}^{2\delta_{i}}$ (116) for $3\leq i\leq m$. Let $N_{1}$ be the total number of solutions $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})$ satisfying (92). From (106), we see $N_{1}\leq K_{1}\cdot m^{\alpha+\beta+\gamma+\delta}$. Review the formulas between (100) and (103). We have $\displaystyle m^{\alpha+\beta+\gamma+\delta}\cdot\mbox{Cov}\big{(}(X_{11}X_{21})(X_{12}X_{22})A_{1}^{\alpha}A_{2}^{\beta},(X_{31}X_{41})(X_{32}X_{42})A_{3}^{\gamma}A_{4}^{\delta}\big{)}$ (117) is a linear combination of $N_{1}$ terms of the form $\mbox{Cov}(\prod_{i=1}^{m}U_{i},\prod_{i=1}^{m}V_{i})$ with positive coefficients no more than $\alpha!\beta!\gamma!\delta!$. Recall the notation $J_{m}(a,b)$ and (117). We then have $\displaystyle m^{\alpha+\beta+\gamma+\delta}\cdot\big{|}J_{m}(1,2)\big{|}\leq N_{1}\cdot(\alpha!\beta!\gamma!\delta!)\cdot\max\Big{|}\mbox{Cov}\Big{(}\prod_{i=1}^{m}U_{i},\prod_{i=1}^{m}V_{i}\Big{)}\Big{|},$ where the maximum is taken over all $\\{\alpha_{i},\beta_{i},\gamma_{i},\delta_{i};\,1\leq i\leq m\\}$ satisfying (92). By (106), we have $\displaystyle\big{|}J_{m}(1,2)\big{|}\leq K_{1}\cdot\max\Big{|}\mbox{Cov}\Big{(}\prod_{i=1}^{m}U_{i},\prod_{i=1}^{m}V_{i}\Big{)}\Big{|},$ (118) where the maximum is taken over all $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})$ satisfying (92). By Lemma 6.23 and applying the same argument of (6.2.4) to (116), we have $\displaystyle\max\Big{|}\mbox{Cov}\Big{(}\prod_{i=1}^{m}U_{i},\prod_{i=1}^{m}V_{i}\Big{)}\Big{|}\leq K_{1}\sum_{1\leq i<j\leq 4}r_{ij}^{2},$ (119) where the maximum is taken over all $\\{\alpha_{i},\beta_{i},\gamma_{i},\delta_{i};\,1\leq i\leq m\\}$ satisfying (92). This and (118) conclude $\displaystyle\big{|}J_{m}(1,2)\big{|}\leq K_{1}^{2}\sum_{1\leq i<j\leq 4}r_{ij}^{2}.$ This proves the inequality for $(a,b)=(1,2).$ $\square$ ###### LEMMA 6.27 Assume the setting in (6.2.4). Given non-negative integers $\alpha_{i},\beta_{i},\gamma_{j},\delta_{j}$ for $i=3,4$ and $j=1,2$, set $\displaystyle U_{1}$ $\displaystyle=$ $\displaystyle X_{11}X_{21},\ \ U_{2}=X_{12}X_{22}\ \mbox{and}\ \ U_{i}=X_{1i}^{2\alpha_{i}}X_{2i}^{2\beta_{i}},\ i\in\\{3,4\\};$ $\displaystyle V_{3}$ $\displaystyle=$ $\displaystyle X_{33}X_{43},\ \ V_{4}=X_{34}X_{44}\ \mbox{and}\ \ V_{i}=X_{3i}^{2\gamma_{i}}X_{4i}^{2\delta_{i}},\ i\in\\{1,2\\}.$ (120) Define $\displaystyle S$ $\displaystyle=$ $\displaystyle\\{i\in\\{3,4\\};(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})\ \mbox{satisfies}\ \eqref{husoq922}\ \mbox{and}\ \alpha_{i}+\beta_{i}\geq 1\\}\cup$ $\displaystyle\\{i\in\\{1,2\\};(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})\ \mbox{satisfies}\ \eqref{husoq922}\ \mbox{and}\ \gamma_{i}+\delta_{i}\geq 1\\}.$ Then the following hold.
(i) If $S=\\{1\\}$ and $\gamma_{1}+\delta_{1}=1$, then $\displaystyle\mbox{Cov}\Big{(}\prod_{i=1}^{4}U_{i},\prod_{i=1}^{4}V_{i}\Big{)}=\begin{cases}2(r_{12}r_{23}r_{31})r_{34}^{2},&\text{if $\gamma_{1}=1$ and $\delta_{1}=0$};\\\ 2(r_{12}r_{24}r_{41})r_{34}^{2},&\text{if $\gamma_{1}=0$ and $\delta_{1}=1$}.\end{cases}$ (ii) If $S=\\{2\\}$ and $\gamma_{2}+\delta_{2}=1$, then $\displaystyle\mbox{Cov}\Big{(}\prod_{i=1}^{4}U_{i},\prod_{i=1}^{4}V_{i}\Big{)}=\begin{cases}2(r_{12}r_{23}r_{31})r_{34}^{2},&\text{if $\gamma_{2}=1$ and $\delta_{2}=0$};\\\ 2(r_{12}r_{24}r_{41})r_{34}^{2},&\text{if $\gamma_{2}=0$ and $\delta_{2}=1$}.\end{cases}$ (iii) If $S=\\{3\\}$ and $\alpha_{3}+\beta_{3}=1$, then $\displaystyle\mbox{Cov}\Big{(}\prod_{i=1}^{4}U_{i},\prod_{i=1}^{4}V_{i}\Big{)}=\begin{cases}2(r_{13}r_{34}r_{41})r_{12}^{2},&\text{if $\alpha_{3}=1$ and $\beta_{3}=0$};\\\ 2(r_{23}r_{34}r_{42})r_{12}^{2},&\text{if $\alpha_{3}=0$ and $\beta_{3}=1$}.\end{cases}$ (iv) If $S=\\{4\\}$ and $\alpha_{4}+\beta_{4}=1$, then $\displaystyle\mbox{Cov}\Big{(}\prod_{i=1}^{4}U_{i},\prod_{i=1}^{4}V_{i}\Big{)}=\begin{cases}2(r_{13}r_{34}r_{41})r_{12}^{2},&\text{if $\alpha_{4}=1$ and $\beta_{4}=0$};\\\ 2(r_{23}r_{34}r_{42})r_{12}^{2},&\text{if $\alpha_{4}=0$ and $\beta_{4}=1$}.\end{cases}$ Proof of Lemma 6.27. By assumption, $\\{(X_{1j},X_{2j},X_{3j},X_{4j})^{T}\in\mathbb{R}^{4};\,1\leq j\leq m\\}$ are i.i.d. random vectors with distribution $N_{4}(\mathbb{0},\mathbb{R})$ where $\mathbb{R}=(r_{ij})_{4\times 4}$ and $r_{ii}=1$ for each $i.$ Thus, $\displaystyle EU_{1}=EU_{2}=r_{12}\ \ \mbox{and}\ \ EV_{3}=EV_{4}=r_{34}.$ (121) (i) Under the case $S=\\{1\\}$ and $\gamma_{1}+\delta_{1}=1$, we know that $\alpha_{3}=\beta_{3}=\alpha_{4}=\beta_{4}=\gamma_{2}=\delta_{2}=0$ and that $(\gamma_{1},\delta_{1})$ is equal to $(1,0)$ or $(0,1)$. Hence $\displaystyle U_{1}=X_{11}X_{21},\ U_{2}=X_{12}X_{22},\ U_{3}=1,\ U_{4}=1;$ $\displaystyle V_{1}=X_{31}^{2}\,\mbox{or}\,X_{41}^{2},\ V_{2}=1,\ V_{3}=X_{33}X_{43},\ V_{4}=X_{34}X_{44}.$ (122) This implies that $\\{(U_{1},V_{1})^{T},U_{i},V_{i};\,2\leq i\leq 4\\}$ are independent. By (121) and by Lemma 6.4, $\displaystyle\mbox{Cov}(U_{1},X_{31}^{2})=E(X_{11}X_{21}X_{31}^{2})-r_{12}=2r_{13}r_{23};$ $\displaystyle\mbox{Cov}(U_{1},X_{41}^{2})=E(X_{11}X_{21}X_{41}^{2})-r_{12}=2r_{14}r_{24}.$ Notice $\displaystyle\mbox{Cov}(\xi_{1}\eta_{1},\xi_{2}\eta_{2})=E\xi_{1}\cdot E\xi_{2}\cdot\mbox{Cov}(\eta_{1},\eta_{2})$ (123) if $\xi_{1}$ and $\xi_{2}$ are independent and $\\{\xi_{1},\xi_{2}\\}$ are independent of $\\{\eta_{1},\eta_{2}\\}$. Note $V_{1}=X_{31}^{2}$ if $(\gamma_{1},\delta_{1})=(1,0)$ and $V_{1}=X_{41}^{2}$ if $(\gamma_{1},\delta_{1})=(0,1)$. Then $\displaystyle\mbox{Cov}\Big{(}\prod_{i=1}^{4}U_{i},\prod_{i=1}^{4}V_{i}\Big{)}$ $\displaystyle=$ $\displaystyle\mbox{Cov}(U_{1},V_{1})\cdot\prod_{i=2,3,4}\big{(}EU_{i}\cdot EV_{i}\big{)}$ $\displaystyle=$ $\displaystyle\begin{cases}2(r_{12}r_{23}r_{31})r_{34}^{2},&\text{if $\gamma_{1}=1$ and $\delta_{1}=0$};\\\ 2(r_{12}r_{24}r_{41})r_{34}^{2},&\text{if $\gamma_{1}=0$ and $\delta_{1}=1$}.\end{cases}$ (ii) Under the case $S=\\{2\\}$ and $\gamma_{2}+\delta_{2}=1$, we know that $\alpha_{3}=\beta_{3}=\alpha_{4}=\beta_{4}=\gamma_{1}=\delta_{1}=0$ and that $(\gamma_{2},\delta_{2})$ is equal to $(1,0)$ or $(0,1)$. 
Hence $\displaystyle U_{1}=X_{11}X_{21},\ U_{2}=X_{12}X_{22},\ U_{3}=1,\ U_{4}=1;$ $\displaystyle V_{1}=1,\ V_{2}=X_{32}^{2}\,\mbox{or}\,X_{42}^{2},\ V_{3}=X_{33}X_{43},\ V_{4}=X_{34}X_{44}.$ Then, $U_{1}$, $(U_{2},V_{2})^{T}$, $V_{3}$ and $V_{4}$ are independent. By (121), $\displaystyle\mbox{Cov}(U_{2},X_{32}^{2})=E(X_{12}X_{22}X_{32}^{2})-r_{12}=2r_{13}r_{23};$ $\displaystyle\mbox{Cov}(U_{2},X_{42}^{2})=E(X_{12}X_{22}X_{42}^{2})-r_{12}=2r_{14}r_{24}.$ By (123), $\displaystyle\mbox{Cov}\Big{(}\prod_{i=1}^{4}U_{i},\prod_{i=1}^{4}V_{i}\Big{)}$ $\displaystyle=$ $\displaystyle\mbox{Cov}(U_{2},V_{2})\cdot\prod_{i=1,3,4}^{m}\big{(}EU_{i}\cdot EV_{i}\big{)}$ $\displaystyle=$ $\displaystyle\begin{cases}2(r_{12}r_{23}r_{31})r_{34}^{2},&\text{if $\gamma_{2}=1$ and $\delta_{2}=0$};\\\ 2(r_{12}r_{24}r_{41})r_{34}^{2},&\text{if $\gamma_{2}=0$ and $\delta_{2}=1$}.\end{cases}$ (iii) Under the case $S=\\{3\\}$ and $\alpha_{3}+\beta_{3}=1$, we know that $\alpha_{4}=\beta_{4}=\gamma_{1}=\delta_{1}=\gamma_{2}=\delta_{2}=0$ and that $(\alpha_{3},\beta_{3})$ is equal to $(1,0)$ or $(0,1)$. Hence $\displaystyle U_{1}=X_{11}X_{21},\ U_{2}=X_{12}X_{22},\ U_{3}=X_{13}^{2}\ \mbox{or}\ X_{23}^{2},\ U_{4}=1;$ $\displaystyle V_{1}=1,\ V_{2}=1,\ V_{3}=X_{33}X_{43},\ V_{4}=X_{34}X_{44}.$ Then, $U_{1}$, $U_{2}$, $(U_{3},V_{3})^{T}$ and $V_{4}$ are independent. By (121), $\displaystyle\mbox{Cov}(X_{13}^{2},V_{3})=E(X_{33}X_{43}X_{13}^{2})-r_{34}=2r_{13}r_{14};$ $\displaystyle\mbox{Cov}(X_{23}^{2},V_{3})=E(X_{33}X_{43}X_{23}^{2})-r_{34}=2r_{23}r_{24}.$ By (123), $\displaystyle\mbox{Cov}\Big{(}\prod_{i=1}^{4}U_{i},\prod_{i=1}^{4}V_{i}\Big{)}$ $\displaystyle=$ $\displaystyle\mbox{Cov}(U_{3},V_{3})\cdot\prod_{i=1,2,4}^{m}\big{(}EU_{i}\cdot EV_{i}\big{)}$ $\displaystyle=$ $\displaystyle\begin{cases}2(r_{13}r_{34}r_{41})r_{12}^{2},&\text{if $\alpha_{3}=1$ and $\beta_{3}=0$};\\\ 2(r_{23}r_{34}r_{42})r_{12}^{2},&\text{if $\alpha_{3}=0$ and $\beta_{3}=1$}.\end{cases}$ (iv) Under the case $S=\\{4\\}$ and $\alpha_{4}+\beta_{4}=1$, we know that $\alpha_{3}=\beta_{3}=\gamma_{1}=\delta_{1}=\gamma_{2}=\delta_{2}=0$ and that $(\alpha_{4},\beta_{4})$ is equal to $(1,0)$ or $(0,1)$. Hence $\displaystyle U_{1}=X_{11}X_{21},\ U_{2}=X_{12}X_{22},\ U_{3}=1,\ U_{4}=X_{14}^{2}\ \mbox{or}\ X_{24}^{2};$ $\displaystyle V_{1}=1,\ V_{2}=1,\ V_{3}=X_{33}X_{43},\ V_{4}=X_{34}X_{44}.$ Then, $U_{1}$, $U_{2}$, $V_{3}$, $(U_{4},V_{4})^{T}$ are independent. By (121), $\displaystyle\mbox{Cov}(X_{14}^{2},V_{4})=E(X_{34}X_{44}X_{14}^{2})-r_{34}=2r_{13}r_{14};$ $\displaystyle\mbox{Cov}(X_{24}^{2},V_{4})=E(X_{34}X_{44}X_{24}^{2})-r_{34}=2r_{23}r_{24}.$ By (123), $\displaystyle\mbox{Cov}\Big{(}\prod_{i=1}^{4}U_{i},\prod_{i=1}^{4}V_{i}\Big{)}$ $\displaystyle=$ $\displaystyle\mbox{Cov}(U_{4},V_{4})\cdot\prod_{i=1,2,3}^{m}\big{(}EU_{i}\cdot EV_{i}\big{)}$ $\displaystyle=$ $\displaystyle\begin{cases}2(r_{13}r_{34}r_{41})r_{12}^{2},&\text{if $\alpha_{4}=1$ and $\beta_{4}=0$};\\\ 2(r_{23}r_{34}r_{42})r_{12}^{2},&\text{if $\alpha_{4}=0$ and $\beta_{4}=1$}.\end{cases}$ The verification is finished. $\square$ Let $\alpha_{1},\cdots,\alpha_{m},\beta_{1},\cdots,\beta_{m},\gamma_{1},\cdots,\gamma_{m},\delta_{1},\cdots,\delta_{m}$ be non-negative integers, review the notation $\boldsymbol{\alpha}=(\alpha_{1},\cdots,\alpha_{m})$, $\boldsymbol{\beta}=(\beta_{1},\cdots,\beta_{m})$, $\boldsymbol{\gamma}=(\gamma_{1},\cdots,\gamma_{m})$ and $\boldsymbol{\delta}=(\delta_{1},\cdots,\delta_{m})$. 
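Before turning to Lemma 6.28, it may help to record a small numerical sanity check of the Gaussian moment identities that drive Lemma 6.27 (via Lemma 6.4): for $(X_{1},X_{2},X_{3},X_{4})\sim N_{4}(\mathbb{0},\mathbb{R})$ with unit variances, $E(X_{1}X_{2}X_{3}^{2})=r_{12}+2r_{13}r_{23}$, so that $\mbox{Cov}(X_{1}X_{2},X_{3}^{2})=2r_{13}r_{23}$, and $E(X_{1}X_{2}X_{3}X_{4})=r_{12}r_{34}+r_{13}r_{24}+r_{14}r_{23}$. The sketch below is a Monte Carlo illustration only, assuming Python with numpy is available; the correlation matrix is an arbitrary positive definite choice and is not taken from the paper.

```python
# Monte Carlo check (illustration only) of the fourth-moment identities behind
# Lemma 6.27: with (X1,X2,X3,X4) ~ N_4(0,R) and r_ii = 1,
#   E[X1*X2*X3^2]  = r12 + 2*r13*r23
#   E[X1*X2*X3*X4] = r12*r34 + r13*r24 + r14*r23
import numpy as np

rng = np.random.default_rng(0)
R = np.array([[1.0, 0.3, 0.2, 0.1],           # arbitrary illustrative correlation matrix
              [0.3, 1.0, 0.4, 0.2],
              [0.2, 0.4, 1.0, 0.3],
              [0.1, 0.2, 0.3, 1.0]])
X1, X2, X3, X4 = rng.multivariate_normal(np.zeros(4), R, size=1_000_000).T

print("E[X1 X2 X3^2]:  mc =", np.mean(X1 * X2 * X3**2),
      " theory =", R[0, 1] + 2 * R[0, 2] * R[1, 2])
print("E[X1 X2 X3 X4]: mc =", np.mean(X1 * X2 * X3 * X4),
      " theory =", R[0, 1] * R[2, 3] + R[0, 2] * R[1, 3] + R[0, 3] * R[1, 2])
```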
For non-negative integers $\alpha,\beta,\gamma,\delta$ and $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})$ satisfying (92), define $\displaystyle C(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})=\frac{\alpha!}{\alpha_{1}!\cdots\alpha_{m}!}\cdot\frac{\beta!}{\beta_{1}!\cdots\beta_{m}!}\cdot\frac{\gamma!}{\gamma_{1}!\cdots\gamma_{m}!}\cdot\frac{\delta!}{\delta_{1}!\cdots\delta_{m}!}.$ (124) ###### LEMMA 6.28 Assume the setting in (6.2.4). Define $\\{U_{i},V_{i};\,1\leq i\leq m\\}$ such that $\displaystyle U_{1}$ $\displaystyle=$ $\displaystyle X_{11}^{2\alpha_{1}+1}X_{21}^{2\beta_{1}+1},\ \ U_{2}=X_{12}^{2\alpha_{2}+1}X_{22}^{2\beta_{2}+1}\ \mbox{and}\ \ U_{i}=X_{1i}^{2\alpha_{i}}X_{2i}^{2\beta_{i}};$ $\displaystyle V_{3}$ $\displaystyle=$ $\displaystyle X_{33}^{2\gamma_{3}+1}X_{43}^{2\delta_{3}+1},\ \ V_{4}=X_{34}^{2\gamma_{4}+1}X_{44}^{2\delta_{4}+1}\ \mbox{and}\ \ V_{j}=X_{3j}^{2\gamma_{j}}X_{4j}^{2\delta_{j}}$ for $3\leq i\leq m$ and $j\in\\{1,2,\cdots,m\\}\backslash\\{3,4\\}$, where $\alpha_{i},\beta_{i},\gamma_{i},\delta_{i}$ satisfies (92). Define $\displaystyle S$ $\displaystyle=$ $\displaystyle\\{i\in\\{3,4\\};(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})\ \mbox{satisfies}\ \eqref{husoq922}\ \mbox{with}\ \alpha_{i}+\beta_{i}\geq 1\\}\cup$ $\displaystyle\\{j\in\\{1,2\\};(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})\ \mbox{satisfies}\ \eqref{husoq922}\ \mbox{with}\ \gamma_{j}+\delta_{j}\geq 1\\}.$ Obviously, $S\subset\\{1,2,3,4\\}$. Then $\displaystyle\sum_{(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta}):|S|=1}C(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})\cdot\mbox{Cov}\Big{(}\prod_{i=1}^{m}U_{i},\prod_{i=1}^{m}V_{i}\Big{)}$ $\displaystyle=$ $\displaystyle\rho_{m,1}\sum_{1\leq i<j\leq 4}r_{ij}^{2}+\rho_{m,2}\cdot\big{[}(r_{12}r_{23}r_{31})r_{34}^{2}+(r_{12}r_{24}r_{41})r_{34}^{2}\big{]}$ $\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}+\rho_{m,3}\cdot\big{[}(r_{13}r_{34}r_{41})r_{12}^{2}+(r_{23}r_{34}r_{42})r_{12}^{2}\big{]}$ where $\max\\{m^{2}|\rho_{m,1}|,m|\rho_{m,2}|,m|\rho_{m,3}|\\}\leq K$ and $\rho_{m,2}$ and $\rho_{m,3}$ do not depend on $\mathbb{R}.$ Proof of Lemma 6.28. First, $|S|=1$ implies that $S=\\{1\\}$, $S=\\{2\\}$, $S=\\{3\\}$ or $S=\\{4\\}.$ We will first examine the case $S=\\{1\\}$ next. Assume now $S=\\{1\\}$. Then $\gamma_{1}+\delta_{1}\geq 1$ and $\alpha_{3}=\beta_{3}=\alpha_{4}=\beta_{4}=\gamma_{2}=\delta_{2}=0$. Hence $\displaystyle U_{3}=U_{4}=V_{2}=1\ \mbox{and}\ U_{2},V_{3},V_{4}\ \mbox{are independent themselves and they are}$ $\displaystyle\mbox{also independent of}\ \\{(U_{i},V_{i})^{T};i=1,5,6,\cdots,m\\}$ (125) by the fact $\\{(U_{i},V_{i})^{T};\,1\leq i\leq m\\}$ are independent aforementioned. By Lemma 6.23 and applying the same argument of (6.2.4) to $\\{U_{i},V_{i},\,1\leq i\leq m\\}$, we obtain $\displaystyle\max|\mbox{Cov}(U_{i},V_{i})|\leq K_{1}\sum_{1\leq i<j\leq 4}r_{ij}^{2},$ (126) where the maximum is taken over all $\\{\alpha_{i},\beta_{i},\gamma_{i},\delta_{i};\,1\leq i\leq m\\}$ satisfying (92). Next we bound $\displaystyle\sum\Big{|}\mbox{Cov}\Big{(}\prod_{i=1}^{m}U_{i},\prod_{i=1}^{m}V_{i}\Big{)}\Big{|},$ (127) where the sum runs over $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})$ satisfing(92) and $S=\\{1\\}$. When $S=\\{1\\}$, we know $\gamma_{1}+\delta_{1}\geq 1$. 
We will distinguish two cases: $\gamma_{1}+\delta_{1}\geq 2$ and $\gamma_{1}+\delta_{1}=1$. Recall the definition of $T_{m,3}$ from Lemma 6.21. For the case $\gamma_{1}+\delta_{1}=1$, we will divide it into another two cases: $T_{m,3}$ and $T_{m,3}^{c}$. The derivation of the bounds for (127) under $\gamma_{1}+\delta_{1}\geq 2$ and under $T_{m,3}$ is easier than that under $T_{m,3}^{c}$. We will take two steps next to handle the two cases. Step 1. By Lemma 6.20(ii), the total number of solutions $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})$ of (92) with $\gamma_{1}+\delta_{1}\geq 2$ is bounded by $K_{1}\cdot m^{\alpha+\beta+\gamma+\delta-2}$. This, combined with (126), implies that $\displaystyle\sum\Big{|}\mbox{Cov}\Big{(}\prod_{i=1}^{m}U_{i},\prod_{i=1}^{m}V_{i}\Big{)}\Big{|}\leq K_{1}\cdot m^{\alpha+\beta+\gamma+\delta-2}\sum_{1\leq i<j\leq 4}r_{ij}^{2},$ (128) where the sum runs over all $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})$ satisfying (92), $S=\\{1\\}$ and $\gamma_{1}+\delta_{1}\geq 2$. Review the definition of $T_{m,3}$ in Lemma 6.21. We have $|T_{m,3}|\leq K_{1}\cdot m^{\alpha+\beta+\gamma+\delta-2}.$ This together with (126) yields $\displaystyle\sum\Big{|}\mbox{Cov}\Big{(}\prod_{i=1}^{m}U_{i},\prod_{i=1}^{m}V_{i}\Big{)}\Big{|}\leq K_{1}\cdot m^{\alpha+\beta+\gamma+\delta-2}\sum_{1\leq i<j\leq 4}r_{ij}^{2},$ (129) where the sum runs over all $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})$ satisfying (92), $S=\\{1\\}$ and $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})\in T_{m,3}.$ Step 2. We now estimate (127) when the index $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})$ satisfies (92), the event $S=\\{1\\}$ holds and $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})\notin T_{m,3}$. Reviewing the definition of $T_{m,3}$ and the expressions of $U_{i}$ and $V_{i}$, under the new conditions $U_{i}$ and $V_{i}$ take a much simpler form: $\displaystyle U_{1}=X_{11}X_{21},\ U_{2}=X_{12}X_{22},\ U_{3}=1,\ U_{4}=1;$ $\displaystyle V_{1}=X_{31}^{2}\,\mbox{or}\,X_{41}^{2},\ V_{2}=1,\ V_{3}=X_{33}X_{43},\ V_{4}=X_{34}X_{44}.$ (130) Furthermore, if $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})\notin T_{m,3}$, then $\alpha_{k}+\beta_{k}\leq 1$ for all $5\leq k\leq m$, $\gamma_{l}+\delta_{l}\leq 1$ for all $5\leq l\leq m$ and the two identities $\alpha_{t}+\beta_{t}=1$ and $\gamma_{t}+\delta_{t}=1$ cannot occur at the same time for any $5\leq t\leq m$. The key observation is that, if $\alpha_{t}+\beta_{t}=1$ then $U_{t}\sim\chi^{2}(1)$ and $V_{t}=1$. Similarly, if $\gamma_{t}+\delta_{t}=1$ then $U_{t}=1$ and $V_{t}\sim\chi^{2}(1)$. Therefore, the $2m-8$ random variables in $\\{U_{i},V_{i};\,5\leq i\leq m\\}$ are independent random variables, each with mean $1$. As used earlier, $\\{(U_{i},V_{i})^{T};\,1\leq i\leq m\\}$ are independent. This and the special structures in (6.2.4) imply that the $2m-1$ random quantities in $\\{(U_{1},V_{1})^{T},U_{i},V_{i};\,2\leq i\leq m\\}$ are independent.
By Lemma 6.27(i) and (123), $\displaystyle\mbox{Cov}\Big{(}\prod_{i=1}^{m}U_{i},\prod_{i=1}^{m}V_{i}\Big{)}$ $\displaystyle=$ $\displaystyle\mbox{Cov}\Big{(}\prod_{i=1}^{4}U_{i},\prod_{i=1}^{4}V_{i}\Big{)}$ $\displaystyle=$ $\displaystyle\begin{cases}2(r_{12}r_{23}r_{31})r_{34}^{2},&\text{if $\gamma_{1}=1$ and $\delta_{1}=0$};\\\ 2(r_{12}r_{24}r_{41})r_{34}^{2},&\text{if $\gamma_{1}=0$ and $\delta_{1}=1$}.\end{cases}$ This says that $\displaystyle\sum\mbox{Cov}\Big{(}\prod_{i=1}^{m}U_{i},\prod_{i=1}^{m}V_{i}\Big{)}=L_{m,1}\cdot\big{[}(r_{12}r_{23}r_{31})r_{34}^{2}+(r_{12}r_{24}r_{41})r_{34}^{2}\big{]},$ (131) where the sum runs over $\Gamma$, defined to be the set of $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})$ satisfying (92), $|S|=1$ with $\gamma_{1}+\delta_{1}=1$, and $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})\notin T_{m,3}$; $L_{m,1}:=2|\Gamma|$. Obviously, $L_{m,1}$ does not depend on the matrix $\mathbb{R}=(r_{ij})$. By the bound on $T_{m,1}$ in Lemma 6.21, we have $L_{m,1}\leq K_{1}m^{\alpha+\beta+\gamma+\delta-1}.$ Notice, if $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})$ satisfies (92), the event $S=\\{1\\}$ holds and $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})\notin T_{m,3}$, then any one from $\\{\alpha_{i},\beta_{i},\gamma_{i},\delta_{i};\,1\leq i\leq m\\}$ is either $1$ or $0$. According to (124), $C(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})=\alpha!\beta!\gamma!\delta!$. Then (131) becomes $\displaystyle\sum C(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})\mbox{Cov}\Big{(}\prod_{i=1}^{m}U_{i},\prod_{i=1}^{m}V_{i}\Big{)}=\alpha!\beta!\gamma!\delta!\cdot L_{m,1}\cdot\big{[}(r_{12}r_{23}r_{31})r_{34}^{2}+(r_{12}r_{24}r_{41})r_{34}^{2}\big{]}.$ Thus, combining this with (128) and (129) and using the trivial fact that $C(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})\leq\alpha!\beta!\gamma!\delta!$, we arrive at $\displaystyle\sum C(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})\cdot\mbox{Cov}\Big{(}\prod_{i=1}^{m}U_{i},\prod_{i=1}^{m}V_{i}\Big{)}=\tau_{m,1}\cdot\big{[}(r_{12}r_{23}r_{31})r_{34}^{2}+(r_{12}r_{24}r_{41})r_{34}^{2}\big{]}+\tau_{m,1}^{\prime},$ (132) where the sum runs over the set of $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})$ satisfying (92) and $S=\\{1\\}$, and where $\tau_{m,1}$ does not depend on $r_{ij}$ and $\displaystyle|\tau_{m,1}|\leq K_{1}m^{\alpha+\beta+\gamma+\delta-1}\ \ \mbox{and}\ \ |\tau_{m,1}^{\prime}|\leq K_{1}\cdot m^{\alpha+\beta+\gamma+\delta-2}\sum_{1\leq i<j\leq 4}r_{ij}^{2}.$ (133) By applying the same argument as the derivation of (132) to $S=\\{2\\}$ and using Lemma 6.27(ii), we get an analogue of (132) as the sum runs over the set of $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})$ satisfying (92) and $S=\\{2\\}$, where $\tau_{m,1}$ and $\tau_{m1}^{\prime}$ will be replaced by two corresponding symbols but still satisfy (133). 
By applying the same argument as the derivation of (132) to $S=\\{3\\}$ and using Lemma 6.27(iii), we get $\displaystyle\sum\mbox{Cov}\Big{(}\prod_{i=1}^{m}U_{i},\prod_{i=1}^{m}V_{i}\Big{)}=\tilde{\tau}_{m,1}\cdot\big{[}(r_{13}r_{34}r_{41}+r_{23}r_{34}r_{42})r_{12}^{2}\big{]}+\tilde{\tau}^{\prime}_{m,1},$ (134) where the sum runs over the set of $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})$ satisfying (92) and $S=\\{3\\}$, and the inequalities from (133) still hold as “$(\tau_{m,1},\tau_{m,1}^{\prime})$” is replaced by “$(\tilde{\tau}_{m,1},\tilde{\tau}^{\prime}_{m,1})$”. By applying the same argument as the derivation of (132) to $S=\\{4\\}$ and using Lemma 6.27(iv), we get an analogue of (134) as the sum runs over the set of $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})$ satisfying (92) and $S=\\{4\\}$, the quantities “$(\tilde{\tau}_{m,1},\tilde{\tau}_{m,1}^{\prime})$” is replaced by “$(T_{m,1},T_{m1}^{\prime\prime})$”, and $T_{m,1}$ does not depend on $r_{ij}$ and $\displaystyle|T_{m,1}|\leq K_{1}m^{\alpha+\beta+\gamma+\delta-1}\ \ \mbox{and}\ \ |T_{m,1}^{\prime}|\leq K_{1}\cdot m^{\alpha+\beta+\gamma+\delta-2}\sum_{1\leq i<j\leq 4}r_{ij}^{2}.$ The proof is completed by summing the above four upper bounds corresponding to $S=\\{1\\},\\{2\\},\\{3\\}$ and $\\{4\\}$. $\square$ ###### LEMMA 6.29 Assume the setting in (6.2.4). Let $J_{m}(a,b)$ be defined as in Lemma 6.26. Then $\displaystyle J_{m}(3,4)$ $\displaystyle=$ $\displaystyle\tau_{m,1}r_{12}^{2}r_{34}^{2}+\tau_{m,2}\sum_{1\leq i<j\leq 4}r_{ij}^{2}+\tau_{m,3}\cdot\big{[}(r_{12}r_{23}r_{31})r_{34}^{2}+(r_{12}r_{24}r_{41})r_{34}^{2}\big{]}$ $\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}+\tau_{m,4}\cdot\big{[}(r_{13}r_{34}r_{41})r_{12}^{2}+(r_{23}r_{34}r_{42})r_{12}^{2}\big{]},$ where $\max\\{m|\tau_{m,1}|,m^{2}|\tau_{m,2}|,m|\tau_{m,3}|,m|\tau_{m,4}|\\}\leq K$ and $\tau_{m,3}$ and $\tau_{m,4}$ do not depend on $\mathbb{R}.$ Proof of Lemma 6.29. Set $\displaystyle U_{1}$ $\displaystyle=$ $\displaystyle X_{11}^{2\alpha_{1}+1}X_{21}^{2\beta_{1}+1},\ \ U_{2}=X_{12}^{2\alpha_{2}+1}X_{22}^{2\beta_{2}+1}\ \mbox{and}\ \ U_{i}=X_{1i}^{2\alpha_{i}}X_{2i}^{2\beta_{i}};$ $\displaystyle V_{3}$ $\displaystyle=$ $\displaystyle X_{33}^{2\gamma_{3}+1}X_{43}^{2\delta_{3}+1},\ \ V_{4}=X_{34}^{2\gamma_{4}+1}X_{44}^{2\delta_{4}+1}\ \mbox{and}\ \ V_{j}=X_{3j}^{2\gamma_{j}}X_{4j}^{2\delta_{j}}$ (135) for $3\leq i\leq m$ and $j\in\\{1,2,,\cdots,m\\}\backslash\\{3,4\\}$. Let $N_{1}$ be the total number of solutions $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})$ for (92) with a bound provided in (106). Review the discussions between (100) and (105). We know that $m^{\alpha+\beta+\gamma+\delta}J_{m}(3,4)$ is a linear combination of $N_{1}$ terms of the form $\mbox{Cov}(\prod_{i=1}^{m}U_{i},\prod_{i=1}^{m}V_{i})$ with positive coefficients no more than $\alpha!\beta!\gamma!\delta!$. 
Define $\displaystyle S$ $\displaystyle=$ $\displaystyle\\{i\in\\{3,4\\};(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})\ \mbox{satisfies}\ \eqref{husoq922}\ \mbox{with}\ \alpha_{i}+\beta_{i}\geq 1\\}\cup$ $\displaystyle\\{j\in\\{1,2\\};(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})\ \mbox{satisfies}\ \eqref{husoq922}\ \mbox{with}\ \gamma_{j}+\delta_{j}\geq 1\\};$ $\displaystyle S_{1}$ $\displaystyle=$ $\displaystyle\\{5\leq i\leq m;\,(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})\ \mbox{satisfies}\ \eqref{husoq922}\ \mbox{with either}\ \alpha_{i}+\beta_{i}\geq 1\ \mbox{or}\ \delta_{i}+\gamma_{i}\geq 1\\}.$ Similar to the proof of Lemma 6.23, we have $\displaystyle|S_{1}|$ $\displaystyle\leq$ $\displaystyle\alpha+\beta+\gamma+\delta.$ (136) We now estimate $\mbox{Cov}(\prod_{i=1}^{m}U_{i},\prod_{i=1}^{m}V_{i})$ by differentiating three cases: $|S|=0$, $|S|=1$ and $|S|\geq 2$. Quickly, for the case $|S|=1$, by reviewing $C(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})$ from (124), we have from Lemma 6.28 that $\displaystyle\sum_{(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta}):|S|=1}C(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})\cdot\mbox{Cov}\Big{(}\prod_{i=1}^{m}U_{i},\prod_{i=1}^{m}V_{i}\Big{)}$ (137) $\displaystyle=$ $\displaystyle\rho_{m,1}\sum_{1\leq i<j\leq 4}r_{ij}^{2}+\rho_{m,2}\cdot\big{[}(r_{12}r_{23}r_{31})r_{34}^{2}+(r_{12}r_{24}r_{41})r_{34}^{2}\big{]}$ $\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}+\rho_{m,3}\cdot\big{[}(r_{13}r_{34}r_{41})r_{12}^{2}+(r_{23}r_{34}r_{42})r_{12}^{2}\big{]},$ where $\rho_{m,1}\leq Km^{-2}$ and $\rho_{m,2}\vee\rho_{m,3}\leq m^{-1}K$, and where $\rho_{m,2}$ and $\rho_{m,3}$ do not depend on $\mathbb{R}.$ To finish the proof, it remains to study the cases “$|S|=0$” and “$|S|\geq 2$”. This will be worked out in two steps. Step 1: First, the condition $|S|=0$ implies $\alpha_{3}=\alpha_{4}=\beta_{3}=\beta_{4}=\gamma_{1}=\gamma_{2}=\delta_{1}=\delta_{2}=0$, and hence we have from (135) that $\displaystyle U_{1}=X_{11}^{2\alpha_{1}+1}X_{21}^{2\beta_{1}+1},\ \ U_{2}=X_{12}^{2\alpha_{2}+1}X_{22}^{2\beta_{2}+1},\ U_{3}=1,U_{4}=1\ \mbox{and}\ \ U_{i}=X_{1i}^{2\alpha_{i}}X_{2i}^{2\beta_{i}};$ $\displaystyle V_{1}=1,V_{2}=1,V_{3}=X_{33}^{2\gamma_{3}+1}X_{43}^{2\delta_{3}+1},\ \ V_{4}=X_{34}^{2\gamma_{4}+1}X_{44}^{2\delta_{4}+1}\ \mbox{and}\ \ V_{i}=X_{3i}^{2\gamma_{i}}X_{4i}^{2\delta_{i}}$ for $i=5,\cdots,m$. By assumption (6.2.4), $\\{(X_{1i},X_{2i},X_{3i},X_{4i})^{T}\in\mathbb{R}^{4};\,1\leq i\leq m\\}$ are i.i.d. random vectors with distribution $N_{4}(\mathbb{0},\mathbb{R})$, where $\mathbb{R}=(r_{ij})_{4\times 4}\ \mbox{and}\ r_{ii}=1$ for each $i$. In particular, $\\{(U_{i},V_{i})^{T};\,1\leq i\leq m\\}$ are independent. As a consequence, $U_{1},U_{2},V_{3},V_{4}$ are themselves independent, and furthermore $\\{U_{1},U_{2},V_{3},V_{4}\\}$ are also independent of $\\{U_{i},V_{i};\,5\leq i\leq m\\}$. 
By (123), $\displaystyle\mbox{Cov}\Big{(}\prod_{i=1}^{m}U_{i},\prod_{i=1}^{m}V_{i}\Big{)}$ $\displaystyle=$ $\displaystyle EU_{1}EU_{2}EV_{3}EV_{4}\cdot\mbox{Cov}\Big{(}\prod_{i=5}^{m}U_{i},\prod_{i=5}^{m}V_{i}\Big{)}$ (138) $\displaystyle=$ $\displaystyle EU_{1}EU_{2}EV_{3}EV_{4}\cdot\mbox{Cov}\Big{(}\prod_{i\in S_{1}}U_{i},\prod_{i\in S_{1}}V_{i}\Big{)}.$ By definition of $S_{1}$, we see that $\sum_{i\in S_{1}}\alpha_{i}=\alpha$, $\sum_{i\in S_{1}}\beta_{i}=\beta$, $\sum_{i\in S_{1}}\gamma_{i}=\gamma$ and $\sum_{i\in S_{1}}\delta_{i}=\delta$. By Lemma 6.23, $\displaystyle\big{|}\mbox{Cov}\Big{(}\prod_{i\in S_{1}}U_{i},\prod_{i\in S_{1}}V_{i}\Big{)}\big{|}\leq K_{1}\sum_{1\leq i<j\leq 4}r_{ij}^{2}\leq 6K_{1}.$ Bounds for $EU_{1},EU_{2},EV_{3},EV_{4}$ are given in Lemma 6.13. By the lemma, we see $\displaystyle\Big{|}\mbox{Cov}\Big{(}\prod_{i=1}^{m}U_{i},\prod_{i=1}^{m}V_{i}\Big{)}\Big{|}$ $\displaystyle\leq$ $\displaystyle K_{1}r_{12}^{2}r_{34}^{2}\cdot\Big{|}\mbox{Cov}\Big{(}\prod_{i=5}^{m}U_{i},\prod_{i=5}^{m}V_{i}\Big{)}\Big{|}$ (139) $\displaystyle\leq$ $\displaystyle K_{1}\cdot r_{12}^{2}r_{34}^{2}.$ Let $S_{2}$ be the set of solutions $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})$ satisfying (92) with $\alpha_{i}+\beta_{i}\geq 1$ and $\gamma_{i}+\delta_{i}\geq 1$ simultaneously for some $5\leq i\leq m$. By Lemma 6.22, we have $|S_{2}|\leq Km^{\alpha+\beta+\gamma+\delta-1}$. Recall $U_{i}=X_{1i}^{2\alpha_{i}}X_{2i}^{2\beta_{i}}$ and $V_{i}=X_{3i}^{2\gamma_{i}}X_{4i}^{2\delta_{i}}$ for $5\leq i\leq m.$ If $\alpha_{i}=\beta_{i}=0$ then $U_{i}=1$. Likewise, $V_{i}=1$ if $\gamma_{i}=\delta_{i}=0.$ This together with the fact $\\{(U_{i},V_{i});\,1\leq i\leq m\\}$ are independent implies $\prod_{i=5}^{m}U_{i}$ and $\prod_{i=5}^{m}V_{i}$ are independent (hence their covariance is zero) if there is no $i\in\\{5,\cdots,m\\}$ such that $\alpha_{i}+\beta_{i}\geq 1$ and $\gamma_{i}+\delta_{i}\geq 1$ at the same time. Therefore, by (139), $\displaystyle\sum\Big{|}\mbox{Cov}\Big{(}\prod_{i=1}^{m}U_{i},\prod_{i=1}^{m}V_{i}\Big{)}\Big{|}\leq K_{1}\cdot m^{\alpha+\beta+\gamma+\delta-1}r_{12}^{2}r_{34}^{2}$ (140) where the sum runs over all $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})$ satisfying (92) and $|S|=0$. Step 2: Assume the index $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})$ satisfies $|S|\geq 2$. Reviewing the structures of $U_{i}$ and $V_{i}$ from (135), we have from Lemma 6.23 that $\displaystyle\max\Big{|}\mbox{Cov}\Big{(}\prod_{i=1}^{m}U_{i},\prod_{i=1}^{m}V_{i}\Big{)}\Big{|}\leq K_{1}\cdot\sum_{1\leq i<j\leq 4}r_{ij}^{2},$ where the maximum is taken over all $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})$ satisfying (92). Reviewing the definition of $T_{m,2}$ from Lemma 6.21, we have from the lemma that $|T_{m,2}|\leq K_{1}m^{\alpha+\beta+\gamma+\delta-2}$. It follows that $\displaystyle\sum\Big{|}\mbox{Cov}\Big{(}\prod_{i=1}^{m}U_{i},\prod_{i=1}^{m}V_{i}\Big{)}\Big{|}\leq K_{1}m^{\alpha+\beta+\gamma+\delta-2}\sum_{1\leq i<j\leq 4}r_{ij}^{2},$ (141) where the sum runs over all $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})$ satisfying (92) and $|S|\geq 2$. Finally, we add up the three bounds from (137), (140) and (141). By changing “$K_{1}$” to “$\tau_{m,1}$”, “$\rho_{m,1}+K_{1}$” to “$\tau_{m,2}$”, “$\rho_{m,2}$” to “$\tau_{m,3}$” and “$\rho_{m,3}$” to “$\tau_{m,4}$”, we complete the proof.
$\square$ ###### LEMMA 6.30 Assume the setting in (6.2.4). Let all notation be the same as in Lemma 6.26. Then $\displaystyle J_{m}(2,3)$ $\displaystyle=$ $\displaystyle\tau_{m,1}\sum_{1\leq i<j\leq 4}r_{ij}^{2}+\tau_{m,2}\cdot\big{(}r_{12}r_{23}r_{34}r_{41}+r_{12}r_{24}r_{43}r_{31}\big{)},$ where $\max\\{m|\tau_{m,1}|,|\tau_{m,2}|\\}\leq K$ and $\tau_{m,2}$ does not depend on $\mathbb{R}$. Proof of Lemma 6.30. Set $\displaystyle U_{1}$ $\displaystyle=$ $\displaystyle X_{11}^{2\alpha_{1}+1}X_{21}^{2\beta_{1}+1},\ \ U_{2}=X_{12}^{2\alpha_{2}+1}X_{22}^{2\beta_{2}+1}\ \mbox{and}\ \ U_{i}=X_{1i}^{2\alpha_{i}}X_{2i}^{2\beta_{i}};$ $\displaystyle V_{2}$ $\displaystyle=$ $\displaystyle X_{32}^{2\gamma_{2}+1}X_{42}^{2\delta_{2}+1},\ \ V_{3}=X_{33}^{2\gamma_{3}+1}X_{43}^{2\delta_{3}+1}\ \mbox{and}\ \ V_{j}=X_{3j}^{2\gamma_{j}}X_{4j}^{2\delta_{j}}$ (142) for $3\leq i\leq m$ and $j\in\\{1,2,\cdots,m\\}\backslash\\{2,3\\}.$ Review $C(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})$ in (124). By (100)-(103) and (115), $\displaystyle J_{m}(2,3)$ $\displaystyle=$ $\displaystyle\mbox{Cov}\big{(}(X_{11}X_{21})(X_{12}X_{22})A_{1}^{\alpha}A_{2}^{\beta},(X_{32}X_{42})(X_{33}X_{43})A_{3}^{\gamma}A_{4}^{\delta}\big{)}$ (143) $\displaystyle=$ $\displaystyle\frac{1}{m^{\alpha+\beta+\gamma+\delta}}\sum C(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})\cdot\mbox{Cov}\Big{(}\prod_{i=1}^{m}U_{i},\prod_{i=1}^{m}V_{i}\Big{)},$ where the sum runs over all $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})$ satisfying (92). Let $S$ be defined as in Lemma 6.22. By the lemma, $|S|\leq K_{1}m^{\alpha+\beta+\gamma+\delta-1}$. By Lemma 6.23 and the structures of $U_{i}$ and $V_{i}$ in (142), $\displaystyle\max\Big{|}\mbox{Cov}\Big{(}\prod_{i=1}^{m}U_{i},\prod_{i=1}^{m}V_{i}\Big{)}\Big{|}\leq K_{1}\cdot\sum_{1\leq i<j\leq 4}r_{ij}^{2},$ where the maximum is taken over all $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})$ satisfying (92). Combining the two facts, we obtain that $\displaystyle\sum_{(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})\in S}\Big{|}\mbox{Cov}\Big{(}\prod_{i=1}^{m}U_{i},\prod_{i=1}^{m}V_{i}\Big{)}\Big{|}\leq K_{1}m^{\alpha+\beta+\gamma+\delta-1}\sum_{1\leq i<j\leq 4}r_{ij}^{2}.$ As used before, $1\leq C(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})\leq\alpha!\beta!\gamma!\delta!$ for any $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})$ satisfying (92). We rewrite the above as $\displaystyle\frac{1}{m^{\alpha+\beta+\gamma+\delta}}\sum_{(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})\in S}C(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})\cdot\mbox{Cov}\Big{(}\prod_{i=1}^{m}U_{i},\prod_{i=1}^{m}V_{i}\Big{)}=\tau_{m,1}\sum_{1\leq i<j\leq 4}r_{ij}^{2},$ (144) where $|\tau_{m,1}|\leq K_{1}m^{-1}.$ Now, if $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})$ satisfies (92) but is not in $S$, then (i) $\alpha_{i}+\beta_{i}=0$ for each $i\in\\{1,2,3\\}$; (ii) $\gamma_{i}+\delta_{i}=0$ for each $i\in\\{1,2,3\\}$; (iii) $\alpha_{i}+\beta_{i}\leq 1$ and $\gamma_{i}+\delta_{i}\leq 1$ for each $4\leq i\leq m$; (iv) for each $4\leq i\leq m$, if $\alpha_{i}+\beta_{i}=1$ then $\gamma_{i}+\delta_{i}=0$, and if $\gamma_{i}+\delta_{i}=1$ then $\alpha_{i}+\beta_{i}=0$. 
This implies that $\\{U_{i},V_{i};\,4\leq i\leq m\\}$ are independent random variables and each of them is either $1$ or $\chi^{2}(1).$ Keep in mind that $\\{(U_{i},V_{i});\,1\leq i\leq m\\}$ are independent random variables and $E(\chi^{2}(1))=1$. Furthermore, it is readily seen from (142) that $\displaystyle U_{1}$ $\displaystyle=$ $\displaystyle X_{11}X_{21},\ \ U_{2}=X_{12}X_{22}\ \mbox{and}\ \ U_{3}=1;$ $\displaystyle V_{1}$ $\displaystyle=$ $\displaystyle 1,\ \ V_{2}=X_{32}X_{42},\ \ V_{3}=X_{33}X_{43}.$ Since $\\{(U_{i},V_{i})^{T};\,1\leq i\leq m\\}$ are independent by the assumption from (6.2.4), the three random quantities $\\{U_{1},(U_{2},V_{2})^{T},V_{3}\\}$ are independent and they are independent of $\\{(U_{i},V_{i})^{T};\,4\leq i\leq m\\}$. Thus, it follows from (123) that $\displaystyle\mbox{Cov}\Big{(}\prod_{i=1}^{m}U_{i},\prod_{i=1}^{m}V_{i}\Big{)}$ $\displaystyle=$ $\displaystyle E\Big{(}\prod_{i=4}^{m}U_{i}\Big{)}\cdot E\Big{(}\prod_{i=4}^{m}V_{i}\Big{)}\cdot\mbox{Cov}\Big{(}\prod_{i=1}^{3}U_{i},\prod_{i=1}^{3}V_{i}\Big{)}$ $\displaystyle=$ $\displaystyle EU_{1}\cdot EV_{3}\cdot\mbox{Cov}(U_{2},V_{2}).$ Recall $\\{(X_{1j},X_{2j},X_{3j},X_{4j})^{T}\in\mathbb{R}^{4};\,1\leq j\leq m\\}$ are i.i.d. random vectors with distribution $N_{4}(\mathbb{0},\mathbb{R})$, where $\mathbb{R}=(r_{ij})_{4\times 4}$ and $r_{ii}=1$ for each $i$. Then $EU_{1}=r_{12}$, $EV_{3}=r_{34}$ and $\displaystyle\mbox{Cov}(U_{2},V_{2})=E\big{(}X_{12}X_{22}X_{32}X_{42}\big{)}-r_{12}r_{34}=r_{13}r_{24}+r_{14}r_{23}$ by Lemma 6.4. Thus, $\displaystyle\mbox{Cov}\Big{(}\prod_{i=1}^{m}U_{i},\prod_{i=1}^{m}V_{i}\Big{)}=r_{12}r_{24}r_{43}r_{31}+r_{12}r_{23}r_{34}r_{41}.$ Recall from (124) that $C(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})=\alpha!\beta!\gamma!\delta!$ in this case, that is, when $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})$ satisfies (92) but is not in $S$. Let $N_{1}$ be the total number of solutions $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})$ satisfying (92). From (106), we see $N_{1}\leq K_{1}\cdot m^{\alpha+\beta+\gamma+\delta}$. Therefore, there exists a constant $\tau_{m,2}$ not depending on $r_{ij}$ with $|\tau_{m,2}|\leq K_{1}$ such that $\displaystyle\frac{1}{m^{\alpha+\beta+\gamma+\delta}}\sum C(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})\cdot\mbox{Cov}\Big{(}\prod_{i=1}^{m}U_{i},\prod_{i=1}^{m}V_{i}\Big{)}=\tau_{m,2}\big{(}r_{12}r_{23}r_{34}r_{41}+r_{12}r_{24}r_{43}r_{31}\big{)}$ where the sum is taken over every $(\boldsymbol{\alpha},\boldsymbol{\beta},\boldsymbol{\gamma},\boldsymbol{\delta})$ satisfying (92) and the restriction in $S^{c}$. Combining this fact with (143) and (144), we get the desired conclusion. $\square$ #### 6.2.5 A Study on Correlation Matrices As needed in the proof of Lemma 6.34 later, we have to handle certain functions of the entries of sample correlation matrices. They are interesting on their own. Throughout this section, we assume $\mathbb{R}=(r_{ij})_{p\times p}$ is a non-negative definite matrix with $r_{ii}=1$ for $1\leq i\leq p$. Review the Frobenius norm $\|\mathbb{R}\|_{F}=[\mbox{tr}(\mathbb{R}^{2})]^{1/2}=(\sum_{1\leq i,j\leq p}r_{ij}^{2})^{1/2}$. ###### LEMMA 6.31 Assume $\\{m_{p};\,p\geq 1\\}$ are positive constants with $\lim_{p\to\infty}m_{p}=\infty$. 
Define $\displaystyle W_{1}=\sum_{1\leq i,j,k\leq p}r_{ij}r_{jk}r_{ki}\ \ \mbox{and}\ \ \ W_{2}=\sum_{1\leq i,j,k,l\leq p}r_{ij}r_{jk}r_{kl}r_{li}.$ Then $\lim_{p\to\infty}\frac{W_{i}}{m\|\mathbb{R}\|_{F}^{4}}=0$ for $i=1,2.$ Proof of Lemma 6.31. Let $\lambda_{1}\geq 0,\cdots,\lambda_{p}\geq 0$ be the eigenvalues of $\mathbb{R}$. Write $\mathbb{R}=\mathbb{O}^{T}\mbox{diag}(\lambda_{1},\cdots,\lambda_{p})\mathbb{O}$, where $\mathbb{O}$ is a $p\times p$ orthogonal matrix. Recall the fact $\displaystyle\mbox{tr}\big{(}\mathbb{R}^{s}\big{)}=\sum_{1\leq i_{1},i_{2},\cdots,i_{s}\leq p}r_{i_{1}i_{2}}r_{i_{2}i_{3}}r_{i_{3}i_{4}}\cdots r_{i_{s}i_{1}}$ (145) for any integer $s\geq 2$. Easily, $\displaystyle W_{1}=\,\mbox{tr}(\mathbb{R}^{3})=\lambda_{1}^{3}+\cdots+\lambda_{p}^{3}\leq\big{(}\lambda_{1}^{2}+\cdots+\lambda_{p}^{2}\big{)}^{3/2}.$ In addition, $\|\mathbb{R}\|_{F}^{2}\geq\sum_{i=1}^{p}r_{ii}^{2}=p$. Therefore, $\displaystyle\frac{|W_{1}|}{[\mbox{tr}(\mathbb{R}^{2})]^{2}}\leq\frac{[\mbox{tr}(\mathbb{R}^{2})]^{3/2}}{[\mbox{tr}(\mathbb{R}^{2})]^{2}}=\frac{1}{[\mbox{tr}(\mathbb{R}^{2})]^{1/2}}\leq\frac{1}{\sqrt{p}}.$ It follows that $\lim_{p\to\infty}\frac{W_{1}}{m\|\mathbb{R}\|_{F}^{4}}=0$. Now we prove the second conclusion. Note $\displaystyle W_{2}=\,\mbox{tr}(\mathbb{R}^{4})=\lambda_{1}^{4}+\cdots+\lambda_{p}^{4}\leq(\lambda_{1}^{2}+\cdots+\lambda_{p}^{2})^{2}.$ Consequently, $\displaystyle\frac{|W_{2}|}{m[\mbox{tr}(\mathbb{R}^{2})]^{2}}\leq\frac{[\mbox{tr}(\mathbb{R}^{2})]^{2}}{m[\mbox{tr}(\mathbb{R}^{2})]^{2}}=\frac{1}{m}\to 0$ as $p\to\infty.$ The proof is completed. $\square$ ###### LEMMA 6.32 Given an integer $\alpha\geq 0$, define $\displaystyle S_{m}=\sum_{1\leq i,j,k,l\leq p}r_{ij}r_{jk}r_{ki}r_{kl}^{\alpha}.$ If $\lim_{p\to\infty}\frac{p}{m\|\mathbb{R}\|_{F}}=0$, then $\lim_{p\to\infty}\frac{S_{m}}{m\|\mathbb{R}\|_{F}^{4}}=0$. Proof of Lemma 6.32. Set $a_{k}=\sum_{l=1}^{p}r_{kl}^{\alpha}$ for $k=1,2,\cdots,p$. Then $|a_{k}|\leq p$ for each $k$. Define a $p\times p$ matrix $\mathbb{D}=\mbox{diag}(a_{1},\cdots,a_{p}).$ Then the $(k,i)$-entry of $\mathbb{D}\mathbb{R}$ is $a_{k}r_{ki}$. It follows that $\displaystyle S_{m}=\sum_{1\leq i,j,k\leq p}r_{ij}r_{jk}r_{ki}a_{k}=\sum_{1\leq i,j,k\leq p}r_{ij}r_{jk}\big{(}\mathbb{D}\mathbb{R}\big{)}_{ki}.$ Therefore, $S_{m}=\mbox{tr}\,(\mathbb{R}\mathbb{R}(\mathbb{D}\mathbb{R}))=\mbox{tr}\,(\mathbb{R}^{3}\mathbb{D})$. Set $(b_{ij})_{p\times p}=\mathbb{B}=\mathbb{R}^{3}$. Then $\mathbb{B}$ is a non-negative definite matrix due to the fact that $\mathbb{R}$ is non-negative definite. Hence, $b_{ii}\geq 0$ for each $i$ and $\displaystyle|\mbox{tr}\,(\mathbb{R}^{3}\mathbb{D})|=|\mbox{tr}\,(\mathbb{B}\mathbb{D})|=\big{|}\sum_{i=1}^{p}b_{ii}a_{i}\big{|}\leq p\sum_{i=1}^{p}b_{ii}.$ This shows that $|S_{m}|\leq p\cdot\mbox{tr}\,(\mathbb{R}^{3})$. Let $\lambda_{1}\geq 0,\cdots,\lambda_{p}\geq 0$ be the eigenvalues of $\mathbb{R}$. Easily, $\displaystyle\mbox{tr}\,(\mathbb{R}^{3})=\lambda_{1}^{3}+\cdots+\lambda_{p}^{3}\leq\big{(}\lambda_{1}^{2}+\cdots+\lambda_{p}^{2}\big{)}^{3/2}.$ Therefore $|S_{m}|\leq p\cdot[\mbox{tr}(\mathbb{R}^{2})]^{3/2}.$ Consequently, $\displaystyle\frac{|S_{m}|}{m\,[\mbox{tr}(\mathbb{R}^{2})]^{2}}\leq\frac{p}{m\,[\mbox{tr}(\mathbb{R}^{2})]^{1/2}}\to 0$ by assumption. The proof is finished. $\square$
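The trace quantities in Lemmas 6.31 and 6.32 are easy to check numerically. The sketch below is not part of the original argument; the helper `random_correlation` and all parameter values are illustrative assumptions. It verifies identity (145) for $s=3,4$, the smallness of the two ratios in Lemma 6.31, and the bound $|S_{m}|\leq p\,[\mbox{tr}(\mathbb{R}^{2})]^{3/2}$ from the proof of Lemma 6.32 on a randomly generated correlation matrix.

```python
import numpy as np

def random_correlation(p, rng):
    # Illustrative construction only: normalize a Gram matrix so that r_ii = 1.
    A = rng.standard_normal((p, 2 * p))
    S = A @ A.T
    d = np.sqrt(np.diag(S))
    return S / np.outer(d, d)

rng = np.random.default_rng(0)
p, m = 60, 200
R = random_correlation(p, rng)
fro2 = np.trace(R @ R)                       # ||R||_F^2 = tr(R^2) >= p

# Identity (145): cyclic entry sums equal traces of powers of R.
W1 = np.einsum("ij,jk,ki->", R, R, R)        # sum_{i,j,k} r_ij r_jk r_ki
W2 = np.einsum("ij,jk,kl,li->", R, R, R, R)  # sum_{i,j,k,l} r_ij r_jk r_kl r_li
print(np.isclose(W1, np.trace(np.linalg.matrix_power(R, 3))),
      np.isclose(W2, np.trace(np.linalg.matrix_power(R, 4))))

# Lemma 6.31: both ratios are small once m and p are large.
print(W1 / (m * fro2**2), W2 / (m * fro2**2))

# Lemma 6.32 with alpha = 2: S_m = tr(R^3 D) with D = diag(sum_l r_kl^2),
# and the proof gives |S_m| <= p * [tr(R^2)]^{3/2}.
a = (R**2).sum(axis=1)
S_m = np.trace(np.linalg.matrix_power(R, 3) @ np.diag(a))
print(abs(S_m) <= p * fro2**1.5)
```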
###### LEMMA 6.33 Define $\Lambda_{p}:=\big{\\{}(i,j,k,l);\,1\leq i\neq j\leq p\ \mbox{and}\ 1\leq k\neq l\leq p\big{\\}}$, $\displaystyle V_{p,1}$ $\displaystyle=$ $\displaystyle\frac{1}{m\|\mathbb{R}\|_{F}^{4}}\cdot\sum_{(i,j,k,l)\in\Lambda_{p}}r_{ij}r_{jk}r_{kl}r_{li},$ $\displaystyle V_{p,2}$ $\displaystyle=$ $\displaystyle\frac{1}{m\|\mathbb{R}\|_{F}^{4}}\cdot\sum_{(i,j,k,l)\in\Lambda_{p}}(r_{ik}r_{kl}r_{li})r_{ij}^{2}.$ If $\lim_{p\to\infty}\frac{p}{m\|\mathbb{R}\|_{F}}=0$, then $\lim_{p\to\infty}V_{p,i}=0$ for $i=1,2.$ Proof of Lemma 6.33. By assumption, $r_{ii}=1$ for any $1\leq i\leq p$. Then $\displaystyle\sum_{1\leq i,j,k,l\leq p}r_{ij}r_{jk}r_{kl}r_{li}$ $\displaystyle=$ $\displaystyle\sum_{1\leq i=j,k,l\leq p}r_{ij}r_{jk}r_{kl}r_{li}+\sum_{1\leq i\neq j,k,l\leq p}r_{ij}r_{jk}r_{kl}r_{li}$ $\displaystyle=$ $\displaystyle\sum_{1\leq i,k,l\leq p}r_{ik}r_{kl}r_{li}+\sum_{1\leq i\neq j,k=l\leq p}r_{ij}r_{jk}r_{kl}r_{li}+\sum_{(i,j,k,l)\in\Lambda_{p}}r_{ij}r_{jk}r_{kl}r_{li}.$ Now $\displaystyle\sum_{1\leq i\neq j,k=l\leq p}r_{ij}r_{jk}r_{kl}r_{li}=\sum_{1\leq i\neq j,k\leq p}r_{ij}r_{jk}r_{ki}=\sum_{1\leq i,j,k\leq p}r_{ij}r_{jk}r_{ki}-\sum_{1\leq j,k\leq p}r_{jk}^{2}.$ Recall (145). The above two identities imply that $\displaystyle\sum_{(i,j,k,l)\in\Lambda_{p}}r_{ij}r_{jk}r_{kl}r_{li}=\,\mbox{tr}\big{(}\mathbb{R}^{4}\big{)}-2\,\mbox{tr}\big{(}\mathbb{R}^{3}\big{)}+\mbox{tr}\big{(}\mathbb{R}^{2}\big{)}.$ Then $V_{p,1}\to 0$ by using the fact $\|\mathbb{R}\|_{F}^{2}\geq p$ and the conclusions for $W_{1}$ and $W_{2}$ in Lemma 6.31. Now we prove $V_{p,2}\to 0$. Note that $\displaystyle\sum_{1\leq i,j,k,l\leq p}(r_{ik}r_{kl}r_{li})r_{ij}^{2}$ $\displaystyle=$ $\displaystyle\sum_{1\leq j,k,l\leq p}r_{jk}r_{kl}r_{lj}+\sum_{1\leq i\neq j,k,l\leq p}(r_{ik}r_{kl}r_{li})r_{ij}^{2}$ (146) $\displaystyle=$ $\displaystyle\mbox{tr}\big{(}\mathbb{R}^{3}\big{)}+\sum_{1\leq i\neq j,k\leq p}r_{ik}^{2}r_{ij}^{2}+\sum_{(i,j,k,l)\in\Lambda_{p}}(r_{ik}r_{kl}r_{li})r_{ij}^{2}.$ Now $\displaystyle 0\leq\sum_{1\leq i\neq j,k\leq p}r_{ik}^{2}r_{ij}^{2}\leq\sum_{i=1}^{p}\Big{(}\sum_{j=1}^{p}r_{ij}^{2}\Big{)}^{2}\leq\Big{(}\sum_{i=1}^{p}\sum_{j=1}^{p}r_{ij}^{2}\Big{)}^{2}=\,\big{[}\mbox{tr}\big{(}\mathbb{R}^{2}\big{)}\big{]}^{2}.$ Hence, $\displaystyle\frac{1}{m\,[\mbox{tr}(\mathbb{R}^{2})]^{2}}\sum_{1\leq i\neq j,k\leq p}r_{ik}^{2}r_{ij}^{2}\leq\frac{1}{m}\to 0.$ Also, $\frac{1}{m[\mbox{tr}(\mathbb{R}^{2})]^{2}}\mbox{tr}(\mathbb{R}^{3})\to 0$ by using the conclusion for $W_{1}$ in Lemma 6.31. Then the conclusion $V_{p,2}\to 0$ follows from (146) and Lemma 6.32 with $\alpha=2.$ $\square$ Proof of Lemma 4.3. Write $\mathbb{M}=(m_{ij})$. Then $m_{ii}=1$ for each $i$. By Theorem 4.3.26 from Horn and Johnson (2012), $\\{\lambda_{1},\cdots,\lambda_{p}\\}$ majorizes $\\{m_{11},\cdots,m_{pp}\\}$, the diagonal entries of $\mathbb{M}$. By the definition of majorization, $\lambda_{1}+\cdots+\lambda_{k}\geq m_{11}+\cdots+m_{kk}=k$ for each $1\leq k\leq p$ and $\lambda_{1}+\cdots+\lambda_{p}=m_{11}+\cdots+m_{pp}=p$. On the other hand, by the definition of majorization, $\\{\tau_{1},\cdots,\tau_{p}\\}$ majorizes the $p$ numbers $\\{1,\cdots,1\\}$. By Theorem 4.3.32 from Horn and Johnson (2012), there exists a symmetric matrix $\mathbb{B}=(b_{ij})_{p\times p}$ such that $b_{ii}=1$ for each $i$ and that $\mathbb{B}$ has non-negative eigenvalues $\tau_{1},\cdots,\tau_{p}$. By definition, $\mathbb{B}$ is a correlation matrix. 
$\square$ #### 6.2.6 The Proofs of Theorems 3 and 4 In this part, by using the preliminary results developed in Sections 6.2.1-6.2.5, we are now ready to prove the two main results Theorems 4 and 3 stated in Section 6.2. ###### LEMMA 6.34 Assume the setting in (6.2.4) with $\mathbb{R}=(r_{ij})_{4\times 4}$. Recall $A_{i}$ in (98). Define $\displaystyle B_{1}=\frac{1}{m}\sum_{j=1}^{m}X_{1j}X_{2j}\ \ \mbox{and}\ \ B_{2}=\frac{1}{m}\sum_{j=1}^{m}X_{3j}X_{4j}.$ Given integer $N\geq 1$, the covariance between $\displaystyle\sum_{0\leq j,k\leq N}(1-A_{1})^{j}(1-A_{2})^{k}B_{1}^{2}\ \ \mbox{and}\ \ \sum_{0\leq j^{\prime},k^{\prime}\leq N}(1-A_{3})^{j^{\prime}}(1-A_{4})^{k^{\prime}}B_{2}^{2}$ is equal to $\displaystyle\varrho_{m,1}\cdot\sum_{1\leq i<j\leq 4}r_{ij}^{2}+\varrho_{m,2}\cdot r_{12}^{2}r_{34}^{2}$ $\displaystyle+$ $\displaystyle\varrho_{m,3}\cdot\big{(}r_{12}r_{23}r_{34}r_{41}+r_{12}r_{24}r_{43}r_{31}\big{)}$ $\displaystyle+$ $\displaystyle\varrho_{m,4}\cdot\big{[}(r_{13}r_{34}r_{41})r_{12}^{2}+(r_{23}r_{34}r_{42})r_{12}^{2}\big{]}$ $\displaystyle+$ $\displaystyle\varrho_{m,5}\cdot\big{[}(r_{12}r_{23}r_{31})r_{34}^{2}+(r_{12}r_{24}r_{41})r_{34}^{2}\big{]},$ where $\\{\varrho_{m,i};\,3\leq i\leq 5\\}$ do not depend on $\mathbb{R}$, $\displaystyle|\varrho_{m,1}|\leq Km^{-2},\ \ |\varrho_{m,2}|\vee|\varrho_{m,3}|\vee|\varrho_{m,4}|\vee|\varrho_{m,5}|\leq Km^{-1}$ and $K$ is a constant depending on $N$ but not on $m$ or $\mathbb{R}.$ Proof of Lemma 6.34. For convenience, we use $\Delta_{m}$ to denote the covariance between $\displaystyle\sum_{0\leq j,k\leq N}(1-A_{1})^{j}(1-A_{2})^{k}B_{1}^{2}\ \ \mbox{and}\ \ \sum_{0\leq j^{\prime},k^{\prime}\leq N}(1-A_{3})^{j^{\prime}}(1-A_{4})^{k^{\prime}}B_{2}^{2}.$ Then $\displaystyle\Delta_{m}=\sum\mbox{Cov}\big{(}(1-A_{1})^{j}(1-A_{2})^{k}B_{1}^{2},(1-A_{3})^{j^{\prime}}(1-A_{4})^{k^{\prime}}B_{2}^{2}\big{)},\ \ \ \ $ (147) where the sum runs over all non-negative integers $j,j^{\prime},k,k^{\prime}$ such that $0\leq j,k\leq N$ and $0\leq j^{\prime},k^{\prime}\leq N$. For each $i=1,2,3,4$, write $\displaystyle(1-A_{i})^{l}=1+\sum_{\alpha=1}^{l}(-1)^{\alpha}\binom{l}{\alpha}A_{i}^{\alpha}$ for any $l\geq 1$. Trivially, $\mbox{Cov}(U_{1}+h_{1},U_{2}+h_{2})=\mbox{Cov}(U_{1},U_{2})$ for any random variables $U_{1}$ and $U_{2}$ and constants $h_{1}$ and $h_{2}.$ Then the last covariance from (147) is $\displaystyle\mbox{Cov}\big{(}(1-A_{1})^{j}(1-A_{2})^{k}B_{1}^{2},(1-A_{3})^{j^{\prime}}(1-A_{4})^{k^{\prime}}B_{2}^{2}\big{)}$ $\displaystyle=$ $\displaystyle~{}\mbox{a finite linear combination of}~{}H~{}\mbox{terms of the form }\mbox{Cov}\big{(}B_{1}^{2}A_{1}^{\alpha}A_{2}^{\beta},B_{2}^{2}A_{3}^{\gamma}A_{4}^{\delta}\big{)}$ where the coefficients in the linear combination depend on $\alpha,\beta,\gamma,\delta$ but not $m$ or $\mathbb{R}$, and $H:=(j+1)(j^{\prime}+1)(k+1)(k^{\prime}+1)$. 
This and (147) imply that $\displaystyle\Delta_{m}=\mbox{a finite linear combination of}~{}N^{\prime}~{}\mbox{terms of the form }\mbox{Cov}\big{(}B_{1}^{2}A_{1}^{\alpha}A_{2}^{\beta},B_{2}^{2}A_{3}^{\gamma}A_{4}^{\delta}\big{)}~{}~{}~{}~{}~{}$ (148) where $0\leq\alpha+\beta\leq N$ and $0\leq\gamma+\delta\leq N$ and the coefficients in the linear combination depend on $N$ but not on $m$ or $\mathbb{R}$, and $N^{\prime}$ is bounded by $\displaystyle\sum_{j,k\leq N,j^{\prime},k^{\prime}\leq N}(j+1)(j^{\prime}+1)(k+1)(k^{\prime}+1)$ $\displaystyle\leq$ $\displaystyle(N+1)^{4}\sum_{0\leq j,j^{\prime},k,k^{\prime}\leq N}1$ $\displaystyle\leq$ $\displaystyle(N+1)^{8}.$ As $\alpha+\beta=0$ and $\gamma+\delta=0$, the covariance becomes $\displaystyle\mbox{Cov}(B_{1}^{2},B_{2}^{2})=\frac{1}{m}(r_{12}r_{23}r_{34}r_{41}+r_{12}r_{24}r_{43}r_{31})+\frac{\delta_{m}}{m^{2}}\sum_{1\leq i<j\leq 4}r_{ij}^{2}$ (149) by Lemma 6.9, where $|\delta_{m}|\leq\kappa$ and $\kappa$ is a numerical constant not depending on $m$, $\mathbb{R}$ or $N$. So we next only need to study $\mbox{Cov}\big{(}B_{1}^{2}A_{1}^{\alpha}A_{2}^{\beta},B_{2}^{2}A_{3}^{\gamma}A_{4}^{\delta}\big{)}$ from (148) with an extra assumption that either $\alpha+\beta\geq 1$ or $\gamma+\delta\geq 1.$ Write $\displaystyle m^{2}\cdot B_{1}^{2}=\sum_{j=1}^{m}(X_{1j}X_{2j})^{2}+2\sum_{1\leq k<l\leq m}(X_{1k}X_{2k})(X_{1l}X_{2l});$ (150) $\displaystyle m^{2}\cdot B_{2}^{2}=\sum_{q=1}^{m}(X_{3q}X_{4q})^{2}+2\sum_{1\leq a<b\leq m}(X_{3a}X_{4a})(X_{3b}X_{4b}).$ (151) Then $\displaystyle m^{4}\cdot\mbox{Cov}\big{(}B_{1}^{2}A_{1}^{\alpha}A_{2}^{\beta},B_{2}^{2}A_{3}^{\gamma}A_{4}^{\delta}\big{)}=D_{1}+2D_{2}+2D_{3}+4D_{4},$ (152) where $\displaystyle D_{1}$ $\displaystyle=$ $\displaystyle\sum_{1\leq j,q\leq m}\,\mbox{Cov}\big{(}(X_{1j}X_{2j})^{2}A_{1}^{\alpha}A_{2}^{\beta},(X_{3q}X_{4q})^{2}A_{3}^{\gamma}A_{4}^{\delta}\big{)},$ $\displaystyle D_{2}$ $\displaystyle=$ $\displaystyle\sum_{1\leq j\leq m,1\leq a<b\leq m}\,\mbox{Cov}\big{(}(X_{1j}X_{2j})^{2}A_{1}^{\alpha}A_{2}^{\beta},(X_{3a}X_{4a})(X_{3b}X_{4b})A_{3}^{\gamma}A_{4}^{\delta}\big{)},$ $\displaystyle D_{3}$ $\displaystyle=$ $\displaystyle\sum_{1\leq q\leq m,1\leq k<l\leq m}\,\mbox{Cov}\big{(}(X_{1k}X_{2k})(X_{1l}X_{2l})A_{1}^{\alpha}A_{2}^{\beta},(X_{3q}X_{4q})^{2}A_{3}^{\gamma}A_{4}^{\delta}\big{)}$ and $\displaystyle D_{4}=\sum_{1\leq k<l\leq m,1\leq a<b\leq m}\mbox{Cov}\big{(}(X_{1k}X_{2k})(X_{1l}X_{2l})A_{1}^{\alpha}A_{2}^{\beta},(X_{3a}X_{4a})(X_{3b}X_{4b})A_{3}^{\gamma}A_{4}^{\delta}\big{)}.$ We next study the four terms in steps Step 1: the estimate of $D_{1}$. Write $\displaystyle\sum_{1\leq j,q\leq m}\,\mbox{Cov}\big{(}(X_{1j}X_{2j})^{2}A_{1}^{\alpha}A_{2}^{\beta},(X_{3q}X_{4q})^{2}A_{3}^{\gamma}A_{4}^{\delta}\big{)}$ $\displaystyle=$ $\displaystyle\sum_{j=1}^{m}\,\mbox{Cov}\big{(}(X_{1j}X_{2j})^{2}A_{1}^{\alpha}A_{2}^{\beta},(X_{3j}X_{4j})^{2}A_{3}^{\gamma}A_{4}^{\delta}\big{)}$ $\displaystyle+\sum_{1\leq j\neq q\leq m}\,\mbox{Cov}\big{(}(X_{1j}X_{2j})^{2}A_{1}^{\alpha}A_{2}^{\beta},(X_{3q}X_{4q})^{2}A_{3}^{\gamma}A_{4}^{\delta}\big{)}.$ Review $\\{(X_{1j},X_{2j},X_{3j},X_{4j})^{T}\in\mathbb{R}^{4};\,1\leq j\leq m\\}$ are i.i.d. random vectors. 
It follows that $\displaystyle D_{1}$ $\displaystyle=$ $\displaystyle m\cdot\mbox{Cov}\big{(}(X_{11}X_{21})^{2}A_{1}^{\alpha}A_{2}^{\beta},(X_{31}X_{41})^{2}A_{3}^{\gamma}A_{4}^{\delta}\big{)}$ $\displaystyle+m(m-1)\cdot\mbox{Cov}\big{(}(X_{11}X_{21})^{2}A_{1}^{\alpha}A_{2}^{\beta},(X_{32}X_{42})^{2}A_{3}^{\gamma}A_{4}^{\delta}\big{)}.$ By Lemma 6.24, $\displaystyle|D_{1}|\leq K_{1}m^{2}\sum_{1\leq i<j\leq 4}r_{ij}^{2},$ (153) where $K_{1}$ here and later denotes a constant depending on $\alpha,\beta,\gamma,\delta$ but not $m$ or $\mathbb{R}$, and can be different from line to line. Step 2: the estimate of $D_{2}$. Write $\displaystyle D_{2}$ $\displaystyle=$ $\displaystyle\sum_{1\leq a<b\leq m}\Big{(}\sum_{j\in\\{a,b\\}}+\sum_{1\leq j\leq m,j\notin\\{a,b\\}}\Big{)}\,\mbox{Cov}\big{(}(X_{1j}X_{2j})^{2}A_{1}^{\alpha}A_{2}^{\beta},(X_{3a}X_{4a})(X_{3b}X_{4b})A_{3}^{\gamma}A_{4}^{\delta}\big{)}$ $\displaystyle=$ $\displaystyle 2\cdot\binom{m}{2}\cdot I_{m}(1,2)+(m-2)\binom{m}{2}\cdot I_{m}(2,3),$ where $\displaystyle I_{m}(a,b):=\mbox{Cov}\big{(}(X_{11}X_{21})^{2}A_{1}^{\alpha}A_{2}^{\beta},(X_{3a}X_{4a})(X_{3b}X_{4b})A_{3}^{\gamma}A_{4}^{\delta}\big{)}.$ By Lemma 6.25, $\displaystyle|D_{2}|\leq K_{1}m^{2}\sum_{1\leq i<j\leq 4}r_{ij}^{2}.$ (154) Step 3: the estimate of $D_{3}$. By switching the roles of “$(X_{1j},X_{2j},A_{1},A_{2},\alpha,\beta)$” and “$(X_{3j},X_{4j},A_{3},A_{4},\gamma,\delta)$” in Step 2, and using (154), we obtain $\displaystyle|D_{3}|\leq K_{1}m^{2}\sum_{1\leq i<j\leq 4}r_{ij}^{2}.$ (155) Step 4: the estimate of $D_{4}$. Rewrite $\displaystyle D_{4}=\sum_{1\leq a<b\leq m}\Big{(}\sum_{\Gamma_{1}}+\sum_{\Gamma_{2}}+\sum_{\Gamma_{3}}\Big{)}\mbox{Cov}\big{(}(X_{1k}X_{2k})(X_{1l}X_{2l})A_{1}^{\alpha}A_{2}^{\beta},(X_{3a}X_{4a})(X_{3b}X_{4b})A_{3}^{\gamma}A_{4}^{\delta}\big{)},$ where $\displaystyle\Gamma_{1}=\\{(k,l):(k,l)=(a,b)\\},\ \ \Gamma_{2}=\\{(k,l):\,1\leq k<l\leq m,\,|\\{k,l\\}\cap\\{a,b\\}|=1\\},$ $\displaystyle\Gamma_{3}=\\{(k,l):1\leq k<l\leq m,\,\\{k,l\\}\cap\\{a,b\\}=\emptyset\\}.$ Given $1\leq a<b\leq m$, it is easy to see $|\Gamma_{1}|=1$, $|\Gamma_{2}|\leq 2m$ and $\Gamma_{3}\leq m^{2}.$ Set $\displaystyle J_{m}(a,b)=\mbox{Cov}\big{(}(X_{11}X_{21})(X_{12}X_{22})A_{1}^{\alpha}A_{2}^{\beta},(X_{3a}X_{4a})(X_{3b}X_{4b})A_{3}^{\gamma}A_{4}^{\delta}\big{)}$ for $a\geq 1$ and $b\geq 1$. Then $\displaystyle D_{4}=\frac{1}{2}m(m-1)\big{[}J_{m}(1,2)+|\Gamma_{2}|\cdot J_{m}(2,3)+|\Gamma_{3}|\cdot J_{m}(3,4)\big{]}.$ By Lemma 6.26, $\displaystyle\big{|}J_{m}(1,2)\big{|}\leq K_{1}\sum_{1\leq i<j\leq 4}r_{ij}^{2}.$ By Lemma 6.30, $\displaystyle J_{m}(2,3)$ $\displaystyle=$ $\displaystyle\tau_{m,1}\sum_{1\leq i<j\leq 4}r_{ij}^{2}+\tau_{m,2}\cdot\big{(}r_{12}r_{23}r_{34}r_{41}+r_{12}r_{24}r_{43}r_{31}\big{)}$ where $|\tau_{m,1}|\leq K_{1}m^{-1}$, $|\tau_{m,2}|\leq K_{1}$ and $\tau_{m,2}$ does not depend on $\mathbb{R}$. By Lemma 6.29, $\displaystyle J_{m}(3,4)$ $\displaystyle=$ $\displaystyle\tau_{m,1}^{\prime}r_{12}^{2}r_{34}^{2}+\tau_{m,2}^{\prime}\sum_{1\leq i<j\leq 4}r_{ij}^{2}+\tau_{m,3}\cdot\big{[}(r_{12}r_{23}r_{31})r_{34}^{2}+(r_{12}r_{24}r_{41})r_{34}^{2}\big{]}$ $\displaystyle~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}+\tau_{m,4}\cdot\big{[}(r_{13}r_{34}r_{41})r_{12}^{2}+(r_{23}r_{34}r_{42})r_{12}^{2}\big{]},$ where $|\tau_{m,1}^{\prime}|\leq K_{1}m^{-1}$, $|\tau_{m,2}^{\prime}|\leq K_{1}m^{-2}$, $|\tau_{m,3}|\vee|\tau_{m,4}|\leq K_{1}m^{-1}$, and $\tau_{m,3}$ and $\tau_{m,4}$ do not depend on $\mathbb{R}$. 
Combining all of the above we get $\displaystyle D_{4}=\rho_{m,1}\cdot\sum_{1\leq i<j\leq 4}r_{ij}^{2}+\rho_{m,2}\cdot r_{12}^{2}r_{34}^{2}$ $\displaystyle+$ $\displaystyle\rho_{m,3}\cdot\big{(}r_{12}r_{23}r_{34}r_{41}+r_{12}r_{24}r_{43}r_{31}\big{)}$ $\displaystyle+$ $\displaystyle\rho_{m,4}\cdot\big{[}(r_{13}r_{34}r_{41})r_{12}^{2}+(r_{23}r_{34}r_{42})r_{12}^{2}\big{]}$ $\displaystyle+$ $\displaystyle\rho_{m,5}\cdot\big{[}(r_{12}r_{23}r_{31})r_{34}^{2}+(r_{12}r_{24}r_{41})r_{34}^{2}\big{]},$ where $\\{\rho_{m,i},\,1\leq i\leq 5\\}$ satisfy that $\displaystyle|\rho_{m,1}|\leq K_{1}m^{2},\ \ \max\big{\\{}|\rho_{m,2}|,|\rho_{m,3}|,|\rho_{m,4}|,|\rho_{m,5}|\big{\\}}\leq K_{1}m^{3}$ and $\rho_{m,3}$, $\rho_{m,4}$ and $\rho_{m,5}$ do not depend on $\mathbb{R}$. Through combining the estimates of $D_{1},D_{2},D_{3},D_{4}$ and (152), we see $\mbox{Cov}\big{(}B_{1}^{2}A_{1}^{\alpha}A_{2}^{\beta},B_{2}^{2}A_{3}^{\gamma}A_{4}^{\delta}\big{)}$ is equal to $\displaystyle\rho_{m,1}^{\prime}\cdot\sum_{1\leq i<j\leq 4}r_{ij}^{2}+\rho_{m,2}^{\prime}\cdot r_{12}^{2}r_{34}^{2}$ $\displaystyle+$ $\displaystyle\rho_{m,3}^{\prime}\cdot\big{(}r_{12}r_{23}r_{34}r_{41}+r_{12}r_{24}r_{43}r_{31}\big{)}$ $\displaystyle+$ $\displaystyle\rho_{m,4}^{\prime}\cdot\big{[}(r_{13}r_{34}r_{41})r_{12}^{2}+(r_{23}r_{34}r_{42})r_{12}^{2}\big{]}$ $\displaystyle+$ $\displaystyle\rho_{m,5}^{\prime}\cdot\big{[}(r_{12}r_{23}r_{31})r_{34}^{2}+(r_{12}r_{24}r_{41})r_{34}^{2}\big{]},$ where $\\{\rho_{m,i}^{\prime};\,1\leq i\leq 5\\}$ satisfy that $\displaystyle|\rho_{m,1}^{\prime}|\leq K_{1}m^{-2},\ \ |\rho_{m,2}^{\prime}|\vee|\rho_{m,3}^{\prime}|\vee|\rho_{m,4}^{\prime}|\vee|\rho_{m,5}^{\prime}|\leq K_{1}m^{-1},$ and $\rho_{m,3}^{\prime},\rho_{m,4}^{\prime}$ and $\rho_{m,5}^{\prime}$ do not depend on $\mathbb{R}$. Recalling (148), we arrive at that $\Delta_{m}$ is equal to $\displaystyle\varrho_{m,1}\cdot\sum_{1\leq i<j\leq 4}r_{ij}^{2}+\varrho_{m,2}\cdot r_{12}^{2}r_{34}^{2}$ $\displaystyle+$ $\displaystyle\varrho_{m,3}\cdot\big{(}r_{12}r_{23}r_{34}r_{41}+r_{12}r_{24}r_{43}r_{31}\big{)}$ $\displaystyle+$ $\displaystyle\varrho_{m,4}\cdot\big{[}(r_{13}r_{34}r_{41})r_{12}^{2}+(r_{23}r_{34}r_{42})r_{12}^{2}\big{]}$ $\displaystyle+$ $\displaystyle\varrho_{m,5}\cdot\big{[}(r_{12}r_{23}r_{31})r_{34}^{2}+(r_{12}r_{24}r_{41})r_{34}^{2}\big{]},$ where $\displaystyle|\varrho_{m,1}|\leq K_{1}m^{-2},\ \ |\varrho_{m,2}|\vee|\varrho_{m,3}|\vee|\varrho_{m,4}|\vee|\varrho_{m,5}|\leq K_{1}m^{-1},$ and $\varrho_{m,3},\varrho_{m,4}$ and $\varrho_{m,5}$ do not depend on $\mathbb{R}$. The proof is completed. $\square$ We will first prove Theorem 4 and then prove 3. Proof of Theorem 4. Recall the earlier notation that $\displaystyle A_{i}=\frac{1}{m}\sum_{j=1}^{m}X_{ij}^{2},\ \ \ \ \ \ B_{1}=\frac{1}{m}\sum_{j=1}^{m}X_{1j}X_{2j},\ \ \ \ \ \ \ B_{2}=\frac{1}{m}\sum_{j=1}^{m}X_{3j}X_{4j}$ for $i=1,2,3,4.$ Then $\displaystyle\hat{r}_{12}=\frac{B_{1}}{\sqrt{A_{1}A_{2}}}\ \ \ \mbox{and}\ \ \ \ \hat{r}_{34}=\frac{B_{2}}{\sqrt{A_{3}A_{4}}}.$ (156) Given $N\geq 1$, write $\displaystyle\frac{1}{x}=1+(1-x)+\cdots+(1-x)^{N}+\frac{1}{x}(1-x)^{N+1}$ for $x\neq 0$. 
Thus $\displaystyle\frac{1}{A_{1}A_{2}}$ $\displaystyle=$ $\displaystyle\Big{[}\frac{(1-A_{1})^{N+1}}{A_{1}}+\sum_{i=0}^{N}(1-A_{1})^{i}\Big{]}\cdot\Big{[}\frac{(1-A_{2})^{N+1}}{A_{2}}+\sum_{j=0}^{N}(1-A_{2})^{j}\Big{]}$ $\displaystyle=$ $\displaystyle\epsilon_{m,1}+\sum_{0\leq i,j\leq N}(1-A_{1})^{i}(1-A_{2})^{j},$ where $\displaystyle\epsilon_{m,1}=\frac{(1-A_{1})^{N+1}(1-A_{2})^{N+1}}{A_{1}A_{2}}$ $\displaystyle+$ $\displaystyle\sum_{j=0}^{N}\frac{(1-A_{1})^{N+1}(1-A_{2})^{j}}{A_{1}}$ $\displaystyle+$ $\displaystyle\sum_{i=0}^{N}\frac{(1-A_{1})^{i}(1-A_{2})^{N+1}}{A_{2}}.$ Similarly, $\displaystyle\frac{1}{A_{3}A_{4}}=\epsilon_{m,2}+\sum_{0\leq i,j\leq N}(1-A_{3})^{i}(1-A_{4})^{j}$ where $\displaystyle\epsilon_{m,2}=\frac{(1-A_{3})^{N+1}(1-A_{4})^{N+1}}{A_{3}A_{4}}$ $\displaystyle+$ $\displaystyle\sum_{j=0}^{N}\frac{(1-A_{3})^{N+1}(1-A_{4})^{j}}{A_{3}}$ $\displaystyle+$ $\displaystyle\sum_{i=0}^{N}\frac{(1-A_{3})^{i}(1-A_{4})^{N+1}}{A_{4}}.$ By (156), $\displaystyle\mbox{Cov}(\hat{r}_{12}^{2},\hat{r}_{34}^{2})$ $\displaystyle=$ $\displaystyle\,\mbox{Cov}\Big{(}\sum_{0\leq i,j\leq N}(1-A_{1})^{i}(1-A_{2})^{j}B_{1}^{2},\,\sum_{0\leq i,j\leq N}(1-A_{3})^{i}(1-A_{4})^{j}B_{2}^{2}\Big{)}$ (157) $\displaystyle+\,\mbox{Cov}\Big{(}\sum_{0\leq i,j\leq N}(1-A_{1})^{i}(1-A_{2})^{j}B_{1}^{2},\,\epsilon_{m,2}B_{2}^{2}\Big{)}$ $\displaystyle+\,\mbox{Cov}\Big{(}\epsilon_{m,1}B_{1}^{2},\,\sum_{0\leq i,j\leq N}(1-A_{3})^{i}(1-A_{4})^{j}B_{2}^{2}\Big{)}$ $\displaystyle+\,\mbox{Cov}\big{(}\epsilon_{m,1}B_{1}^{2},\,\epsilon_{m,2}B_{2}^{2}\big{)}.$ We claim that $\displaystyle\mbox{the absolute value of each of the last three covariances in \eqref{ewy129}}\ \leq\frac{K_{1}}{m^{(N+1)/2}}$ (158) where $K_{1}$ is a constant depending on $N$ but not $m$ or $\mathbb{R}.$ In fact, by writing $\bar{B}_{1}=B_{1}-r_{12}$ and $\bar{B}_{2}=B_{2}-r_{34}$, then $B_{1}^{2}=\bar{B}_{1}^{2}+2r_{12}\bar{B}_{1}+r_{12}^{2}$ and $B_{2}^{2}=\bar{B}_{2}^{2}+2r_{34}\bar{B}_{2}+r_{34}^{2}$, and hence by linearity of the covariance, each of the last three covariances from (157) is a linear combination of $N^{\prime}$ terms of the form $\displaystyle r_{12}^{a}r_{34}^{b}\cdot E\frac{\bar{B}_{1}^{t_{1}}\bar{B}_{2}^{t_{2}}(1-A_{1})^{n_{1}}(1-A_{2})^{n_{2}}(1-A_{3})^{n_{3}}(1-A_{4})^{n_{4}}}{A_{1}^{s_{1}}A_{2}^{s_{2}}A_{3}^{s_{3}}A_{4}^{s_{4}}}$ (159) where $N^{\prime}$ depends only on $N$ and all powers are non-negative integers with $\displaystyle a,b,t_{i}\in\\{0,1,2\\}~{}~{}~{}\mbox{and}\ ~{}~{}~{}s_{1}+s_{2}+s_{3}+s_{4}\geq 1;$ $\displaystyle n_{i}\leq N~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}~{}\mbox{and}\ ~{}~{}~{}n_{1}+n_{2}+n_{3}+n_{4}\geq N+1$ for each possible $i$. The crucial observation is that $n_{1}+n_{2}+n_{3}+n_{4}\geq N+1$. If some of $\\{t_{1},t_{2},n_{1},n_{2},n_{3},n_{4}\\}$ are zero, the corresponding terms simply disappear. Set $\displaystyle k=~{}\mbox{the count of positive values from}~{}\\{t_{1},t_{2},n_{1},n_{2},n_{3},n_{4}\\};$ $\displaystyle l=~{}\mbox{the count of positive values from}~{}\\{s_{1},s_{2},s_{3},s_{4}\\}.$ Then $k\geq 1$ and $l\geq 1$. Take $\displaystyle X_{i}=\begin{cases}\sqrt{m}\bar{B}_{i},&\text{$i=1,2$};\\\ \sqrt{m}(1-A_{i-2}),&\text{$i=3,4,5,6$}\end{cases}\ \ \ \ \ \mbox{and}\ \ \ \ \alpha_{i}=\begin{cases}t_{i},&\text{$i=1,2$};\\\ n_{i-2},&\text{$i=3,4,5,6$}\end{cases}$ and $p_{i}=m$ for $i=1,\cdots,6.$ Furthermore, take $Y_{j}=A_{j},q_{j}=m$ and $\beta_{j}=s_{j}$ for $j=1,2,3,4$. 
Easily, $2/(k+l)\leq 1$ and $(2-q_{j})/[2(k+l)]<0$ as $m\geq 5.$ Observe $\displaystyle X_{1}=\frac{1}{\sqrt{m}}\sum_{j=1}^{m}(X_{1j}X_{2j}-r_{12}),\ \ X_{2}=\frac{1}{\sqrt{m}}\sum_{j=1}^{m}(X_{3j}X_{4j}-r_{34}),$ $\displaystyle X_{i+2}=\frac{1}{\sqrt{m}}\sum_{j=1}^{m}(1-X_{ij}^{2}),\ \ \ \ \ \ \ \ \ \ Y_{j}\sim\frac{1}{m}\chi^{2}(m)$ for $i=1,2,3,4$ and $j=1,2,3,4.$ Set $\xi_{1j}=X_{1j}X_{2j}-r_{12}$, $\xi_{2j}=X_{3j}X_{4j}-r_{34}$ and $\xi_{i+2,j}=1-X_{ij}^{2}$ for $i=1,2,3,4$. Notice $X_{ij}\sim N(0,1)$ and $|r_{ij}|\leq 1$ for each $i,j$. By Lemma 6.11, $\displaystyle\Big{|}r_{12}^{a}r_{34}^{b}\cdot E\frac{\bar{B}_{1}^{t_{1}}\bar{B}_{2}^{t_{2}}(1-A_{1})^{n_{1}}(1-A_{2})^{n_{2}}(1-A_{3})^{n_{3}}(1-A_{4})^{n_{4}}}{A_{1}^{s_{1}}A_{2}^{s_{2}}A_{3}^{s_{3}}A_{4}^{s_{4}}}\Big{|}\leq\frac{K_{1}}{m^{(N+1)/2}},$ where $K_{1}$ is a constant depending on $N$ but not on $m$ or $\mathbb{R}.$ This confirms claim (158). The first covariance on the right hand side of (157) is studied in Lemma 6.34. Combining this lemma and (158), we finish the proof. $\square$ ###### LEMMA 6.35 Let ${\boldsymbol{X}}_{1},\cdots,{\boldsymbol{X}}_{n}$ be a random sample from $N_{p}({\boldsymbol{\mu}},{\boldsymbol{\Sigma}})$ with correlation matrix $\mathbb{R}$. Let $\hat{\mathbb{R}}$ be defined in (7). Assume, for some $a>0$, $p\leq n^{a}$ for each $p\geq 1$. If $\lim_{p\to\infty}\frac{p}{n\|\mathbb{R}\|_{F}}=0$, then $\mbox{Var}(\mbox{tr}(\hat{\mathbb{R}}^{2}))\cdot\|\mathbb{R}\|_{F}^{-4}$ goes to zero as $p\to\infty$. Proof of Lemma 6.35. Set $m=n-1$. By (28) from the proof of Lemma 6.3, $\displaystyle\hat{\mathbb{R}}=\hat{\mathbb{R}}_{p}=(\hat{r}_{ij})_{p\times p}\ \overset{d}{=}\Big{(}\frac{\mathbb{v}_{i}^{T}\mathbb{v}_{j}}{\|\mathbb{v}_{i}\|\cdot\|\mathbb{v}_{j}\|}\Big{)}_{p\times p},$ where the $m$ rows of $(\mathbb{v}_{1},\cdots,\mathbb{v}_{p})_{m\times p}$ are i.i.d. with distribution $N_{p}(\mathbb{0},\mathbb{R})$. Write $\mbox{tr}(\hat{\mathbb{R}}^{2})=p+\sum_{1\leq i\neq j\leq p}\hat{r}_{ij}^{2}.$ Then, $\displaystyle\mbox{Var}\big{(}\mbox{tr}(\hat{\mathbb{R}}^{2})\big{)}=\mbox{Cov}\Big{(}\sum_{1\leq i\neq j\leq p}\hat{r}_{ij}^{2},\sum_{1\leq k\neq l\leq p}\hat{r}_{kl}^{2}\Big{)}=\sum\,\mbox{Cov}\big{(}\hat{r}_{ij}^{2},\hat{r}_{kl}^{2}\big{)},$ (160) where the last sum runs over all $(i,j,k,l)\in\Lambda_{p}$, where $\displaystyle\Lambda_{p}:=\big{\\{}(i,j,k,l);\,1\leq i\neq j\leq p\ \mbox{and}\ 1\leq k\neq l\leq p\big{\\}}.$ (161) Review Theorem 4. We never impose any condition on the $4\times 4$ correlation matrix $\mathbb{R}_{4\times 4}$ (not to be confused with the $p\times p$ correlation matrix $\mathbb{R}$ here) in the proposition. For example, if all of the entries of $\mathbb{R}_{4\times 4}$ are equal to $1$, then the four random variables are actually equal. 
Keeping this understanding in mind, by changing “$(1,2,3,4)$” in Theorem 4 to “$(i,j,k,l)$” and taking $N=7$ in the proposition, we see $\mbox{Cov}(\hat{r}_{ij}^{2},\hat{r}_{kl}^{2})$ is equal to $\displaystyle\varrho_{p,1}\cdot\sum_{u,v\in\\{i,j,k,l\\},u\neq v}r_{uv}^{2}+\varrho_{p,2}\cdot r_{ij}^{2}r_{kl}^{2}$ $\displaystyle+$ $\displaystyle\varrho_{p,3}\cdot\big{(}r_{ij}r_{jk}r_{kl}r_{li}+r_{ij}r_{jl}r_{lk}r_{ki}\big{)}$ (162) $\displaystyle+$ $\displaystyle\varrho_{p,4}\cdot\big{[}(r_{ik}r_{kl}r_{li})r_{ij}^{2}+(r_{jk}r_{kl}r_{lj})r_{ij}^{2}\big{]}$ $\displaystyle+$ $\displaystyle\varrho_{p,5}\cdot\big{[}(r_{ij}r_{jk}r_{ki})r_{kl}^{2}+(r_{ij}r_{jl}r_{li})r_{kl}^{2}\big{]}$ $\displaystyle+$ $\displaystyle\frac{\varrho_{p,6}}{m^{4}},$ where the sum runs over the six pairs from $\\{i,j,k,l\\}$, $\displaystyle|\varrho_{p,1}|\leq Km^{-2};\ \ |\varrho_{p,2}|\vee|\varrho_{p,3}|\vee|\varrho_{p,4}|\vee|\varrho_{p,5}|\leq Km^{-1},\ |\varrho_{p,6}|\leq K,$ (163) $\\{\varrho_{p,i};\,3\leq i\leq 5\\}$ do not depend on $\mathbb{R}$, and $K$ is a constant not depending on $m$ or $\mathbb{R}.$ Notice $\displaystyle\sum_{(i,j,k,l)\in\Lambda_{p}}\sum_{u,v\in\\{i,j,k,l\\},u\neq v}r_{uv}^{2}\leq 6p^{2}\sum_{1\leq i,j\leq p}r_{ij}^{2}=6p^{2}\cdot\mbox{tr}(\mathbb{R}^{2}).$ Also, $\displaystyle\sum_{(i,j,k,l)\in\Lambda_{p}}r_{ij}^{2}r_{kl}^{2}\leq\sum_{1\leq i,j\leq p}r_{ij}^{2}\cdot\sum_{1\leq k,l\leq p}r_{kl}^{2}=\,\big{[}\mbox{tr}(\mathbb{R}^{2})\big{]}^{2}.$ Recall $\|\mathbb{R}\|_{F}^{4}=[\mbox{tr}(\mathbb{R}^{2})]^{2}$. The two facts together with (160) and (162) imply that $\displaystyle\frac{1}{\|\mathbb{R}\|_{F}^{4}}\cdot\mbox{Var}\big{(}\mbox{tr}(\hat{\mathbb{R}}^{2})\big{)}$ $\displaystyle\leq$ $\displaystyle(6K)\cdot\frac{p^{2}}{m^{2}\cdot\mbox{tr}(\mathbb{R}^{2})}+\frac{K}{m}+\Big{(}\sum_{i=3}^{5}|m\varrho_{p,i}|\cdot|Q_{p,i}|\Big{)}+\frac{Kp^{4}}{[\mbox{tr}(\mathbb{R}^{2})]^{2}\cdot m^{4}}$ where $\displaystyle Q_{p,3}$ $\displaystyle=$ $\displaystyle\frac{1}{m[\mbox{tr}(\mathbb{R}^{2})]^{2}}\cdot\sum_{(i,j,k,l)\in\Lambda_{p}}\big{(}r_{ij}r_{jk}r_{kl}r_{li}+r_{ij}r_{jl}r_{lk}r_{ki}\big{)},$ $\displaystyle Q_{p,4}$ $\displaystyle=$ $\displaystyle\frac{1}{m[\mbox{tr}(\mathbb{R}^{2})]^{2}}\cdot\sum_{(i,j,k,l)\in\Lambda_{p}}\big{[}(r_{ik}r_{kl}r_{li})r_{ij}^{2}+(r_{jk}r_{kl}r_{lj})r_{ij}^{2}\big{]},$ $\displaystyle Q_{p,5}$ $\displaystyle=$ $\displaystyle\frac{1}{m[\mbox{tr}(\mathbb{R}^{2})]^{2}}\cdot\sum_{(i,j,k,l)\in\Lambda_{p}}\big{[}(r_{ij}r_{jk}r_{ki})r_{kl}^{2}+(r_{ij}r_{jl}r_{li})r_{kl}^{2}\big{]}.$ Now, by assumption, $p=o(m\|\mathbb{R}\|_{F})$, hence $\displaystyle\frac{p^{2}}{m^{2}\cdot\mbox{tr}(\mathbb{R}^{2})}\to 0\ \ \mbox{and}\ \ \frac{Kp^{4}}{[\mbox{tr}(\mathbb{R}^{2})]^{2}\cdot m^{4}}\to 0.$ Because of (163), to prove the conclusion, it suffices to show $\lim_{p\to\infty}Q_{p,i}=0$ for $i=3,4,5.$ Recall (161). 
By switching “$k$” and “$l$” in $r_{ij}r_{jl}r_{lk}r_{ki}$, switching “$k$” and “$l$” in $(r_{jk}r_{kl}r_{lj})r_{ij}^{2}$ and switching “$k$” and “$l$” in $(r_{ij}r_{jl}r_{li})r_{kl}^{2}$, respectively, we obtain $\displaystyle Q_{p,3}$ $\displaystyle=$ $\displaystyle\frac{2}{m[\mbox{tr}(\mathbb{R}^{2})]^{2}}\cdot\sum_{(i,j,k,l)\in\Lambda_{p}}r_{ij}r_{jk}r_{kl}r_{li},$ $\displaystyle Q_{p,4}$ $\displaystyle=$ $\displaystyle\frac{2}{m[\mbox{tr}(\mathbb{R}^{2})]^{2}}\cdot\sum_{(i,j,k,l)\in\Lambda_{p}}(r_{ik}r_{kl}r_{li})r_{ij}^{2},$ $\displaystyle Q_{p,5}$ $\displaystyle=$ $\displaystyle\frac{2}{m[\mbox{tr}(\mathbb{R}^{2})]^{2}}\cdot\sum_{(i,j,k,l)\in\Lambda_{p}}(r_{ij}r_{jk}r_{ki})r_{kl}^{2}.$ By interchanging “$(i,j)$” with “$(k,l)$” in the last sum, we see $Q_{p,4}=Q_{p,5}$. Finally, we see from Lemma 6.33 that $\lim_{p\to\infty}Q_{p,i}=0$ for $i=3,4,5.$ $\square$ Proof of Theorem 3. Recall $\hat{\mathbb{R}}=\hat{\mathbb{R}}_{p}$. By Lemma 6.35, $\displaystyle\frac{\mbox{tr}(\hat{\mathbb{R}}_{p}^{2})-E\,\mbox{tr}(\hat{\mathbb{R}}_{p}^{2})}{\mbox{tr}(\mathbb{R}_{p}^{2})}\to 0$ (164) in probability as $p\to\infty.$ By Lemma 6.3, under the assumption $\limsup_{p\to\infty}\frac{p}{n^{a}}=0$ for some constant $a>0$, we have $\displaystyle E\,\mbox{tr}(\hat{\mathbb{R}}_{p}^{2})=\frac{p(p-1)}{n-1}+\mbox{tr}(\mathbb{R}_{p}^{2})\cdot\big{[}1+O(m^{-1/4})\big{]}.$ This implies that $\displaystyle\frac{1}{\mbox{tr}(\mathbb{R}_{p}^{2})}\cdot\Big{[}E\,\mbox{tr}(\hat{\mathbb{R}}_{p}^{2})-\frac{p(p-1)}{n-1}-\mbox{tr}(\mathbb{R}_{p}^{2})\Big{]}\to 0.$ The proof is completed by adding this and that from (164). $\square$ ### 6.3 The Proofs of Theorems 1 and 2 and Proposition 1 Let $\boldsymbol{\xi}_{1},\cdots,\boldsymbol{\xi}_{n}$ be i.i.d. $p$-dimensional random vectors with distribution $N_{p}(\boldsymbol{\mu},{\boldsymbol{\Sigma}})$. Let ${\bf D}$ be the diagonal matrix of ${\boldsymbol{\Sigma}}$. The $p\times p$ population correlation matrix is defined by ${\bf R}={\bf D}^{-1/2}{\boldsymbol{\Sigma}}{\bf D}^{-1/2}$. The sample mean is $\bar{\boldsymbol{\xi}}=\frac{1}{n}(\boldsymbol{\xi}_{1}+\cdots\boldsymbol{\xi}_{n})$ and the sample covariance matrix is defined by $\displaystyle\hat{{\bf S}}=\frac{1}{n}\sum_{i=1}^{n}(\boldsymbol{\xi}_{i}-\bar{\boldsymbol{\xi}})(\boldsymbol{\xi}_{i}-\bar{\boldsymbol{\xi}})^{T}.$ (165) Review $W_{p}(m,{\boldsymbol{\Sigma}})$ stands for the distribution of the Wishart matrix $\mathbb{U}^{T}\mathbb{U}$ for any $m\geq 1$, where $\mathbb{U}$ is an $m\times p$ matrix whose rows are i.i.d. with distribution $N_{p}(\mathbb{0},{\boldsymbol{\Sigma}})$. Then, $n\hat{{\bf S}}$ has the Wishart distribution $W_{p}(n-1,{\boldsymbol{\Sigma}})$; see, for example, Theorem 3.1.2 from Muirhead (1982). Let $\hat{{\bf D}}$ be the diagonal matrix of $\hat{{\bf S}}$. Then $\hat{{\bf R}}:=\hat{{\bf D}}^{-1/2}\hat{{\bf S}}\hat{{\bf D}}^{-1/2}$ is the sample correlation matrix generated by $\boldsymbol{\xi}_{1},\cdots,\boldsymbol{\xi}_{n}$. Before proving Theorems 1 and 2, we first will reduce the test statistic appearing in Theorem 1 to a simple form. Recall we assume $n$ depends on $p$ and sometimes write $n_{p}$ if there is any possible confusion. Also, the Frobenius norm $\|\mathbb{R}\|_{F}=[\mbox{tr}(\mathbb{R}^{2})]^{1/2}$ and the notation $o_{p}(1)$ representing a random variable converging to $0$ in probability. Proof of Lemma 4.1. 
We need to show $\displaystyle\frac{\boldsymbol{\eta}^{T}\hat{{\bf D}}^{-1}\boldsymbol{\eta}-\boldsymbol{\eta}^{T}{\bf D}^{-1}\boldsymbol{\eta}}{\sqrt{\,\mbox{tr}(\mathbb{R}^{2})}}\to 0$ (166) in probability as $p\to\infty.$ To do so, it suffices to prove that both its mean and variance converge to $0$. Step 1: the mean of the random variable from (166). Write $\boldsymbol{\eta}={\boldsymbol{\Sigma}}^{1/2}\boldsymbol{\theta}$ where $\boldsymbol{\theta}\sim N_{p}(\mathbb{0},\boldsymbol{I}_{p})$ and ${\boldsymbol{\Sigma}}^{1/2}$ is a non-negative definite matrix satisfying ${\boldsymbol{\Sigma}}^{1/2}\cdot{\boldsymbol{\Sigma}}^{1/2}={\boldsymbol{\Sigma}}$. By assumption, $\boldsymbol{\theta}$ is independent of $\hat{{\bf S}}$. In particular, $\boldsymbol{\theta}$ is independent of $\hat{{\bf D}}$, the diagonal matrix of $\hat{{\bf S}}$. Notice $\displaystyle\boldsymbol{\eta}^{T}\hat{{\bf D}}^{-1}\boldsymbol{\eta}=\boldsymbol{\theta}^{T}\big{(}{\boldsymbol{\Sigma}}^{1/2}\hat{{\bf D}}^{-1}{\boldsymbol{\Sigma}}^{1/2}\big{)}\boldsymbol{\theta}\ \ \mbox{and}\ \ \boldsymbol{\eta}^{T}{\bf D}^{-1}\boldsymbol{\eta}=\boldsymbol{\theta}^{T}\big{(}{\boldsymbol{\Sigma}}^{1/2}{\bf D}^{-1}{\boldsymbol{\Sigma}}^{1/2}\big{)}\boldsymbol{\theta}.$ (167) For any $p\times p$ symmetric matrix $\mathbb{A}$ with eigenvalues $\lambda_{1},\cdots,\lambda_{p}$, by the orthogonal invariance of $N_{p}(\mathbb{0},\boldsymbol{I}_{p})$, we know $\boldsymbol{\theta}^{T}\mathbb{A}\boldsymbol{\theta}$ and $\lambda_{1}\theta_{1}^{2}+\cdots+\lambda_{p}\theta_{p}^{2}$ have the same distribution, where $\theta_{1},\cdots,\theta_{p}$ are i.i.d. $N(0,1)$-distributed random variables. Consequently, $\displaystyle E(\boldsymbol{\theta}^{T}\mathbb{A}\boldsymbol{\theta})=\,\mbox{tr}(\mathbb{A})\ \ \mbox{and}\ \ \mbox{Var}(\boldsymbol{\theta}^{T}\mathbb{A}\boldsymbol{\theta})=2\,\mbox{tr}(\mathbb{A}^{2}).$ (168) It follows from independence and conditioning on $\hat{{\bf D}}$ that $\displaystyle E(\boldsymbol{\eta}^{T}\hat{{\bf D}}^{-1}\boldsymbol{\eta})=E\,\mbox{tr}\big{(}{\boldsymbol{\Sigma}}^{1/2}\hat{{\bf D}}^{-1}{\boldsymbol{\Sigma}}^{1/2}\big{)}=\mbox{tr}\big{[}{\boldsymbol{\Sigma}}^{1/2}E\big{(}\hat{{\bf D}}^{-1}\big{)}{\boldsymbol{\Sigma}}^{1/2}\big{]}$ (169) by linearity of expectations and traces, where $E\big{(}\hat{{\bf D}}^{-1}\big{)}$ is the entry-wise expectation of the diagonal matrix $\hat{{\bf D}}^{-1}$. Set ${\boldsymbol{\Sigma}}=(\sigma_{ij})_{p\times p}$. Then $\mathbb{D}=\mbox{diag}(\sigma_{11},\cdots,\sigma_{pp})$. Set $m=n-1$. It is known $\displaystyle n\hat{{\bf S}}\overset{d}{=}\sum_{j=1}^{m}\hat{\boldsymbol{\xi}}_{j}\hat{\boldsymbol{\xi}}_{j}^{T}$ (170) and $\hat{{\bf S}}$ is independent of $\bar{\boldsymbol{\xi}}=\frac{1}{n}(\boldsymbol{\xi}_{1}+\cdots+\boldsymbol{\xi}_{n})$, where $\hat{\boldsymbol{\xi}}_{1},\cdots,\hat{\boldsymbol{\xi}}_{m}$ are i.i.d. $N_{p}(\mathbb{0},\boldsymbol{\Sigma})$-distributed random vectors; see, for example, Theorem 3.1.2 from Muirhead (1982). Write $\hat{\boldsymbol{\xi}}_{j}=(\xi_{1j},\cdots,\xi_{pj})^{T}$ for each $j.$ Then the $(i,i)$-entry of $\hat{\boldsymbol{\xi}}_{j}\hat{\boldsymbol{\xi}}_{j}^{T}$ is equal to $\xi_{ij}^{2}$. As a result, $\displaystyle\mbox{the}\ \mbox{$(i,i)$-entry of}\ n\hat{{\bf S}}\ \mbox{is}\ \sum_{j=1}^{m}\xi_{ij}^{2}\sim\sigma_{ii}\cdot\chi^{2}(m)$ (171) for each $1\leq i\leq p$. 
Since $\hat{{\bf D}}=\mbox{diag}(s_{11},\cdots,s_{pp})$ is the diagonal matrix of $\hat{{\bf S}}:=(s_{ij})_{p\times p}$, we know $ns_{ii}/\sigma_{ii}\sim\chi^{2}(m)$ for each $i$. It is known that $\displaystyle E\frac{1}{\chi^{2}(k)}=\frac{1}{k-2}\ \ \mbox{and}\ \ \mbox{Var}\Big{(}\frac{1}{\chi^{2}(k)}\Big{)}=\frac{2}{(k-2)^{2}(k-4)}$ (172) for any integer $k\geq 3$. Therefore, $\displaystyle E\frac{1}{s_{ii}}=\frac{n}{(m-2)\sigma_{ii}}\ \ \mbox{and}\ \ \ \mbox{Var}\Big{(}\frac{1}{s_{ii}}\Big{)}=\frac{2n^{2}}{(m-2)^{2}(m-4)\sigma_{ii}^{2}}.$ (173) It follows that $E(\hat{{\bf D}}^{-1})=\frac{n}{m-2}\mathbb{D}^{-1}.$ Observe $\mbox{tr}({\boldsymbol{\Sigma}}^{1/2}\mathbb{D}^{-1}{\boldsymbol{\Sigma}}^{1/2})=\mbox{tr}(\mathbb{D}^{-1/2}{\boldsymbol{\Sigma}}\mathbb{D}^{-1/2})=\mbox{tr}(\mathbb{\mathbb{R}})=p$. From (169), we have $\displaystyle E(\boldsymbol{\eta}^{T}\hat{{\bf D}}^{-1}\boldsymbol{\eta})=\frac{n}{m-2}\cdot\mbox{tr}\big{(}{\boldsymbol{\Sigma}}^{1/2}\mathbb{D}^{-1}{\boldsymbol{\Sigma}}^{1/2}\big{)}=\frac{np}{n-3}.$ (174) Similarly, we have from (167) that $\displaystyle E(\boldsymbol{\eta}^{T}{\bf D}^{-1}\boldsymbol{\eta})=E(\boldsymbol{\theta}^{T}{\boldsymbol{\Sigma}}^{1/2}{\bf D}^{-1}{\boldsymbol{\Sigma}}^{1/2}\boldsymbol{\theta})=\,\mbox{tr}({\boldsymbol{\Sigma}}^{1/2}{\bf D}^{-1}{\boldsymbol{\Sigma}}^{1/2})=p.$ Therefore, $\displaystyle E(\boldsymbol{\eta}^{T}\hat{{\bf D}}^{-1}\boldsymbol{\eta})-E(\boldsymbol{\eta}^{T}{\bf D}^{-1}\boldsymbol{\eta})=\frac{np}{m-2}-p=\frac{3p}{m-2}.$ It follows that $\displaystyle\frac{1}{\sqrt{2\,\mbox{tr}(\mathbb{R}^{2})}}\big{[}E(\boldsymbol{\eta}^{T}\hat{{\bf D}}^{-1}\boldsymbol{\eta})-E(\boldsymbol{\eta}^{T}{\bf D}^{-1}\boldsymbol{\eta})\big{]}=\frac{\sqrt{4.5}\,p}{(m-2)\sqrt{\mbox{tr}(\mathbb{R}^{2})}}\to 0$ (175) by the assumption $\lim_{p\to\infty}\frac{p}{m\|\mathbb{R}\|_{F}}=0$. Step 2: the variance of random variable from (166). Set ${\bf B}={\boldsymbol{\Sigma}}^{1/2}(\hat{{\bf D}}^{-1}-{\bf D}^{-1}){\boldsymbol{\Sigma}}^{1/2}$. It is seen from (167) that $\displaystyle\boldsymbol{\theta}^{T}{\bf B}\boldsymbol{\theta}=\boldsymbol{\eta}^{T}\hat{{\bf D}}^{-1}\boldsymbol{\eta}-\boldsymbol{\eta}^{T}{\bf D}^{-1}\boldsymbol{\eta}.$ (176) Recall the formula $\mbox{Var}(v)=E\mbox{Var}(v|{\bf B})+\mbox{Var}(E(v|{\bf B}))$ for any random variable $v$. Then, by the independence between $\boldsymbol{\theta}$ and ${\bf B}$ as well as (168), $\displaystyle\mbox{Var}(\boldsymbol{\theta}^{T}{\bf B}\boldsymbol{\theta})=2E\,\mbox{tr}({\bf B}^{2})+\mbox{Var}(\mbox{tr}({\bf B})).$ (177) Our focus next will be the evaluation of the two terms. Step 3: the evaluation of $E\mbox{tr}({\bf B}^{2})$ from (177). Let us consider the last two terms one by one. First, $\displaystyle\mbox{tr}({\bf B}^{2})$ $\displaystyle=$ $\displaystyle\mbox{tr}\big{(}{\boldsymbol{\Sigma}}^{1/2}(\hat{{\bf D}}^{-1}-{\bf D}^{-1}){\boldsymbol{\Sigma}}(\hat{{\bf D}}^{-1}-{\bf D}^{-1}){\boldsymbol{\Sigma}}^{1/2}\big{)}$ $\displaystyle=$ $\displaystyle\mbox{tr}\big{(}(\hat{{\bf D}}^{-1}-{\bf D}^{-1}){\boldsymbol{\Sigma}}(\hat{{\bf D}}^{-1}-{\bf D}^{-1}){\boldsymbol{\Sigma}}\big{)}.$ Let $\mathbb{Q}(i,j)$ denote the $(i,j)$-entry of a matrix $\mathbb{Q}$. For any matrices $\mathbb{Q}_{1},\mathbb{Q}_{2}$, $\mathbb{Q}_{3},\mathbb{Q}_{4}$, we have $\mbox{tr}(\mathbb{Q}_{1}\mathbb{Q}_{2}\mathbb{Q}_{3}\mathbb{Q}_{4})=\sum\mathbb{Q}_{1}(i,j)\mathbb{Q}_{2}(j,k)\mathbb{Q}_{3}(k,l)\mathbb{Q}_{4}(l,i)$, where the sum runs over all possible indices $i,j,k,l$. 
It follows that $\displaystyle\mbox{tr}({\bf B}^{2})$ $\displaystyle=$ $\displaystyle\sum_{1\leq i,j\leq p}\sigma_{ij}^{2}\Big{(}\frac{1}{s_{ii}}-\frac{1}{\sigma_{ii}}\Big{)}\Big{(}\frac{1}{s_{jj}}-\frac{1}{\sigma_{jj}}\Big{)}$ (178) $\displaystyle=$ $\displaystyle\sum_{1\leq i,j\leq p}r_{ij}^{2}\Big{(}\frac{\sigma_{ii}}{s_{ii}}-1\Big{)}\Big{(}\frac{\sigma_{jj}}{s_{jj}}-1\Big{)}$ since $r_{ij}=\sigma_{ij}(\sigma_{ii}\sigma_{jj})^{-1/2}$. By (173), $\displaystyle\frac{\sigma_{ii}}{s_{ii}}-1=\frac{\sigma_{ii}}{s_{ii}}-E\frac{\sigma_{ii}}{s_{ii}}+\frac{3}{m-2}.$ Therefore, $\displaystyle E\Big{(}\frac{\sigma_{ii}}{s_{ii}}-1\Big{)}\Big{(}\frac{\sigma_{jj}}{s_{jj}}-1\Big{)}=\mbox{Cov}\Big{(}\frac{\sigma_{ii}}{s_{ii}},\frac{\sigma_{jj}}{s_{jj}}\Big{)}+\frac{9}{(m-2)^{2}}.$ (179) The fact from (171) implies that $ns_{ii}=X_{1}^{2}+\cdots+X_{m}^{2}$ and $ns_{jj}=Y_{1}^{2}+\cdots+Y_{m}^{2}$, where $(X_{1},Y_{1})^{T},\cdots,(X_{m},Y_{m})^{T}$ are i.i.d. $2$-dimensional normal random vectors with $EX_{1}=EY_{1}=0$, $EX_{1}^{2}=\sigma_{ii}$ and $EY_{1}^{2}=\sigma_{jj}$ and $\mbox{Cov}(X_{1},Y_{1})=\sigma_{ij}$. Recall ${\bf R}=(r_{ij})_{p\times p}$ with $r_{ij}=\sigma_{ij}/\sqrt{\sigma_{ii}\sigma_{jj}}.$ Then $\mbox{Cov}(X_{1}/\sqrt{\sigma_{ii}},Y_{1}/\sqrt{\sigma_{jj}})=r_{ij}$. By Lemma 6.12, we have $\displaystyle\frac{m^{2}}{n^{2}}\cdot E\Big{(}\frac{\sigma_{ii}}{s_{ii}}\cdot\frac{\sigma_{jj}}{s_{jj}}\Big{)}=1+\frac{4+2r_{ij}^{2}}{m}+\frac{12+8r_{ij}^{2}+8r_{ij}^{4}}{m^{2}}+\frac{\delta_{m}(i,j)}{m^{3}},$ where $\max_{1\leq i,j\leq p}|\delta_{m}(i,j)|\leq C$ for all $m\geq 11$, where $C$ is a constant not depending on $m$ or $\mathbb{R}=(r_{ij})$. This and (173) conclude that $\mbox{Cov}(\frac{\sigma_{ii}}{s_{ii}},\,\frac{\sigma_{jj}}{s_{jj}})$ is identical to $\displaystyle\Big{[}1+\frac{4+2r_{ij}^{2}}{m}+\frac{12+8r_{ij}^{2}+8r_{ij}^{4}}{m^{2}}+\frac{\delta_{m}(i,j)}{m^{3}}\Big{]}\cdot\Big{(}\frac{m+1}{m}\Big{)}^{2}-\frac{m+1}{m-2}\cdot\frac{m+1}{m-2}$ (180) $\displaystyle=$ $\displaystyle\Big{[}1+\frac{4+2r_{ij}^{2}}{m}+\frac{\delta_{m}(i,j)^{\prime}}{m^{2}}\Big{]}\cdot\Big{(}1+\frac{2}{m}+\frac{1}{m^{2}}\Big{)}-\Big{[}1+\frac{6}{m}+\frac{21m-24}{m(m-2)^{2}}\Big{]}$ $\displaystyle=$ $\displaystyle\frac{2r_{ij}^{2}}{m}+\frac{\delta_{m}(i,j)^{\prime\prime}}{m^{2}},$ where $\displaystyle\max_{1\leq i,j\leq p}\big{\\{}|\delta_{m}(i,j)^{\prime}|,|\delta_{m}(i,j)^{\prime\prime}|\big{\\}}\leq K_{1}$ (181) for all $m\geq 11$ and $K_{1}$ here and later represents a constant not depending on $m$ or $r_{ij}$, and can be different from line to line. This, (178) and (179) conclude $\displaystyle E\,\mbox{tr}({\bf B}^{2})$ $\displaystyle=$ $\displaystyle\frac{2}{m}\Big{(}\sum_{1\leq i,j\leq p}r_{ij}^{4}\Big{)}+\sum_{1\leq i,j\leq p}r_{ij}^{2}\Big{[}\frac{\delta_{m}(i,j)^{\prime\prime}}{m^{2}}+\frac{9}{(m-2)^{2}}\Big{]}$ (182) $\displaystyle\leq$ $\displaystyle\frac{2}{m}\,\mbox{tr}({\bf R}^{2})+\frac{K_{1}}{m^{2}}\,\mbox{tr}({\bf R}^{2})$ $\displaystyle\leq$ $\displaystyle\frac{3}{m}\,\mbox{tr}({\bf R}^{2})$ as $m$ is sufficiently large, which is guaranteed as $p\to\infty$ since $\lim_{p\to\infty}n_{p}=\infty$ and $m=n_{p}-1$. In the second step above we use the fact $\sum_{1\leq i,j\leq p}r_{ij}^{4}\leq\sum_{1\leq i,j\leq p}r_{ij}^{2}=\mbox{tr}({\bf R}^{2})$. Step 4: the evaluation of $\mbox{Var}(\mbox{tr}({\bf B}))$ from (177). 
Note $\displaystyle\mbox{tr}({\bf B})=\mbox{tr}\big{(}{\boldsymbol{\Sigma}}^{1/2}(\hat{{\bf D}}^{-1}-{\bf D}^{-1}){\boldsymbol{\Sigma}}^{1/2}\big{)}$ $\displaystyle=$ $\displaystyle\mbox{tr}\big{(}(\hat{{\bf D}}^{-1}-{\bf D}^{-1}){\boldsymbol{\Sigma}}\big{)}$ $\displaystyle=$ $\displaystyle\sum_{i=1}^{m}\Big{(}\frac{1}{s_{ii}}-\frac{1}{\sigma_{ii}}\Big{)}\sigma_{ii}.$ Recall $\frac{ns_{ii}}{\sigma_{ii}}\sim\chi^{2}(m)$ for each $i$. It then follows from (173) and (181) that $\displaystyle\mbox{Var}(\mbox{tr}({\bf B}))$ $\displaystyle=$ $\displaystyle\mbox{Var}\Big{(}\sum_{i=1}^{m}\frac{\sigma_{ii}}{s_{ii}}\Big{)}$ $\displaystyle=$ $\displaystyle\sum_{i=1}^{m}\mbox{Var}\Big{(}\frac{\sigma_{ii}}{s_{ii}}\Big{)}+2\sum_{1\leq i<j\leq p}\mbox{Cov}\Big{(}\frac{\sigma_{ii}}{s_{ii}},\frac{\sigma_{jj}}{s_{jj}}\Big{)}$ $\displaystyle\leq$ $\displaystyle m\cdot\frac{2n^{2}}{(m-2)^{2}(m-4)}+2\sum_{1\leq i<j\leq p}\Big{[}\frac{2r_{ij}^{2}}{m}+\frac{\delta_{m}(i,j)^{\prime\prime}}{m^{2}}\Big{]}.$ Thus, $\displaystyle\mbox{Var}(\mbox{tr}({\bf B}))\leq 3+\frac{2}{m}\,\mbox{tr}({\bf R}^{2})+\frac{K_{1}p^{2}}{m^{2}}$ as $m$ is sufficiently large. Finally, combining the analysis of the two terms from (177) in Step 3 and Step 4, we eventually obtain $\displaystyle\mbox{Var}(\boldsymbol{\theta}^{T}{\bf B}\boldsymbol{\theta})$ $\displaystyle\leq$ $\displaystyle\frac{6}{m}\,\mbox{tr}({\bf R}^{2})+3+\frac{2}{m}\,\mbox{tr}({\bf R}^{2})+K_{1}\frac{p^{2}}{m^{2}}$ $\displaystyle=$ $\displaystyle 3+\frac{8}{m}\,\mbox{tr}({\bf R}^{2})+K_{1}\frac{p^{2}}{m^{2}}$ as $p$ is sufficiently large. Easily, $\mbox{tr}({\bf R}^{2})\geq p$. It follows that $\displaystyle\mbox{Var}\Big{(}\frac{\boldsymbol{\theta}^{T}{\bf B}\boldsymbol{\theta}}{\sqrt{2\,\mbox{tr}(\mathbb{R}^{2})}}\Big{)}=\frac{\mbox{Var}(\boldsymbol{\theta}^{T}{\bf B}\boldsymbol{\theta})}{2\,\mbox{tr}(\mathbb{R}^{2})}\leq\frac{3}{p}+\frac{4}{m}+K_{1}\frac{p^{2}}{m^{2}\,\mbox{tr}(\mathbb{R}^{2})}\to 0$ since $\lim_{p\to\infty}\frac{p}{m\|\mathbb{R}\|_{F}}=0$ and $\|\mathbb{R}\|_{F}^{2}=\,\mbox{tr}(\mathbb{R}^{2})$. This joined (175) concludes (166). $\square$ Proof of Lemma 4.2. First, by the monotone property of $a_{p,i}$, we obtain $\rho_{1}\geq\rho_{2}\geq\cdots$. Moreover, $1=a_{p,1}^{2}+\cdots+a_{p,p}^{2}\geq a_{p,1}^{2}+\cdots+a_{p,i}^{2}\geq ia_{p,i}^{2}$ for any $1\leq i\leq p.$ This implies that $0\leq a_{p,i}\leq i^{-1/2}$ for each $1\leq i\leq p.$ Take $p\to\infty$ to obtain $0\leq\rho_{i}\leq i^{-1/2}$ for each $i\geq 1.$ Also, by using the fact $1\geq a_{p,1}^{2}+\cdots+a_{p,i}^{2}$ for any $1\leq i\leq p$, and letting $p\to\infty$ first and then $i\to\infty$, we get $\sum_{i=1}^{\infty}\rho_{i}^{2}\leq 1.$ We first handle a trivial case: $\rho_{1}=0$. By monotonicity, $\rho_{i}=0$ for each $i\geq 1$. Notice $E\xi_{1}=0$ and $\mbox{Var}(\xi_{1})=2$. Thus, $s_{n}^{2}:=\mbox{Var}(a_{p,1}\xi_{1}+\cdots+a_{p,p}\xi_{p})=2(a_{p,1}^{2}+\cdots+a_{p,p}^{2})=2.$ Easily, $\displaystyle\frac{1}{s_{n}^{3}}\sum_{i=1}^{p}E(|a_{p,i}\xi_{i}|^{3})=\frac{E(|\xi_{1}|^{3})}{2\sqrt{2}}\sum_{i=1}^{p}a_{p,i}^{3}\leq\frac{E(|\xi_{1}|^{3})}{2\sqrt{2}}\cdot a_{p,1}\cdot\sum_{i=1}^{p}a_{p,i}^{2},$ which goes to zero by the assumption $\lim_{p\to\infty}a_{p,1}=\rho_{1}=0$. The desired result follows from the Lyapunov central limit theorem. From now on, we assume $\rho_{1}>0$. In the following a useful fact will be derived first. For each $p\geq 1$, let $b_{p,1}\geq b_{p,2}\geq\cdots\geq 0$ be constants satisfying $\sum_{i=1}^{\infty}b_{p,i}^{2}\leq 1$. 
We claim that $\displaystyle\prod_{i=m}^{\infty}\Big{[}e^{-tb_{p,i}}\big{(}1-2tb_{p,i}\big{)}^{-1/2}\Big{]}=e^{\gamma_{p,m}}\cdot\exp\Big{(}t^{2}\sum_{i=m}^{\infty}b_{p,i}^{2}\Big{)}$ (183) for all $m\geq 16$ and $|t|<1$, where $\sup_{p\geq 1}|\gamma_{p,m}|\leq\frac{8}{\sqrt{m}}$. In fact, write $\log(1-x)=-\sum_{i=1}^{\infty}\frac{1}{i}x^{i}:=-x-\frac{1}{2}x^{2}-B(x)$ for $|x|<1$. Then $\displaystyle|B(x)|\leq\sum_{i=3}^{\infty}\frac{1}{i}|x|^{i}\leq\sum_{i=3}^{\infty}|x|^{i}\leq\frac{|x|^{3}}{1-|x|}\leq 2|x|^{3}$ (184) if $|x|\leq\frac{1}{2}.$ By the same argument as that in the beginning, we know $0\leq b_{p,i}\leq\frac{1}{\sqrt{i}}$ for each $i\geq 1$ and $p\geq 1$. Observe $\displaystyle\prod_{i=m}^{\infty}\Big{[}e^{-tb_{p,i}}\big{(}1-2tb_{p,i}\big{)}^{-1/2}\Big{]}$ $\displaystyle=$ $\displaystyle\prod_{i=m}^{\infty}\exp\Big{[}-tb_{p,i}-\frac{1}{2}\log\big{(}1-2tb_{p,i}\big{)}\Big{]}$ $\displaystyle=$ $\displaystyle\prod_{i=m}^{\infty}\exp\Big{[}t^{2}b_{p,i}^{2}+\frac{1}{2}B(2tb_{p,i})\Big{]}.$ By the monotone property, $\max_{i\geq m}|2tb_{p,i}|=2|t|b_{p,m}\leq\frac{2|t|}{\sqrt{m}}\leq\frac{1}{2}|t|$ for $m\geq 16$. This and (184) say that $\displaystyle\sum_{i=m}^{\infty}\frac{1}{2}|B(2tb_{p,i})|\leq 8|t|^{3}\sum_{i=m}^{\infty}b_{p,i}^{3}\leq\frac{8|t|^{3}}{\sqrt{m}}\sum_{i=m}^{\infty}b_{p,i}^{2}\leq\frac{8}{\sqrt{m}}$ for any $t$ with $|t|<1$ and $p\geq 1$. These lead to (183). In two steps next we will apply (183) to $a_{p,1}\xi_{1}+\cdots+a_{p,p}\xi_{p}$ and its limit stated in the lemma, respectively. The limit case goes first. Step 1. Set $b=[2(1-\sum_{i=1}^{\infty}\rho_{i}^{2})]^{1/2}$ and $X=b\eta+\sum_{i=1}^{\infty}\rho_{i}\xi_{i}$, where $\eta\sim N(0,1)$ and $\eta$ is independent of $\\{\xi_{i};\,i\geq 1\\}$. Then, by independence and the fact $E\exp(t\chi^{2}(1))=(1-2t)^{-1/2}$ for $t<\frac{1}{2}$, we see $\displaystyle Ee^{tX}=\Big{(}\prod_{i=1}^{\infty}Ee^{t\rho_{i}\xi_{1}}\Big{)}\cdot Ee^{tb\eta}=e^{b^{2}t^{2}/2}\cdot\prod_{i=1}^{\infty}\Big{[}e^{-t\rho_{i}}\big{(}1-2t\rho_{i}\big{)}^{-1/2}\Big{]}$ (185) for $t$ with $|t\rho_{i}|<\frac{1}{2}$ for each $i\geq 1$, which holds as $|t|<\frac{1}{2\rho_{1}}.$ Take $b_{p,i}=\rho_{i}$ for all $i\geq 1$ and $p\geq 1$ in (183) to see $\displaystyle\prod_{i=m}^{\infty}\Big{[}e^{-t\rho_{i}}\big{(}1-2t\rho_{i}\big{)}^{-1/2}\Big{]}=e^{\gamma_{m}}\cdot\exp\Big{(}t^{2}\sum_{i=m}^{\infty}\rho_{i}^{2}\Big{)}$ (186) for $m\geq 16$ and $|t|<1$, where $|\gamma_{m}|\leq\frac{8}{\sqrt{m}}$. This and (185) especially indicate $Ee^{tX}<\infty$ for every $|t|<\frac{1}{2}.$ Recall $\sum_{i=1}^{\infty}\rho_{i}^{2}\leq 1$, by sending $m\to\infty$ we see the left hand side of (186) goes to $1$. Therefore, $\displaystyle\prod_{i=1}^{m-1}\Big{[}e^{-t\rho_{i}}\big{(}1-2t\rho_{i}\big{)}^{-1/2}\Big{]}\to\prod_{i=1}^{\infty}\Big{[}e^{-t\rho_{i}}\big{(}1-2t\rho_{i}\big{)}^{-1/2}\Big{]}$ (187) as $m\to\infty$ for every $|t|<\frac{1}{2}$. Step 2. Evidently, $\displaystyle E^{t(a_{p,1}\xi_{1}+\cdots+a_{p,p}\xi_{p})}=\prod_{i=1}^{p}Ee^{ta_{p,i}\xi_{1}}=\prod_{i=1}^{p}\Big{[}e^{-ta_{p,i}}\big{(}1-2ta_{p,i}\big{)}^{-1/2}\Big{]}$ (188) provided $|t|<\frac{1}{2a_{p,1}}$. In particular, this holds if $|t|<\frac{1}{2}$. 
Now, by taking $b_{p,i}=a_{p,i}$ for $1\leq i\leq p$ and $b_{p,i}=0$ for $i>p$ from (183), we obtain $\displaystyle E^{t(a_{p,1}\xi_{1}+\cdots+a_{p,p}\xi_{p})}=e^{\gamma_{p,m}}\cdot\exp\Big{(}t^{2}\sum_{i=m}^{p}a_{p,i}^{2}\Big{)}\cdot\prod_{i=1}^{m-1}\Big{[}e^{-ta_{p,i}}\big{(}1-2ta_{p,i}\big{)}^{-1/2}\Big{]}$ for any $m$ with $16\leq m\leq p$ and $|t|<\frac{1}{2}$, where $\sup_{p\geq 1}|\gamma_{p,m}|\leq\frac{8}{\sqrt{m}}$. Consequently, if $16\leq m\leq p$ and $|t|<\frac{1}{2}$ then $\displaystyle E^{t(a_{p,1}\xi_{1}+\cdots+a_{p,p}\xi_{p})}\leq e^{8/\sqrt{m}}\cdot\exp\Big{[}t^{2}\Big{(}1-\sum_{i=1}^{m-1}a_{p,i}^{2}\Big{)}\Big{]}\cdot\prod_{i=1}^{m-1}\Big{[}e^{-ta_{p,i}}\big{(}1-2ta_{p,i}\big{)}^{-1/2}\Big{]}$ (189) by the assumption $a_{p,1}^{2}+\cdots+a_{p,p}^{2}=1$, and $\displaystyle E^{t(a_{p,1}\xi_{1}+\cdots+a_{p,p}\xi_{p})}\geq e^{-8/\sqrt{m}}\cdot\exp\Big{[}t^{2}\Big{(}1-\sum_{i=1}^{m-1}a_{p,i}^{2}\Big{)}\Big{]}\cdot\prod_{i=1}^{m-1}\Big{[}e^{-ta_{p,i}}\big{(}1-2ta_{p,i}\big{)}^{-1/2}\Big{]}.$ (190) With the two steps established above, we are now ready to complete the proof. Recall the assumption $\lim_{p\to\infty}a_{p,i}=\rho_{i}$ for each $i\geq 1$. For fixed $m\geq 16$ we send $p\to\infty$ and then send $m\to\infty$ in (189) and (190), we have from (187) and then (185) that $\displaystyle E^{t(a_{p,1}\xi_{1}+\cdots+a_{p,p}\xi_{p})}\to\exp\Big{[}t^{2}\Big{(}1-\sum_{i=1}^{\infty}\rho_{i}^{2}\Big{)}\Big{]}\cdot\prod_{i=1}^{\infty}\Big{[}e^{-t\rho_{i}}\big{(}1-2t\rho_{i}\big{)}^{-1/2}\Big{]}=Ee^{tX}$ as $p\to\infty$ for $|t|<\frac{1}{2}$. The desired conclusion then follows from the uniqueness of the moment generating function. $\square$ Recall $F(1,m)$ stands for the $F$-distribution with degrees of freedoms $1$ and $m$. ###### LEMMA 6.36 Let $m=m_{p}\to\infty$ as $p\to\infty$. For each $p\geq 1$, let $X_{p,1},\cdots,X_{p,p}$ be i.i.d. with distribution $F(1,m)$. Then $(2p)^{-1/2}[X_{p,1}+\cdots+X_{p,p}-mp(m-2)^{-1}]\to N(0,1)$ in distribution as $p\to\infty.$ Proof of Lemma 6.36. First, by the property of $F$-distribution, $\displaystyle EX_{p,1}=\frac{m}{m-2}~{}~{}~{}\mbox{and}~{}~{}~{}\mbox{Var}(X_{p,1})=\frac{2m^{2}(m-1)}{(m-2)^{2}(m-4)}$ for $m\geq 5$. By definition, we write $X_{p,1}=\frac{m\xi_{0}^{2}}{\xi_{1}^{2}+\cdots+\xi_{m}^{2}}$, where $\xi_{0},\xi_{1},\cdots,\xi_{m}$ are i.i.d. $N(0,1)$. Then $\displaystyle E\big{(}X_{p,1}-EX_{p,1}\big{)}^{4}$ $\displaystyle=$ $\displaystyle E\Big{[}\Big{(}\frac{m}{\xi_{1}^{2}+\cdots+\xi_{m}^{2}}-1\Big{)}\xi_{0}^{2}+\xi_{0}^{2}-\frac{m}{m-2}\Big{]}^{4}$ $\displaystyle\leq$ $\displaystyle 3^{3}E\Big{[}\Big{(}\frac{m}{\xi_{1}^{2}+\cdots+\xi_{m}^{2}}-1\Big{)}^{4}\xi_{0}^{8}\Big{]}+3^{3}E(\xi_{0}^{8})+3^{3}\Big{(}\frac{m}{m-2}\Big{)}^{4}.$ By using the Cauchy-Schwartz inequality twice, $\displaystyle E\Big{[}\Big{(}\frac{m}{\xi_{1}^{2}+\cdots+\xi_{m}^{2}}-1\Big{)}^{4}\xi_{0}^{8}\Big{]}$ $\displaystyle\leq$ $\displaystyle\Big{\\{}E\Big{[}\Big{(}\frac{\xi_{1}^{2}+\cdots+\xi_{m}^{2}-m}{\xi_{1}^{2}+\cdots+\xi_{m}^{2}}\Big{)}^{8}\Big{]}\Big{\\}}^{1/2}\cdot\big{(}E\xi_{0}^{16}\big{)}^{1/2}$ $\displaystyle\leq$ $\displaystyle K\cdot\Big{[}E\big{(}\xi_{1}^{2}+\cdots+\xi_{m}^{2}-m\big{)}^{16}\Big{]}^{1/4}\cdot\Big{[}E\frac{1}{(\xi_{1}^{2}+\cdots+\xi_{m}^{2})^{16}}\Big{]}^{1/4},$ where $K$ here and later is a constant free of $m$ and $p$, and can be different from line to line. By using the Marcinkiewicz-Zygmund inequality [see, for example, the proof of Corollary 2 on p. 
387 from Chow and Teicher (1997)], $E\big{(}\xi_{1}^{2}+\cdots+\xi_{m}^{2}-m\big{)}^{16}\leq Km^{8}$. Furthermore, take $\beta=-16$ in (57) to see $E[(\xi_{1}^{2}+\cdots+\xi_{m}^{2})^{-16}]\leq Km^{-16}$ for all $m\geq 34$. Combining all of the above calculation, we see $E\big{(}X_{p,1}-EX_{p,1}\big{)}^{4}\leq K$ as $m\geq 34$. Notice $\mbox{Var}(X_{p,1})\to 2$ as $p\to\infty$. Then $\displaystyle\frac{1}{(p\mbox{Var}(X_{p,1}))^{2}}\sum_{i=1}^{p}E(X_{p,i}-EX_{p,i})^{4}=O\Big{(}\frac{1}{p}\Big{)}\to 0$ as $p\to\infty$. By the Lyapunov CLT, we obtain the desired result. $\square$ Proof of Theorem 1. First, by Theorem 3, $\displaystyle\frac{1}{\mbox{tr}(\mathbb{R}^{2})}\Big{[}\mbox{tr}(\hat{\mathbb{R}}^{2})-\frac{p(p-1)}{n-1}\Big{]}\to 1$ (191) in probability as $p\to\infty$. In the following we will use this fact twice to show $T_{SD}$ and $T_{p,1}$ are equivalent. First, it follows from the assumption $\lim_{p\to\infty}\frac{p}{n\|\mathbb{R}\|_{F}}=0$ that $\displaystyle\frac{\mathrm{tr}(\hat{{\bf R}}^{2})-p^{2}(n-1)^{-1}}{\mathrm{tr}({\bf R}^{2})}=\frac{\mathrm{tr}(\hat{{\bf R}}^{2})-p(p-1)(n-1)^{-1}}{\mathrm{tr}({\bf R}^{2})}-\frac{p(n-1)^{-1}}{\mathrm{tr}({\bf R}^{2})}=1+o_{p}(1).$ As a consequence, $\displaystyle H_{p}:=\Big{[}\frac{\mathrm{tr}(\hat{{\bf R}}^{2})-p^{2}(n-1)^{-1}}{\mathrm{tr}({\bf R}^{2})}\Big{]}^{-1/2}=1+o_{p}(1).$ Review (2). We have $\displaystyle T_{SD}$ $\displaystyle=$ $\displaystyle\frac{[n\bar{{\boldsymbol{X}}}^{T}\hat{{\bf D}}^{-1}\bar{{\boldsymbol{X}}}-pn(n-3)^{-1}]+p(n-3)^{-1}}{\sqrt{2\mathrm{tr}({\bf R}^{2})}}\cdot H_{p}$ (192) $\displaystyle=$ $\displaystyle\frac{[n\bar{{\boldsymbol{X}}}^{T}\hat{{\bf D}}^{-1}\bar{{\boldsymbol{X}}}-pn(n-3)^{-1}]}{\sqrt{2\mathrm{tr}({\bf R}^{2})}}\cdot[1+o_{p}(1)]+o_{p}(1)$ by the assumption $\frac{p}{n\|\mathbb{R}\|_{F}}\to 0$ and the notation $\|\mathbb{R}\|_{F}^{2}=\mbox{tr}(\mathbb{R}^{2})$. By (9), $\displaystyle T_{p,1}=\frac{n\bar{{\boldsymbol{X}}}^{T}\hat{{\bf D}}^{-1}\bar{{\boldsymbol{X}}}-pn(n-3)^{-1}}{\sqrt{2\big{|}\mathrm{tr}(\hat{{\bf R}}^{2})-p(p-1)(n-1)^{-1}\big{|}}}.$ It follows from (191) that $\displaystyle T_{p,1}=\frac{[n\bar{{\boldsymbol{X}}}^{T}\hat{{\bf D}}^{-1}\bar{{\boldsymbol{X}}}-pn(n-3)^{-1}]}{\sqrt{2\mathrm{tr}({\bf R}^{2})}}\cdot[1+o_{p}(1)].$
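For readers who want a quick empirical illustration of Lemma 6.36 above, the following Monte Carlo sketch (ours, not a proof) compares the normalized sum of $F(1,m)$ variables with a standard normal; the choice $m_p=p$ and the simulation sizes are arbitrary and picked for speed.

```python
import numpy as np

# Sketch (ours): Monte Carlo illustration of Lemma 6.36, taking m_p = p.
rng = np.random.default_rng(0)
p = m = 1000
reps = 2000
chi1 = rng.chisquare(1, size=(reps, p))        # numerators of F(1, m) variates
chim = rng.chisquare(m, size=(reps, p)) / m    # scaled denominators
X = chi1 / chim                                # reps x p i.i.d. F(1, m) samples
S = (X.sum(axis=1) - m * p / (m - 2)) / np.sqrt(2 * p)
print("mean ~ 0:", round(S.mean(), 3), " std ~ 1:", round(S.std(), 3))
```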
# Constrained Labeling for Weakly Supervised Learning Chidubem Arachie1, Bert Huang2 ###### Abstract Curation of large fully supervised datasets has become one of the major roadblocks for machine learning. Weak supervision provides an alternative to supervised learning by training with cheap, noisy, and possibly correlated labeling functions from varying sources. The key challenge in weakly supervised learning is combining the different weak supervision signals while navigating misleading correlations in their errors. In this paper, we propose a simple data-free approach for combining weak supervision signals by defining a constrained space for the possible labels of the weak signals and training with a random labeling within this constrained space. Our method is efficient and stable, converging after a few iterations of gradient descent. We prove theoretical conditions under which the worst-case error of the randomized label decreases with the rank of the linear constraints. We show experimentally that our method outperforms other weak supervision methods on various text- and image-classification tasks. ## 1 Introduction Recent successful demonstrations of machine learning have created an explosion of interest. The key driver of these successes is the progress in deep learning. Researchers in different fields and industries are applying deep learning to their work with varying degrees of success. Training deep learning models typically requires massive amounts of data, and in most cases this data needs to be labeled for supervised learning. The process of collecting labels for large training datasets is often expensive and can be a major bottleneck for practical machine learning. To enable machine learning when labeled data is not available, researchers are increasingly turning to weak supervision. Weakly supervised learning involves training models using noisy labels. Using multiple sources or forms of weak supervision is common, as it provides diverse information to the model. However, each source of weak supervision has its own bias that can be transmitted to the model. Different weak supervision signals can also conflict, overlap, or—in the worst case—make dependent errors. Thus, a naive combination of these weak signals would hurt the quality of a learned model. The key problem then is how to reliably combine various sources of weak signals to train an accurate model. To solve this problem, we propose _constrained label learning_ (CLL), a method that processes various weak supervision signals and combines them to produce high-quality training labels. The idea behind CLL is that, given the weak supervision, we can define a constrained space for the labels of the unlabeled examples. The space will contain the true labels of the data, and any other label sampled from the space should be sufficient to train a model. We construct this space using the expected error of the weak supervision signals, and then we select a random vector from this space to use as training labels. Our analysis shows that, the space of labels considered by CLL improves to be tighter around the true labels as we include more information in the weak signals and that CLL is not confounded by redundant weak signals. CLL takes as input (1) a set of unlabeled data examples, (2) multiple weak supervision signals that label a subset of data and can abstain from labeling the rest, and (3) a corresponding set of expected error rates for the weak supervision signals. 
While the weak supervision signals can abstain on various examples, we require that the combination of the weak signals have full coverage on the training data. The expected error rates can be estimated if the weak supervision signals have been tested on historical data or a domain expert has knowledge about their performance. In cases where the expected error rates are unavailable, they can be treated as a hyperparameter. Our experiments in Section 3 show that CLL is still effective when it is trained with a loose estimate of the weak signals. Alternatively, we provide guidelines on how error rates can be estimated. We implement CLL as a stable, quickly converging, convex optimization over the candidate labels. CLL thus scales much better than many other weak supervision methods. We show in Section 4 experiments that compare the performance of CLL to other weak supervision methods. On a synthetic dataset, CLL trained with a constant error rate is only a few percentage points from matching the performance of supervised learning on a test set. On real text and image classification tasks, CLL achieves superior performance over existing weak supervision methods on test data. ## 2 Related Work Weakly supervised learning has gained prominence in recent years due to the need to train models without access to manually labeled data. The recent success of deep learning has exacerbated the need for large-scale data annotation, which can be prohibitively expensive. One weakly supervised paradigm, data programming, allows users to define _labeling functions_ that noisily label a set of unlabeled data (Bach et al. 2019; Ratner et al. 2017, 2016). Data programming then combines the noisy labels to form probabilistic labels for the data by using a generative model to estimate the accuracies and dependencies of the noisy/weak supervision signals. This approach underlies the popular software package _Snorkel_ (Ratner et al. 2017). Our method is related to this approach in that we use different weak signal sources and compile them into a single (soft) labeling. However, unlike Snorkel’s methods, we do not train a generative model and avoid the need for probabilistic modeling assumptions. Recently, Snorkel MeTaL was proposed for solving multi- task learning problems with hierarchical structure (Ratner et al. 2018). A user provides weak supervision for the hierarchy of tasks which is then combined in an end-to-end framework. Another recently developed approach for weakly supervised learning is adversarial label learning (ALL) (Arachie and Huang 2019a). ALL was developed for training binary classifiers from weak supervision. ALL trains a model to perform well in the worst case for the weak supervision by simultaneously optimizing model parameters and adversarial labels for the training data in order to satisfy the constraint that the error of the weak signals on the adversarial labels be within provided error bounds. The authors also recently proposed Stoch-GALL (Arachie and Huang 2019b), an extension for multi-class classification that incorporates precision bounds. Our work is related to ALL and Stoch-GALL in that we use the same error definition the authors introduced. However, the expected errors we use do not serve as upper bound constraints for the weak signals. Additionally, CLL avoids the adversarial setting that requires unstable simultaneous optimization of the estimated labels and the model parameters. 
Lastly, while ALL and Stoch-GALL require weak supervision signals to label every example, we allow for weak supervision signals that abstain on different data subsets. Crowdsourcing has become relevant to machine learning practitioners as it provide a means to train machine learning models using labels collected from different crowd workers (Carpenter 2008; Gao, Barbier, and Goolsby 2011; Karger, Oh, and Shah 2011; Khetan, Lipton, and Anandkumar 2017; Liu, Peng, and Ihler 2012; Platanios et al. 2020; Zhou et al. 2015; Zhou and He 2016). The key machine learning challenge when crowdsourcing is to effectively combine the different labels obtained from human annotators. Our work is similar in that we try to combine different weak labels. However, unlike most methods for crowdsourcing, we cannot assume that the labels are independent of each other. Instead, we train the model to learn while accounting for dependencies between the various weak supervision signals. Ensemble methods such as boosting (Schapire et al. 2002) combine different weak learners (low-cost, low-powered classifiers) to create classifiers that outperform the various weak learners. These weak learners are not weak in the same sense as weak supervision. These strategies are defined for fully supervised settings. Although recent work has proposed leveraging unlabeled data to improve the accuracies of boosting methods (Balsubramani and Freund 2015), our settings differs since we do not expect to have access to labeled data. A growing set of weakly supervised applications includes web knowledge extraction (Bunescu and Mooney 2007; Hoffmann et al. 2011; Mintz et al. 2009; Riedel, Yao, and McCallum 2010; Yao, Riedel, and McCallum 2010), visual image segmentation (Chen et al. 2014; Xu, Schwing, and Urtasun 2014), and tagging of medical conditions from health records (Halpern, Horng, and Sontag 2016). As better weakly supervised methods are developed, this set will expand to include other important applications. We will show an estimation method that is connected to those developed to estimate the error of classifiers without labeled data (Dawid and Skene 1979; Jaffe et al. 2016; Madani, Pennock, and Flake 2005; Platanios, Blum, and Mitchell 2014; Platanios, Dubey, and Mitchell 2016; Steinhardt and Liang 2016). These methods rely on statistical relationships between the error rates of different classifiers or weak signals. Unlike these methods, we show in our experiments that we can train models even when we do not learn the error rates of classifiers. We show that using a maximum error estimate of the weak signals, CLL learns to accurately classify. Like our approach, many other methods incorporate human knowledge or side information into a learning objective. These methods, including posterior regularization (Druck, Mann, and McCallum 2008) and generalized expectation (GE) criteria and its variants (Mann and McCallum 2008, 2010), can be used for semi- and weakly supervised learning. They work by providing parameter estimates as constraints to the objective function of the model so that the label distribution of the trained model tries to match the constraints. In our approach, we incorporate human knowledge as error estimates into our algorithm. However, we do not use the constraints for model training. Instead, we use them to generate training labels that satisfy the constraints, and these labels can then be used downstream to train any model. 
## 3 Constrained Label Learning The goal of _constrained label learning_ (CLL) is to return accurate training labels for the data given the weak supervision signals. The estimation of these labels should be aware of the correlation among the weak supervision signals and should not be confounded by it. Toward this goal, we use the weak signals’ expected error to define a constrained space of possible labelings for the data. Any vector sampled from this space can then be used as training labels. We consider the setting in which the learner has access to a training set of unlabeled examples, and a set of weak supervision signals from various sources that provide approximate indicators of the target classification for the data. Along with the weak supervision signals, we are provided estimates of the expected error rates of the weak signals. Formally, let the data be $X=[x_{1},\ldots,x_{n}]$. These examples have corresponding labels $\boldsymbol{y}=[y_{1},\ldots,y_{n}]\in\\{0,1\\}^{n}$. For multi-label classification, where each example may be labeled as a member of $K$ classes, we expand the label vector to include an entry for each example-class combination, i.e., $\boldsymbol{y}=[y_{(1,1)},\ldots,y_{(n,1)},y_{(1,2)},\ldots,y_{(n-1,K)},y_{(n,K)}]$, where $y_{ij}$ is the indicator of whether the $i$th example is in class $j$.111We represent the labels as a vector for later notational convenience, even though it may be more naturally arranged as a matrix. See Fig. 1 for an illustration of this arrangement. Figure 1: Illustration of weak signals and label vectorized structure. For multi-class problems, we arrange the label vector so that it contains indicators for each example belonging to each class. The weak signals use the same indexing scheme. In this illustration, weak signals $\boldsymbol{w}_{1}$ and $\boldsymbol{w}_{2}$ estimate the probability of each example belonging to class 1 and abstain on estimating membership in all other classes. With weak supervision, the training labels $\boldsymbol{y}$ are unavailable. Instead, we have access to $m$ weak supervision signals $\\{\boldsymbol{w}_{1},\ldots,\boldsymbol{w}_{m}\\}$, where each weak signal $\boldsymbol{w}\in[\emptyset,0,1]^{n}$ is represented as a vector of indicators that each example is in each class. The weak signals can choose to abstain on some examples. In that case, they assign a null value $\emptyset$ to that example’s entry. In practice, weak signals for multi-class problems typically only label one class at a time, such as a one-versus-rest classification rule, so they effectively abstain on all out-of-class entries. The weak signals can be soft labels (probabilities) or hard labels (class assignments) of the data. In conjunction with the weak signals, the learner also receives the expected error rates of the weak signals $\boldsymbol{\epsilon}=[\epsilon_{1},\ldots,\epsilon_{m}]$. In practice, the error rates of the weak signals are estimated or treated as a hyperparameter. 
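To make the indexing above concrete, the small sketch below (our illustration, not the authors' code) builds the vectorized label vector of Fig. 1 and one one-vs-rest weak signal for a toy problem; the use of `np.nan` for the abstain symbol $\emptyset$ and all numbers are our own illustrative choices.

```python
import numpy as np

# Sketch (ours) of the vectorized label layout of Fig. 1.
n, K = 4, 3                          # toy problem: 4 examples, 3 classes
y_matrix = np.array([[1, 0, 0],
                     [0, 1, 0],
                     [0, 0, 1],
                     [1, 0, 0]], dtype=float)
y = y_matrix.T.reshape(-1)           # entry k*n + i indicates example i in class k

# One one-vs-rest weak signal for class 1: soft labels on that block,
# abstention (the null value) everywhere else.
w1 = np.full(n * K, np.nan)
w1[:n] = [0.9, 0.2, 0.1, 0.7]        # hypothetical class-1 probabilities
labeled = ~np.isnan(w1)              # the indicator 1_(w1 != null) used in Eq. 1
print("labeled entries:", int(labeled.sum()), "of", n * K)
```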
The expected empirical error of a weak signal $\boldsymbol{w}_{i}$ is $\displaystyle\epsilon_{i}$ $\displaystyle=\frac{1}{n_{i}}\left(\mathbf{1}_{(\boldsymbol{w}\neq\emptyset)}\boldsymbol{w}_{i}^{\top}(1-\boldsymbol{y}_{k})+\mathbf{1}_{(\boldsymbol{w}\neq\emptyset)}(1-\boldsymbol{w}_{i})^{\top}\boldsymbol{y}_{k}\right)$ (1) $\displaystyle=\frac{1}{n_{i}}\left(\mathbf{1}_{(\boldsymbol{w}\neq\emptyset)}(1-2\boldsymbol{w}_{i})^{\top}\boldsymbol{y}_{k}+\boldsymbol{w}_{i}^{\top}\mathbf{1}_{(\boldsymbol{w}\neq\emptyset)}\right),$ where $\boldsymbol{y}_{k}$ is the true label for the class $k$ that the weak signal $\boldsymbol{w}_{i}$ labels, $n_{i}=\sum\mathbf{1}_{(\boldsymbol{w}_{i}\neq\emptyset)}$ and $\mathbf{1}_{(\boldsymbol{w}_{i}\neq\emptyset)}$ is an indicator function that returns $1$ on examples the weak signals label (i.e., do not abstain on). Hence, we only calculate the error of the weak signals on the examples they label. Analogously to Eq. 1, we can express the expected error of all weak signals for the label vector as a system of linear equations in the form $\boldsymbol{A}\boldsymbol{y}=\boldsymbol{c}$. To do this, we define each row in $\boldsymbol{A}$ as $\boldsymbol{A}_{i}=\mathbf{1}_{(\boldsymbol{w}_{i}\neq\emptyset)}(1-2\boldsymbol{w}_{i}),$ (2) a linear transformation of a weak signal $\boldsymbol{w}$. Each entry in the vector $\boldsymbol{c}$ is the difference between the expected error of the weak signal and the sum of the weak signal, i.e., $\boldsymbol{c_{i}}=n_{i}\epsilon_{i}-\boldsymbol{w}_{i}^{\top}\mathbf{1}_{(\boldsymbol{w}\neq\emptyset)}.$ (3) Valid label vectors then must be in the space $\displaystyle\\{\boldsymbol{\tilde{y}}|\boldsymbol{A}\boldsymbol{\tilde{y}}=\boldsymbol{c}\wedge\boldsymbol{\tilde{y}}\in[0,1]^{n}\\}~{}.$ (4) The true label $\boldsymbol{y}$ is not known. Thus, we want to find training labels $\boldsymbol{\tilde{y}}$ that satisfy the system of linear equations. ### Algorithm Having defined the space of possible labelings for the data given the weak signals, we explain here how we efficiently sample a vector of training labels from the space. First, we initialize a random $\boldsymbol{\tilde{y}}$ from a uniform distribution $\boldsymbol{\tilde{Y}}\sim U(0,1)^{n}$. Then we minimize a quadratic penalty on violations of the constraints defining the space. The objective function is $\displaystyle\min_{\boldsymbol{\tilde{y}}\in[0,1]^{n}}~{}~{}\left\lVert\boldsymbol{A}\boldsymbol{\tilde{y}}-\boldsymbol{c}\right\rVert_{2}^{2}~{}.$ (5) The solution to this quadratic objective function gives us feasible labels for the training data. In our experiments, we estimate the error rates $\boldsymbol{\epsilon}$ of the weak signals. In cases where the error estimates make an infeasible space, this quadratic penalty acts as a squared slack. We solve Eq. 5 iteratively using projected Adagrad (Duchi, Hazan, and Singer 2011), clipping $\boldsymbol{\tilde{y}}$ values to $[0,1]^{n}$ between gradient updates. This approach is fast and efficient, even for large datasets. Our algorithm is a simple quadratic convex optimization that converges to a unique optimum for each initialization of $\boldsymbol{\tilde{y}}$. In our experiments, it converges after only a few iterations of gradient descent. We run the algorithm 3 times with random initialization of $\boldsymbol{\tilde{y}}$ and take the mean of the $\boldsymbol{\tilde{y}}$s as the estimated label. We observed that the labels returned from the different runs are very similar. 
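A minimal sketch of the labeling step is given below. It is our reading of Eqs. 2, 3 and 5, restricted for brevity to non-abstaining binary weak signals (so the indicator terms reduce to all-ones) and using a plain projected Adagrad update; it is not the authors' released code, and the toy signals and the error-rate guess of 0.3 in the usage example are made up.

```python
import numpy as np

def cll_labels(W, eps, n_iters=200, lr=0.1, seed=0):
    """Sketch of the CLL labeling step (Eqs. 2-5); assumes non-abstaining
    weak signals given as an (m, n) array of soft labels in [0, 1]."""
    rng = np.random.default_rng(seed)
    m, n = W.shape
    A = 1.0 - 2.0 * W                        # Eq. 2 (indicator is all-ones here)
    c = n * np.asarray(eps) - W.sum(axis=1)  # Eq. 3
    y = rng.uniform(size=n)                  # random initialization
    g2 = np.zeros(n)                         # Adagrad accumulator
    for _ in range(n_iters):
        grad = 2.0 * A.T @ (A @ y - c)       # gradient of ||A y - c||^2
        g2 += grad ** 2
        y -= lr * grad / (np.sqrt(g2) + 1e-8)
        y = np.clip(y, 0.0, 1.0)             # projection onto the unit box
    return y

# Tiny usage example with made-up signals and a guessed error rate of 0.3.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    y_true = rng.integers(0, 2, size=50).astype(float)
    W = np.clip(y_true + rng.normal(0, 0.4, size=(5, 50)), 0, 1)
    y_hat = cll_labels(W, eps=[0.3] * 5)
    print("label error:", np.mean(np.abs(np.round(y_hat) - y_true)))
```

As described above, the paper runs this procedure three times from independent initializations and averages the resulting label vectors; the sketch shows a single run.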
We fix the number of iterations of gradient descent for each run to $200$ for all our experiments. The full algorithm is summarized in Algorithm 1.

Algorithm 1 Randomized Constrained Labeling
0: Dataset $X=[x_{1},\ldots,x_{n}]$, weak signals $[\boldsymbol{w}_{1},\ldots,\boldsymbol{w}_{m}]$, and expected error $\boldsymbol{\epsilon}=[\epsilon_{1},\ldots,\epsilon_{m}]$ for the signals.
1: Define $\boldsymbol{A}$ from Eq. 2 and $\boldsymbol{c}$ from Eq. 3 using the weak signals and expected errors.
2: Initialize $\boldsymbol{\tilde{y}}$ as $\boldsymbol{\tilde{y}}\sim U(0,1)^{n}$.
3: while not converged do
4: Update $\boldsymbol{\tilde{y}}$ with its gradient from Eq. 5
5: Clip $\boldsymbol{\tilde{y}}$ to $[0,1]^{n}$
6: end while
7: return estimated labels $\boldsymbol{\tilde{y}}$

### Analysis We start by analyzing the case where we have the true error $\boldsymbol{\epsilon}$, in which case the true label vector $\boldsymbol{y}$ for CLL is a solution in the feasible space. Although the true error rates are not available in practice, this ideal setting is the motivating case for the CLL approach. To begin the analysis, consider an extreme case: if $\boldsymbol{A}$ is a square matrix with full rank, then the only valid label $\boldsymbol{\tilde{y}}$ in the space is the true label, $\boldsymbol{\tilde{y}}=\boldsymbol{y}$. In practice, however, $\boldsymbol{A}$ is usually underdetermined, which means we have more data examples than weak signals. In this case, there are many solutions for $\boldsymbol{\tilde{y}}$, so we can analyze this space to understand how distant any feasible vector is from the vector of all incorrect labels. Since label vectors are constrained to be in the unit box, the farthest possible label vector from the true labels is $(1-\boldsymbol{y})$. The result of our analysis is the following theorem, which addresses the binary classification case with non-abstaining weak signals. ###### Theorem 1. For any $\boldsymbol{\tilde{y}}\in[0,1]^{n}$ such that $\boldsymbol{A}\boldsymbol{\tilde{y}}=\boldsymbol{c}$, its Euclidean distance from the negated label vector $(1-\boldsymbol{y})\in\\{0,1\\}^{n}$ is bounded below by $||\boldsymbol{\tilde{y}}-(1-\boldsymbol{y})||\geq n||\boldsymbol{A}^{+}(1-2\boldsymbol{\epsilon})||,$ (6) where $\boldsymbol{A}^{+}$ is the Moore-Penrose pseudoinverse of $\boldsymbol{A}$. ###### Proof. We first relax the constrained space by removing the $[0,1]^{n}$ box constraints. We can then analyze the projection onto the feasible space: $\min_{\boldsymbol{\tilde{y}}}||(1-\boldsymbol{y})-\boldsymbol{\tilde{y}}||~{}~{}\textrm{s.t.}~{}~{}\boldsymbol{A}\boldsymbol{\tilde{y}}=\boldsymbol{c}.$ (7) Define a vector $\boldsymbol{z}:=\boldsymbol{\tilde{y}}-\boldsymbol{y}$. We can rewrite the distance as $\min_{\boldsymbol{z}}||(1-2\boldsymbol{y})-\boldsymbol{z}||~{}~{}\textrm{s.t.}~{}~{}\boldsymbol{A}\boldsymbol{z}=0.$ (8) The minimization is a projection of $(1-2\boldsymbol{y})$ onto the null space of $\boldsymbol{A}$. Since the null and row spaces of a matrix are complementary, $(1-2\boldsymbol{y})$ decomposes into $(1-2\boldsymbol{y})=\operatorname{\mathbb{P}}_{\textrm{row}}(1-2\boldsymbol{y})+\operatorname{\mathbb{P}}_{\textrm{null}}(1-2\boldsymbol{y}),$ where $\operatorname{\mathbb{P}}_{\textrm{row}}$ and $\operatorname{\mathbb{P}}_{\textrm{null}}$ are orthogonal projections into the row and null spaces of $\boldsymbol{A}$, respectively.
We can use this decomposition to rewrite the distance of interest: $\displaystyle||(1-2\boldsymbol{y})-\operatorname{\mathbb{P}}_{\textrm{null}}(1-2\boldsymbol{y})||$ (9) $\displaystyle=||(1-2\boldsymbol{y})-((1-2\boldsymbol{y})-\operatorname{\mathbb{P}}_{\textrm{row}}(1-2\boldsymbol{y}))||$ $\displaystyle=||\operatorname{\mathbb{P}}_{\textrm{row}}(1-2\boldsymbol{y})||.$ For any vector $\boldsymbol{v}$, its projection into the row space of matrix $\boldsymbol{A}$ is $\boldsymbol{A}^{+}\boldsymbol{A}\boldsymbol{v}$, where $\boldsymbol{A}^{+}$ is the Moore-Penrose pseudoinverse of $\boldsymbol{A}$. The distance of interest is thus $||\boldsymbol{A}^{+}\boldsymbol{A}(1-2\boldsymbol{y})||$. We can use the definition of $\boldsymbol{A}$ to further simplify. Let $\boldsymbol{W}$ be the matrix of weak signals $\boldsymbol{W}=[\boldsymbol{w}_{1},\ldots,\boldsymbol{w}_{m}]^{\top}$. Then the distance is $\displaystyle||A^{+}(1-2\boldsymbol{W})(1-2\boldsymbol{y})||$ (10) $\displaystyle=||A^{+}((1-2\boldsymbol{W})\vec{1}_{n}-2(1-2\boldsymbol{W})\boldsymbol{y})||$ $\displaystyle=||A^{+}(n-2\boldsymbol{W}\vec{1}_{n}-2\boldsymbol{A}\boldsymbol{y})||.$ Because $\boldsymbol{A}\boldsymbol{y}=\boldsymbol{c}=n\boldsymbol{\epsilon}-\boldsymbol{W}\vec{1}_{n}$, terms cancel, yielding the bound in the theorem: $\displaystyle||A^{+}(n-2\boldsymbol{W}\vec{1}_{n}-2n\boldsymbol{\epsilon}+2\boldsymbol{W}\vec{1}_{n})||$ (11) $\displaystyle=||A^{+}(n-2n\boldsymbol{\epsilon})||=n||A^{+}(1-2\boldsymbol{\epsilon})||.$ ∎ This bound provides a quantity that is computable in practice. However, to gain an intuition about what factors affect its value, the distance formula can be further analyzed by using the singular-value decomposition (SVD) formula for the pseudoinverse. Consider SVD $\boldsymbol{A}=\boldsymbol{U}\Sigma\boldsymbol{V}^{\top}$. Then $\boldsymbol{A}^{+}=\boldsymbol{V}\Sigma^{+}\boldsymbol{U}^{\top}$, where the pseudoinverse $\Sigma^{+}$ contains the reciprocal of all nonzero singular values along the diagonal (and zeros elsewhere). The distance simplifies to $\displaystyle n||\boldsymbol{V}\Sigma^{+}\boldsymbol{U}^{\top}(1-2\boldsymbol{\epsilon})||=n||\Sigma^{+}\boldsymbol{U}^{\top}(1-2\boldsymbol{\epsilon})||,$ (12) since $\boldsymbol{V}$ is orthonormal. Furthermore, let $\boldsymbol{p}=\boldsymbol{U}^{\top}(1-2\boldsymbol{\epsilon})$, i.e., $\boldsymbol{p}$ is a rotation of the centered error rates of the weak signals with the same norm as $(1-2\boldsymbol{\epsilon})$. From this change of variables, we can decompose the distance into $\displaystyle n||\Sigma^{+}\boldsymbol{p}||=n\sqrt{\sigma_{1}^{2}p_{1}^{2}~{}+\ldots+~{}\sigma_{m}^{2}p_{m}^{2}},$ (13) where $\sigma_{j}$ is the $j$th singular value of $\boldsymbol{A}^{+}$. As this distance grows toward $\sqrt{n}$, the space of possible labelings shrinks toward zero, at which point the only feasible label vectors are close to the true labels $\boldsymbol{y}$. Equation 13 indicates that the distance increases roughly as the rank of $\boldsymbol{A}$ increases, in which case the number of non-zero singular values in $\Sigma^{+}$ increases, irrespective of how many actual weak signals are given. Thus, redundancy in the weak supervision does not affect the performance of CLL. The other key factor in the distance is how far from 0.5 the errors $\boldsymbol{\epsilon}$ are. These quantities can be interpreted as the diversity and number of the weak signals (corresponding to the rank) and their accuracies (the magnitude of $\boldsymbol{p}$). 
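The lower bound of Theorem 1 and its SVD form in Eq. 13 are directly computable. The sketch below is our illustration, again restricted to non-abstaining binary signals, with made-up weak signals and error rates; it evaluates both expressions (they agree up to numerical precision) so the result can be compared against the maximum possible distance $\sqrt{n}$.

```python
import numpy as np

def cll_distance_bound(W, eps):
    """Sketch (ours): evaluate n * ||A^+ (1 - 2 eps)|| from Theorem 1 and the
    equivalent SVD form of Eq. 13, for non-abstaining binary weak signals."""
    m, n = W.shape
    A = 1.0 - 2.0 * W
    v = 1.0 - 2.0 * np.asarray(eps)
    bound = n * np.linalg.norm(np.linalg.pinv(A) @ v)
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    p = U.T @ v                       # rotated, centered error rates
    nz = s > 1e-10                    # reciprocals of the nonzero singular values
    bound_svd = n * np.sqrt(np.sum((p[nz] / s[nz]) ** 2))
    return bound, bound_svd

rng = np.random.default_rng(0)
W = rng.uniform(size=(8, 100))        # made-up weak signals (m=8, n=100)
eps = np.full(8, 0.3)                 # made-up error rates
print(cll_distance_bound(W, eps), "vs. max distance sqrt(n) =", np.sqrt(100))
```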
Though the analysis is for length-$n$ label vectors, it is straightforwardly extended to multi-label settings with length-$(nK)$. And with careful indexing and tracking of the abstaining indicators, the same form of analysis can apply for abstaining weak signals. Figure 2 shows an empirical validation of Theorem 1 on a synthetic experiment. We plot the error of the labels returned by CLL and majority voting as we change the rank of $\boldsymbol{A}$. We use a synthetic data for a binary classification task with 100 randomly generated examples containing 20 binary features. The weak signals are random binary predictions for the labels where each weak signal error rate is calculated using the true labels of the data. We start with 100 redundant weak signals by generating a matrix $\boldsymbol{A}$ whose 100 columns contain copies of the same weak signal, giving it a rank of 1. We then iteratively increase the rank of $\boldsymbol{A}$ by replacing copies of the weak signal with random vectors from the uniform distribution. The error of CLL labels approaches zero as the rank of the matrix increases while the majority vote error does not improve significantly. Figure 2: Error of CLL estimated labels compared to majority vote as we increase the rank of $\boldsymbol{A}$ by replacing redundant weak signals with linearly independent weak signals. ### Error Estimation In our analysis, we assume that the expected error rates of the weak signals are available. This may be the case if the weak signals have been evaluated on historical data or if an expert provides the error rates. In practice, users typically define weak supervision signals whose error rates are unknown. In this section, we discuss two approaches to handle such situations. We test these estimation techniques on real and synthetic data in our experiments, finding that CLL with these strategies forms a powerful weakly supervised approach. #### Agreement Rate Method Estimating the error rates of binary classifiers using their agreement rates was first proposed by Platanios, Blum, and Mitchell (2014). They propose two different objective functions for solving the error rates of classifiers using their agreement rates as constraints. Similar to MeTaL (Ratner et al. 2018), we solve a matrix-completion problem to find a low-rank factorization for the weak signal accuracies. We assume that if the weak signals are conditionally independent, we can relate the disagreement rates to the weak signal accuracies. We implemented this method and report its performance in our synthetic experiment (see Section 4). The one-vs-all form of the weak signals on our real datasets violates the assumption that each weak signal makes prediction on all the classes, so we cannot use the agreement rate method on our real data. #### Uniform Error Rate The idea of using uniform error rates of the weak signals was first proposed in ALL (Arachie and Huang 2019a). Their experiments showed that ALL can learn as effectively as when using true error rates by using a constant for the error rates of all the weak signals on their binary classification datasets. We use this approach in our experiments and extend it to weak supervision signals that abstain and also on multi-class datasets. Figure 3 plots the accuracy of generated labels as we increase the error-rate parameter. On the binary-class SST-2 dataset, the label accuracy remains similar if the error rate is set between 0 and 0.5 and drops for values at least $0.5$. 
On the multiclass Fashion-MNIST data, we notice similar behavior where the label accuracies are similar between 0.05 and 0.1 and drop with larger values. We surmise that this behavior mirrors the type of weak supervision signals we use in our experiments. The weak signals in our real experiments are one-vs-all signals; hence a baseline signal (guessing 0 on all examples) will have an error rate of $\frac{1}{K}$. Performance deteriorates when the error rate is worse than this baseline rate. Figure 3: Accuracy of constrained label learning as we increase the error rates from 0 to 1 on binary and 0 to 0.5 on multiclass datasets (SST-2 and Fashion-MNIST). ## 4 Experiments We test constrained label learning on a variety of tasks on text and image classification. First, we measure the test accuracy of CLL on a synthetic dataset and compare its performance to that of supervised learning and other baselines. Second, we validate our approach on real datasets. For all our experiments, we compare CLL to other weakly supervised methods: data programming (DP) (Ratner et al. 2016) and majority-vote (MV) or averaging (AVG). Additionally, on our real datasets we show comparison to regularized minimax conditional entropy for crowdsourcing (MMCE) (Zhou et al. 2015). For reference, we include the performance of supervised learning baseline. On the image datasets, we show comparison of CLL to Stoch-GALL, a multiclass extension of adversarial label learning. It is worth noting that DP was developed for binary classification, thus to compare its performance on our multiclass datasets, we run DP on the weak signals that label each class in the datasets. All the weak signals on the real datasets are one-vs-all signals meaning they only label a single class and abstain on other classes. ### Synthetic Experiment Method | Test Accuracy ---|--- CLL (Agr. rate $\boldsymbol{\epsilon}$) | 0.668$\pm$ 0.005 CLL (Constant $\boldsymbol{\epsilon}$) | 0.630$\pm$ 0.009 Data Programming | 0.504$\pm$ 0.000 Majority Vote | 0.504$\pm$ 0.000 CLL (True $\boldsymbol{\epsilon}$) | 0.675 $\pm$ 0.024 Supervised Learning | 0.997$\pm$ 0.001 Table 1: Classification accuracies of the different methods on synthetic data using dependent weak signals. We report the mean and standard deviation over three trials. Method | Test Accuracy ---|--- CLL (Agr. rate $\boldsymbol{\epsilon}$) | 0.984$\pm$ 0.003 CLL (Constant $\boldsymbol{\epsilon}$) | 0.978$\pm$ 0.004 Data Programming | 0.978$\pm$ 0.003 Majority Vote | 0.925$\pm$ 0.009 CLL (True $\boldsymbol{\epsilon}$) | 0.985$\pm$ 0.0004 Supervised Learning | 0.997$\pm$ 0.001 Table 2: Classification accuracies of the different methods on synthetic data using independent weak signals. We report the mean and standard deviation over three trials Datasets | CLL | MMCE | DP | MV ---|---|---|---|--- IMDB | 0.736$\pm$ 0.0005 | 0.573 | 0.693 | 0.702 SST-2 | 0.678$\pm$ 0.0004 | 0.677 | 0.666 | 0.666 YELP-2 | 0.765$\pm$ 0.0002 | 0.685 | 0.770 | 0.775 TREC-6 | 0.842$\pm$ 0.004 | 0.833 | 0.898 | 0.273 Table 3: Label accuracies of CLL compared to other weak supervision methods on different text classification datasets. We report the mean and standard deviation over three trials. CLL is trained using $\boldsymbol{\epsilon}$ = 0.01 on the text classification datasets. 
Datasets | CLL | MMCE | DP | MV | Supervised ---|---|---|---|---|--- IMDB | 0.740$\pm$ 0.005 | 0.551 | 0.623$\pm$ 0.007 | 0.724$\pm$0.004 | 0.820$\pm$0.003 SST-2 | 0.729$\pm$ 0.001 | 0.727 | 0.720$\pm$ 0.001 | 0.720$\pm$ 0.0009 | 0.792$\pm$ 0.001 YELP-2 | 0.840$\pm$ 0.0007 | 0.68 | 0.760$\pm$ 0.005 | 0.798$\pm$ 0.007 | 0.879$\pm$ 0.001 TREC-6 | 0.641$\pm$ 0.022 | 0.64 | 0.627$\pm$ 0.014 | 0.605$\pm$ 0.006 | 0.700$\pm$ 0.024 Table 4: Test accuracies of CLL compared to other weak supervision methods on different text classification datasets. We report the mean and standard deviation over three trials. CLL is trained using $\boldsymbol{\epsilon}$ = 0.01 on the text classification datasets Datasets | CLL | MMCE | DP | AVG | Stoch-GALL ---|---|---|---|---|--- SVHN | 0.575$\pm$ 0.001 | 0.1 | 0.42 | 0.444 | 0.196$\pm$ 0.025 Fashion-MNIST | 0.658$\pm$ 0.001 | 0.147 | 0.65 | 0.649 | 0.488$\pm$ 0.002 Table 5: Label accuracies of CLL compared to other weak supervision methods on image datasets. We report the mean and standard deviation over three trials. CLL is trained using $\boldsymbol{\epsilon}$ = $\frac{1}{K}$ on the datasets and it outperforms other baseline approaches. Datasets | CLL | MMCE | DP | AVG | Stoch-GALL | Supervised ---|---|---|---|---|---|--- SVHN | 0.670$\pm$ 0.031 | 0.1 | 0.265$\pm$ 0.004 | 0.432$\pm$ 0.001 | 0.366$\pm$ 0.003 | 0.851$\pm$ 0.002 Fashion-MNIST | 0.695$\pm$ 0.002 | 0.151 | 0.635$\pm$ 0.0004 | 0.666$\pm$ 0.002 | 0.598$\pm$ 0.002 | 0.852$\pm$ 0.003 Table 6: Test accuracies of CLL compared to other weak supervision methods on image datasets. We report the mean and standard deviation over three trials. CLL is trained using $\boldsymbol{\epsilon}$ = $\frac{1}{K}$ on the datasets. We construct a toy dataset for a binary classification task where the data has 200 randomly generated binary features and 20,000 examples, 16,000 for training and 4,000 for testing. Each feature vector has between 50% to 70% correlation with the true label. We define two scenarios for our synthetic experiments. We run the methods using (1) dependent weak signals and, (2) independent weak signals. In both experiments, we use $10$ weak signals that have at most $30\%$ coverage on the data and conflicts on their label assignments. The dependent weak signals were constructed by generating one weak signal that is copied noisily 9 times (randomly flipping $20\%$ of the labels). The original weak signal labeled $30\%$ of the data points and had an accuracy in $[0.5,0.6]$. So, on average, we expect to perturb $6\%$ of its labels on the copies. The independent weak signals are randomly generated to have accuracies in the range $[0.6,0.7]$. We report in Table 1 and Table 2 the label and test accuracy from running CLL using true error rates for the weak signals, error rates estimated via agreement rate described in Section 3, and error rates using a maximum error rate constant set to 0.4 as the expected error for all the weak signals. CLL trained using the true $\boldsymbol{\epsilon}$ obtains the highest test accuracy compared to the other baselines, and its performance almost matches that of supervised learning in Table 2. With the true bounds, CLL slightly outperforms CLL trained using estimated and constant $\boldsymbol{\epsilon}$. More interestingly, the results in Table 1 show that our method outperforms other baselines that are strongly affected by the dependence in the weak signals. 
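For concreteness, the following sketch shows one way to generate the dependent weak signals described above; it is our reconstruction from the text (a base signal with 30% coverage and accuracy drawn from $[0.5,0.6]$, plus nine copies with 20% of their labeled entries flipped), and the random seed and NaN-for-abstain convention are our choices.

```python
import numpy as np

# Sketch (ours) of the dependent weak-signal construction for the synthetic data.
rng = np.random.default_rng(0)
n = 20000
y = rng.integers(0, 2, size=n).astype(float)

covered = rng.random(n) < 0.3                 # ~30% coverage
acc = rng.uniform(0.5, 0.6)                   # accuracy of the base signal
base = np.where(rng.random(n) < acc, y, 1.0 - y)
base[~covered] = np.nan                       # abstain outside the covered set

signals = [base]
for _ in range(9):
    copy = base.copy()
    idx = np.flatnonzero(covered)
    flip = rng.random(idx.size) < 0.2
    copy[idx[flip]] = 1.0 - copy[idx[flip]]   # flip 20% of the labeled entries
    signals.append(copy)
W = np.vstack(signals)                        # 10 strongly dependent weak signals
```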
The generative model of data programming assumes that the weak signals are independent given the true labels, but this is not the case in this setup as the weak signals are strongly dependent. Thus the conditional independence violation hurts its performance and essentially reduces it to performing a majority vote on the labels. Since our evaluation in Fig. 3 demonstrated that CLL is not very sensitive to the choice of error rate, we set the error rates $\boldsymbol{\epsilon}$ = 0.01 on the text datasets and $\boldsymbol{\epsilon}$ = $\frac{1}{K}$ on the image datasets. We choose these values because our weak signals in the text dataset tend to label few examples and have low error rates thus we prefer not to under-constrain the optimization by using high error rates values for the one-vs-all weak-signals. In contrast, our human labeled weak signals on the image datasets have high error rates hence we set the error rate value to the baseline value for one-vs-all signals. ### Real Experiments Dataset | No. classes | No. weak signals | Train Size | Test Size ---|---|---|---|--- IMDB | 2 | 10 | 29,182 | 20,392 SST-2 | 2 | 14 | 3,998 | 1,821 YELP-2 | 2 | 14 | 45,370 | 10,000 TREC-6 | 6 | 18 | 4,988 | 500 SVHN | 10 | 50 | 73,257 | 26,032 Fashion-MNIST | 10 | 50 | 60,000 | 10,000 Table 7: Summary of datasets, including the number of weak signals used for training. The data sets for our real experiments and their weak signal generation process are described below. Table 7 summarizes the key statistics about these datasets. Our code and datasets are provided here.222 https://github.com/VTCSML/Constrained-Labeling-for-Weakly-Supervised-Learning IMDB The IMDB dataset (Maas et al. 2011) is used for sentiment analysis. The data contains reviews of different movies, and the task is to classify user reviews as either positive or negative in sentiment. We provide weak supervision by measuring mentions of specific words in the movie reviews. We created a set of positive words that weakly indicate positive sentiment and negative words that weakly indicate negative sentiment. We chose these keywords by looking at samples of the reviews and selecting popular words used in them. Many reviews could contain both positive and negative keywords, and in these cases, the weak signals will conflict on their labels. We split the dataset into training and testing subsets, where any example that contains one of our keywords is placed in the training set. Thus, _the test set consists of reviews that are not labeled by any weak signal_ , making it important for the weakly supervised learning to generalize beyond the weak signals. The dataset contains 50,000 reviews, of which 29,182 are used for training and 20,392 are test examples. SST-2 The Stanford Sentiment Treebank (SST-2) is another sentiment analysis dataset (Socher et al. 2013) containing movie reviews. Like the IMDB dataset, the goal is to classify reviews from users as having either positive or negative sentiment. We use similar keyword-based weak supervision but with different keywords. We use the standard train-test split provided by the original dataset. While the original training data contained 6,920 reviews, our weak signals only cover 3,998 examples. Thus, we used the reduced data size to train our model. We use the full test set of 1,821 reviews. YELP-2 We used the Yelp review dataset containing user reviews of businesses from the Yelp Dataset Challenge in 2015. 
Like the IMDB and SST-2 dataset, the goal is to classify reviews from users as having either positive or negative sentiment. We converted the star ratings in the dataset by considering reviews above 3 stars rating as positive and negative otherwise. We used similar weak supervision generating process as in SST-2. We sampled 50,000 reviews for training and 10,000 for testing from the original data set. Our weak signals only cover 45,370 data points, thus, we used the reduced data size to train our model. TREC-6 TREC is a question classification dataset consisting of fact-based questions divided into different categories (Li and Roth 2002). The task is to classify questions to predict what category the question belongs to. We use the six-class version (TREC-6) from which we use 4,988 examples for training and 500 for testing. The weak supervision we use combines word mentions with other heuristics we defined to analyze patterns of the question and assign a class label based on certain patterns. SVHN The Street View House Numbers (SVHN) (Netzer et al. 2018) dataset represents the task of recognizing digits on real images of house numbers taken by Google Street View. Each image is a $32\times 32$ RGB vector. The dataset has 10 classes and has 73,257 training images and 26,032 test images. We define 50 weak signals for this dataset. For this image classification dataset, we augment 40 other human-annotated weak signals (four per class) with ten pseudolabel predictions of each class from a model trained on 1% of the training data. The human-annotated weak signals are nearest-neighbor classifiers where a human annotator is asked to mark distinguishing features about an exemplar image belonging to a specific class. We then calculate pairwise Euclidean distances between the pixels in the marked region across images. We convert the Euclidean scores to probabilities (soft labels for the examples) via a logistic transform. Through this process, an annotator is guiding the design of a simple one-versus-rest classifier, where images most similar to the reference image are more likely to belong to its class. Fashion-MNIST The Fashion-MNIST dataset (Xiao, Rasul, and Vollgraf 2017) represents the task of recognizing articles of clothing where each example is a $28\times 28$ grayscale image. The images are categorized into 10 classes of clothing types where each class contains 6,000 training examples and 1,000 test examples. We used the same format of weak supervision signals as in the SVHN dataset (pseudolabels and human-annotated nearest-neighbor classifiers). Models For the text analysis tasks, we use 300-dimensional GloVe vectors (Pennington, Socher, and Manning 2014) as features for the text classification tasks. Then we train a simple two-layer neural network with 512 hidden units and ReLU activation in its hidden layer. The model for the image classification tasks is a six-layer convolutional neural network model with a 3$\times$3 filter and 32 channels at each layer. We use a sigmoid function as the output layer for both models in our experiment. Thus we train using binary cross-entropy loss with the soft labels returned by CLL, which represent the probability of examples belonging to classes. Results Tables 3 and 4 list the performance of the various weakly supervised methods on text classification datasets, while Tables 5 and 6 list the performance of various weakly supervised methods on image classification datasets. 
Considering both types of accuracy, CLL is able to output labels for the training data that train high-quality models for the test set. CLL outperforms all competing methods on test accuracy on the datasets. Interestingly, on Yelp and Trec-6 datasets, CLL label accuracy is lower than that of competing baselines yet CLL still achieves superior test accuracy. We surmise that CLL label accuracy is lower than competing methods on some datasets because of the inaccuracy in the error estimates. Generally, CLL is able to learn robust labels from the weak signals, and it seems to pass this information to the learning algorithm to help it generalize on unseen examples. For example, on the IMDB dataset, we used keyword-based weak signals that only occur on the training data. The model trained using CLL labels performs better on the test set than models trained with labels learned from data programming or majority vote. CLL outperforms all competing methods on the image classification tasks. On the digit recognition task (SVHN), CLL outperforms the best compared method (average) by over $13$ percentage points for the label accuracy and $23$ percentage points on the test data. CLL is able to better synthesize information from the low-quality human-annotated signals combined with the higher-quality pseudolabel signals. ## 5 Conclusion We introduced constrained label learning (CLL), a weakly supervised learning method that combines different weak supervision signals to produce probabilistic training labels for the data. CLL defines a constrained space for the labels of the training data by requiring that the errors of the weak signals agree with the provided error estimates. CLL is fast and converges after a few iterations of gradient descent. Our theoretical analysis shows that the accuracy of our estimated labels increases as we add more linearly independent weak signals. This analysis is consistent with the intuition that the constrained-space interpretation of weak supervision avoids overcounting evidence when multiple redundant weak signals provide the same information, since they are linearly dependent. Our experiments compare CLL against other weak supervision approaches on different text and image classification tasks. The results demonstrate that CLL outperforms these methods on most tasks. Interestingly, we are able to perform well when we train CLL using a worst case uniform error estimate for the weak signals. This shows that CLL is robust and not too sensitive to inaccuracy in the error estimates. In future work, we aim to theoretically analyze the behavior of this approach in such settings where the error rates are unreliable, with the hope that theoretical understanding will suggest new approaches that are even more robust. ## Acknowledgments Arachie and Huang were both supported by a grant from the U.S. Department of Transportation, University Transportation Centers Program to the Safety through Disruption University Transportation Center (69A3551747115). ## References * Arachie and Huang (2019a) Arachie, C.; and Huang, B. 2019a. Adversarial Label Learning. In _Proc. of the AAAI Conf. on Artif. Intelligence_ , 3183–3190. * Arachie and Huang (2019b) Arachie, C.; and Huang, B. 2019b. Stochastic Generalized Adversarial Label Learning. _arXiv preprint arXiv:1906.00512_ . * Bach et al. (2019) Bach, S. H.; Rodriguez, D.; Liu, Y.; Luo, C.; Shao, H.; Xia, C.; Sen, S.; Ratner, A.; Hancock, B.; and Alborzi, H. 2019. Snorkel DryBell: A case study in deploying weak supervision at industrial scale. 
In _Intl. Conf. on Manag. of Data_ , 362–375. * Balsubramani and Freund (2015) Balsubramani, A.; and Freund, Y. 2015. Scalable semi-supervised aggregation of classifiers. In _Advances in Neural Information Processing Systems_ , 1351–1359. * Bunescu and Mooney (2007) Bunescu, R. C.; and Mooney, R. 2007. Learning to extract relations from the web using minimal supervision. In _Proceedings of the Annual Meeting of the Association for Computational Linguistics_ , volume 45, 576–583. * Carpenter (2008) Carpenter, B. 2008. Multilevel Bayesian models of categorical data annotation. _Unpublished manuscript_ 17(122): 45–50. * Chen et al. (2014) Chen, L.-C.; Fidler, S.; Yuille, A. L.; and Urtasun, R. 2014. Beat the mturkers: Automatic image labeling from weak 3d supervision. In _Proc. of the IEEE Conf. on Comp. Vis. and Pattern Recognition_ , 3198–3205. * Dawid and Skene (1979) Dawid, A. P.; and Skene, A. M. 1979. Maximum likelihood estimation of observer error-rates using the EM algorithm. _Applied Statistics_ 20–28. * Druck, Mann, and McCallum (2008) Druck, G.; Mann, G.; and McCallum, A. 2008. Learning from labeled features using generalized expectation criteria. In _Proceedings of the 31st Annual Intl. ACM SIGIR Conf. on Research and Dev. in Information Retrieval_ , 595–602. * Duchi, Hazan, and Singer (2011) Duchi, J.; Hazan, E.; and Singer, Y. 2011. Adaptive subgradient methods for online learning and stochastic optimization. _Journal of Machine Learning Research_ 12(Jul): 2121–2159. * Gao, Barbier, and Goolsby (2011) Gao, H.; Barbier, G.; and Goolsby, R. 2011. Harnessing the crowdsourcing power of social media for disaster relief. _IEEE Intelligent Systems_ 26(3): 10–14. * Halpern, Horng, and Sontag (2016) Halpern, Y.; Horng, S.; and Sontag, D. 2016. Clinical Tagging with Joint Probabilistic Models. In _Proceedings of the Conference on Machine Learning for Healthcare_ , 209–225. * Hoffmann et al. (2011) Hoffmann, R.; Zhang, C.; Ling, X.; Zettlemoyer, L.; and Weld, D. S. 2011. Knowledge-based weak supervision for information extraction of overlapping relations. In _Proc. of the Annual Meeting of the Assoc. for Comp. Linguistics: Human Language Tech._ , 541–550. * Jaffe et al. (2016) Jaffe, A.; Fetaya, E.; Nadler, B.; Jiang, T.; and Kluger, Y. 2016. Unsupervised ensemble learning with dependent classifiers. In _Artificial Intelligence and Statistics_ , 351–360. * Karger, Oh, and Shah (2011) Karger, D. R.; Oh, S.; and Shah, D. 2011. Iterative learning for reliable crowdsourcing systems. In _Advances in Neural Information Processing Systems_ , 1953–1961. * Khetan, Lipton, and Anandkumar (2017) Khetan, A.; Lipton, Z. C.; and Anandkumar, A. 2017. Learning from noisy singly-labeled data. _arXiv preprint arXiv:1712.04577_ . * Li and Roth (2002) Li, X.; and Roth, D. 2002. Learning question classifiers. In _Proceedings of the 19th International Conference on Computational Linguistics-Volume 1_ , 1–7. Association for Computational Linguistics. * Liu, Peng, and Ihler (2012) Liu, Q.; Peng, J.; and Ihler, A. T. 2012. Variational inference for crowdsourcing. In _Advances in Neural Information Processing Systems_ , 692–700. * Maas et al. (2011) Maas, A. L.; Daly, R. E.; Pham, P. T.; Huang, D.; Ng, A. Y.; and Potts, C. 2011\. Learning word vectors for sentiment analysis. In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1_ , 142–150. Association for Computational Linguistics. * Madani, Pennock, and Flake (2005) Madani, O.; Pennock, D. 
M.; and Flake, G. W. 2005. Co-validation: Using model disagreement on unlabeled data to validate classification algorithms. In _Advances in Neural Information Processing Systems_ , 873–880. * Mann and McCallum (2008) Mann, G. S.; and McCallum, A. 2008. Generalized expectation criteria for semi-supervised learning of conditional random fields. _Proc. of the Annual Meeting of the Assoc. for Comp. Linguistics: Human Language Tech._ 870–878. * Mann and McCallum (2010) Mann, G. S.; and McCallum, A. 2010. Generalized expectation criteria for semi-supervised learning with weakly labeled data. _Journal of Machine Learning Research_ 11: 955–984. * Mintz et al. (2009) Mintz, M.; Bills, S.; Snow, R.; and Jurafsky, D. 2009. Distant supervision for relation extraction without labeled data. In _Proc. of the Annual Meeting of the Assoc. for Comp. Ling._ * Netzer et al. (2018) Netzer, Y.; Wang, T.; Coates, A.; Bissacco, A.; Wu, B.; and Ng, A. 2018. The Street View House Numbers (SVHN) Dataset. Technical report, Accessed 2016-08-01.[Online]. * Pennington, Socher, and Manning (2014) Pennington, J.; Socher, R.; and Manning, C. 2014. GloVe: Global vectors for word representation. In _Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , 1532–1543. Doha, Qatar. * Platanios et al. (2020) Platanios, E. A.; Al-Shedivat, M.; Xing, E.; and Mitchell, T. 2020. Learning from Imperfect Annotations. _arXiv preprint arXiv:2004.03473_ . * Platanios, Blum, and Mitchell (2014) Platanios, E. A.; Blum, A.; and Mitchell, T. 2014. Estimating accuracy from unlabeled data. In _Proceedings of the Thirtieth Conf. on Uncertainty in Artificial Intelligence_ , 682–691. * Platanios, Dubey, and Mitchell (2016) Platanios, E. A.; Dubey, A.; and Mitchell, T. 2016. Estimating accuracy from unlabeled data: A Bayesian approach. In _International Conference on Machine Learning_ , 1416–1425. * Ratner et al. (2018) Ratner, A.; Hancock, B.; Dunnmon, J.; Goldman, R.; and Ré, C. 2018. Snorkel metal: Weak supervision for multi-task learning. In _Proceedings of the Second Workshop on Data Management for End-To-End Machine Learning_ , 1–4. * Ratner et al. (2017) Ratner, A. J.; Bach, S. H.; Ehrenberg, H. R.; and Ré, C. 2017. Snorkel: Fast training set generation for information extraction. In _Proceedings of the 2017 ACM Intl. Conf. on Management of Data_ , 1683–1686. ACM. * Ratner et al. (2016) Ratner, A. J.; De Sa, C. M.; Wu, S.; Selsam, D.; and Ré, C. 2016. Data programming: Creating large training sets, quickly. In _Advances in Neural Info. Proc. Sys._ , 3567–3575. * Riedel, Yao, and McCallum (2010) Riedel, S.; Yao, L.; and McCallum, A. 2010. Modeling relations and their mentions without labeled text. In _Joint Euro. Conf. on Mach. Learn. and Knowledge Disc. in Databases_. * Schapire et al. (2002) Schapire, R. E.; Rochery, M.; Rahim, M.; and Gupta, N. 2002. Incorporating prior knowledge into boosting. In _Intl. Conf. on Machine Learning_ , volume 2, 538–545. * Socher et al. (2013) Socher, R.; Perelygin, A.; Wu, J.; Chuang, J.; Manning, C. D.; Ng, A. Y.; and Potts, C. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In _Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing_ , 1631–1642. * Steinhardt and Liang (2016) Steinhardt, J.; and Liang, P. S. 2016. Unsupervised risk estimation using only conditional independence structure. In _Adv. in Neural Information Processing Systems_ , 3657–3665. 
* Xiao, Rasul, and Vollgraf (2017) Xiao, H.; Rasul, K.; and Vollgraf, R. 2017. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. _arXiv preprint arXiv:1708.07747_ . * Xu, Schwing, and Urtasun (2014) Xu, J.; Schwing, A. G.; and Urtasun, R. 2014. Tell me what you see and I will show you where it is. In _Proc. of the IEEE Conf. on Computer Vis. and Pattern Recog._ , 3190–3197. * Yao, Riedel, and McCallum (2010) Yao, L.; Riedel, S.; and McCallum, A. 2010. Collective cross-document relation extraction without labelled data. In _Proc. of the Conf. on Empirical Methods in Natural Language Processing_ , 1013–1023. * Zhou et al. (2015) Zhou, D.; Liu, Q.; Platt, J. C.; Meek, C.; and Shah, N. B. 2015. Regularized minimax conditional entropy for crowdsourcing. _arXiv preprint arXiv:1503.07240_ . * Zhou and He (2016) Zhou, Y.; and He, J. 2016. Crowdsourcing via Tensor Augmentation and Completion. In _International Joint Conference on Artificial Intelligence_ , 2435–2441.
CHAPTER: INTRODUCTION § MOTIVATION Let's place ourselves in the position of an engineer whose mission is to manage a factory and ask ourselves the following question: What do I have to do to improve the plant's production as much as possible? Which means: * To guarantee a predefined quality of production at all times. * To comply at all times with the safety regulations and legislation of the country in which the plant is located * To maximize the profits generated by the plant throughout its life. The set of answers to this question could define a research field called real-time optimization (RTO). The ultimate goal of RTO is to build an interface that facilitates the work of this engineer as much as possible while providing a high level of guarantees on the quality of the plant's monitoring. As in some cases such an interface could manage the piloting of a plant in an autonomous way, one will simply call it “Autopilot” (see Figure <ref>). RTO's goal could be to build autopilots to help engineers to drive plants. § THE AUTOPILOT ENVIRONMENT Before thinking about how the RTO methods work, it is necessary to define what they interact with, i.e. the three other objects in Figure <ref>: §.§ The plant The plant is a system whose inputs $\bm{x}_p\in\amsmathbb{R}^{n_{x_p}}$ are decision variables $\bm{u}\in\amsmathbb{R}^{n_u}$ and disturbances $\bm{d}_p\in\amsmathbb{R}^{n_{d_p}}$: \begin{equation} \bm{x}_p := \left[\bm{u}^{\rm T}, \bm{d}_p^{\rm T}\right]^{\rm T}. \end{equation} More precisely: * The decision variables $\bm{u}$ are manipulable, usually the setpoints of the plant's controllers, and continuous in $\amsmathbb{R}^{n_u}$. In this work we do not consider cases where decision variables are binary or integer variables. * The disturbances $\bm{d}_p$ are imposed to the plant by the environment. These disturbances can be, for example, variations in the quality of the raw materials that feed the plant, the effects of the weather on the plant, etc. The outputs of the plant $\bm{y}_p\in\amsmathbb{R}^{n_y}$ gather all the measured variables. The plant is considered to be operating continuously (i.e. processing materials without interruption), and to be designed to operate at steady-state (SS). The relationship between the inputs $\bm{x}_p$ and the outputs $\bm{y}_p$ at SS is materialized with the function $\bm{f}_p$: \begin{equation} \bm{y}_p := \bm{f}_p(\bm{x}_p). \end{equation} The measures $\widehat{\bm{y}}_p$ of $\bm{y}_p$ are typically polluted by measurement errors $\bm{\epsilon}_y$, and in this work one considers that these errors follow an unbiased normal random distribution: \begin{align} \label{eq:1___3_Mesures_Normal_Uncertainty} \widehat{\bm{y}}_p := \ & \bm{y}_p + \bm{\epsilon}_y, & \bm{\epsilon}_y \sim \mathcal{N}(\bm{0},\bm{N}_y), \end{align} where $\bm{N}_y\in\amsmathbb{R}^{n_y \times n_y}$ is the covariance matrix of the measurement errors $\bm{\epsilon}_y$ (which can itself be a function of the inputs $\bm{x}_p$ of the plant). In short, the plant receives disturbances $\bm{d}_p$ from the environment, receives instructions $\bm{u}$ either from the engineers or from the autopilot, and returns measurements $\widehat{\bm{y}}_p$ to the engineers and to the autopilot. §.§ The environment The environment gathers all the elements that are external to the plant and that cannot be manipulated either by the engineers or by the autopilot. 
Two distinct elements in the environment have been identified: The first is the disturbances $\bm{d}_p$ that affect the plant, which have already been discussed in the section introducing the plant. The second one gathers the operating cost of the plant which can be materialized with a function $\phi(\bm{u},\bm{y}_p)\in\amsmathbb{R}$, and operating constraints that can be materialized with a function $\bm{g}(\bm{u},\bm{y}_p)\in\amsmathbb{R}^{n_g}$. More specifically, operational costs and constraints are functions based on (i) agreements with suppliers and customers, (ii) safety and environmental regulations, or (iii) empirical limitations aimed at extending the life of the plant. Essentially, the Environment acts directly on the plant through disturbances $\bm{d}_p$, and guides the decision making of the Engineers and/or of the Autopilot by providing the functions $\phi$ and $\bm{g}$. §.§ The engineers On their side, the Engineers receive the measurements $\widehat{\bm{y}}_p$ from the plant and the functions $\phi$ and $\bm{g}$ from the Environment. They can then use their technical knowledge to build a model $\bm{f}$ of the functions $\bm{f}_p$ to guide their decisions. The inputs $\bm{x}$ of this model gather the decision variables $\bm{u}$ and the subset of the disturbances $\bm{d}_p$ which is part of the measures $\widehat{\bm{y}}_p$. It is thus necessary to distinguish the measured disturbances $\bm{d}^{\prime}_p$ from those not measured $\bm{d}^{\prime\prime}_p$: \begin{equation} \bm{d}_p := \left[{\bm{d}^{\prime}_p}^{\rm T}, {\bm{d}^{\prime\prime}_p}^{\rm T} \right]^{\rm T}. \end{equation} The model of the plant is therefore: \begin{align} \bm{y} := \ & \bm{f}(\bm{x},\bm{\theta}), & \text{where: } \ \bm{x} := \ & \left[{\bm{u}}^{\rm T}, {\bm{d}^{\prime}_p}^{\rm T} \right]^{\rm T}, \end{align} where $\bm{\theta}\in\amsmathbb{R}^{n_{\theta}}$ gathers all the parameters of the model, e.g. the values of the known but unmeasured perturbations, the empirical physical properties of the transformed materials, etc. On the basis of this model, the measurements $\widehat{\bm{y}}_p$ and the functions $\phi$ and $\bm{g}$, the Engineers have to choose the appropriate inputs $\bm{u}$ to be sent to the plant. But they can also give the model to the Autopilot — if necessary parameterize the Autopilot — and let it handle $\bm{u}$. In return, the Autopilot should return information to the engineers to facilitate the plant's supervision. § STATE OF THE ART In order to facilitate the explanation of the state of the art, it has been divided into three areas * Theoretical RTO, which gathers all the contributions that focus on the identification of the optimal operating point of the plant when the environment and the experimental conditions are ideal. * Practical RTO, which gathers all the contributions that focus on maintaining of the properties of theoretical RTO methods when the environment and the experimental conditions are no longer ideal. * Efficient RTO, which gathers all the contributions that question the fundamental principles of "classical" RTO methods in order to improve their performances. These improvements are generally associated with an increase in the complexity of the RTO algorithm. §.§ Theoretical RTO §.§.§ The theoretical RTO's problem formulation Let's consider the “ideal” conditions where: * The environment does not produce any disturbance $\bm{d}_p = \emptyset$. 
So the functions of the plant and of the model can be reduced to:
\begin{align*} \bm{y}_p = \ & \bm{f}_p(\bm{u}), & \bm{y} = \ & \bm{f}(\bm{u},\bm{\theta}). \end{align*}
* There are no measurement errors: $\widehat{\bm{y}}_p=\bm{y}_p.$ More precisely, one considers that the values $(\phi_p,\bm{g}_p,\bm{f}_p)$ and their gradients $(\nabla_{\bm{u}} \phi_p,\nabla_{\bm{u}}\bm{g}_p,\nabla_{\bm{u}}\bm{f}_p)$ can be estimated without error $\forall \bm{u}\in\amsmathbb{R}^{n_u}$, where:
\begin{align*} \phi_p(\bm{u}) := \ & \phi(\bm{u},\bm{f}_p(\bm{u})),\\ \bm{g}_p(\bm{u}) := \ & \bm{g}(\bm{u},\bm{f}_p(\bm{u})). \end{align*}
Therefore, the theoretical RTO problem consists in (objective 1) solving the following nonlinear optimization problem (NLP):
\begin{align} \bm{u}_{p}^{\star}:= \operatorname{arg} \underset{\bm{u}}{\operatorname{min}} \quad & \phi_p(\bm{u}) \quad \text{s.t.} \quad \bm{g}_p(\bm{u}) \leq \bm{0}, \nonumber \end{align}
while (objective 2) evaluating the plant functions a minimal number of times, and (objective 3) avoiding constraint violations during those evaluations. It is important to understand that an evaluation of these functions is an experiment to be conducted in the real world on the actual plant. Usually such an experiment is costly in time and money (hence objective 2) and presents risks for the plant (hence objective 3). §.§.§ Achieving objective 1 The first RTO method was proposed in 1970 in [22] and is known as: the two-step approach (TS). As the name suggests, this method consists of repeating two actions until convergence. (i) An update of the parameters $\bm{\theta}$ of the model based on the most recent measurements of the plant's inputs and outputs. (ii) An updated-model-based optimization to identify the point $\bm{u}$ around which the next experiments must be conducted in order to re-identify the parameters $\bm{\theta}$. Unless the error between the functions $\bm{f}$ and $\bm{f}_p$ is of a parametric nature, i.e. $\exists \bm{\theta}\in\amsmathbb{R}^{n_{\theta}}$ such that $\bm{f}(\bm{u},\bm{\theta})=\bm{f}_p(\bm{u})$, it has been shown that TS does not generally converge on the optimal decisions [67]. The first theoretical RTO method to introduce structural corrections was proposed in 1979 in [67] and is known as: integrated system optimization and parameter estimation (ISOPE). This method consists in adding one action to TS. The three actions of ISOPE are: (i) An update of the parameters $\bm{\theta}$ of the model on the basis of the most recent measurements of the inputs and outputs of the plant. (ii - new) The identification of an artificial parameter (called “a modifier”) bringing an affine correction to the cost function of the model. (iii) An updated-model-based optimization to identify the point $\bm{u}$ around which the next experiments must be conducted in order to re-identify the parameters $\bm{\theta}$ and the modifier $\nabla_{\bm{u}}\phi_p$. It has been shown that, provided that the optimal decision $\bm{u}_p^{\star}$ does not activate any constraints, ISOPE guarantees that $\bm{u}_p^{\star}$ is identified upon convergence. The first theoretical RTO method to guarantee unconditionally that if convergence is observed then it can only be on the optimal decisions $\bm{u}_p^{\star}$ was proposed in 2005 in [31] and is known as: iterative setpoint optimization (ISO). This method drops the idea of updating the model parameters to focus only on updating modifiers that are now applied to both the cost and constraint functions.
ISO works in two steps (i) An update of the modifiers to bring an affine correction to the costs and constraints functions of the model. (ii) An updated-model-based optimization to identify the point $\bm{u}$ around which the next experiments must be conducted in view of the re-identification of the modifiers which requires the estimates of $\nabla_{\bm{u}}\phi_p$, $\bm{g}_p$ and $\nabla_{\bm{u}}\bm{g}_p$. The first theoretical RTO method using a filter to increase the chances converging on the optimal decision $\bm{u}_p^{\star}$ was proposed in 2009 in [43] and is known as: modifier adaptation (MA). This method is simply ISO to which one adds a filter either on the update of the modifiers or on the movement in the input space $\bm{u}$. The first theoretical RTO method to provide convergence properties similar to those of MA via an indirect correction of the costs and constraints of the model was proposed in 2009 in [43] and is known as output modifier adaptation (MAy). This method is mentioned in [43, 42] and is fully analyzed in [58]. MAy works exactly like MA with the difference that the modifiers are used to apply an affine correction to the function $\bm{f}$. And as demonstrated in [58] this is sufficient to guarantee optimality upon convergence. The first theoretical RTO method to provide the guarantee that it is always possible to converge on the optimal decision $\bm{u}_p^{\star}$ was proposed in 2013 in [29] and can be named as proposed in [59]: modifier adaptation with convexified problem (MAc). This method consists of simply applying MA while replacing the model by a convex approximation of itself. §.§.§ Achieving objective 2 All the methods mentioned in the previous section require that at each iteration the values $(\phi_p,\bm{g}_p,\bm{f}_p)$ and the gradients $(\nabla_{\bm{u}}\phi_p,\nabla_{\bm{u}}\bm{g}_p,\nabla_{\bm{u}}\bm{f}_p)$are measured on the plant. Although it is considered that these values can be measured in a perfect way, measuring them accurately presents an irreducible minimal cost in terms of the number of experiments to be performed. For example, to assess the values $(\phi_p,\bm{g}_p,\bm{f}_p)$ at a given point requires only one experiment whereas to evaluate $(\nabla_{\bm{u}}\phi_p,\nabla_{\bm{u}}\bm{g}_p,\nabla_{\bm{u}}\bm{f}_p)$ requires at least $n_u+1$ experiments (if one applies the most basic finite difference method). So, if the problem's dimension is high, i.e. $n_u \gg 1$, then each iteration is likely to be costly in terms of number of experiments and it is therefore of interest to find a way to reduce this number. This research direction has been studied in [17] where it is suggested to evaluate the gradient only in privileged directions. This method is improved in [75] where a strategy that allows these directions to be adapted in real time to minimize optimality losses (due to incomplete gradient evaluations) at convergence is proposed. To evaluate these directions, these methods require engineers to provide a relevant estimate of the set $\mathbb{\Theta}$ in which the parameters of the model $\bm{\theta}$ must be such that $\forall \bm{u}\in\amsmathbb{R}^{n_u}$, $\forall \bm{\theta}\in \mathbb{\Theta}$: $\nabla_{\bm{u}}\phi(\bm{u},\bm{f}(\bm{u},\bm{\theta})) \approx \nabla_{\bm{u}}\phi_p(\bm{u})$ and $\nabla_{\bm{u}}\bm{g}(\bm{u},\bm{f}(\bm{u},\bm{\theta})) \approx \nabla_{\bm{u}}\bm{g}_p(\bm{u})$. 
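To make the experimental cost of a gradient evaluation concrete, the following minimal Python sketch (the toy plant and all numbers are purely illustrative and not taken from the cited works) counts the plant evaluations consumed by a forward finite-difference estimate: one experiment at the nominal point plus one per decision variable.
\begin{verbatim}
import numpy as np

def plant_cost(u):
    # Illustrative "plant" cost phi_p(u); in reality each call is one experiment.
    return (u[0] - 1.0) ** 2 + 2.0 * (u[1] + 0.5) ** 2 + u[0] * u[1]

def forward_difference_gradient(phi_p, u, h=1e-3):
    """Estimate the plant gradient at u with exactly n_u + 1 plant evaluations."""
    u = np.asarray(u, dtype=float)
    n_u = u.size
    phi_0 = phi_p(u)                       # 1 experiment at the nominal point
    grad = np.zeros(n_u)
    for i in range(n_u):                   # n_u additional perturbed experiments
        u_pert = u.copy()
        u_pert[i] += h
        grad[i] = (phi_p(u_pert) - phi_0) / h
    return grad, n_u + 1

grad, n_experiments = forward_difference_gradient(plant_cost, [0.2, -0.1])
print(grad, n_experiments)                 # 3 experiments for n_u = 2
\end{verbatim}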
The directional-gradient methods of [17, 75] will therefore be systematically efficient when the modeling error is purely parametric and the set $\mathbb{\Theta}$ is small and contains the “plant's parameters”, should they exist. Otherwise the convergence performance of these methods is less clear. Another way to get around this problem is proposed in [69, 44]. The idea is not to measure the plant gradients at each iteration but to estimate them with the $n_u+1$ results of the most recent experiments. This reasoning can be pushed further by noting that the measurements of these gradients are used to compute modifiers whose estimation can be assimilated to the resolution of an optimization problem associated with the estimation of the Lagrangian of the plant. It is on the basis of this observation that [55, 70] propose to simply replace the repeated plant gradient evaluation by a two-step procedure to be performed at each iteration: (Step 1) Choose the new operating point by computing the minimum of the updated model. (Step 2) Compute the new modifiers by minimizing the estimated value of the plant Lagrangian that they imply. §.§.§ Achieving objective 3 It is clear that being able to converge on the minimum of the plant is an essential property for a theoretical RTO method, but to do so without destroying the plant on the way is also very important. Ensuring the feasibility of each iteration is a challenge that currently has no satisfactory solution. The study proposed in [8, 9] shows that one way to ensure that no experiment violates the constraints of the plant can be to use limitations on the size of the iterations based on the plant's Lipschitz constants. Another study [45] shows that if the constraints of the model are functions that are, at all points, more convex than the functions of the plant, then each iteration produced by MA will be feasible. Of course, in practice neither the plant's Lipschitz constants nor such a model is a priori available. Naturally, it is possible to replace the Lipschitz constants by overestimates of their values or to use a model so convex that it is undoubtedly more convex than the plant throughout the input space. However, in such cases, applying the methods proposed by [8, 9] and [45] would give sequences of iterations that would be feasible but that would converge so slowly on $\bm{u}_p^{\star}$ that it would not be practical. This means that (in our opinion) these two methods are not really satisfactory in practice. To limit the feasibility issue, it has been proposed to combine the RTO method with a trust-region type of constraint [16] to keep consecutive iterations close enough to each other so that they “secure each other” [7, 36, 20]. Moreover, in practice it is common to add safety margins to the constraints used by the RTO method so that small violations do not result in violations of the real constraints [84]. Of course, the larger the back-offs are, the lower the chance that the constraints will be violated; the counterpart is a loss of optimality at convergence. [79] proposes a method to interactively re-evaluate the back-off used to obtain a better compromise between the risk of violating the constraints and the loss of optimality at convergence. Finally, if the parameters of the model used are associated with an uncertainty of the type $\bm{\theta}\in\mathbb{\Theta}$, where $\mathbb{\Theta}$ is a set of possible parameters, then it is possible to consider using robust optimization methods of the type of those discussed in [78].
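As an illustration of the back-off idea discussed above, the following sketch (an assumption-laden toy example, not an implementation from [84] or [79]; the model, constraint and numbers are invented) tightens the model constraint by a margin $b \geq 0$ before solving the model-based optimization.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

# Illustrative model cost and constraint (invented for this example):
phi = lambda u: (u[0] - 2.0) ** 2 + (u[1] - 1.0) ** 2
g   = lambda u: np.array([u[0] + u[1] - 2.5])        # model constraint g(u) <= 0

def solve_with_backoff(b):
    """Model-based optimization with the constraint tightened to g(u) <= -b, b >= 0."""
    cons = {"type": "ineq", "fun": lambda u: -(g(u) + b)}   # scipy expects fun(u) >= 0
    return minimize(phi, x0=np.zeros(2), constraints=[cons]).x

u_no_margin = solve_with_backoff(np.array([0.0]))    # optimum sits on the constraint
u_margin    = solve_with_backoff(np.array([0.2]))    # more conservative, safer point
print(u_no_margin, u_margin)
\end{verbatim}
The larger the back-off, the further the computed operating point stays from the real constraint, at the price of a poorer cost, which is exactly the compromise discussed above.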
§.§ Practical RTO §.§.§ The practical RTO's problem formulation Let's consider the “un-ideal” conditions where: * The environment can produce disturbances $\bm{d}_p \neq \emptyset$. So the functions of the plant and of the model are: \begin{align*} \bm{y}_p = \ & \bm{f}_p(\bm{x}_p), & \bm{y} = \ & \bm{f}(\bm{x},\bm{\theta}). \end{align*} * There can be measurement errors: therefore, the estimates of $\{\phi_p,\bm{g}_p,\bm{f}_p\}$ and $\{\nabla_{\bm{u}} \phi_p,$ $\nabla_{\bm{u}}\bm{g}_p,\nabla_{\bm{u}}\bm{f}_p\}$ are inaccurate. * The cost and constraint functions can change over time. Therefore, the problem of practical RTO is to (objective 1) solve the following NLP: \begin{align*} \bm{u}_{p}^{\star}:= \operatorname{arg} \underset{\bm{u}}{\operatorname{min}} \quad & \phi_p(\bm{x}_p) \quad \text{s.t.} \quad \bm{g}_p(\bm{x}) \leq \bm{0}, \end{align*} while (objective 2) using plant function evaluations only a minimal number of times, and (objective 3) avoiding violating the constraints during those evaluations. To achieve these three objectives, the idea is generally to apply theoretical RTO methods to which features are added to reduce the effects of measurement errors and disturbances. This combination of theoretical RTO methods with “additional functionalities” can be assimilated into the function of the autopilot mentioned above (see Figure <ref>). The following section is dedicated to the “additional functionalities” that have been proposed so far. §.§.§ Management of measurement noise and disturbances To reduce the sensitivity of the gradient estimation to measurement noise, it has been proposed to base it on a larger number of measurements. For example it is conceivable to use a higher order finite difference method. Based on a similar idea, [32] proposed to use quadratic approximations to “combine” a large number of results from experiments performed in a large enough domain of the input space to filter out the effects of measurement noise. Also, it has been suggested to use non-linear regression methods such as Gaussian process regression [19] or neural networks [47] to perform such filtering. Disturbances that affect a plant designed to operate at steady-state (SS) can be classified into three categories. Disturbances that are (i) slow, e.g. a degradation, (ii) fast but rare, e.g. a step on the quality of materials used, and (iii) fast and frequent, e.g. a rapid succession of steps.[If the plant is designed to operate at steady state and is subject to other types of disturbances, it will indeed never reach steady state. Therefore the primary concern should be to improve the design and/or reduce disturbances.] A disturbance is said to be fast (or slow) if the transition time between two stable values of the disturbance is significantly smaller (or larger) than the largest time constant of the undisturbed plant $\tau_{max}$. Concerning the disturbances of type (i): If the plant is only subject to disturbances of type (i), then one way to handle them is to ignore them by considering that any experiment is completed after a fixed duration $t_{chosen}>\tau_{max}$. After a duration $t_{chosen}$ the measurements made on the plant can then be considered as being steady-state measurements (although this is not the actual case due to the slow disturbances). This approach is proposed and illustrated in [10]. Concerning the disturbances of type (ii): If the plant is subject to (ii) then these are implicitly handled by any iterative RTO method such as ISO, MA, MAy, etc. 
Indeed, a step on a perturbation during an iteration of one of these methods would only make the information used during this same iteration false. So, at the end of this iteration there is a risk that a wrong decision is taken. However, as these methods do not use memory, the following iterations will be executed correctly and this perturbation will not have any more harmful effect on the decision making. So, disturbances of type (ii) are to some extent managed. However, if the disturbances are measured and taken into account in the decision making process, then one can avoid these risky iterations and make the decision sequence more relevant as suggested in [56]. Concerning the disturbances of type (iii): Finally, if the plant is subject to (iii), then it is unlikely to reach a steady state. In this case, all of the methods discussed in this thesis that aim to identify the best possible stable state for the plant may not be suitable. Nevertheless, it is possible to extract principles, ideas and concepts from these methods and apply them to other more appropriate methods. For example, it is possible to combine economic model predictive control (EMPC-[23]) with MAy as suggested in [83, 26, 82]. §.§ Towards efficient RTO In addition to the theoretical and practical RTO objectives, it is possible to consider performance objectives. One will not attempt to define precisely what the performance of an RTO method is because, as one will see, there are multiple ways of understanding it. One can try to reduce the time required to converge on $\bm{u}_p^{\star}$ by replacing for each iteration the waiting time of the plant stabilization by a shorter fixed waiting time. As a result, it is no longer measurements of the plant's SS that are used to estimate its values and gradients, but transient data. This idea initially applied to TS [4, 73, 48], has been also applied to MA [30, 71]. Although it has been shown that using transient measures can accelerate convergence, [18] shows that such an approach presents risks that can be mitigated by using dynamic models instead of static ones. Indeed, since the measurements are now transient, they can only be pertinently compared to the predictions of a dynamical model. In addition to these risks, it is clear that such approaches increase the sensitivity of any RTO methods to measurement noise, since the noise filtering is obviously reduced to its minimum. Large-scale plant models can generally be represented as networks of submodels. If these submodels are subject to uncertainties, then their effects generally tend to spread to the rest of the network. For example, the inaccurate outputs of an inaccurate submodel can be the inputs of a correct submodel. But since the latter is not evaluated at the correct inputs, its outputs are also incorrect, and so on. This type of error diffusion can be stopped by correcting each sub-model individually as proposed in [68] in the context of an ISOPE implementation. In the context of adapting this approach to MA, it has been shown that in addition to reducing the diffusion of errors in the network, the individual correction of each submodel allows a parallelization of the computations that can enable the distribution of the computational load and potentially make some parts of a model private [50, 72]. Basic RTO methods use very simple correction functions. Be it ISOPE, ISO, or MA, all use affine correction functions affecting the model in its entire input space. 
Locally, an affine correction based on local measurements of values and gradients makes sense. Extending it to the whole input space is questionable. Indeed, the experimental results give local information about the plant, while the model remains the only global data available about it. So, ideally the updated model should only be modified in the vicinity of the experimented points. To achieve this objective, the only proposal that has been made is to use non-linear regression methods such as Gaussian process regression [19, 20] (even though these works do not state explicitly the objective they meet, namely to use local and global data for what they are). The outputs of the vast majority of RTO methods are set-points for the controllers of the plant. However, there are other options; indeed, [46, 24] suggest that the output of an ideal RTO system should be a control structure leading to optimal long-term performance. For the moment, only [6] provides an RTO method for tuning the parameters of the plant's controllers. The majority of RTO methods do not have a stopping mechanism. For example, if MA is applied and the sequence of experiments has converged on $\bm{u}_p^{\star}$, then the gradient of the plant will be constantly re-calculated at $\bm{u}_p^{\star}$. Although these re-evaluations lead to a form of sub-optimality, because one is less often at $\bm{u}_p^{\star}$, they allow any event taking place after the identification of $\bm{u}_p^{\star}$ to be handled. However, instead of perpetually perturbing the plant in case something happens, [88, 54] suggest introducing a stopping mechanism when convergence is detected and a restart mechanism when an event occurs. To detect such an event, they suggest using process monitoring theory (e.g. fault detection methods) to detect significant changes in plant behavior and trigger decision making. § OBJECTIVES AND PLAN OF THE THESIS The objective of this thesis is to build an autopilot allowing the automation of a large number of decisions that the engineers supervising a plant have to make, with an exclusive focus on the search for performance. The remainder of this work is divided into six chapters that can be summarized as follows: Chapter 2: In this chapter, a reformulation of a part of the history of the development of RTO methods, in which several of our contributions are integrated, is proposed. This chapter ends with the proposal of a new theoretical RTO method that offers better capabilities of convergence on optimal decisions than most other methods of the same family. Chapter 3: Starting from the method proposed in chapter 2, one exposes a weakness common to any RTO method using a filter and one explains how to correct this weakness. The result of this chapter is a new theoretical RTO method combining the efficiency of the one built in chapter 2 with new safety features. (This chapter is an extension of our articles [59, 60, 62].) Chapter 4: This chapter is dedicated to introducing what the structure of an ideal autopilot could be. The functions composing this structure as well as the way they interact with each other are discussed. And in order to make the message as clear as possible, one illustrates the functioning of a very simplified version of this structure, which is called simple autopilot for steady-state processes (S-ASP), on the Williams-Otto reactor, which is a benchmark case study in the RTO community.
Chapter 5: In this chapter, one starts from the method proposed in chapter 3 and one tries to push its capabilities further by improving the way the data obtained at the end of each experiment is used. One proposes here to exploit the structure of the model to apply corrections not only to the outputs of the model but directly to variables interconnecting subparts of it. Empirical studies as well as an implementation on the Tennessee Eastman challenge process are made to demonstrate the interest of such approaches. (This chapter is an extension of our articles [58, 61].) Chapter 6: In this chapter, one discusses how it would be possible to exploit the data obtained from the whole history of the experiments. Also, one proposes an improved version of the S-ASP that is called autopilot for steady-state processes (ASP) and one illustrates its functioning on one case study: the Williams-Otto plant. Chapter 7: This chapter concludes this thesis and points towards new research directions that may be relevant.
CHAPTER: AN RTO ALGORITHM THAT CONVERGES
§ THE PROBLEM TO SOLVE
The problem addressed in this chapter is the theoretical RTO problem presented in section <ref>. In short, one wants to identify
\begin{align} \bm{u}_{p}^{\star}:= \operatorname{arg} \underset{\bm{u}}{\operatorname{min}} \quad & \phi_p(\bm{u}) := \phi(\bm{u},\bm{f}_p(\bm{u})) \quad \text{s.t.} \quad \bm{g}_p(\bm{u}) := \bm{g}(\bm{u},\bm{f}_p(\bm{u})) \leq \bm{0}, \label{eq:2_2___1_Plant_PB} \end{align}
while using only a minimal number of assessments of the values $(\phi_p,\bm{g}_p,\bm{f}_p)$ and gradients $(\nabla_{\bm{u}} \phi_p,\nabla_{\bm{u}}\bm{g}_p,\nabla_{\bm{u}}\bm{f}_p)$. In addition to these measurements, a model is available that provides an estimate, $\bm{f}$, of the plant's mapping, $\bm{f}_p$.
§ USING A SOLVER
(Figure: simplified and schematic operation of a solver.)
To solve problem (<ref>), the simplest option is to connect a standard solver for nonlinear optimization problems (NLP) directly to the plant. The strategy that these solvers rely on can be summarized as a repetition of evaluations of $\phi$ and $\bm{g}$ in the neighborhood of a point $\bm{u}_j$, allowing the construction of an alternative NLP (approximating (<ref>) around $\bm{u}_j$) whose solution $\bm{u}_{j+1}$ is “easy” to compute. The resulting sequence $\{\bm{u}_0,...,\bm{u}_{j},\bm{u}_{j+1},...,\bm{u}_{\infty}\}$ is supposed to converge to the desired value, that is $\bm{u}_{\infty} = \bm{u}_p^{\star}$. Let's take for example one very popular family of such methods: sequential quadratic programming (SQP). At each iteration it builds a quadratic approximation of $\phi$ and a linear approximation of $\bm{g}$ which form a quadratic program (QP) whose general form can be:
\begin{align} \bm{u}_{j+1}:= \operatorname{arg} \underset{\bm{u}}{\operatorname{min}} \quad & \nabla_{\bm{u}}\phi_p|_{\bm{u}_j} (\bm{u}- \bm{u}_j) + \frac{1}{2}(\bm{u}- \bm{u}_j)^{\rm T} \nabla^2_{\bm{uu}}\phi_p|_{\bm{u}_j} (\bm{u}- \bm{u}_j) \label{eq:2_2___2_QP_PB} \\ \text{s.t.} \quad & \bm{g}_p|_{\bm{u}_j} + \nabla_{\bm{u}}\bm{g}_p|_{\bm{u}_j} (\bm{u}- \bm{u}_j) \leq \bm{0}. \nonumber \end{align}
If the active constraints at $\bm{u}_{j+1}$ are known, this problem is relatively simple to solve (see Appendix <ref>). But since they are usually unknown, they must be identified by following an iterative procedure similar to the one given in Appendix <ref>. (A minimal numerical sketch of a single QP step of the form (<ref>) is given below.)
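For illustration only, the following sketch performs a single QP step of the form (<ref>) on a small analytic problem where the gradient, Hessian and constraint Jacobian are available in closed form; all functions and numbers are invented for this example, and on an actual plant each of these quantities would have to be measured experimentally, which is precisely the difficulty discussed next.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

# Illustrative analytic "plant"; on a real plant these quantities require experiments.
phi   = lambda u: (u[0] - 1.0) ** 2 + (u[1] - 2.0) ** 2
grad  = lambda u: np.array([2.0 * (u[0] - 1.0), 2.0 * (u[1] - 2.0)])
hess  = lambda u: np.diag([2.0, 2.0])
g     = lambda u: np.array([u[0] + u[1] - 2.0])       # g(u) <= 0
g_jac = lambda u: np.array([[1.0, 1.0]])

def sqp_step(u_j):
    """One QP step: quadratic model of the cost and linearized constraint around u_j."""
    d_phi, H, g0, G = grad(u_j), hess(u_j), g(u_j), g_jac(u_j)
    qp_obj  = lambda u: d_phi @ (u - u_j) + 0.5 * (u - u_j) @ H @ (u - u_j)
    qp_cons = {"type": "ineq", "fun": lambda u: -(g0 + G @ (u - u_j))}
    return minimize(qp_obj, x0=u_j, constraints=[qp_cons]).x

print(sqp_step(np.array([0.0, 0.0])))                  # next iterate produced by the QP
\end{verbatim}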
Such procedures may involve constraint violations and usually require a large number of evaluations of the constraint functions (so in our case a large number of experiments to be performed on the plant). Finally, although such approaches provide very good guarantees on their ability to converge on the plant optimum, they present the following three major problems (if applied directly to an actual system such as an industrial process): i. They use the Hessians of the plant while they are assumed inaccessible. ii. They almost systematically violate constraints during the identification phase of the active constraints. iii. They require a significantly large number of evaluations of plant values, gradients and Hessians that make the identification of $\bm{u}_p^{\star}$ very slow (when one evaluation means one experiment to be performed on the real process). Of course, commercial solvers are generally more complex than the SQP algorithm presented here (and in the appendix). Some can provide better guarantees of non-violation of the constraints during the exploration process they use to identify $\bm{u}_p^{\star}$ (e.g. interior point methods). The counterpart is generally an even slower convergence, i.e. more evaluations of the plant values. Without going deeper into the details of these solvers, the main message of this section is that applying a solver directly to a plant is typically not the best idea because of the three major problems stated above, among others. § TOWARDS THE USE OF MODELS To reduce the effects of these three problems, one can use a model (i) to approximate the plant's Hessians instead of measuring them, (ii) to validate the solver's iterations on the model to reduce the chances of violating the plant constraints, and (iii) to use the global information provided by the model to accelerate the identification of $\bm{u}_p^{\star}$. In this case, the model becomes an intermediate between the solver and the plant and each experiment done on the plant is no longer used by the solver to build simplified representations of the plant, but rather to update the model. Therefore, solving problem (<ref>) is done through a double iterative mechanism of the type of the one illustrated in Figure <ref> where: * Loop 1 corresponds to the optimization of the updated model: \begin{align} \bm{u}_{k+1} = \operatorname{arg} \underset{\bm{u}}{\operatorname{min}} \quad & \phi_k(\bm{u},\bm{\theta}_k) % := \phi(\bm{u},\bm{f}_k(\bm{u},\bm{\theta}_k)) + \bm{\mu}_k(\bm{u}) \quad \text{s.t.} \quad \bm{g}_k(\bm{u},\bm{\theta}_k) \leq \bm{0}, \label{eq:2_2___3_Model_PB} \end{align} where the functions $\phi_k$ and $\bm{g}_k$, and the parameters $\bm{\theta}_k$, are the $k^{\text{th}}$ updates of $\phi$, $\bm{g}$ and $\bm{\theta}$, respectively. This mechanism can be an SQP algorithm as presented previously or any other NLP solver. Indeed, as the model is numerical, there is no penalty be it on the number of evaluation of the functions $(\phi_k,\bm{g}_k)$, or on the violation of constraints that are now “virtual”[Of course there remains challenges in numerical optimization. But it is not the actually the topic of this thesis. However, if a high-fidelity model that is expensive to evaluate is used, then one recommends to the reader to check section <ref>.]. * Loop 2 corresponds to the update of the model, i.e. 
the construction of $\phi_{k+1}$, $\bm{g}_{k+1}$ and $\bm{\theta}_{k+1}$, by using the measurements of $\phi_p$, $\nabla_{\bm{u}}\phi_p$, $\bm{g}_p$, and $\nabla_{\bm{u}}\bm{g}_p$ obtained on the plant for each of the experimented operational points $(\bm{u}_0,...,\bm{u}_k)$. If the mechanism of loop 1 is a “well known” standard optimization of a numerical model, then loop 2 is a novelty whose mechanism is to be defined. Towards the use of a model to solve the theoretical RTO problem. Ideally, the mechanism of loop 2 should be such that $\bm{u}_{\infty} = \bm{u}_{p}^{\star}$ (see Figure <ref>). It has been shown that: If at the convergence point $\bm{u}_{\infty}$ of an RTO algorithm the equalities \begin{align} \bm{g}_{\infty}|_{\bm{u}_{\infty},\bm{\theta}_{\infty}} = \ & \bm{g}_p|_{\bm{u}_{\infty}}, & \nabla_u \bm{g}_{\infty}|_{\bm{u}_{\infty},\bm{\theta}_{\infty}} = \ & \nabla_u \bm{g}_p|_{\bm{u}_{\infty}}, & \nabla_u \phi_{\infty}|_{\bm{u}_{\infty},\bm{\theta}_{\infty}} = \ & \nabla_u \phi_p|_{\bm{u}_{\infty}}, \label{eq:2___4_NecessaryEqualities} \end{align} are true, then $\bm{u}_{\infty}$ is a KKT point (Karush–Kuhn–Tucker [37, 39]) of the plant, i.e. a minimum (like $\bm{u}_p^{\star}$), a maximum, or a saddle point. (This is the generalization proposed in [61] of other more specific theorems proposed in [43, 58, 50]) By definition: \begin{align} & \bm{u}_{\infty} := \operatorname{arg} \underset{\bm{u}}{\operatorname{min}} \quad \phi_{\infty}(\bm{u},\bm{\theta}_\infty) % := \phi(\bm{u},\bm{f}_k(\bm{u},\bm{\theta}_k)) + \bm{\mu}_k(\bm{u}) \quad \text{s.t.} \quad \bm{g}_\infty(\bm{u},\bm{\theta}_\infty) \leq \bm{0}, \label{eq:2___5_Proof_PB_OPT_Model} \\ \Rightarrow \ & \exists \bm{\lambda} \in \amsmathbb{R}^{n_g} \text{ such that } \left\{ \begin{array}{l} \bm{g}_{\infty}|_{\bm{u}_{\infty}, \bm{\theta}_{\infty}} \leq \bm{0},\\ \bm{\lambda}^{\rm T} \bm{g}_{\infty}|_{\bm{u}_{\infty}, \bm{\theta}_{\infty}} = \bm{0}, \\ \bm{\lambda} \geq \bm{0}, \\ \nabla_{\bm{u}}\phi_{\infty}|_{\bm{u}_{\infty},\bm{\theta}_{\infty}} + \bm{\lambda}^{\rm T} \nabla_{\bm{u}}\bm{g}_{\infty}|_{\bm{u}_{\infty},\bm{\theta}_{\infty}} = \bm{0}. \end{array} \right. \label{eq:2___6_KKT_Model} \end{align} Indeed, as $\bm{u}_{\infty}$ is a solution of (<ref>), it is a KKT point of the same problem, hence the implication of (<ref>). Because of the equalities (<ref>), one can replace $(\bm{g}_{\infty}, \nabla_{\bm{u}}\bm{g}_{\infty}, \nabla_{\bm{u}}\phi_{\infty})|_{\bm{u}_{\infty},\bm{\theta}_{\infty}}$ by $(\bm{g}_{p}, \nabla_{\bm{u}}\bm{g}_{p}, \nabla_{\bm{u}}\phi_{p})|_{\bm{u}_{\infty}}$ and observe that $\bm{u}_{\infty}$ is also a KKT point of the plant. In brief: \begin{align} \text{\eqref{eq:2___4_NecessaryEqualities} \& \eqref{eq:2___6_KKT_Model}} \Rightarrow \ & \left\{ \begin{array}{l} \bm{g}_{p}|_{\bm{u}_{\infty}} \leq \bm{0},\\ \bm{\lambda}^{\rm T} \bm{g}_{p}|_{\bm{u}_{\infty}} = \bm{0}, \\ \bm{\lambda} \geq \bm{0}, \\ \nabla_{\bm{u}}\phi_{p}|_{\bm{u}_{\infty}} + \bm{\lambda}^{\rm T} \nabla_{\bm{u}}\bm{g}_{p}|_{\bm{u}_{\infty}} = \bm{0}. \end{array} \right. \label{eq:2___7_Proof_End} \end{align} It is then clear that for an RTO method to guarantee the optimality of the plant upon convergence, it is necessary that the equalities (<ref>) are true. These equalities can be enforced in two different ways which characterize two classes of approaches, direct and indirect approaches. Direct approaches consist of applying correction functions directly to $\phi$ and $\bm{g}$. 
More precisely, these functions must be corrected as follows: \begin{align} \phi_{\infty}(\bm{u},\bm{\theta}_{\infty}) := \ & \phi(\bm{u},\bm{f}(\bm{u},\bm{\theta}_{\infty})) + \mu_{\infty}^{\phi}(\bm{u},\bm{\theta}_{\infty}) \\ \bm{g}_{\infty}(\bm{u},\bm{\theta}_{\infty}) := \ & \bm{g}(\bm{u},\bm{f}(\bm{u},\bm{\theta}_{\infty})) +\bm{\mu}_{\infty}^{\bm{g}}(\bm{u},\bm{\theta}_{\infty}), \end{align} where $(\mu_{\infty}^{\phi},\bm{\mu}_{\infty}^{\bm{g}})$ are functions whose Taylor series expansions around $\bm{u}_{\infty}$ are: \begin{align} \mu_{\infty}^{\phi}(\bm{u},\bm{\theta}_{\infty}) := \ & \nabla_{\bm{u}} (\phi_p - \phi)|_{\bm{u}_{\infty},\bm{\theta}_{\infty}}^{\rm T} (\bm{u}-\bm{u}_{\infty}) + \text{ h.o.t.}, \\ \bm{\mu}_{\infty}^{\bm{g}}(\bm{u},\bm{\theta}_{\infty}) := \ & ( \bm{g}_p- \bm{g})|_{\bm{u}_{\infty},\bm{\theta}_{\infty}} + \nabla_{\bm{u}} (\bm{g}_p - \bm{g})|_{\bm{u}_{\infty},\bm{\theta}_{\infty}} (\bm{u}-\bm{u}_{\infty}) + \text{ h.o.t.}. \end{align} so that the equalities (<ref>) are enforced. Indirect approaches consist of applying correction functions to $\bm{f}$ to indirectly correct the functions $\phi$ and $\bm{g}$. To do this, the functions $\bm{f}$ must be corrected as follows: \begin{align} \bm{f}_{\infty}(\bm{u},\bm{\theta}_{\infty}) := \ & \bm{f}(\bm{u},\bm{\theta}_{\infty}) + \bm{\mu}_{\infty}^{\bm{f}}(\bm{u},\bm{\theta}_{\infty}), \end{align} \begin{align} \bm{\mu}_{\infty}^{\bm{f}}(\bm{u},\bm{\theta}_{\infty}) := \ & ( \bm{f}_p- \bm{f})|_{\bm{u}_{\infty},\bm{\theta}_{\infty}} + \nabla_{\bm{u}} (\bm{f}_p - \bm{f})|_{\bm{u}_{\infty},\bm{\theta}_{\infty}} (\bm{u}-\bm{u}_{\infty}) + \text{ h.o.t.}, \label{eq:2___15_XXXX} \end{align} The functions $(\phi,\bm{g})$ are not modified, i.e. \begin{align} \phi_{\infty}(\bm{u},\bm{\theta}_{\infty}) := \ & \phi(\bm{u},\bm{f}_{\infty}(\bm{u},\bm{\theta}_{\infty})) \\ \bm{g}_{\infty}(\bm{u},\bm{\theta}_{\infty}) := \ & \bm{g}(\bm{u},\bm{f}_{\infty}(\bm{u},\bm{\theta}_{\infty})). \end{align} The fact that (<ref>) implies (<ref>) is not necessarily obvious on the face of it. The mathematical proof of this implication is given in the Appendix <ref> and is inspired of [58]. These two types of approaches are very important because they separate the whole set of theoretical RTO methods that guarantee the optimal operation of the plant upon convergence into two distinct families. Now that these two classes of approaches are clearly identified, the simplest possible versions can be built. § TWO VERY SIMPLE METHODS (ISO-D/I) To obtain the two “simplest” direct and indirect theoretical RTO methods, it is sufficient to make the following four simplifications: * (Simplification 1) Leave the parameters at predetermined values called nominal values \begin{align} \bm{\theta}_k := \ & \bm{\theta}_n, & \forall k \in \mathbb{Z}, \end{align} to avoid having to solve parameter identification problems. Note that from this point on, if it is not specified for which parameter a function is evaluated, it means that the values that are used are the nominal ones, $\bm{\theta}_n$. \begin{align*} \bm{f}(\bm{u},\bm{\theta}_n) \equiv \ & \bm{f}(\bm{u}), & (\cdot)|_{\bm{u}_k,\bm{\theta}_n} \equiv \ & (\cdot)|_{\bm{u}_k}. 
\end{align*} * (Simplification 2) Enforce the satisfaction of the following equalities at each iteration $k$: \begin{align} \bm{g}_{k}|_{\bm{u}_{k}} = \ & \bm{g}_p|_{\bm{u}_{k}}, & \nabla_u \bm{g}_{k}|_{\bm{u}_{k}} = \ & \nabla_u \bm{g}_p|_{\bm{u}_{k}}, & \nabla_u \phi_{k}|_{\bm{u}_{k}} = \ & \nabla_u \phi_p|_{\bm{u}_{k}}, & \forall k \in \mathbb{Z}. \label{eq:2___11_SimpleApproach_SameCones} \end{align} to make sure that they are true when $k\rightarrow \infty$ and thus satisfy (<ref>). * (Simplification 3) Use the simplest correction functions allowing to obtain the equalities (<ref>). In other words, the functions $\mu_k^{\phi}$ and $\bm{\mu}_k^{\bm{g}}$, or $\bm{\mu}_k^{\bm{f}}$, are affine, i.e. their higher order terms $(h.o.t.)$ are fixed to $0$. * (Simplification 4) Use only the measures obtained at $\bm{u}_k$ ($\phi_p,$ $ \nabla_{\bm{u}}\phi_p,\bm{g}_p,\nabla_{\bm{u}}\bm{g}_p$). Those obtained previously at $(\bm{u}_0,...,\bm{u}_{k-1})$ are ignored so that there is no need to manage any database. This is how Algorithm <ref>, which corresponds to the “direct” version of iterative setpoint optimization method (ISO-D – [1, 31]), can be reconstructed. At the same time the indirect version of this method can be obtained, which is called indirect iterative setpoint optimization (ISO-I) and whose details are given in Algorithm <ref>. Direct Iterative Setpoint optimization (ISO-D – [1, 31])ISO Initialization. Provide $\bm{u}_0$, functions $(\bm{f},\phi,\bm{g})$, and the stoping criteron of step 4). for $k=0 \rightarrow \infty$ 1) Measure $(\nabla_{\bm{u}}\phi_p, \bm{g}_p,\nabla_{\bm{u}}\bm{g}_p)|_{\bm{u}_{k}}$ on the plant. 2) Construct the functions $(\phi_k,\bm{g}_k)$: \begin{align} \phi_{k}(\bm{u}) := \ & \phi(\bm{u},\bm{f}(\bm{u})) + \mu_{k}^{\phi}(\bm{u}) \\ \bm{g}_{k}(\bm{u}) := \ & \bm{g}(\bm{u},\bm{f}(\bm{u})) +\bm{\mu}_{k}^{\bm{g}}(\bm{u}), \end{align} \begin{align} \mu_{k}^{\phi}(\bm{u}) := \ & \nabla_{\bm{u}} (\phi_p - \phi)|_{\bm{u}_{k}}^{\rm T} (\bm{u}-\bm{u}_{k}), \\ \bm{\mu}_{k}^{\bm{g}}(\bm{u}) := \ & ( \bm{g}_p- \bm{g})|_{\bm{u}_k} + \nabla_{\bm{u}} (\bm{g}_p - \bm{g})|_{\bm{u}_{k}} (\bm{u}-\bm{u}_{k}), \end{align} 3) Solve the model-based optimization problem (<ref>) using the definitions (<ref>) to find $\bm{u}_{k+1}$. 4) Stop if $\bm{u}_{k+1}\approx\bm{u}_{k}$ and return $\bm{u}_{\infty} := \bm{u}_{k+1}$. Indirect Iterative Setpoint optimization (ISO-I)ISOy Initialization. Provide $\bm{u}_0$, functions $(\bm{f},\phi,\bm{g})$, and the stoping criteron of step 4). for $k=0 \rightarrow \infty$ 1) Measure $(\bm{f}_p,\nabla_{\bm{u}}\bm{f}_p)|_{\bm{u}_{k}}$ on the plant. 2) Construct the functions $(\bm{f}_k,\phi_k,\bm{g}_k)$: \begin{align} & \qquad \qquad \qquad \bm{f}_{k}(\bm{u}) := \bm{f}(\bm{u}) +\bm{\mu}_{k}^{\bm{f}}(\bm{u}), \\ & \phi_k(\bm{u}) := \phi(\bm{u},\bm{f}_k(\bm{u})), \qquad \bm{g}_k(\bm{u}) := \bm{g}(\bm{u},\bm{f}_k(\bm{u})), \end{align} \begin{align} \bm{\mu}_{k}^{\bm{f}}(\bm{u}) := \ & ( \bm{f}_p- \bm{f})|_{\bm{u}_k} + \nabla_{\bm{u}} (\bm{f}_p - \bm{f})|_{\bm{u}_{k}} (\bm{u}-\bm{u}_{k}). \end{align} 3) Solve the model-based optimization problem (<ref>) using the definition (<ref>) to find $\bm{u}_{k+1}$. 4) Stop if $\bm{u}_{k+1}\approx\bm{u}_{k}$ and return $\bm{u}_{\infty} := \bm{u}_{k+1}$. These two methods are particularly interesting because they are the simplest to understand and to implement, and they are at the basis of many other more complex RTO methods. 
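To fix ideas, the following self-contained Python sketch implements the four steps of Algorithm <ref> (ISO-D) on a deliberately simple toy problem. The plant, the nominal model and all numerical values are invented for illustration; the plant gradients are obtained here by finite differences, whereas on a real plant each evaluation would be an experiment, and scipy's generic solver simply stands in for the model-based optimization of step 3.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def fd_grad(fun, u, h=1e-6):
    """Forward finite-difference gradient (each call stands for one plant experiment)."""
    f0, grad = fun(u), np.zeros_like(u)
    for i in range(u.size):
        up = u.copy()
        up[i] += h
        grad[i] = (fun(up) - f0) / h
    return grad

# Illustrative plant (unknown to the method) and structurally different nominal model:
phi_p = lambda u: (u[0] - 1.5) ** 2 + 2.0 * (u[1] - 1.0) ** 2 + 0.3 * u[0] * u[1]
g_p   = lambda u: u[0] + u[1] - 2.0                    # plant constraint g_p(u) <= 0
phi   = lambda u: (u[0] - 1.0) ** 2 + 1.5 * (u[1] - 0.5) ** 2
g     = lambda u: 0.8 * u[0] + 1.2 * u[1] - 2.0        # nominal model constraint

def iso_d(u0, tol=1e-5, max_iter=50):
    u_k = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        # 1) "Measure" plant values and gradients at u_k.
        dphi_corr = fd_grad(phi_p, u_k) - fd_grad(phi, u_k)
        dg_corr   = fd_grad(g_p, u_k) - fd_grad(g, u_k)
        g_corr    = g_p(u_k) - g(u_k)
        # 2) Affine (modifier) corrections of the model cost and constraint.
        phi_k = lambda u: phi(u) + dphi_corr @ (u - u_k)
        g_k   = lambda u: g(u) + g_corr + dg_corr @ (u - u_k)
        # 3) Updated-model-based optimization.
        cons   = {"type": "ineq", "fun": lambda u: -g_k(u)}
        u_next = minimize(phi_k, x0=u_k, constraints=[cons]).x
        # 4) Stop when the iterates no longer move.
        if np.linalg.norm(u_next - u_k) < tol:
            return u_next
        u_k = u_next
    return u_k

print(iso_d([0.0, 0.0]))   # approaches the plant optimum (about [1.19, 0.81] here)
\end{verbatim}
The indirect version (ISO-I) would be obtained by moving the same affine correction onto $\bm{f}$ instead of onto the composed cost and constraint functions.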
To analyze the capacities of these methods to converge on $\bm{u}_p^{\star}$, it is necessary to define clearly what is meant by “capacity of convergence”.
§ THREE LEVELS OF CONVERGENCE
One starts by introducing the concept of asymptotic convergence: a sequence of points $\{\bm{u}_{\ell},\bm{u}_{\ell+1}, ... \bm{u}_{\infty}\}$ converges asymptotically on $\bm{u}_{p}^{\star}$ if, and only if, each of these points progressively gets closer to $\bm{u}_{p}^{\star}$. In more mathematical terms, this sequence can be said to converge asymptotically on $\bm{u}_{p}^{\star}$ starting from an iteration $\ell$ if:
\begin{align} \label{eq:2___15_DefConvergenceAsymptotique} & \forall k\geq \ell: \quad \underset{k\rightarrow\infty}{\operatorname{lim}} \bm{u}_{k} = \bm{u}_{p}^{\star}, \ \text{ and } \ \left\{\begin{array}{ll} \| \bm{u}_{k+1} - \bm{u}_p^{\star} \| < \| \bm{u}_{k} - \bm{u}_p^{\star} \|, & \text{if } \bm{u}_{k} \neq \bm{u}_p^{\star}, \\ \| \bm{u}_{k} - \bm{u}_p^{\star} \| = \| \bm{u}_{k+1} - \bm{u}_p^{\star} \| = 0, & \text{if } \bm{u}_{k} = \bm{u}_p^{\star}. \end{array}\right. \end{align}
Ideally, independently of their initialization $\bm{u}_0$ and after a limited number of iterates, ISO-D/I would enter into such a convergent sequence, i.e.
\begin{equation} \label{eq:2___16_CeQuOnVeut} \forall \bm{u}_0\in\amsmathbb{R}^{n_u}, \ \exists \ell\in\mathbb{Z} \quad \text{ such that \eqref{eq:2___15_DefConvergenceAsymptotique}.} \end{equation}
Instead of directly analyzing the ability of ISO-D/I to provide (<ref>), this condition can be divided into four levels of convergence whose definitions are: * Level 0: A theoretical RTO method provides level 0 of convergence if, no matter what $\bm{u}_0$ is, it is unable to guarantee asymptotic convergence of the iterates on $\bm{u}_p^{\star}$. * Level 1 – Equilibrium condition: A theoretical RTO method satisfies the equilibrium condition if $\bm{u}_{\ell}=\bm{u}_p^{\star}$ $\Rightarrow$ $\bm{u}_{k\geq\ell}=\bm{u}_p^{\star}$. * Level 2 – Stability condition: A theoretical RTO method satisfies the stability condition if $\exists r\in\amsmathbb{R}>0$ such that if $\bm{u}_{\ell}\in\mathcal{B}(\bm{u}_p^{\star},r)$ then $\bm{u}_{k\geq\ell}$ asymptotically converges on $\bm{u}_p^{\star}$, where $\mathcal{B}(\bm{u}_p^{\star},r)$ is a ball of center $\bm{u}_p^{\star}$ and radius $r>0$. * Level 3 – Superstability condition: A theoretical RTO method satisfies the superstability condition if $\bm{u}_{\ell}\in \amsmathbb{R}^{n_u}$ $\Rightarrow$ $\bm{u}_{k\geq\ell}$ asymptotically converges on $\bm{u}_p^{\star}$. Figure <ref> provides illustrations of those definitions. Based on these representations, the condition (<ref>) for a given $\bm{u}_0$ can be reduced to the expectation that one of the iterations of the sequence $\{\bm{u}_0, \bm{u}_1,...\}$ produced by a theoretical RTO method falls in the blue area (see Figure <ref>) associated with this method. So, the larger this domain, i.e. the higher the level of convergence of this method, the higher the chances of falling in the blue domain and thus of converging on $\bm{u}_{p}^\star$.
(Figure: schematic definition of the four levels of convergence.)
With these levels of convergence defined, the question arises of which level ISO-D/I reaches and what can be done to raise it.
§ ENFORCING THE EQUILIBRIUM CONDITION
One starts by determining whether ISO-D/I provides level 1 convergence and, if it does not, how it should be modified to provide it.
Firstly, analyzing under which conditions ISO-D/I guarantee the satisfaction of the equilibrium condition, the following observation can be made: If the RTO method used guarantees the equalities (<ref>) and if during an iteration $k^{\star}\in\amsmathbb{Z}$, $\bm{u}_{k^{\star}}= \bm{u}_p^{\star}$. Then: i) The fact that $\bm{u}_p^{\star}$ is a KKT point of the plant makes that it also a KKT point of the model updated at $\bm{u}_p^{\star}$. ii) The condition for $\bm{u}_{k\geq k^{\star}}= \bm{u}_p^{\star}$ is that: \begin{equation} \label{eq:2___18_ModelAdequacy_ISO_ISOy} \nabla^2_{\bm{u}\bm{u}}\phi_{k^\star}|_{\bm{u}_{p}^{\star}} + \sum_{i=1}^{n_g}\left[ \lambda_{p(i)} \nabla^2_{\bm{u}\bm{u}}g_{(i)k^\star}|_{\bm{u}_{p}^{\star}} \right] > 0, \end{equation} where the $\bm{\lambda}_p$ are the KKT-multipliers of the plant. By using the definition of $\bm{u}_p^{\star}$ and (<ref>), one can easily show that $\bm{u}_p^{\star}$ is a KKT point of the model updated at $\bm{u}_p^{\star}$: \begin{align} & \bm{u}_p^{\star} := \operatorname{arg} \underset{\bm{u}}{\operatorname{min}} \quad \phi_{p}(\bm{u}) % := \phi(\bm{u},\bm{f}_k(\bm{u},\bm{\theta}_k)) + \bm{\mu}_k(\bm{u}) \quad \text{s.t.} \quad \bm{g}_p(\bm{u}) \leq \bm{0}, \label{eq:2___wegrw} \\ \text{\eqref{eq:2___wegrw} } \Rightarrow \ & \exists \bm{\lambda}_p \in \amsmathbb{R}^{n_g} \text{ such that } \left\{ \begin{array}{l} \bm{g}_{p}|_{\bm{u}_p^{\star}} \leq \bm{0},\\ \bm{\lambda}_p^{\rm T} \bm{g}_{p}|_{\bm{u}_p^{\star}} = \bm{0}, \\ \bm{\lambda}_p \geq \bm{0}, \\ \nabla_{\bm{u}}\phi_{p}|_{\bm{u}_p^{\star}} + \bm{\lambda}_p^{\rm T} \nabla_{\bm{u}}\bm{g}_{p}|_{\bm{u}_p^{\star}} = \bm{0}. \end{array} \right. \label{eq:2___16_Proof_KKT1_Plant} \\ \text{\eqref{eq:2___11_SimpleApproach_SameCones} \& \eqref{eq:2___16_Proof_KKT1_Plant} } \Rightarrow \ & \left\{ \begin{array}{l} \bm{g}_{k^\star}|_{\bm{u}_p^{\star}} \leq \bm{0},\\ \bm{\lambda}_p^{\rm T} \bm{g}_{k^\star}|_{\bm{u}_p^{\star}} = \bm{0}, \\ \bm{\lambda}_p \geq \bm{0}, \\ \nabla_{\bm{u}}\phi_{k^\star}|_{\bm{u}_p^{\star}} + \bm{\lambda}_p^{\rm T} \nabla_{\bm{u}}\bm{g}_{k^\star}|_{\bm{u}_p^{\star}} = \bm{0}. \end{array} \right. \label{eq:2___16_Proof_KKT1_Modle} \end{align} So, $\bm{u}_p^{\star}$ is a KKT point of the model updated at this point, and $\bm{\lambda}_p$ are the KKT-multipliers associated to $\bm{u}_p^{\star}$. So, $\bm{u}_p^{\star}$ can be either a minimum, a maximum or a saddle point of the model. In order for it to be a minimum, i.e. $\bm{u}_{k^\star} = \bm{u}_p^{\star} \ \Rightarrow \ \bm{u}_{k^\star+1} = \bm{u}_p^{\star}$, the second order KKT condition must be satisfied, which is the case when (<ref>) is true. The model is said to be adequate for ISO-D/I if (<ref>) holds without any intervention. The concept of model adequacy is proposed in [27] where it is applied to TS to demonstrate its weaknesses. It is reused in [43] where it is applied to an improved version of ISO to demonstrate the conceptual advantages of ISO methods over TS. It is clear that it is a priori impossible to verify whether (<ref>) is true, because neither $\bm{u}_p^{\star}$ nor $\bm{\lambda}_p$ are known. However, what one can do is to make this inequality true at all points $\bm{u}\in\amsmathbb{R}^{n_u}$ (instead of only at $\bm{u}_p^{\star}$), and for all $\bm{\lambda}^{\prime}\in\amsmathbb{R}^{n_g+}$ (instead of only for $\bm{\lambda}_p$). 
To do this, the idea is to replace the functions $\phi_k$ and $\bm{g}_k$ by approximations that are convex at $\bm{u}_k$ and that are noted $\phi^c_k$ and $\bm{g}^c_k$. These new functions must, $\forall k \in \amsmathbb{Z}$ and $i=1,...,n_g$, be such that: \begin{align} \label{eq:3___21} % \phi^c_{k+1}|_{\bm{u}_k} = \ & \phi_{k+1}|_{\bm{u}_k}, & \nabla_{\bm{u}} \phi^c_{k}|_{\bm{u}_k} = \ & \nabla_{\bm{u}} \phi_{k}|_{\bm{u}_k}, & \bm{g}^c_{k}|_{\bm{u}_k} = \ & \bm{g}_{k}|_{\bm{u}_k}, & \nabla_{\bm{u}} \bm{g}^c_{k}|_{\bm{u}_k} = \ & \nabla_{\bm{u}} \bm{g}_{k}|_{\bm{u}_k}, \end{align} \begin{align} \label{eq:3___22} \nabla^2_{\bm{u}\bm{u}} \phi^c_{k}|_{\bm{u}_k} > \ & 0, & \nabla^2_{\bm{u}\bm{u}} g^c_{(i)k}|_{\bm{u}_k} > \ & 0, \end{align} On one hand, equalities (<ref>) guarantee that Theorems <ref> and <ref> are applicable. On the other hand, inequalities (<ref>) imply that: \begin{align} \label{eq:3___23} \nabla^2_{\bm{u}\bm{u}}\phi^c_{k}|_{\bm{u}_{k}} + \sum_{i=1}^{n_g}\left[ \lambda^{\prime}_{(i)} \nabla^2_{\bm{u}\bm{u}}g^c_{(i)k}|_{\bm{u}_{k}} \right] > \ & 0, & \forall \bm{\lambda}^{\prime}\in\amsmathbb{R}^{n_g+}, \ \forall \bm{u}_k\in\mathbb{R}^{n_u}, \end{align} and therefore, by applying Theorem <ref>, one can conclude that the equilibrium condition is provided. Now the question that remains is: How to build $(\phi^c_{k},$ $\bm{g}^c_{k})$? It is clear that an infinite number of approaches can be considered. Below, two fairly simple options are proposed: §.§ Option 1: With model preprocessing The functions $\phi^c_{k},$ and $\bm{g}^c_{k}$ can be obtained by preprocessing of the model, whereby it is made convex at any point. For ISO-I, this option is not easily applicable as it induces high order corrections (see Appendix <ref> for the details). For ISO-D, this option consists in choosing $\phi^c$ and $g_{(i)}^c$, $\forall i = 1,...,n_g$, in the following way: \begin{align} \label{eq:2___24a_Coutc} \phi^c(\bm{u}) :\approx \ & \phi(\bm{u},\bm{f}(\bm{u},\bm{\theta}_n)), & & \text{such that $\forall \bm{u}\in\amsmathbb{R}^{n_u}$: } \nabla^2_{\bm{u}\bm{u}} \phi^c(\bm{u}) > 0, \\ \label{eq:2___24b_Contraintesc} g_{(i)}^c(\bm{u}) :\approx \ & g_{(i)}(\bm{u},\bm{f}(\bm{u},\bm{\theta}_n)), & & \text{such that $\forall \bm{u}\in\amsmathbb{R}^{n_u}$: } \nabla^2_{\bm{u}\bm{u}} g_{(i)}^c(\bm{u}) \geq 0. \end{align} Since ISO-D applies affine corrections directly to the cost and constraint functions, their initial curvatures are preserved, so $\forall i = 1,...,n_g$: \begin{align*} \nabla^2_{\bm{u}\bm{u}} \phi^c(\bm{u}) > \ & 0, \forall \bm{u}\in\amsmathbb{R}^{n_u}& & \Rightarrow \quad \nabla^2_{\bm{u}\bm{u}} \phi^c_{k}(\bm{u}) > 0, \ \forall (\bm{u},\bm{u}_k)\in\amsmathbb{R}^{n_u} \\ \nabla^2_{\bm{u}\bm{u}} g_{(i)}^c(\bm{u}) \geq \ & 0, \forall \bm{u}\in\amsmathbb{R}^{n_u}& & \Rightarrow \quad \nabla^2_{\bm{u}\bm{u}} g^c_{(i)k}(\bm{u}) \geq 0, \ \forall (\bm{u},\bm{u}_k)\in\amsmathbb{R}^{n_u}. \end{align*} Therefore, convex approximations of type (<ref>) ensure that the equilibrium condition (<ref>) is satisfied. A special case of this strategy is proposed in [29] where the cost function is approximated with (<ref>) and where the constraints are simply linearized (which is a special case of (<ref>)). §.§ Option 2: With successive convexifications The functions $\phi^c_{k}$ and $\bm{g}^c_{k}$ can be obtained by applying, at each iteration, a local convexification of the updated functions $\phi_{k}$ and $\bm{g}_{k}$. 
The idea is to proceed as follows:
\begin{align} \phi^c_{k}(\bm{u}) := \ & \phi_{k}(\bm{u},\bm{f}_{k}(\bm{u})) + \frac{1}{2} (\bm{u}-\bm{u}_k)^{\rm T}\bm{Q}_k^{\phi} (\bm{u}-\bm{u}_k), \\ g^c_{(i)k}(\bm{u}) := \ & g_{(i)k}(\bm{u},\bm{f}_{k}(\bm{u})) + \frac{1}{2} (\bm{u}-\bm{u}_k)^{\rm T}\bm{Q}_k^{g_{(i)}}(\bm{u}-\bm{u}_k), \end{align}
\begin{align} \bm{Q}_k^{\phi} :\approx \ &\nabla^2_{uu}\phi_{k}(\bm{u},\bm{f}_{k}(\bm{u})) &&\text{ such that } \bm{Q}_k^{\phi} > 0, \\ \bm{Q}_k^{g_{(i)}} :\approx \ & \nabla^2_{uu}g_{(i)k}(\bm{u},\bm{f}_{k}(\bm{u}))&& \text{ such that } \bm{Q}_k^{g_{(i)}}\geq 0, \quad \forall i=1,...,n_g. \end{align}
This strategy, taken from section 4.2 of [62], works for both ISO-D and ISO-I. It has been shown that level 1 can be enforced by using option 1 or option 2. Next, the stability condition is analyzed.
§ ENFORCING THE STABILITY CONDITION
§.§ A property of the iterative RTO methods
The functions $\phi_k$ and $\bm{g}_k$ of the model updated at iteration $k$ are entirely defined by the nominal functions $\phi$ and $\bm{g}$ and all the results of the experiments performed around the points $\{\bm{u}_0,\bm{u}_1,...,\bm{u}_k\}$. So, in a way, one can consider that the functions $\phi_k$ and $\bm{g}_k$ are simplified versions of more general functions $\widetilde{\phi}$ and $\widetilde{\bm{g}}$ which can be defined as:
\begin{align} \widetilde{\phi}(\bm{u},\bm{v}_k,...,\bm{v}_0)|_{\bm{v}_k= \bm{u}_k, ..., \bm{v}_0 = \bm{u}_0} := \ & \phi_k(\bm{u}), \nonumber \\ \widetilde{\bm{g}}(\bm{u},\bm{v}_k,...,\bm{v}_0)|_{\bm{v}_k= \bm{u}_k, ..., \bm{v}_0 = \bm{u}_0} := \ & \bm{g}_k(\bm{u}). \label{eq:2_Generalized_version_function} \end{align}
Consider the particular case of purely iterative methods: (Purely iterative methods) An RTO method is said to be purely iterative if it exploits only the results obtained during the last series of experiments, i.e. if:
\begin{align} \label{eq:2___25_DefMethodePurementIteratives} \phi_k(\bm{u}) = \ & \widetilde{\phi}(\bm{u},\bm{v}_k), & \bm{g}_k(\bm{u}) = \ & \widetilde{\bm{g}}(\bm{u},\bm{v}_k). \end{align}
Simplification 4 used to build ISO-D/I implies that they are purely iterative. In the case of purely iterative methods, the solution of the updated problem (<ref>) is a function $\bm{sol}$ of the point $\bm{v}$ around which the latest set of experiments has been conducted. In more mathematical terms, (<ref>) becomes:
\begin{align} \bm{sol}(\bm{v}) := \ & \operatorname{arg} \underset{\bm{u}}{\operatorname{min}} \quad \widetilde{\phi}(\bm{u},\bm{v}), \quad \text{s.t.} \quad \widetilde{\bm{g}}(\bm{u},\bm{v}) \leq \bm{0}, \label{eq:2___26_sol_v_def} \end{align}
and the Lagrange multipliers associated with $\bm{sol}(\bm{v})$ are:
\begin{align} \bm{\lambda}(\bm{v}) := \ & \left\{ \bm{a}\in\amsmathbb{R}^{n_g}, \ \bm{a}\geq \bm{0} \ \left| \ \nabla_{\bm{u}}\widetilde{\phi}(\bm{sol}(\bm{v}),\bm{v}) + \sum_{i=1}^{n_g} a_{(i)} \nabla_{\bm{u}}\widetilde{g}_{(i)}(\bm{sol}(\bm{v}),\bm{v}) = \bm{0} \right.\right\}. \label{eq:2___26_lambda_v_def} \end{align}
These two functions have properties which are given by the following theorem and which are essential for the analysis of the stability of a purely iterative RTO method.
(Properties of purely iterative RTO methods) * A purely iterative RTO method is used; * For any correction point $\bm{v}\in\amsmathbb{R}^{n_u}$ the following relations are true:
\begin{align} \widetilde{\bm{g}}|_{\bm{v},\bm{v}} = \ & \bm{g}_p|_{\bm{v}}, & \nabla_u \widetilde{\bm{g}}|_{\bm{v},\bm{v}} = \ & \nabla_u \bm{g}_p|_{\bm{v}}, & \nabla_u \widetilde{\phi}|_{\bm{v},\bm{v}} = \ & \nabla_u \phi_p|_{\bm{v}}, \end{align}
\begin{align} \label{eq:2_29_uywgkfyurwkc} \nabla^2_{\bm{u}\bm{u}}\widetilde{\phi}|_{\bm{v},\bm{v}} + \sum_{i=1}^{n_g}\left[ \lambda^{\prime}_{(i)} \nabla^2_{\bm{u}\bm{u}}\widetilde{g}_{(i)}|_{\bm{v},\bm{v}} \right] > \ & 0, & \forall \bm{\lambda}^{\prime}\in\amsmathbb{R}^{n_g+}, \end{align}
where the functions $(\widetilde{\phi}, \widetilde{\bm{g}})$ are defined by (<ref>); * For any correction point $\bm{v}\in\amsmathbb{R}^{n_u}$ the problem (<ref>) is such that $\bm{sol}(\bm{v})$ is unique and the active constraints at $\bm{sol}(\bm{v})$ satisfy the Linear Independence Constraint Qualification (LICQ) conditions. Then the following statements are true: A. $\bm{sol}(\bm{v})$ and $\bm{\lambda}(\bm{v})$ are globally $\mathcal{C}^0$ and piecewise-$\mathcal{C}^1$; B. If $\bm{v}^{\bullet}$ is a KKT point of the plant and $\delta \bm{v}$ a vector in $\amsmathbb{R}^{n_u}\backslash \bm{0}$, then $\bm{sol}(\bm{v}^{\bullet}) = \bm{v}^{\bullet}$ and the value of the directional derivative of $\bm{sol}$
\begin{align} \label{eq:2___29_DeriveeDirectionnelle_Forward} \nabla^S(\bm{v},\delta\bm{v}) := \ & \left( \frac{\delta \bm{v} }{\|\delta \bm{v} \|} \right)^{\rm T} \lim_{\begin{smallmatrix} \alpha \to 0 \\ \alpha>0 \end{smallmatrix}} \frac{ \bm{sol}(\bm{v}+\alpha\delta \bm{v}) - \bm{sol}(\bm{v}) }{\alpha}, \\ \Big( = \ & \left( \frac{\delta \bm{v}}{\|\delta \bm{v}\|} \right)^{\rm T} \nabla_{v}\bm{sol}|_{\bm{v}} \frac{\delta \bm{v}}{\|\delta \bm{v}\|}, \quad \text{ if $\bm{sol}$ is $\mathcal{C}^1$ at $\bm{v}$} \Big). \nonumber \end{align}
\begin{equation}\label{eq:2___29_NablaS_Signe} \text{is: } \left\{ \begin{array}{ll} \nabla^S(\bm{v}^{\bullet},\delta\bm{v}) < 1, & \text{if $\bm{v}^{\bullet}$ is a minimum of the plant}, \\ \nabla^S(\bm{v}^{\bullet},\delta\bm{v}) > 1, & \text{if $\bm{v}^{\bullet}$ is a maximum of the plant}, \\ \nabla^S(\bm{v}^{\bullet},\delta\bm{v}) \gtrless 1, & \text{if $\bm{v}^{\bullet}$ is a saddle point of the plant.} \end{array} \right. \qquad \quad \end{equation}
If $\bm{v}^{\bullet}$ is a minimum of the plant and $\delta\bm{v}$ is a vector orthogonal to the set of active constraints of the plant at $\bm{v}^{\bullet}$, then in order to have $\nabla^{S}(\bm{v}^{\bullet},\delta\bm{v})<-1$, the Lagrangian of the plant must be at least “two times more convex” than the one of the model in the direction $\delta\bm{v}$, i.e.:
\begin{align} \label{eq:2___32_Condition_NablaS_pluspetitque_m1} \frac{\delta\bm{v}^{\rm T} \nabla_{\bm{uu}}\mathcal{L}_p|_{\bm{v}^{\bullet}} \delta\bm{v}}{\|\delta\bm{v}\|^2} > \ & 2, & \Leftrightarrow \quad \nabla^{S}(\bm{v}^{\bullet},\delta\bm{v})< \ & -1. \end{align}
The proof of this theorem is given in Appendix <ref>.
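As an informal numerical check of statement B, one can work out $\bm{sol}(\bm{v})$ in closed form for an unconstrained one-dimensional example with a quadratic plant and a quadratic model corrected as in ISO-D. The sketch below is our own illustrative construction (all numbers are made up): it confirms that, at the plant minimum, $\nabla^S = 1 - b/a < 1$, and that $\nabla^S$ drops below $-1$ only when the plant curvature $b$ exceeds twice the model curvature $a$, consistently with the "two times more convex" reading of (<ref>).
\begin{verbatim}
a, m = 2.0, 0.0        # model curvature and model minimizer (illustrative numbers)
b, p = 3.0, 1.0        # plant curvature and plant minimizer

def sol(v):
    """Minimizer of the corrected model phi(u) + (phi_p'(v) - phi'(v)) * (u - v)."""
    phi_p_grad = b * (v - p)               # plant gradient at the correction point v
    phi_grad   = a * (v - m)               # model gradient at v
    # Stationarity of the corrected model: a * (u - m) + (phi_p_grad - phi_grad) = 0
    return m - (phi_p_grad - phi_grad) / a

eps = 1e-6
nabla_S = (sol(p + eps) - sol(p)) / eps    # numerical directional derivative at v = p
print(sol(p), nabla_S)                     # -> 1.0 (= p) and 1 - b/a = -0.5
# nabla_S < 1, as predicted at a plant minimum; it only drops below -1 when b > 2a,
# i.e. when the plant is more than "two times more convex" than the model.
\end{verbatim}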
When this theorem is applicable, the two successive iterations $\bm{u}_k = \bm{u}_p^{\star} + \delta\bm{u}_k$ and $\bm{u}_{k+1}$ can be linked with the following equation: \begin{align} & &\bm{u}_{k+1} = \ & \bm{sol}(\bm{u}_p^{\star} + \delta\bm{u}_k), \nonumber \\ & & = \ & \bm{sol}(\bm{u}_p^{\star}) + \nabla^S(\bm{u}_p^{\star},\delta\bm{u}_k) \delta\bm{u}_k + \mathcal{O}(\| \delta \bm{u}_k\|^2), \nonumber \\ & & = \ & \bm{u}_p^{\star} + \nabla^S(\bm{u}_p^{\star},\delta\bm{u}_k) \delta\bm{u}_k + \mathcal{O}(\| \delta \bm{u}_k\|^2), \label{eq:2___28_C1_Uasteri} \\ \Leftrightarrow & & \bm{u}_{k+1} - \bm{u}_p^{\star} = \ & \nabla^S(\bm{u}_p^{\star},\delta\bm{u}_k) (\bm{u}_{k} - \bm{u}_p^{\star}) + \mathcal{O}(\| \bm{u}_{k} - \bm{u}_p^{\star}\|^2). \label{eq:2___33_rcugbk} \end{align} And when $\delta\bm{u}_k\rightarrow\bm{0}$ the term $\mathcal{O}(\| \bm{u}_{k} - \bm{u}_p^{\star}\|^2)$ is negligible. \begin{align} \text{\eqref{eq:2___33_rcugbk} } \Leftrightarrow \ \|\bm{u}_{k+1} - \bm{u}_p^{\star} \|= \nabla^S(\bm{u}_p^{\star},\delta\bm{u}_k)^2 \|\bm{u}_{k} - \bm{u}_p^{\star}\|. \nonumber \end{align} It is possible to derive from this equation the condition for $\bm{u}_{k+1}$ to be closer to $\bm{u}_p^{\star}$ than $\bm{u}_{k}$ and this, for all $\bm{u}_{k}\in\mathcal{B}(\bm{u}_p^{\star},r\rightarrow 0)$, is: \begin{align} \label{eq:3___36_kcunrg} -1<\nabla^S(\bm{u}_p^{\star},\delta\bm{u}_k)< \ & 1, & \forall \delta\bm{u}_k \in\amsmathbb{R}^{n_u}. \end{align} If the model is convex at the correction point, then thanks to the equation (<ref>) of Theorem <ref>, $\forall \delta\bm{u}_k \in\amsmathbb{R}^{n_u}$, $\nabla^S(\bm{u}_p^{\star},\delta\bm{u}_k)<1$. However, there is no guarantee that $\forall \delta\bm{u}_k\in\amsmathbb{R}^{n_u}, \nabla^S(\bm{u}_p^{\star},\delta\bm{u}_k) > -1$; thus, the condition (<ref>) can be reduced to: \begin{align} \label{eq:2___36_Condition_Niveau2_SansFiltre} \Underline{\nabla}^S(\bm{u}_p^{\star}) >\ & -1, & \text{where} \qquad \Underline{\nabla}^S(\bm{u}_p^{\star}) := \ & \underset{\delta\bm{u}_k\in\amsmathbb{R}^{n_u}}{\operatorname{min}} \ \nabla^S(\delta\bm{u}_k). \end{align} which is therefore the stability condition of all RTO methods satisfying the applicability conditions of Theorem <ref> (e.g. ISO-D/I using convexified models). It is to eliminate the condition (<ref>) that [42, 43] propose to apply a filter on the iterations of the decision variables[It is also shown that an equivalent result can be obtained by applying a filter on the model updates. But this option is not considered in this thesis.]. §.§ Existence of a filter enforcing stability A filter on the iterations is implemented as follows: * The minimum of the updated model is computed: \begin{equation} \label{eq:2___27a_ModelBasedPB} \bm{u}^{\star}_{k+1} = \operatorname{arg} \underset{\bm{u}}{\operatorname{min}} \quad \phi^c_{k}(\bm{u}) \quad \text{s.t.} \quad \bm{g}^c_{k}(\bm{u}) \leq \bm{0}. \end{equation} * Then one selects $\bm{u}_{k+1}$ between $\bm{u}_k$ and $\bm{u}^{\star}_{k+1}$ with a filtering gain[In the literature a filtering matrix $\bm{K}\in\amsmathbb{R}^{n_u \times n_u}$ whose eigenvalues are in $]0,1]$ is generally used. Here, one indirectly chooses the matrix $\bm{K}=K\bm{I}$, and this choice makes the following developments far more readable.] $K\in]0,1]$: \begin{equation} \label{eq:2___27b_Filter} \bm{u}_{k+1} = \bm{u}_k + K(\bm{u}^{\star}_{k+1} - \bm{u}_k). 
\end{equation} In less mathematical terms, implementing a filter on an iteration means choosing the next operating point $\bm{u}_{k+1}$ between the current one $\bm{u}_{k}$ and the minimum of the updated model $\bm{u}_{k+1}^\star$. It is shown in [42, 43] that the use of a filter $K$ which is “sufficiently small” enforces the stability conditions for any RTO method satisfying the applicability conditions of Theorem <ref>. To achieve this result, it is sufficient to start from (<ref>): * replace $\bm{u}_{k+1}^*$ by (<ref>); * neglect the term $\mathcal{O}(\|\delta \bm{u}_k\|^2)$ to place the study in the neighborhood of $\bm{u}_p^{\star}$ (otherwise it is not the stability but the superstability that would be analyzed – as discussed later); * add “$-\bm{u}_p^{\star}$” on each side of the equality; to get \begin{align} & & \bm{u}_{k+1} - \bm{u}^{\star}_{p} = \ & \bm{u}_k + K (\bm{u}^{\star}_p + \nabla^S(\bm{u}_p^{\star},\delta\bm{u}_k) (\bm{u}_k-\bm{u}_p^{\star}) - \bm{u}_k) - \bm{u}^{\star}_{p}, \nonumber \\ & & = \ & \Big(1 - K \left( 1 - \nabla^S(\bm{u}_p^{\star},\delta\bm{u}_k) \right)\Big) (\bm{u}_{k} - \bm{u}^{\star}_{p} ), \nonumber \\ \Rightarrow & & \|\bm{u}_{k+1} - \bm{u}^{\star}_{p}\| = \ & \Big(1 - K \left( 1 - \nabla^S(\bm{u}_p^{\star},\delta\bm{u}_k) \right)\Big)^2 \|\bm{u}_{k} - \bm{u}^{\star}_{p} \|. \label{eq:2___30_IterFiltreEffet} \end{align} Therefore, in order for $\bm{u}_{k+1}$ to be closer to $\bm{u}_p^{\star}$ than $\bm{u}_{k}$ and this, for all $\bm{u}_{k}\in\mathcal{B}(\bm{u}_p^{\star},r\rightarrow 0)$, it is necessary that: \begin{align} & & \Big(1 - K \left( 1 - \nabla^S(\bm{u}_p^{\star},\delta\bm{u}_k) \right)\Big)^2 < \ & 1, \qquad \forall \delta\bm{u}_k\in\amsmathbb{R}^{n_u}, \nonumber \\ \Leftrightarrow & & -1 < 1 - K \left( 1 - \nabla^S(\bm{u}_p^{\star},\delta\bm{u}_k) \right) < \ & 1, \qquad \forall \delta\bm{u}_k\in\amsmathbb{R}^{n_u}, \nonumber \\ \Leftrightarrow & & 0 < K \left( 1 - \nabla^S(\bm{u}_p^{\star},\delta\bm{u}_k) \right) < \ & 2, \qquad \forall \delta\bm{u}_k\in\amsmathbb{R}^{n_u}, \nonumber \\ \Leftrightarrow & & 0 < K \left( 1 - \Underline{\nabla}^S(\bm{u}_p^{\star}) \right) < \ & 2, \quad \Underline{\nabla}^S(\bm{u}_p^{\star}) := \underset{\delta\bm{u}_k\in\amsmathbb{R}^{n_u}}{\operatorname{min}} \ \nabla^S(\delta\bm{u}_k). \label{eq:2___30_ConditionNiveau2} \end{align} Since $\bm{u}_p^{\star}$ is a minimum of the plant, \begin{equation*} \text{ \eqref{eq:2___29_NablaS_Signe} } \Rightarrow \quad 1 - \nabla^S(\bm{u}_p^{\star},\delta\bm{u}_k) > 0, \ \forall \delta\bm{u}_k\in\amsmathbb{R}^{n_u} \quad \Rightarrow \quad 1- \Underline{\nabla}^S(\bm{u}_p^{\star}) > 0. \end{equation*} Therefore, the stability condition is a condition on $K$: \begin{equation} \label{eq:2___36_ConditionNiveau2_K} 0 < K < \frac{2}{1-\Underline{\nabla}^S(\bm{u}_p^{\star}))}, \end{equation} and this set is never empty since $1-\nabla^S(\bm{u}_p^{\star},\delta\bm{u}_k)>0$. Finally, to choose a “small enough” filter one just needs to choose $K$ such that the inequality (<ref>) is satisfied. However, it is clear that it is a priori impossible to evaluate this inequality because to do so one would need to know both $\bm{u}_p^{\star}$ and the directional derivatives $\nabla^S$ of the function $\bm{sol}$, and neither of them are a priori known. 
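Numerically, the effect of the filter can be summarized by the local error-reduction factor $|1-K(1-\nabla^S)|$ (written here with an absolute value; the condition it yields is equivalent to the squared form used above). The following sketch, with purely illustrative values, shows the admissible range $]0,\,2/(1-\Underline{\nabla}^S)[$ and why a well-chosen $K$ can even cancel the local error.
\begin{verbatim}
def contraction_factor(K, nabla_S):
    # Local error-reduction factor |1 - K*(1 - nabla^S)| of a filtered iteration.
    return abs(1.0 - K * (1.0 - nabla_S))

def admissible_K_upper_bound(nabla_S_min):
    # Upper bound 2/(1 - underline{nabla}^S) of the stabilizing filter range,
    # valid when u_p^* is a plant minimum (so 1 - nabla^S > 0).
    return 2.0 / (1.0 - nabla_S_min)

# With nabla^S = -3 (the 1-D example of this chapter): K must lie in ]0, 0.5[.
print(admissible_K_upper_bound(-3.0))            # 0.5
for K in (0.1, 0.25, 0.4, 0.6):
    print(K, contraction_factor(K, -3.0))        # K=0.25 gives 0, K=0.6 gives 1.4 (diverges)
\end{verbatim}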
§.§ Towards an adaptive filter
To get around this problem, one proposes (i) to compute at each iteration an approximation $\utilde{\text{\hskip 0.1ex $\nabla$}}_k^S$ of $\Underline{\nabla}^S(\bm{u}_p^{\star})$ and (ii) to choose at each iteration a filter $K_k$ such that:
\begin{align}
\label{eq:2___41_strat1_adaptationK}
K_k < \ & \frac{2}{1-\utilde{\text{\hskip 0.1ex $\nabla$}}^S_{k}}, & \utilde{\text{\hskip 0.1ex $\nabla$}}_k^S := \ & \left( \frac{\bm{u}_k - \bm{u}_{k-1}}{\|\bm{u}_k - \bm{u}_{k-1}\|} \right)^{\rm T} \frac{\bm{u}^{*}_{k+1} - \bm{u}^{*}_{k}}{\| \bm{u}_{k} - \bm{u}_{k-1} \| }, & \text{if } \utilde{\text{\hskip 0.1ex $\nabla$}}_k^S < 1.
\end{align}
Any strategy for adapting the filter $K$ that follows (<ref>) provides interesting properties, in particular:
If the conditions of applicability of the Theorem <ref> are satisfied and if:
* The problem is unidimensional, i.e. $n_u=1$;
* The function $sol$ is $\mathcal{C}^1$ at $\bm{u}_p^{\star}$;
* A filter $K>0$ is applied by following the procedure (<ref>).
Then any filter adaptation strategy that satisfies (<ref>) guarantees that the stability condition (<ref>) is always satisfied.
Given that $u_p^{\star}$ is a minimum of the plant, that the function $sol$ is $\mathcal{C}^1$ at $u_p^{\star}$, and that Theorem <ref> is applicable, the function $sol$ can be approximated by:
\begin{align}
\label{eq:2___42_ApproximationLocale}
sol(u) = \ & u_p^{\star} + \nabla_u sol|_{u_p^{\star}} (u-u_p^{\star}), & \forall \bm{u} \in \amsmathbb{D}_{r} = [u_p^{\star}-r, \ u_p^{\star}+r].
\end{align}
In the domain $\amsmathbb{D}_{r}$, there is always a sub-domain $\amsmathbb{D}_{\epsilon} = [u_p^{\star}-\epsilon, \ u_p^{\star}+\epsilon]$ such that, whatever filter $K>0$ is used, $u_k \in \amsmathbb{D}_{\epsilon} \ \Rightarrow \ u_{k+1} \in \amsmathbb{D}_{r}$. Indeed,
\begin{equation*}
\begin{array}{rrcl}
& u_p^{\star} - r < & u_{k+1} & < u_p^{\star} + r \\
\Leftrightarrow & u_p^{\star} - r < & u_{k} + K(sol(u_k)-u_{k}) & < u_p^{\star} + r \\
\Leftrightarrow & u_p^{\star} - r < & u_{k} + K(u_p^{\star} + \nabla_usol|_{u_p^{\star}}(u_k-u_p^{\star})-u_{k}) & < u_p^{\star} + r \\
\Leftrightarrow & u_p^{\star} - \epsilon < & u_{k} & < u_p^{\star} + \epsilon \\
\end{array}
\end{equation*}
with
\begin{equation*}
\epsilon := \frac{r}{|1-K+K\nabla_u sol|_{u_p^{\star}}|}.
\end{equation*}
So, if an iterate $u_k\in\amsmathbb{D}_{\epsilon}$, then $\forall K>0$ the next iterate will satisfy $u_{k+1}\in\amsmathbb{D}_{r}$. Given these two iterations, and because (i) locally (<ref>) holds, (ii) $sol$ is $\mathcal{C}^1$, (iii) $n_u=1$, the estimate $\utilde{\text{\hskip 0.1ex $\nabla$}}_k^S$ is equal to the actual value of $\nabla_u sol|_{u_p^{\star}}$. Therefore, the filter chosen for the iteration $u_{k+2}$ is guaranteed to satisfy (<ref>). As a result, $u_{k+2}\in\amsmathbb{D}_{r}$ and, for all the following iterations, (<ref>) will be systematically satisfied. It follows from this observation that the sequence $\{u_{k+1},...,u_{\infty}\}$ converges asymptotically to $u_p^{\star}$.
Finally, it is enough that one iterate falls in the domain $\amsmathbb{D}_\epsilon$ for the filter chosen with (<ref>) to guarantee that the next iterates converge asymptotically to $\bm{u}_p^{\star}$. Noting that $\amsmathbb{D}_r$ is a ball $\mathcal{B}(\bm{u}_p^{\star},r \rightarrow 0)$ in $\amsmathbb{R}$, it can be concluded that any strategy that satisfies (<ref>) guarantees the satisfaction of the stability condition (<ref>).
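In practice, the rule (<ref>) only requires quantities that the RTO scheme already produces: two consecutive operating points and the two corresponding unfiltered model optima. A minimal Python sketch of the estimator and of the resulting upper bound on $K_k$ is given below (array shapes and function names are assumptions).
\begin{verbatim}
import numpy as np

def estimate_nabla_S(u_k, u_km1, u_star_kp1, u_star_k):
    # Estimate of nabla^S from two consecutive iterates and the two
    # corresponding unfiltered model optima (cf. the adaptive-filter rule).
    step = np.asarray(u_k, float) - np.asarray(u_km1, float)
    return float(step @ (np.asarray(u_star_kp1) - np.asarray(u_star_k))) / np.linalg.norm(step) ** 2

def filter_upper_bound(nabla_S_est):
    # Largest admissible filter 2/(1 - nabla_S_est); only defined when nabla_S_est < 1.
    assert nabla_S_est < 1.0
    return 2.0 / (1.0 - nabla_S_est)
\end{verbatim}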
Hereafter, a graphical interpretation of Theorem <ref> is proposed: If the conditions of applicability of Theorem <ref> are satisfied by a RTO method and if no filter is used. Then, $\forall u_o$ close to a stationary point of the plant $u_p^\bullet$ the iterations $u_{k>0}$ will follow a behavior similar to one of the 4 cases shown in Figure <ref>. The four cases of Figure <ref> correspond to situations where: \begin{align*} \text{case A: } & \ \nabla_u sol|_{u_p^{\star}} \in ]1,+\infty[, & \text{case B: } & \ \nabla_u sol|_{u_p^{\star}} \in ]0,1[, \\ \text{case C: } & \ \nabla_u sol|_{u_p^{\star}} \in ]-1,0[, & \text{case D: } & \ \nabla_u sol|_{u_p^{\star}} \in ]-\infty,-1]. \end{align*} figureFour possible cases when $n_u=1$ and $sol$ is $\mathcal{C}^{1}$ at $u_p^{\bullet}$. The black line represents the function $sol(u)$. This line crosses the point $[u_p^\bullet,u_p^\bullet]$ because according to Theorem <ref> $sol(u_p^\bullet)=u_p^\bullet$. The evolution of the iterations is represented by yellow points on this curve. (One starts from $\delta u_k = \delta u_0 $ to generate the first point $[\delta u_0, sol(u_p^\bullet + \delta u_0)]^{\rm T}$. As no filter is applied, the ordinate of one point is the abscissa of the next one. So, to construct a new point (i.e. move forward one iteration) one must horizontally project a point on the line $y= \delta u_k + u_p^\bullet$ (black dotted line) to then project it vertically on the curve $u_{k+1} = sol(u_p^\bullet + \delta u_k)$.) Finally, a color code is used to clearly show in which case (A,B,C,D) one is located. If the function $sol$ is in the green domain, then it is the B or C and a filter is not necessary to make the RTO method stable, i.e. $\nabla_u sol|_{u_p^\bullet}\in[-1,1]$ which can be linked to (<ref>). On the other hand, if it is in the yellow or red areas, then one can observe that the iterations do not converge on $u_p^{\bullet}$. Now, let's build scenarios A' and D' similar to scenarios A and D but using a filter (i.e. “where each horizontal projection is reduced”). Figure <ref> illustrates what is obtained and two things can be observed: (Case D':) If the function $sol$ is in the yellow area (i.e. if $\nabla_u sol|_{u_p^\bullet}< -1$) then a filter can provide stability. The figure on the very right of Figure <ref> gives a graphical interpretation of the appropriate choices of $K$. (Case A':) If the function $sol$ is in the red area (i.e. if $\nabla_u sol|_{u_p^\bullet}> 1$), then there is no $K>0$ providing stability. However, according to the Theorem <ref>, $\nabla_u sol|_{u_p^\bullet}> 1$ implies that $u_p^\bullet$ s not a minimum of the plant. So this particular case, is a case for which the convergence on $u_p^\bullet$ is not desirable. Hence the interest in setting a lower bound on the choice of $K$ to $0$. figureIllustration of the effects of the filter. Finally, the green, yellow and red cones whose common vertex is $u_p^{\bullet}$ provide the following information: “If for a point $u$ the point $[u,sol(u)]$ is within the cone of color * green, then $sol(u)$ will be closer to $u_p^{\bullet}$ than $u$. * yellow, then $sol(u)$ will be closer to $u_p^{\bullet}$ than $u$ if a small enough filter is used . * red, then $sol(u)$ will be farther from $u_p^{\bullet}$ than $u$. These color cones are reused later in this chapter. 
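The four cases of the graphical interpretation can also be reproduced numerically with a few lines. In the sketch below the function $sol$ is taken affine with a prescribed slope at the fixed point (an illustrative assumption): slopes in $]-1,1[$ converge without a filter (cases B and C), a slope of $-3$ diverges unfiltered but converges with $K=0.25$ (cases D and D'), and a slope larger than $1$ moves away from the fixed point (case A).
\begin{verbatim}
def iterate(slope, u0, K=1.0, n_iter=8, u_fixed=0.0):
    # Iterate u_{k+1} = u_k + K*(sol(u_k) - u_k) for an affine sol with the
    # given slope at the fixed point u_fixed; return the distances |u_k - u_fixed|.
    sol = lambda u: u_fixed + slope * (u - u_fixed)
    u, dist = u0, []
    for _ in range(n_iter):
        u = u + K * (sol(u) - u)
        dist.append(abs(u - u_fixed))
    return dist

# Unfiltered (K=1): slopes 0.5 and -0.5 converge, slope -3 oscillates and
# diverges, slope 2 diverges monotonically.
for slope in (0.5, -0.5, -3.0, 2.0):
    print(slope, iterate(slope, u0=0.1)[:4])
# Case D': the same slope -3 becomes convergent with a small enough filter.
print(-3.0, iterate(-3.0, u0=0.1, K=0.25)[:4])
\end{verbatim}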
§.§ Towards an optimal adaptive filter To go further, one can take the equation (<ref>) and choose the filter $K$ which minimizes the term “$1 - K \big( 1 - \nabla^S(\bm{u}_p^{\star},\delta\bm{u}_k) \big)$” so that $\bm{u}_{k+1}$ is as close as possible to $\bm{u}_p^{\star}$. Of course, $\nabla^S(\bm{u}_p^{\star},\delta\bm{u}_k)$ is unknown but can be approximated with (<ref>). So, the idea is to define the filter to be used as the solution of the following optimization problem: \begin{equation} \label{eq:2___43__filtre_Optimal} K^\star_k = \operatorname{arg} \ \underset{K}{\operatorname{min}} \quad \left| 1 - K \left[ 1 - \utilde{\text{\hskip 0.1ex $\nabla$}}^S_{k} \right] \right| , \quad \text{s.t.} \quad K > K_o, \quad \text{si } \utilde{\text{\hskip 0.1ex $\nabla$}}_k^S < 1. \end{equation} where $K_o>0$ is the minimum value that the filter can take, which must be chosen by the engineers supervising the plant. The default choice is set to $K_o:=0.1$. This optimization problem has a simple explicit solution which is: \begin{align} \label{eq:2___36_OptimizationPB_ChoiceK} K^{\star}_k = \ & \max\left\{K_o, \frac{1}{1- \utilde{\text{\hskip 0.1ex $\nabla$}}^S_{k}} \right\}, & \text{if } \utilde{\text{\hskip 0.1ex $\nabla$}}_k^S < 1. \end{align} Notice that by definition $K_k^\star$ always satisfies the condition (<ref>) because it is obvious that it is always true that: \begin{align*} \frac{1}{1- \utilde{\text{\hskip 0.1ex $\nabla$}}^S_{k}} < \ & \frac{2}{1- \utilde{\text{\hskip 0.1ex $\nabla$}}^S_{k}}, & \text{if: } \qquad \utilde{\text{\hskip 0.1ex $\nabla$}}^S_{k} <1. \end{align*} Let's illustrate the effects of using this adaptive filter on two purely mathematical examples. * Example <ref> is a detailed analysis of the solving of a 1D problem ($n_u=1$) satisfying all the conditions of Theorem <ref>. Some practical points related to the implementation, such as convergence management, are discussed. * Example <ref> proposes an empirical analyses of the use of the adaptive filter (<ref>). It is observed that very interesting behaviors emerges from the use of this adaptive filter. (This is an example used in [42] (the example 4.3 page 112).) One considers a problem where the functions of the plant and the model are: \begin{align*} \phi(u,y) := \ & y, \ & y_p = f_p(u) := \ & (u-1)^2, \ & y = f(u) := \ & \frac{u^2}{4}, \end{align*} and where the constraints are simply on the inputs: $-5 \leq u \leq 5$. In this case, it is clear that ISO-D/I are identical and that the equilibrium condition is always satisfied since the cost function is convex at all points and the constraints are known. It is therefore a “simple” case study allowing for the effects of the filter to be analyzed. The rest of this example is divided into two parts. In part 1, a theoretical analysis of the convergence properties of ISO-D/I w.r.t. the filter is proposed. In part 2, simulation results are shown and some "practical" aspects of the implementation of the adaptive filter are discussed. Part 1: Theoretical analysis The minimum of the plant is $u_p^{\star} = 1$ and no constraints are active at this point. 
So the function $sol$ given in Figure <ref> can be approximated around $u_p^{\star}$ in the following way: \begin{align*} & & \widetilde{\phi}(u,u_p^{\star}) := \ & \phi_p|_{u_p^{\star}} + \nabla_u \phi_p|_{u_p^{\star}} (u - u_p^{\star}) + \frac{1}{2} \nabla^2_{uu} \phi|_{u_p^{\star}} (u - u_p^{\star})^2, \nonumber \\ \Rightarrow & & \nabla_u \widetilde{\phi}(u,u_p^{\star}) = \ & \nabla_u \phi_p|_{u_p^{\star}} + \nabla^2_{uu} \phi|_{u_p^{\star}} (u-u_p^{\star}), \\ \Rightarrow & & 0 = \ & \nabla_u \phi_p|_{u_p^{\star}} + \nabla^2_{uu} \phi|_{u_p^{\star}} (sol(u_p^{\star})-u_p^{\star}), \nonumber \\ \Rightarrow & & sol(u_p^{\star}) = \ & u_p^{\star} - \frac{\nabla_u \phi_p|_{u_p^{\star}}}{\nabla^2_{uu} \phi|_{u_p^{\star}}}. \label{eq:A___12_Solution} \end{align*} One can observe on Figure <ref> (i) that the function $sol(u)$ is piecewise-$\mathcal{C}^1$ and globally $\mathcal{C}^0$; (ii) that the function $sol(u)$ is $\mathcal{C}^1$ at $u_p^{\star}$; and (iii) that a filter will be necessary to bring stability since the curve is in the yellow area around $u_p^{\star}$. \begin{align*} \nabla^S(u_p^{\star},1) = \nabla^S(u_p^{\star},-1) = \ & \nabla_{u_k} sol|_{u_p^{\star}} , \\ = \ & 1- \frac{\nabla^2_{uu} \phi_p|_{u_p^{\star}} \nabla^2_{uu} \phi|_{u_p^{\star}} - \nabla^3_{uuu}\phi|_{u_p^{\star}} \nabla_{u} \phi_p|_{u_p^{\star}} }{\nabla^2_{uu}\phi|_{u_p^{\star}} ^2}, \\ = \ & 1- \frac{\nabla^2_{uu} \phi_p|_{u_p^{\star}} }{\nabla^2_{uu}\phi|_{u_p^{\star}} } = 1- \frac{2}{1/2} = -3. \end{align*} Therefore, in order to satisfy the stability condition one should choose the filter in the interval: $K \in]0,\ \frac{2}{1-(-3)}=\frac{1}{2}[$. And according to (<ref>), the optimal filter is: \begin{equation} \label{eq:2___44_Exemple2_1_filtreoptimal} K^{\star} = \frac{1}{1-(-3)} = \frac{1}{4}. \end{equation} figureExample <ref>: The function $sol(u)$. Part 2: Simulations & Implementation. One runs several simulations starting from the minimum of the model: $u_0=0$, and one considers the three following approaches: The classical approach (MA and MAy) which consists in letting the engineers supervising the plant predefine the filter to be used. Figure <ref> shows the results obtained for different filter choices: $K=\{0.1, 0.2, 0.25,0.3, 0.4,$ $0.5\}$. It can be noticed that $K=0.25$ is indeed the best filter since it allows an immediate convergence. On the other hand, all filter $K>0.5$ do not enable convergence. So, depending on the choice that is made this approach may, or may not, converge to the plant optimum. figureThe classical approach: Results for multiple filter choices . The maximal approach which consists in choosing, at each iteration, the largest filter which guarantees stability, i.e. at each iteration the filter that is applied is: \begin{align} & \text{If } k =0 : & K_k = \ & K_o, \\ & \text{If } \utilde{\text{\hskip 0.1ex $\nabla$}}^S_{k} < 1 \text{ et } k > 0 : & K_k = \ & \max\Big\{K_o := 0.1, \ 2/(1-\utilde{\text{\hskip 0.1ex $\nabla$}}^S_{k}) - 0.1\Big\}, \label{eq:2___gcerbgfj}\\ & \text{Otherwise:} & K_k = \ & 1. \qquad \text{(No filter)} \end{align} The idea is to estimate the critical value of the filter at the iteration $k$, then select a slightly smaller one (hence “$-0.1$”) guaranteeing the satisfaction of the stability condition. 
The optimal approach, which consists in choosing the filter at each iteration according to the following rules:
\begin{align*}
& \text{If } k =0 : & K_k = \ & K_o, \\
& \text{If } \utilde{\text{\hskip 0.1ex $\nabla$}}^S_{k} < 1 \text{ and } k > 0 : & K_k = \ & \text{\eqref{eq:2___36_OptimizationPB_ChoiceK}}, \\
& \text{otherwise:} & K_k = \ & 1. \qquad \text{(No filter)}
\end{align*}
Simulation results: The maximal and optimal approaches require that a past iteration exists to allow the estimation of $\utilde{\text{\hskip 0.1ex $\nabla$}}^S_{k}$. However, at the first iteration no such past iteration exists. So, one must predefine a filter to use for the first iteration. In this example, and in the rest of this thesis, one uses $K_o=0.1$. The simulation results of these two approaches are given in Figure <ref>. One can see that the optimal approach converges to the optimum of the plant in 2 iterations while finding the optimal value of the filter. The maximal approach is much slower as it requires 10 iterations (or 45 if a very precise estimate of $u_p^{\star}$ is desired).
figureComparison of maximal (in black and blue) and optimal (in magenta) approaches. The blue curve shows what the maximal approach identifies as the limit value of the filter.
Remark on the management of the convergence: One has noticed that when the iterates $u_k$ are very close to the solution $u_p^{\star}$, the maximal and optimal approaches end up choosing filters $K_k$ in a quasi-random way. In fact, it is the computation of $\utilde{\text{\hskip 0.1ex $\nabla$}}^S_{k}$ with (<ref>) that becomes erratic, because the fraction $(u^*_{k+1}-u^*_{k})/(||u_{k}-u_{k-1}||)$ becomes a division of numbers so small that their value is essentially numerical noise related to the precision of the computer. Not managing this can lead to unexpected behaviors. Therefore, one suggests handling this practical issue with the following improved filter selector:
\begin{align}
& \text{If } k =0 : & & K_k = K_o, \nonumber \\
& \text{If } \utilde{\text{\hskip 0.1ex $\nabla$}}^S_{k} < 1 \ \text{ and } \ k > 0 \ \text{ and } \ |u_{k}-u_{k-1}| \leq a: & & K_k = K_{k-1}, \nonumber \\
& \text{If } \utilde{\text{\hskip 0.1ex $\nabla$}}^S_{k} < 1 \ \text{ and } \ k > 0 \ \text{ and } \ |u_{k}-u_{k-1}|>a: & & K_k = \eqref{eq:2___gcerbgfj} \text{ or } \text{\eqref{eq:2___36_OptimizationPB_ChoiceK}} \nonumber \\
& \text{Otherwise:} & & K_k = 1, \ \text{(No filter)}. \label{eq:2___39_GestionConvergence}
\end{align}
with
\begin{equation*}
a:= \text{solver's precision}\times100 = 10^{-12}\times 100 = 10^{-10}.
\end{equation*}
(To obtain the accuracy of the solver one just needs to launch a simulation where, instead of optimizing the plant, one optimizes a modified version of the model. One will then see around which values the iteration size $\| u_{k} - u_{k-1} \|$ converges. In this simulation it converges to $\approx10^{-12}$.)
This example shows that, contrary to the insight one may have, filtering alone does not necessarily imply a decrease in the speed of convergence. Indeed, it has been shown in a mathematical study that a well-chosen filter $K=0.25$ can enable much faster convergence than a larger filter $K=0.4$ (and, based on several other tests not presented here, this observation is independent of the starting point).
(Is $\bm{K_k^\star}$ still an appropriate choice when $n_u>1$?)
One considers the following generic RTO problem:
\begin{align*}
\phi(u,y) := \ & y, \ & y_p = f_p(u) := \ & \frac{(\bm{u}-\bm{1}_{n_u})^{\rm T} \bm{A} (\bm{u}-\bm{1}_{n_u})}{2}, \ & y = f(u) := \ & \frac{\bm{u}^{\rm T} \bm{u}}{4},
\end{align*}
where $\bm{1}_{n_u} := [1,...,1]^{\rm T}\in \amsmathbb{R}^{n_u}$, and where clearly $\bm{u}_p^{\star} = \bm{1}_{n_u}$. Based on simulation results, the behavior emerging from the use of the filter update strategy (<ref>)-(<ref>) is analyzed empirically. In the first part, one considers the case $n_u = 2$, and in the second part one considers the case $n_u = 3$.
Part 1: “$\bm{n_u = 2}$”. One sets:
\begin{equation*}
\bm{A} = \left(\begin{array}{cc} 2 & 0 \\ 0 & 1 \end{array} \right),
\end{equation*}
and runs a simulation using ISO with the filter update strategy (<ref>)-(<ref>). Figure <ref> shows the simulation results.
figureSimulation results with the optimal filter adaptation.
One can see that when the value of the filter is close to $1/2$ (highlighted with a magenta background), the distance $\|u_{(2)}-u_{p(2)}^\star\|$ decreases significantly. On the contrary, when it is close to $1/4$ (highlighted with a gray background), it is the distance $\|u_{(1)}-u_{p(1)}^\star\|$ that decreases significantly. Moreover, one can observe that the filter is systematically close to either $1/2$ or $1/4$. These two values $\{1/4,\ 1/2\}$ are not meaningless as they correspond to the optimal filter values when working only in the subspaces $\bm{u} = \bm{u}_p^{\star} + \alpha[1, 0]^{\rm T}$ and $\bm{u} = \bm{u}_p^{\star} + \alpha[0, 1]^{\rm T}$ with $\alpha\in\amsmathbb{R}$.
* In the subspace $\bm{u} = \bm{u}_p^{\star} + \alpha[1, 0]^{\rm T}$ with $\alpha\in\amsmathbb{R}\backslash 0$:
\begin{align*}
& \alpha[1, 0]\nabla_{u} \bm{sol}|_{\bm{u}_p^{\star}} = 1- \frac{ \alpha[1, 0] \nabla^2_{\bm{uu}}\phi_p|_{\bm{u}_p^{\star}} \alpha[1, 0]^{\rm T} }{ \alpha[1, 0] \nabla^2_{\bm{uu}}\phi|_{\bm{u}_p^{\star}} \alpha[1, 0]^{\rm T} } = 1- \frac{2}{1/2} = -3, \\
& \Rightarrow \ K^*_k = \min\left( 1,\ \frac{1}{1- (-3) }\right) = \frac{1}{4}.
\end{align*}
* In the subspace $\bm{u} = \bm{u}_p^{\star} + \alpha[0, 1]^{\rm T}$ with $\alpha\in\amsmathbb{R}\backslash 0$:
\begin{align*}
& \alpha[0, 1]\nabla_{u} \bm{sol}|_{\bm{u}_p^{\star}} = 1- \frac{ \alpha[0, 1] \nabla^2_{\bm{uu}}\phi_p|_{\bm{u}_p^{\star}} \alpha[0, 1]^{\rm T} }{ \alpha[0, 1] \nabla^2_{\bm{uu}}\phi|_{\bm{u}_p^{\star}} \alpha[0, 1]^{\rm T} } = 1- \frac{1}{1/2} = -1, \\
& \Rightarrow \ K^*_k = \min\left( 1,\ \frac{1}{1- (-1) }\right) = \frac{1}{2}.
\end{align*}
Therefore, it seems that the use of the strategy (<ref>)-(<ref>) implicitly implies that the solving of the 2-D problem ($n_u=2$) is done through a process that transforms this problem into a succession of 1-D problems in subspaces comparable to $\bm{u} = \bm{u}_p^{\star} + \alpha[1, 0]^{\rm T}$ and $\bm{u} = \bm{u}_p^{\star} + \alpha[0, 1]^{\rm T}$ with $\alpha\in\amsmathbb{R}$.
In Figure <ref> these results are compared with the ones obtained by applying several fixed filters $K=\{0.25, \ 0.33, \ 0.5\}$. The conclusion is clear: (<ref>)-(<ref>) outperforms any fixed-filter strategy. Note that the fixed filter $K=0.33$ has been selected by hand and is close to the best one can get with a fixed filter for this problem.
figureThe optimal approach (in black) is better than any fixed filter approach (in green for $K=0.5$, blue for $K=0.25$, and magenta for $K=0.33$).
(To go further: By varying the values of the elements of the diagonal of $\bm{A}$, comparable results are obtained.
However, if the matrix $\bm{A}$ is not diagonal, i.e. if a rotation is applied to it, then the results will remain similar but the value of $K_k^{\star}$ will oscillate between the values of the optimal filters in the subspaces $\bm{u} = \bm{u}_p^{\star} +\alpha \bm{v}_1$ and $\bm{u} = \bm{u}_p^{\star} +\alpha \bm{v}_2$, where $\alpha\in\amsmathbb{R}$, and where $(\bm{v}_1, \bm{v}_2)$ are the eigenvectors of $\bm{A}$.)
Part 2: “$\bm{n_u = 3}$”. One now considers a 3-D case, and one wants to check whether the empirical observations made on the 2-D problems remain applicable. One sets:
\begin{equation*}
\bm{A} = \left(\begin{array}{ccc} 2 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 2/3 \end{array} \right),
\end{equation*}
and runs a simulation using ISO with the filter update strategy (<ref>)-(<ref>). Figure <ref> shows the simulation results.
figureSimulation results using the optimal filter adaptation.
The analysis of these results is similar to the analysis done for the 2-D case. Therefore, one can skip the technical details and go directly to the calculation of the optimal filters:
* In the subspace $\bm{u} = \bm{u}_p^{\star} + \alpha[1, 0, 0]^{\rm T}$ with $\alpha\in\amsmathbb{R}\backslash 0$: $K^*_k = 1/4$.
* In the subspace $\bm{u} = \bm{u}_p^{\star} + \alpha[0, 1, 0]^{\rm T}$ with $\alpha\in\amsmathbb{R}\backslash 0$: $K^*_k = 1/2$.
* In the subspace $\bm{u} = \bm{u}_p^{\star} + \alpha[0, 0, 1]^{\rm T}$ with $\alpha\in\amsmathbb{R}\backslash 0$: $K^*_k = 3/4$.
These three levels can indeed be found in Figure <ref>. This validates the extension of the empirical analysis from 2-D cases to higher-dimensional cases. Moreover, as in Part 1, the strategy (<ref>)-(<ref>) is superior to any classical strategy which consists in fixing the filter $K$ from the start (see Figure <ref>). The results obtained with the adaptive filter are in black and those obtained with the fixed filter $K=0.34$ (manually identified as the “best” fixed filter) are in magenta.
The filter update strategy (<ref>)-(<ref>) leads to the emergence of interesting behaviors when the problem dimension is greater than 1. In such cases, the value of the filter appears to alternate between its optimal values in sub-dimensions of the $n_u$-dimensional problem. (Larger dimensions with matrices $\bm{A}$ which are non-diagonal lead to the same observations on testing.)
figureThe adaptive approach (in black) is better than any fixed filter approach (whose best results are plotted in magenta).
(Should one use an upper bound for the filter: $K<1$?)
When one uses the strategy (<ref>)-(<ref>) it may happen that:
\begin{equation}\label{eq:2___36_OptimizationPB_ChoiceK_2}
\max\left\{0.1, \frac{1}{1- \utilde{\text{\hskip 0.1ex $\nabla$}}^S_{k}} \right\} > 1.
\end{equation}
This fact calls into question the classical approach (of MA and MAy) of choosing a filter between $0$ and $1$. This limit can easily be imposed by replacing (<ref>) by
\begin{align}
\label{eq:2___50_JCBJSRCFHYUDFCR}
K^{\star}_k = \ & \min\left\{1,\ \max\left\{K_o, \frac{1}{1- \utilde{\text{\hskip 0.1ex $\nabla$}}^S_{k}} \right\}\right\}, & \text{if } \utilde{\text{\hskip 0.1ex $\nabla$}}_k^S < 1.
\end{align}
In the following example, the strategy (<ref>)-(<ref>) is compared to the one using an upper bound on the filter, (<ref>)-(<ref>), on a case where (<ref>) holds. The results clearly show the superiority of the unbounded approach.
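The difference between the uncapped rule (<ref>) and the capped rule (<ref>) is easy to see numerically: whenever the estimated slope lies in $]0,1[$, the uncapped optimal filter exceeds $1$. A minimal Python sketch, with illustrative values only:
\begin{verbatim}
def optimal_filter(nabla_S_est, K_o=0.1, upper_bound=None):
    # Optimal filter max{K_o, 1/(1 - nabla_S_est)}, optionally capped at
    # upper_bound (e.g. 1.0 for the classical choice K in ]0,1]).
    assert nabla_S_est < 1.0
    K = max(K_o, 1.0 / (1.0 - nabla_S_est))
    return K if upper_bound is None else min(upper_bound, K)

# When the model is much more curved than the plant, the estimate can lie in ]0,1[:
print(optimal_filter(0.8))                   # 5.0  (unbounded rule)
print(optimal_filter(0.8, upper_bound=1.0))  # 1.0  (bounded rule)
\end{verbatim}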
(Comparison of (<ref>)-(<ref>) versus (<ref>)-(<ref>)) One takes the Example <ref> and one changes the model $f$: \begin{align*} \phi(u,y) := \ & y, \ & y_p = f_p(u) := \ & (u-1)^2, \ & y = f(u) := \ & 5u^2. \end{align*} This change of model has the effect of changing the functions $sol$ associated to this RTO problem which becomes the one represented on the Figure <ref>. figureThe function $sol(u_k)$. Now, let's implement ISO with the strategies (<ref>)-(<ref>) and (<ref>)-(<ref>) and let's see the results: see Figure <ref>. These results show very clearly that the strategy (<ref>)-(<ref>) leads to a much faster convergence than (<ref>)-(<ref>), in 2 iterations against 111 for a precise convergence and 2 iterations against 25 for an "approximate" convergence. Thus, this simulation supports the idea of dropping the upper bound on the filter $K$. figureResults of the strategy (<ref>)-(<ref>) (in black) and of the strategy (<ref>)-(<ref>) (in magenta). At this stage one has shown that the equilibrium condition of ISO-D/I can be enforced with the use of locally convex models. One has also shown that the stability can be enforced with the use of a filter on the inputs (<ref>) whose value can be automatically tuned with the procedure (<ref>)-(<ref>) or (<ref>)-(<ref>). Let us now move on to the analysis of the superstability. § ENFORCING THE SUPERSTABILITY CONDITION To determine the superstability condition, one must go back to the beginning of the stability analysis. In fact, the only thing one has to do is to start from (<ref>) and: * replace $\bm{u}_{k+1}^*$ by (<ref>); * not neglect term $\mathcal{O}(\|\delta \bm{u}_k\|^2)$ to not place the study only in the neighborhood of $\bm{u}_p^{\star}$; * add “$-\bm{u}_p^{\star}$” on both sides of the equality; to find: \begin{align} \bm{u}_{k+1} - \bm{u}^{\star}_{p} = \ & \bm{u}_k + K (\bm{u}^{\star}_p + \nabla^S(\delta\bm{u}_k) (\bm{u}_k-\bm{u}_p^{\star}) + \mathcal{O}(\|\delta \bm{u}_k\|^2) - \bm{u}_k) - \bm{u}^{\star}_{p}, \nonumber \\ = \ & \left(1 - K \left( 1 - \nabla^S(\delta\bm{u}_k) - \mathcal{O}(\|\delta \bm{u}_k\|) \right)\right) (\bm{u}_{k} - \bm{u}^{\star}_{p} ). \label{eq:2___41_IterFiltreEffet_Global} \end{align} This equation is similar to (<ref>) but the difference is that it takes into account the higher order terms $\mathcal{O}(\|\delta \bm{u}_k\|)$ of (<ref>). The superstability condition is then: \begin{equation} \label{eq:2___42_ConditionNiveau3} -1 < 1 - K \left( 1 - \left(\nabla^S(\delta\bm{u}_k) + \mathcal{O}(\|\delta \bm{u}_k\|) \right) \right) < 1, \qquad \forall \delta\bm{u}_k\in\amsmathbb{R}^{n_u}. \end{equation} At first glance, the term “$\nabla^S(\delta\bm{u}_k) + \mathcal{O}(\|\delta \bm{u}_k\|)$” may seem complicated to understand, but in fact it has a simple geometric meaning. It is the “slope [The tangent of the angle between the line passing through $[\bm{u}_p^{\star}{}^{\rm T}, \bm{sol}(\bm{u}_p^{\star})^{\rm T}]^{\rm T}$ and $[(\bm{u}_p^{\star}+\delta\bm{u}_k)^{\rm T}, \bm{sol}(\bm{u}_p^{\star}+\delta\bm{u}_k)^{\rm T}]^{\rm T}$ and the line passing through $[\bm{u}_p^{\star}{}^{\rm T}, \bm{0}^{\rm T}]^{\rm T}$ and $[(\bm{u}_p^{\star}+\delta\bm{u}_k)^{\rm T}, \bm{0}^{\rm T}]^{\rm T}$. ]” of the line passing through the points $\bm{u}_p^{\star}$ and $\bm{sol}(\bm{u}_p^{\star} + \delta \bm{u}_k)$. 
It is therefore proposed to replace it by a term $\Delta^S(\delta\bm{u}_k)$ which is less intimidating and which is defined as follows: \begin{equation*} \Delta^S(\delta\bm{u}_k) := \nabla^S(\delta\bm{u}_k) + \mathcal{O}(||\delta \bm{u}_k||) % - \mathcal{O}(||\delta \bm{u}_k||), \\ % \ & \left(\frac{\delta \bm{u}_k}{\|\delta \bm{u}_k\|}\right)^{\rm T} \frac{\bm{sol}(\bm{u}_p^{\star} + \delta \bm{u}_k) - \bm{u}_p^{\star}}{\|\delta \bm{u}_k\|} \end{equation*} The condition (<ref>) can then be rewritten more simply as: \begin{align} 0 < K<\frac{2}{1-\Underline{\Delta}^S}, && \text{ si: } \Overline{\Delta}^S < 1, \end{align} \begin{align} \Underline{\Delta}^S := \ & \underset{\delta\bm{u}_k}{\operatorname{min}}\{\Delta^S(\delta\bm{u}_k)\}, & \Overline{\Delta}^S := \ & \underset{\delta\bm{u}_k}{\operatorname{max}}\{\Delta^S(\delta\bm{u}_k)\}. \end{align} As these calculations and reasoning are neither very intuitive, nor easy to visualize, one proposes the following graphic interpretation: Let's take the simplest possible case where $n_u=1$ and where $sol(u)$ is globally $\mathcal{C}^1$. Figure <ref> shows what the function $sol$ could look like (the black curve). The green, yellow and red cones shown are the same as the ones used in the graphical interpretation <ref>. the values of $\Underline{\Delta}^S$ and $\Overline{\Delta}^S<1$ are given graphically as the minimum and maximum slopes between $[u_p^{\star},u_p^{\star}]$ and all the points $[u,sol(u)]$ on the curve $sol$. It should be clear that if all the points $[u,sol(u)]$ are in the green cone then superstability is guaranteed without requiring a filter. If all the points $[u,sol(u)]$ are in the green and yellow cones then superstability is guaranteed only if the filter is small enough. To understand this at a geometrical level, one must associate Figures <ref> (image on the right) and <ref>. figureGraphical interpretation of the superstability condition Unfortunately, it is impossible to guarantee (<ref>) without questioning what was proposed to guarantee the equilibrium (and thus indirectly the stability). Indeed, forcing the updated model to be convex at any correction point makes the plant's minimums, maximums and saddle points fixed points of ISO-D/I. In other words, if $\bm{u}_p^{\bullet}$ is a stationary point – a minimum, maximum or saddle point – of the plant, then $\bm{sol}(\bm{u}_p^{\bullet}) = \bm{u}_p^{\bullet}$. So, if the plant has several stationary points, then it is certain that $\Overline{\Delta}^S\geq 1$ and therefore the condition (<ref>) cannot be satisfied. Nevertheless, even though the plant maximums and saddle points are fixed points of the improved versions of ISO-D and ISO-I, they are “unstable” fixed points. Indeed, the Theorem <ref> through the equations (<ref>) shows that the function $\bm{sol}$ around such points has directional derivatives $\geq 1$, and one knows that stability is only provided if they are $<1$. So, such points only satisfy the equilibrium conditions, and any iteration close to them, but not equal to them, is likely to lead to a sequence of iterations that diverge, despite the use of local convexifications and the filter. 
This instability is illustrated in the following example: Let be an unconstrained RTO problem where $n_u=1$, where the cost function is \begin{align*} \phi(u,y) := \ & y, \end{align*} and where the plant and the model are: \begin{align*} y_p = f_p(u) := \ & \frac{1}{4}-\frac{1}{4}u^2+\frac{1}{4}u^3+\frac{1}{16}u^4 - \frac{3}{20}u^5 + \frac{1}{24}u^6, & \ \ \ y = f(u) := \ & \frac{u^2}{4}. \end{align*} Figure <ref> shows what these curves look like and one can observe that this plant has two minimums ($u_p^{\bullet}=\{-1,\ 2\}$), one saddle-point ($u_p^{\bullet}=1$) and one maximum ($u_p^{\bullet}=0$). Moreover, one can see that the model is convex and therefore satisfies the equilibrium condition. figureThe cost functions. The model updated at a point $u_k$ is: \begin{equation*} \phi_k(u) = \phi_p(u_k) + \nabla_u\phi_p|_{u_k}(u-u_k) + \frac{1}{4}(u-u_k)^2. \end{equation*} Figure <ref> gives the solution $sol(u_k)$ of the optimization problem based on this model updated at $u_k$ for all $u_k$. In addition to this curve, the green, yellow, and red cones of the Graphical Interpretation <ref> have been added at the four fixed points (where $sol(u_k)=u_k$) so that it is possible to visually judge whether their are stable. Clearly, only the minimums of the plant $u_p^{\star}=\{-1,\ 2\}$ meet the necessary conditions of stability ($sol(u_k)$ is locally in the green and yellow areas, so there is a filter that stabilizes these points). On the other hand, any point $u_k$ in the neighborhood of the maximum of the plant $u_p^{\bullet}=0$ leads to a $u_{k+1}$ which goes away from it (because $sol(u_k)$ is locally in the red cone). Concerning the saddle point $u_p^{\bullet}=1$ one can see that the function $sol$ is tangential to the boundary between the red and green cones at this point. On the left side it is tangent but in the green domain while on the right side it is tangent but in the red domain. This means that the points on the left of $u_p^{\bullet}=1$ would lead to a step towards $u_p^{\bullet}=1$, and the points on the right of $u_p^{\bullet}=1$ would lead to a step away from $u_p^{\bullet}=1$. figureGraphical analysis of the convergence capabilities on various types of fixed points of the improved versions of ISO-D/I. So, as one can see with the Example <ref>, the validity of the condition $\Overline{\Delta}^S\geq 1$ depends on the number of stationary points in the plant and this number cannot be manipulated. If one decides to ignore the condition $\Overline{\Delta}^S\geq 1$, there remains only the condition: $0 < K< 2/(1-\Underline{\Delta}^S)$. The value of $\Underline{\Delta}^S$ is in practice not accessible because a priori neither $\bm{u}_p^{\star}$ nor $\bm{sol}(\bm{u})$ are known, $\forall \bm{u}\in\amsmathbb{R}^{n_u}$. However, an idea to estimate $\Underline{\Delta}^S$ could be the following: \begin{equation} \label{eq:2___43_Une_Borne_Superieur_Pour_Niveau3} \Underline{\Delta}^S \approx \utilde{\text{\hskip 0.1ex $\Delta$}}^S_k := \underset{\ell=1,...,k}{\operatorname{min}} \left\{ \left(\frac{\bm{u}_{\ell} - \bm{u}_{\ell-1}}{\|\bm{u}_{\ell} - \bm{u}_{\ell-1}\|}\right)^{\rm T} \frac{\bm{sol}(\bm{u}_{\ell}) - \bm{sol}(\bm{u}_{\ell-1})}{\|\bm{u}_{\ell} - \bm{u}_{\ell-1}\|} \right\}. \end{equation} Basically, $\utilde{\text{\hskip 0.1ex $\Delta$}}^S_k$ is the smallest slope observed between two consecutive points until iteration $k$. 
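The estimator (<ref>) can be computed directly from the iteration history. A minimal Python sketch is given below; in practice $\bm{sol}(\bm{u}_{\ell})$ is simply the unfiltered model optimum $\bm{u}^{\star}_{\ell+1}$ computed at iteration $\ell$, and the list names are assumptions.
\begin{verbatim}
import numpy as np

def estimate_Delta_S(us, sols):
    # Smallest slope observed between consecutive iterates:
    # us = [u_0, ..., u_k], sols = [sol(u_0), ..., sol(u_k)].
    slopes = []
    for l in range(1, len(us)):
        step = np.asarray(us[l], float) - np.asarray(us[l - 1], float)
        diff = np.asarray(sols[l], float) - np.asarray(sols[l - 1], float)
        slopes.append(float(step @ diff) / np.linalg.norm(step) ** 2)
    return min(slopes)
\end{verbatim}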
Of course, there is no guarantee that $\utilde{\text{\hskip 0.1ex $\Delta$}}^S_k \leq \Underline{\Delta}^S$, but selecting the filter in this way can only increase the chances of convergence. So one can choose:
\begin{align}
K_k = \ & \max\left\{0.1, \min\left\{ \frac{1}{1- \utilde{\text{\hskip 0.1ex $\nabla$}}^S_{k}}, \ \frac{2}{1- \utilde{\text{\hskip 0.1ex $\Delta$}}^S_k } \right\}\right\} & & \text{if: } \utilde{\text{\hskip 0.1ex $\Delta$}}^S_k < 1 \text{ and } \utilde{\text{\hskip 0.1ex $\nabla$}}^S_{k} <1, \nonumber \\
= \ & \max\left\{0.1, \ \frac{2}{1- \utilde{\text{\hskip 0.1ex $\Delta$}}^S_k } \right\} & & \text{if: } \utilde{\text{\hskip 0.1ex $\Delta$}}^S_k < 1, \label{eq:2___44_OptimizationPB_ChoiceK_AvecAm}
\end{align}
or, if one also wants to keep the upper bound $K_k\leq 1$:
\begin{align}
K_k = \ & \max\left\{0.1,\ \min\left\{ 1, \ \frac{1}{1- \utilde{\text{\hskip 0.1ex $\nabla$}}^S_{k}}, \ \frac{2}{1- \utilde{\text{\hskip 0.1ex $\Delta$}}^S_k } \right\}\right\} & & \text{if: } \utilde{\text{\hskip 0.1ex $\Delta$}}^S_k < 1 \text{ and } \utilde{\text{\hskip 0.1ex $\nabla$}}^S_{k} <1, \nonumber \\
= \ & \max\left\{0.1, \ \min\left\{ 1, \ \frac{2}{1- \utilde{\text{\hskip 0.1ex $\Delta$}}^S_k } \right\} \right\} & & \text{if: } \utilde{\text{\hskip 0.1ex $\Delta$}}^S_k < 1. \label{eq:2___44_OptimizationPB_ChoiceK_AvecAm_2}
\end{align}
However, doing so is not really necessary, because the counterpart associated with this attempt to enforce superstability is a decrease in the positive effects of the adaptive filter illustrated in Example <ref>. The following example illustrates this last point:
If one restarts from the case $n_u=3$ of Example <ref> and applies the strategy (<ref>)-(<ref>) instead of (<ref>)-(<ref>), then the results of Figure <ref> are obtained. On the left, the magenta curve gives the $K_k$ obtained with (<ref>)-(<ref>) and those obtained with (<ref>)-(<ref>) are plotted in black. The gray area gives the upper bound for the choice of $K_k$, i.e. $2/(1-\utilde{\text{\hskip 0.1ex $\Delta$}}^S_k)$. The curve on the right gives the evolution of the distance between the iterates $(u_{(1)k}, u_{(2)k}, u_{(3)k})$ and the optimum $\bm{u}_p^{\star}$. Unlike in the previous example, the different $u_{(i)k}$ are not distinguished, since this would not add much here. Clearly, one can see that the addition of the bound $K_k\leq2/(1-\utilde{\text{\hskip 0.1ex $\Delta$}}^S_k)$ doubles the number of iterations needed for convergence, from 15 to 30.
figureStrategy (<ref>)-(<ref>) vs. Strategy (<ref>)-(<ref>).
§ MODIFIER-FILTER-CURVATURE ADAPTATION
If one applies the set of improvements discussed in the previous three subsections to ISO-D/I, one obtains two new methods that guarantee that the equilibrium and stability conditions are always satisfied. These two methods are called modifier-filter-curvature adaptation direct (MFCA-D) and modifier-filter-curvature adaptation indirect (MFCA-I); they are described hereafter.
Modifier-Filter-Curvature Adaptation Direct (MFCA-D)
Initialization. Provide $\bm{u}_0$, $\bm{K}_0(=0.1)$, $\bm{u}^*_0(=\bm{u}_0)$, functions $(\bm{f},\phi,\bm{g})$, and the stopping criterion of step 7).
for $k=0 \rightarrow \infty$
1) Measure $(\nabla_{\bm{u}}\phi_p, \bm{g}_p,\nabla_{\bm{u}}\bm{g}_p)|_{\bm{u}_{k}}$ on the plant.
2) Update the functions $(\phi,\bm{g})$ with (<ref>).
3) Make these cost and constraint functions convex at $\bm{u}_k$: (<ref>).
4) Compute $\bm{u}^\star_{k+1}$ with (<ref>).
5) Compute The filter with: \begin{align} K_k := \ & \left\{ \begin{array}{ll} K_o, & \text{if } k =0, \\ K_{k-1} & \text{if } \utilde{\text{\hskip 0.1ex $\nabla$}}^S_{k} < 1 \ \text{ and } \ k > 0 \ \text{ and } |\bm{u}_{k}-\bm{u}_{k-1}|\leq \bm{a}, \\ % & \multicolumn{1}{r}{ |\bm{u}_{k}-\bm{u}_{k-1}|\leq a,} \\ % \text{(\ref{eq:2___36_OptimizationPB_ChoiceK}),} \max\left\{K_o,\frac{1}{1-\utilde{\text{\hskip 0.1ex $\nabla$}}^S_{k}}\right\} & \text{if } \utilde{\text{\hskip 0.1ex $\nabla$}}^S_{k} < 1 \ \text{ and } \ k > 0 \ \text{ and } |\bm{u}_{k}-\bm{u}_{k-1}|> \bm{a}, \\ % & \multicolumn{1}{r}{\|\bm{u}^\star_{k+1}-\bm{u}^\star_{k}\|>a\ \text{ et } \|\bm{u}_{k}-\bm{u}_{k-1}\|>a,} \\ 1, & \text{otherwise.} \end{array} \right. \label{eq:2___45_LE_FILTRE}\\ \utilde{\text{\hskip 0.1ex $\nabla$}}_k^S := \ & \left( \frac{\bm{u}_k - \bm{u}_{k-1}}{\|\bm{u}_k - \bm{u}_{k-1}\|} \right)^{\rm T} \frac{\bm{u}^{\star}_{k+1} - \bm{u}^{\star}_{k}}{\| \bm{u}_{k} - \bm{u}_{k-1} \| }. \label{eq:2___59_Nabla_S} \end{align} 6) Compute $\bm{u}_{k+1} = \bm{u}_{k} + K_k(\bm{u}^*_{k+1}-\bm{u}_{k})$. 7) Stop if $\bm{u}_{k+1}\approx\bm{u}_{k}$ and return $\bm{u}_{\infty} := \bm{u}_{k+1}$. Modifier-Filter-Curvature Adaptation Indirect (MFCA-I)MFCAy Initialization. Provide $\bm{u}_0$, $\bm{K}_0(=0.1)$, $\bm{u}^*_0(=\bm{u}_0)$, functions $(\bm{f},\phi,\bm{g})$, and the stopping criterion of step 6). for $k=0 \rightarrow \infty$ 1) Measure $(\bm{f}_p,\nabla_{\bm{u}}\bm{f}_p)|_{\bm{u}_{k}}$ on the plant. Update the functions $(\bm{f},\phi,\bm{g})$ with (<ref>). 3) Make the cost and constraint functions convex at $\bm{u}_k$: (<ref>). 4) Compute $\bm{u}^\star_{k+1}$ with (<ref>). 5) Compute the filter $ K_k = \text{\eqref{eq:2___45_LE_FILTRE}}$. 6) Compute $\bm{u}_{k+1} = \bm{u}_{k} + K_k(\bm{u}^\star_{k+1}-\bm{u}_{k})$. 7) Stop if $\bm{u}_{k+1}\approx\bm{u}_{k}$ and return $\bm{u}_{\infty} := \bm{u}_{k+1}$. Where one can summarize the filter update strategy as follows: * If “$k=0$”, i.e. there is no previous iteration that can be used to estimate the gradient of the function $sol$. Then, the selected filter is the $K_o$. * If “$|\bm{u}_k-\bm{u}_{k-1}|\leq a$”, i.e. the size of the last step is less than or equal to the precision of the computer so one cannot estimate the gradient of the function $sol$. Then, the selected filter is the $K_{k-1}$. * If “$\utilde{\text{\hskip 0.1ex $\nabla$}}_k^S\geq 1$”, i.e. the estimated gradient of the function $sol$ is greater than $1$, so one is near a maximum or a saddle point of the plant, so applying a filter is useless. So, no filter needs to applied $K_k = 1$. § CONCLUSION This chapter starts from the formulation of the theoretical RTO problem. Then, the majority of the fundamental principles and methods of some of the most important contributions to theoretical RTO are rediscovered and improved in all their aspects, through a completely new progressive and didactic re-explanation. Indeed, thanks to a deep an intuitive explanation of how a filter works in the context of RTO, a new very efficient and meaningful filtering protocol has been introduced. However, while this chapter is mainly focused on the ability to converge on plant's minimums and the speed of this convergence, the next chapter provides answers on how to make this convergence more secure. 
CHAPTER: A MORE SECURE RTO ALGORITHM
§ AN OBSERVATION
The MFCA-D/I methods proposed at the end of the previous chapter are considered and the following thought experiment is performed:
Consider the case illustrated in Figure <ref>. This is a schematic representation of a 2-D optimization problem whose cost function $\phi_{k}$ is convex and whose constraint function $g_k$ is concave at $\bm{u}_k$. Therefore, if MFCA-D/I is applied, the constraint $g_k$ is linearized at $\bm{u}_k$ to obtain $g^c_k$. While $g^c_k$ is a relevant approximation of $g_k$ in the region of $\bm{u}_k$, this is clearly not true globally. In fact, a large area of the input space that was predicted to be infeasible becomes feasible. This presents the non-negligible risk that MFCA-D/I selects $\bm{u}_{k+1}^{\star}$, and possibly $\bm{u}_{k+1}$, in this infeasible area, as illustrated in Figure <ref>. This makes it more likely that $\bm{u}_{k+1}$ violates the constraints of the plant.
Illustration of the potential risk of convexifying locally concave constraints
§ AN IDEA
To counter this problem, an initial option would be to combine the optimization step (<ref>) with the filtering step (<ref>) in order to enforce $\bm{u}_{k+1}$ to be feasible according to the original updated model (i.e. the unconvexified one). This idea leads to two new RTO algorithms:
* Alg <ref>: MFCA-D with filter-based constraints (KMFCA-D),
* Alg <ref>: MFCA-I with filter-based constraints (KMFCA-I).
MFCA-D with filter-based constraints (KMFCA-D)
Initialization. Provide $\bm{u}_0$, $\bm{K}_0(=0.1)$, $\bm{u}^*_0(=\bm{u}_0)$, $\bm{a}$, functions $(\bm{f},\phi,\bm{g})$, and the stopping criterion of step 5).
for $k=0 \rightarrow \infty$
1) Measure $(\nabla_{\bm{u}}\phi_p, \bm{g}_p,\nabla_{\bm{u}}\bm{g}_p)|_{\bm{u}_{k}}$ on the plant.
2) Update $\phi$ and $\bm{g}$ with (<ref>).
3) Compute $\phi^c_k$ and $\bm{g}^c_k$ with (<ref>).
4) Compute $(\bm{u}^{\star}_{k+1},K_k,\bm{u}_{k+1})$ as follows:
\begin{align}
\bm{u}^{\star}_{k+1} := \ & \operatorname{arg} \underset{\bm{u}}{\operatorname{min}} \quad \phi^c_{k}(\bm{u}), \nonumber \\
& \quad \text{s.t.} \quad \bm{g}^c_{k}(\bm{u}) \leq \bm{0}, \nonumber \\
& \quad \phantom{\text{s.t.} \ } \ \bm{g}_{k}\big(\bm{u}_k + filt_k(\bm{u})(\bm{u} - \bm{u}_k )\big) \leq \bm{0}, \\
K_k := \ & filt_k(\bm{u}^{\star}_{k+1}), \\
\bm{u}_{k+1} := \ & \bm{u}_k + K_k (\bm{u}^{\star}_{k+1} - \bm{u}_k),
\end{align}
with
\begin{align}
filt_k(\bm{u}) := \ & \left\{
\begin{array}{ll}
K_o, & \text{if } k =0, \\
K_{k-1}, & \text{if } nab_k(\bm{u}) < 1, \ k > 0 \ \text{ and } \ |\bm{u}_k-\bm{u}_{k-1}| \leq \bm{a}, \\
\max\left\{K_o, \frac{1}{1-nab_k(\bm{u})}\right\}, & \text{if } nab_k(\bm{u}) < 1, \ k > 0 \ \text{ and } \ |\bm{u}_k-\bm{u}_{k-1}| > \bm{a}, \\
1, & \text{otherwise},
\end{array}
\right. \\
nab_k(\bm{u}) := \ & \frac{(\bm{u}_k - \bm{u}_{k-1})^{\rm T} (\bm{u} - \bm{u}^{*}_{k})}{\|\bm{u}_k - \bm{u}_{k-1}\|^2}.
\end{align}
5) Stop if $\bm{u}_{k+1}\approx\bm{u}_{k}$ and return $\bm{u}_{\infty} := \bm{u}_{k+1}$.
MFCA-I with filter-based constraints (KMFCA-I)
Initialization. Provide $\bm{u}_0$, $\bm{K}_0(=0.1)$, $\bm{u}^*_0(=\bm{u}_0)$, $\bm{a}$, functions $(\bm{f},\phi,\bm{g})$, and the stopping criterion of step 5).
for $k=0 \rightarrow \infty$
1) Measure $(\bm{f}_p,\nabla_{\bm{u}}\bm{f}_p)|_{\bm{u}_{k}}$ on the plant.
2) Update $\bm{f}$, $\phi$, $\bm{g}$ with (<ref>).
3) Compute $\phi^c_k$, $\bm{g}^c_k$ with (<ref>).
4) Compute $(\bm{u}^{\star}_{k+1},K_k,\bm{u}_{k+1})$ with (<ref>), (<ref>), and (<ref>).
5) Stop if $\bm{u}_{k+1}\approx\bm{u}_{k}$ and return $\bm{u}_{\infty} := \bm{u}_{k+1}$.
The functions $filt_k(\bm{u})$ and $nab_k(\bm{u})$ are the functional versions of (<ref>) and (<ref>). Their argument, $\bm{u}$, is the argument of the optimization problem (<ref>) and their outputs are what would be (<ref>) and (<ref>) if they were evaluated at $\bm{u}^{\star}_{k+1} = \bm{u}$.
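To make step 4 concrete, the following Python sketch solves the coupled problem with an off-the-shelf SQP solver. It is only a schematic illustration under stated assumptions: the callables phi_c, g_c, g and filt are assumed helpers implementing the functions above, scipy's SLSQP is used as a stand-in solver, and scipy's convention fun(u) >= 0 for inequality constraints is handled by negating the constraint functions.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def kmfca_input_update(phi_c, g_c, g, filt, u_k):
    # Sketch of step 4 of KMFCA-D/I: minimize the convexified cost subject to
    # (i) the convexified constraints and (ii) feasibility of the *filtered*
    # point u_k + filt(u)*(u - u_k) for the unconvexified updated constraints.
    u_k = np.asarray(u_k, float)
    cons = [
        {"type": "ineq", "fun": lambda u: -np.atleast_1d(g_c(u))},
        {"type": "ineq",
         "fun": lambda u: -np.atleast_1d(g(u_k + filt(u) * (np.asarray(u) - u_k)))},
    ]
    res = minimize(phi_c, x0=u_k, method="SLSQP", constraints=cons)
    u_star = res.x
    K_k = filt(u_star)                      # K_k := filt_k(u*_{k+1})
    return u_star, K_k, u_k + K_k * (u_star - u_k)
\end{verbatim}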
By construction, the KMFCA-D/I algorithms avoid the problem discussed earlier and illustrated in <ref>, as one can see in the following example:
(Illustration of the risks associated with the linearization of locally concave constraints)
One considers the following theoretical RTO problem:
\begin{align*}
\phi(u,\bm{y}) := \ & y_{(1)}, & y_{p(1)} := \ & 5 -14u +8u^2, \\
& & y_{(1)} := \ & 0.2 - 2u +5u^2, \\
g(u,\bm{y}) := \ & y_{(2)}, & y_{p(2)} := \ & -2 + 7 u - 29 u^2 + 32 u^3,\\
& & y_{(2)} := \ & -2 + 9 u - 35 u^2 + 40 u^3,
\end{align*}
where $u\in[0,1]$. Figure <ref> illustrates these functions.
figureCost and constraint functions of the plant and the model.
The minimum of the nominal model is $u_0=0.2$. This is therefore the point at which one initializes the MFCA and KMFCA. The simulation results are shown in Figure <ref>.
figureSimulation results (legend: MFCA, KMFCA, infeasible area, optimum).
One can observe that KMFCA provides much better performance than MFCA. Indeed, MFCA significantly violates the constraints, and if one observes its curve $log_{10}(|u_k-u_p^{\star}|)$, one can see that it is roughly shifted by five iterations to the right. To explain these observations, a detailed analysis is presented in Figures <ref> and <ref>.
figureDetails of MFCA iterations (legend: $\phi_k(u)$, $g^c_k(u)$, $g_k(u)$, unused $g_k(u)$, $u_k \rightarrow u_{k+1}$, $u^{\star}_{k+1}$).
Figure <ref> clearly illustrates the problem that MFCA faces. At iterations $k=0$ and $k=1$ the function $g_k$ is concave at $u_k$ and is therefore linearized, hence the magenta and light red curves for these two iterations. Since $g_k$ is ignored in favor of $g^c_k$, MFCA can aim at the points $u_1^{\star}$ and $u_2^{\star}$ that are not feasible according to $g_k$ and thus generate risky iterations. Hence the large constraint violation. On the other hand, when KMFCA is used, although the constraint $g_k$ is linearized for the choice of $u_1^{\star}$ and $u_2^{\star}$, its satisfaction is imposed at the points $u_{1}$ and $u_2$; hence the better management of constraints and the absence of violations.
figureDetails of KMFCA iterations (Legend: see Figure <ref>).
However, to validate these improvements, one needs evidence that the satisfaction of the equilibrium and stability conditions is not lost.
§.§ Effects on the equilibrium condition
To show that the satisfaction of the equilibrium condition is not lost, an analysis of the geometrical properties of Problems (<ref>) and (<ref>) updated at $\bm{u}_k$ in the vicinity of $\bm{u}_k$ is required. Indeed, it is clear that the fact that (<ref>) has twice as many constraints as the plant (<ref>) implies that, globally, the feasible domains of these two optimization problems are different, i.e.:
\begin{align*}
\bm{g}_p(\bm{u}) \leq \ & \bm{0}, & & \not\Leftrightarrow & \left( \begin{array}{c} \bm{g}^c_k(\bm{u}) \\ \bm{g}_k(\bm{u}_k + filt_k(\bm{u})(\bm{u}-\bm{u}_k)) \end{array} \right) \leq \ & \bm{0}, \quad \forall \bm{u}\in\amsmathbb{R}^{n_u}.
\end{align*}
However, this does not mean that around $\bm{u}_k$ they are not similar.
In the previous chapter it was shown that, by forcing these two problems to have similarities at $\bm{u}_k$ (see (<ref>) and (<ref>)), the validity of the conditions of equilibrium and stability of MFCA-D/I can be enforced. So, one chooses to analyze these two optimization problems at $\bm{u}_k$ through their cones of feasible directions (CFD) at that point. These two cones, called $\text{CFD}_{\text{p,k}}$ for the plant and $\text{CFD}_{\text{k}}$ for the updated model, can be defined mathematically as follows:
\begin{align}
\text{CFD}_{\text{p,k}} := \ & \left\{ \bm{d}\in\amsmathbb{R}^{n_u} \ | \ \bm{g}_p(\bm{u}_k)\leq \bm{0}, \ \nabla_{\bm{u}}g_{p(i)}|_{\bm{u}_k}^{\rm T}\bm{d}<0 \text{ if } g_{p(i)}|_{\bm{u}_k} = 0, \forall i \right\}, \label{eq:3___2_CFDk}\\
\text{CFD}_{\text{k}} := \ & \big\{ \bm{d}\in\amsmathbb{R}^{n_u} \ | \ \bm{g}_k(\bm{u}_k)\leq \bm{0}, \ \nabla_{\bm{u}}g^c_{k(i)}|_{\bm{u}_k}^{\rm T}\bm{d}<0 \text{ if } g^c_{k(i)}|_{\bm{u}_k} = 0, \text{ and } ... \nonumber \\
\ \ \nabla_{\bm{u}}g_{k(i)}(\bm{u}_k + filt_k(\bm{u}) (\bm{u}-\bm{u}_k))|_{\bm{u}_k}^{\rm T}\bm{d}<0 \text{ if } g_{k(i)}(\bm{u}_k + filt_k(\bm{u}) ... \nonumber \\
& \ \ (\bm{u}-\bm{u}_k))|_{\bm{u}_k} = 0, \forall i \big\},\label{eq:3___2_CFDk_prime}
\end{align}
and they can be geometrically interpreted as the set of directions along which one can move, starting from a feasible point $\bm{u}_k$, while remaining in the feasible domain. If one can prove that $\text{CFD}_{\text{p,k}} = \text{CFD}_{\text{k}}$, then it would mean that, locally, the feasible and non-feasible areas of (<ref>) and (<ref>) are the same, i.e.:
\begin{align}
\label{eq:3___8_MemesCones_equivalence}
\bm{g}_p(\bm{u}) \leq \ & \bm{0}, & & \Leftrightarrow & \left( \begin{array}{c} \bm{g}^c_k(\bm{u}) \\ \bm{g}_k(\bm{u}_k + filt_k(\bm{u})(\bm{u}-\bm{u}_k)) \end{array} \right) \leq \ & \bm{0}, \quad \forall \bm{u}\in\mathcal{B}(\bm{u}_k,r\rightarrow 0).
\end{align}
With the following Lemma, it is shown that this is always the case.
Consider the optimization problem of the plant given by (<ref>) and the one of the model updated at $\bm{u}_k$ given by (<ref>), such that the equalities (<ref>) and (<ref>) are true. Then:
* The cone of feasible directions of (<ref>) at $\bm{u}_k$: $\text{CFD}_\text{p,k} := \text{\eqref{eq:3___2_CFDk}}$, and the cone of feasible directions of (<ref>) at $\bm{u}_k$: $\text{CFD}_\text{k} := \text{\eqref{eq:3___2_CFDk_prime}}$, are equal:
\begin{equation}
\text{CFD}_\text{p,k} = \text{CFD}_\text{k}.
\end{equation}
* If Problem (<ref>) is LICQ at $\bm{u}_k$, then Problem (<ref>) is MFCQ (Mangasarian-Fromovitz Constraint Qualification) at $\bm{u}_k$.
One starts by proving the first statement. To do so, the following three observations are made:
\begin{align}
\label{eq:3___5_a}
g_{k(i)}\big(\bm{u}_k + \underbrace{filt_k(\bm{u})(\bm{u}-\bm{u}_k)}_{=\bm{0} \text{ at } \bm{u}=\bm{u}_k}\big)\Big|_{\bm{u}_k} = g_{k(i)}|_{\bm{u}_k}.
\end{align}
\begin{align}
& \nabla_{\bm{u}}g_{k(i)}(\bm{u}_k + filt_k(\bm{u})(\bm{u}-\bm{u}_k))|_{\bm{u}_k} \nonumber \\
& \qquad \qquad \qquad \qquad = \left( \nabla_{\bm{u}}filt_k|_{\bm{u}_k}(\bm{u}_k-\bm{u}_k) \hspace{-1mm} + \hspace{-1mm} filt_k|_{\bm{u}_k} \right) \nabla_{\bm{u}} g_{k(i)}|_{\bm{u}_k}, \nonumber \\
& \qquad\qquad \qquad \qquad = filt_k|_{\bm{u}_k} \nabla_{\bm{u}} g_{k(i)}|_{\bm{u}_k}.
\label{eq:3___5_b} \end{align} \begin{align} &\text{\eqref{eq:3___21}} \Rightarrow & \bm{g}_k|_{\bm{u}_{k}} = \ & \bm{g}^c_k|_{\bm{u}_{k}}, & \nabla_{\bm{u}} \bm{g}_k|_{\bm{u}_{k}} = \ & \nabla_{\bm{u}}\bm{g}^c_k|_{\bm{u}_{k}}, \label{eq:3___5_c} \end{align} By combining (<ref>), (<ref>), and (<ref>), and since by construction $filt_k|_{\bm{u}_k}>0$, the following is obtained: \begin{align} \begin{array}{r@{}lcr@{}l} g^c_{k(i)}|_{\bm{u}_k} = \ & 0, & \Rightarrow & g_{k(i)}(\bm{u}_k + filt_k(\bm{u})(\bm{u}-\bm{u}_k))|_{\bm{u}_k}= \ & 0 , \\ \nabla_{\bm{u}} g_{k(i)}|_{\bm{u}_k}^{\rm T}\bm{d} <\ & 0, & \Rightarrow & \nabla_{\bm{u}}g_{k(i)}(\bm{u}_k + filt_k(\bm{u})(\bm{u}-\bm{u}_k))|_{\bm{u}_k}^{\rm T}\bm{d} <\ & 0. \end{array} \nonumber \end{align} So the definition (<ref>) of $\text{CFD}_{\text{k}}$ can be reduced to: \begin{align} \text{CFD}_{\text{k}} = \big\{ \bm{d}\in\amsmathbb{R}^{n_u} \ | \ \nabla_{\bm{u}}g^c_{k(i)}|_{\bm{u}_k}^{\rm T}\bm{d}<0 \text{ if } g^c_{k(i)}|_{\bm{u}_k} = 0, \forall i \big\}. \label{eq:3___6_c} \end{align} As equalities (<ref>) are assumed to be true, i.e. $g^c_{k(i)}|_{\bm{u}_k} = \ g_{p(i)}|_{\bm{u}_k}$ and $\nabla_{\bm{u}}g^c_{k(i)}|_{\bm{u}_k} = \nabla_{\bm{u}}g_{p(i)}|_{\bm{u}_k}$, the definition of $\text{CFD}_{\text{k}}$, given by (<ref>), can be reformulated as: \begin{align} \text{CFD}_{\text{k}} = \big\{ \bm{d}\in\amsmathbb{R}^{n_u} \ | \ \nabla_{\bm{u}}g_{p(i)}|_{\bm{u}_k}^{\rm T}\bm{d}<0 \text{ if } g_{p(i)}|_{\bm{u}_k} = 0, \forall i \big\} = \text{CFD}_{\text{p,k}}. \label{eq:3___6_d} \end{align} This proves the first statement of the lemma. Now one can easily prove the second statement. To prove that Problem (<ref>) is MFCQ at $\bm{u}_k$ for all $\bm{u}_k\in\amsmathbb{R}^{n_u}$, one must show that $\forall\bm{u}_k\in\amsmathbb{R}^{n_u}:$ $\text{CFD}_{\text{k}} \neq \emptyset$. By definition, the CFD of a LICQ problem is never empty [15]. So, if (<ref>) is LICQ at $\bm{u}_k$, then $\text{CFD}_{\text{p,k}}\neq \emptyset$. As, according to (<ref>), $\text{CFD}_{\text{p,k}}= \text{CFD}_{\text{k}}$, one can conclude that $\text{CFD}_{\text{k}}\neq \emptyset$. So Problem (<ref>) is always MFCQ at $\bm{u}_k$. On the basis of this local geometric similarity between Problem (<ref>) updated at $\bm{u}_k$ and Problem (<ref>), it can be shown that KMFCA-D/I can only converge to a KKT point of the plant, see Theorem <ref>. Also, the equilibrium condition of KMFCA-D/I can be identified, see Theorem <ref>. If KMFCA-D/I converges to the limit value $\bm{u}_\infty := \underset{k\rightarrow \infty}{\lim} \bm{u}_k$, then $\exists \bm{\lambda} \in\amsmathbb{R}^{2n_g}$ such that $(\bm{u}_\infty, \bm{\lambda})$ is a KKT point of the model updated at $\bm{u}_\infty$, and such that $(\bm{u}_\infty, \bm{\lambda}_p)$ is a KKT point of the plant with $\bm{\lambda}_p = \bm{\lambda}^{\prime} + filt_{\infty}|_{\bm{u}_{\infty}}\bm{\lambda}^{\prime\prime}$, where $\bm{\lambda}^{\prime},\bm{\lambda}^{\prime\prime}\in\amsmathbb{R}^{n_g}$ and $ \bm{\lambda} := [ \bm{\lambda}^{\prime \rm T}, \ \bm{\lambda}^{\prime\prime \rm T}]^{\rm T}$. Thanks to Lemma <ref>, one knows that Problem (<ref>) is MFCQ at $\bm{u}_k$ $\forall k$, and therefore at $\bm{u}_\infty$.
So, according to [15]: \begin{align} & \bm{u}_{\infty} := \operatorname{arg} \underset{\bm{u}}{\operatorname{min}} \quad \phi_{\infty}(\bm{u}) % \quad \text{s.t.} \quad \bm{g}^c_\infty(\bm{u}) \leq \bm{0}, \nonumber \\ & \phantom{ \bm{u}_{\infty} := \operatorname{arg} \underset{\bm{u}}{\operatorname{min}} \quad \phi_{\infty}(\bm{u}) % \quad \text{s.t.} \quad} \bm{g}_\infty\big(\bm{u}_\infty + filt_\infty(\bm{u})(\bm{u}-\bm{u}_\infty)\big) \leq \bm{0}, \label{eq:3___7}\\ \nonumber \\ \hspace{-0.1mm} \Rightarrow & \exists \bm{\lambda} \in \amsmathbb{R}^{2n_g}: \left\{ \begin{array}{l} \left( \begin{array}{c} \bm{g}^c_\infty|_{\bm{u}_{\infty}} \\ \bm{g}_\infty(\bm{u}_\infty + filt_\infty(\bm{u})(\bm{u}-\bm{u}_\infty))|_{\bm{u}_{\infty}} \end{array} \right) \leq \bm{0}, \\ \bm{\lambda}^{\rm T} \left( \begin{array}{c} \bm{g}^c_\infty|_{\bm{u}_{\infty}} \\ \bm{g}_\infty(\bm{u}_\infty + filt_\infty(\bm{u})(\bm{u}-\bm{u}_\infty))|_{\bm{u}_{\infty}} \end{array} \right) = 0, \\ \bm{\lambda} \geq\bm{0}, \\ \nabla_{\bm{u}} \phi_{\infty}|_{\bm{u}_{\infty}} + \bm{\lambda}^{\rm T} \left( \hspace{-1mm} \begin{array}{c} \nabla_{\bm{u}} \bm{g}^c_{\infty}|_{\bm{u}_{\infty}} \\ \nabla_{\bm{u}} \bm{g}_\infty(\bm{u}_\infty + filt_\infty(\bm{u})(\bm{u}-\bm{u}_\infty))|_{\bm{u}_{\infty}} \end{array} \hspace{-1mm} \right) = \bm{0}, \end{array} \right. \label{eq:3___8} \end{align} where $\bm{\lambda} := (\bm{\lambda}^{\prime \rm T}, \bm{\lambda}^{\prime\prime\rm T})^{\rm T} \in \amsmathbb{R}^{2n_g}$, $\bm{\lambda}^{\prime} \in \amsmathbb{R}^{n_g}$ are the Lagrange multipliers associated to the constraints $\bm{g}^c_{\infty}|_{\bm{u}_{\infty}}$, and $\bm{\lambda}^{\prime\prime} \in \amsmathbb{R}^{n_g}$ are the Lagrange multipliers associated with the constraints $\bm{g}_{\infty}(\bm{u}_\infty + filt_\infty(\bm{u})(\bm{u}-\bm{u}_\infty))|_{\bm{u}_{\infty}}$. Noting that: \begin{align} \bm{g}_\infty\big(\bm{u}_\infty + filt_\infty(\bm{u})(\bm{u}-\bm{u}_\infty)\big)\big|_{\bm{u}_{\infty}} = \ & \bm{g}_\infty|_{\bm{u}_{\infty}}, \label{eq:3___9_kbrgcs}\\ \nabla_{\bm{u}} \bm{g}_\infty\big(\bm{u}_\infty + filt_\infty(\bm{u})(\bm{u}-\bm{u}_\infty)\big)\big|_{\bm{u}_{\infty}} = \ & \big(\nabla_{\bm{u}} filt_\infty|_{\bm{u}_{\infty}} (\bm{u}_{\infty} - \bm{u}_{\infty}) + ... \nonumber \\ & filt_\infty|_{\bm{u}_{\infty}}\big) \nabla_{\bm{u}} \bm{g}_\infty|_{\bm{u}_{\infty}}, \nonumber\\ = \ &filt_\infty|_{\bm{u}_{\infty}}\nabla_{\bm{u}} \bm{g}_\infty|_{\bm{u}_{\infty}}. \label{eq:3___10_iuachngb} \end{align} and that \begin{align} \label{eq:3___15_sgljxbhv} &\text{\eqref{eq:3___21}} \Rightarrow & \bm{g}_\infty|_{\bm{u}_{\infty}} = \ & \bm{g}^c_\infty|_{\bm{u}_{\infty}}, & \nabla_{\bm{u}} \bm{g}_\infty|_{\bm{u}_{\infty}} = \ & \nabla_{\bm{u}}\bm{g}^c_\infty|_{\bm{u}_{\infty}}, \end{align} system (<ref>) can be re-written as follows: \begin{equation} \label{eq:3___9} \left. \begin{array}{c} \text{\eqref{eq:3___8}} \\ \text{\eqref{eq:3___9_kbrgcs}} \\ \text{\eqref{eq:3___10_iuachngb}}\\ \text{\eqref{eq:3___15_sgljxbhv}} \end{array} \right\} \Rightarrow \exists \bm{\lambda} \in \amsmathbb{R}^{2n_g}: \left\{ \begin{array}{l} \bm{g}_\infty|_{\bm{u}_{\infty}} \leq \bm{0}, \\ \big(\bm{\lambda}^{\prime} + \bm{\lambda}^{\prime\prime}\big)^{\rm T} \bm{g}_\infty|_{\bm{u}_{\infty}} = 0, \\ \bm{\lambda} \geq\bm{0}, \\ \nabla_{\bm{u}} \phi_{\infty}|_{\bm{u}_{\infty}} + \big(\bm{\lambda}^{\prime} + \bm{\lambda}^{\prime\prime}\big)^{\rm T} \nabla_{\bm{u}} \bm{g}_{\infty}|_{\bm{u}_{\infty}} = \bm{0}. \end{array} \right. 
\end{equation} In addition, as $\bm{g}_\infty|_{\bm{u}_{\infty}} \leq \bm{0}$, $\bm{\lambda} \geq\bm{0}$ and $filt_\infty|_{\bm{u}_{\infty}}>0$: \begin{align} \big(\bm{\lambda}^{\prime} + \bm{\lambda}^{\prime\prime}\big)^{\rm T} \bm{g}_\infty|_{\bm{u}_{\infty}} = \ & 0, & \Leftrightarrow & & \sum_{i=1}^{n_g} \lambda^{\prime}_{(i)} g_{\infty(i)}|_{\bm{u}_{\infty}} + \lambda^{\prime\prime}_{(i)} g_{\infty(i)}|_{\bm{u}_{\infty}} = \ & 0, \nonumber \\ & & \Leftrightarrow & & \lambda^{\prime}_{(i)} g_{\infty(i)}|_{\bm{u}_{\infty}} = \lambda^{\prime\prime}_{(i)} g_{\infty(i)}|_{\bm{u}_{\infty}} = \ & 0, \ \forall i, \nonumber \\ & & \Leftrightarrow & & \lambda^{\prime}_{(i)} g_{\infty(i)}|_{\bm{u}_{\infty}} = filt_\infty|_{\bm{u}_{\infty}} \lambda^{\prime\prime}_{(i)} g_{\infty(i)}|_{\bm{u}_{\infty}} = \ & 0, \ \forall i, \nonumber \\ & & \Leftrightarrow & & \big(\bm{\lambda}^{\prime} + filt_\infty|_{\bm{u}_{\infty}} \bm{\lambda}^{\prime\prime}\big)^{\rm T} \bm{g}_\infty|_{\bm{u}_{\infty}} = \ & 0. \label{eq:3___10_b} \end{align} So, (<ref>) can be rewritten: \begin{equation} \label{eq:3___10} \left. \begin{array}{c} \text{\eqref{eq:3___9}} \\ \text{\eqref{eq:3___10_b}} \end{array} \right\} \Rightarrow \exists \bm{\lambda} \in \amsmathbb{R}^{2n_g}: \left\{ \begin{array}{l} \bm{g}_\infty|_{\bm{u}_{\infty}} \leq \bm{0}, \\ \big(\bm{\lambda}^{\prime} + filt_\infty|_{\bm{u}_{\infty}} \bm{\lambda}^{\prime\prime}\big)^{\rm T} \bm{g}_\infty|_{\bm{u}_{\infty}} = 0, \\ \bm{\lambda} \geq\bm{0}, \\
# Uniquely hamiltonian graphs for many sets of degrees Gunnar Brinkmann and Matthias De Pauw Ghent University, TWIST Krijgslaan 281 S9 B9000 Ghent, Belgium <EMAIL_ADDRESS><EMAIL_ADDRESS> ###### Abstract We give constructive proofs for the existence of uniquely hamiltonian graphs for various sets of degrees. We give constructions for all sets with minimum $2$ (a trivial case added for completeness), all sets with minimum $3$ that contain an even number (for sets without an even number it is known that no uniquely hamiltonian graphs exist), and all sets with minimum $4$ and at least two elements, where all degrees different from $4$ are at least $10$. For minimum degree $3$ and $4$, the constructions also give 3-connected graphs. We also introduce the concept of seeds, which makes the above results possible and might be useful in the study of Sheehan’s conjecture. Furthermore we prove that $3$-connected uniquely hamiltonian $4$-regular graphs exist if and only if $2$-connected uniquely hamiltonian $4$-regular graphs exist. ## 1 Introduction The most important problem for hamiltonian cycles is of course which properties guarantee the existence of a hamiltonian cycle, but as soon as the existence of a hamiltonian cycle is known, the question arises how many hamiltonian cycles exist. In [4], recent results and an overview of older results on graphs with few hamiltonian cycles are given. The extremal case is when a graph contains a single hamiltonian cycle, that is: it is uniquely hamiltonian. A crucial role for the existence of a uniquely hamiltonian graph is played by the combination of vertex degrees present in the graph. Already in 1946 Tutte reported a result by Smith that uniquely hamiltonian cubic graphs don’t exist [9]. A long standing conjecture by Sheehan [7] states that this should in fact be the case for all $d$-regular graphs with $d>2$. The result by Smith was later improved by Thomason [8] showing that uniquely hamiltonian graphs where all vertices have odd degree don’t exist. In [5] it is shown that no $d$-regular uniquely hamiltonian graphs exist if $d\geq 23$. So while there are e.g. neither uniquely hamiltonian graphs with all degrees 3 nor with all degrees 24, a special case of what we will prove will be that there are uniquely hamiltonian graphs if both these vertex degrees are allowed. For even $d$ with $4\leq d\leq 22$ it is not known whether $d$-regular uniquely hamiltonian graphs exist. In [3] Fleischner shows that there are uniquely hamiltonian graphs with minimum degree 4. He constructs graphs with vertices of degree $4$ and $14$ and graphs where the maximum degree can grow even larger – without specifying which degrees can occur. We will use an improved version of his method to prove that for all sets $M$ with at least two elements, containing a $4$ and otherwise only numbers $d\geq 10$, uniquely hamiltonian graphs exist, so that the set of vertex degrees is exactly $M$. Furthermore we characterize sets of degrees with minimum $2$ or $3$ for which uniquely hamiltonian graphs exist completely. The term graph always refers to a simple undirected graph, that is: without multiple edges and without loops. If multiple edges are allowed, we use the term multigraph. Loops are never allowed, as they are trivial in the context of uniquely hamiltonian graphs. We define the degree set $M_{deg}(G)$ of a graph (or multigraph) $G$ with vertex set $V$ as $M_{deg}(G)=\\{\deg(v)\mbox{ }|v\in V\\}$. 
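Both notions at play here, the degree set and the number of hamiltonian cycles, are easy to check by brute force on small graphs. The following is a minimal sketch added for illustration (it is not the authors' programs mentioned later); a graph is assumed to be given as an adjacency dictionary, and the cycle count is by plain enumeration, so it is only usable for very small examples.

```python
# Minimal brute-force sketch: degree set M_deg(G) and number of hamiltonian
# cycles of a small graph G given as an adjacency dict of neighbour sets.
from itertools import permutations

def degree_set(adj):
    return {len(nbrs) for nbrs in adj.values()}

def count_hamiltonian_cycles(adj):
    verts = sorted(adj)
    start, rest = verts[0], verts[1:]
    count = 0
    for perm in permutations(rest):
        cycle = [start, *perm]
        if all(cycle[i + 1] in adj[cycle[i]] for i in range(len(cycle) - 1)) \
                and start in adj[cycle[-1]]:
            count += 1
    return count // 2        # each undirected cycle is seen once per direction

# Example: K_4 has degree set {3} and exactly 3 hamiltonian cycles,
# so it is hamiltonian but not uniquely hamiltonian.
K4 = {v: {w for w in range(4) if w != v} for v in range(4)}
print(degree_set(K4), count_hamiltonian_cycles(K4))   # {3} 3
```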
For a set $M=\\{d_{0},d_{1},d_{2},\dots,d_{k}\\}$ with $d_{0}<d_{1}<\dots<d_{k}$, we say that a $2$-connected (if $2\in M$), resp. $3$-connected (otherwise) uniquely hamiltonian graph $G$ realizes $M$ if $M_{deg}(G)=M$. If such a $G$ exists, we define $M$ to be uhc-realizable. Next to the question whether a set $M$ is uhc-realizable, it is also interesting which role is played by the larger degrees. Our emphasis is on the smallest degree $d_{0}$ and we want to know whether the number of times that the degrees $d_{1},\dots,d_{k}$ occur can be bounded by a constant even for very large graphs, so that the average degree can be arbitrarily close to the smallest degree. On the other hand it might also be interesting to know, whether the larger degrees can occur an unbounded number of times and maybe also occur for at least a fixed fraction of the vertices also in arbitrarily large graphs. The average degree would in that case be bounded from below by the minimum degree times a constant factor $c>1$. The strongest requirement is, if both can occur and even in combination depending on the $d_{i}$. We formalize that by the following definition: For a set $M=\\{d_{0},d_{1},d_{2},\dots,d_{k}\\}$ with $k>0$, $d_{0}<d_{1}<\dots<d_{k}$ we say that $M$ is strongly uhc-realizable, if for each partition $D_{1},D_{2}$ of $\\{d_{1},\dots,d_{k}\\}$ (with one of $D_{1},D_{2}$ possibly empty) there are constants $c_{1}\in\mathbb{N},c_{2}\in\mathbb{R}$, $c_{2}>0$, and an infinite sequence of graphs $G_{i}=(V_{i},E_{i})$ realizing $M$, so that for all $d\in D_{1}$ each $G_{i}$ has at most $c_{1}$ vertices of degree $d$, and for each $d^{\prime}\in D_{2}$ each $G_{i}$ has at least $c_{2}|V_{i}|$ vertices of degree $d^{\prime}$. ## 2 Minimum degree 2 or 3 We will start with an easy remark that is mainly contained for completeness: ###### Remark 2.1 Any finite set $M=\\{d_{0}=2,d_{1},d_{2},\dots,d_{k}\\}\subset\mathbb{N}$ with $2<d_{1}<d_{2}<\dots<d_{k}$ is uhc-realizable and if $k>0$, it is also strongly uhc-realizable. Proof: We will first prove that $M$ is uhc-realizable. $|M|=1$ is trivial. If $|M|=2$, one can take $K_{d_{1}+1}$ and subdivide the edges of a hamiltonian cycle. If $|M|>2$ one can e.g. take complete graphs $K_{d_{1}+1},\dots K_{d_{k}+1}$, remove an edge $e_{d_{i}}\in K_{d_{i}+1}$ for $1\leq i<k$, an edge $e^{\prime}_{d_{i}}\in K_{d_{i}+1}$ for $2\leq i\leq k$ with $e_{d_{i}}\cap e^{\prime}_{d_{i}}=\emptyset$ for $2\leq i<k$, and then connect the endpoints of $e_{d_{i}}$ and $e^{\prime}_{d_{i+1}}$ for $1\leq i<k$. The result is obviously hamiltonian and 2-connected and after subdividing the edges of a hamiltonian cycle, one has a uniquely hamiltonian graph with exactly the vertex degrees in $M$. To show that $M$ is strongly uhc-realizable, assume a partition $D_{1},D_{2}$ to be given. If $D_{2}=\emptyset$ one can subdivide edges on the hamiltonian cycle arbitrarily often to obtain the sequence of graphs. If $D_{2}\not=\emptyset$ one can use the above construction for multisets $M^{\prime}_{j}$ containing the same elements as $M$, but numbers in $D_{1}$ exactly once and numbers in $D_{2}$ exactly $j$ times. ## 3 Minimum degree 3 and 4 The following construction is a slight modification of a construction by H. Fleischner [3]. Let $P=(V,E)$ be a graph and $s,t,v\in V$ be vertices. If there is a unique hamiltonian path from $s$ to $t$ in the graph $P_{-v}=P[V\setminus\\{v\\}]$ induced by $V\setminus\\{v\\}$, we call ${\cal P}=(P,s,t,v)$ a weak H-plugin or just an H-plugin. 
If in addition there is no hamiltonian path from $s$ to $t$ in $P$ (so also containing $v$), we call ${\cal P}=(P,s,t,v)$ a strong H-plugin. In cases where $s,t,$ and $v$ are clear from the context, we will also refer to the graph $P$ alone as an H-plugin. For an H-plugin $(P,s,t,v)$ and a graph $G$ with a vertex $x$ of degree $3$ with neighbours $y,x_{1},x_{2}$, we define the P-splice of $G$ at $\\{x,y\\}$, denoted as $O(x,y,{\cal P})$ as the graph obtained by removing $x$, connecting $x_{1}$ with the vertex $s$ in a copy $P^{\prime}$ of $P$, $x_{2}$ with the vertex $t$ in $P^{\prime}$ and identifying the vertex $v$ in $P^{\prime}$ with $y$. This operation is sketched in Figure 1. We will also refer to it shortly as splicing the edge $\\{x,y\\}$. The notation $O(x,y,{\cal P})$ does not take into account which of the vertices is $x_{1}$ and which is $x_{2}$, so in general $O(x,y,{\cal P})$ is one of the two possibilities. Elementary arguments show that if $P$ – or at least $P$ together with a new vertex connected to $s,t$, and $v$ – as well as $G$ are 3-connected, then $O(x,y,{\cal P})$ is 3-connected. Figure 1: The splicing operation. The following lemma and corollary are stronger versions of Lemmas 1,2, and 3 in [3]. ###### Lemma 3.1 (parts already in [3]) Let $G=(V,E)$ be a graph with a unique hamiltonian cycle $C_{H}$, $x\in V$ of degree $3$ with neighbour $y$, so that the edge $\\{x,y\\}$ is not on $C_{H}$. Let ${\cal P}=(P,s,t,v)$ be an H-plugin. If at least one of the following three conditions is fulfilled, then $O(x,y,{\cal P})$ has a unique hamiltonian cycle $C_{H,O}$. Except for the edges incident with $x$, all edges of $C_{H}$ are also contained in $C_{H,O}$. (i) $G[V\setminus\\{y\\}]$ is not hamiltonian. (ii) $\\{x,y\\}$ lies in a triangle. (iii) ${\cal P}$ is a strong H-plugin. Condition (iii) also explains the name strong H-plugin: while in general the splicing of edges that are not on the unique hamiltonian cycle only guarantees a unique hamiltonian cycle in the result if the edges satisfy some extra condition, this extra condition is not necessary if ${\cal P}$ is strong. Proof: As $s$ and $t$ have only one edge to the outside of (the copy of) $P$ in $O(x,y,{\cal P})$, none of them can be incident only with edges of a hamiltonian cycle $C_{H,O}$ of $O(x,y,{\cal P})$ that lie outside $P$. To this end there are in principle three ways how $C_{H,O}$ could pass through $P$: a.) by a hamiltonian path of $P_{-v}$ from $s$ to $t$ while the vertex $v=y$ is incident to two edges of $C_{H,O}$ not in $P$, b.) by a hamiltonian path of $P$ from $v$ to $s$ or to $t$, c.) by a hamiltonian path of $P$ from $s$ to $t$. In all three cases (i), (ii), and (iii) of the lemma, we can get a hamiltonian cycle of $O(x,y,{\cal P})$ passing $P$ like described in a.) if we replace the part $x_{1},x,x_{2}$ in $C_{H}$ by $x_{1},s,\dots,t,x_{2}$ with the middle part the unique hamiltonian path from $s$ to $t$ in $P_{-v}$. So there is always a hamiltonian cycle for case a.), but that cycle is unique due to the two paths in $P_{-v}$ and outside $P_{-v}$ being unique. Assume now that $O(x,y,{\cal P})$ has a hamiltonian cycle passing $P$ as in case b.) and assume w.l.o.g. that the endpoint is $s$. Replacing the part $y=v,\dots,s,x_{1}$ by $y,x,x_{1}$, we get a hamiltonian cycle of $G$ containing $\\{x,y\\}$, which does not exist, as $C_{H}$ is unique. So a hamiltonian cycle falling into case b.) does not exist. It remains to be shown that also case c.) can not occur under the additional prerequisites. 
(i) Assume that $O(x,y,{\cal P})$ has a hamiltonian cycle passing $P$ as in case c.). Replacing the part $x_{1},s,\dots,t,x_{2}$ (now also containing $v=y$) by $x_{1},x,x_{2}$, we get a cycle in $G$ missing only $y$ – that is: a hamiltonian cycle of $G[V\setminus\\{y\\}]$, which by assumption does not exist. (ii) This is a special case of (i). Assume that $G[V\setminus\\{y\\}]$ contains a hamiltonian cycle $C^{\prime}_{H}$. Then $C^{\prime}_{H}$ passes $x$ by $x_{1},x,x_{2}$, but replacing this part by $x_{1},y,x,x_{2}$ or $x_{1},x,y,x_{2}$ – depending on whether the triangle is $x_{1},x,y$ or $x_{2},x,y$ – we get a hamiltonian cycle of $G$ containing $\\{x,y\\}$, which does not exist, as $C_{H}$ is unique. (iii) In this case the prerequisites are exactly that a path as in c.) does not exist. ###### Corollary 3.2 (parts already in [3]) Let $G=(V,E)$ be a graph with a unique hamiltonian path $P_{H}$ from $s\in V$ to $t\in V$. Assume $x\in V$, $x\not\in\\{s,t\\}$ is of degree $3$ with neighbour $y$, so that the edge $\\{x,y\\}$ is not on $P_{H}$. Let ${\cal P}=(P,s^{\prime},t^{\prime},v)$ be an H-plugin. If at least one of the following four conditions is fulfilled, then $O(x,y,{\cal P})$ has a unique hamiltonian path $P_{H,O}$ from $s$ to $t$. Except for the edges incident with $x$, all edges of $P_{H}$ are also contained in $P_{H,O}$. (i) $y\not\in\\{s,t\\}$, and $G[V\setminus\\{y\\}]$ has no hamiltonian path from $s$ to $t$. (ii) $\\{x,y\\}$ lies in a triangle. (iii) ${\cal P}$ is a strong H-plugin. (iv) $y\in\\{s,t\\}$. Proof: Adding a new vertex to $G$ and connecting it with $s$ and $t$, the resulting graph $G^{\prime}$ has a unique hamiltonian cycle if and only if $G$ has a unique hamiltonian path from $s$ to $t$. Applying Lemma 3.1 to $G^{\prime}$ we get the results. Case (iv) follows by case (i) of Lemma 3.1. We can now prove the main theorem for minimum degree $3$: ###### Theorem 3.3 A finite set $M=\\{d_{0}=3,d_{1},d_{2},\dots,d_{k}\\}$ with $3<d_{1}<d_{2}<\dots<d_{k}$ of natural numbers is uhc-realizable if and only if $M$ contains an even number. In that case it is also strongly uhc- realizable. Proof: The fact that there is no uniquely hamiltonian graph $G$ with $M_{deg}(G)=M$ if $M$ contains no even number, is a well known result of Thomason [8] – no matter what the condition on connectivity is. To show that $M$ is uhc- realizable if $M$ contains an even number, we will explicitly construct a $3$-connected uniquely hamiltonian graph $G$ with $M_{deg(G)}=M$ in that case. Figure 2: The graph $U_{3,4}$ drawn as a minimum genus embedding. Sides with the same label have to be identified. This is one of the five smallest uniquely hamiltonian graphs with only degrees $3$ and $4$ as given in [4]. The unique hamiltonian cycle is $1,2,\dots,18$. The vertices $3$ and $12$ are the only vertices of degree $4$. Figure 3: The Petersen graph with one edge removed gives a strong plugin $P_{3,+2}$ for $s=1$, $t=9$ and $v=10$. It can be easily checked by hand that $1,2,\dots,9$ is the unique hamiltonian path from $s$ to $t$ if $v$ is removed and of course there is no hamiltonian path from $s$ to $t$ without removing $v$ as it would imply a hamiltonian cycle in the Petersen graph. When used as a plugin, the degrees in the copy of $P_{3,+2}$ are $3$ and $d+2$ if the vertex identified with $v$ has degree $d$. Figure 2 shows one of the five smallest uniquely hamiltonian graphs $G$ with $M_{deg(G)}=\\{3,4\\}$ (see [4]).
By using the strong plugin given in Figure 3 to an edge not on the hamiltonian cycle and incident to a vertex of degree $4$, we can increase the degree of that vertex by $2$. Doing that recursively, we can increase the degree of that vertex to any even degree. Applying the plugin to an edge incident with two vertices of degree $3$, we can increase the degree of one of them to $5$ and recursively to any odd degree. As the number of vertices of degree $3$ can be increased by replacing a vertex by a triangle – and keeping the graph uniquely hamiltonian – we can conclude that there are infinitely many (3-connected) uniquely hamiltonian graphs $G_{M}$ for any degree set $M=\\{3,d_{1},d_{2},\dots,d_{k}\\}$ containing one or two even degrees. If we take two graphs realizing degree sets $M,M^{\prime}$, remove one vertex of degree $3$ in each of them and connect the neighbours in a way that the parts of the unique hamiltonian cycles are connected to each other, we get a graph $G_{M\cup M^{\prime}}$ realizing the degree set $M\cup M^{\prime}$. This way we get that for each $M=\\{3,d_{1},d_{2},\dots,d_{k}\\}$ with at least one even element there are infinitely many uniquely hamiltonian graphs $G_{M}$ realizing it. Assume now that for a degree set $M=\\{3,d_{1},d_{2},\dots,d_{k}\\}$ containing an even degree a partition $D_{1},D_{2}$ of $\\{d_{1},d_{2},\dots,d_{k}\\}$ is given. There is a uniquely hamiltonian graph $G_{M}$ realizing $M$. If $D_{2}$ is empty, we can recursively replace vertices of degree $3$ by triangles to get an infinite sequence of uniquely hamiltonian graphs realizing $M$ and having the same number of vertices of degree $d\in D_{1}$. If $D_{2}$ contains an even degree, we can make arbitrarily many copies of a graph realizing $D_{2}\cup\\{3\\}$ and recursively combine them in the way described above with $G_{M}$. The result has a constant number of vertices with degree in $D_{1}$ and at least a constant fraction of vertices with degree in $D_{2}$. If finally $D_{2}$ does not contain a vertex of even degree, we can recursively replace vertices of degree $3$ in $G_{M}$ by triangles, so that for each $k\in\mathbb{N}$ and each $d\in D_{2}$ we can use the plugin to make $k$ vertices with degree $d$. As all graphs constructed in this proof are $3$-connected, this final construction proves that $M$ is strongly uhc-realizable. The repeated application of $P_{3,+2}$ does not give smallest possible graphs with this degree sequence – in fact not even smallest graphs constructed by using plugins. There is e.g. a plugin on $15$ vertices increasing the degree of the identified vertex by $4$ and increasing the number of vertices by $13$ instead of $16$ when applying $P_{3,+2}$ twice. For minimum degree $4$, it is unfortunately not so easy to give a strong plugin, but we have to construct it, starting from weak plugins. We do not only want to splice one edge in a graph $G$, but each edge in some set of edges. This is in general not possible, if the edges only satisfy condition (i) of Lemma 3.1 or Corollary 3.2 for $G$: if $z$ is a vertex, so that $G[V\setminus\\{z\\}]$ has no hamiltonian cycle or hamiltonian path between two vertices $a,b$, it is possible that after splicing an edge $\\{x,y\\}$ not even close to $z$, the result $O(x,y,{\cal P})$ has a hamiltonian path or cycle in the graph with $z$ removed. 
If on the other hand we have a set $E_{O}$ of candidate edges $\\{x_{1},y_{1}\\},\dots,\\{x_{k},y_{k}\\}$ to be spliced with different $x_{i}$ in different triangles, or the $y_{i}$ are one of the starting points $s,t$ of the unique hamiltonian path, these properties are preserved after splicing an edge in $E_{O}$. This implies that in that situation we can apply the splicing operation also with a weak H-plugin to all edges simultaneously or in any order and still draw the conclusions of Lemma 3.1 or Corollary 3.2. Let $G=(V,E)$ be a graph with $s,t\in V$, $v\not\in V$. Assume that its maximum degree is $4$, that there are $d-2\geq 2$ vertices of degree $3$ – among them $s$ – and that $t$ is the unique vertex of degree $2$. We define $W(G)$ as the graph obtained by adding $v$ and connecting it to $t$ and to all vertices of degree $3$ except $s$ – or formally: $W(G)=(V_{W},E_{W})$ with $V_{W}=V\cup\\{v\\}$, $E_{W}=E\cup\\{\\{v,w\\}|w\not=s,\deg(w)=3\\}\cup\\{\\{v,t\\}\\}$. We call $G$ a $d$-seed if there is a unique hamiltonian path from $s$ to $t$ in $G$ and if $W(G)$ together with a new vertex $v^{\prime}$ connected to $v,s$ and $t$ is $3$-connected. The last property guarantees that $3$-connectivity is preserved when $W(G)$ is used for splicing an edge in a $3$-connected graph. An example for a $10$-seed is given in Figure 4. As the number of vertices of odd degree must be even, $d$-seeds do not exist for odd $d$. Figure 4: A $10$-seed $G$ with the unique hamiltonian path $1,2,3,\dots,10$ from $s=1$ to $t=10$. The $10$-seed has one vertex of degree $4$, $8$ vertices of degree $3$ – among them $s$ – and one vertex of degree $2$ – the vertex $t$. The uniqueness of the hamiltonian path as well as the fact that $W(G)$ (so also the graph obtained after adding $v^{\prime}$) is $3$-connected has been checked by computer, but as the graph is small, it can also be confirmed by hand. ###### Remark 3.4 Let $P=(V,E)$ be a graph with a unique hamiltonian path from $s\in V$ to $t\in V$, $V^{\prime}\subseteq V$ and $v\not\in V$. Then we have: (i) $P=(V\cup\\{v\\},E\cup\\{\\{v,w\\}|w\in V^{\prime}\\})$ is an H-plugin. (ii) If $V^{\prime}=\\{x,y\\}$ and $x,y$ are the endpoints of an edge not on the unique hamiltonian path from $s$ to $t$, then $P^{str}=(V\cup\\{v\\},E\cup\\{\\{v,x\\},\\{v,y\\}\\})$ is a strong H-plugin. This remark follows immediately from the definitions of H-plugin and strong H-plugin and the fact that a hamiltonian path from $s$ to $t$ containing $v$ would imply a hamiltonian path in $P$ containing the edge $\\{x,y\\}$. We will first prove the existence of certain weak H-plugins, use those to construct strong H-plugins and the strong H-plugins to construct uniquely hamiltonian graphs with certain sets of degrees. ###### Lemma 3.5 Let $d_{0}\in\mathbb{N},d_{0}\geq 4$. If there is a $d_{0}$-seed $S_{d_{0}}$, then (i) For each even $d\geq d_{0}$ there are infinitely many $d$-seeds, so there are $d$-seeds with an arbitrarily large number of vertices of degree $4$. (ii) For each even $d\geq d_{0}$ there are infinitely many weak H-plugins ${\cal P}_{d}$, so that after being used for splicing an edge $\\{x,y\\}$ with $\deg(x)=3$ and $\deg(y)\in\\{3,4\\}$, there is one vertex in the copy of ${\cal P}_{d}$ of degree $d$ if $\deg(y)=3$, resp. $d+1$ if $\deg(y)=4$, and all the rest have degree $4$. If applied to an edge in a $3$-connected graph, the result after splicing with ${\cal P}_{d}$ is also $3$-connected.
Figure 5: Extending a $d$-seed by increasing the number of vertices of degree $4$. The number of vertices of degree $4$ can be increased by steps of $1$ vertex. Proof: (i) We can replace cubic vertices different from $s$ by triangles with $3$ cubic vertices exactly $\frac{d-d_{0}}{2}$ times and get graphs $S_{d}$ that still have a unique hamiltonian path between $s$ of degree $3$ and $t$ of degree $2$, and then have $d-2$ vertices of degree $3$. The fact that $W(S_{d})$ is $3$-connected follows easily from the $3$-connectivity of $W(S_{d_{0}})$. We can now extend those $d$-seeds $S_{d}$ to $d$-seeds $S^{\prime}_{d}$ with arbitrarily many vertices of degree $4$ by connecting $s$ and $t$ to an arbitrarily long zigzag path between two linear paths ending in a new vertex $s^{\prime}$ of degree $3$ and a new vertex $t^{\prime}$ of degree $2$. This operation is displayed in Figure 5. It only increases the number of vertices of degree $4$ and $W(S^{\prime}_{d})$ will also be 3-connected. As a hamiltonian path from $s^{\prime}$ to $t^{\prime}$ in $S^{\prime}_{d}$ would contain disjoint paths to $s$ and $t$, these can only be the left and right linear path, so the uniqueness of the hamiltonian path from $s^{\prime}$ to $t^{\prime}$ follows immediately from the uniqueness of the hamiltonian path in $S_{d}$. (ii) For a $d$-seed $S_{d}$, $W(S_{d})$ is by definition a weak H-plugin. As the vertex $v$ in $W(S_{d})$ has degree $d-2$, after using $W(S_{d})$ for splicing an edge $\\{x,y\\}$, $v=y$ has degree $d+\deg(y)-3$ and as $s$ and $t$ have degree $3$ in $W(S_{d})$ and get an additional neighbour during splicing, all other vertices different from $v$ have degree $4$ after splicing. The $3$-connectivity of the result follows by elementary arguments. We will now use the splicing operation and the weak H-plugins to prove the existence of certain strong H-plugins: ###### Lemma 3.6 Let $d_{0}\in\mathbb{N},d_{0}\geq 4$. If there is a $d_{0}$-seed $S_{d_{0}}$, then for each (not necessarily even) $d\geq d_{0}$ there are infinitely many strong H-plugins ${\cal P}_{d}^{str}$, so that when applied for splicing an edge with both endpoints of degree $3$, in the copy of ${\cal P}_{d}^{str}$ there are at most $8$ vertices of degree $d$ and all the other vertices have degree $4$. Proof: Assume that an even $d\geq d_{0}$ is given. We will prove the existence of those strong H-plugins for $d$ and $d+1$. Figure 6: The graph $P^{-}$ from [3], which has two hamiltonian cycles: $1,2,3,\dots,15$ and $1,2,3,4,5,11,12,13,14,15,6,7,8,9,10$. As only one of them contains the edge $\\{1,15\\}$ it has a unique hamiltonian path $1,2,\dots,15$ from $s=1$ to $t=15$. Edges with both endpoints of degree $3$ to which the splicing operation with a weak H-plugin can be applied while the uniqueness of the hamiltonian path is preserved, are drawn as arrows pointing at the vertices which can or must be chosen as $y$. Figure 7: If a unique hamiltonian cycle or path traverses the triangle as described by the bold edges on the left hand side, then after extending the graph, the part on the right hand side can only be traversed by a hamiltonian cycle or path as given by the bold edges.
Figure 8: The graph $P^{-}_{ex}$, which is an extension of the graph $P^{-}$ in Figure 6 in a way that it has a unique hamiltonian path $1,2,3,\dots,21$ between $s=1$ and $t=21$ and that except for $6$ vertices, all vertices of degree $3$ have a neighbour of degree $4$ along an edge not on the hamiltonian path (not counting the edge $\\{1,21\\}$). Edges to which the splicing operation with a weak H-plugin can be applied while the uniqueness of the hamiltonian path is preserved, are drawn as arrows pointing at the vertices which must be chosen as $y$. Figure 6 shows the graph $P^{-}$ with a unique hamiltonian path $1,2,\dots,15$ from $s=1$ to $t=15$ (given in [3]). Edges with both endpoints of degree $3$ to which the splicing operation with a weak H-plugin can be applied in a way that there is still a unique hamiltonian path between the endpoints are drawn as arrows pointing at the vertices which can or must be chosen as the vertex $y$ in the operation. If we splice these edges with a weak H-plugin ${\cal P}_{d}$ from Lemma 3.5, we get a graph with $5$ vertices of degree $d$, $2$ vertices (the vertices $5$ and $11$) of degree $3$, and all other vertices of degree $4$. Due to Corollary 3.2, this graph still has a unique hamiltonian path from $s$ to $t$ not containing the edge $\\{5,11\\}$. If we remove the edge between $s$ and $t$, add a new vertex $v$, and connect it to the vertices $5$ and $11$, due to Remark 3.4 we get a strong H-plugin ${\cal P}_{d}^{str}$. The vertices $s$ and $t$ now have degree $d-1$, so when applied in a splicing operation the degree is again $d$, $v$ has degree $2$, and all other vertices have degree $4$ or $d$. If we apply the ${\cal P}_{d}^{str}$-splice to an edge with both endpoints of degree $3$, one of them is deleted and the other one is identified with $v$ and gets degree $4$. As there are infinitely many H-plugins ${\cal P}_{d}$ with $1$ vertex of degree $d$, we also get infinitely many strong H-plugins ${\cal P}_{d}^{str}$, that all have $5$ vertices of degree $d$ after being used for splicing an edge with both endpoints of degree $3$. The graph $P^{-}_{ex}$ in Figure 8 is constructed from the graph $P^{-}$ in Figure 6 by applying the operation described in Figure 7 to three triangles. Any hamiltonian cycle or path traversing the part on the right hand side of Figure 7 in another way than displayed by the bold edges would imply the existence of a hamiltonian path traversing the part on the left in another way – which does not exist in the parts of $P^{-}$ to which it is applied. So also $P^{-}_{ex}$ has a unique hamiltonian path from $s$ to $t$ with in this case $s=1$ and $t=21$. Note that all vertices of degree $3$ in the modified part are adjacent to a vertex of degree $4$ by an edge not on the hamiltonian path and contained in a triangle not containing another vertex of degree $3$ of an edge that is to be spliced. If we use a weak H-plugin ${\cal P}_{d}$ for splicing edges of $P^{-}_{ex}$ in the way described in Figure 8, all vertices but $1,7,15,$ and $21$ have degree $4$ or $d+1$. The vertices $s=1$ and $t=21$ have degree $d$, so when used as $s$ and $t$ in a splicing operation, they get degree $d+1$ too. Vertices $7$ and $15$ have degree $3$ and are adjacent by an edge not on the unique hamiltonian path from $s$ to $t$.
So we can again add a new vertex $v$ and connect it to the vertices $7$ and $15$, which then have degree $4$ and get a strong H-plugin ${\cal P}_{d+1}^{str}$, so that when used for splicing an edge with both endpoints of degree $3$ the copy of ${\cal P}_{d+1}^{str}$ contains $8$ vertices of degree $d+1$ and all other vertices have degree $4$. Also in this case we can use infinitely many H-plugins ${\cal P}_{d}$, in this case producing one vertex of degree $d+1$ each. ###### Lemma 3.7 For each $k\in\mathbb{N}$ there are $3$-connected uniquely hamiltonian graphs $G_{k}=(V_{k},E_{k})$ with $M_{deg}(G_{k})=\\{3,4\\}$, so that the edges not on the hamiltonian cycle form a 2-regular subgraph containing all vertices of degree $4$ together with a matching of size at least $k$ containing all vertices of degree $3$. Proof: We can apply a well known technique from [6] to obtain a uniquely hamiltonian graph from a graph with two hamiltonian cycles. We take two copies of $P^{-}$ and in each of them an arbitrary cubic vertex that is traversed by the two hamiltonian cycles in two different ways. Say these vertices are $v$ and $v^{\prime}$, that the neighbours are $a,b,c$, resp. $a^{\prime},b^{\prime},c^{\prime}$ and that the hamiltonian cycles pass $v$ as $a,v,b$ and $a,v,c$ (and accordingly for $v^{\prime}$). Removing $v$ and $v^{\prime}$ and adding the edges $\\{a,c^{\prime}\\},\\{b,b^{\prime}\\},\\{c,a^{\prime}\\}$, only one hamiltonian cycle remains – using the paths $a$ to $c$ in one copy and $c^{\prime}$ to $a^{\prime}$ in the other. As in both hamiltonian cycles the vertices of degree $4$ are traversed in a way so that the edges not on the hamiltonian cycle and incident with the 4-valent vertices form a triangle, the result will in each case have a unique hamiltonian cycle with two triangles of edges not on the hamiltonian cycle containing all $6$ vertices of degree $4$. As each cubic vertex has exactly one edge not on the hamiltonian cycle, these edges form the required matching. Starting from this graph, we can replace vertices of degree $3$ by triangles to increase the number of cubic vertices and therefore also the size of the matching until we have a matching of size at least $k$. We get the following theorem as an immediate consequence: ###### Theorem 3.8 If there is a $4$-seed $S_{4}$, or a $d_{1}$-seed $S_{d_{1}}$ for $d_{1}>4$, then any set $M=\\{d_{0}=4,d_{1},d_{2},\dots,d_{k}\\}$ with $4<d_{1}<d_{2}<\dots<d_{k}$ is uhc-realizable. If $k\geq 1$, $M$ is also strongly uhc-realizable. Proof: Given the set $M=\\{4,d_{1},d_{2},\dots,d_{k}\\}$, we can take any uniquely hamiltonian graph $G_{k^{\prime}}$ with $k^{\prime}\geq k$, $k^{\prime}>0$ and the properties of Lemma 3.7 and splice the edges of the matching using each of the strong H-plugins ${\cal P}_{d_{1}}^{str}$,…,${\cal P}_{d_{k}}^{str}$ (or ${\cal P}_{4}$ if $k=0$) at least once. This removes all vertices of degree $3$ or increases their degree to $4$. Furthermore outside the H-plugins only degree $4$ occurs and in the H-plugins exactly all vertex degrees in $M$ occur, while the graph has still one unique hamiltonian cycle. To show that $M$ is strongly uhc-realizable for $k\geq 1$, assume a partition $D_{1},D_{2}$ to be given. If $D_{2}=\emptyset$, to construct the sequence of graphs we can use increasingly large strong H-plugins – keeping the numbers of vertices of degree $d$ constant for $d\in\\{d_{1},\dots,d_{k}\\}$. 
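The plugin properties invoked repeatedly above (a unique hamiltonian $s$–$t$ path in $P_{-v}$ and, for strong H-plugins, no hamiltonian $s$–$t$ path in $P$ at all) can be verified mechanically for small graphs. The following brute-force sketch is added for illustration only and is not the authors' verification code; the toy example at the end is hypothetical and merely exercises the definition.

```python
# Minimal brute-force sketch for checking the (strong) H-plugin property of a
# small graph P with marked vertices s, t, v (adjacency dict of neighbour sets).
from itertools import permutations

def count_ham_paths(adj, s, t):
    """Number of hamiltonian paths from s to t in the graph given by adj."""
    inner = [x for x in adj if x not in (s, t)]
    count = 0
    for perm in permutations(inner):
        path = [s, *perm, t]
        if all(path[i + 1] in adj[path[i]] for i in range(len(path) - 1)):
            count += 1
    return count

def remove_vertex(adj, v):
    return {x: nbrs - {v} for x, nbrs in adj.items() if x != v}

def is_h_plugin(adj, s, t, v, strong=False):
    unique_without_v = count_ham_paths(remove_vertex(adj, v), s, t) == 1
    if not strong:
        return unique_without_v
    return unique_without_v and count_ham_paths(adj, s, t) == 0

# Toy check on the 4-cycle 0-1-2-3-0 with s=0, t=2, v=3: removing v leaves the
# unique path 0,1,2, and no hamiltonian 0-2 path uses v, so the strong
# H-plugin property holds (a toy instance, not a plugin one would actually use).
C4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(is_h_plugin(C4, 0, 2, 3, strong=True))   # True
```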
If $D_{2}\not=\emptyset$, we can use graphs $G_{k^{\prime}}$ for increasingly large $k^{\prime}$ and use the same arbitrarily large number of copies of strong H-plugins ${\cal P}_{d}^{str}$ for each $d\in D_{2}$. The proof is chosen for its simplicity and not for giving the optimal values for the constants $c_{1},c_{2}$ in the definition of strong realizability. The constants also depend on $M$ and the partition. In order to get the best constant $c_{1}$ that can be obtained by this construction or determine upper bounds for the number of vertices of a smallest graph $G$ realizing $M$, in some cases with odd degrees it would be better to use one or two copies of $P^{-}_{ex}$ instead of $P^{-}$ for the construction of the $G_{k}$ and to analyze which edges can be spliced by a weak H-plugin instead of a strong H-plugin. The $10$-seed in Figure 4 now immediately implies the main result for minimum degree $4$: ###### Theorem 3.9 Any set $M=\\{4,d_{1},d_{2},\dots,d_{k}\\}$ with $10\leq d_{1}<d_{2}<\dots<d_{k}$ and $k\geq 1$ is strongly uhc-realizable. The formulation of the main result as a direct consequence of Theorem 3.8 is in order to make Theorem 3.8 citable in case a $d$-seed for $d<10$ is discovered – instead of referring to the fact that the proofs can be completely analogously repeated with the new $d$-seed. Due to Theorem 3.8 the existence of a $4$-seed implies the existence of a $3$-connected uniquely hamiltonian $4$-regular graph, but in fact also the other direction is correct: ###### Corollary 3.10 There is a $3$-connected uniquely hamiltonian $4$-regular graph, if and only if there is a $4$-seed. In that case there are infinitely many $3$-connected uniquely hamiltonian $4$-regular graphs and every set $M$ of natural numbers $d\geq 2$ with $4\in M$ and $|M|\geq 2$ is strongly uhc-realizable. Proof: From a $3$-connected uniquely hamiltonian $4$-regular graph $G$ we can get a $4$-seed by choosing a vertex of $G$ as $s$, subdivide an edge on the hamiltonian cycle incident with $s$ with a new vertex $t$, and remove an edge incident with $s$ that is not on the hamiltonian cycle. The rest of the statement is a direct consequence of Remark 2.1, Theorem 3.3, and Theorem 3.8. Furthermore, for $4$-regular graphs, the existence of a $2$-connected uniquely hamiltonian graph also implies the existence of a $3$-connected uniquely hamiltonian graph: ###### Lemma 3.11 There is a $3$-connected uniquely hamiltonian $4$-regular graph, if and only if there is a $2$-connected uniquely hamiltonian $4$-regular graph. Proof: As $3$-connected graphs are also $2$-connected, the only thing to prove is that the existence of a uniquely hamiltonian $4$-regular graph with a $2$-cut implies the existence of a $3$-connected uniquely hamiltonian $4$-regular graph. Let $G=(V,E)$ be a uniquely hamiltonian $4$-regular graph with a $2$-cut and $\\{s,t\\}$ be vertices of a $2$-cut, so that one of the components of $G[V\setminus\\{s,t\\}]$ – say $C_{0}$ – has minimum size. Let $G_{0}=G[C_{0}\cup\\{s,t\\}]$. Then there is a unique hamiltonian path in $G_{0}$ from $s$ to $t$ and due to the minimality of $C_{0}$ the vertices $s$ and $t$ have degree at least $2$ in $G_{0}$. If one has degree $2$, they are non-adjacent. As the number of vertices with odd degree must be even and as they both have neighbours in more than one component, they both have degree $2$ or both have degree $3$. 
In case of degree $2$ we can add the edge $\\{s,t\\}$, so that in each case we have a graph, which we will call again $G_{0}$ with a unique hamiltonian path $P_{H}$ from $s$ to $t$, where $s$ and $t$ are of degree $3$ and all other vertices of degree $4$. Let now $G_{0}^{v}$ be $G_{0}$ with an edge $e\not=\\{s,t\\}$ that is not part of $P_{H}$ subdivided with a new vertex $v$. By construction $G_{0}^{v}$ does not have a hamiltonian path from $s$ to $t$, but a unique hamiltonian path in $(G_{0}^{v})_{-v}=G_{0}$. So $G_{0}^{v}$ is a strong H-plugin that when applied to two connected copies of $P^{-}$ like in Lemma 3.7 gives a $4$-regular uniquely hamiltonian graph. It remains to be shown that for a $3$-connected graph $G^{\prime}$ and suitable $x,y\in G^{\prime}$ the graph $O(x,y,G_{0}^{v})$ is $3$-connected. It is sufficient to show that the graph $G_{1}$ obtained from $G_{0}^{v}$ by adding a new vertex $v^{\prime}$ and connecting it to $s,t,$ and $v$ is $3$-connected. Assume to the contrary that $G_{1}$ has a $2$-cut $K$. Note that $K\not=\\{s,t\\}$ as $C_{0}$ is a component and $v$ and through $v$ also $v^{\prime}$ are connected to it. If $s$ and $t$ are in different components of $G_{1}\setminus K$, then the common neighbour $v^{\prime}$ must be in $K$. So $K\setminus\\{v^{\prime}\\}$ is a cut of $G_{0}^{v}$. If $K=\\{v,v^{\prime}\\}$, choose $w$ as a neighbour of $v$ different from $s,t$, otherwise let $w$ be the vertex in $K\setminus\\{v^{\prime}\\}$. Then $w$ is a cutvertex of $G_{0}^{v}$ and also of $G_{0}$. Together with $s$ or $t$ it forms a $2$-cut contradicting the minimality of $C_{0}$. If $s$ and $t$ are in the same component of $G_{1}\setminus K$ or one is in $K$, there is a vertex $x\not\in\\{v,v^{\prime}\\}$ in a component not containing $s$ or $t$. But then $K$ – possibly after replacing $v$ or $v^{\prime}$ in $K$ by a neighbour – again contradicts the minimality of $C_{0}$, so $G_{1}$ does not have a $2$-cut. In [2] Fleischner proved that there are $4$-regular uniquely hamiltonian multigraphs and in fact $2k$-regular uniquely hamiltonian multigraphs with arbitrarily high degree. Another direct consequence of Lemma 3.7 is the following simple generalisation: ###### Corollary 3.12 For a set $M=\\{d_{1},\dots,d_{k}\\}$ with $2\leq d_{1}<d_{2}<\dots<d_{k}$ of natural numbers there is a uniquely hamiltonian multigraph $G$ with $M_{deg}(G)=M$ if and only if $M$ contains an even number. In that case there are infinitely many $3$-connected uniquely hamiltonian multigraphs $G$ with $M_{deg}(G)=M$. Proof: In [8] it is shown that uniquely hamiltonian multigraphs do not exist if all degrees are odd, so we only have to prove that they do exist if an even degree is contained. For $2\in M$ this is even proven for simple graphs in Remark 2.1, so assume that all elements of $M$ are at least $3$. Taking graphs $G_{k^{\prime}}$ with $k^{\prime}\geq k$ from Lemma 3.7 with the matching and $2$-factor with the described properties, we can multiply the edges of the $2$-factor containing the $4$-regular vertices until the vertices all have an even degree contained in $M$. For each remaining degree $d_{i}$, we can now choose an edge in the matching and multiply it until it has degree $d_{i}$. If there are still vertices of degree $3$ left and $3\not\in M$, we can multiply the corresponding edges of the matching until a degree in $M$ is reached. 
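The $d$-seed property defined above can likewise be tested mechanically on small candidate graphs; checks of this kind are what the exhaustive searches reported in the next section perform on a large scale. The sketch below is illustrative only and not the authors' programs; it assumes the networkx library for the connectivity test and reuses a brute-force hamiltonian-path counter such as the count_ham_paths function sketched earlier.

```python
# Minimal sketch (not the authors' programs) of a d-seed check for a small
# candidate graph G with marked vertices s (degree 3) and t (degree 2).
import networkx as nx

def is_d_seed(G, s, t, count_ham_paths):
    degs = dict(G.degree())
    d = 2 + sum(1 for x in degs.values() if x == 3)   # d-2 vertices of degree 3
    ok_degrees = (max(degs.values()) <= 4 and degs[s] == 3 and degs[t] == 2
                  and list(degs.values()).count(2) == 1)
    adj = {x: set(G[x]) for x in G}
    unique_path = count_ham_paths(adj, s, t) == 1     # unique hamiltonian s-t path

    W = G.copy()                      # W(G): add v joined to t and to the
    W.add_node("v")                   # degree-3 vertices other than s
    W.add_edge("v", t)
    for x in G:
        if degs[x] == 3 and x != s:
            W.add_edge("v", x)
    W.add_node("v2")                  # extra vertex v' joined to v, s and t
    W.add_edges_from([("v2", "v"), ("v2", s), ("v2", t)])
    three_connected = nx.node_connectivity(W) >= 3

    return ok_degrees and unique_path and three_connected, d
```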
## 4 Computational results The fact that $10$-seeds already exist with a relatively small number of vertices and that there is no obvious reason why degree $10$ should be special in this context suggest that maybe also relatively small $8$-seeds might exist. We developed two independent programs to construct seeds in a straightforward way – that is: start with a hamiltonian path $1,2,\dots,n$ and then add edges in a way that the degree restrictions are respected and that no second hamiltonian path from $1$ to $n$ is introduced. E.g. construct all graphs on $20$ vertices with a unique hamiltonian path from vertex $1$ to vertex $20$ and where vertex $1$ has degree $3$, vertex $20$ has degree $2$ and where there are in total $13$ vertices with degree $4$, $6$ vertices with degree $3$ and one vertex with degree $2$. In this context graphs are considered as isomorphic if there is a graph isomorphism mapping the end vertices of the path onto each other. Both programs can be obtained from the authors. Due to the following remark it was sufficient to test for the existence of $8$-seeds to conclude on the existence of $4$\- and $6$-seeds: ###### Remark 4.1 If there is a $k$-seed on $n$ vertices, there is also a $(k+2)$-seed on $n+2$ vertices. If $n>2.5k-4$, then there is also a $(k+2)$-seed on $n$ vertices. Proof: The first part is immediate by replacing a vertex of degree $3$ that is not $s$ or $t$ by a triangle. A $k$-seed on $n$ vertices has one vertex of degree $2$, $k-2$ vertices of degree $3$, and $n-k+1$ vertices of degree $4$. So there are $4(n-k+1)$ outgoing edges at vertices of degree $4$ and at most $2+3(k-2)$ can end in a vertex of degree $2$ or $3$. So there are at least $\frac{4n-7k+8}{2}$ edges with both endpoints of degree $4$. At most $n-k$ of them can be on the unique hamiltonian path, so at least $n-2.5k+4$ of them are not. As $n>2.5k-4$ guarantees that this number is strictly positive and as removing one of these edges gives a $(k+2)$-seed, the remark is proven. ###### Lemma 4.2 There are no $4$-, $6$-, or $8$ seeds on up to $21$ vertices. Proof: We ran the computer programs to search for $8$-seeds on up to $21$ vertices to find that they do not exist. Together with Remark 4.1 this proves the lemma. The jobs were run on a cluster of very different machines. Samples run for the faster of the two programs on a Core i7-9700 CPU restricted to 3.00GHz suggest a total running time on this type of processor of about 5.6 CPU-years for the most time consuming part – the search for $8$-seeds on $21$ vertices. Even for carefully designed and implemented algorithms independent tests are necessary. As runs without any output are not very good tests for the programs, they were also compared when generating $10$-seeds and $12$-seeds. The graphs constructed by the two programs were compared for their number and for isomorphism up to $20$ vertices. For $10$-seeds there were in total $4.689$ non-isomorphic seeds and for $12$-seeds there were in total $1.414.640$ non-isomorphic seeds. For seeds isomorphism means that the two endpoints of the hamiltonian path are marked vertices and are distinguished from the other vertices, so some seeds that are non-isomorphic as seeds can be isomorphic as graphs. There was complete agreement. ## 5 Final remarks Figure 9: The splicing operation for more than one hamiltonian cycle with a generalized 4-seed and a generalized 6-seed. In this article we are interested only in uniquely hamiltonian graphs. 
Nevertheless the method of splicing can also be useful when constructing graphs with few hamiltonian cycles. We will only give a short sketch of the possibilities. We will not formally state results, as we do not give formal proofs. The following statements should be considered as preliminary as long as no proofs are given somewhere. If we allow $n_{s}$ hamiltonian paths from $s$ to $t$ in a seed and $n_{G}$ hamiltonian cycles in a graph $G$ – none of them containing the edge $e$ of $G$ – then with the otherwise same prerequisites of Lemma 3.1, the proof can be repeated, this time showing that the result after splicing has $n_{s}\cdot n_{G}$ hamiltonian cycles. This implies that for any set $M$ of natural numbers with minimum $4$ there is a constant $C$ and an infinite series of graphs with degree set $M$ and at most $C$ hamiltonian cycles. In fact there is also one constant working as an upper bound for all sets $M$. The constants we get from our proof that used $P^{-}$ are nevertheless very large and far worse for the 4-regular case than in [10]. For better constants one has to search for starting graphs that need fewer splicing operations, but can have more than one hamiltonian cycle. An example is the construction in [10] proving that there are infinitely many (2-connected) $4$-regular graphs with $144$ hamiltonian cycles. It was found and proven in a completely different way, but can be interpreted making use of splicing: The graph in Figure 9(c) has $36$ hamiltonian cycles – none of them containing $\\{x,y\\}$. Furthermore removing $y$, the graph is non-hamiltonian. The generalized 4-seed (that is: allowing more than one hamiltonian path from $s$ to $t$) in Figure 9(a) has $4$ hamiltonian paths from $s$ to $t$, so with plugins obtained from it and its extensions, the results of splicing $\\{x,y\\}$ have $144$ hamiltonian cycles. The generalized seed in Figure 9(b) has $2$ hamiltonian paths from $s$ to $t$ and would give one vertex of degree $6$, so splicing $\\{x,y\\}$ would give $72$ hamiltonian cycles for the degree set $M=\\{4,6\\}$ and replacing a vertex of degree $3$ by a triangle also for $M=\\{4,8\\}$. All graphs explicitly given in the previous sections can be inspected at and downloaded from the database House of Graphs [1]. They can be found by searching for the keyword `UHG_degree_sequence`. All properties about small graphs stated here have been checked by computer, but can easily be confirmed by hand. ## References * [1] K. Coolsaet, S. D’hondt, and J. Goedgebeur. House of graphs 2.0: a database of interesting graphs and more. Discrete Applied Mathematics, 325:97–107, 2023. Available at https://houseofgraphs.org. * [2] H. Fleischner. Uniqueness of maximal dominating cycles in 3-regular and of hamiltonian cycles in 4-regular graphs. Journal of Graph Theory, 18(5):449–459, 1994. * [3] H. Fleischner. Uniquely hamiltonian graphs of minimum degree 4. Journal of Graph Theory, 75:167–177, 2014. * [4] J. Goedgebeur, J. Jooken, O. Solomon Lo, B. Seamone, and C.T. Zamfirescu. Few hamiltonian cycles in graphs with one or two vertex degrees. Submitted, arXiv identifier 2211.08105. * [5] P. Haxell, B. Seamone, and J. Verstraete. Independent dominating sets and hamiltonian cycles. Journal of Graph Theory, 54:233–244, 2007. * [6] D. Holton and R.E.L. Aldred. Planar graphs, regular graphs, bipartite graphs and hamiltonicity. Australas. J. Combin., 20:111–131, 1999. * [7] J. Sheehan. The multiplicity of hamiltonian circuits in a graph. 
In Recent Advances in Graph Theory (Proceedings of the Second Czechoslovak Symposium, Prague, 1974), 477–480, 1975. * [8] A.G. Thomason. Hamiltonian cycles and uniquely edge colourable graphs. Annals of Discrete Mathematics, 3:259–268, 1978. * [9] W.T. Tutte. On hamiltonian circuits. J. London Math. Soc., 21:98–101, 1946. * [10] C.T. Zamfirescu. Regular graphs with few longest cycles. SIAM Journal on Discrete Mathematics, 36(1):755–776, 2022.
# Parabolic Systems with measurable coefficients in weighted Sobolev spaces Doyoon Kim, Department of Mathematics, Korea University, Anam-ro 145, Sungbuk-gu, Seoul, 02841, Republic of Korea<EMAIL_ADDRESS>, Kyeong-Hun Kim, Department of Mathematics, Korea University, Anam-ro 145, Sungbuk-gu, Seoul, 02841, Republic of Korea <EMAIL_ADDRESS> and Kijung Lee, Department of Mathematics, Ajou University, Worldcup-ro 206, Yeongtong-gu, Suwon, 16499, Republic of Korea<EMAIL_ADDRESS> ###### Abstract. In this paper we present a weighted $L_{p}$-theory of parabolic systems on a half space. The leading coefficients are assumed to be only measurable in $t$ and have small bounded mean oscillations (BMO) with respect to $x$, and the lower order coefficients are allowed to blow up near the boundary. ###### Key words and phrases: sharp/maximal functions, parabolic systems, weighted Sobolev spaces, measurable coefficients ###### 2010 Mathematics Subject Classification: 35K51, 35R05 D. Kim was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (2016R1D1A1B03934369). K. Kim was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2017R1D1A1B03033255). K. Lee was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (NRF-2013R1A1A2060996). ## 1\. Introduction In this paper we introduce a weighted $L_{p}$-theory for parabolic systems in non-divergence form: $-u_{t}(t,x)+\sum_{i,j=1}^{d}A^{ij}(t,x)D_{ij}u(t,x)+\sum_{i=1}^{d}B^{i}(t,x)D_{i}u+C(t,x)u-\lambda u(t,x)=f(t,x)$ (1.1) in $(-\infty,T)\times\mathbb{R}^{d}_{+}$, where $\mathbb{R}^{d}_{+}:=\\{x=(x_{1},x^{\prime})\in\mathbb{R}^{d}:x_{1}>0\\}$, $\lambda$ is a non-negative number, $A^{ij}=[a^{ij}_{kr}]_{k,r=1,\ldots,d_{1}}$, $B^{i}=[b^{i}_{kr}]_{k,r=1,\ldots,d_{1}}$ and $C=[c_{kr}]_{k,r=1,\ldots,d_{1}}$ are $d_{1}\times d_{1}$ matrix valued, and $u=[u^{1}\cdots u^{d_{1}}]^{\text{tr}},\quad f=[f^{1}\cdots f^{d_{1}}]^{\text{tr}}$ are $d_{1}\times 1$ matrix valued functions with values possibly in $\mathbb{C}$. This model embraces $d_{1}$ different and interacting diffusions with the diffusion speeds varying with $(t,x)$. We may interpret $u(t,\cdot)$ as the densities of diffusing chemical materials at time $t$. The system (1.1) combined with the zero boundary condition, a typical control of the densities on the boundary, yields a very subtle question on the diffusion near the boundary, since the densities are forced to decrease or increase near the boundary in very steep ways. This forced behavior conflicts with diffusion near the boundary and is related to $x_{1}$, the distance to the boundary. We want to understand quantitative relations among $u$, the partial derivatives of $u$, and $f$, focusing on their boundary behaviors. Precisely, we consider the system (1.1) in the weighted Sobolev spaces $L_{p}((-\infty,T);H^{\gamma}_{p,\theta}(\mathbb{R}^{d}_{+})),$ where the spaces $H^{\gamma}_{p,\theta}(\mathbb{R}^{d}_{+})$ were introduced by Krylov [11] for all $\gamma\in\mathbb{R}$.
In particular, if $\gamma$ is a non-negative integer, we have the characterization $H_{p,\theta}^{\gamma}=H_{p,\theta}^{\gamma}(\mathbb{R}^{d}_{+})=\\{u:x_{1}^{|\alpha|}D^{\alpha}u\in L_{p,\theta}(\mathbb{R}^{d}_{+})\;\;\forall\alpha:0\leq|\alpha|\leq\gamma\\},$ (1.2) where $L_{p,\theta}(\mathbb{R}^{d}_{+})$ is the $L_{p}$ space with the measure $\mu_{d}(dx)=x_{1}^{\theta-d}\,dx$. Since the work of [11], there has been steady attention to the solvability theory for equations in the weighted Sobolev spaces $H_{p,\theta}^{\gamma}$ setting; see [7, 8, 6, 4]. The necessity of such theory came from, for instance, the theory of stochastic partial differential equations (SPDEs); see e.g. [12, 13] for detailed reasons. We only mention that in general the derivatives of solutions to SPDEs behave badly near the boundary of domains and $L_{p}$-norm of derivatives of solutions cannot be measured without the help of appropriate weights. Interesting enough, it turns out that the weighted spaces $H_{p,\theta}^{\gamma}$ and $L_{p}((-\infty,T);H^{\gamma}_{p,\theta}(\mathbb{R}^{d}_{+}))$ are also quite useful to the study of deterministic elliptic and parabolic systems if, for instance, the free term $f$ behaves wildly near the boundary, if systems have lower order derivatives whose coefficients are unbounded near the boundary, or if systems are defined on non-smooth domains. More specifically, if the free term $f$ blows up near the boundary, then again the derivatives of solutions to systems do not belong to $L_{p}$-spaces without weights and one needs appropriate weights to measure the $L_{p}$ norm of derivatives of solutions. We remark that, if one has a certain solvability theory in weighted Sobolev space $L_{p}((-\infty,T);H^{\gamma}_{p,\theta}(\mathbb{R}^{d}_{+}))$ for systems defined on a half space, then almost for free one gets the corresponding theory in $L_{p}((-\infty,T);H^{\gamma}_{p,\theta}(\mathcal{O}))$ for systems defined on $C^{1}$ domain $\mathcal{O}\subset\mathbb{R}^{d}$ for any $\gamma\in\mathbb{R}$. For details, we refer to [7], where single equations are studied on $C^{1}$ domains based on the results on a half space. A short description on related work is the following. The Laplace and heat equations in the weighted Sobolev spaces $H_{p,\theta}^{\gamma}$ setting were first considered in [11], when $\theta$ is in the optimal range $(d-1,d-1+p)$. These results were extended to non-divergence type elliptic and parabolic equations with continuous coefficients in [7]. Kozlov and Nazarov [8] treated parabolic equations with coefficients depending only on $t$ in mixed space- time norm spaces with the same type of weights. Recently, in [2, 4, 6] non- divergence and divergence type equations were treated with coefficients having small mean oscillations in both the spatial and time variables. In particular, the coefficients in [2] are further allowed to have no regularity assumptions in the time variable or in one spatial variable. We remark that all the results in [2, 4, 8, 7, 6, 11] treated only single equations. Quite recently, [5] handled elliptic and parabolic systems in $H_{p,\theta}^{\gamma}$ and $L_{p}((-\infty,T);H^{\gamma}_{p,\theta}(\mathbb{R}^{d}_{+}))$, respectively. In this paper we extend the results in [5] to a considerably more general setting. 
Compared to the results in [5], the features of our results can be summarized as follows: * • Extension of the range of admissible weights: the condition $\theta\in(d-1,d+1)$ if $p\geq 2$ and $\theta\in(d+1-p,d+p-1)$ if $1<p\leq 2$ in [5] is extended to the full range $\theta\in(d-1,d-1+p)$. * • The additional assumption $A^{1j}\equiv 0$ for $j=2,\cdots,d$ in [5] is dropped in this paper. * • While $A^{ij}=A^{ij}(t)$ are assumed to depend only on $t$ in [5], in this paper $A^{ij}(t,x)$ are merely measurable in $t$ and have small BMO in $x$. The main reason why in this paper we can drop such extra conditions assumed in [5] is that we use somewhat different approaches, which we explain as follows. The overall procedure to obtain the main results follows a standard scheme in $L_{p}$-theory: we derive a priori estimates and then use the method of continuity. While in [5] the above extra conditions were needed for the estimation of the sharp functions of the second derivatives of solutions, in this article we only estimate the sharp functions of the first derivatives, and then we estimate the weighted $L_{p}$-norms of solutions and their second derivatives from those of the first derivatives and unweighted $L_{p}$-estimates for systems as in (1.1) through a partition of unity argument. Another technical difference is that unlike in [5] we use the Fefferman-Stein theorem and the Hardy-Littlewood maximal function theorem with $A_{p}$ weights. The use of $A_{p}$ weights made it possible to derive the desired a priori estimates under the weaker conditions described above. In fact, in our setting the aforementioned theorems with $A_{p}$ weights are available only when estimating the first derivatives of solutions, where their associated weight is an $A_{p}$ weight for the full range $\theta\in(d-1,d-1+p)$. See Remark 4.4. On the other hand, the associated weight for the second derivatives of solutions is not in the class of $A_{p}$ weights. See (1.2), where the weights $x_{1}^{|\alpha|}$ differ depending on the number of derivatives. Throughout the paper, we impose the Legendre-Hadamard ellipticity condition, i.e., there exists a constant $\delta>0$ such that $\Re\left(\sum_{i,j=1}^{d}\theta^{\text{tr}}\xi_{i}\xi_{j}A^{ij}(t,x)\bar{\theta}\right)\geq\delta|\xi|^{2}|\theta|^{2}$ (1.3) holds for all $(t,x)\in\mathbb{R}\times\mathbb{R}^{d}_{+}$, $\xi\in\mathbb{R}^{d}$, and $\theta\in\mathbb{C}^{d_{1}}$, where $\Re(f)$ denotes the real part of $f$. We assume that $A^{ij}(t,x)$ are merely measurable in $t$ and have small BMO semi-norm with respect to $x$ (see Section 2). We also impose the boundedness condition $\displaystyle|a^{ij}_{kr}(t,x)|\leq\delta^{-1},\quad(t,x)\in\mathbb{R}\times\mathbb{R}^{d}_{+}$ (1.4) for all $i,j=1,\ldots,d$, $k,r=1,\dots,d_{1}$, where $\delta>0$ is the constant from (1.3). The paper is organized as follows. In Section 2 we introduce weighted Sobolev spaces and our main result, Theorem 2.1. In Section 3 we study systems with coefficients depending only on $t$, and sharp function estimates of solutions are obtained in Section 4. Finally we prove our main result in Section 5. We use the following rules of notation. * • $D_{j}=\frac{\partial}{\partial x_{j}}$, $D_{ij}u=\frac{\partial^{2}u}{\partial x_{i}\partial x_{j}}$. * • Throughout the proofs in this paper, the constant $N=N(\cdots)$ depends only on the parameters inside the parentheses and may change from one occurrence to the next within a proof. 
* • We will meet $d_{1}\times 1$ matrix valued, $d_{1}\times d$ matrix valued, or $d_{1}\times d\times d$ tensor valued functions $f$ depending on the situation. * • The notation $|f|$ means the square root of the sum of all squares of the components of $f$. For instance, given $u=[u^{1}\cdots u^{d_{1}}]^{\text{tr}}$ $|u|=\sqrt{\sum_{k}|u^{k}|^{2}},\quad|Du|=\sqrt{\sum_{k,i}|D_{i}u^{k}|^{2}},\quad|D^{2}u|=\sqrt{\sum_{k,i,j}|D_{ij}u^{k}|^{2}}.$ ## 2\. Preliminary and the main results In what follows we abbreviate the system (1.1) and write $-u_{t}+A^{ij}(t,x)D_{ij}u+B^{i}(t,x)D_{i}u+C(t,x)u-\lambda u=f,$ where summation over repeated indices is assumed. When $A^{ij}$ depend only on $t$, we write $-u_{t}+A^{ij}(t)D_{ij}u+B^{i}D_{i}u+Cu-\lambda u=f.$ To present our result, we first introduce some function spaces that we use in this paper. The basic function spaces are $H_{p,\theta}^{\gamma}=H_{p,\theta}^{\gamma}(\mathbb{R}^{d}_{+})$, which were introduced in a unified manner by N. V. Krylov [11] for all $\gamma\in\mathbb{R}$. The main ingredients of these spaces are the spaces of Bessel potentials. Given $p\in(1,\infty)$ and $\gamma\in\mathbb{R}$, the Bessel potential space $H^{\gamma}_{p}(\mathbb{R}^{d})$ is defined by $H^{\gamma}_{p}=(1-\Delta)^{-\gamma/2}L_{p}$ as the set of all matrix valued distributions $u$ such that $(1-\Delta)^{\gamma/2}u\in L_{p}$, i.e. $\|u\|^{p}_{H^{\gamma}_{p}}=\|(1-\Delta)^{\gamma/2}u\|^{p}_{L_{p}}<\infty$ with $\|(1-\Delta)^{\gamma/2}f\|_{L_{p}}:=\|\mathcal{F}^{-1}[(1+|\xi|^{2})^{\gamma/2}\mathcal{F}(f)(\xi)]\|_{p}$, where the Fourier transform $\mathcal{F}(f)=\tilde{f}$ is defined by $\tilde{f}(\xi)=\frac{1}{(2\pi)^{d/2}}\int_{\mathbb{R}^{d}}e^{-i\xi\cdot x}f(x)\,dx$ and $\xi\cdot x$ is the Euclidean inner product in $\mathbb{R}^{d}$. Now, take and fix a nonnegative function $\zeta\in C_{0}^{\infty}(\mathbb{R}_{+})$ satisfying $\sum_{n=-\infty}^{\infty}\zeta^{p}\left(e^{x_{1}-n}\right)\geq 1$ for all $x_{1}\in\mathbb{R}$. For $p\in(1,\infty)$, $\gamma,\theta\in\mathbb{R}$, we define $H_{p,\theta}^{\gamma}$ as the set of all matrix valued distributions $u$ on $\mathbb{R}^{d}_{+}$ such that $\|u\|_{H_{p,\theta}^{\gamma}}^{p}:=\sum_{n=-\infty}^{\infty}e^{n\theta}\|u(e^{n}\cdot)\zeta(\pi(\cdot))\|_{H^{\gamma}_{p}}^{p}<\infty,$ where $\pi(x)=\pi(x_{1},x^{\prime})=x_{1}$. If $\gamma$ is a non-negative integer, then the following characterization is available: $H_{p,\theta}^{\gamma}=\\{u:x_{1}^{|\alpha|}D^{\alpha}u\in L_{p,\theta}\ \ \forall\alpha:0\leq|\alpha|\leq\gamma\\},$ where $L_{p,\theta}=L_{p,\theta}(\mathbb{R}^{d}_{+})$ is the weighted $L_{p}$ space of matrix valued or tensor valued functions $f$ on $\mathbb{R}^{d}_{+}$ satisfying $\|f\|^{p}_{L_{p,\theta}}:=\int_{\mathbb{R}^{d}_{+}}|f(x)|^{p}x_{1}^{\theta-d}dx<\infty$; the dimension of the matrix values or tensor values of $f$ will be clear in the context. For $k\in\mathbb{Z}$ we write $M^{k}f\in L_{p,\theta}$ if $x_{1}^{k}f\in L_{p,\theta}$. We recall that the operators $MD$ and $DM$ are bounded from $H_{p,\theta}^{\gamma}$ to $H_{p,\theta}^{\gamma-1}$; see [11]. For parabolic equations, we define the function spaces $\mathbb{L}_{p,\theta}((S,T)\times\mathbb{R}^{d}_{+})=L_{p}\big{(}(S,T)\times\mathbb{R}^{d}_{+};x_{1}^{\theta-d}dxdt\big{)},$ for $-\infty\leq S<T\leq\infty$. In particular, if $\theta=d$, then $\mathbb{L}_{p,\theta}((S,T)\times\mathbb{R}^{d}_{+})=L_{p}((S,T)\times\mathbb{R}^{d}_{+})$. 
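For definiteness, we also record one admissible choice of the cut-off $\zeta$ used in the definition of $H_{p,\theta}^{\gamma}$ above; any function with the stated properties works, and the following is only an example. Take a nonnegative $\zeta\in C_{0}^{\infty}(\mathbb{R}_{+})$ with $\operatorname{supp}\zeta\subset(e^{-1},e)$ and $\zeta\equiv 1$ on $[e^{-1/2},e^{1/2}]$. For each $x_{1}\in\mathbb{R}$ choose $n\in\mathbb{Z}$ with $|x_{1}-n|\leq 1/2$; then $e^{x_{1}-n}\in[e^{-1/2},e^{1/2}]$, so $\zeta^{p}(e^{x_{1}-n})=1$ and hence $\sum_{n=-\infty}^{\infty}\zeta^{p}(e^{x_{1}-n})\geq 1$. Different admissible choices of $\zeta$ give equivalent norms $\|\cdot\|_{H_{p,\theta}^{\gamma}}$; see [11].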
For $k\in\mathbb{Z}$ we write $M^{k}f\in\mathbb{L}_{p,\theta}((S,T)\times\mathbb{R}^{d}_{+})$ if $x_{1}^{k}f\in\mathbb{L}_{p,\theta}((S,T)\times\mathbb{R}^{d}_{+})$. We write $u\in\mathfrak{H}_{p,\theta}^{2}((S,T)\times\mathbb{R}^{d}_{+})$ if $M^{-1}u,\,\,Du,\,\,MD^{2}u,\,\,Mu_{t}\in\mathbb{L}_{p,\theta}((S,T)\times\mathbb{R}^{d}_{+})$ and set $\|u\|_{\mathfrak{H}_{p,\theta}^{2}((S,T)\times\mathbb{R}^{d}_{+})}=\|M^{-1}u\|_{p,\theta}+\|Du\|_{p,\theta}+\|MD^{2}u\|_{p,\theta}+\|Mu_{t}\|_{p,\theta},$ where $\|\cdot\|_{p,\theta}=\|\cdot\|_{\mathbb{L}_{p,\theta}((S,T)\times\mathbb{R}^{d}_{+})}$. Moreover, for any $\mathcal{D}\subset\mathbb{R}\times\mathbb{R}^{d}_{+}$ we define $W^{1,2}_{p}(\mathcal{D})$ as the space of matrix valued functions $u=[u^{1}\cdots u^{d_{1}}]^{\text{tr}}$ defined on $\mathcal{D}$ satisfying $u,\,\,Du,\,\,D^{2}u,\,\,u_{t}\in L_{p}(\mathcal{D})$ and $C^{\infty}_{0}(\mathcal{D})$ is defined as the space of infinitely differentiable $d_{1}\times 1$ matrix valued functions with compact support in $\mathcal{D}$; $\mathcal{D}$ is not necessarily open. We also define the parabolic Hölder spaces $C^{\alpha/2,\alpha}(\mathcal{D})$, $\alpha\in(0,1)$, as the set of matrix valued functions $f$ defined on $\mathcal{D}$ satisfying $\|f\|_{C^{\alpha/2,\alpha}(\mathcal{D})}:=\sup_{(t,x)\in\mathcal{D}}|f(t,x)|+\sup_{(t_{1},x_{1})\neq(t_{2},x_{2})\in\mathcal{D}}\frac{|f(t_{1},x_{1})-f(t_{2},x_{2})|}{|t_{1}-t_{2}|^{\alpha/2}+|x_{1}-x_{2}|^{\alpha}}<\infty.$ As above, the dimension of the matrix values of $f$ will be clear in the context. We use the following notations frequently. $B_{r}^{\prime}(x^{\prime})=\\{y\in\mathbb{R}^{d-1}\,|\,|y^{\prime}-x^{\prime}|<r\\},\quad Q_{r}^{\prime}(t,x^{\prime})=(t-r^{2},t)\times B_{r}^{\prime}(x^{\prime}),$ $B_{r}(x)=(x_{1}-r,x_{1}+r)\times B^{\prime}_{r}(x^{\prime}),\quad Q_{r}(t,x)=(t-r^{2},t)\times B_{r}(x),$ $B_{r}^{+}(x)=B_{r}(x)\cap\mathbb{R}^{d}_{+},\quad Q_{r}^{+}(t,x)=(t-r^{2},t)\times B_{r}^{+}(x),$ where $x=(x_{1},x^{\prime})\in\mathbb{R}^{d}_{+}=\mathbb{R}_{+}\times\mathbb{R}^{d-1}$. For a matrix valued function $g$ defined on $\mathbb{R}\times\mathbb{R}^{d}_{+}$, we denote $[g(t,\cdot)]_{B_{r}(x)}=\frac{1}{|B_{r}(x)|}\int_{B_{r}(x)}\bigg{|}g(t,y)-\frac{1}{|B_{r}(x)|}\int_{B_{r}(x)}g(t,z)\,dz\bigg{|}\,dy.$ Then, for any $(s,y)\in\mathbb{R}\times\mathbb{R}^{d}_{+}$ and $r<y_{1}$, we define the mean oscillation of $g$ in $Q_{r}(s,y)=Q^{+}_{r}(s,y)$ with respect to the spatial variables as $\text{osc}_{{\sf x}}\left(g,Q_{r}(s,y)\right):=\frac{1}{r^{2}}\int_{s-r^{2}}^{\,\,s}\left[g(\tau,\cdot)\right]_{B_{r}(y)}\,d\tau.$ Finally, for $\rho\in(1/2,1)$, we denote $g^{{\sf x},\\#}_{\rho}:=\sup_{(s,y)\in\mathbb{R}\times\mathbb{R}^{d}_{+}}\;\;\sup_{r\in(0,\rho y_{1}]}\text{osc}_{\sf x}\left(g,Q_{r}(s,y)\right).$ Applying these notations to the diffusion coefficient matrices $A^{ij}$, $i,j=1,\ldots,d$ in place of $g$, we state the following regularity assumption on $A^{ij}$. ###### Assumption A$(\rho,\varepsilon)$. For $\rho\in(1/2,1)$ and $\varepsilon\in(0,1)$, we have the following bounded mean oscillation and boundedness conditions $\sum_{i,j=1}^{d}(A^{ij})_{\rho}^{{\sf x},\\#}+\sup_{t,x}\Big(\sum_{i=1}^{d}|MB^{i}|+|M^{2}C|\Big)\leq\varepsilon.$ Now, we state the main theorem of the paper. ###### Theorem 2.1 (Weighted $L_{p}$-theory on a half space). Let $T\in(-\infty,\infty]$, $\lambda\geq 0$, $p\in(1,\infty)$ and $\theta\in(d-1,d-1+p)$. 
Then there exist positive constants $\rho\in(1/2,1)$ and $\varepsilon$, depending only on $d$, $d_{1}$, $\delta$, $p$, and $\theta$, such that under Assumption A$(\rho,\varepsilon)$, for any $u\in\mathfrak{H}_{p,\theta}^{2}((-\infty,T)\times\mathbb{R}^{d}_{+})$ satisfying $-u_{t}+A^{ij}(t,x)D_{ij}u+B^{i}(t,x)D_{i}u+C(t,x)u-\lambda u=f$ (2.1) in $(-\infty,T)\times\mathbb{R}^{d}_{+}$ with $Mf\in\mathbb{L}_{p,\theta}((-\infty,T)\times\mathbb{R}^{d}_{+})$, we have $\displaystyle\lambda\|Mu\|_{p,\theta}+\sqrt{\lambda}\|MDu\|_{p,\theta}+\|u\|_{\mathfrak{H}^{2}_{p,\theta}}\leq N\|Mf\|_{p,\theta}$ (2.2) where $\|\cdot\|_{p,\theta}=\|\cdot\|_{\mathbb{L}_{p,\theta}((-\infty,T)\times\mathbb{R}^{d}_{+})},\|\cdot\|_{\mathfrak{H}^{2}_{p,\theta}}=\|\cdot\|_{\mathfrak{H}^{2}_{p,\theta}((-\infty,T)\times\mathbb{R}^{d}_{+})}$, and $N=N(d,d_{1},\delta,p,\theta,\rho)$. Moreover, for any $f$ satisfying $Mf\in\mathbb{L}_{p,\theta}((-\infty,T)\times\mathbb{R}^{d}_{+})$, there exists a unique solution $u\in\mathfrak{H}_{p,\theta}^{2}((-\infty,T)\times\mathbb{R}^{d}_{+})$ to the equation (2.1). ###### Remark 2.2. The range of $\theta$ in Theorem 2.1 is sharp. If $\theta\not\in(d-1,d-1+p)$, then the theorem does not hold even for the heat equation. See [11] for an explanation. ## 3\. Systems with coefficients measurable in time In this section all $a^{ij}_{kr}$ depend only on $t$ and are merely measurable. ###### Proposition 3.1 ($L_{p}$ theory on the whole space or a half space). Let $T\in(-\infty,\infty]$, $\lambda\geq 0$, $p\in(1,\infty)$, and $\Omega=\mathbb{R}^{d}$ or $\Omega=\mathbb{R}^{d}_{+}$. Then for any $u\in W_{p}^{1,2}\left((-\infty,T)\times\Omega\right)$ satisfying $-u_{t}+A^{ij}(t)D_{ij}u-\lambda u=f,\quad(t,x)\in(-\infty,T)\times\Omega$ (3.1) and $u(t,0,x^{\prime})=0$ in case $\Omega=\mathbb{R}^{d}_{+}$, where $f\in L_{p}((-\infty,T)\times\Omega)$, we have $\lambda\|u\|_{p}+\sqrt{\lambda}\|Du\|_{p}+\|D^{2}u\|_{p}+\|u_{t}\|_{p}\leq N\|f\|_{p},$ (3.2) where $N$ depends only on $d,d_{1},\delta,p$ and $\|\cdot\|_{p}=\|\cdot\|_{L_{p}((-\infty,T)\times\Omega)}$. Moreover, for any $\lambda>0$ and $f\in L_{p}\left((-\infty,T)\times\Omega\right)$, there exists a unique $u\in W_{p}^{1,2}\left((-\infty,T)\times\Omega\right)$ satisfying (3.1), (3.2), and $u(t,0,x^{\prime})=0$ in case $\Omega=\mathbb{R}^{d}_{+}$. ###### Proof. This proposition is a special case of [3, Theorem 2, Theorem 4], where the results are proved for higher order systems (including second order systems) with $\lambda\geq\lambda_{0}\geq 0$ when $A^{ij}$ are measurable in $t$ and have small mean oscillations in $x$. If $A^{ij}$ are functions of only $t$, then the mean oscillations in $x$ are zero, and one can take $\lambda_{0}=0$ due to the usual scaling argument. Indeed, if $\lambda\in(0,\lambda_{0})$, then set $R=\lambda_{0}/\lambda$ and consider $\tilde{u}(t,x)=R^{-1}u(Rt,\sqrt{R}x),$ which satisfies $-\tilde{u}_{t}+A^{ij}(Rt)D_{ij}\tilde{u}-\lambda_{0}\tilde{u}=\tilde{f}$ in $(-\infty,R^{-1}T)\times\Omega$, where $\tilde{f}(t,x)=f(Rt,\sqrt{R}x)$. Then, since the coefficients $A^{ij}(Rt)$ satisfy the same conditions as $A^{ij}(t)$, by [3, Theorem 2, Theorem 4] for $\lambda\geq\lambda_{0}$, we have $\lambda_{0}\|\tilde{u}\|_{p}+\sqrt{\lambda_{0}}\|D\tilde{u}\|_{p}+\|D^{2}\tilde{u}\|_{p}+\|\tilde{u}_{t}\|_{p}\leq N\|\tilde{f}\|_{p},$ where $\|\cdot\|_{p}=\|\cdot\|_{L_{p}\left((-\infty,R^{-1}T)\times\Omega\right)}$. Then we scale back to $u$. 
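For the reader's convenience we spell out this last step; the computation below is only the standard scaling bookkeeping. With $\tilde{u}(t,x)=R^{-1}u(Rt,\sqrt{R}x)$ we have $\tilde{u}_{t}(t,x)=u_{t}(Rt,\sqrt{R}x)$, $D\tilde{u}(t,x)=R^{-1/2}(Du)(Rt,\sqrt{R}x)$, $D^{2}\tilde{u}(t,x)=(D^{2}u)(Rt,\sqrt{R}x)$, and $\tilde{f}(t,x)=f(Rt,\sqrt{R}x)$. The change of variables $(t,x)\mapsto(Rt,\sqrt{R}x)$ maps $(-\infty,R^{-1}T)\times\Omega$ onto $(-\infty,T)\times\Omega$ and multiplies every $L_{p}$-norm by the same factor $R^{-(1+d/2)/p}$. Dividing by this common factor, the estimate for $\tilde{u}$ becomes $\lambda_{0}R^{-1}\|u\|_{p}+\sqrt{\lambda_{0}}R^{-1/2}\|Du\|_{p}+\|D^{2}u\|_{p}+\|u_{t}\|_{p}\leq N\|f\|_{p}$ with $\|\cdot\|_{p}=\|\cdot\|_{L_{p}((-\infty,T)\times\Omega)}$, and since $R=\lambda_{0}/\lambda$ we have $\lambda_{0}R^{-1}=\lambda$ and $\sqrt{\lambda_{0}}R^{-1/2}=\sqrt{\lambda}$, which gives exactly (3.2).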
If $\lambda=0$, for $\varepsilon>0$, we write $-u_{t}+A^{ij}(t)D_{ij}u-\varepsilon u=f-\varepsilon u$ in $(-\infty,T)\times\Omega$. By the estimate just proved for $\lambda>0$, we have $\varepsilon\|u\|_{p}+\sqrt{\varepsilon}\|Du\|_{p}+\|D^{2}u\|_{p}+\|u_{t}\|_{p}\leq N\|f\|_{p}+N\varepsilon\|u\|_{p}$ for any $\varepsilon>0$. Then we let $\varepsilon\searrow 0$ and obtain (3.2) with $\lambda=0$. ∎ Proposition 3.1 leads us to the following result. We recall the definition of the space $\mathfrak{H}^{2}_{p,\theta}((-\infty,T)\times\mathbb{R}^{d}_{+})$ from Section 2. ###### Lemma 3.2. Let $T\in(-\infty,\infty]$, $\lambda\geq 0$, $p>1$, and $\theta\in(d-p,\infty)$. Then for any $u\in\mathfrak{H}^{2}_{p,\theta}((-\infty,T)\times\mathbb{R}^{d}_{+})$ satisfying $-u_{t}+A^{ij}(t)D_{ij}u-\lambda u=f$ on $(-\infty,T)\times\mathbb{R}^{d}_{+}$ with $Mf$ belonging to $\mathbb{L}_{p,\theta}((-\infty,T)\times\mathbb{R}_{+}^{d})$, we have $\displaystyle\lambda\|Mu\|_{p,\theta}+\sqrt{\lambda}\|MDu\|_{p,\theta}+\|u\|_{\mathfrak{H}^{2}_{p,\theta}}\leq N(\|M^{-1}u\|_{p,\theta}+\|Mf\|_{p,\theta}),$ (3.3) where $N$ depends only on $d,d_{1},\delta,p$ and $\theta$, and $\|\cdot\|_{p,\theta}=\|\cdot\|_{\mathbb{L}_{p,\theta}((-\infty,T)\times\mathbb{R}^{d}_{+})}$. The same conclusion holds if $\theta\in(d-1,d-1+p)$, $u\in W_{p}^{1,2}\left((-\infty,T)\times\mathbb{R}^{d}_{+}\right)$, $u(t,0,x^{\prime})=0$, and $Mf\in\mathbb{L}_{p,\theta}((-\infty,T)\times\mathbb{R}_{+}^{d})$. In this case, $u\in\mathfrak{H}_{p,\theta}^{2}\left((-\infty,T)\times\mathbb{R}^{d}_{+}\right)$. ###### Proof. The claim of this lemma can be obtained by repeating the proof of Lemma 2.2 in [11] almost word for word. Also see e.g. Theorem 3.5 in [2]. The only difference is that $u$ is $d_{1}\times 1$ matrix valued. We give a proof below for the reader’s convenience. 1\. Take and fix a function $\zeta=\zeta(s)\in C^{\infty}_{0}(\mathbb{R}_{+})$ satisfying $\int^{\infty}_{0}|\zeta(s)|^{p}s^{-p-\theta+d-1}ds=1.$ For any fixed $r>0$ define $\zeta_{r}(x_{1}):=\zeta(rx_{1})$ for $x_{1}>0$. Then for any function $g$ defined on $\mathbb{R}^{d}_{+}$, by Fubini’s theorem and change of variables, we have $\int^{\infty}_{0}\int_{\mathbb{R}^{d}_{+}}|\zeta_{r}(x_{1})g(x)|^{p}dx\;r^{-p-\theta+d-1}dr=\int_{\mathbb{R}^{d}_{+}}|x_{1}g(x)|^{p}x_{1}^{\theta-d}dx=\|Mg\|^{p}_{p,\theta},$ $\int_{0}^{\infty}\int_{\mathbb{R}^{d}_{+}}\left|\partial_{x_{1}}\left[\zeta_{r}(x_{1})\right]g(x)\right|^{p}\,dx\,r^{-p-\theta+d-1}\,dr$ $=\int_{0}^{\infty}\int_{\mathbb{R}^{d}_{+}}|r\zeta^{\prime}(rx_{1})g(x)|^{p}\,dx\,r^{-p-\theta+d-1}\,dr=N\|g\|_{p,\theta}^{p},$ where $N=N(d,p,\theta)=\int_{0}^{\infty}|\zeta^{\prime}(s)|^{p}s^{-\theta+d-1}\,ds,$ and $\int^{\infty}_{0}\int_{\mathbb{R}^{d}_{+}}|\partial_{x_{1}}^{2}\left(\zeta_{r}(x_{1})\right)g(x)|^{p}dx\;r^{-p-\theta+d-1}dr=N\|M^{-1}g\|^{p}_{p,\theta},$ where in this case $N=N(d,p,\theta)=\int_{0}^{\infty}|\zeta^{\prime\prime}(s)|^{p}s^{p-\theta+d-1}\,ds.$ 2\. Using $\zeta_{r}$ defined in step 1, we regard $\zeta_{r}(x_{1})u(t,x)$ as a matrix valued function defined on $(-\infty,T)\times\mathbb{R}^{d}$ by extending $\zeta_{r}u$ to be zero on $(-\infty,T)\times\\{x=(x_{1},x^{\prime})\in\mathbb{R}^{d}:x_{1}\leq 0\\}$. 
Recalling the summation rule upon the repeated indices, we observe $\displaystyle-(\zeta_{r}u)_{t}+A^{ij}(t)D_{ij}(\zeta_{r}u)-\lambda\zeta_{r}u$ $\displaystyle=\zeta_{r}f+A^{i1}(t)\zeta^{\prime}_{r}D_{i}u+A^{1j}(t)\zeta^{\prime}_{r}D_{j}u+A^{11}(t)\zeta^{\prime\prime}_{r}u$ (3.4) by the relations $\displaystyle D_{j}(\zeta_{r}u)=\zeta_{r}D_{j}u+uD_{j}\zeta_{r},$ $\displaystyle D_{ij}(\zeta_{r}u)=\zeta_{r}D_{ij}u+D_{i}\zeta_{r}D_{j}u+D_{j}\zeta_{r}D_{i}u+uD_{ij}\zeta_{r}$ (3.5) for each $i,j=1,\ldots,d$, and the fact that $\zeta_{r}$ is a function of $x_{1}$. Since the compact support of $\zeta_{r}$ is away from $x_{1}=0$, $u\in\mathfrak{H}^{2}_{p,\theta}((-\infty,T)\times\mathbb{R}^{d}_{+})$ implies $\zeta_{r}u\in W^{1,2}_{p}((-\infty,T)\times\mathbb{R}^{d})$. Then (3.4) with the observation that the right hand side of (3.4) is in $L_{p}((-\infty,T)\times\mathbb{R}^{d})$ and Proposition 3.1 lead us to $\displaystyle\lambda^{p}\|\zeta_{r}u\|^{p}_{p}+\lambda^{p/2}\|D(\zeta_{r}u)\|^{p}_{p}+\|D^{2}(\zeta_{r}u)\|^{p}_{p}+\|(\zeta_{r}u)_{t}\|^{p}_{p}$ $\displaystyle\quad\quad\leq N\left(\|\zeta_{r}f\|^{p}_{p}+\|\zeta^{\prime}_{r}Du\|^{p}_{p}+\|\zeta^{\prime\prime}_{r}u\|^{p}_{p}\right),$ (3.6) where $\|\cdot\|_{p}=\|\cdot\|_{L_{p}((-\infty,T)\times\mathbb{R}^{d})}$ and $N=N(d,d_{1},\delta,p)$. From (3.6) and the relation (3.5), we obtain that $\lambda^{p}\|\zeta_{r}u\|_{p}^{p}+\|\zeta_{r}D^{2}u\|_{p}^{p}+\|\zeta_{r}u_{t}\|_{p}^{p}\leq N\left(\|\zeta_{r}f\|^{p}_{p}+\|\zeta^{\prime}_{r}Du\|^{p}_{p}+\|\zeta^{\prime\prime}_{r}u\|^{p}_{p}\right).$ Then using this estimate along with (3.6) and the relations $\zeta_{r}^{\prime}D_{1}u=\frac{1}{2}\left(D_{1}^{2}(\zeta_{r}u)-\zeta_{r}D_{1}^{2}u-u\zeta_{r}^{\prime\prime}\right),$ $\zeta_{r}^{\prime}D_{j}u=D_{1j}(\zeta_{r}u)-\zeta_{r}D_{1j}u,\quad j\neq 1,$ we see that $\|\zeta_{r}^{\prime}Du\|_{p}^{p}+\lambda^{p}\|\zeta_{r}u\|_{p}^{p}+\|\zeta_{r}D^{2}u\|_{p}^{p}+\|\zeta_{r}u_{t}\|_{p}^{p}\leq N\left(\|\zeta_{r}f\|^{p}_{p}+\|\zeta^{\prime}_{r}Du\|^{p}_{p}+\|\zeta^{\prime\prime}_{r}u\|^{p}_{p}\right).$ Multiplying both sides of this inequality by $r^{-p-\theta+d-1}$, integrating with respect to $r$ over $\mathbb{R}_{+}$, and using step 1, we get $\displaystyle\lambda^{p}\|Mu\|^{p}_{p,\theta}+\|Du\|_{p,\theta}^{p}+\|MD^{2}u\|^{p}_{p,\theta}+\|Mu_{t}\|^{p}_{p,\theta}$ $\displaystyle\quad\quad\leq N\left(\|M^{-1}u\|^{p}_{p,\theta}+\|Du\|^{p}_{p,\theta}+\|Mf\|^{p}_{p,\theta}\right),$ where $N=N(d,d_{1},\delta,p,\theta)$. Adding $\|M^{-1}u\|_{p,\theta}^{p}$ to both sides of this inequality and using the interpolation inequality (see [2, Lemma 3.3]), $\sqrt{\lambda}\|MDu\|_{p,\theta}\leq N\lambda\|Mu\|_{p,\theta}+N\|MD^{2}u\|_{p,\theta}$ with $\theta+p-d>0$, where $N$ is a universal constant (independent of $d$, $u$, $p$, and $\theta$), we arrive at (3.3). The assertions for $u\in W_{p}^{1,2}\left((-\infty,T)\times\mathbb{R}^{d}_{+}\right)$ follow from the same lines of the proof above, provided that $\|M^{-1}u\|_{L_{p,\theta}}<\infty$. To check this, we note that $\|M^{-1}u\|_{L_{p,\theta}}\leq\|M^{-1}uI_{x_{1}\in(0,1)}\|_{L_{p,\theta}}+\|uI_{x_{1}\geq 1}\|_{L_{p}},$ where by Hardy’s inequality with the condition that $d-1<\theta<d-1+p$, we have $\|M^{-1}uI_{x_{1}\in(0,1)}\|_{L_{p,\theta}}\leq N\|DuI_{x_{1}\in(0,1)}\|_{L_{p,\theta}}$ $\leq N\|(|MD^{2}u|+|MDu|+|Mu|)I_{x_{1}\in(0,2)}\|_{L_{p,\theta}}$ $\leq N\||D^{2}u|+|Du|+|u|\|_{L_{p}}<\infty.$ The lemma is proved. ∎ ###### Proposition 3.3. Let $T\in(-\infty,\infty]$, $\lambda\geq 0$. 
Assume that $u\in C_{0}^{\infty}((-\infty,T]\times\mathbb{R}^{d}_{+})$ and $f$ is defined by $f:=-u_{t}+A^{ij}(t)D_{ij}u-\lambda u$ (3.7) in $(-\infty,T)\times\mathbb{R}^{d}_{+}$. Then $Mf$ belongs to $L_{2}((-\infty,T)\times\mathbb{R}^{d}_{+})=\mathbb{L}_{2,d}((-\infty,T)\times\mathbb{R}^{d}_{+})$ and we have $\|M^{-1}u\|_{2}\leq N\|Mf\|_{2}$ (3.8) where $N=N(\delta)$ and $\|\cdot\|_{2}=\|\cdot\|_{L_{2}((-\infty,T)\times\mathbb{R}^{d}_{+})}$. In case $T=\infty$, $u\in C_{0}^{\infty}((-\infty,T]\times\mathbb{R}^{d}_{+})$ means $u\in C_{0}^{\infty}((-\infty,\infty)\times\mathbb{R}^{d}_{+})$. ###### Proof. 1\. Let $T<\infty$. By multiplying both sides of (3.7) by $-\bar{u}^{\text{tr}}$ and integrating over $(-\infty,T)\times\mathbb{R}^{d}_{+}$, we have $\int^{T}_{-\infty}\int_{\mathbb{R}^{d}_{+}}\left(\bar{u}^{\text{tr}}u_{t}-\bar{u}^{\text{tr}}A^{ij}(t)D_{ij}u+\lambda\bar{u}^{\text{tr}}u\right)\,dx\,dt=-\int^{T}_{-\infty}\int_{\mathbb{R}^{d}_{+}}\bar{u}^{\text{tr}}f\,dx\,dt.$ (3.9) Firstly, note that $\int^{T}_{-\infty}\int_{\mathbb{R}^{d}_{+}}\left(\overline{u_{t}}^{\text{tr}}u+\bar{u}^{\text{tr}}u_{t}\right)\,dx\,dt=\int^{T}_{-\infty}\int_{\mathbb{R}^{d}_{+}}\left(\bar{u}^{\text{tr}}u\right)_{t}\,dx\,dt=\int_{\mathbb{R}^{d}_{+}}|u|^{2}(T,x)\,dx$ and $\Re\left(\int^{T}_{-\infty}\int_{\mathbb{R}^{d}_{+}}\bar{u}^{\text{tr}}u_{t}\,dx\,dt\right)=\frac{1}{2}\int_{\mathbb{R}^{d}_{+}}|u|^{2}(T,x)\,dx.$ Secondly, by integration by parts $\displaystyle-\int^{T}_{-\infty}\int_{\mathbb{R}^{d}_{+}}\bar{u}^{\text{tr}}A^{ij}(t)D_{ij}u\,dx\,dt=-\sum_{i,j=1}^{d}\sum_{k,r=1}^{d_{1}}\int^{T}_{-\infty}\int_{\mathbb{R}^{d}_{+}}\overline{u^{k}}a^{ij}_{kr}(t)D_{ij}u^{r}\,dx\,dt$ becomes $\displaystyle\sum_{i,j=1}^{d}\sum_{k,r=1}^{d_{1}}\int^{T}_{-\infty}\int_{\mathbb{R}^{d}_{+}}\overline{D_{i}u^{k}}a_{kr}^{ij}(t)D_{j}u^{r}\,dx\,dt=\int^{T}_{-\infty}\int_{\mathbb{R}^{d}_{+}}\overline{D_{i}u}^{\text{tr}}A^{ij}(t)D_{j}u\,dx\,dt.$ Since $u\in C_{0}^{\infty}\left((-\infty,T]\times\mathbb{R}^{d}_{+}\right)$, if we extend $u$ to be zero in the domain $(-\infty,T)\times\\{x=(x_{1},x^{\prime})\in\mathbb{R}^{d}:x_{1}\leq 0\\}$, then the extension of $u$, still denoted by $u$, belongs to $C_{0}^{\infty}\left((-\infty,T]\times\mathbb{R}^{d}\right)$. 
Now Plancherel’s formula, the Legendre-Hadamard ellipticity condition (1.3), and Parseval’s identity (here we use $\widetilde{D_{j}u}(\xi)=i\xi_{j}\tilde{u}(\xi)$) give $\displaystyle\Re\left(\int^{T}_{-\infty}\int_{\mathbb{R}^{d}}\overline{D_{i}u}^{\text{tr}}A^{ij}(t)D_{j}u\,dx\,dt\right)$ $\displaystyle=$ $\displaystyle\Re\left(\sum^{d}_{i,j=1}\int^{T}_{-\infty}\int_{\mathbb{R}^{d}}\bar{\tilde{u}}^{\text{tr}}\xi_{i}\xi_{j}A^{ij}(t)\tilde{u}\,d\xi\,dt\right)$ $\displaystyle\geq$ $\displaystyle\delta\int^{T}_{-\infty}\int_{\mathbb{R}^{d}}|\xi|^{2}|\tilde{u}|^{2}\,d\xi\,dt$ $\displaystyle=$ $\displaystyle\delta\int^{T}_{-\infty}\int_{\mathbb{R}^{d}}|Du|^{2}\,dx\,dt$ $\displaystyle=$ $\displaystyle\delta\int^{T}_{-\infty}\int_{\mathbb{R}^{d}_{+}}|Du|^{2}\,dx\,dt.$ Considering the real parts of both sides of (3.9), we find $\displaystyle\delta\int^{T}_{-\infty}\int_{\mathbb{R}^{d}_{+}}|Du|^{2}\,dx\,dt$ (3.10) $\displaystyle\leq$ $\displaystyle\frac{1}{2}\int_{\mathbb{R}_{+}^{d}}|u|^{2}(T,x)\,dx+\delta\int^{T}_{-\infty}\int_{\mathbb{R}^{d}_{+}}|Du|^{2}\,dx\,dt+\lambda\int^{T}_{-\infty}\int_{\mathbb{R}^{d}_{+}}|u|^{2}\,dx\,dt$ $\displaystyle\leq$ $\displaystyle\Re\left(-\int^{T}_{-\infty}\int_{\mathbb{R}^{d}_{+}}\bar{u}^{\text{tr}}f\,dx\,dt\right)$ $\displaystyle\leq$ $\displaystyle\frac{\varepsilon}{2}\int^{T}_{-\infty}\int_{\mathbb{R}^{d}_{+}}|x_{1}^{-1}u|^{2}\,dx\,dt+\frac{1}{2\varepsilon}\int^{T}_{-\infty}\int_{\mathbb{R}^{d}_{+}}|x_{1}f|^{2}\,\,dx\,dt,$ for any $\varepsilon>0$, where the last inequality follows from $2\left|\bar{u}^{\text{tr}}(t,x)f(t,x)\right|\leq\varepsilon|x_{1}^{-1}u(t,x)|^{2}\,+\frac{1}{\varepsilon}|x_{1}f(t,x)|^{2}\,.$ On the other hand, we note that Hardy’s inequality gives $\int^{T}_{-\infty}\int_{\mathbb{R}^{d}_{+}}|x_{1}^{-1}u|^{2}\,dx\,dt\leq 2^{2}\int^{T}_{-\infty}\int_{\mathbb{R}^{d}_{+}}|D_{1}u|^{2}\,dx\,dt.$ Hence, (3.10) and an appropriate choice of $\varepsilon>0$ depending only on $\delta$ lead to (3.8). 2\. When $T=\infty$, we have $\Re\left(\int^{\infty}_{-\infty}\int_{\mathbb{R}^{d}_{+}}\bar{u}^{\text{tr}}u_{t}\,dx\,dt\right)=0.$ The rest is the same as step 1 with $T$ replaced by $\infty$. ∎ Once we have the estimate (3.8) for the equation (3.7), we obtain the following theorem. ###### Theorem 3.4 (Weighted $L_{2}$-theory with $\theta=d$ on a half space). Let $T\in(-\infty,\infty]$, $\lambda\geq 0$. Then for any $u\in\mathfrak{H}_{2,d}^{2}((-\infty,T)\times\mathbb{R}^{d}_{+})$ satisfying $-u_{t}+A^{ij}(t)D_{ij}u-\lambda u=f$ (3.11) in $(-\infty,T)\times\mathbb{R}^{d}_{+}$ with $Mf\in\mathbb{L}_{2,d}((-\infty,T)\times\mathbb{R}^{d}_{+})$, we have $\displaystyle\lambda\|Mu\|_{2,d}+\sqrt{\lambda}\|MDu\|_{2,d}+\|u\|_{\mathfrak{H}^{2}_{2,d}}\,\,\leq\,\,N\|Mf\|_{2,d},$ (3.12) where $N$ depends only on $d,d_{1},\delta$, $\|\cdot\|_{2,d}=\|\cdot\|_{\mathbb{L}_{2,d}((-\infty,T)\times\mathbb{R}^{d}_{+})}$, and $\|u\|_{\mathfrak{H}^{2}_{2,d}}=\|u\|_{\mathfrak{H}^{2}_{2,d}((-\infty,T)\times\mathbb{R}^{d}_{+})}$. Moreover, for any $f$ satisfying $Mf\in\mathbb{L}_{2,d}((-\infty,T)\times\mathbb{R}^{d}_{+})$, there exists a unique solution $u\in\mathfrak{H}_{2,d}^{2}((-\infty,T)\times\mathbb{R}^{d}_{+})$ to the equation (3.11). ###### Proof. First we prove the a priori estimate (3.12) given that $u\in\mathfrak{H}_{2,d}^{2}((-\infty,T)\times\mathbb{R}^{d}_{+})$ satisfies the equation (3.11). Note that $\lambda u=-u_{t}+A^{ij}(t)D_{ij}u-f,$ which means that $\lambda Mu\in L_{2}\left((-\infty,T)\times\mathbb{R}^{d}_{+}\right)$. 
Thus, $u\in\mathfrak{H}_{2,d}^{2}\left((-\infty,T)\times\mathbb{R}^{d}_{+}\right)\quad\text{and}\quad\lambda Mu\in L_{2}\left((-\infty,T)\times\mathbb{R}^{d}_{+}\right).$ Then, by the denseness results (see Theorem 1.19 and Remark 5.5 in [11]), $u$ can be approximated by functions $u_{n}$ in $C_{0}^{\infty}\left((-\infty,T]\times\mathbb{R}^{d}_{+}\right)$ with respect to both norms. That is, $\|u-u_{n}\|_{\mathfrak{H}_{2,d}^{2}}\to 0\quad\text{and}\quad\|\lambda M(u-u_{n})\|_{2,d}\to 0$ as $n\to\infty$. Moreover, by the interpolation inequality [2, Lemma 3.3], we have $\sqrt{\lambda}\|MD(u-u_{n})\|_{2}\leq N\lambda\|M(u-u_{n})\|_{2}+N\|MD^{2}(u-u_{n})\|_{2}\to 0.$ Hence, we may assume $u\in C^{\infty}_{0}((-\infty,T]\times\mathbb{R}^{d}_{+})$ and therefore we get (3.12) from Lemma 3.2 and Proposition 3.3. Thanks to the method of continuity, to prove the second assertion of the theorem for unique solvability, we only need the solvability of the system $-u_{t}+\Delta u-\lambda u=f$, where $\Delta u=[\Delta u^{1}\cdots\Delta u^{d_{1}}]^{\text{tr}}$, which in turn follows from the solvability of the single equation $-v_{t}+\Delta v-\lambda v=g$ with the scalar valued functions $v$ and $g$. This is proved in Theorem 3.5 of [2]. The theorem is proved. ∎ ## 4\. Mean oscillation estimates In this section we estimate the mean oscillation of $Du$ in order to estimate $M^{-1}u$, having Hardy’s inequality in mind. The following two lemmas are similar to Lemmas 4.1, 4.2, and 4.3 in [2], which are based on unweighted $L_{p}$-estimates for equations along with the standard localization and Sobolev embeddings. Since the corresponding results for systems are available, for instance, in [3], the proofs are in the same spirit as those in [2]. We give a brief explanation, in particular, for the proof of Lemma 4.2. We abbreviate $Q_{r}=Q_{r}(0,(0,{\bf{0}}))$, $Q_{r}^{+}=Q^{+}_{r}(0,(0,{\bf{0}}))$; see Section 2 for the definitions of $Q_{r}(t,x)$, $Q^{+}_{r}(t,x)$. ###### Lemma 4.1 (Interior Hölder estimate of $Du$). Let $\lambda\geq 0$, $1<p\leq q<\infty$, and $u\in W_{p}^{1,2}(Q_{2})$ satisfy $-u_{t}+A^{ij}(t)D_{ij}u-\lambda u=0$ in $Q_{2}$. Then $u$ belongs to $W_{q}^{1,2}(Q_{1})$ and there exists a constant $N=N(d,d_{1},\delta,p,q)$ such that $\|u\|_{W_{q}^{1,2}(Q_{1})}\leq N\|u\|_{L_{p}(Q_{2})}.$ (4.1) In particular, for the case $q>d+2$, we have $\|Du\|_{C^{\alpha/2,\alpha}(Q_{1})}\leq N\|\sqrt{\lambda}\,|u|+|Du|\,\|_{L_{p}(Q_{2})},$ where $\alpha=1-(d+2)/q\in(0,1)$ and $N=N(d,d_{1},\delta,p,q)$. Note that in the estimate (4.1) the constant $N$ is independent of $\lambda(\geq 0)$. ###### Lemma 4.2 (Boundary Hölder estimate of $Du$). Let $\lambda\geq 0$, $1<p\leq q<\infty$, and $u\in\mathfrak{H}_{p,d}^{2}(Q_{2}^{+})$ satisfy $-u_{t}+A^{ij}(t)D_{ij}u-\lambda u=0$ in $Q_{2}^{+}$. Then $u$ belongs to $W_{q}^{1,2}(Q_{1}^{+})$ and there exists a constant $N=N(d,d_{1},\delta,p,q)$ such that $\|u\|_{W_{q}^{1,2}(Q_{1}^{+})}\leq N\|u\|_{L_{p}(Q_{2}^{+})}.$ In particular, for the case $q>d+2$, we have $\|Du\|_{C^{\alpha/2,\alpha}(Q_{1}^{+})}\leq N\|u\|_{L_{p}(Q_{2}^{+})},$ where $\alpha=1-(d+2)/q\in(0,1)$ and $N=N(d,d_{1},\delta,p,q)$. ###### Proof. As argued in the proof of [2, Lemma 4.3], we may assume that $\lambda>0$. Since $u\in\mathfrak{H}_{p,d}^{2}(Q_{2}^{+})$, we have $M^{-1}u,\,Du\in L_{p}(Q_{2}^{+}),$ which implies that $u,Du\in L_{p}(Q_{2}^{+})$. 
Consider an infinitely differentiable function $\eta(t,x)$ defined in $\mathbb{R}\times\mathbb{R}^{d}$ such that $0\leq\eta(t,x)\leq 1$, $\eta(t,x)=1$ on $Q_{3/2}=(-9/4,0)\times(-3/2,3/2)\times B_{3/2}^{\prime},$ and $\operatorname{supp}\eta\subset(-4,4)\times(-2,2)\times B_{2}^{\prime}$. Then $\eta u$ satisfies $-(\eta u)_{t}+A^{ij}D_{ij}(\eta u)-\lambda(\eta u)=g$ in $(-\infty,0)\times\mathbb{R}^{d}_{+}$, where $\eta u$ is extended to be zero outside $Q_{2}^{+}$ and $g=-\eta_{t}u+A^{ij}\left(uD_{ij}\eta+2D_{i}\eta D_{j}u\right).$ Because $g\in L_{p}\left((-\infty,0)\times\mathbb{R}^{d}_{+}\right),$ by Proposition 3.1, there exists a unique $w\in W_{p}^{1,2}\left((-\infty,0)\times\mathbb{R}^{d}_{+}\right)$ satisfying $w(t,0,x^{\prime})=0$ and $-w_{t}+A^{ij}D_{ij}w-\lambda w=g$ in $(-\infty,0)\times\mathbb{R}^{d}_{+}$. Since $Mg\in L_{p}\left((-\infty,0)\times\mathbb{R}^{d}_{+}\right)$, from Lemma 3.2 with $\theta=d$, it follows that $w\in\mathfrak{H}_{p,d}^{2}\left((-\infty,0)\times\mathbb{R}^{d}_{+}\right)$. We know that $\eta u\in\mathfrak{H}_{p,d}^{2}\left((-\infty,0)\times\mathbb{R}^{d}_{+}\right)$. Thus, by the uniqueness result of Theorem 3.4, we conclude that $w=\eta u$. This means that $u=w\in W_{p}^{1,2}(Q_{3/2}^{+})$. Then we use the localization argument and Sobolev embeddings along with unweighted $L_{p}$ estimates for systems when the spatial domain is a half ball. ∎ Denote $\left(u\right)_{Q}=\frac{1}{|Q|}\int_{Q}u(t,x)\,dx\,dt,\quad\text{where}\quad Q\subset\mathbb{R}\times\mathbb{R}^{d}.$ Below we abbreviate $Q_{\kappa r}^{+}(0,(y_{1},{\bf 0}))$ as $Q_{\kappa r}^{+}(y_{1})$. ###### Lemma 4.3. Let $\kappa\geq 32$, $y_{1}\geq 0$, $\lambda\geq 0$, and $r>0$. Assume that $Mf$ belongs to $L_{2}\left(Q_{\kappa r}^{+}(y_{1})\right)$ and $u\in\mathfrak{H}_{2,d}^{2}\left(Q_{\kappa r}^{+}(y_{1})\right)$ is a solution to the system $-u_{t}+A^{ij}(t)D_{ij}u-\lambda u=f$ in $Q_{\kappa r}^{+}(y_{1})$. Then we have $\displaystyle\left(\left|Du-\left(Du\right)_{Q_{r}^{+}(y_{1})}\right|^{2}\right)^{1/2}_{Q_{r}^{+}(y_{1})}$ $\displaystyle\leq$ $\displaystyle N\kappa^{-1/2}\left(\sqrt{\lambda}\left(|u|^{2}\right)^{1/2}_{Q_{\kappa r}^{+}(y_{1})}+\left(|Du|^{2}\right)^{1/2}_{Q_{\kappa r}^{+}(y_{1})}\right)$ (4.2) $\displaystyle+N\kappa^{(d+2)/2}\left(|Mf|^{2}\right)^{1/2}_{Q_{\kappa r}^{+}(y_{1})},$ where $N=N(d,d_{1},\delta,q)>0$; in particular, $N$ is independent of $f$, $u$, $y_{1}$, $\lambda$, $r$. ###### Proof. 1\. By considering $-u_{t}+A^{ij}(t)D_{ij}u-\varepsilon u=f-\varepsilon u$ and letting $\varepsilon\searrow 0$, it suffices to consider $\lambda>0$. Moreover, we only need to prove the result for the special case $r=\frac{8}{\kappa}$ ($\kappa r=8$). In fact, let $y_{1}\geq 0$, $\lambda>0$, $r>0$ be any numbers. Then for any $f$, $u$ defined on $Q^{+}_{\kappa r}(y_{1})$ and satisfying the given assumptions, we define $v(t,x)=u(\beta^{2}t,\beta x),\quad g(t,x)=\beta^{2}f(\beta^{2}t,\beta x),$ where $\beta:=\frac{\kappa r}{8}$. Then $v,g$ are functions defined on $Q_{8}^{+}(y_{1}/\beta)$, $Mg$ is in $L_{2}\left(Q_{8}^{+}(y_{1}/\beta)\right)$, and $v$ is in $\mathfrak{H}_{2,d}^{2}\left(Q_{8}^{+}(y_{1}/\beta)\right)$. Moreover, $v$ is a solution to the system $-v_{t}+A^{ij}(\beta^{2}t)D_{ij}v-\lambda\beta^{2}v=g$ (4.3) in $Q_{8}^{+}(y_{1}/\beta)$. Hence, if the lemma holds when $\kappa r=8$, then we have (4.2) with $v,g,y_{1}/\beta,\lambda\beta^{2},r=8/\kappa$ in the places of $u,f,y_{1},\lambda,r$. 
On the other hand, a straightforward computation shows that $\left(|Mg|^{2}\right)^{1/2}_{Q_{8}^{+}(y_{1}/\beta)}=\beta\left(|Mf|^{2}\right)^{1/2}_{Q_{\kappa r}^{+}(y_{1})},\quad\left(|Dv|^{2}\right)^{1/2}_{Q_{8}^{+}(y_{1}/\beta)}=\beta\left(|Du|^{2}\right)^{1/2}_{Q_{\kappa r}^{+}(y_{1})},$ $\sqrt{\lambda\beta^{2}}\left(|v|^{2}\right)^{1/2}_{Q_{8}^{+}(y_{1}/\beta)}=\beta\sqrt{\lambda}\left(|u|^{2}\right)^{1/2}_{Q_{\kappa r}^{+}(y_{1})},$ $\left(\left|Dv-\left(Dv\right)_{Q_{8/\kappa}^{+}(y_{1}/\beta)}\right|^{2}\right)^{1/2}_{Q_{8/\kappa}^{+}(y_{1}/\beta)}=\beta\left(\left|Du-\left(Du\right)_{Q_{r}^{+}(y_{1})}\right|^{2}\right)^{1/2}_{Q_{r}^{+}(y_{1})},$ and we obtain (4.2) for general $r>0$. We have seen that the result of this lemma for $r=\frac{8}{\kappa}$ implies the result for the general $r>0$. 2\. Let $y_{1}\in[0,1]$. Since we assume $r=8/\kappa\leq 1/4$, we will keep the following in mind: $Q_{r}^{+}(y_{1})\subset Q_{2}^{+}\subset Q_{4}^{+}\subset Q_{\kappa r}^{+}(y_{1})=Q^{+}_{8}(y_{1}).$ We note $MfI_{Q^{+}_{4}}\in\mathbb{L}_{2,d}((-\infty,0)\times\mathbb{R}^{d}_{+})=L_{2}((-\infty,0)\times\mathbb{R}^{d}_{+})$. By Theorem 3.4, there is a unique $w\in\mathfrak{H}_{2,d}^{2}((-\infty,0)\times\mathbb{R}^{d}_{+})$ satisfying $-w_{t}+A^{ij}(t)D_{ij}w-\lambda w=fI_{Q_{4}^{+}}$ in $(-\infty,0)\times\mathbb{R}^{d}_{+}$ and, in particular, we have $\|Dw\|_{\mathbb{L}_{2,d}((-\infty,0)\times\mathbb{R}^{d}_{+})}\leq N\|MfI_{Q_{4}^{+}}\|_{\mathbb{L}_{2,d}((-\infty,0)\times\mathbb{R}^{d}_{+})}=N\|Mf\|_{L_{2}(Q_{4}^{+})},$ (4.4) where $N=N(d,d_{1},\delta)$. Then $v:=u-w$ is in $\mathfrak{H}_{2,d}^{2}(Q_{4}^{+})$ and satisfies $-v_{t}+A^{ij}(t)D_{ij}v-\lambda v=0,\quad(t,x)\in Q^{+}_{4}.$ We note that for any $\alpha\in(0,1)$ $\displaystyle\left(\left|Dv-\left(Dv\right)_{Q_{r}^{+}(y_{1})}\right|^{2}\right)^{1/2}_{Q_{r}^{+}(y_{1})}\leq Nr^{\alpha}[Dv]_{C^{\alpha/2,\alpha}(Q^{+}_{2})},$ (4.5) where $N=N(\alpha)$. On the other hand, by Lemma 4.2 with $q$ so that $1-(d+2)/q=1/2$, $p=2$, and a scaling argument as in step 1, we have $\displaystyle[Dv]_{C^{1/4,1/2}(Q^{+}_{2})}\leq N\|v\|_{L_{2}(Q^{+}_{4})}\leq N\|M^{-1}v\|_{L_{2}(Q^{+}_{4})}\leq N\|D_{1}v\|_{L_{2}(Q^{+}_{4})},$ (4.6) where the last inequality is due to Hardy’s inequality and the last $N$ depends only on $d,d_{1},q$. Combining (4.5), (4.6) and (4.4), we have $\displaystyle\left(\left|Du-\left(Du\right)_{Q_{r}^{+}(y_{1})}\right|^{2}\right)^{1/2}_{Q_{r}^{+}(y_{1})}$ $\displaystyle\leq$ $\displaystyle N\left(\left|Dv-\left(Dv\right)_{Q_{r}^{+}(y_{1})}\right|^{2}\right)^{1/2}_{Q_{r}^{+}(y_{1})}+N\left(\left|Dw\right|^{2}\right)^{1/2}_{Q_{r}^{+}(y_{1})}$ $\displaystyle\leq$ $\displaystyle Nr^{1/2}\left(\left|D_{1}v\right|^{2}\right)^{1/2}_{Q_{4}^{+}}+Nr^{-(d+2)/2}\left(\left|Mf\right|^{2}\right)^{1/2}_{Q_{4}^{+}}$ $\displaystyle\leq$ $\displaystyle Nr^{1/2}\left(\left|D_{1}u\right|^{2}\right)^{1/2}_{Q_{4}^{+}}+Nr^{-(d+2)/2}\left(\left|Mf\right|^{2}\right)^{1/2}_{Q_{4}^{+}},$ where $N=N(d,d_{1},q)$. Since $\kappa r=8$ and $Q_{4}^{+}\subset Q^{+}_{\kappa r}(y_{1})=Q^{+}_{8}(y_{1})$, we obtain (4.2) in this case. 3\. Let $y_{1}\in(1,\infty)$. We again assume $r=8/{\kappa}\leq 1/4$. 
Due to $y_{1}>1$, this time we have $Q_{r}^{+}(y_{1})=Q_{r}(y_{1})\subset Q_{1/4}(y_{1})\subset Q_{1/2}(y_{1})\subset Q_{\kappa r}^{+}(y_{1}).$ As in step 2, by Theorem 3.4 there is a unique solution $w\in\mathfrak{H}^{2}_{2,d}((-\infty,0)\times\mathbb{R}^{d}_{+})$ to the equation $-w_{t}+A^{ij}(t)w_{x^{i}x^{j}}-\lambda w=f1_{Q_{1/2}(y_{1})}$ and the estimate (3.12) holds with $w$ and $f1_{Q_{1/2}(y_{1})}$ in places of $u$ and $f$. In particular, we have $\displaystyle\lambda\|Mw\|_{\mathbb{L}_{2,d}((-\infty,0)\times\mathbb{R}^{d}_{+})}+\|M^{-1}w\|_{\mathbb{L}_{2,d}((-\infty,0)\times\mathbb{R}^{d}_{+})}+\|Dw\|_{\mathbb{L}_{2,d}((-\infty,0)\times\mathbb{R}^{d}_{+})}$ $\displaystyle\leq$ $\displaystyle N\|Mf\|_{L_{2}(Q_{1/2}(y_{1}))},$ where $N=N(d,d_{1},\delta)$. This estimate along with the inequality $\sqrt{\lambda}\leq\lambda x_{1}+x_{1}^{-1},\quad x_{1}>0$ shows that $\|\sqrt{\lambda}|w|+|Dw|\|_{\mathbb{L}_{2,d}((-\infty,0)\times\mathbb{R}^{d}_{+})}\leq N\|Mf\|_{L_{2}(Q_{1/2}(y_{1}))},$ (4.7) where $N=N(d,d_{1},\delta)$. Then $v:=u-w\in\mathfrak{H}^{2}_{2,d}((-\infty,0)\times\mathbb{R}^{d}_{+})$ and satisfies $-v_{t}+A^{ij}(t)v_{x^{i}x^{j}}-\lambda v=0,\quad(t,x)\in Q_{1/2}(y_{1}).$ Applying Lemma 4.1 with a large $q$ (so that $1-(d+2)/q=1/2$), $p=2$, and scaling and translation arguments, we get $\displaystyle\left(\left|Dv-\left(Dv\right)_{Q_{r}^{+}(y_{1})}\right|^{2}\right)^{1/2}_{Q_{r}^{+}(y_{1})}$ $\displaystyle\leq$ $\displaystyle Nr^{1/2}[Dv]_{C^{1/4,1/2}(Q_{1/4}(y_{1}))}$ $\displaystyle\leq$ $\displaystyle Nr^{1/2}\left((\sqrt{\lambda}|v|+|Dv|)^{2}\right)^{1/2}_{Q_{1/2}(y_{1})},$ where $N=N(d,d_{1},q)$. As in the last part of step 2, we have $\displaystyle\left(\left|Du-\left(Du\right)_{Q_{r}^{+}(y_{1})}\right|^{2}\right)^{1/2}_{Q_{r}^{+}(y_{1})}$ $\displaystyle\leq$ $\displaystyle N\left(\left|Dv-\left(Dv\right)_{Q_{r}^{+}(y_{1})}\right|^{2}\right)^{1/2}_{Q_{r}^{+}(y_{1})}+N\left(\left|Dw\right|^{2}\right)^{1/2}_{Q_{r}^{+}(y_{1})}$ $\displaystyle\leq$ $\displaystyle Nr^{1/2}\left((\sqrt{\lambda}|v|+|Dv|)^{2}\right)^{1/2}_{Q_{1/2}(y_{1})}+Nr^{-(d+2)/2}\left(\left|Mf\right|^{2}\right)^{1/2}_{Q_{1/2}(y_{1})}$ $\displaystyle\leq$ $\displaystyle Nr^{1/2}\left((\sqrt{\lambda}|u|+|Du|)^{2}\right)^{1/2}_{Q_{1/2}(y_{1})}+Nr^{-(d+2)/2}\left(\left|Mf\right|^{2}\right)^{1/2}_{Q_{1/2}(y_{1})},$ where the last $N=N(d,d_{1},q)$. Since $\kappa r=8$ and $Q_{1/2}(y_{1})\subset Q^{+}_{\kappa r}(y_{1})=Q^{+}_{8}(y_{1})$, (4.2) follows again. ∎ ###### Remark 4.4. For $T\in(-\infty,\infty]$, consider the collection of parabolic cylinders $\mathcal{Q}=\\{Q=Q^{+}_{r}(z):z=(t,x)\in(-\infty,T)\times\mathbb{R}^{d}_{+},r\in(0,\infty)\\},$ and recall Muckenhoupt weights $A_{p}((-\infty,T)\times\mathbb{R}^{d}_{+})$. 
That is, $w\in A_{p}((-\infty,T)\times\mathbb{R}^{d}_{+})$ if $w$ is a non-negative function defined on $(-\infty,T)\times\mathbb{R}^{d}_{+}$ such that $[w]_{A_{p}}=\sup_{Q_{r}^{+}(z)\in\mathcal{Q}}\left(\frac{1}{|Q_{r}^{+}(z)|}\int_{Q_{r}^{+}(z)}w(s,y)\,dy\,ds\right)\left(\frac{1}{|Q_{r}^{+}(z)|}\int_{Q_{r}^{+}(z)}\left(w(s,y)\right)^{-1/(p-1)}\,dy\,ds\right)^{p-1}<\infty.$ Set $\mathcal{M}g(t,x)=\sup_{Q\in\mathcal{Q},(t,x)\in Q}\frac{1}{|Q|}\int_{Q}|g(s,y)|\,dy\,ds,\quad(t,x)\in(-\infty,T)\times\mathbb{R}^{d}_{+}.$ Then, by the Hardy-Littlewood maximal function theorem with $A_{p}$ weights (WHL), we have $\|\mathcal{M}g\|_{L_{p,w}}\leq N\|g\|_{L_{p,w}},$ where $N=N(d,p,[w]_{A_{p}})$ and $\|f\|_{L_{p,w}}^{p}=\int_{-\infty}^{T}\int_{\mathbb{R}^{d}_{+}}|f(t,x)|^{p}w(t,x)\,dx\,dt.$ We also use the Fefferman-Stein theorem for sharp functions with $A_{p}$ weights (WFS). In doing so, we define sharp functions using a filtration of partitions. More precisely, we consider the following sequence of partitions of $(-\infty,T)\times\mathbb{R}^{d}_{+}$. $\mathbb{C}_{\ell}:=\\{Q^{\ell}=Q^{\ell}_{i_{0},i_{1},\ldots,i_{d}}:i_{0},i_{1},\ldots,i_{d}\in\mathbb{Z},\,i_{0}\leq 0,\,i_{1}\geq 0\\},$ where $\ell\in\mathbb{Z}$ and $Q^{\ell}_{i_{0},i_{1},\ldots,i_{d}}$ is the intersection of $(-\infty,T)\times\mathbb{R}^{d}_{+}$ with $[(i_{0}-1)2^{-2\ell}+T,i_{0}2^{-2\ell}+T)\times[i_{1}2^{-\ell},(i_{1}+1)2^{-\ell})\times\cdots\times[i_{d}2^{-\ell},(i_{d}+1)2^{-\ell}),$ provided that $T<\infty$. If $T=\infty$, we replace $i_{0}\leq 0$ by $i_{0}\in\mathbb{Z}$ and the time interval $[(i_{0}-1)2^{-2\ell}+T,i_{0}2^{-2\ell}+T)$ by $[(i_{0}-1)2^{-2\ell},i_{0}2^{-2\ell})$. Then, we define $g^{\\#}_{\operatorname{dy}}(t,x):=\sup_{\ell<\infty,(t,x)\in Q^{\ell}}\frac{1}{|Q^{\ell}|}\int_{Q^{\ell}}|g(s,y)-(g)_{Q^{\ell}}|\,dy\,ds,\quad(t,x)\in(-\infty,T)\times\mathbb{R}^{d}_{+}.$ By the Fefferman-Stein theorem for sharp functions with $A_{p}$ weights (see, for instance, [1, Theorems 2.3 and 2.4]), we have $\|g\|_{L_{p,w}}\leq N\|g^{\\#}_{\operatorname{dy}}\|_{L_{p,w}}$ for $w\in A_{p}\left((-\infty,T)\times\mathbb{R}^{d}_{+}\right)$, where $N=N(d,p,[w]_{A_{p}})$. In particular, we see that if $p\in(1,\infty)$, $1<q<p$, and $\theta\in(d-1,d-1+p/q)$, then $x_{1}^{\theta-d}\in A_{p/q}.$ Indeed, let $x_{1}\in[0,\infty)$ and $r>0$. If $x_{1}<2r$, then $\left(\frac{1}{2r}\int_{x_{1}-r}^{x_{1}+r}y_{1}^{\theta-d}\,dy_{1}\right)\left(\frac{1}{2r}\int_{x_{1}-r}^{x_{1}+r}\left(y_{1}^{\theta-d}\right)^{-1/(p/q-1)}\,dy_{1}\right)^{p/q-1}$ $\leq\left(\frac{1}{r}\int_{0}^{x_{1}+r}y_{1}^{\theta-d}\,dy_{1}\right)\left(\frac{1}{r}\int_{0}^{x_{1}+r}\left(y_{1}^{\theta-d}\right)^{-1/(p/q-1)}\,dy_{1}\right)^{p/q-1}$ $\leq\left(r^{\theta-d}\int_{0}^{\frac{x_{1}}{r}+1}\tau^{\theta-d}\,d\tau\right)\left(r^{-\frac{\theta-d}{p/q-1}}\int_{0}^{\frac{x_{1}}{r}+1}\tau^{-\frac{\theta-d}{p/q-1}}\,d\tau\right)^{p/q-1}$ $\leq\left(\int_{0}^{3}\tau^{\theta-d}\,d\tau\right)\left(\int_{0}^{3}\tau^{-\frac{\theta-d}{p/q-1}}\,d\tau\right)^{p/q-1},$ where we note that $\theta-d>-1$ and $-\frac{\theta-d}{p/q-1}>-1$. 
If $x_{1}\geq 2r$, then $\left(\frac{1}{2r}\int_{x_{1}-r}^{x_{1}+r}y_{1}^{\theta-d}\,dy_{1}\right)\left(\frac{1}{2r}\int_{x_{1}-r}^{x_{1}+r}\left(y_{1}^{\theta-d}\right)^{-1/(p/q-1)}\,dy_{1}\right)^{p/q-1}$ $=2^{-p/q}\left(\int_{\frac{x_{1}}{r}-1}^{\frac{x_{1}}{r}+1}\tau^{\theta-d}\,d\tau\right)\left(\int_{\frac{x_{1}}{r}-1}^{\frac{x_{1}}{r}+1}\tau^{-\frac{\theta-d}{p/q-1}}\,d\tau\right)^{p/q-1}$ $\leq\left\\{\begin{aligned} 2^{-p/q}2\left(\frac{x_{1}}{r}-1\right)^{\theta-d}\left(2\left(\frac{x_{1}}{r}+1\right)^{-\frac{\theta-d}{p/q-1}}\right)^{p/q-1},\quad\text{if}\quad\theta-d\leq 0,\\\ 2^{-p/q}2\left(\frac{x_{1}}{r}+1\right)^{\theta-d}\left(2\left(\frac{x_{1}}{r}-1\right)^{-\frac{\theta-d}{p/q-1}}\right)^{p/q-1},\quad\text{if}\quad\theta-d>0,\end{aligned}\right.$ which is bounded by a constant independent of $x_{1}$ and $r$ because $\left(\frac{x_{1}}{r}+1\right)\left(\frac{x_{1}}{r}-1\right)^{-1}\leq 3,$ provided that $x_{1}\geq 2r$. The following theorem is an $L_{p}$ counterpart of Theorem 3.4. ###### Theorem 4.5 (Weighted $L_{p}$-theory with $\theta=d$ on a half space). Let $T\in(-\infty,\infty]$, $\lambda\geq 0$, and $p\in(1,\infty)$. Then for any $u\in\mathfrak{H}_{p,d}^{2}((-\infty,T)\times\mathbb{R}^{d}_{+})$ satisfying $-u_{t}+A^{ij}(t)D_{ij}u-\lambda u=f$ (4.8) in $(-\infty,T)\times\mathbb{R}^{d}_{+}$ with $Mf\in\mathbb{L}_{p,d}((-\infty,T)\times\mathbb{R}^{d}_{+})$, we have $\displaystyle\lambda\|Mu\|_{p,d}+\sqrt{\lambda}\|MDu\|_{p,d}+\|u\|_{\mathfrak{H}^{2}_{p,d}}\leq N\|Mf\|_{p,d},$ (4.9) where $N$ depends only on $d,d_{1},\delta,p$, $\|\cdot\|_{p,d}=\|\cdot\|_{\mathbb{L}_{p,d}((-\infty,T)\times\mathbb{R}^{d}_{+})}$, and $\|\cdot\|_{\mathfrak{H}^{2}_{p,d}}=\|\cdot\|_{\mathfrak{H}^{2}_{p,d}((-\infty,T)\times\mathbb{R}^{d}_{+})}$. Moreover, for any $f$ satisfying $Mf\in\mathbb{L}_{p,d}((-\infty,T)\times\mathbb{R}^{d}_{+})$, there exists a unique solution $u\in\mathfrak{H}_{p,d}^{2}((-\infty,T)\times\mathbb{R}^{d}_{+})$ to the equation (4.8). ###### Proof. Due to the method of continuity and the corresponding theory of the Laplacian case in Theorem 3.5 in [2], we only prove the a priori estimate (4.9). 1\. Let $p>2$. Take any $\kappa\geq 32$. Then by Lemma 4.3 with a simple translation argument, we have $\displaystyle\left(\left|Du-\left(Du\right)_{Q^{+}_{r}(s,y)}\right|^{2}\right)^{1/2}_{Q^{+}_{r}(s,y)}$ $\displaystyle\leq$ $\displaystyle N\kappa^{-1/2}\left(\sqrt{\lambda}\left(|u|^{2}\right)^{1/2}_{Q^{+}_{\kappa r}(s,y)}+\left(|Du|^{2}\right)^{1/2}_{Q^{+}_{\kappa r}(s,y)}\right)$ (4.10) $\displaystyle+N\kappa^{(d+2)/2}\left(|Mf|^{2}\right)^{1/2}_{Q^{+}_{\kappa r}(s,y)}$ for any $(s,y)\in(-\infty,T)\times\mathbb{R}^{d}_{+}$, where $N=N(d,d_{1},\delta)$. 
For each $(t,x)\in(-\infty,T)\times\mathbb{R}^{d}$ and $Q^{\ell}\in\mathbb{C}_{\ell}$ such that $(t,x)\in Q^{\ell}$, find $Q_{r}^{+}(s,y)$, $(s,y)\in(-\infty,T)\times\mathbb{R}^{d}_{+}$ with the smallest $r>0$ such that $Q^{\ell}\subset Q_{r}^{+}(s,y)$ and $\left(|Du-(Du)_{Q^{\ell}}|^{2}\right)^{1/2}_{Q^{\ell}}\leq N(d)\left(|Du-(Du)_{Q_{r}^{+}(s,y)}|^{2}\right)^{1/2}_{Q_{r}^{+}(s,y)}.$ From this, (4.10), Jensen’s inequality and the definitions of sharp functions and maximal functions in Remark 4.4, we get $\displaystyle(Du)^{\\#}_{\operatorname{dy}}(t,x)$ $\displaystyle\leq$ $\displaystyle N\kappa^{-1/2}\left(\sqrt{\lambda}\mathcal{M}^{1/2}(|u|^{2})(t,x)+\mathcal{M}^{1/2}(|Du|^{2})(t,x)\right)$ $\displaystyle+N\kappa^{(d+2)/2}\mathcal{M}^{1/2}(|Mf|^{2})(t,x)$ for any $(t,x)\in(-\infty,T)\times\mathbb{R}^{d}_{+}$. Then we have $\displaystyle\|(Du)^{\\#}_{\operatorname{dy}}\|^{p}_{p,d}$ $\displaystyle\leq$ $\displaystyle N\kappa^{-p/2}\left((\sqrt{\lambda})^{p}\|\mathcal{M}(|u|^{2})\|^{p/2}_{p/2,d}+\|\mathcal{M}(|Du|^{2})\|^{p/2}_{p/2,d}\right)$ $\displaystyle+N\kappa^{p(d+2)/2}\|\mathcal{M}(|Mf|^{2})\|^{p/2}_{p/2,d},$ where $N=N(d,d_{1},\delta,p)$. Noting $p/2>1$ and applying WFS and WHL in Remark 4.4 with $w\equiv 1$, we have $\displaystyle\|Du\|^{p}_{p,d}$ $\displaystyle\leq$ $\displaystyle N\kappa^{-p/2}\left((\sqrt{\lambda})^{p}\||u|^{2}\|^{p/2}_{p/2,d}+\||Du|^{2}\|^{p/2}_{p/2,d}\right)+N\kappa^{p(d+2)/2}\||Mf|^{2}\|^{p/2}_{p/2,d}$ $\displaystyle=$ $\displaystyle N\kappa^{-p/2}\left((\sqrt{\lambda})^{p}\|u\|^{p}_{p,d}+\|Du\|^{p}_{p,d}\right)+N\kappa^{p(d+2)/2}\|Mf\|^{p}_{p,d},$ and therefore $\displaystyle\|Du\|_{p,d}$ $\displaystyle\leq$ $\displaystyle N\kappa^{-1/2}\left(\|\sqrt{\lambda}u\|_{p,d}+\|Du\|_{p,d}\right)+N\kappa^{(d+2)/2}\|Mf\|_{p,d}$ $\displaystyle\leq$ $\displaystyle N\kappa^{-1/2}\left(\lambda\|Mu\|_{p,d}+\|M^{-1}u\|_{p,d}+\|Du\|_{p,d}\right)+N\kappa^{(d+2)/2}\|Mf\|_{p,d},$ where we used $\sqrt{\lambda}\leq\lambda x_{1}+1/x_{1}$, $x_{1}>0$ for the second inequality. Then Lemma 3.2 with $\theta=d$ and Hardy’s inequality give $\displaystyle\lambda\|Mu\|_{p,d}+\sqrt{\lambda}\|MDu\|_{p,d}+\|u\|_{\mathfrak{H}^{2}_{p,d}}$ $\displaystyle\leq$ $\displaystyle N\kappa^{-1/2}\left(\lambda\|Mu\|_{p,d}+\|M^{-1}u\|_{p,d}+\|Du\|_{p,d}\right)+N\kappa^{(d+2)/2}\|Mf\|_{p,d}+N\|Mf\|_{p,d},$ and an appropriate choice of $\kappa\geq 32$ leads us to (4.9). 2\. Let $1<p<2$. We use a duality argument. Again it suffices to prove the a priori estimate (4.9). Furthermore, thanks to Lemma 3.2, we only need to prove that $\|M^{-1}u\|_{p}\leq N\|Mf\|_{p},$ (4.11) where $\|\cdot\|_{p}=\|\cdot\|_{L_{p}((-\infty,T)\times\mathbb{R}^{d}_{+})}$. Now we recall [10, Theorem 2.3] saying that $L_{p,d-p}((-\infty,T)\times\mathbb{R}^{d}_{+})$ is the dual space of $L_{q,d+q}((-\infty,T)\times\mathbb{R}^{d}_{+})$, where $1/p+1/q=1$. Let $g\in L_{q,d+q}((-\infty,T)\times\mathbb{R}^{d}_{+})$. That is, $Mg\in L_{q}((-\infty,T)\times\mathbb{R}^{d}_{+})$, where $q>2$. Then by the above result for $q>2$, there exists $v\in\mathfrak{H}_{q,d}^{2}(\mathbb{R}\times\mathbb{R}^{d}_{+})$ satisfying $v_{t}+A^{ij}(t)D_{ij}v-\lambda v=gI_{t\in(-\infty,T)}$ in $\mathbb{R}\times\mathbb{R}^{d}_{+}$. In particular, $v(t,x)=0$ for $t\geq T$ in case $T<\infty$. 
Thus $\int_{(-\infty,T)\times\mathbb{R}^{d}_{+}}ug\,dx\,dt=\int_{(-\infty,T)\times\mathbb{R}^{d}_{+}}u\left(v_{t}+A^{ij}(t)D_{ij}v-\lambda v\right)\,dx\,dt$ $=\int_{(-\infty,T)\times\mathbb{R}^{d}_{+}}v\left(-u_{t}+A^{ij}(t)D_{ij}u-\lambda u\right)\,dx\,dt=\int_{(-\infty,T)\times\mathbb{R}^{d}_{+}}vf\,dx\,dt$ $\leq\|M^{-1}v\|_{q}\|Mf\|_{p}\leq N\|Mg\|_{q}\|Mf\|_{p},$ which implies (4.11). Finally, Theorem 3.4 takes care of the case $p=2$. ∎ Theorem 4.5 allows us to have the following lemma, which is an $L_{p}$ counterpart of Lemma 4.3. ###### Lemma 4.6 (Mean oscillation of $Du$ on a half space). Let $p>1$, $\lambda\geq 0$, $r>0$, $\kappa\geq 32$, and $y_{1}\geq 0$. Assume that $Mf\in L_{p}\left(Q_{\kappa r}^{+}(y_{1})\right)$ and let $u\in\mathfrak{H}_{p,d}^{2}\left(Q_{\kappa r}^{+}(y_{1})\right)$ be a solution to $-u_{t}+A^{ij}(t)D_{ij}u-\lambda u=f$ in $Q_{\kappa r}^{+}(y_{1})$. Then we have $\displaystyle\left(\left|Du-\left(Du\right)_{Q_{r}^{+}(y_{1})}\right|^{p}\right)^{1/p}_{Q_{r}^{+}(y_{1})}$ (4.12) $\displaystyle\leq$ $\displaystyle N\kappa^{-1/2}\left(\sqrt{\lambda}\left(|u|^{p}\right)^{1/p}_{Q_{\kappa r}^{+}(y_{1})}+\left(|Du|^{p}\right)^{1/p}_{Q_{\kappa r}^{+}(y_{1})}\right)$ $\displaystyle+N\kappa^{(d+2)/p}\left(|Mf|^{p}\right)^{1/p}_{Q_{\kappa r}^{+}(y_{1})},$ where $N=N(d,d_{1},\delta,p)>0$. ###### Proof. The proof repeats the proof of Lemma 4.3 word for word. The only difference is that we use Theorem 4.5 ($L_{p}$ estimate) in place of Theorem 3.4 ($L_{2}$ estimate). ∎ ## 5\. Proof of Theorem 2.1 We recall that Assumption A$(\rho,\varepsilon)$ for $A^{ij}=A^{ij}(t,x)$, $i,j=1,\ldots,d$ is assumed in Theorem 2.1. ###### Lemma 5.1. Let $T\in(-\infty,\infty]$, $\lambda\geq 0$, $p\in(1,\infty)$, $\theta\in(d-1,d-1+p)$, and $\rho\in(1/2,1)$. Then there exists a positive constant $\varepsilon_{0}=\varepsilon_{0}(d,d_{1},\delta,p,\theta)$ such that, for any $\varepsilon\in(0,\varepsilon_{0}]$, under Assumption A$(\rho,\varepsilon)$ the following holds. Suppose that $u\in\mathfrak{H}_{p,\theta}^{2}((-\infty,T)\times\mathbb{R}^{d}_{+})$ satisfies $-u_{t}+A^{ij}(t,x)D_{ij}u-\lambda u=f$ in $(-\infty,T)\times\mathbb{R}^{d}_{+}$, where $Mf\in\mathbb{L}_{p,\theta}((-\infty,T)\times\mathbb{R}^{d}_{+})$. Then $\lambda\|Mu\|_{p,\theta}+\|u\|_{\mathfrak{H}^{2}_{p,\theta}}\leq N\|Mf\|_{p,\theta}+N\|Du\|_{p,\theta},$ where $\|\cdot\|_{p,\theta}=\|\cdot\|_{\mathbb{L}_{p,\theta}((-\infty,T)\times\mathbb{R}^{d}_{+})},\|\cdot\|_{\mathfrak{H}^{2}_{p,\theta}}=\|\cdot\|_{\mathfrak{H}^{2}_{p,\theta}((-\infty,T)\times\mathbb{R}^{d}_{+})}$, and $N=N(d,d_{1},\delta,p,\theta)$. ###### Proof. To prove the lemma we follow the proof of [2, Lemma 5.1] almost word for word. Actually the regularity condition on $A^{ij}$ in this paper is a bit different from that in [2]; however, we see that the mean oscillations with respect to the spatial variables on $B_{R}(x)$, $R\in(0,1/2]$, of the coefficients $A_{r}^{ij}(t,x):=A^{ij}(t/r^{2},x/r)$ can be made sufficiently small under Assumption A($\rho$, $\varepsilon$) when $x_{1}\in(1,4)$. Then we use the results in [3] in place of the corresponding results for single equations used in the proof of [2, Lemma 5.1].∎ ###### Lemma 5.2. Let $q\in(1,\infty)$, $\theta\in\mathbb{R}$, $\beta\in(1,\infty)$, and $\beta^{\prime}=\frac{\beta}{\beta-1}$. Let $h>0$, $\rho\in(1/2,1)$, $R\in(0,\rho h)$, $\kappa\geq 32$ and let $u\in\mathfrak{H}_{\beta q,\theta}^{2}(\mathbb{R}\times\mathbb{R}^{d}_{+})$ be compactly supported on $Q_{R}(h):=Q_{R}(0,(h,{\bf{0}}))=Q^{+}_{R}(0,(h,{\bf{0}}))$. 
Then under Assumption $A(\rho,\varepsilon)$, for any $(s,y)\in\mathbb{R}\times\overline{\mathbb{R}^{d}_{+}}$ and $r>0$, we have $\displaystyle\left(\left|Du-\left(Du\right)_{Q_{r}^{+}(s,y)}\right|^{q}\right)^{1/q}_{Q_{r}^{+}(s,y)}$ $\displaystyle\leq$ $\displaystyle N_{0}\kappa^{-1/2}\left(\sqrt{\lambda}\left(|u|^{q}\right)^{1/q}_{Q_{\kappa r}^{+}(s,y)}+\left(|Du|^{q}\right)^{1/q}_{Q_{\kappa r}^{+}(s,y)}\right)$ $\displaystyle+N_{1}\kappa^{(d+2)/q}\varepsilon^{1/{(\beta^{\prime}q)}}\left(|MD^{2}u|^{\beta q}\right)^{1/(\beta q)}_{Q_{\kappa r}^{+}(s,y)}+N_{0}\kappa^{(d+2)/q}\left(|Mf|^{q}\right)^{1/q}_{Q_{\kappa r}^{+}(s,y)},$ for any $\varepsilon\in(0,1)$, where $N_{0}=N_{0}(d,d_{1},\delta,q)$, $N_{1}=N_{1}(d,d_{1},\delta,q,\beta,\rho)$, and $f=-u_{t}+A^{ij}(t,x)D_{ij}u-\lambda u$ in $Q_{\kappa r}^{+}(s,y)$. ###### Proof. We first note that since $u$ is supported on $Q_{R}(h)=(-R^{2},0)\times(h-R,h+R)\times B_{R}^{\prime}(0),$ where $h-R>0$, that is, $u$ is supported on a set strictly away from the boundary of $\mathbb{R}^{d}_{+}$, for any $(s,y)\in\mathbb{R}\times\overline{\mathbb{R}^{d}_{+}}$ and $r>0$, we have $u\in\mathfrak{H}_{\beta q,d}^{2}(Q_{\kappa r}^{+}(s,y))\cap\mathfrak{H}_{q,d}^{2}(Q_{\kappa r}^{+}(s,y)).$ (5.1) By scaling, we may assume that $h=1$. Obviously, we may assume that $Q_{r}^{+}(s,y)\cap Q_{R}(1)\neq\emptyset$, which means that $1-R-r<y_{1}<1+R+r.$ (5.2) Depending on the size of $\kappa r$, we consider two cases. Case 1: $\kappa r\leq\rho(1-R-r)$. This with (5.2) shows that $y_{1}>1-R-r\geq\kappa r/\rho.$ In this case, we take $Q=Q_{\kappa r}(s,y)=Q_{\kappa r}^{+}(s,y)$. Case 2: $\kappa r>\rho(1-R-r)$. This along with $\rho<1<\kappa$ shows that $\kappa r>(\rho+\kappa)r/2>\rho r/2+(1-R-r)\rho/2=(1-R)\rho/2>R(1-\rho)/2$ (5.3) because $R<\rho$. In this case, we take $Q=Q_{R}(1)$. By (5.3) we see that $|Q|=N(d)R^{d+2}\leq N(d,\rho)(\kappa r)^{d+2}\leq N|Q_{\kappa r}^{+}(s,y)|.$ (5.4) We denote $Q=Q_{\kappa r}(s,y)=Q^{+}_{\kappa r}(s,y)$ in Case 1 and we set $Q=Q_{R}(1)$ in Case 2. We also set $\bar{A}^{ij}(t)=\frac{1}{|B|}\int_{B}A^{ij}(t,z)\,dz,$ where $B$ is either $B_{\kappa r}(y)=(y_{1}-\kappa r,y_{1}+\kappa r)\times B^{\prime}_{\kappa r}(y^{\prime})$ or $B_{R}(1,{\bf{0}})=(1-R,1+R)\times B^{\prime}_{R}({\bf{0}})$ depending on Case 1, Case 2, respectively. We note that the inequality $(|\bar{A}^{ij}-A^{ij}|)_{Q}\leq\varepsilon$ holds in both cases of $Q$ by the condition $A(\rho,\varepsilon)$. We then have the equation $-u_{t}+\bar{A}^{ij}(t)D_{ij}u-\lambda u=F$ in $Q_{\kappa r}^{+}(s,y)$, where the $d_{1}\times 1$ matrix valued function $F$ is defined by $F(t,x)=\left(\bar{A}^{ij}(t)-A^{ij}(t,x)\right)D_{ij}u(t,x)+f(t,x).$ By (5.1), we have $u\in\mathfrak{H}_{q,d}^{2}(Q_{\kappa r}^{+}(s,y))$ and $MF\in L_{q}(Q_{\kappa r}^{+}(s,y))$. Then by Lemma 4.6 with $q$ in place of $p$ and a translation, we have $\displaystyle\left(\left|Du-\left(Du\right)_{Q_{r}^{+}(s,y)}\right|^{q}\right)^{1/q}_{Q_{r}^{+}(s,y)}$ $\displaystyle\leq$ $\displaystyle N\kappa^{-1/2}\left(\sqrt{\lambda}\left(|u|^{q}\right)^{1/q}_{Q_{\kappa r}^{+}(s,y)}+\left(|Du|^{q}\right)^{1/q}_{Q_{\kappa r}^{+}(s,y)}\right)$ (5.5) $\displaystyle+N\kappa^{(d+2)/q}\left(|MF|^{q}\right)^{1/q}_{Q_{\kappa r}^{+}(s,y)},$ where $N=N(d,d_{1},\delta,q)$. 
By the definition of $F$, the triangle inequality, Hölder’s inequality, the boundedness condition (1.4) and the observation (5.4), we have $\displaystyle\left(|MF|^{q}\right)^{1/q}_{Q_{\kappa r}^{+}(s,y)}$ (5.6) $\displaystyle\leq$ $\displaystyle\left(\left(\sum_{i,j}|\bar{A}^{ij}-A^{ij}|I_{Q_{R}(h)}\right)^{\beta^{\prime}q}\right)^{1/(\beta^{\prime}q)}_{Q^{+}_{\kappa r}(s,y)}\left(|MD^{2}u|^{\beta q}\right)^{1/(\beta q)}_{Q^{+}_{\kappa r}(s,y)}+\left(|Mf|^{q}\right)^{1/q}_{Q_{\kappa r}^{+}(s,y)}$ $\displaystyle\leq$ $\displaystyle\sum_{i,j}\left(|\bar{A}^{ij}-A^{ij}|^{\beta^{\prime}q}I_{Q_{R}(h)}\right)^{1/(\beta^{\prime}q)}_{Q^{+}_{\kappa r}(s,y)}\left(|MD^{2}u|^{\beta q}\right)^{1/(\beta q)}_{Q^{+}_{\kappa r}(s,y)}+\left(|Mf|^{q}\right)^{1/q}_{Q_{\kappa r}^{+}(s,y)}$ $\displaystyle\leq$ $\displaystyle N^{\prime}\sum_{i,j}\left(|\bar{A}^{ij}-A^{ij}|\right)^{1/(\beta^{\prime}q)}_{Q}\left(|MD^{2}u|^{\beta q}\right)^{1/(\beta q)}_{Q^{+}_{\kappa r}(s,y)}+\left(|Mf|^{q}\right)^{1/q}_{Q_{\kappa r}^{+}(s,y)},$ where $N^{\prime}$ depends only on $d_{1},\delta,q,\beta$. Then, by the condition A$(\rho,\varepsilon)$ $\displaystyle\left(|MF|^{q}\right)^{1/q}_{Q_{\kappa r}^{+}(s,y)}\leq N^{\prime\prime}\varepsilon^{1/{(\beta^{\prime}q)}}\left(|MD^{2}u|^{\beta q}\right)^{1/(\beta q)}_{Q_{\kappa r}^{+}(s,y)}+\left(|Mf|^{q}\right)^{1/q}_{Q_{\kappa r}^{+}(s,y)}$ for any $\varepsilon\in(0,1)$, where $N^{\prime\prime}=N^{\prime\prime}(d,d_{1},\delta,q,\beta,\rho)$. This with (5.5) proves the lemma. ∎ ###### Proposition 5.3. Let $T\in(-\infty,\infty]$, $\lambda\geq 0$, $p\in(1,\infty)$, and $\theta\in(d-1,d-1+p)$. Also, let $h>0$, $\rho\in(1/2,1)$, $\varepsilon\in(0,\varepsilon_{0}]$, where $\varepsilon_{0}$ is from Lemma 5.1, and $R\in(0,\rho h)$. Let $u\in\mathfrak{H}^{2}_{p,\theta}((-\infty,T)\times\mathbb{R}^{d}_{+})$ be compactly supported on $Q_{R}(h)$ and $f:=-u_{t}+A^{ij}(t,x)D_{ij}u-\lambda u$. Then under Assumption A$(\rho,\varepsilon)$, we have $\lambda\|Mu\|_{p,\theta}+\sqrt{\lambda}\|MDu\|_{p,\theta}+\|u\|_{\mathfrak{H}^{2}_{p,\theta}}\leq N_{0}\|Mf\|_{p,\theta}+N_{1}\varepsilon^{1/(\beta^{\prime}q)}\|MD^{2}u\|_{p,\theta},$ (5.7) where $\|\cdot\|_{p,\theta}=\|\cdot\|_{\mathbb{L}_{p,\theta}((-\infty,T)\times\mathbb{R}^{d}_{+})}$, $\|\cdot\|_{\mathfrak{H}^{2}_{p,\theta}}=\|\cdot\|_{\mathfrak{H}^{2}_{p,\theta}((-\infty,T)\times\mathbb{R}^{d}_{+})}$, $N_{0}=N_{0}(d,d_{1},\delta,p,\theta)$, $N_{1}=N_{1}(d,d_{1},\delta,p,\theta,\rho)$, and $q,\beta^{\prime}$ are positive numbers determined by $p$ and $\theta$. ###### Proof. For the given $p\in(1,\infty)$, $\theta\in(d-1,d-1+p)$ we fix $q$, $\beta\in(1,\infty)$ satisfying $q\in(1,p),\quad q\beta<p,\quad\theta\in(d-1,d-1+p/{\beta q}).$ Then by following the arguments in the proof of Theorem 4.5, from Lemma 5.2 we obtain that, for any $\kappa\geq 32$ and $(t,x)\in(-\infty,T)\times\mathbb{R}^{d}_{+}$, $\displaystyle(Du)_{\operatorname{dy}}^{\\#}(t,x)$ $\displaystyle\leq$ $\displaystyle N_{0}\kappa^{-1/2}\left(\sqrt{\lambda}\mathcal{M}^{1/q}\left(|u|^{q}\right)(t,x)+\mathcal{M}^{1/q}\left(|Du|^{q}\right)(t,x)\right)$ $\displaystyle+N_{1}\kappa^{(d+2)/q}\varepsilon^{1/{(q\beta^{\prime})}}\mathcal{M}^{1/(q\beta)}\left(|MD^{2}u|^{q\beta}\right)(t,x)$ $\displaystyle+N_{0}\kappa^{(d+2)/q}\mathcal{M}^{1/q}\left(|Mf|^{q}\right)(t,x),$ where $\beta^{\prime}=\frac{\beta}{\beta-1}$. As noted in Remark 4.4, $x_{1}^{\theta-d}\in A_{p/(\beta q)}\subset A_{p/q}$. 
Then we have $\displaystyle\|Du\|_{p,\theta}$ $\displaystyle\leq$ $\displaystyle N_{0}\kappa^{-1/2}\left(\sqrt{\lambda}\|u\|_{p,\theta}+\|Du\|_{p,\theta}\right)$ $\displaystyle+N_{1}\kappa^{(d+\theta+2)/q}\varepsilon^{1/{(q\beta^{\prime})}}\|MD^{2}u\|_{p,\theta}+N_{0}\kappa^{(d+\theta+2)/q}\|Mf\|_{p,\theta}.$ By the relation $\sqrt{\lambda}\leq\lambda x_{1}+x_{1}^{-1}$ for $x_{1}>0$ mentioned earlier, we also have $\sqrt{\lambda}\|u\|_{p,\theta}\leq N(p)\left(\lambda\|Mu\|_{p,\theta}+\|M^{-1}u\|_{p,\theta}\right).$ Then by Lemma 5.1 with $\varepsilon_{0}$ therein and appropriate choice of sufficiently large $\kappa\geq 32$, we obtain (5.7). ∎ Proof of Theorem 2.1 ###### Proof. 1\. Due to the method of continuity and the corresponding theory of the Laplacian case in [2, Theorem 3.5], it suffices to show the a priori estimate (2.2). 2\. Assume $B^{i}(t,x)$ and $C(t,x)$ are zero matrices for all $t,x$. Fix a number $\varepsilon_{2}>0$. By Lemma 5.6 in [6] (see also Lemma 3.3 in [7]), there exist $\rho=\rho(\varepsilon_{2})\in(1/2,1)$ and nonnegative funcions $\eta_{k}\in C_{0}^{\infty}(\mathbb{R}\times\mathbb{R}^{d}_{+})$, $k=1,2,\ldots$, such that $\sum_{k}\eta_{k}^{p}\geq 1,\quad\sum_{k}\eta_{k}\leq N(d),\quad\sum_{k}\left(M|D\eta_{k}|+M^{2}|D^{2}\eta_{k}|+M^{2}|(\eta_{k})_{t}|\right)\leq\varepsilon_{2}^{p}$ (5.8) on $\mathbb{R}^{d+1}_{+}$ and, for each $k$, there exist $r>0$ and a point $(t,x)\in\mathbb{R}\times\mathbb{R}^{d}_{+}$ such that $r\leq\rho x_{1}$ and $\text{supp}\,\eta_{k}\subset Q_{r}(t,x)$. Observe that $u_{k}:=u\eta_{k}$ satisfies $\displaystyle-({u_{k}})_{t}+A^{ij}(t,x)D_{ij}u_{k}-\lambda u_{k}$ $\displaystyle=$ $\displaystyle f\eta_{k}+A^{ij}(t,x)(D_{i}uD_{j}\eta_{k}+D_{j}uD_{i}\eta_{k})+uA^{ij}(t,x)D_{ij}\eta_{k}-u(\eta_{k})_{t}$ in $(-\infty,T)\times\mathbb{R}^{d}_{+}$. Then using a translation argument and Proposition 5.3 with $\varepsilon\in(0,\varepsilon_{0}]$ there, we get $\displaystyle\lambda\|Mu_{k}\|_{p,\theta}+\sqrt{\lambda}\|MDu_{k}\|_{p,\theta}+\|u_{k}\|_{\mathfrak{H}_{p,\theta}^{2}}$ $\displaystyle\leq$ $\displaystyle N_{0}\|Mf\eta_{k}\|_{p,\theta}+N_{0}\|MDuD\eta_{k}\|_{p,\theta}+N_{0}\|MuD^{2}\eta_{k}\|_{p,\theta}$ $\displaystyle\quad+N_{0}\|Mu(\eta_{k})_{t}\|_{p,\theta}+N_{1}\varepsilon^{1/(\beta^{\prime}q)}\|MD^{2}u_{k}\|_{p,\theta},$ where $N_{0}=N_{0}(d,d_{1},\delta,p,\theta)$, $N_{1}=N_{1}(d,d_{1},\delta,p,\theta,\rho)$, and $q,\beta^{\prime}$ are positive numbers determined by $p$ and $\theta$. From this and the properties of $\eta_{k}$ in (5.8), we obtain $\displaystyle\lambda\|Mu\|_{p,\theta}+\sqrt{\lambda}\|MDu\|_{p,\theta}+\|u\|_{\mathfrak{H}_{p,\theta}^{2}}$ $\displaystyle\leq$ $\displaystyle N_{0}\|Mf\|_{p,\theta}+N_{0}\varepsilon_{2}\left(\|Du\|_{p,\theta}+\|M^{-1}u\|_{p,\theta}\right)$ $\displaystyle+N_{1}\varepsilon^{1/(\beta^{\prime}q)}\big{(}\|MD^{2}u\|_{p,\theta}+\varepsilon_{2}\|Du\|_{p,\theta}+\varepsilon_{2}\|M^{-1}u\|_{p,\theta}\big{)}.$ We now first choose $\varepsilon_{2}\in(0,1)$ sufficiently small depending only on $d$, $d_{1}$, $\delta$, $p$, and $\theta$ such that $N_{0}\varepsilon_{2}<1/3$, then choose $\rho=\rho(\varepsilon_{2})\in(1/2,1)$ such that (5.8) is satisfied, and finally $\varepsilon=\varepsilon(d,d_{1},\delta,p,\theta,\rho)\in(0,\varepsilon_{0}]$ so that $N_{1}\varepsilon^{1/(\beta^{\prime}q)}<1/3.$ (5.9) Then the above inequality implies (2.2). 3\. General case. Note that we have $-u_{t}+A^{ij}(t,x)D_{ij}u-\lambda u=f-B^{i}D_{i}u-Cu$ in $(-\infty,T)\times\mathbb{R}^{d}_{+}$. 
Thus, by the result of step 2, if $\varepsilon\in(0,\varepsilon_{0}]$ satisfies (5.9), $\lambda\|Mu\|_{p,\theta}+\sqrt{\lambda}\|MDu\|_{p,\theta}+\|u\|_{\mathfrak{H}_{p,\theta}^{2}}\leq N_{2}\|Mf\|_{p,\theta}+N_{2}\varepsilon\|Du\|_{p,\theta}+N_{2}\varepsilon\|M^{-1}u\|_{p,\theta},$ where $N_{2}=N_{2}(d,d_{1},\delta,p,\theta)$. Thus it is enough to take $\varepsilon$ further smaller such that $N_{2}\varepsilon<1/2$. The theorem is proved. ∎ ## References * [1] Hongjie Dong and Doyoon Kim. On $L_{p}$-estimates for elliptic and parabolic equations with $A_{p}$ weights. Trans. Amer. Math. Soc. (to appear), arXiv:1603.07844. * [2] Hongjie Dong and Doyoon Kim. Elliptic and parabolic equations with measurable coefficients in weighted Sobolev spaces. Adv. Math., 274, 681-735 (2015) * [3] Hongjie Dong and Doyoon Kim. On the $L_{p}$-solvability of higher order parabolic and elliptic systems with BMO coefficients. Arch. Ration. Mech. Anal., 199(3), 889–941 (2011) * [4] Ildoo Kim, Kyeong-Hun Kim, and Kijung Lee. A weighted $L_{p}$-theory for divergence type parabolic PDEs with BMO coefficients on $C^{1}$-domains. J. Math. Anal. Appl., 412(2), 589–612 (2014) * [5] Kyeong-Hun Kim and Kijung Lee. A weighted $L_{p}$-theory for second-order elliptic and parabolic partial differential systems on a half space. Comm. Pure Appl. Anal., 15, 761-794 (2016) * [6] Kyeong-Hun Kim and Kijung Lee. A weighted $L_{p}$-theory for parabolic PDEs with BMO coefficients on $C^{1}$-domains. J. Differential Equations, 254(2), 368–407 (2013) * [7] Kyeong-Hun Kim and N. V. Krylov. On the Sobolev space theory of parabolic and elliptic equations in $C^{1}$ domains. SIAM J. Math. Anal., 36(2), 618–642 (2004) * [8] Vladimir Kozlov and Alexander Nazarov. The Dirichlet problem for non-divergence parabolic equations with discontinuous in time coefficients. Math. Nachr., 282(9), 1220–1241 (2009) * [9] N. V. Krylov. Lectures on elliptic and parabolic equations in Sobolev spaces, volume 96 of Graduate Studies in Mathematics. American Mathematical Society, Providence, RI (2008) * [10] N. V. Krylov. Some properties of traces for stochastic and deterministic parabolic weighted Sobolev spaces. J. Funct. Anal., 183(1), 1–41 (2001) * [11] N. V. Krylov. Weighted Sobolev spaces and Laplace’s equation and the heat equations in a half space. Comm. Partial Differential Equations, 24(9-10), 1611–1653 (1999) * [12] N. V. Krylov. A $W^{n}_{2}$-theory of the Dirichlet problem for SPDEs in general smooth domains. Probab. Theory Related Fields, 98(3), 389–421 (1994) * [13] N. V. Krylov and S. V. Lototsky. A Sobolev space theory of SPDEs with constant coefficients in a half space. SIAM J. Math. Anal., 31(1), 19–33 (1999)
# forgedit: text guided image editing via learning and forgetting Shiwen Zhang Alibaba Group <EMAIL_ADDRESS> &Shuai Xiao Alibaba Group &Weilin Huang Alibaba Group ###### Abstract Text guided image editing on real images given only the image and the target text prompt as inputs, is a very general and challenging problem, which requires the editing model to reason by itself which part of the image should be edited, to preserve the characteristics of original image, and also to perform complicated non-rigid editing. Previous fine-tuning based solutions are time-consuming and vulnerable to overfitting, limiting their editing capabilities. To tackle these issues, we design a novel text guided image editing method, Forgedit. First, we propose a novel fine-tuning framework which learns to reconstruct the given image in less than one minute by vision language joint learning. Then we introduce vector subtraction and vector projection to explore the proper text embedding for editing. We also find a general property of UNet structures in Diffusion Models and inspired by such a finding, we design forgetting strategies to diminish the fatal overfitting issues and significantly boost the editing abilities of Diffusion Models. Our method, Forgedit, implemented with Stable Diffusion, achieves new state-of- the-art results on the challenging text guided image editing benchmark TEdBench, surpassing the previous SOTA method Imagic with Imagen, in terms of both CLIP score and LPIPS score. Codes are available at https://github.com/witcherofresearch/Forgedit. ## 1 introduction Image Editing (Oh et al., 2001) is a fundamental problem in computer vision. In order to edit the image, there should be a guidance condition to inform the model what is the editing target. Language is the most direct and general form of such editing guidance, in which case the editing task is called text guided image editing. Such a text describing the content of the desired edited image is usually called target prompt. In this paper, we are trying to tackle text guided image editing in the toughest setting with only original image and target prompt provided, which are the minimum requirements of input for text guided image editing. Text guided image editing is a very general and universal editing task, which includes both rigid and non-rigid editing, for example, editing the appearance, identity and style, replacing or adding or removing certain parts of the image, editing the pose, action and angles of the objects, editing multiple objects of complex relationships, controlling the numbers and positions of the objects, etc. According to whether fine- tuning process is involved, the solutions to text guided image editing are divided into non-optimization methods and optimization involved methods. There are various works for non-optimization editing, for example, ControlNets(Zhang & Agrawala, 2023), Diffusion based Inpainting Models (Rombach et al., ), SDEdit (Meng et al., 2021), PnP Diffusion (Tumanyan et al., 2023), instruct pix2pix (Brooks et al., 2023), DiffEdit (Couairon et al., 2023) etc. However, we found that none of them, are strong enough to preserve the characteristics and perform sophisticated non-rigid edits at the same time. Thus, it is essential to fine-tune the Diffusion Models with the original image in order to preserve the identity of the objects. Imagic (Kawar et al., 2023) is a three-stage text guided image editing method, which regards the target prompt as a pseudo source prompt to describe the original image. 
In the first stage, Imagic fine-tunes the source prompt text embedding, freezing everything else. In the second stage, Imagic fine-tunes the UNet, freezing other parameters. In the third stage, Imagic interpolates fine-tuned source prompt embedding and target prompt embedding and completes the editing by utilizing the interpolated text embedding to guide text-to-image generation. Imagic equipped by Imagen (Saharia et al., 2022) is the current state-of-the- art text guided image editing algorithm. However, such multi-stage fine-tuning process takes long time and costs great amount of computation resources. Another possible solution is a popular fine-tuning method, DreamBooth (Ruiz et al., 2023), which can be further adapted and improved to perform text guided image editing. Instead of requiring a user provided prompt ’a [V] object’ to refer to the editing object, we utilize BLIP (Li et al., 2022) to generate a caption to describe the original image. Such BLIP+DreamBooth combinations are capable of conducting non-rigid edits and preserving consistent characteristics of original image, demonstrating amazing semantic alignments with the target prompt and high fidelity to original image. However, Both Imagic and BLIP+DreamBooth suffer from overfitting in many cases, restricting the editing capability of the Diffusion Models. In this paper, we are going to tackle the aforementioned issues of these optimization based editing methods. We name our text guided image editing method Forgedit, similar with forget it. There are two stages in our editing method, fine-tuning and editing. Overall, with BLIP (Li et al., 2022) generating source prompt, we design a novel vision language joint optimization fine-tuning framework, which can efficiently learn to reconstruct the original image with source text embedding and UNet in less than one minute on one A100 GPU, much faster than Imagic(Kawar et al., 2023) considering the fact that Imagic+Stable Diffusion (Rombach et al., ) takes 7 minutes to fine-tune on an A100 GPU. Besides, we explore two different methods to merge the source text embedding and target text embedding, vector subtraction which sums source prompt embedding with a weighted subtraction of target prompt embedding and source prompt embedding, and vector projection which decomposes the target prompt embedding along source prompt embedding and orthogonal to the source prompt embedding then sum these two vectors with two coefficients. We found that vector subtraction is better at editing yet vector projection is better at preserving the characteristics of original image during editing. Finally, our Forgedit aims to tackle the overfitting issue existing in previous optimization based editing methods. Due to such overfitting issues, for many cases, text guided image editing methods are only capable of reconstructing the original image, losing their capabilities to edit. Simple solutions may be trying different learning rates and training steps or selecting proper parameters of the Diffusion Models to fine-tune. Yet, there are no silver bullets to find a group of proper hyper-parameters for each editing image thus such hyper-parameter searching for fine-tuning process could be very inefficient and resource consuming. Instead, we propose novel Forgetting Strategies to tackle the overfitting issue during sampling process. Compared with fine-tuning process, sampling process is more computation-efficient. 
Such forgetting strategies are designed based on our observation of a universal property of UNet structures in Diffusion Models. We found that the Encoder of UNets controls the pose, action, angles, spatial positions meanwhile the Decoder of UNets is in charge of appearance and textures. We could replace the learned parameters of the UNets with original parameters according to the purpose of target prompt, which we call forgetting. To sum up, our main contributions are: 1\. We present Forgedit, a novel efficient optimization based image editing framework to tackle general text guided image editing problem, capable of performing both rigid and non-rigid editing. 2\. We introduce vector projection mechanism to merge source text embedding and target text embedding, which is generally better at preserving the characteristics of original image than vector subtraction. 3\. We design novel forgetting strategies based on our observation of UNets’ properties in Diffusion Models, tackling the common and fatal overfitting issues in optimization involved text guided image editing methods thus significantly improve the editing capability of Diffusion Models. Our Forgedit implemented with even the outdated Stable Diffusion 1.4 achieves new state-of-the-art quantitative results on the challenging benchmark TEdBench (Kawar et al., 2023), surpassing previous SOTA Imagic equipped with Imagen in terms of both CLIP score (Hessel et al., 2021) and LPIPS score (Zhang et al., 2018). Our Forgedit is a very general text guided image editing method, which can also significantly boost the performance of other fine- tuning based text guided image editing method, which we will show in the appendix. ## 2 related work Text to Image Diffusion Models Diffusion Models have dominated text to image generation. DDPM(Ho et al., 2020) improves Diffusion process proposed by Sohl- Dickstein et al. (2015) on generating images. DDIM (Song et al., 2021) accelerates the sampling procedure of Diffusion Models by making reverse process deterministic and using sub-sequence of time-steps. Dalle 2 (Ramesh et al., 2022) trains a diffusion prior to convert a text caption to CLIP (Radford et al., 2021) image embedding and then employs a Diffusion Decoder to transfer the generated CLIP image embedding to an image. Imagen (Saharia et al., 2022) is a Cascaded Diffusion Model (Ho et al., 2021), whose UNet is composed of three Diffusion Models generating images with increasing resolutions. Also, Imagen employs the powerful T5 text encoder (Raffel et al., 2020), which turns out to be vital for complex semantic understanding and generating sophisticated scenarios. Stable Diffusion (Rombach et al., ) utilizes Variational AutoEncoders (Kingma & Welling, 2014) to compress the training image to a compact latent space so that the UNets could be trained with low resolution latents in order to save computational resources. Image Editing with Diffusion Models Empowered by recent progress in text-to- image Diffusion Models, image editing methods have witnessed remarkable improvements. There are various works for non-optimization editing. ControlNets (Zhang & Agrawala, 2023) are trained on extra datasets to learn generating images with different conditions. However, these conditions only reflect partial attributes of the original image thus ControlNets are incapable of preserving the identity of the object being edited and also struggle to conduct non-rigid edits. 
Inpainting Models based on Diffusion Models(Rombach et al., ) require masks to indicate the editing region, for whom the target mask can be obtained via semantic segmentation models by using a text prompt to refer to. Such text guided Inpainting Models are good at replacing or removing objects, better than other text guided image editing models in terms of preserving non-edited details of original image. However, there are several disadvantages of text guided inpainting models. First, these models cannot preserve the identity of the object being edited. Second, due to the restricts of the region of masks, inpainting models cannot conduct non- rigid editing, for example, making a bird perching on the branch spread its wings. Third, extra masks or texts to refer to the target objects in original image has to be provided, which is not possible in our case since there are only target prompt and original image given in our settings. SDEdit (Meng et al., 2021) utilizes DDIM Inversion to add noises to the original image and then denoises the image with target prompt. DiffEdit (Couairon et al., 2023) obtains the target object mask with Diffusion Model itself by a user provided source prompt and conduct SDEdit in the mask region. PnP Diffusion (Tumanyan et al., 2023) injects intermediate features of original image to the generation of target prompt. Instruct pix2pix (Brooks et al., 2023) pretrains the Diffusion Models on external datasets with triplets of original image, edited image and target prompt. All these non-optimization methods suffer from the fact that they are either incapable of preserving the characteristics or unable to conduct complex non-rigid editing. Prompt to Prompt (Hertz et al., 2023) requires that the source prompt and target prompt must be provided in a precisely matching form so that the algorithm could accurately find the editing target, which is too ideal thus impossible in our setting. Imagic (Kawar et al., 2023) is a three-stage optimization based editing method, which is the current state-of-the-art text guided image editing algorithm, which could be regarded as a combination of textual inversion (Gal et al., 2023)in the first stage and DreamBooth (Ruiz et al., 2023)in the second stage. However, the fine-tuning stages of Imagic are very slow and suffer from overfitting. ## 3 FORGEDIT ### 3.1 Problem settings Given a target prompt and an image, text guided image editing edits the image according to the target prompt, which not only requires the editing being conducted well, but also needs to preserve everything else unchanged. In this paper we try to tackle the text guided image editing problem with the condition that only target prompt and original image are provided, which means that the model should reason by itself which part of the original image is inconsistent with the target prompt and conduct the edit. We aim to design a general editing method, which is capable of conducting different kinds of edits including both rigid and non-rigid editing. ### 3.2 Preliminaries Diffusion models (Ho et al., 2020; Sohl-Dickstein et al., 2015) start from the given image $x_{0}$, and then progressively add Gaussian Noise $\epsilon_{t}\sim\mathcal{N}(0,1)$ in each timestep $t$ to get $x_{t}$. In such a diffusion process, $x_{t}$ can be directly calculated for each timestep $t\in\\{0,...,T\\}$, $x_{t}=\sqrt{\alpha_{t}}x_{0}+\sqrt{1-\alpha_{t}}\epsilon_{t}$ (1) with $\alpha_{t}$ being the diffusion schedule parameters with $0=\alpha_{T}<\alpha_{T-1}...<\alpha_{1}<\alpha_{0}=1$ . 
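As a concrete illustration of (1), the noisy sample $x_{t}$ can be drawn directly from $x_{0}$ without simulating the intermediate steps. The following is a minimal sketch, not taken from any released implementation; it assumes PyTorch and a precomputed schedule tensor `alphas` holding the cumulative coefficients $\alpha_{t}$ of (1), with `alphas[0] = 1` and `alphas[T] = 0`.

```python
import torch

# Minimal sketch of the forward process in Eq. (1): draw x_t directly from x_0.
# Assumptions: PyTorch; `alphas` is a 1-D tensor of the schedule coefficients
# alpha_t appearing in Eq. (1), with alphas[0] = 1 and alphas[T] = 0.
def diffuse(x0: torch.Tensor, t: int, alphas: torch.Tensor) -> torch.Tensor:
    eps = torch.randn_like(x0)                        # epsilon_t ~ N(0, I)
    return alphas[t].sqrt() * x0 + (1.0 - alphas[t]).sqrt() * eps
```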
Given $x_{t}$ and text embedding $e$, the time-conditional UNets $\epsilon_{\theta}(x_{t},t,e)$ in diffusion models predict the random noise $\epsilon_{t}$ added to $x_{t-1}$. With DDIM (Song et al., 2021), the reverse process is $x_{t-1}=\frac{\sqrt{\alpha_{t-1}}}{\sqrt{\alpha_{t}}}(x_{t}-\sqrt{1-\alpha_{t}}\epsilon_{\theta}(x_{t},t,e))+\sqrt{1-\alpha_{t-1}}\epsilon_{\theta}(x_{t},t,e)$ (2) With Latent Diffusion Models (Rombach et al., ), the $x_{0}$ is replaced by the latent $z_{0}$ from VAE Encoder $\varepsilon(x_{0})$. The training loss is $L=\mathbb{E}_{z_{t},\epsilon_{t},t,e}||\epsilon_{t}-\epsilon_{\theta}(z_{t},t,e)||_{2}^{2}$ (3) Figure 1: We show the overall framework of Forgedit. We use BLIP to describe the original image and get the source text embedding $e_{src}$ with CLIP text encoder of Stable Diffusion. The source text embedding $e_{src}$ is jointly optimized with UNet with different learning rate for text embedding and UNet, with UNet’s deep layers frozen. During editing process, we merge source text embedding $e_{src}$ and target text embedding $e_{tgt}$ with vector subtraction or vector projection to get final text embedding $e$. With forgetting strategies on UNet parameters, we utilize DDIM sampling to get the final edited image. ### 3.3 Joint fine-tuning In order to tackle such challenging text guided image editing problems, we have to fine-tune the model to remember the concepts and reconstruct the image. It is worth noting that although DDIM inversion (Song et al., 2021) could reconstruct the original image, the given text prompt has to be an empty string. If the given text prompt is not empty, DDIM inversion is not able to reconstruct original image precisely and often leads to significant appearance shift (Hertz et al., 2023; Meng et al., 2021). Thus it is necessary to optimize the network for high quality reconstruction and semantic understanding. Shown in Figure 1, we introduce the overall design of our vision language joint optimization framework. source prompt We first use BLIP (Li et al., 2022) to generate a caption describing the image, which we call source prompt. The source prompt is then fed to text encoder of Stable Diffusion (Rombach et al., ) to get the text embedding $e_{src}$. Previous three-stage editing method Imagic (Kawar et al., 2023) regards target prompt text embedding as $e_{src}$. We found that it is essential to use the BLIP caption as source prompt instead of using the target prompt as a pseudo source prompt like Imagic. Otherwise such fine-tuning methods easily lead to overfitting issues. We also found that using the BLIP caption as source prompt leads to better semantic alignment with the given image, thus leads to better editing results. vision language joint learning We choose to optimize the encoder 0, 1, 2 and decoder 1, 2, 3 in the UNet structure. Similar with Imagic, we regard source text embedding as parameters of the network. Yet different with Imagic, we found it vital to jointly optimize the source text embedding and UNet parameters for faster learning and better reconstruction quality. Although trained together, we use a learning rate of $10^{-3}$ for source text embedding and $6\times 10^{-5}$ for UNet, with Adam Optimizer (Kingma & Ba, 2015). For faster training, since we only have one training image, we repeat the tensors on batch dimension for batch-wise optimization with batch size 10. We use mean square error loss and empirically we also make sure that the final loss is less than 0.03 for stable reconstruction quality. 
With batch size set to 10, the models are fine-tuned for 35 to 40 steps. Once the loss is less than 0.03 after 35 steps, the training stops. The model is fine-tuned for 40 steps at most. This fine-tuning process takes less than 1 minute on one A100 GPU. The training loss is $L=\mathbb{E}_{z_{t},\epsilon_{t},t,e_{src}}||\epsilon_{t}-\epsilon_{\theta,e_{src}}(z_{t},t,e_{src})||_{2}^{2}$ (4) whose difference with loss 3 is that $e_{src}$ is also considered parameters to optimize. Now, with joint fine-tuning, we are capable to reconstruct the original image given the optimized source text embedding $e_{src}$. Figure 2: We demonstrate vector subtraction and vector projection to merge $e_{src}$ and $e_{tgt}$. Vector subtraction could leads to inconsistent appearance of the object being edited since it cannot directly control the importance of $e_{src}$. The vector projection decompose the $e_{tgt}$ into $re_{src}$ along $e_{src}$ and $e_{edit}$ orthogonal to $e_{src}$. We can directly control the scale of $e_{src}$ and $e_{edit}$ by summation. ### 3.4 Reasoning and Editing We first input the target prompt to CLIP (Radford et al., 2021) text encoder of Stable Diffusion (Rombach et al., ) to get the target text embedding $e_{tgt}$. With our learned source text embedding $e_{src}$, we propose two methods to merge $e_{src}$ and $e_{tgt}$ so that the merged text embedding edits the original image according to the target prompt and preserves the unrelated details of original image. Given $e_{src}\in\mathbb{R}^{B\times N\times C}$ and $e_{tgt}\in\mathbb{R}^{B\times N\times C}$ , we conduct all vector operations on the $C$ dimension to get the final text embedding $e$. Vector Subtraction We use the same interpolation method as Imagic (Kawar et al., 2023), $e=\gamma e_{tgt}+(1-\gamma)e_{src}=e_{src}+\gamma(e_{tgt}-e_{src})$ (5) Shown in Figure 2, the final text embedding $e$ is obtained by travelling along vector subtraction $e_{tgt}-e_{src}$ . In our experiments, we found that in most cases, $\gamma$ goes beyond 1 when the editing is successful. This leads to the problem that the distance of final embedding $e$ and source embedding $e_{src}$ may be so far that the appearance of the edited object could change vastly. Vector Projection We propose to use vector projection to better preserve the appearance of the original image. Shown in the Figure 2, we decompose the target prompt text embedding $e_{tgt}$ into a vector along $e_{src}$ and a vector orthogonal to $e_{src}$. We call the orthogonal vector $e_{edit}$. We first calculate the ratio $r$ of the projected vector on $e_{src}$ direction. $r=\frac{e_{src}e_{tgt}}{||e_{src}||^{2}}$ (6) Thus, we could get the $e_{edit}$ by $e_{edit}=e_{tgt}-re_{src}$ (7) In order to better preserve the original image details, we sum $e_{src}$ and $e_{edit}$ with coefficient $\alpha$ and $\beta$, $e=\alpha e_{src}+\beta e_{edit}$ (8) Editing We use DDIM sampling (Song et al., 2021) with classifier free guidance (Ho, 2022) to conduct the edit. The guidance scale is 7.5. For vector subtraction, we iterate over a range of $\gamma\in[0.8,1.6]$. For vector projection, we choose $\alpha$ from two values $\\{0.8,1.1\\}$, $\beta$ from a range of [1.0,1.5] Figure 3: The encoder parameters of UNets learn features related to pose, angle, structure and position. The decoder parameters are related to appearance and texture. Thus we could design forgetting strategies according to the editing intention. 
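In code, the two merging rules of Section 3.4 amount to a few lines each. The sketch below is an illustration rather than the released implementation; it assumes PyTorch and embeddings $e_{src}$, $e_{tgt}$ of shape $(B,N,C)$, with all reductions taken over the channel dimension $C$ as in the text.

```python
import torch

# Sketch of the embedding merges in Eqs. (5)-(8); assumes PyTorch tensors
# e_src, e_tgt of shape (B, N, C), operating on the last (channel) dimension.
def merge_subtraction(e_src, e_tgt, gamma):
    # Eq. (5): move from e_src along the direction e_tgt - e_src.
    return e_src + gamma * (e_tgt - e_src)

def merge_projection(e_src, e_tgt, alpha, beta, eps=1e-8):
    # Eq. (6): ratio of e_tgt projected onto e_src.
    r = (e_src * e_tgt).sum(-1, keepdim=True) / (e_src.pow(2).sum(-1, keepdim=True) + eps)
    e_edit = e_tgt - r * e_src            # Eq. (7): component orthogonal to e_src
    return alpha * e_src + beta * e_edit  # Eq. (8)
```

For editing, the merged embedding $e$ then replaces the prompt embedding in classifier-free guided DDIM sampling, with $\gamma$ or $(\alpha,\beta)$ swept over the ranges given above.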
### 3.5 Forgetting Strategies In some cases the network still overfits since there is only one training image. The fine-tuning process is computational expensive compared to sampling process, thus we design forgetting strategies during sampling process to tackle the overfitting problem. The network is only fine-tuned once, and can be converted to multiple different networks during sampling process by merging certain fine-tuned parameters $w_{learned}$ and the corresponding original UNet parameters before fine-tuning $w_{orig}$ with coefficient $\sigma$. In practice, we found that $\sigma=0$ works in general, which means that we simply replace the fine-tuned parameters with original parameters so that the network completely forgets these learned parameters. $w=\sigma w_{learned}+(1-\sigma)w_{orig}$ (9) Yet it remains a problem which parameters should be forgotten. Shown in Figure 3, we found interesting properties of UNets in Diffusion Models. The encoder of UNets learns the pose, angle and overall layout of the image. The decoder learns the appearance and textures instead. If the target prompt tends to edit the pose and layout, we choose to forget parameters of encoder. If the target prompt aims to edit the appearance, the parameters of decoder should be forgotten. Currently we only apply the forgetting strategies when text embeddings $e$ is obtained by vector subtraction in previous section. For editing with forgetting strategies, we iterate over a range of $\gamma\in[0.0,1.4]$. For different settings of forgetting strategies, we explore their effects in the ablation study, shown in Figure 5 and Figure 6. ### 3.6 Limitations There are at least three limitations of our Forgedit. First of all, although our fine-tuning framework has been optimized and is much faster than Imagic, the fine-tuning process still takes tens of seconds or even more depending on the GPU devices. We will explore in the future whether it is possible to preserve high fidelity characteristics of the original image without fine- tuning. Second, the effect of Forgedit is influenced by randomness. The fine- tuning process inevitably introduces randomness thus for some particular cases, we cannot guarantee to perfectly reconstruct the details of original image thus we have to run the fine-tuning stage several times for these challenging cases. The sampling procedure is also related to the initial random seed of reverse process, thus for some extremely challenging cases we have to sample tens of images or even hundreds, though rarely the case, before we could get a proper edited one. Third, the editing capability of Forgedit is restricted by the utilized Diffusion Model. If the target prompt cannot even be generated by the Diffusion Model itself, it is almost impossible to accomplish the edit according to the target prompt. For example, the prompt ’a sitting flamingo’ cannot be generated by Stable Diffusion at all, thus Forgedit cannot successfully edit it either. Such an issue could possibly be solved by switching to better Diffusion Models. ## 4 Experiments ### 4.1 Benchmark TEdBench (Kawar et al., 2023), is one of the most difficult public available text guided image editing benchmarks. There are 100 editings in the benchmark, with one target prompt and one image for each edit. 
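For reference, the forgetting rule (9) of Section 3.5 can be realized as a simple merge of two checkpoints. The sketch below is only an illustration and not the released code; the parameter-name filter (e.g. the `"down_blocks"` substring for the UNet encoder) is an assumption in the style of common Stable Diffusion implementations.

```python
# Sketch of Eq. (9): w = sigma * w_learned + (1 - sigma) * w_orig, applied only
# to the parameters selected for forgetting (sigma = 0 simply restores them).
def forget(learned_state, original_state, forget_substring, sigma=0.0):
    merged = {}
    for name, w_learned in learned_state.items():
        w_orig = original_state[name]
        if forget_substring in name:            # e.g. "down_blocks" (encoder)
            merged[name] = sigma * w_learned + (1.0 - sigma) * w_orig
        else:
            merged[name] = w_learned            # keep the fine-tuned weight
    return merged

# Pose/layout edits: forget the encoder and keep the learned decoder (and vice
# versa for appearance edits), then run DDIM sampling with the merged weights.
```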
These target prompts are very general and various, including but not limited to changing the appearance of objects, replacing certain parts of the image, changing the position, action and number of the object, editing multiple objects with complex interactions. In particular, the non-rigid edits turn out to be very tough for many SOTA text-guided image editing methods. In terms of quantitative evaluation, we utilize CLIP Score (Hessel et al., 2021) to measure semantic alignments with target prompt and LPIPS score (Zhang et al., 2018) to indicate fidelity to the original image. ### 4.2 Ablation Study vector subtraction vs vector projection We compare two different reasoning method to merge $e_{src}$ and $e_{tgt}$ to get the final text embedding $e$, shown in Figure 4 . For the dog and the cat example, vector projection preserves the appearance of the dog and cat better than vector subtraction. However, for the glass of milk and cookie example, vector subtraction is better than vector projection. In this example, vector projection struggles to change the milk to juice and also introduces wave-like blurs in the image. We observe such phenomenons in many other cases for vector projection, which demonstrates that it is more suitable for edits where the identity of object should be kept instead of changed. These two methods are complementary to each other on many cases, with vector projection better at preserving the identity and vector subtraction better at editing. Figure 4: comparison of vector subtraction and vector projection to reason the final text embedding $e$. In fact, these two methods are on par in many cases, yet complementary on the others. forgetting strategies Although the forgetting strategies strengthen the editing abililty of the model, forgetting parameters inevitably leads to minor reconstruction quality. For example, shown in Figure 3, for encoder or decoder, we remain all parameters related to self attention and cross attention, forgetting the rest, which are called ’encoderattn’ in Figure 6 and ’decoderattn’ in Figure 5. We found that there are certain unwanted changes unrelated to the target prompt, which are the side effects of forgetting strategies. For each column, the background of the image changes a little bit, the white bird disappears, the knife is gone, the branch no longer exists, the appearance of the tree changes. Figure 5: We explore different forgetting strategies for decoder. All learned encoder parameters are preserved. In the second to fourth columns, we preserve decoder cross attention parameters k and v, decoder self attention and cross attention, decoder self attention and cross attention and the entire decoder2 block, forgetting all the other parameters of decoder. We also experiment with different extent of forgetting strategies. In Figure 5 , we explore different decoder forgetting strategies. With all fine-tuned parameters of encoder preserved and all decoder parameters forgotten, we gradually add fine-tuned parameters back to decoder. ’decoderattn2kv’ means that we use fine-tuned parameters of decoder cross attention key and value matrices. Since all the fine-tuned encoder parameters are preserved, the overall structure of the image and the pose of the objects being edited are almost identical with the original image, yet the appearance and textures are changed. ’decoderattn’ indicates that we utilize all learned self attentions and cross attentions parameters in decoder. This is our default setting since it is rather general. 
More appearance and textures features of the original image are preserved in such a setting. ’decoderattn+decoder2’ refers to the forgetting strategy that we preserve all learned self attentions and cross attentions of decoder plus the decoder2 block. The position of decoder2 block is shown in Figure 1. More details are preserved for some edits, yet for the others the editing ability of our method is lost due to overfitting. In the last column of figure, we show the editing results by using all fine-tuned parameters. Figure 6: We explore different forgetting strategies for encoder. All learned decoder parameters are preserved. For the second to fourth column each, we preserve none of the encoder parameters, encoder self attention and cross attention, encoder self attention and cross attention and the entire encoder1 block, forgetting all the other parameters of encoder. We also explore different forgetting strategies for encoder in Figure 6. ’noencoder’ indicates that we forget all learned parameters of encoder and only use learned decoder parameters for sampling. ’encoderattn’ refers to the strategy that we preserve all the parameters of self attention and cross attention. With ’encoderattn+encoder1’ strategy, we preserve encoder self attention, cross attention and the encoder1 block. All the other parameters of encoder are forgotten. ### 4.3 Comparison with State-of-the-art We compare our Forgedit with multiple SOTA text guided image editing methods in Figure 7. For non-optimization text guided image editing methods, we choose to compare with the most representative method, SDEdit (Meng et al., 2021). We found that SDEdit struggles to preserve the identity of the edited objects in most cases. We also compare with a kind of very strong optimization involved method, which we call ’BLIP+DreamBooth’. In order for such methods to be applied in text guided image editing, we utilize BLIP (Li et al., 2022) to generate captions describing the original image like our Forgedit. With the caption, we train the UNet to reconstruct the original image and edit the image by directly using the target prompt to guide the fine-tuned UNet for image generation, shown in the 3rd column of Figure 7. We also experiment with an improved version by training text encoder and UNet at the same time, shown in the 4th column. Such simple fine-tuning of UNet and text encoder are actually very powerful text guided image editing methods, which are also called ’DreamBooth’ (Ruiz et al., 2023) in some literature. The difference is that our BLIP+DreamBooth uses BLIP generated caption yet original DreamBooth requires user provided caption in a special form of ’a [V] object’ referring the object to be reconstruct. Following the settings of DreamBooth (Ruiz et al., 2023), we use a learning rate of $5\times 10^{-6}$ for both text encoder and UNet, with a batch size of 4. We train BLIP+DreamBooth with one image for 100 steps, which takes more than one minute on a A100 GPU. Unlike original DreamBooth which needs 3 to 4 images to learn the new object concept, we found that with BLIP+DreamBooth one image is enough to reconstruct the majority features of the original image. It is obvious that BLIP+DreamBooth methods are much better at preserving the identities and backgrounds than SDEdit. However, BLIP+DreamBooth, when only UNet is fine-tuned, suffers from underfitting since it cannot preserve the identity of the objects in many cases. 
BLIP+DreamBooth suffers from overfitting in many cases when text encoder and UNet are jointly fine-tuned. In fact, we found that our Forgedit can also be simply adapted to help tackling such overfitting issues of BLIP+DreamBooth, shown in the appendix, which again demonstrates the strong generalization of Forgedit framework on various optimization based editing methods. We also compare with the SOTA two-stage text guided image editing method, Imagic (Kawar et al., 2023). We use Stable Diffusion (Rombach et al., ) and Imagen Saharia et al. (2022) as the diffusion models for Imagic respectively, shown in the 5th and 6th columns of Figure 7. Imagic with Stable Diffusion suffers greatly from overfitting, leading to few successful edits. Imagic with Imagen is the current SOTA on TEdBench, demonstrating very strong editing abilities and preserves the original identities well in most cases. Our method, Forgedit, shown in the last column, though with the inferior Stable Diffusion as diffusion models for editing, is generally on par with Imagic with Imagen in most cases, sometimes better. Also, our Forgedit with Stable Diffusion surpass the current SOTA Imagic+Imagen on TEdBench benchmark in terms of both CLIP Score and LPIPS Score, shown in Table 1. Figure 7: comparison with SOTA text guided image editing methods. We compare with the non-optimization method, SDEdit and optimization methods, BLIP+DreamBooth and Imagic, demonstrating the strong editing ability and stable identity preservation. Editing method | CLIP Score $\uparrow$ | LPIPS Score $\downarrow$ ---|---|--- Imagic+Imagen (Kawar et al., 2023) | 0.748 | 0.537 Forgedit+SD (ours) | 0.771 | 0.534 Table 1: Our Forgedit with Stable Diffusion is the new state-of-the-art text guided image editing method on the challenging benchmark TEdBench, surpassing previous SOTA Imagic+Imagen. ### 4.4 Conclusion We present our novel Forgedit framework to tackle the challenging text guided image editing problem. Besides the optimized vision language joint learning for fast reconstruction of the original image, we also introduce the vector projection mechanism to strengthen Forgedit’s capability of identity preservation during editing. Finally, we propose the forgetting strategy to efficiently solve the overfitting issue of optimization based model during sampling. Even with the outdated Stable Diffusion version 1.4, our Forgedit achieves new state-of-the-art CLIP score and LPIPS score on the most challenging editing benchmark TEdBench. Forgedit can also be adapted to other fine-tuning based text guided image editing methods, for example, BLIP+DreamBooth. We demonstrate the generalization of Forgedit in the appendix. Theoretically, our Forgedit framework should also be compatible with other structures of Diffusion Models beyond Stable Diffusion thus has the potential to obtain better editing results, which we will explore in the future. ## References * Brooks et al. (2023) Tim Brooks, Aleksander Holynski, and Alexei A. Efros. Instructpix2pix: Learning to follow image editing instructions. In _CVPR_ , 2023. * Couairon et al. (2023) Guillaume Couairon, Jakob Verbeek, Holger Schwenk, and Matthieu Cord. Diffedit: Diffusion-based semantic image editing with mask guidance. In _ICLR_. OpenReview.net, 2023. * Gal et al. (2023) Rinon Gal, Yuval Alaluf, Yuval Atzmon, Or Patashnik, Amit Haim Bermano, Gal Chechik, and Daniel Cohen-Or. An image is worth one word: Personalizing text-to-image generation using textual inversion. In _ICLR_. OpenReview.net, 2023. * Hertz et al. 
(2023) Amir Hertz, Ron Mokady, Jay Tenenbaum, Kfir Aberman, Yael Pritch, and Daniel Cohen-Or. Prompt-to-prompt image editing with cross-attention control. In _ICLR_. OpenReview.net, 2023. * Hessel et al. (2021) Jack Hessel, Ari Holtzman, Maxwell Forbes, Ronan Le Bras, and Yejin Choi. Clipscore: A reference-free evaluation metric for image captioning. In _EMNLP (1)_ , pp. 7514–7528. Association for Computational Linguistics, 2021. * Ho (2022) Jonathan Ho. Classifier-free diffusion guidance. _ArXiv_ , abs/2207.12598, 2022. * Ho et al. (2020) Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. In _NeurIPS_ , 2020. * Ho et al. (2021) Jonathan Ho, Chitwan Saharia, William Chan, David J. Fleet, Mohammad Norouzi, and Tim Salimans. Cascaded diffusion models for high fidelity image generation. _J. Mach. Learn. Res._ , 23:47:1–47:33, 2021. * Hu et al. (2022) Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. In _ICLR_. OpenReview.net, 2022. * Kawar et al. (2023) Bahjat Kawar, Shiran Zada, Oran Lang, Omer Tov, Huiwen Chang, Tali Dekel, Inbar Mosseri, and Michal Irani. Imagic: Text-based real image editing with diffusion models. In _CVPR_ , pp. 6007–6017. IEEE, 2023. * Kingma & Ba (2015) Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In _ICLR (Poster)_ , 2015. * Kingma & Welling (2014) Diederik P. Kingma and Max Welling. Auto-encoding variational bayes. In _ICLR_ , 2014. * Li et al. (2022) Junnan Li, Dongxu Li, Caiming Xiong, and Steven C. H. Hoi. Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation. In _International Conference on Machine Learning_ , 2022. * Meng et al. (2021) Chenlin Meng, Yutong He, Yang Song, Jiaming Song, Jiajun Wu, Jun-Yan Zhu, and Stefano Ermon. Sdedit: Guided image synthesis and editing with stochastic differential equations. In _International Conference on Learning Representations_ , 2021. * Oh et al. (2001) Byong Mok Oh, Max Chen, Julie Dorsey, and Frédo Durand. Image-based modeling and photo editing. In _SIGGRAPH_ , pp. 433–442. ACM, 2001. * Radford et al. (2021) Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision. In _International Conference on Machine Learning_ , 2021. * Raffel et al. (2020) Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. _J. Mach. Learn. Res._ , 21:140:1–140:67, 2020. * Ramesh et al. (2022) Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with clip latents, 2022\. * (19) Robin Rombach, A. Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. _2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_. * Ruiz et al. (2023) Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Yael Pritch, Michael Rubinstein, and Kfir Aberman. Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation. In _CVPR_ , pp. 22500–22510. IEEE, 2023. * Saharia et al. 
(2022) Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L. Denton, Seyed Kamyar Seyed Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, Jonathan Ho, David J. Fleet, and Mohammad Norouzi. Photorealistic text-to-image diffusion models with deep language understanding. In _NeurIPS_ , 2022. * Sohl-Dickstein et al. (2015) Jascha Sohl-Dickstein, Eric A. Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In _ICML_ , volume 37 of _JMLR Workshop and Conference Proceedings_ , pp. 2256–2265. JMLR.org, 2015. * Song et al. (2021) Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. In _ICLR_. OpenReview.net, 2021. * Tumanyan et al. (2023) Narek Tumanyan, Michal Geyer, Shai Bagon, and Tali Dekel. Plug-and-play diffusion features for text-driven image-to-image translation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_ , pp. 1921–1930, June 2023. * Zhang & Agrawala (2023) Lvmin Zhang and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models, 2023. * Zhang et al. (2018) Richard Zhang, Phillip Isola, Alexei A. Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. _2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pp. 586–595, 2018. ## Appendix A Appendix ### A.1 DreamBooth+Forgedit Forgedit is a very general framework, whose main features come from three aspects: joint vision and language learning with original image, obtaining final text embedding by vector subtraction and vector projection, using forgetting strategies to tackle the overfitting issues. Here we show how to extend Forgedit to BLIP+DreamBooth (Li et al., 2022; Ruiz et al., 2023). It is also possible to adapt our Forgedit to other Diffusion Models (Ho et al., 2021; Saharia et al., 2022) or fine-tuning methods (Hu et al., 2022), which we will explore in the future. vision and language joint learning This is natural for the method which we call BLIP+DreamBooth Text Encoder and UNet, since the Text Encoder and UNet are jointly trained already. vector subtraction and vector projection Our Forgedit presented in the main paper regards the source text embedding as a part of the network to optimize. For BLIP+DreamBooth, since we have already fine-tuned the text encoder, we switch to use the text encoder to get source text embedding directly from source prompt. Now we can use vector subtraction and vector projection in the same way. forgetting strategy We could directly apply the forgetting strategies to BLIP+DreamBooth. However, since the information are injected into both text encoder and UNet, our forgetting strategies on UNet may still fail in some cases. We will explore the forgetting strategies in text encoder in the future. We show some cases in Figure 8, and compare the editing effects of BLIP+DreamBooth+Forgedit with previous state-of-the-art text guided image editing methods and our Forgedit presented in the main paper. Comparing the 4th column and the last column, we could find that with Forgedit framework, the editing ability of DreamBooth has been tremendously improved. Please note that DreamBooth+Forgedit is not a simple combination of our Forgedit presented in the main paper and BLIP+DreamBooth, since the fine-tuning process of our Forgedit is different with DreamBooth+Forgedit. 
This means that DreamBooth+Forgedit is not always better than Forgedit, as shown in the last two columns of Figure 8. Figure 8: The editing effects of DreamBooth+Forgedit and comparison with SOTA text guided image editing methods. Note that DreamBooth+Forgedit is not always better than Forgedit, as shown in the last two columns.
# Bounds for theta sums in higher rank II111Research supported by EPSRC grant EP/S024948/1 Jens Marklof and Matthew Welsh (11 May 2023) ###### Abstract In the first paper of this series we established new upper bounds for multi- variable exponential sums associated with a quadratic form. The present study shows that if one adds a linear term in the exponent, the estimates can be further improved for almost all parameter values. Our results extend the bound for one-variable theta sums obtained by Fedotov and Klopp in 2012. ## 1 Introduction For $M>0$, a real $n\times n$ symmetric matrix $X$, and $\bm{x},\bm{y}\in\mathbb{R}^{n}$, we define a theta sum as the exponential sum $\theta_{f}(M,X,\bm{x},\bm{y})=\sum_{\bm{m}\in\mathbb{Z}^{n}}f\left(M^{-1}(\bm{m}+\bm{x})\right)\mathrm{e}\left(\tfrac{1}{2}\bm{m}X\prescript{t}{}{\bm{m}}+\bm{m}\prescript{t}{}{\bm{y}}\right),$ (1.1) where $f:\mathbb{R}^{n}\to\mathbb{C}$ is a rapidly decaying cut-off and $\mathrm{e}(z)=\mathrm{e}^{2\pi\mathrm{i}z}$ for any complex $z$. If $f=\chi_{\mathcal{B}}$ is the characteristic function of a bounded set $\mathcal{B}\subset\mathbb{R}^{n}$ we have the finite sum $\theta_{f}(M,X,\bm{x},\bm{y})=\sum_{\bm{m}\in\mathbb{Z}^{n}\cap(M\mathcal{B}-\bm{x})}\mathrm{e}(\tfrac{1}{2}\bm{m}X\prescript{t}{}{\bm{m}}+\bm{m}\prescript{t}{}{\bm{y}}).$ (1.2) In this case we will also use the notation $\theta_{f}=\theta_{\mathcal{B}}$. In this paper we will focus on the case when $\mathcal{B}$ is the open rectangular box $(0,b_{1})\times\cdots\times(0,b_{n})\subset\mathbb{R}^{n}$. The theorems below remain valid if $f=\chi_{\mathcal{B}}$ is replaced by any function $f$ in the Schwartz class $\mathcal{S}(\mathbb{R}^{n})$ (infinitely differentiable, with rapid decay of all derivatives). The results in the latter case follow from a simpler version of the argument for the sharp truncation, so we do not discuss them here. The principal result of part I [10] in this series is the following. ###### Theorem 1.1. Fix a compact subset $\mathcal{K}\subset\mathbb{R}_{>0}^{n}$, and let $\psi:[0,\infty)\to[1,\infty)$ be an increasing function such that $\int_{0}^{\infty}\psi(t)^{-2n-2}dt<\infty.$ (1.3) Then there exists a subset $\mathcal{X}(\psi)\subset\mathbb{R}^{n\times n}_{\mathrm{sym}}$ of full Lebesgue measure such that $\theta_{\mathcal{B}}(M,X,\bm{x},\bm{y})=O_{X}\big{(}M^{\frac{n}{2}}\psi(\log M)\big{)}$ (1.4) for all $M\geq 1$, $\bm{b}=(b_{1},\ldots,b_{n})\in\mathcal{K}$, $X\in\mathcal{X}(\psi)$, $\bm{x},\bm{y}\in\mathbb{R}^{n}$. The implied constants are independent of $M$, $\bm{b}$, $\bm{x}$ and $\bm{y}$. For example, for any $\epsilon>0$, the function $\psi(x)=(x+1)^{\frac{1}{2n+2}+\epsilon}$ satisfies the condition (1.3), which produces the bound $M^{\frac{n}{2}}(\log M)^{\frac{1}{2n+2}+\epsilon}$ for almost every $X$ and any $\bm{x}$ and $\bm{y}$. This improved the previously best bound due to Cosentino and Flaminio [3] by a factor of $(\log M)^{n}$. Moreover, in the case $n=1$, theorem 1.1 recovers the optimal result obtained by Fiedler, Jurkat and Körner [5]. In what follows we establish a stronger bound than (1.4), for example $M^{\frac{n}{2}}(\log M)^{\frac{1}{2n+4}+\epsilon}$, but now only valid for almost every $\bm{y}$. In the case $n=1$, theorem 1.2 recovers theorem 0.1 of Fedotov and Klopp [4]. ###### Theorem 1.2. 
Fix a compact subset $\mathcal{K}\subset\mathbb{R}_{>0}^{n}\times\mathbb{R}^{n}$, and let $\psi:[0,\infty)\to[1,\infty)$ be an increasing function such that $\int_{0}^{\infty}\psi(t)^{-2n-4}dt<\infty.$ (1.5) Then there exists a subset $\tilde{\mathcal{X}}(\psi)\subset\mathbb{R}^{n\times n}_{\mathrm{sym}}\times\mathbb{R}^{n}$ of full Lebesgue measure such that $\theta_{\mathcal{B}}(M,X,\bm{x},\bm{y})=O_{X,\bm{y}}\big{(}M^{\frac{n}{2}}\psi(\log M)\big{)}$ (1.6) for all $M\geq 1$, $(\bm{b},\bm{x})\in\mathcal{K}$, and $(X,\bm{y})\in\tilde{\mathcal{X}}(\psi)$. The implied constants are independent of $M$, $\bm{b}$ and $\bm{x}$. The paper is organized as follows. In section 2 we review some basic properties of theta functions and the Jacobi group. The Jacobi group is defined as the semi-direct product $H\rtimes G$ of the Heisenberg group $H$ and the symplectic group $G=\mathrm{Sp}(n,\mathbb{R})$, and, following a construction due to Lion and Vergne [8], the theta function associated to a Schwartz function $f\in\mathcal{S}(\mathbb{R}^{n})$ is a function $\Theta_{f}:H\rtimes G\to\mathbb{C}$ that, for appropriate $g\in G$ and $h\in H$, is a simple rescaling of the theta sums $\theta_{f}$. The theta functions $\Theta_{f}$ satisfy an automorphy equation, theorem 3.1, under a certain subgroup $\tilde{\Gamma}\subset H\rtimes G$. This subgroup, defined in section 3, projects to the discrete subgroup $\Gamma=\mathrm{Sp}(n,\mathbb{Z})\subset G$. In order to exploit additional savings from the linear term parameterized by $\bm{y}$, we found it necessary to have a better understanding of the shape of the cusp of $\Gamma\backslash G$ than in the first paper in this series [10]. For this reason we define in section 3.1 a new fundamental domain for $\Gamma\backslash G$ which has “box-shape” cusps, as explicated in section 3.2. Section 4 contains the proof of theorem 1.2, which is based on a Borel- Cantelli type argument together with a multi-dimensional dyadic decomposition of the characteristic function of the open unit cube $(0,1)^{n}$ that is naturally realized as an action of the diagonal subgroup of $G$. The execution of the Borel-Cantelli argument rests on a kind of “uniform continuity” property of a certain height function on $H\rtimes G$ that controls the theta function $\Theta_{f}$, see corollary 4.1. The required property is proved in section 4.1, see lemma 4.4, whose proof is the motivation for the creation of the fundamental domain and the study of its cuspidal regions in sections 3.1 and 3.2. We remark that the interaction of the dyadic decomposition with the $H$ coordinate in the Jacobi group leads to additional complications not seen in [10], see section 4.2. ## 2 Theta functions and the Jacobi group The theta function $\Theta_{f}$ associated to a Schwartz function $f\in\mathcal{S}(\mathbb{R}^{n})$ is a complex-valued function defined on the Jacobi group $H\rtimes G$, the semi-direct product of the Heisenberg group $H$ with the rank $n$ symplectic group $G=\mathrm{Sp}(n,\mathbb{R})$. 
Here $H$ is the set $\mathbb{R}^{n}\times\mathbb{R}^{n}\times\mathbb{R}$ with multiplication given by $(\bm{x}_{1},\bm{y}_{1},t_{1})(\bm{x}_{2},\bm{y}_{2},t_{2})=(\bm{x}_{1}+\bm{x}_{2},\bm{y}_{1}+\bm{y}_{2},t_{1}+t_{2}+\tfrac{1}{2}(\bm{y}_{1}\prescript{t}{}{\bm{x}_{2}}-\bm{x}_{1}\prescript{t}{}{\bm{y}_{2}})),$ (2.1) and $G$ is the group of $2n\times 2n$ real matrices $g$ preserving the standard symplectic form: $g\begin{pmatrix}0&-I\\\ I&0\end{pmatrix}\prescript{t}{}{g}=\begin{pmatrix}0&-I\\\ I&0\end{pmatrix}$ (2.2) with $I$ the $n\times n$ identity. Alternatively, writing $g$ in $n\times n$ blocks, $G=\left\\{\begin{pmatrix}A&B\\\ C&D\end{pmatrix}:A\prescript{t}{}{B}=B\prescript{t}{}{A},\ C\prescript{t}{}{D}=D\prescript{t}{}{C},\ A\prescript{t}{}{D}-B\prescript{t}{}{C}=I\right\\}.$ (2.3) We note that $G$ acts on $H$ by automorphisms via $h^{g}=(\bm{x}A+\bm{y}C,\bm{x}B+\bm{y}D,t),\ \mathrm{where\ }h=(\bm{x},\bm{y},t),\ g=\begin{pmatrix}A&B\\\ C&D\end{pmatrix},$ (2.4) so we may define the semidirect product $H\rtimes G$, the Jacobi group, with multiplication $(h_{1},g_{1})(h_{2},g_{2})=(h_{1}h_{2}^{g_{1}^{-1}},g_{1}g_{2}).$ (2.5) The theta function is defined by $\Theta_{f}(h,g)=\sum_{\bm{m}\in\mathbb{Z}^{n}}(W(h)R(g)f)(\bm{m}),$ (2.6) where $W$ is the Schrödinger representation of $H$ and $R$ is the Segal-Shale- Weil (projective) representation of $G$. We refer the reader to [10] for details regarding these representations, including the slightly non-standard definition of $W$ and the unitary cocycle $\rho:G\times G\to\mathbb{C}$ satisfying $R(g_{1}g_{2})=\rho(g_{1},g_{2})R(g_{1})R(g_{2})$. We recall here that for $g=\begin{pmatrix}I&X\\\ 0&I\end{pmatrix}\begin{pmatrix}Y^{\frac{1}{2}}&0\\\ 0&\prescript{t}{}{Y}^{-\frac{1}{2}}\end{pmatrix}\in G,$ (2.7) we have $\Theta_{f}((\bm{x},\bm{y},t),g)\\\ =(\det Y)^{\frac{1}{4}}\mathrm{e}(-t+\tfrac{1}{2}\bm{x}\prescript{t}{}{\bm{y}})\sum_{\bm{m}\in\mathbb{Z}^{n}}f((\bm{m}+\bm{x})Y^{\frac{1}{2}})\mathrm{e}(\tfrac{1}{2}(\bm{m}+\bm{x})X\prescript{t}{}{(\bm{m}+\bm{x})}+\bm{m}\prescript{t}{}{\bm{y}}).$ (2.8) For $f(\bm{x})=\exp(-\pi\bm{x}\prescript{t}{}{\bm{x}})$ and $h=(0,0,0)$, we recover $(\det Y)^{\frac{1}{4}}$ times the classical Siegel theta series that is holomorphic in the complex symmetric matrix $Z=X+\mathrm{i}Y$. Here we choose $Y^{\frac{1}{2}}$ to be the upper-triangular matrix with positive diagonal entries such that $Y^{\frac{1}{2}}\prescript{t}{}{Y}^{\frac{1}{2}}=Y$, and we emphasize that $Y^{-\frac{1}{2}}$ is always interpreted as $(Y^{\frac{1}{2}})^{-1}$ and not $(Y^{-1})^{\frac{1}{2}}$. For general $g\in G$ we have the Iwasawa decomposition, $g=\begin{pmatrix}A&B\\\ C&D\end{pmatrix}=\begin{pmatrix}I&X\\\ 0&I\end{pmatrix}\begin{pmatrix}Y^{\frac{1}{2}}&0\\\ 0&\prescript{t}{}{Y}^{-\frac{1}{2}}\end{pmatrix}\begin{pmatrix}\mathrm{Re}(Q)&-\mathrm{Im}(Q)\\\ \mathrm{Im}(Q)&\mathrm{Re}(Q)\end{pmatrix},$ (2.9) where $X,Y$ are symmetric and $Q$ is unitary. Explicitly, we have $\displaystyle Y$ $\displaystyle=(C\prescript{t}{}{C}+D\prescript{t}{}{D})^{-1}$ $\displaystyle X$ $\displaystyle=(A\prescript{t}{}{C}+B\prescript{t}{}{D})(C\prescript{t}{}{C}+D\prescript{t}{}{D})^{-1}$ $\displaystyle Q$ $\displaystyle=\prescript{t}{}{Y}^{\frac{1}{2}}(D+\mathrm{i}C).$ (2.10) We often further decompose $Y=UV\prescript{t}{}{U}$ with $U$ upper-triangular unipotent and $V$ positive diagonal, so $Y^{\frac{1}{2}}=UV^{\frac{1}{2}}$. 
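For orientation, it may be worth recording the rank one case of (2.10), which is not spelled out above but follows at once: for $g=\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in\mathrm{SL}(2,\mathbb{R})$ one finds $Y=(c^{2}+d^{2})^{-1}$, $X=(ac+bd)(c^{2}+d^{2})^{-1}$ and $Q=(d+\mathrm{i}c)(c^{2}+d^{2})^{-1/2}$, so that $|Q|=1$ and, using $ad-bc=1$, $X+\mathrm{i}Y=\frac{a\mathrm{i}+b}{c\mathrm{i}+d}$ is the image of $\mathrm{i}$ under the usual Möbius action of $g$ on the upper half plane.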
It is easy to express the Haar measure $\mu$ on $G$ in these coordinates, $\differential\mu(g)=\differential Q\prod_{1\leq i\leq j\leq n}\differential x_{ij}\prod_{1\leq i<j\leq n}\differential u_{ij}\prod_{1\leq j\leq n}v_{j}^{-n+j-2}\differential v_{jj},$ (2.11) where $\differential Q$ is Haar measure on $\mathrm{U}(n)$ and $\differential x_{ij}$, $\differential u_{ij}$, $\differential v_{jj}$ are respectively the Lebesgue measures on the entries of $X$, $U$, $V$. We can also express the Haar measure on the open, dense set of $g$ which can be written as $g=\begin{pmatrix}I&X\\\ 0&I\end{pmatrix}\begin{pmatrix}A&0\\\ 0&\prescript{t}{}{A}^{-1}\end{pmatrix}\begin{pmatrix}I&0\\\ T&I\end{pmatrix}$ (2.12) with $A\in\mathrm{GL}(n,\mathbb{R})$ and $X$ and $T$ symmetric. In these coordinates we have $\differential\mu(g)=c(\det A)^{-2n-1}\prod_{1\leq i\leq j\leq n}\differential x_{ij}\prod_{1\leq i,j\leq n}\differential a_{ij}\prod_{1\leq i\leq j\leq n}\differential t_{ij}$ (2.13) where $c$ is a positive constant and $\differential x_{ij}$, $\differential a_{ij}$, $\differential t_{ij}$ are respectively the Lebesgue measure on the entries of $X$, $A$, $T$, see [10]. We note that the Haar measure $\tilde{\mu}$ on the Jacobi group is simply $\differential\tilde{\mu}(h,g)=\differential{\bm{x}}\,\differential\bm{y}\,\differential t\,\differential\mu(g),$ (2.14) with $h=(\bm{x},\bm{y},t)$ and $\differential\bm{x}$, $\differential\bm{y}$, and $\differential t$ the Lebesgue measures. We often make use of the following refinements of the Iwasawa decomposition. For $1\leq l\leq n$ and the same $Q$ as in (2.9), we write $g\in G$ as $\begin{pmatrix}I&R_{l}&T_{l}-S_{l}\prescript{t}{}{R}_{l}&S_{l}\\\ 0&I&\prescript{t}{}{S}_{l}&0\\\ 0&0&I&0\\\ 0&0&-\prescript{t}{}{R}_{l}&I\end{pmatrix}\begin{pmatrix}U_{l}V_{l}^{\frac{1}{2}}&0&0&0\\\ 0&Y_{l}^{\frac{1}{2}}&0&X_{l}\prescript{t}{}{Y}_{2}^{-\frac{1}{2}}\\\ 0&0&\prescript{t}{}{U}_{l}^{-1}V_{l}^{-\frac{1}{2}}&0\\\ 0&0&0&\prescript{t}{}{Y_{l}}^{-\frac{1}{2}}\end{pmatrix}\begin{pmatrix}\mathrm{Re}(Q)&-\mathrm{Im}(Q)\\\ \mathrm{Im}(Q)&\mathrm{Re}(Q)\end{pmatrix},$ (2.15) where $R_{l}$ and $S_{l}$ are $l\times(n-l)$ matrices, $T_{l}$ is $l\times l$ symmetric, $U_{l}$ is $l\times l$ upper-triangular unipotent, $V_{l}$ is $l\times l$ positive diagonal, $X_{l}$ is $(n-l)\times(n-l)$ symmetric, and $Y_{l}$ is $(n-l)\times(n-l)$ positive definite symmetric. We note that for $l=n$ we recover $X=T_{l}$ and the factorization $Y=U_{l}V_{l}\prescript{t}{}{U}_{l}$. In what follows we use $g_{l}=g_{l}(g)\in\mathrm{Sp}(n-l,\mathbb{R})$ to denote the matrix $g_{l}=\begin{pmatrix}I&X_{l}\\\ 0&I\end{pmatrix}\begin{pmatrix}Y_{l}^{\frac{1}{2}}&0\\\ 0&\prescript{t}{}{Y}_{l}^{-\frac{1}{2}}\end{pmatrix}.$ (2.16) These decompositions are closely related to the Langlands decompositions of the maximal parabolic subgroups $P_{l}$ of $G$. 
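To make the role of the data in (2.15)-(2.16) concrete: writing the Iwasawa coordinates $X,Y$ of $g$ in $l/(n-l)$ blocks, one can read off $Y_{l}=Y_{22}$, $R_{l}=Y_{12}Y_{l}^{-1}$, $U_{l}V_{l}\prescript{t}{}{U}_{l}=Y_{11}-R_{l}Y_{l}\prescript{t}{}{R}_{l}$ and $X_{l}=X_{22}$; these block formulas are spelled out in (3.55)-(3.56) below. The following self-contained Python sketch (ours, for illustration only) extracts this data and forms $g_{l}(g)$, and checks that the $1\times 1$ block obtained for $l=1$ agrees with $v_{1}$ from the factorization $Y=UV\prescript{t}{}{U}$.

```python
import numpy as np

def upper_cholesky(Y):
    J = np.eye(len(Y))[::-1]
    return J @ np.linalg.cholesky(J @ Y @ J) @ J

def partial_data(X, Y, l):
    # read off R_l, U_l, V_l, X_l, Y_l of (2.15) from the Iwasawa coordinates X, Y
    Yl = Y[l:, l:]
    Rl = Y[:l, l:] @ np.linalg.inv(Yl)
    W = Y[:l, :l] - Rl @ Yl @ Rl.T                 # equals U_l V_l U_l^T
    Wh = upper_cholesky(W)                         # equals U_l V_l^{1/2}
    d = np.diag(Wh)
    Ul, Vl = Wh / d, np.diag(d ** 2)
    Xl = X[l:, l:]
    return Rl, Ul, Vl, Xl, Yl

def g_l(X, Y, l, n):
    # the matrix g_l(g) of (2.16)
    _, _, _, Xl, Yl = partial_data(X, Y, l)
    Im, Ylh = np.eye(n - l), upper_cholesky(Yl)
    return np.block([[Im, Xl], [0 * Im, Im]]) @ \
           np.block([[Ylh, 0 * Im], [0 * Im, np.linalg.inv(Ylh).T]])

# example data: any symmetric X and positive definite Y will do
rng = np.random.default_rng(1)
n = 3
X = rng.standard_normal((n, n)); X = (X + X.T) / 2
M = rng.standard_normal((n, n)); Y = M @ M.T + np.eye(n)

# for l = 1 the 1x1 block W is exactly v_1, the first diagonal entry of V in Y = U V U^T
_, _, V1, _, _ = partial_data(X, Y, 1)
v1_from_full = np.diag(upper_cholesky(Y))[0] ** 2
assert np.isclose(V1[0, 0], v1_from_full)

# g_l(g) is a symplectic matrix of rank n - l
gl = g_l(X, Y, 1, n)
m = n - 1
Jm = np.block([[0 * np.eye(m), -np.eye(m)], [np.eye(m), 0 * np.eye(m)]])
assert np.allclose(gl @ Jm @ gl.T, Jm)
```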
For $1\leq l<n$, $P_{l}$ is the subgroup of $g\in G$ which can be written in the form $\begin{pmatrix}I&R_{l}&T_{l}-S_{l}\prescript{t}{}{\\!R}_{l}&S_{l}\\\ 0&I&\prescript{t}{}{\\!S}_{l}&0\\\ 0&0&I&0\\\ 0&0&-\prescript{t}{}{\\!R}_{l}&I\end{pmatrix}\begin{pmatrix}a_{l}I&0&0&0\\\ 0&I&0&0\\\ 0&0&a_{l}^{-1}I&0\\\ 0&0&0&I\end{pmatrix}\begin{pmatrix}U_{l}&0&0&0\\\ 0&A_{l}&0&B_{l}\\\ 0&0&\prescript{t}{}{U}_{l}^{-1}&0\\\ 0&C_{l}&0&D_{l}\end{pmatrix}$ (2.17) where $R_{l}$ and $S_{l}$ are $l\times(n-l)$ matrices, $T_{l}$ is $l\times l$ symmetric, $a_{l}>0$, $U_{l}\in\mathrm{GL}(l,\mathbb{R})$ with $\det U_{l}=\pm 1$, and $g_{l}=\begin{pmatrix}A_{l}&B_{l}\\\ C_{l}&D_{l}\end{pmatrix}\in\mathrm{Sp}(n-l,\mathbb{R})$. The maximal parabolic $P_{n}$ is the subgroup of $g\in G$ that can be written as $\begin{pmatrix}I&T_{n}\\\ 0&I\end{pmatrix}\begin{pmatrix}a_{n}I&0\\\ 0&a_{n}^{-1}I\end{pmatrix}\begin{pmatrix}U_{n}&0\\\ 0&\prescript{t}{}{\\!U_{n}}^{-1}\end{pmatrix}$ (2.18) where $T_{n}$ is $n\times n$ symmetric, $a_{n}>0$, and $U_{n}\in\mathrm{GL}(n,\mathbb{R})$ with $\det U_{n}=\pm 1$. The factorizations (2.17), (2.18) are in fact the Langlands decompositions of $P_{l}$, $P_{n}$. The first paper in this series [10] contains more details on parabolic subgroups and their Langlands decompositions, and we refer the readers to [12], particularly sections 4.5.3 and 5.1, [7], particularly section 7.7, and the authors’ lecture notes [9] for further details. ## 3 The subgroups $\Gamma$ and $\tilde{\Gamma}$ We denote by $\Gamma$ the discrete subgroup $\Gamma=\mathrm{Sp}(n,\mathbb{Z})\subset G$. Recalling the notation of [10], for $\gamma=\begin{pmatrix}A&B\\\ C&D\end{pmatrix}\in\Gamma,$ (3.1) we set $h_{\gamma}=(\bm{r},\bm{s},0)\in H$ where the entries or $\bm{r}$ are $0$ or $\frac{1}{2}$ depending on whether the corresponding diagonal entry of $C\prescript{t}{}{D}$ is even or odd, and the entries of $\bm{s}$ are $0$ or $\frac{1}{2}$ depending on whether the corresponding diagonal entry of $A\prescript{t}{}{B}$ is even or odd. As in [10], we now define the group $\tilde{\Gamma}\subset H\rtimes G$ by $\tilde{\Gamma}=\\{((\bm{m},\bm{n},t)h_{\gamma},\gamma)\in H\rtimes G:\gamma\in\Gamma,\bm{m}\in\mathbb{Z}^{n},\bm{n}\in\mathbb{Z}^{n},t\in\mathbb{R}\\}.$ (3.2) The relevance of the subgroup $\tilde{\Gamma}$ is made apparent by the following theorem, see theorem 4.1 in [10]. ###### Theorem 3.1. For any $(uh_{\gamma},\gamma)\in\tilde{\Gamma}$ and $(h,g)\in H\rtimes G$, there is a complex number $\varepsilon(\gamma)$ with $|\varepsilon(\gamma)|=1$ such that $\Theta_{f}((uh_{\gamma},\gamma)(h,g))=\varepsilon(\gamma)\rho(\gamma,g)\mathrm{e}\left(-t+\tfrac{1}{2}\bm{m}\prescript{t}{}{\bm{n}}\right)\Theta_{f}(h,g),$ (3.3) where $u=(\bm{m},\bm{n},t)$. A proof of this theorem is found in [8] but with $\Gamma$ replaced by the finite index subgroup for which $h_{\gamma}=(0,0,0)$. The automorphy under the full $\tilde{\Gamma}$ is proved in [11], but only for the special function $f(\bm{x})=\exp(-\pi\bm{x}\prescript{t}{}{\bm{x}})$. It is shown in [8] that this $f$ is an eigenfunction for all the operators $R(k(Q))$, with $R$ the Segal-Shale-Weil representation and $Q\in\mathrm{U}(n)$, and it can be seen from the theory built in [8] that the automorphy for any Schwartz function follows from that for $\exp(-\pi\bm{x}\prescript{t}{}{\bm{x}})$. A self- contained proof along the lines of [8] is presented in the authors’ lecture notes [9]. 
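As a small concrete illustration (ours) of the element $h_{\gamma}$ and of the group $\tilde{\Gamma}$ in (3.2): the vectors $\bm{r}$ and $\bm{s}$ depend only on the parities of the diagonal entries of $C\prescript{t}{}{D}$ and $A\prescript{t}{}{B}$, and products in $\tilde{\Gamma}$ use the Heisenberg multiplication (2.1). Function names are ours.

```python
import numpy as np

def h_gamma(gamma, n):
    # h_gamma = (r, s, 0): r_i = 1/2 iff the i-th diagonal entry of C D^T is odd,
    # and s_i = 1/2 iff the i-th diagonal entry of A B^T is odd
    A, B, C, D = gamma[:n, :n], gamma[:n, n:], gamma[n:, :n], gamma[n:, n:]
    r = 0.5 * (np.diag(C @ D.T) % 2)
    s = 0.5 * (np.diag(A @ B.T) % 2)
    return r, s, 0.0

def heis_mul(h1, h2):
    # the multiplication law (2.1) of the Heisenberg group H
    x1, y1, t1 = h1
    x2, y2, t2 = h2
    return (x1 + x2, y1 + y2, t1 + t2 + 0.5 * (y1 @ x2 - x1 @ y2))

n = 2
I2 = np.eye(n, dtype=int)
gamma = np.block([[I2, 0 * I2], [I2, I2]])   # an element of Sp(2, Z), cf. (2.3)
print(h_gamma(gamma, n))                     # here C D^T = I, so r = (1/2, 1/2) and s = (0, 0)

# a typical element of Gamma-tilde as in (3.2): ((m, n, t) h_gamma, gamma)
m_vec, n_vec, t = np.array([1, 0]), np.array([0, 2]), 0.0
u_h = heis_mul((m_vec, n_vec, t), h_gamma(gamma, n))
```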
### 3.1 Fundamental domains We say that a closed set $\mathcal{D}\subset G$ is a fundamental domain for $\Gamma\backslash G$ if * • for all $g\in G$ there exists $\gamma\in\Gamma$ such that $\gamma g\in\mathcal{D}$ and * • if for $g\in\mathcal{D}$ there is a non-identity $\gamma\in\Gamma$ such that $\gamma g\in\mathcal{D}$, then $g$ is contained in the boundary of $\mathcal{D}$. Similarly a closed set $\tilde{\mathcal{D}}\subset H\rtimes G$ is a fundamental domain for $\tilde{\Gamma}\backslash(H\rtimes G)$ if * • for all $(h,g)\in H\rtimes G$ there exists $\tilde{\gamma}\in\tilde{\Gamma}$ such that $\tilde{\gamma}(h,g)\in\tilde{\mathcal{D}}$ and * • if for $(h,g)\in\tilde{\mathcal{D}}$ there is a non-identity $\tilde{\gamma}\in\tilde{\Gamma}$ such that $\tilde{\gamma}(h,g)\in\tilde{\mathcal{D}}$, then $(h,g)$ is contained in the boundary of $\tilde{\mathcal{D}}$. We note that if $\mathcal{D}$ is a fundamental domain for $\Gamma\backslash G$, then $\tilde{\mathcal{D}}=\left\\{(\bm{x},\bm{y},0)\in H:|x_{j}|,|y_{j}|\leq\frac{1}{2}\right\\}\times\mathcal{D}$ (3.4) is a fundamental domain for $\tilde{\Gamma}\backslash(H\rtimes G)$. In contrast to our previous paper [10], here we need to make careful use of the shape of our fundamental domain $\mathcal{D}$ in the cuspidal regions. Drawing inspiration for the fundamental domain for $\mathrm{GL}(n,\mathbb{Z})\backslash\mathrm{GL}(n,\mathbb{R})$ constructed in [6] as well as from the reduction theory developed in [2] (see also [1]), we construct in this section a new fundamental domain $\mathcal{D}=\mathcal{D}_{n}$ for $\Gamma\backslash G$. In the following section we study the cuspidal region of $\mathcal{D}_{n}$. For $n=1$, we let $\mathcal{D}_{1}\subset G$ denote the standard fundamental domain for $\Gamma\backslash G=\mathrm{SL}(2,\mathbb{Z})\backslash\mathrm{SL}(2,\mathbb{R})$. That is, $\mathcal{D}_{1}=\left\\{\begin{pmatrix}1&x\\\ 0&1\end{pmatrix}\begin{pmatrix}y^{\frac{1}{2}}&0\\\ 0&y^{-\frac{1}{2}}\end{pmatrix}\begin{pmatrix}\cos\phi&-\sin\phi\\\ \sin\phi&\cos\phi\end{pmatrix}:|x|\leq\frac{1}{2},x^{2}+y^{2}\geq 1,0\leq\phi<2\pi\right\\}.$ (3.5) We now define fundamental domains $\mathcal{D}_{n}$ inductively using the decomposition (2.15) for $l=1$. Writing $g\in G$ as $g=\begin{pmatrix}1&\bm{r}_{1}&t_{1}-\bm{s}_{1}\prescript{t}{}{\bm{r}}_{1}&\bm{s}_{1}\\\ 0&I&\prescript{t}{}{\bm{s}}_{1}&0\\\ 0&0&1&0\\\ 0&0&-\prescript{t}{}{\bm{r}}_{1}&I\end{pmatrix}\begin{pmatrix}1&0&0&0\\\ 0&I&0&X_{1}\\\ 0&0&1&0\\\ 0&0&0&I\end{pmatrix}\begin{pmatrix}v_{1}^{\frac{1}{2}}&0&0&0\\\ 0&Y_{1}^{\frac{1}{2}}&0&0\\\ 0&0&v_{1}^{-\frac{1}{2}}&0\\\ 0&0&0&\prescript{t}{}{Y_{1}}^{-\frac{1}{2}}\end{pmatrix}k(Q),$ (3.6) where $\bm{r}=\bm{r}(g)\in\mathbb{R}^{n-1}$, $\bm{s}=\bm{s}(g)\in\mathbb{R}^{n-1}$, $t_{1}=t_{1}(g)\in\mathbb{R}$, $X_{1}=X_{1}(g)$ is symmetric, $v_{1}=v_{1}(g)>0$, $Y_{1}=Y_{1}(g)$ is positive definite symmetric, and $Q\in\mathrm{U}(n)$, we define $\mathcal{D}_{n}$ as the set of all $g\in G$ satisfying * • $v_{1}(g)\geq v_{1}(\gamma g)$ for all $\gamma\in\Gamma$, * • $g_{1}(g)\in\mathcal{D}_{n-1}$, see (2.16), and * • the entries of $\bm{r}_{1}(g)$, $\bm{s}_{1}(g)$, and $t_{1}(g)$ are all less than or equal to $\frac{1}{2}$ in absolute value with the first entry of $\bm{r}_{1}$ greater than or equal to $0$. ###### Proposition 3.2. $\mathcal{D}_{n}$ is a fundamental domain for $\Gamma\backslash G$. ###### Proof. We begin by showing that for $g\in G$, $\sup_{\gamma\in\Gamma}v_{1}(\gamma g)$ is indeed obtained by some $\gamma\in\Gamma$. 
From (2.10), we have for $\gamma=\begin{pmatrix}A&B\\\ C&D\end{pmatrix}\in\Gamma$ (3.7) that $v_{1}(\gamma g)^{-1}=\bm{c}Y\prescript{t}{}{\bm{c}}+(\bm{c}X+\bm{d})Y^{-1}\prescript{t}{}{(\bm{c}X+\bm{d})}$ (3.8) where $g=\begin{pmatrix}I&X\\\ 0&I\end{pmatrix}\begin{pmatrix}Y^{\frac{1}{2}}&0\\\ 0&\prescript{t}{}{Y}^{-\frac{1}{2}}\end{pmatrix}k(Q)$ (3.9) and $\bm{c}$, $\bm{d}$ are the first rows of $C$, $D$. Since $Y$ is positive definite, there are only finitely many $\bm{c}$ such that $\bm{c}Y\prescript{t}{}{\bm{c}}$, and hence $v_{1}(\gamma g)^{-1}$, is below a given bound. Similarly, for a fixed $\bm{c}$, the positive definiteness of $Y^{-1}$ implies that there are only finitely many $\bm{d}$ such that $v_{1}(\gamma g)^{-1}$ is below a given bound. It follows that there are only finitely many $\gamma\in\Gamma_{1}\backslash\Gamma$ such that $v_{1}(\gamma g)$ is larger than a given bound, where $\Gamma_{1}=\Gamma\cap P_{1}$ and we recall $P_{1}$ is given by (2.17). As $v_{1}(\gamma g)=v_{1}(g)$ for $\gamma\in\Gamma_{1}$, it follows that $v_{1}(\gamma g)$ is maximized for some $\gamma\in\Gamma$. Let $\gamma_{0}$ be so that $v_{1}(\gamma_{0}g)$ is maximal. We now decompose an arbitrary $\gamma\in\Gamma_{1}$ as in (2.17), $\gamma=\begin{pmatrix}1&\bm{r}_{1}&t_{1}-\bm{s}_{1}\prescript{t}{}{\bm{r}_{1}}&\bm{s}_{1}\\\ 0&I&\prescript{t}{}{\bm{s}_{1}}&0\\\ 0&0&1&0\\\ 0&0&-\prescript{t}{}{\bm{r}_{1}}&I\end{pmatrix}\begin{pmatrix}\pm 1&0&0&0\\\ 0&A_{1}&0&B_{1}\\\ 0&0&\pm 1&0\\\ 0&C_{1}&0&D_{1}\end{pmatrix}$ (3.10) with $\gamma_{1}=\begin{pmatrix}A_{1}&B_{1}\\\ C_{1}&D_{1}\end{pmatrix}\in\mathrm{Sp}(n-1,\mathbb{Z}).$ (3.11) Proceeding inductively, there exists $\gamma_{1}$ such that $\gamma_{1}g_{1}(\gamma_{0}g)=g_{1}(\gamma\gamma_{0}g)\in\mathcal{D}_{n-1}$. Now, we can change $\bm{r}_{1}(\gamma)$, $\bm{s}_{1}(\gamma)$, $t_{1}(\gamma)$, and the $\pm$, noting that this does not change $g_{1}(\gamma\gamma_{0}g)$, so that the entries of $\bm{r}_{1}(\gamma\gamma_{0}g)$, $\bm{s}_{1}(\gamma\gamma_{0}g)$ and $t_{1}(\gamma\gamma_{0}g)$ are all $\leq\frac{1}{2}$ in absolute value and the first entry of $\bm{r}_{1}(\gamma\gamma_{0}g)$ is nonnegative. Therefore $\gamma\gamma_{0}g\in\mathcal{D}_{n}$ as required. We now suppose that $g\in\mathcal{D}_{n}$ and there is a non-identity $\gamma\in\Gamma$ such that $\gamma g\in\mathcal{D}_{n}$. We set $\gamma=\begin{pmatrix}A&B\\\ C&D\end{pmatrix},\quad g=\begin{pmatrix}I&X\\\ 0&I\end{pmatrix}\begin{pmatrix}Y^{\frac{1}{2}}&0\\\ 0&\prescript{t}{}{Y}^{-\frac{1}{2}}\end{pmatrix}k(Q).$ (3.12) By the maximality, we have $v_{1}(g)=v_{1}(\gamma g)$ and therefore $v_{1}^{-1}=\bm{c}Y\prescript{t}{}{\bm{c}}+(\bm{c}X+\bm{d})Y^{-1}\prescript{t}{}{(\bm{c}X+\bm{d})}$ (3.13) where $\bm{c}$ and $\bm{d}$ are the first rows of $C$ and $D$. Let us first consider the case when $\bm{c}\neq 0$. To show that $g$ is on the boundary of $\mathcal{D}_{n}$ in this case, we consider $g_{\epsilon}=\begin{pmatrix}I&X\\\ 0&I\end{pmatrix}\begin{pmatrix}(1-\epsilon)^{\frac{1}{2}}Y^{\frac{1}{2}}&0\\\ 0&(1-\epsilon)^{-\frac{1}{2}}\prescript{t}{}{Y}^{-\frac{1}{2}}\end{pmatrix}k(Q)$ (3.14) for $0<\epsilon<1$. We have $v_{1}(g_{\epsilon})=(1-\epsilon)v_{1}(g)$ and $v_{1}(\gamma g_{\epsilon})^{-1}=(1-\epsilon)\bm{c}Y\prescript{t}{}{\bm{c}}+(1-\epsilon)^{-1}(\bm{c}X+\bm{d})Y^{-1}\prescript{t}{}{(\bm{c}X+\bm{d})}\\\ =\left((1-\epsilon)-(1-\epsilon)^{-1}\right)\bm{c}Y\prescript{t}{}{\bm{c}}+v_{1}(g_{\epsilon})^{-1}$ (3.15) by (3.13). 
Since $v_{1}(\gamma g_{\epsilon})>v_{1}(g_{\epsilon})$, we have that $g_{\epsilon}\not\in\mathcal{D}_{n}$. As $g_{\epsilon}$ can be made arbitrarily close to $g$, we conclude that $g$ is on the boundary of $\mathcal{D}_{n}$. If $\bm{c}=0$, then from (3.13) we have $v_{1}(g)^{-1}=(d^{(1)}-\bm{d}^{(2)}\prescript{t}{}{\bm{r}}_{1})^{2}v_{1}(g)^{-1}+\bm{d}^{(2)}Y_{1}^{-1}\prescript{t}{}{\bm{d}}^{(2)}$ (3.16) where $\bm{d}=\begin{pmatrix}d^{(1)}&\bm{d}^{(2)}\end{pmatrix}$ are as above and $Y=\begin{pmatrix}1&\bm{r}_{1}\\\ 0&I\end{pmatrix}\begin{pmatrix}v_{1}&0\\\ 0&Y_{1}\end{pmatrix}\begin{pmatrix}1&0\\\ -\prescript{t}{}{\bm{r}_{1}}&I\end{pmatrix}.$ (3.17) This time we consider $g_{\epsilon}=\begin{pmatrix}I&X\\\ 0&I\end{pmatrix}\begin{pmatrix}Y_{\epsilon}^{\frac{1}{2}}&0\\\ 0&\prescript{t}{}{Y_{\epsilon}}^{-\frac{1}{2}}\end{pmatrix}k(Q)$ (3.18) with $Y_{\epsilon}=\begin{pmatrix}1&\bm{r}_{1}\\\ 0&I\end{pmatrix}\begin{pmatrix}(1-\epsilon)v_{1}&0\\\ 0&Y_{1}\end{pmatrix}\begin{pmatrix}1&0\\\ -\prescript{t}{}{\bm{r}_{1}}&I\end{pmatrix}.$ (3.19) We have $v_{1}(g_{\epsilon})=(1-\epsilon)v_{1}(g)$ and $v_{1}(\gamma g_{\epsilon})^{-1}=(1-\epsilon)^{-1}(d^{(1)}-\bm{d}^{(2)}\prescript{t}{}{\bm{r}}_{1})^{2}v_{1}(g)^{-1}+\bm{d}^{(2)}Y_{1}^{-1}\prescript{t}{}{\bm{d}}^{(2)}\\\ =v_{1}(g_{\epsilon})^{-1}+\left(1-(1-\epsilon)^{-1}\right)\bm{d}^{(2)}Y_{1}^{-1}\prescript{t}{}{\bm{d}^{(2)}}$ (3.20) from (3.16). If $\bm{d}^{(2)}\neq 0$, then $v_{1}(\gamma g_{\epsilon})>v_{1}(g_{\epsilon})$ and we conclude that $g$ is on the boundary of $\mathcal{D}_{n}$ as before. When $\bm{c}=0$ and $\bm{d}^{(2)}=0$ we have $d^{(1)}=\pm 1$, and so $\gamma\in\Gamma_{1}$. We decompose $\gamma$ as in (3.10) and define $\gamma_{1}$ as in (3.11). By the construction of $\mathcal{D}_{n}$, we have $g_{1}(g)\in\mathcal{D}_{n-1}$ and $g_{1}(\gamma g)=\gamma_{1}g_{1}(g)\in\mathcal{D}_{n-1}$. By induction, we have that either $\gamma_{1}$ is the identity or $g_{1}(g)$ is on the boundary of $\mathcal{D}_{n-1}$. In the latter case we have that $g$ is on the boundary of $\mathcal{D}_{n}$, and so it remains to consider $\gamma=\begin{pmatrix}\pm 1&\bm{r}_{1}&\pm t_{1}\mp\bm{r}_{1}\prescript{t}{}{\bm{s}_{1}}&\bm{s}_{1}\\\ 0&I&\pm\prescript{t}{}{\bm{s}_{1}}&0\\\ 0&0&\pm 1&0\\\ 0&0&\mp\bm{r}_{1}&I\end{pmatrix}.$ (3.21) If any of the entries of $\bm{r}_{1}(\gamma)$ or $\bm{s}_{1}(\gamma)$ is not zero, then the corresponding entry of $\bm{r}_{1}(g)$ or $\bm{s}_{1}(g)$ is $\pm\frac{1}{2}$ and so $g$ is on the boundary of $\mathcal{D}_{n}$. Similarly if $t_{1}(\gamma)\neq 0$, we have $t_{1}(g)=\pm\frac{1}{2}$ and again $g$ is on the boundary of $\mathcal{D}_{n}$. If all of $\bm{r}_{1},\bm{s}_{1},t_{1}$ are $0$, the sign must be $-$ as $\gamma$ is not the identity, and it follows that the first entry of $\bm{r}_{1}(g)$ is $0$ and $g$ is again on the boundary of $\mathcal{D}_{n}$. ∎ The following proposition records some useful properties of $\mathcal{D}_{n}$. It and its proof are very similar to the analogous statement for the different fundamental domain used in [10], see proposition 3.1 there. ###### Proposition 3.3. 
Let $g\in\mathcal{D}_{n}$ and write $g=\begin{pmatrix}I&X\\\ 0&I\end{pmatrix}\begin{pmatrix}Y^{\frac{1}{2}}&0\\\ 0&Y^{-\frac{1}{2}}\end{pmatrix}k(Q),\quad Y=UV\prescript{t}{}{U},$ (3.22) where $X$ is symmetric, $Y$ is positive definite symmetric, $U$ upper triangular unipotent, $V$ positive diagonal, and $Q\in\mathrm{U}(n)$, and $V=\begin{pmatrix}v_{1}&\cdots&0\\\ \vdots&\ddots&\vdots\\\ 0&\cdots&v_{n}\end{pmatrix},\quad Y=\begin{pmatrix}1&\bm{r}_{1}\\\ 0&I\end{pmatrix}\begin{pmatrix}v_{1}&0\\\ 0&Y_{1}\end{pmatrix}\begin{pmatrix}1&0\\\ \prescript{t}{}{\bm{r}_{1}}&I\end{pmatrix}.$ (3.23) Then we have 1. 1. $v_{n}\geq\frac{\sqrt{3}}{2}$ and $v_{j}\geq\frac{3}{4}v_{j+1}$ for $1\leq j\leq n-1$, 2. 2. for all $\bm{x}=\begin{pmatrix}x^{(1)}&\bm{x}^{(2)}\end{pmatrix}\in\mathbb{R}^{n}$ $\bm{x}Y\prescript{t}{}{\bm{x}}\asymp_{n}v_{1}(x^{(1)})^{2}+\bm{x}^{(2)}Y_{1}\prescript{t}{}{\bm{x}}^{(2)}.$ (3.24) ###### Proof. For the first, we observe that by the inductive construction of $\mathcal{D}_{n}$, we have that $g_{n-1}(g)=\begin{pmatrix}1&x_{n-1}(g)\\\ 0&1\end{pmatrix}\begin{pmatrix}v_{n}^{\frac{1}{2}}&0\\\ 0&v_{n}^{-\frac{1}{2}}\end{pmatrix}\in\mathcal{D}_{1}.$ (3.25) As $\mathcal{D}_{1}$ is the standard fundamental domain for $\mathrm{SL}(2,\mathbb{Z})\backslash\mathrm{SL}(2,\mathbb{R})$, we conclude that $v_{n}\geq\frac{\sqrt{3}}{2}$. To demonstrate that $v_{j}\geq\frac{3}{4}v_{j+1}$, we note that by the construction of $\mathcal{D}_{n}$, it suffices to consider only $j=1$. We start with $v_{1}^{-1}\leq\bm{c}Y\prescript{t}{}{\bm{c}}+(\bm{c}X+\bm{d})Y^{-1}\prescript{t}{}{(\bm{c}X+\bm{d})}$ (3.26) for any $\begin{pmatrix}\bm{c}&\bm{d}\end{pmatrix}\in\mathbb{Z}^{2n}$ nonzero and primitive. Choosing $\bm{c}=0$ and $\bm{d}=\begin{pmatrix}0&1&0\cdots&0\end{pmatrix}$, we have $v_{1}^{-1}\leq v_{1}^{-1}(r_{1}^{(1)})^{2}+v_{2}^{-1},$ (3.27) where $r_{1}^{(1)}$ is the first entry of $\bm{r}_{1}$. Since $0\leq r_{1}^{(1)}\leq\frac{1}{2}$, we conclude that $v_{1}\geq\frac{3}{4}v_{2}$. To demonstrate the second part of the proposition, we let $\bm{y}_{1},\dots,\bm{y}_{n}$ denote the rows of $Y^{\frac{1}{2}}=\begin{pmatrix}1&\bm{r}_{1}\\\ 0&I\end{pmatrix}\begin{pmatrix}v_{1}^{\frac{1}{2}}&0\\\ 0&Y_{1}^{\frac{1}{2}}\end{pmatrix}.$ (3.28) Setting $\bm{y}=x_{2}\bm{y}_{2}+\cdots+x_{n}\bm{y}_{n}$, where the $x_{j}$ are the entries of $\bm{x}$, our aim is to prove that for some constants $0<c_{1}<1<c_{2}$ depending only on $n$, $c_{1}\left(||\bm{y}_{1}||^{2}x_{1}^{2}+||\bm{y}||^{2}\right)\leq||x_{1}\bm{y}_{1}+\bm{y}||^{2}\leq c_{2}\left(||\bm{y}_{1}||^{2}x_{1}^{2}+||\bm{y}||^{2}\right),$ (3.29) from which the lower bound in (3.24) follows as $||\bm{y}_{1}||^{2}\geq v_{1}$. The upper bound in (3.24) follows from (3.29) and $v_{1}\gg||\bm{y}_{1}||^{2}$, which is verified below, see (3.35). Expanding the expression in the middle of (3.29), we find that it is enough to show that $2|x_{1}\bm{y}_{1}\prescript{t}{}{\\!\bm{y}}|\leq(1-c_{1})\left(||\bm{y}_{1}||^{2}x_{1}^{2}+||\bm{y}||^{2}\right),$ (3.30) and $2|x_{1}\bm{y}_{1}\prescript{t}{}{\\!\bm{y}}|\leq(c_{2}-1)\left(||\bm{y}_{1}||^{2}x_{1}^{2}+||\bm{y}||^{2}\right).$ (3.31) The upper bound (3.31) is trivial if $c_{2}=2$, and the upper bound (3.30) would follow from $|\bm{y}_{1}\prescript{t}{}{\\!\bm{y}}|\leq(1-c_{1})||\bm{y}_{1}||\;||\bm{y}||.$ (3.32) We let $0<\phi_{1}<\pi$ denote the angle between $\bm{y}_{1}$ and $\bm{y}$ and $0<\phi_{2}<\frac{\pi}{2}$ denote the angle between $\bm{y}_{1}$ and the hyperplane $\mathrm{span}(\bm{y}_{2},\dots,\bm{y}_{n})$. 
We have $\phi_{2}\leq\mathrm{min}(\phi_{1},\pi-\phi_{1})$, and so $|\cos\phi_{1}|\leq|\cos\phi_{2}|$. We bound $\cos\phi_{2}$ away from $1$ by bounding $\sin\phi_{2}$ away from $0$. We have $|\sin\phi_{2}|=\frac{||\bm{y}_{1}\wedge\cdots\wedge\bm{y}_{n}||}{||\bm{y}_{1}||\;||\bm{y}_{2}\wedge\cdots\wedge\bm{y}_{n}||}=\frac{v_{1}^{\frac{1}{2}}}{||\bm{y}_{1}||},$ (3.33) so it suffices to show that $v_{1}^{\frac{1}{2}}\gg||\bm{y}_{1}||$. Here $\wedge$ denotes the usual wedge product on $\mathbb{R}^{n}$ and the norm on $\bigwedge^{k}\mathbb{R}^{n}$ is given by $||\bm{a}_{1}\wedge\cdots\wedge\bm{a}_{k}||^{2}=\det\begin{pmatrix}\bm{a}_{1}\\\ \vdots\\\ \bm{a}_{k}\end{pmatrix}\begin{pmatrix}\prescript{t}{}{\bm{a}}_{1}&\cdots&\prescript{t}{}{\bm{a}}_{k}\end{pmatrix}.$ (3.34) Using the inductive construction of $\mathcal{D}_{n}$ and the fact that the entries of $\bm{r}_{1}(Y),\bm{r}_{1}(Y_{1}),\dots$ are at most $\frac{1}{2}$ in absolute value, we observe that $U$ has entries bounded by a constant depending only on $n$. We find that $||\bm{y}_{1}||^{2}\ll v_{1}+\cdots+v_{n}\ll v_{1}$ (3.35) with the implied constant depending on $n$. ∎ ### 3.2 Shape of the cusp As explicated in [1] and [2], the cusp of $\Gamma\backslash G$ can be partitioned into $2^{n}-1$ box-shaped regions. These regions are in correspondence with the conjugacy classes of proper parabolic subgroups of $G$ and are formed as $K$ times the product of three subsets, one for each of the components – nilpotent, diagonal, and semisimple – of the Langlands decomposition of $P$. In what follows we use the fundamental domain $\mathcal{D}_{n}$ constructed in section 3.1 to prove a variation of this fact, although only for the maximal parabolic subgroups (2.17), (2.18). Our main result for this section is proposition 3.6, which roughly states that if $g\in G$ is close enough the boundary in a precise sense, then $g$ can be brought into $\mathcal{D}_{n}$ by an element $\gamma$ in some maximal parabolic subgroup which depends on the way $g$ approaches the boundary. For $1\leq l<n$ we denote by $\Gamma_{l,1}$ and $\Gamma_{l,2}$ the subgroups of $\Gamma_{l}=\Gamma\cap P_{l}$ given by $\Gamma_{l,1}=\left\\{\begin{pmatrix}A&0&0&0\\\ 0&I&0&0\\\ 0&0&\prescript{t}{}{A}^{-1}&0\\\ 0&0&0&I\end{pmatrix}:A\in\mathrm{GL}(l,\mathbb{Z})\right\\}$ (3.36) and $\Gamma_{l,2}=\left\\{\begin{pmatrix}I&0&0&0\\\ 0&A&0&B\\\ 0&0&I&0\\\ 0&C&0&D\end{pmatrix}:\begin{pmatrix}A&B\\\ C&D\end{pmatrix}\in\mathrm{Sp}(n-l,\mathbb{Z})\right\\}.$ (3.37) For $l=n$, we set $\Gamma_{n,1}=\left\\{\begin{pmatrix}A&0\\\ 0&\prescript{t}{}{A}^{-1}\end{pmatrix}:A\in\mathrm{GL}(n,\mathbb{Z})\right\\},$ (3.38) and we let $\Gamma_{n,2}$ be trivial. We now define for $g\in G$ and $1\leq l\leq n$, $v_{l}(\Gamma_{l}g)=\min_{\gamma\in\Gamma_{l}}v_{l}(\gamma g)=\min_{\gamma\in\Gamma_{l,1}}v_{l}(\gamma g)$ (3.39) and, for $1\leq l<n$, $v_{l+1}(\Gamma_{l}g)=\max_{\gamma\in\Gamma_{l}}v_{l+1}(\gamma g)=\max_{\gamma\in\Gamma_{l,2}}v_{l+1}(\gamma g).$ (3.40) We note that in the proof of proposition 3.2, we saw that the maximum in (3.40) does exist. As for the minimum in (3.39), we simply note that $v_{l}(AU_{l}V_{l}\prescript{t}{}{U}_{l}\prescript{t}{}{A})=\bm{a}U_{l}V_{l}\prescript{t}{}{U_{l}}\prescript{t}{}{\bm{a}}$ (3.41) where $\bm{a}$ is the last row of $A\in\mathrm{GL}(l,\mathbb{Z})$, so the positive definiteness of $U_{l}V_{l}\prescript{t}{}{U}_{l}$ implies that there are only finitely many values of $v_{l}(AU_{l}V_{l}\prescript{t}{}{U}_{l}\prescript{t}{}{A})$ below a given bound. 
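The identity (3.41) reflects the fact that the last row of $U_{l}$ is the last standard basis vector, so $v_{l}(Y^{\prime})$ is just $\bm{e}_{l}Y^{\prime}\prescript{t}{}{\bm{e}}_{l}$ for any positive definite $Y^{\prime}$. The short Python sketch below (ours, for illustration only) checks this numerically and illustrates the finiteness statement by counting the integer vectors $\bm{a}$ with $\bm{a}Y\prescript{t}{}{\bm{a}}$ below a fixed bound.

```python
import numpy as np

def upper_cholesky(Y):
    J = np.eye(len(Y))[::-1]
    return J @ np.linalg.cholesky(J @ Y @ J) @ J

def v_last(Y):
    # the last diagonal entry of V in the decomposition Y = U V U^T
    return np.diag(upper_cholesky(Y))[-1] ** 2

rng = np.random.default_rng(2)
l = 3
M = rng.standard_normal((l, l))
Y = M @ M.T + np.eye(l)                            # positive definite
A = np.array([[1, 0, 0], [2, 1, 0], [3, 4, 1]])    # in GL(3, Z): lower unipotent, det 1
a = A[-1]                                          # last row of A
assert np.isclose(v_last(A @ Y @ A.T), a @ Y @ a)  # the identity (3.41)

# only finitely many integer vectors a give a Y a^T below a fixed bound
bound = 10.0
rad = int(np.ceil(np.sqrt(bound / np.linalg.eigvalsh(Y)[0])))
count = sum(1 for idx in np.ndindex(*(2 * rad + 1,) * l)
            if (vec := np.array(idx) - rad).any() and vec @ Y @ vec <= bound)
print(count)
```

The brute-force count is only meant to illustrate finiteness; the systematic way of organizing the action of $\mathrm{GL}(l,\mathbb{Z})$ is precisely the fundamental domain $\mathcal{D}_{l}^{\prime}$ introduced next.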
We now define a fundamental domain $\mathcal{D}_{l}^{\prime}$ for the action of $\mathrm{GL}(l,\mathbb{Z})$ on $l\times l$ positive definite symmetric matrices. We set $\mathcal{D}_{1}^{\prime}=\\{y>0\\}$ and $\mathcal{D}_{2}^{\prime}=\left\\{\begin{pmatrix}1&r\\\ 0&1\end{pmatrix}\begin{pmatrix}v_{1}&0\\\ 0&v_{2}\end{pmatrix}\begin{pmatrix}1&0\\\ r&1\end{pmatrix}:0\leq r\leq\frac{1}{2},\ r^{2}+\frac{v_{1}}{v_{2}}\geq 1\right\\},$ (3.42) the standard fundamental domain for $\mathrm{GL}(2,\mathbb{Z})$ acting on $2\times 2$ positive definite symmetric matrices. The domain $\mathcal{D}_{l}^{\prime}$ for $l>2$ is then defined inductively as the set of all $Y=\begin{pmatrix}1&\bm{r}\\\ 0&I\end{pmatrix}\begin{pmatrix}v_{1}&0\\\ 0&Y_{1}\end{pmatrix}\begin{pmatrix}1&0\\\ \bm{r}&1\end{pmatrix}$ (3.43) such that 1. 1. $v_{1}(Y)\geq v_{1}(AY\prescript{t}{}{A})$ for all $A\in\mathrm{GL}(l,\mathbb{Z})$, 2. 2. $Y_{1}\in\mathcal{D}_{l-1}^{\prime}$, and 3. 3. $|r_{j}|\leq\frac{1}{2}$ and $0\leq r_{1}\leq\frac{1}{2}$ where $r_{j}$ are the entries of $\bm{r}$. This is in fact the set of $Y$ such that $Y^{-1}$ is in Grenier’s fundamental domain, see [6] and [12], so we do not prove that $\mathcal{D}_{l}^{\prime}$ is a fundamental domain here. We do however record the following properties of $\mathcal{D}_{l}^{\prime}$. ###### Lemma 3.4. Let $UV\prescript{t}{}{U}\in\mathcal{D}_{l}^{\prime}$ with $V=\begin{pmatrix}v_{1}&\cdots&0\\\ \vdots&\ddots&\vdots\\\ 0&\cdots&v_{l}\end{pmatrix}$ (3.44) positive diagonal and $U$ upper triangular unipotent. Then we have 1. 1. $v_{j}\geq\frac{3}{4}v_{j+1}$ for $1\leq j<l$, 2. 2. for any $\bm{x}\in\mathbb{R}^{l}$, $\bm{x}UV\prescript{t}{}{U}\prescript{t}{}{\bm{x}}\asymp\bm{x}V\prescript{t}{}{\bm{x}}$ (3.45) with implied constant depending only on $l$, and 3. 3. $\min_{A\in\mathrm{GL}(l,\mathbb{Z})}v_{l}(AUV\prescript{t}{}{U}\prescript{t}{}{A})\asymp v_{l}(UV\prescript{t}{}{U})$ (3.46) with implied constant depending only on $l$. ###### Proof. The first and second parts are proved in proposition 3.1 of [10]. To prove the third part, we note that with $\bm{a}$ the last row of $A$, $v_{l}(AUV\prescript{t}{}{U}\prescript{t}{}{A})=\bm{a}UV\prescript{t}{}{U}\prescript{t}{}{\bm{a}}\gg\bm{a}V\prescript{t}{}{\bm{a}},$ (3.47) by the second part of the lemma. Applying the first part of the lemma we have $\bm{a}V\prescript{t}{}{\bm{a}}\gg v_{l}||\bm{a}||^{2}\geq v_{l}$, and (3.46) follows. ∎ As the proof is almost identical to the proof of the third part of lemma 3.4, we record the following lemma for later use. ###### Lemma 3.5. If $g\in\mathcal{D}_{n}$, then for all $1\leq l<n$, $v_{l}(\Gamma_{l}g)\asymp v_{l}(g)$ (3.48) with the implied constant depending only on $n$. ###### Proof. We recall from the second part of proposition 3.3 that for $\bm{x}\in\mathbb{R}^{l}$, $\bm{x}U_{l}V_{l}\prescript{t}{}{U}_{l}\prescript{t}{}{\bm{x}}\gg\bm{x}V_{l}\prescript{t}{}{\bm{x}}.$ (3.49) We have $v_{l}(\Gamma_{l}g)=\min_{\begin{subarray}{c}\bm{c}\in\mathbb{Z}^{l}\\\ \bm{c}\neq 0\end{subarray}}\bm{c}U_{l}V_{l}\prescript{t}{}{U}_{l}\prescript{t}{}{\bm{c}}\gg\min_{\begin{subarray}{c}\bm{c}\in\mathbb{Z}^{l}\\\ \bm{c}\neq 0\end{subarray}}\bm{c}V_{l}\prescript{t}{}{\bm{c}}.$ (3.50) Now as $\bm{c}\neq 0$, we have $c_{j}^{2}\geq 1$ for some $1\leq j\leq l$, and so $v_{l}(\Gamma_{l}g)\gg v_{j}(g)\gg v_{l}(g)$ (3.51) by the first part of proposition 3.3. ∎ We are now ready to prove the main result for this section. ###### Proposition 3.6. 
For $1\leq l\leq n$, there are constants $a_{l}>0$ such that for $l<n$, if $g\in G$ satisfies $v_{l}(\Gamma_{l}g)\geq a_{l}v_{l+1}(\Gamma_{l}g)$, and for $l=n$ if $g\in G$ satisfies $v_{n}(\Gamma_{n}g)\geq a_{n}$, then there exists $\gamma\in\Gamma_{l}$ so that $\gamma g\in\mathcal{D}_{n}$. Moreover, for this $\gamma$ we have $v_{l}(\Gamma_{l}g)\asymp v_{l}(\gamma g)$ and, for $l<n$, $v_{l+1}(\Gamma_{l}g)=v_{l+1}(\gamma g)$. We remark that this proposition can be extended to any of the parabolic subgroups $P_{L}$ of $G$ by taking intersections of the maximal parabolics. However some care needs to be taken regarding the possible non-uniqueness of the $\gamma$ bringing $g$ into $\mathcal{D}_{n}$. Since it is unnecessary for our goals, we do not discuss this here. ###### Proof. By multiplying $g$ by $\gamma_{1}=\begin{pmatrix}A^{\prime}&0&0&0\\\ 0&A&0&B\\\ 0&0&\prescript{t}{}{(A^{\prime})}^{-1}&0\\\ 0&C&0&D\end{pmatrix}\in\Gamma_{l},$ (3.52) we may assume that $U_{l}V_{l}\prescript{t}{}{U}_{l}\in\mathcal{D}_{l}^{\prime}$ and $\begin{pmatrix}I&X_{l}\\\ 0&I\end{pmatrix}\begin{pmatrix}Y_{l}^{\frac{1}{2}}&0\\\ 0&\prescript{t}{}{Y_{l}}^{-\frac{1}{2}}\end{pmatrix}\in\mathcal{D}_{n-l}.$ (3.53) We recall that for $\gamma=\begin{pmatrix}A&B\\\ C&D\end{pmatrix}$, $v_{1}(\gamma g)^{-1}=\bm{c}Y\prescript{t}{}{\bm{c}}+(\bm{c}X+\bm{d})Y^{-1}\prescript{t}{}{(\bm{c}X+\bm{d})}$ (3.54) where $\bm{c}$, $\bm{d}$ are the first rows of $C$, $D$. Now, writing $\bm{c}=\begin{pmatrix}\bm{c}^{(1)}&\bm{c}^{(2)}\end{pmatrix}$, $\bm{d}=\begin{pmatrix}\bm{d}^{(1)}&\bm{d}^{(2)}\end{pmatrix}$ and $\displaystyle X=\begin{pmatrix}T_{l}+R_{l}X_{l}\prescript{t}{}{R}_{l}&S_{l}+R_{l}X_{l}\\\ \prescript{t}{}{S}_{l}+X_{l}\prescript{t}{}{R}_{l}&X_{l}\end{pmatrix},$ (3.55) $\displaystyle Y=\begin{pmatrix}U_{l}&R_{l}\\\ 0&I\end{pmatrix}\begin{pmatrix}V_{l}&0\\\ 0&Y_{l}\end{pmatrix}\begin{pmatrix}\prescript{t}{}{U}_{l}&0\\\ \prescript{t}{}{R}_{l}&I\end{pmatrix},$ (3.56) see (2.15), we obtain $\displaystyle v_{1}(\gamma g)^{-1}=$ $\displaystyle\bm{c}^{(1)}U_{l}V_{l}\prescript{t}{}{U}_{l}\prescript{t}{}{\bm{c}}^{(1)}+(\bm{c}^{(1)}R_{l}+\bm{c}^{(2)})Y_{l}\prescript{t}{}{(\bm{c}^{(1)}R_{l}+\bm{c}^{(2)})}$ $\displaystyle+\left(\bm{c}^{(1)}(T_{l}-S_{l}\prescript{t}{}{R}_{l})+\bm{c}^{(2)}\prescript{t}{}{S}_{l}+\bm{d}^{(1)}-\bm{d}^{(2)}\prescript{t}{}{R}_{l}\right)\prescript{t}{}{U}_{l}^{-1}V_{l}^{-1}U_{l}^{-1}$ $\displaystyle\qquad\prescript{t}{}{\left(\bm{c}^{(1)}(T_{l}-S_{l}\prescript{t}{}{R}_{l})+\bm{c}^{(2)}\prescript{t}{}{S}_{l}+\bm{d}^{(1)}-\bm{d}^{(2)}\prescript{t}{}{R}_{l}\right)}$ $\displaystyle+\left(\bm{c}^{(1)}(S_{l}+R_{l}X_{l})+\bm{c}^{(2)}X_{l}+\bm{d}^{(2)}\right)Y_{l}^{-1}$ $\displaystyle\qquad\prescript{t}{}{\left(\bm{c}^{(1)}(S_{l}+R_{l}X_{l})+\bm{c}^{(2)}X_{l}+\bm{d}^{(2)}\right)}.$ (3.57) If $\bm{c}^{(1)}\neq 0$, then, since $U_{l}V_{l}\prescript{t}{}{U}_{l}\in\mathcal{D}_{l}^{\prime}$, we have $v_{1}(\gamma g)^{-1}\geq\bm{c}^{(1)}U_{l}V_{l}\prescript{t}{}{U}_{l}\prescript{t}{}{\bm{c}}^{(1)}\gg\bm{c}^{(1)}V_{l}\prescript{t}{}{\bm{c}}^{(1)}\gg v_{l}$ (3.58) by the second part of lemma 3.4. Since, for $l<n$, $\begin{pmatrix}I&X_{l}\\\ 0&I\end{pmatrix}\begin{pmatrix}Y_{l}^{\frac{1}{2}}&0\\\ 0&\prescript{t}{}{Y_{l}}^{-\frac{1}{2}}\end{pmatrix}\in\mathcal{D}_{n-l},$ (3.59) we have $v_{l+1}\gg 1$, see proposition 3.3, and so $v_{l}\gg a_{l}$ by the hypothesis. For $l=n$, we directly have $v_{n}\gg a_{n}$ by hypothesis. 
Since also $v_{1}\gg v_{l}$ by lemma 3.4, we have $v_{1}v_{l}\gg a_{l}^{2}$, so by taking $a_{l}$ to be a sufficiently large constant, it follows that $v_{1}\geq v_{1}(\gamma g)$. For $l<n$, if $\bm{c}^{(1)}=0$ but $\begin{pmatrix}\bm{c}^{(2)}&\bm{d}^{(2)}\end{pmatrix}\neq 0$, then we have $v_{1}(\gamma g)^{-1}\geq\bm{c}^{(2)}Y_{l}\prescript{t}{}{\bm{c}}^{(2)}+(\bm{c}^{(2)}X_{l}+\bm{d}^{(2)})Y_{l}^{-1}\prescript{t}{}{(\bm{c}^{(2)}X_{l}+\bm{d}^{(2)})}\geq v_{l+1}(g)^{-1}$ (3.60) since $g_{l}(g)\in\mathcal{D}_{n-l}$. We have $v_{l+1}^{-1}\geq a_{l}v_{l}^{-1}\gg a_{l}v_{1}^{-1}$, so $v_{l+1}^{-1}\geq v_{1}^{-1}$ for $a_{l}$ sufficiently large, and it follows that $v_{1}\geq v_{1}(\gamma g)$. Now, if $l=n$ or if $\bm{c}^{(1)}$, $\bm{c}^{(2)}$, and $\bm{d}^{(2)}$ are all $0$, then we have $\bm{d}^{(1)}\neq 0$ and $v_{1}(\gamma g)^{-1}=\bm{d}^{(1)}\prescript{t}{}{U}_{l}^{-1}V_{l}^{-1}U_{l}^{-1}\prescript{t}{}{\bm{d}}^{(1)}\geq v_{1}^{-1}$ (3.61) as $U_{l}V_{l}\prescript{t}{}{U_{l}}\in\mathcal{D}_{l}^{\prime}$. We have verified that for any $\gamma\in\Gamma$, $v_{1}\leq v_{1}(\gamma g)$, which is the first condition defining the fundamental domain $\mathcal{D}_{n}$. Restricting to $\gamma\in\Gamma_{1}$, which fixes $v_{1}(g)$, the same argument as above shows that $v_{2}(g)\geq v_{2}(\gamma g)$ for all $\gamma\in\Gamma_{1}$. Continuing this way, we find that the $v_{j}$, $1\leq j\leq l$ are all maximal (over $\Gamma_{j,2}$), and so, by the construction of $\mathcal{D}_{n}$, there is a $\gamma\in\Gamma_{l}$ with the form $\gamma=\begin{pmatrix}A&B\\\ 0&\prescript{t}{}{A}^{-1}\end{pmatrix},$ (3.62) where $A$ is upper-triangular unipotent (so $\gamma\in\Gamma_{l}$ for all $l$) such that $\gamma g\in\mathcal{D}_{n}$. ∎ ## 4 Proof of the main theorem In the following subsection we gather some technical lemmas regarding the height function needed in the proof of theorem 1.2, see section 4.2. This height function is motivated by the following corollary from [10]. ###### Corollary 4.1. For a Schwartz function $f\in\mathcal{S}(\mathbb{R}^{n})$ and $(h,g)\in\tilde{\mathcal{D}}$, and $A>0$, we have $\Theta_{f}(h,g)\ll_{f,A}(\det Y)^{\frac{1}{4}}(1+\bm{x}Y\prescript{t}{}{\\!\bm{x}})^{-A}$ (4.1) where $g=\begin{pmatrix}I&X\\\ 0&I\end{pmatrix}\begin{pmatrix}Y^{\frac{1}{2}}&0\\\ 0&\prescript{t}{}{Y}^{-\frac{1}{2}}\end{pmatrix}k(Q).$ (4.2) We remark that in [10] this is obtained as a consequence of full asymptotics of the theta function in the various cuspidal regions. We also remark that in [10] we use a slightly different fundamental domain, however an examination of the proof there shows that the fundamental domain can be replaced by any set satisfying the conclusions of proposition 3.3. ### 4.1 Heights and volumes For a fixed $A>0$ sufficiently large depending only on $n$, we define the function $D:\tilde{\Gamma}\backslash(H\rtimes G)\to\mathbb{R}_{>0}$ by $D\left(\tilde{\Gamma}(h,g)\right)=\det Y(\gamma g)\left(1+\bm{x}(uh_{\gamma}h^{\gamma^{-1}})Y(\gamma g)\prescript{t}{}{\bm{x}}(uh_{\gamma}h^{\gamma^{-1}})\right)^{-A}$ (4.3) where $(uh_{\gamma},\gamma)\in\tilde{\Gamma}$ is so that $(uh_{\gamma},\gamma)(h,g)\in\tilde{\mathcal{D}}$. Here we write $h\in H$ as $h=(\bm{x}(h),\bm{y}(h),t(h))$. For completeness, in case there are more than one $(uh_{\gamma},\gamma)\in\tilde{\Gamma}$ such that $(uh_{\gamma},\gamma)(h,g)\in\tilde{\mathcal{D}}$, then we define $D\left(\tilde{\Gamma}(h,g)\right)$ to be the largest of the finite number of values (4.3). 
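To make the definition concrete: once a representative has been moved into $\tilde{\mathcal{D}}$, evaluating (4.3) only involves the Iwasawa matrix $Y$ and the Heisenberg coordinate $\bm{x}$. The sketch below (ours) evaluates the height in this reduced situation, with the exponent $A$ kept as a parameter; in the rare case of several reducing elements one would, as just said, take the largest of the resulting values.

```python
import numpy as np

def height_D(x, Y, A):
    # the value (4.3) at a point already reduced to the fundamental domain,
    # i.e. with gamma and u already applied; x is the Heisenberg x-coordinate
    return np.linalg.det(Y) * (1.0 + x @ Y @ x) ** (-A)

# a toy evaluation in the cusp: Y diagonal with decreasing entries, x small
Y = np.diag([25.0, 4.0, 1.0])
x = np.array([0.1, -0.3, 0.2])
A = 8                              # "A > 0 sufficiently large" is not pinned down in the text
print(height_D(x, Y, A) ** 0.25)   # D^{1/4} is the quantity that appears in (4.40)
```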
This point is not essential as these values are within constant multiples of each other; see the argument in lemma 4.4 for how this can be proved. We begin by analyzing the growth of the height function. We let $\tilde{\mu}$ denote the Haar probability measure on $\tilde{\Gamma}\backslash(H\rtimes G)$, which is $\mu$, the Haar probability measure on $\Gamma\backslash G$, times the Lebesgue measure on the entries of $h=(\bm{x},\bm{y},t)$. ###### Lemma 4.2. For $R\geq 1$ we have $\tilde{\mu}(\\{\tilde{\Gamma}(h,g)\in\tilde{\Gamma}\backslash(H\rtimes G):D(\tilde{\Gamma}(h,g))\geq R\\})\ll R^{-\frac{n+2}{2}}$ (4.4) with the implied constant depending only on $n$. ###### Proof. We recall that $g\in\mathcal{D}_{n}$ is written as $g=\begin{pmatrix}U&X\prescript{t}{}{U}^{-1}\\\ 0&\prescript{t}{}{U}^{-1}\end{pmatrix}\begin{pmatrix}V^{\frac{1}{2}}&0\\\ 0&V^{-\frac{1}{2}}\end{pmatrix}k(Q)$ (4.5) for $U$ upper-triangular unipotent, $X$ symmetric, $Q\in\mathrm{U}(n)$, and $V=V(g)=\begin{pmatrix}v_{1}&\cdots&0\\\ \vdots&\ddots&\vdots\\\ 0&\cdots&v_{n}\end{pmatrix}$ (4.6) positive diagonal. The Haar measure $\mu$ on $G$ is then proportional to Lebesgue measure with respect to the entries of $X$ and the off-diagonal entries of $U$, $\mathrm{U}(n)$-Haar measure on $Q$, and the measure given by $v_{1}^{-n-1}v_{2}^{-n}\cdots v_{n}^{-2}\differential v_{1}\differential v_{2}\cdots\differential v_{n}$ (4.7) on $V$. By proposition 3.3, we observe that the set in (4.4) is contained in the set of $(h,g)$ satisfying $v_{j}\geq cv_{j+1}$ for all $1\leq j<n$ and some $c>0$ in addition to $\det Y\geq R$ and $\bm{x}Y\prescript{t}{}{\bm{x}}\leq R^{-\frac{1}{A}}(\det Y)^{\frac{1}{A}}$. Moreover, the variables $\bm{x},\bm{y},t$ as well as $U$, $X$ are constrained to compact sets, and so the measure of the set (4.4) is $\ll R^{-\epsilon}\underset{\begin{subarray}{c}v_{j}\geq cv_{j+1}\\\ v_{1}\cdots v_{n}\geq R\end{subarray}}{\int\cdots\int}v_{1}^{-n-\frac{3}{2}+\epsilon}v_{2}^{-n-\frac{1}{2}+\epsilon}\cdots v_{n}^{-\frac{5}{2}+\epsilon}\differential v_{1}\differential v_{2}\cdots\differential v_{n},$ (4.8) where $\epsilon=\frac{n}{2A}$. Changing variables $v_{j}=\exp(u_{j})$, the integral in (4.8) is $R^{-\epsilon}\underset{\begin{subarray}{c}u_{j}-u_{j+1}\geq\log c\\\ u_{1}+\cdots+u_{n}\geq\log R\end{subarray}}{\int\cdots\int}\exp\big(-(n+\tfrac{1}{2}-\epsilon)u_{1}-(n-\tfrac{1}{2}-\epsilon)u_{2}-\cdots-(\tfrac{3}{2}-\epsilon)u_{n}\big)\differential u_{1}\differential u_{2}\cdots\differential u_{n}.$ (4.9) We now make the linear change of variables $s_{j}=u_{j}-u_{j+1}$ for $j<n$ and $s_{n}=u_{1}+\cdots+u_{n}$. This transformation has determinant $n$ and its inverse is given by $u_{j}=-\frac{1}{n}\sum_{1\leq i<j}is_{i}+\frac{1}{n}\sum_{j\leq i<n}(n-i)s_{i}+\frac{1}{n}s_{n}.$ (4.10) We find that the exponent in (4.9) is then $-\sum_{1\leq j\leq n}(n-j+\tfrac{3}{2}-\epsilon)u_{j}=-\left(\frac{n+2}{2}-\epsilon\right)s_{n}-\sum_{1\leq j<n}\frac{j(n-j)}{2}s_{j}.$ (4.11) As $\frac{j(n-j)}{2}>0$ for $j<n$, the bound (4.4) follows. ∎ Lemma 4.4 below contains a key estimate, establishing a kind of ‘uniform continuity’ for $\log D$. The proof of this lemma is the primary motivation for defining our new fundamental domain and studying the shape of its cusp in sections 3.1 and 3.2. For the proof, we first establish a similar kind of ‘uniform continuity’ for the functions $v_{l}(\Gamma_{l}g)$ and $v_{l+1}(\Gamma_{l}g)$ that are essential to section 3.2. ###### Lemma 4.3. 
Let $g,g_{0}\in G$ with $||g_{0}-I||\leq 1$, then $v_{l}(g)\asymp v_{l}(gg_{0}),\ v_{l}(\Gamma_{l}g)\asymp v_{l}(\Gamma_{l}gg_{0}),\ v_{l+1}(\Gamma_{l}g)\asymp v_{l+1}(\Gamma_{l}gg_{0})$ (4.12) for all $1\leq l\leq n$ with implied constants depending only on $n$. ###### Proof. We first note that we may in fact work with $||I-g_{0}||\leq\epsilon$ as then the statement would follow by repeated application of the estimates. In fact, we may assume $||I-g_{0}^{-1}||\leq\epsilon$ as well. Now write $g=\begin{pmatrix}I&X\\\ 0&I\end{pmatrix}\begin{pmatrix}Y^{\frac{1}{2}}&0\\\ 0&\prescript{t}{}{Y}^{-\frac{1}{2}}\end{pmatrix}\begin{pmatrix}R&-S\\\ S&R\end{pmatrix},$ (4.13) with $R+\mathrm{i}S\in\mathrm{U}(n)$, so in particular $R\prescript{t}{}{R}+S\prescript{t}{}{S}=I$. With $g_{0}=\begin{pmatrix}A&B\\\ C&D\end{pmatrix}$, we have from (2) that $Y(gg_{0})^{-1}=\prescript{t}{}{Y}^{-\frac{1}{2}}\big{(}SA\prescript{t}{}{A}\prescript{t}{}{S}+RC\prescript{t}{}{A}\prescript{t}{}{S}+SA\prescript{t}{}{C}\prescript{t}{}{R}+RC\prescript{t}{}{C}R\\\ +SB\prescript{t}{}{B}\prescript{t}{}{S}+RD\prescript{t}{}{B}\prescript{t}{}{S}+SB\prescript{t}{}{D}\prescript{t}{}{R}+RD\prescript{t}{}{D}\prescript{t}{}{R}\big{)}Y^{-\frac{1}{2}}.$ (4.14) As $||g_{0}-I||\leq\epsilon$, we have $\prescript{t}{}{Y}(gg_{0})^{-\frac{1}{2}}=\prescript{t}{}{Y}^{-\frac{1}{2}}(I+O(\epsilon)).$ (4.15) On the other hand, letting $\bm{y}_{j}$ and $\bm{y}_{J}^{\prime}$ denote the rows of $\prescript{t}{}{Y}^{-\frac{1}{2}}$ and $\prescript{t}{}{Y}(gg_{0})^{-\frac{1}{2}}$, we have $v_{1}(g)^{-\frac{1}{2}}=||\bm{y}_{1}||,\quad v_{1}(gg_{0})^{-\frac{1}{2}}=||\bm{y}_{1}^{\prime}||$ (4.16) and for $2\leq l\leq n$, $v_{l}(g)^{-\frac{1}{2}}=\frac{||\bm{y}_{1}\wedge\cdots\wedge\bm{y}_{l}||}{||\bm{y}_{1}\wedge\cdots\wedge\bm{y}_{l-1}||},\quad v_{l}(gg_{0})^{-\frac{1}{2}}=\frac{||\bm{y}_{1}^{\prime}\wedge\cdots\wedge\bm{y}_{l}^{\prime}||}{||\bm{y}_{1}^{\prime}\wedge\cdots\wedge\bm{y}_{l-1}^{\prime}||},$ (4.17) and so $v_{l}(g)\asymp v_{l}(gg_{0})$ follows. Now let $\gamma\in\Gamma_{l}$ be so that $v_{l}(\Gamma_{l}g)=v_{l}(\gamma g)$. We have $v_{l}(\Gamma_{l}gg_{0})\leq v_{l}(\gamma gg_{0})\ll v_{l}(\gamma g)=v_{l}(\Gamma_{l}g),$ (4.18) and the reverse bound follows by switching the roles of $g$ and $gg_{0}$, and using $||g_{0}^{-1}-I||\leq\epsilon$. The final estimate in (4.12) is proved in the same way. ∎ ###### Lemma 4.4. If $(h,g),(h_{0},g_{0})\in G$ with $||g_{0}-I||\leq 1$ and $h_{0}=(\bm{x}_{0},\bm{y}_{0},t_{0})$ satisfies $||\bm{x}_{0}||,||\bm{y}_{0}||\leq 1$, then $D(\tilde{\Gamma}(h,g))\asymp D(\tilde{\Gamma}(h,g)(h_{0},g_{0})).$ (4.19) ###### Proof. We observe as in lemma 4.3, we may in fact assume $||g_{0}-I||\leq\epsilon,\ ||\bm{x}_{0}||\leq\epsilon,\ \mathrm{and\ }||\bm{y}_{0}||\leq\epsilon.$ (4.20) Moreover, it suffices to show that $D(\tilde{\Gamma}(h,g)(h_{0},g_{0}))\gg D(\tilde{\Gamma}(h,g))$ as the other inequality follows from switching $(h,g)$ and $(h,g)(h_{0},g_{0})$ as we may assume in addition that $(h_{0},g_{0})^{-1}=(h_{0}^{-g_{0}},g_{0}^{-1})$ also satisfies (4.20). Now let us suppose that $(h,g)\in\tilde{\mathcal{D}}$ so that $D(\tilde{\Gamma}(h,g))=(\det Y(g))(1+\bm{x}(h)Y(g)\prescript{t}{}{\bm{x}(h)})^{-A}.$ (4.21) Let $1\leq l\leq n$ be the largest index such that $v_{l}(g)\geq av_{l+1}(g)$ (or $v_{n}(g)\geq a$ when $l=n$) where $a$ is a constant determined by the constants in proposition 3.6 and lemma 4.3. 
If no such $l$ exists, then we have $v_{j}(g)\asymp 1$ for all $j$, and lemma 4.3 implies that $v_{j}(gg_{0})\asymp 1$ as well. The bounds $D(\tilde{\Gamma}(h,g)(h_{0},g_{0}))\gg 1\gg D(\tilde{\Gamma}(h,g))$ (4.22) then follow immediately. Now assuming that such a maximal $l$ exists, we have that $v_{j}(g)\asymp 1$ for all $j>l$. For these $j$, lemma 4.3 then implies that $v_{j}(gg_{0})\asymp 1$, and it follows that $v_{j}(\gamma gg_{0})\asymp 1$ for $\gamma\in\Gamma_{l}$ such that $g_{l}(\gamma gg_{0})\in\mathcal{D}_{n-l}$, see (2.16). By lemma 3.5, we have $v_{l}(\Gamma_{l}g)\gg v_{l}(g)$, and so $v_{l}(\Gamma_{l}g)\gg av_{l+1}(g)=av_{l+1}(\Gamma_{l}g)$ (4.23) since $g_{l}(g)\in\mathcal{D}_{n-l}$. Via lemma 4.3, this implies that $v_{l}(\Gamma_{l}gg_{0})\gg av_{l+1}(\Gamma_{l}gg_{0})$, so $a$ can be chosen large enough so that $gg_{0}$ satisfies the hypotheses of proposition 3.6, and we let $\gamma\in\Gamma_{l}$ be so that $\gamma gg_{0}\in\mathcal{D}$. We write $\gamma=\begin{pmatrix}A_{1}&*&*&*\\\ 0&*&*&*\\\ 0&0&*&0\\\ 0&*&*&*\end{pmatrix},$ (4.24) where $A_{1}\in\mathrm{GL}(l,\mathbb{Z})$. From the estimates above, we have $\det Y(\gamma gg_{0})\asymp\det U_{l}(\gamma gg_{0})V_{l}(\gamma gg_{0})\prescript{t}{}{U_{l}}(\gamma gg_{0})=\det U_{l}(gg_{0})V_{l}(gg_{0})\prescript{t}{}{U_{l}}(gg_{0})\\\ \asymp\det U_{l}(g)V_{l}(g)\prescript{t}{}{U}_{l}(g)\asymp\det Y(g),$ (4.25) where the equality follows from the fact that $\gamma\in\Gamma_{l}$ normalizes the first matrix in (2.15) and $\det A_{1}=\pm 1$. It now remains to consider the factors $1+\bm{x}(*)Y(*)\prescript{t}{}{\bm{x}(*)}$ in the definition of the height function $D$. Let $u=(\bm{m},\bm{n},0)$ with $\bm{m},\bm{n}\in\mathbb{Z}^{n}$ be so that $(uh_{\gamma},\gamma)(h,g)(h_{0},g_{0})\in\tilde{\mathcal{D}}$. Recalling the definition of $h_{\gamma}=(\bm{r},\bm{s},0)$ following (3.1), we have that $\bm{r}^{(1)}=0$ where $\bm{r}=\begin{pmatrix}\bm{r}^{(1)}&\bm{r}^{(2)}\end{pmatrix}$. Moreover, writing $\bm{x}=\begin{pmatrix}\bm{x}^{(1)}&\bm{x}^{(2)}\end{pmatrix}$, we have $\bm{x}^{(1)}((hh_{0}^{g^{-1}})^{\gamma^{-1}})=\bm{x}^{(1)}(hh_{0}^{g^{-1}})A_{1}^{-1}$. 
Using proposition 3.3 together with the fact that $u$ minimizes the absolute values of the entries of $\bm{x}(uh_{\gamma}(hh_{0}^{g^{-1}})^{\gamma^{-1}})$, we have $1+\bm{x}(uh_{\gamma}(hh_{0}^{g^{-1}})^{\gamma^{-1}})Y(\gamma gg_{0})\prescript{t}{}{\bm{x}}(uh_{\gamma}(hh_{0}^{g^{-1}})^{\gamma^{-1}})\\\ \ll 1+\bm{x}(h_{\gamma}(hh_{0}^{g^{-1}})^{\gamma^{-1}})Y(\gamma gg_{0})\prescript{t}{}{\bm{x}}(h_{\gamma}(hh_{0}^{g^{-1}})^{\gamma^{-1}}),$ (4.26) and from the estimates above on the $v_{j}(\gamma gg_{0})$ for $j>l$, we have $1+\bm{x}(h_{\gamma}(hh_{0}^{g^{-1}})^{\gamma^{-1}})Y(\gamma gg_{0})\prescript{t}{}{\bm{x}}(h_{\gamma}(hh_{0}^{g^{-1}})^{\gamma^{-1}})\\\ \asymp 1+\bm{x}^{(1)}(h_{\gamma}(hh_{0}^{g^{-1}})^{\gamma^{-1}})U_{l}(\gamma gg_{0})V_{l}(\gamma gg_{0})\prescript{t}{}{U_{l}}(\gamma gg_{0})\prescript{t}{}{\bm{x}}(h_{\gamma}(hh_{0}^{g^{-1}})^{\gamma^{-1}}).$ (4.27) Using the expressions for $h_{\gamma}$, $(hh_{0}^{g^{-1}})^{\gamma^{-1}}$, and that $U_{l}(\gamma gg_{0})V_{l}(\gamma gg_{0})\prescript{t}{}{U_{l}}(\gamma gg_{0})=A_{1}U_{l}(gg_{0})V_{l}(gg_{0})\prescript{t}{}{U_{l}}(gg_{0})\prescript{t}{}{A}_{1},$ (4.28) the right side of (4.27) is equal to $1+\bm{x}^{(1)}(hh_{0}^{g^{-1}})U_{l}(gg_{0})V_{l}(gg_{0})\prescript{t}{}{U}_{l}(gg_{0})\prescript{t}{}{\bm{x}}^{(1)}(hh_{0}^{g^{-1}})\asymp 1+\bm{x}(hh_{0}^{g^{-1}})Y(gg_{0})\prescript{t}{}{\bm{x}}(hh_{0}^{g^{-1}})$ (4.29) by the above bounds on $v_{j}(gg_{0})$ for $j>l$. Recalling that $g=\begin{pmatrix}I&X(g)\\\ 0&I\end{pmatrix}\begin{pmatrix}Y(g)^{\frac{1}{2}}&0\\\ 0&\prescript{t}{}{Y}(g)^{-\frac{1}{2}}\end{pmatrix}k(g)$ (4.30) with $k(g)\in K=G\cap\mathrm{SO}(2n,\mathbb{R})$, we set $h_{0}^{\prime}=h_{0}^{k(g)^{-1}}$ and note that $||\bm{x}(h_{0}^{\prime})||^{2}+||\bm{y}(h_{0}^{\prime})||^{2}=||\bm{x}(h_{0})||^{2}+||\bm{y}(h_{0})||^{2}.$ (4.31) Since $Y(gg_{0})=Y(g)^{\frac{1}{2}}Y(k(g)g_{0})\prescript{t}{}{Y}(g)^{\frac{1}{2}}$ and $\bm{x}(hh_{0}^{g^{-1}})=\bm{x}(h)+\bm{x}(h_{0}^{\prime})Y(g)^{-\frac{1}{2}}$, the right side of (4.29) is equal to $1+\bm{x}(h)Y(g)^{\frac{1}{2}}Y(k(g)g_{0})\prescript{t}{}{Y}(g)^{-\frac{1}{2}}\prescript{t}{}{\bm{x}}(h)\\\ +2\bm{x}(h)Y(g)^{\frac{1}{2}}Y(k(g)g_{0})\prescript{t}{}{\bm{x}}(h_{0}^{\prime})+\bm{x}(h_{0}^{\prime})Y(k(g)g_{0})\prescript{t}{}{\bm{x}}(h_{0}^{\prime}).$ (4.32) We have that $||g_{0}-I||\leq\epsilon$ implies $Y(k(g)g_{0})=I+O(\epsilon)$ as in (4.14), so if (4.31) is at most $\epsilon^{2}$ as well, with $\epsilon$ sufficiently small, then (4.32) is $\asymp 1+\bm{x}(h)Y(g)\prescript{t}{}{\bm{x}}(h),$ (4.33) where we have used $2|\bm{x}(h)Y(g)^{\frac{1}{2}}Y(k(g)g_{0})\prescript{t}{}{\bm{x}}(h_{0}^{\prime})|\\\ \leq\sqrt{\bm{x}(h_{0}^{\prime})Y(k(g)g_{0})^{2}\prescript{t}{}{\bm{x}}(h_{0}^{\prime})}\left(\bm{x}(h)Y(g)\prescript{t}{}{\bm{x}}(h)+1\right)\ll\epsilon\left(\bm{x}(h)Y(g)\prescript{t}{}{\bm{x}}(h)+1\right)$ (4.34) to bound the third term in (4.32). The bound $D(\tilde{\Gamma}(h,g)(h_{0},g_{0}))\gg D(\tilde{\Gamma}(h,g)$ now follows. ∎ ### 4.2 Proof of theorem 1.2 We recall the following lemma from [10]. ###### Lemma 4.5. There exists a smooth, compactly supported function $f_{1}:\mathbb{R}\to\mathbb{R}_{\geq 0}$ such that $\chi_{1}(x)=\sum_{j\geq 0}\left(f_{1}\left(2^{j}x\right)+f_{1}\left(2^{j}(1-x)\right)\right),$ (4.35) where $\chi_{1}$ is the indicator function of the open unit interval $(0,1)$. 
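Lemma 4.5 asserts the existence of such an $f_{1}$ without fixing a choice. One admissible construction (ours; not necessarily the one used in [10]) takes $f_{1}(t)=u(t)-u(2t)$, where $u$ is smooth and decreasing with $u=1$ on $(-\infty,\tfrac{1}{4}]$, $u=0$ on $[\tfrac{3}{4},\infty)$ and $u(t)+u(1-t)=1$: for $0<x<1$ the sum over $j$ telescopes to $u(x)+u(1-x)=1$, while every term vanishes when $x\leq 0$ or $x\geq 1$. The Python sketch below verifies (4.35) numerically for this choice.

```python
import numpy as np

def phi(t):
    # exp(-1/t) for t > 0 and 0 otherwise; smooth on all of R
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    pos = t > 0
    out[pos] = np.exp(-1.0 / t[pos])
    return out

def smooth_step(t):
    # smooth, 0 for t <= 0, 1 for t >= 1, and smooth_step(t) + smooth_step(1 - t) = 1
    t = np.asarray(t, dtype=float)
    return phi(t) / (phi(t) + phi(1.0 - t))

def u(t):
    # smooth, decreasing, u = 1 on (-inf, 1/4], u = 0 on [3/4, inf), u(t) + u(1 - t) = 1
    return 1.0 - smooth_step(2.0 * np.asarray(t, dtype=float) - 0.5)

def f1(t):
    # smooth, nonnegative, supported in [1/8, 3/4]
    t = np.asarray(t, dtype=float)
    return u(t) - u(2.0 * t)

x = np.linspace(-0.5, 1.5, 2001)
total = sum(f1(2.0 ** j * x) + f1(2.0 ** j * (1.0 - x)) for j in range(60))
chi1 = ((x > 0) & (x < 1)).astype(float)
assert np.max(np.abs(total - chi1)) < 1e-12   # the identity (4.35) for this f_1
```

The multi-dimensional decomposition (4.38) used below is obtained by taking products of $f_{1}$ in each coordinate, which is exactly the definition of $f_{n}$ in (4.39).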
Now, following the method of [10], we define for a subset $S\subset\\{1,\dots,n\\}$ and $\bm{j}=(j_{1},\dots,j_{n})\in\mathbb{Z}^{n}$ with $j_{i}\geq 0$, $g_{\bm{j},S}=\begin{pmatrix}A_{\bm{j}}E_{S}&0\\\ 0&A_{\bm{j}}^{-1}E_{S}\end{pmatrix}\in G$ (4.36) where $E_{S}$ is diagonal with $(i,i)$ entry $-1$ if $i\in S$, $+1$ if $i\not\in S$, and $A_{\bm{j}}=\begin{pmatrix}2^{j_{1}}&\cdots&0\\\ \vdots&\ddots&\vdots\\\ 0&\cdots&2^{j_{n}}\end{pmatrix}.$ (4.37) We also set $h_{S}=(\bm{x}_{S},0,0)\in H$ where $\bm{x}_{S}$ has $i$th entry $-1$ if $i\in S$ and $0$ if $i\not\in S$. As in [10], we have $\chi_{\mathcal{B}}(\bm{x})=\sum_{\bm{j}\geq 0}\sum_{S\subset\\{1,\dots,n\\}}f_{n}\left((\bm{x}B^{-1}+\bm{x}_{S})A_{\bm{j}}E_{S}\right),$ (4.38) where $\chi_{\mathcal{B}}$ is the indicator function of the rectangular box $\mathcal{B}=(0,b_{1})\times\cdots\times(0,b_{n})$, $B$ is the diagonal matrix with entries $b_{1},\dots,b_{n}$, $f_{n}(x_{1},\dots,x_{n})=\prod_{1\leq j\leq n}f_{1}(x_{j}),$ (4.39) and the sums are over $\bm{j}\in\mathbb{Z}^{n}$ with nonnegative entries. Let $\psi:[0,\infty)\to[1,\infty)$ be an increasing function. Then for $C>0$ we define $\mathcal{G}_{\bm{j}}(\psi,C)$ to be the set of $\tilde{\Gamma}(h,g)\in\tilde{\Gamma}\backslash(H\rtimes G)$ such that $D\big{(}\tilde{\Gamma}(h,g)(1,\begin{pmatrix}\mathrm{e}^{-s}I&0\\\ 0&\mathrm{e}^{s}I\end{pmatrix})(h_{S},g_{\bm{j},S})\big{)}^{\frac{1}{4}}\leq C\psi(s)$ (4.40) for all $S\subset\\{1,\dots,n\\}$ and $s\geq 1$. ###### Lemma 4.6. Suppose that $\psi$ satisfies $\int_{0}^{\infty}\psi(x)^{-(2n+4)}\differential x\leq C_{\psi}$ (4.41) for some $C_{\psi}\geq 1$. Then $\tilde{\mu}\left(\tilde{\Gamma}\backslash(H\rtimes G)-\mathcal{G}_{\bm{j}}(\psi,C)\right)\ll C_{\psi}C^{-(2n+4)}2^{j_{1}+\cdots+j_{n}}.$ (4.42) ###### Proof. Suppose that $\tilde{\Gamma}(h,g)\not\in\mathcal{G}_{\bm{j}}(\psi,C)$, so there exists $S\subset\\{1,\dots,n\\}$ and $s\geq 1$ such that $D\big{(}\tilde{\Gamma}(h,g)(1,\begin{pmatrix}\mathrm{e}^{-s}I&0\\\ 0&\mathrm{e}^{s}I\end{pmatrix})(h_{S},g_{\bm{j},S})\big{)}^{\frac{1}{4}}\geq C\psi(s).$ (4.43) We let $k$ be a nonnegative integer such that $\frac{k}{K_{\bm{j}}}\leq s<\frac{k+1}{K_{\bm{j}}},$ (4.44) where $K_{\bm{j}}=K2^{j_{1}+\cdots+j_{n}}$ with $K$ a constant to be determined. We have $(1,\begin{pmatrix}\mathrm{e}^{-s}I&0\\\ 0&\mathrm{e}^{s}I\end{pmatrix})(h_{S},g_{\bm{j},S})=(1,\begin{pmatrix}\mathrm{e}^{-\frac{k}{K_{\bm{j}}}}I&0\\\ 0&\mathrm{e}^{\frac{k}{K_{\bm{j}}}}I\end{pmatrix})(h_{S},g_{\bm{j},S})(h_{1},g_{1}),$ (4.45) where, with $s^{\prime}=s-\frac{k}{K_{\bm{j}}}$, $h_{1}=((\mathrm{e}^{s^{\prime}}-1)\bm{x}_{S}A_{\bm{j}}E_{S},0,0),\quad g_{1}=\begin{pmatrix}\mathrm{e}^{-s^{\prime}}I&0\\\ 0&\mathrm{e}^{s^{\prime}}I\end{pmatrix}.$ (4.46) As $|s^{\prime}|\leq K_{\bm{j}}^{-1}$, we can make $K$ sufficiently large so that $(h_{1},g_{1})$ satisfies the conditions of lemma 4.4. 
From this and the fact that $\psi$ is increasing, we have that $D\big{(}\tilde{\Gamma}(h,g)(1,\begin{pmatrix}\mathrm{e}^{-\frac{k}{K_{\bm{j}}}}I&0\\\ 0&\mathrm{e}^{\frac{k}{K_{\bm{j}}}}I\end{pmatrix})(h_{S},g_{\bm{j},S})\big{)}^{\frac{1}{4}}\gg C\psi\left(\frac{k}{K_{\bm{j}}}\right).$ (4.47) By lemma 4.2 and the fact that right multiplication is volume preserving, we have that the set of $\tilde{\Gamma}(h,g)$ satisfying (4.47) has $\tilde{\mu}$-volume bounded by a constant times $C^{-2n-4}\psi\left(\frac{k}{K_{\bm{j}}}\right)^{-2n-4}.$ (4.48) Bounding the volume of the set $\tilde{\Gamma}\backslash(H\rtimes G)-\mathcal{G}_{\bm{j}}(\psi,C)$ by summing (4.48) over $S\subset\\{1,\dots,n\\}$ and nonnegative $k\in\mathbb{Z}$, we obtain the bound $C^{-(2n+4)}\sum_{k\geq 0}\psi\left(\frac{k}{K_{\bm{j}}}\right)^{-(2n+4)}\ll C^{-(2n+4)}\left(\psi(0)+\int_{0}^{\infty}\psi\left(\frac{x}{K_{\bm{j}}}\right)^{-(2n+4)}\differential x\right)$ (4.49) as $\psi(x)$ is increasing. The bound (4.42) follows by changing variables. ∎ We now proceed to the proof of theorem 1.2. ###### Proof of theorem 1.2. From (4.38) we express $\theta_{\mathcal{B}}(M,X,\bm{x},\bm{y})$ as $\sum_{S\subset\\{1,\dots,n\\}}\sum_{\bm{j}\geq 0}\sum_{\bm{m}\in\mathbb{Z}^{n}}f_{n}\left(\frac{1}{M}(\bm{m}+\bm{x}+M\bm{x}_{S}B)B^{-1}E_{S}A_{\bm{j}}\right)\mathrm{e}\left(\frac{1}{2}\bm{m}X\prescript{t}{}{\\!\bm{m}}+\bm{m}\prescript{t}{}{\\!\bm{y}}\right).$ (4.50) We break the sum in (4.50) into terms $\bm{j}$ such that $2^{j_{i}}b_{j_{i}}^{-1}\leq M$ for all $i$ and terms $\bm{j}$ such that $2^{j_{i}}b_{j_{i}}^{-1}>M$ for some $i$. Using (2.8), we write the first part as $\mathrm{e}(\tfrac{1}{2}\bm{x}X\prescript{t}{}{\bm{x}})M^{\frac{n}{2}}(\det B)^{\frac{1}{2}}\sum_{\begin{subarray}{c}\bm{j}\geq 0\\\ 2^{j_{i}}b_{j_{i}}^{-1}\leq M\end{subarray}}2^{-\frac{1}{2}(j_{1}+\cdots+j_{n})}\Theta_{f_{n}}\left((h,g(MB,X))(h_{S},g_{\bm{j},S})\right),$ (4.51) where $h=(\bm{x},\bm{y}-\bm{x}X,0)$ and $g(MB,X)=\begin{pmatrix}I&X\\\ 0&I\end{pmatrix}\begin{pmatrix}\frac{1}{M}B^{-1}&0\\\ 0&MB\end{pmatrix}.$ (4.52) Bounding this is the main work of the proof, but we first bound the contribution of the terms $\bm{j}$ with a large index. Suppose that $L\subset\\{1,\dots,n\\}$ is not empty and that $2^{j_{l}}>b_{j_{l}}M$ for all $l\in L$. Then the compact support of $f_{1}$ implies that the sum over $\bm{m}^{(L)}$, the vector of entries of $\bm{m}$ with index in $L$, has a bounded number of terms. We write $\bm{m}X\prescript{t}{}{\\!\bm{m}}=\bm{m}^{(L)}X^{(L,L)}\prescript{t}{}{\\!\bm{m}^{(L)}}+2\bm{m}^{(L)}X^{(L,L^{\prime})}\prescript{t}{}{\\!\bm{m}^{(L^{\prime})}}+\bm{m}^{(L^{\prime})}X^{(L^{\prime},L^{\prime})}\prescript{t}{}{\\!\bm{m}^{(L^{\prime})}},$ (4.53) where $L^{\prime}$ is the complement of $L$, and $X^{(L_{1},L_{2})}$ is the matrix of entries of $X$ with row and column indices in $L_{1}$ and $L_{2}$ respectively. 
We have (4.39) that $f_{n}\left(\frac{1}{M}(\bm{m}+\bm{x}+M\bm{x}_{S}B)B^{-1}E_{S}A_{\bm{j}}\right)$ factors as $f_{\\#L}\left(\frac{1}{M}(\bm{m}^{(L)}+\bm{x}^{(L)}+M\bm{x}_{S}^{(L)})(B^{(L,L)})^{-1}E_{S}^{(L,L)}A_{\bm{j}}^{(L,L)}\right)\\\ \times f_{\\#L^{\prime}}\left(\frac{1}{M}(\bm{m}^{(L^{\prime})}+\bm{x}^{(L^{\prime})}+M\bm{x}_{S}^{(L^{\prime})})(B^{(L^{\prime},L^{\prime})})^{-1}E_{S}^{(L^{\prime},L^{\prime})}A_{\bm{j}}^{(L^{\prime},L^{\prime})}\right),$ (4.54) and so, by inclusion-exclusion and the boundedness of $f_{\\#L}$, the terms $\bm{j}$ of (4.50) with $\bm{j}_{l}>b_{j_{i}}M$ for some $i$ is at most a constant times $\sum_{\begin{subarray}{c}L\subset\\{1,\dots,n\\}\\\ L\neq\emptyset\end{subarray}}\sum_{S\subset L}\sum_{\bm{m}^{(L)}}\big{|}\theta_{\mathcal{B}^{(L^{\prime})}}(M,X^{L^{\prime},L^{\prime}},\bm{x}^{(L^{\prime})},\bm{y}^{(L^{\prime})}+\bm{m}^{(L)}X^{(L,L^{\prime})})\big{|},$ (4.55) where the sum over $\bm{m}^{(L)}$ has a bounded number of terms, $\mathcal{B}^{(L^{\prime})}$ is the edge of $\mathcal{B}$ associated to $L^{\prime}$, and we have used the decomposition (4.38) to express $\theta_{\mathcal{B}^{(L^{\prime})}}(M,X^{L^{\prime},L^{\prime}},\bm{x}^{(L^{\prime})},\bm{y}^{(L^{\prime})}+\bm{m}^{(L)}X^{(L,L^{\prime})})$ as $\sum_{S^{\prime}\subset L^{\prime}}\sum_{\bm{j}_{L^{\prime}}}\sum_{\bm{m}_{L^{\prime}}}f_{\\#L^{\prime}}\left(\frac{1}{M}(\bm{m}^{(L^{\prime})}+\bm{x}^{(L^{\prime})}+M\bm{x}_{S}^{(L^{\prime})})(B^{(L^{\prime},L^{\prime})})^{-1}E_{S}^{(L^{\prime},L^{\prime})}A_{\bm{j}}^{(L^{\prime},L^{\prime})}\right)\\\ \times\mathrm{e}\left(\tfrac{1}{2}\bm{m}^{(L^{\prime})}X^{(L^{\prime},L^{\prime})}\prescript{t}{}{\\!\bm{m}^{(L^{\prime})}}+\bm{m}^{(L^{\prime})}\prescript{t}{}{(\bm{y}^{(L^{\prime})}+\bm{m}^{(L)}X^{(L,L^{\prime})})}\right).$ (4.56) When $L=\\{1,\dots,n\\}$, the corresponding part of (4.55) is clearly bounded. For any other $L$, we may apply theorem 1.1 (emphasizing the importance of the uniformity in $\bm{y}$) to conclude for any $\epsilon>0$, there are full measure sets $\mathcal{X}^{(n-\\#L)}=\mathcal{X}^{(n-\\#L)}(\epsilon)$ such that if $X^{(L^{\prime},L^{\prime})}\in\mathcal{X}^{(n-\\#L)}$, the corresponding part of (4.55) is $\ll M^{\frac{n-\\#L}{2}+\epsilon}$ for any $\epsilon>0$. It follows that (4.55) is $\ll M^{\frac{n}{2}}$ assuming that $X$ is such that $X^{(L^{\prime},L^{\prime})}\in\mathcal{X}^{(n-\\#L)}$ for all nonempty $L\subset\\{1,\dots,n\\}$. We now return to (4.51). We let $\mathcal{X}_{\bm{j}}(\psi,C)$ to be the set of $(X,\bm{y})$ with all entries in the interval $(-\tfrac{1}{2},\tfrac{1}{2}]$ such that there exist $\bm{u}\in(-\tfrac{1}{2},\tfrac{1}{2})^{n}$, $A\in\mathrm{GL}(n,\mathbb{R})$ and $T\in\mathbb{R}^{n\times n}_{\mathrm{sym}}$ satisfying $\sup_{B\in\mathcal{K}}||\left((BA)^{-1}-I\right)A_{\bm{j}}||\leq\epsilon,$ (4.57) $||T||\leq\epsilon$, and $\tilde{\Gamma}\bigg{(}(\bm{u},\bm{y}-\bm{u}X,0),\begin{pmatrix}I&X\\\ 0&I\end{pmatrix}\begin{pmatrix}A&0\\\ 0&\prescript{t}{}{A}^{-1}\end{pmatrix}\begin{pmatrix}I&0\\\ T&I\end{pmatrix}\bigg{)}(h_{S},g_{\bm{j},S})\in\mathcal{G}_{\bm{j}}(\psi,C).$ (4.58) Here we let $\epsilon>0$ be a sufficiently small constant, $\mathcal{G}_{\bm{j}}(\psi,C)$ is defined in (4.40), and $\mathcal{K}$ is the compact subset from the statement of theorem 1.2 identified with the compact subset of positive diagonal matrices $B$ in the obvious way. 
We then set $\mathcal{X}(\psi)$ to be the set of $(X,\bm{y})\in\mathbb{R}^{n\times n}_{\mathrm{sym}}\times\mathbb{R}^{n}$ such that $(X+R,\bm{y}R+\bm{s}_{R}+\bm{s})\in\bigcup_{C>0}\bigcap\mathcal{X}_{\bm{j}}(\psi,C2^{a(j_{1}+\cdots+j_{n})})\\\ \cap\bigcap_{\begin{subarray}{c}L\subset\\{1,\dots,n\\}\\\ L\neq\emptyset\end{subarray}}\\{(X_{1},\bm{y}_{1})\in\mathbb{R}^{n\times n}\times\mathbb{R}^{n}:X_{1}^{(L^{\prime},L^{\prime})}\in\mathcal{X}^{(n-\\#L)}\\}$ (4.59) for some $(R,\bm{s})\in\mathbb{Z}^{n\times n}\times\mathbb{Z}^{n}$, where $\bm{s}_{R}\in\mathbb{R}^{n}$ has entries $0$ or $\tfrac{1}{2}$ depending on whether the corresponding diagonal entry of $R$ is even or odd, and $a>0$ is a constant to be determined. We first verify that $\mathcal{X}(\psi)$ has full measure, noting that it is enough to show that $\bigcup_{C>0}\bigcap_{\bm{j}\geq 0}\mathcal{X}_{\bm{j}}(\psi,C2^{a(j_{1}+\cdots+j_{n})})$ (4.60) has full measure in the subset $\mathcal{X}_{0}$ of $\mathbb{R}^{n\times n}_{\mathrm{sym}}\times\mathbb{R}^{n}$ having all entries in the interval $(-\tfrac{1}{2},\tfrac{1}{2}]$. Let us suppose that the Lebesgue measure of the complement of $\mathcal{X}_{\bm{j}}(\psi,C)$ in $\mathcal{X}_{0}$ is greater than some $\delta>0$, which we assume is small. Now, with respect to the measure $(\det A)^{-2n-1}\prod_{i,j}\differential a_{ij}$ on $\mathrm{GL}(n,\mathbb{R})$, the volume of the set of $A\in\mathrm{GL}(n,\mathbb{R})$ satisfying (4.57) is within a constant multiple (depending on $\mathcal{K}$) of $2^{-n(j_{1}+\cdots+j_{n})}$. Then, using the expression (2.13), (2.14) for the Haar measure on $H\rtimes G$, we have $\tilde{\mu}\left(\tilde{\Gamma}\backslash(H\rtimes G)-\mathcal{G}_{\bm{j}}(\psi,C)\right)\gg\delta 2^{-n(j_{1}+\cdots+j_{n})},$ (4.61) with implied constant depending on $\mathcal{K}$. From lemma 4.6 it follows that $\mathrm{meas}\left(\mathcal{X}_{0}-\mathcal{X}_{\bm{j}}(\psi,C)\right)\ll C_{\psi}C^{-2n-4}2^{(n+1)(j_{1}+\cdots+j_{n})},$ (4.62) and we find that $\mathrm{meas}\left(\mathcal{X}_{0}-\bigcup_{C>0}\bigcap_{\bm{j}\geq 0}\mathcal{X}_{\bm{j}}(\psi,C2^{a(j_{1}+\cdots+j_{n})})\right)\\\ \ll\lim_{C\to\infty}C_{\psi}C^{-2n-4}\sum_{\bm{j}\geq 0}2^{((n+1)-a(2n+4))(j_{1}+\cdots+j_{n})}=0$ (4.63) as long as $a>\frac{n+1}{2n+4}$. Now let us suppose that $(X,\bm{y})\in\mathcal{X}(\psi)$. By theorem 3.1, the size of the theta functions in (4.51) is invariant under the transformation on the left of (4.59), so we may assume that $X\in\mathcal{X}_{0}$ as well. In particular, we have that $(X,\bm{y})$ is in $\mathcal{X}_{\bm{j}}(\psi,C2^{a(j_{1}+\cdots+j_{n})})$ for some $C>0$ (independent of $\bm{j}$) and all $\bm{j}\geq 0$. We have from corollary 4.1 and the definition of the height function $D$ that $\ll M^{\frac{n}{2}}\sum_{S\subset\\{1,\dots,n\\}}\sum_{\begin{subarray}{c}\bm{j}\geq 0\\\ 2^{j_{i}}b_{j_{i}}^{-1}\leq M\end{subarray}}2^{-\frac{1}{2}(j_{1}+\cdots+j_{n})}D\left(\tilde{\Gamma}(h,g(MB,X))(h_{S},g_{\bm{j},S})\right)^{\frac{1}{4}}$ (4.64) bounds (4.51). Now for all $\bm{j}\geq 0$ there is a $\tilde{\Gamma}(h^{\prime},g)\in\mathcal{G}_{\bm{j}}(\psi,C2^{a(j_{1}+\cdots+j_{n})})$ with $g$ of the form $g=\begin{pmatrix}I&X\\\ 0&I\end{pmatrix}\begin{pmatrix}A&0\\\ 0&\prescript{t}{}{\\!A}^{-1}\end{pmatrix}\begin{pmatrix}I&0\\\ T&I\end{pmatrix}$ (4.65) satisfying (4.57) and $||T||\leq\epsilon$ and $h^{\prime}$ having the for $(\bm{u},\bm{y}-\bm{u}X,0)$ for some $\bm{u}\in(-\tfrac{1}{2},\tfrac{1}{2})^{n}$. 
We have $(h^{\prime},g)(1,\begin{pmatrix}\frac{1}{M}I&0\\\ 0&MI\end{pmatrix})(h_{S},g_{\bm{j},S})=(h,g(MB,X))(h_{S},g_{\bm{j},S})(h_{1},g_{1}),$ (4.66) where $h_{1}=\left(-\bm{x}_{S}A_{\bm{j}}E_{S}+\bm{x}_{S}(BA)^{-1}A_{\bm{j}}E_{S}+\frac{1}{M}(\bm{u}-\bm{x})B^{-1}A_{\bm{j}}E_{\bm{j}},0,0\right)$ (4.67) and $g_{1}=g_{\bm{j},S}^{-1}\begin{pmatrix}BA&0\\\ 0&\prescript{t}{}{(BA)}^{-1}\end{pmatrix}\begin{pmatrix}I&0\\\ \frac{1}{M^{2}}T&I\end{pmatrix}g_{\bm{j},S}.$ (4.68) Recalling that $2^{j_{i}}\leq M$, the conditions (4.57) and $||T||\leq\epsilon$ implies that $(h_{1},g_{1})$ satisfies the conditions of lemma 4.4 for all $M$, which then implies $D(\tilde{\Gamma}(h,g(MB,X)(h_{S},g_{\bm{j},S}))^{\frac{1}{4}}\asymp D\left(\tilde{\Gamma}(h^{\prime},g)(1,\begin{pmatrix}\frac{1}{M}I&0\\\ 0&MI\end{pmatrix})(h_{S},g_{\bm{j},S})\right)^{\frac{1}{4}}\\\ \ll C2^{a(j_{1}+\cdots+j_{n})}\psi(\log M)$ (4.69) since $(h^{\prime},g)\in\mathcal{G}_{\bm{j}}(\psi,C2^{a(j_{1}+\cdots+j_{n})})$. Taking $a=\frac{2n+3}{4n+8}$ so that $\frac{n+1}{2n+4}<a<\frac{1}{2}$, it follows that (4.64) is bounded by $\ll CM^{\frac{n}{2}}\psi(\log M)\sum_{\bm{j}\geq 0}2^{-(\frac{1}{2}-a)(j_{1}+\cdots+j_{n})}\ll CM^{\frac{n}{2}}\psi(\log M),$ (4.70) and theorem 1.2 follows. ∎ ## References * [1] Armand Borel. Introduction aux groupes arithmétiques. Publications de l’Institut de Mathématique de l’Université de Strasbourg, XV. Actualités Scientifiques et Industrielles, No. 1341. Hermann, Paris, 1969. * [2] Armand Borel and Lizhen Ji. Compactifications of locally symmetric spaces. J. Differential Geom., 73(2):263–317, 2006. * [3] Salvatore Cosentino and Livio Flaminio. Equidistribution for higher-rank Abelian actions on Heisenberg nilmanifolds. J. Mod. Dyn., 9:305–353, 2015. * [4] Alexander Fedotov and Frédéric Klopp. An exact renormalization formula for Gaussian exponential sums and applications. Amer. J. Math., 134(3):711–748, 2012. * [5] H. Fiedler, W. Jurkat, and O. Körner. Asymptotic expansions of finite theta series. Acta Arith., 32(2):129–146, 1977. * [6] Douglas Grenier. Fundamental domains for the general linear group. Pacific J. Math., 132(2):293–317, 1988. * [7] Anthony W. Knapp. Lie groups beyond an introduction, volume 140 of Progress in Mathematics. Birkhäuser Boston, Inc., Boston, MA, second edition, 2002. * [8] Gérard Lion and Michèle Vergne. The Weil representation, Maslov index and theta series, volume 6 of Progress in Mathematics. Birkhäuser, Boston, Mass., 1980. * [9] Jens Marklof and Matthew Welsh. Segal-Shale-Weil representation, theta functions, and applications. In preparation, 2022. * [10] Jens Marklof and Matthew Welsh. Bounds for theta sums in higher rank I. J. d’Analyse Math., 2023. * [11] David Mumford. Tata lectures on theta. I, volume 28 of Progress in Mathematics. Birkhäuser Boston, Inc., Boston, MA, 1983. With the assistance of C. Musili, M. Nori, E. Previato and M. Stillman. * [12] Audrey Terras. Harmonic analysis on symmetric spaces and applications. II. Springer-Verlag, Berlin, 1988. JM: School of Mathematics, University of Bristol, Bristol BS8 1UG, U.K. MW: Department of Mathematics, University of Maryland, College Park, MD 20742, USA
# How Many Grid-Forming Converters do We Need? A Perspective From Power Grid Strength Huanhai Xin, Yuxuan Wang, Xinyu Liu, Bo Tang, Guangzheng Yu, and Linbin Huang This work was jointly supported by the National Nature Science Foundation of China (No. U2166204 and No. 51922094).H. Xin, Y. Wang, and X. Liu are with the College of Electrical Engineering, Zhejiang University, Hangzhou, China. (Email: [email protected])B. Tang and G. Yu are with the College of Electrical Engineering, Shanghai University of Electric Power, Shanghai, China.L. Huang is with the Department of Information Technology and Electrical Engineering at ETH Zürich, Switzerland. (Email<EMAIL_ADDRESS> ###### Abstract Grid-forming (GFM) control has been considered as a promising solution to accommodating large-scale power electronic converters into modern power grids due to its voltage source behaviors on the AC side. Unlike grid-following (GFL) converters, GFM converters do not rely on phase-locked loops (PLLs) for grid synchronization and can adapt to weak power grids. However, it is still not clear how to configure GFM converters in the grid and how many GFM converters we will need. This letter sheds some light on these questions by investigating how the capacity ratio between GFM and GFL converters affects the small signal stability of the system and how to choose this ratio to maintain a desired stability margin. Our analysis is based on characterizing the influences of GFM converters on the stability margin from the perspective of power grid strength. We validate our analysis using high-fidelity simulations. ###### Index Terms: Grid strength, grid-forming converters, small signal stability, short-circuit ratio. ## I Introduction To achieve net zero, the large-scale integration of power electronic converters into power systems is inevitable, as they act as grid interfaces of renewable energy sources [1]. Currently, most converters apply phase-locked loops (PLLs) in practice, which passively follow the grid frequency, also known as grid-following (GFL) control [2]. However, it has been widely recognized that GFL control cannot support the large-scale integration of converters, because i) the power grid needs some sources to establish the frequency and ii) GFL converters may induce instability in weak grids, i.e., power grids with low short circuit ratios (SCRs) [3, 4, 5]. By comparison, grid-forming (GFM) converters behave as coupled oscillators in a power network, which can establish their own frequencies and spontaneously synchronize with each other [6, 7, 8, 9, 10]. Moreover, they can adapt to very weak power grids. In this letter, we consider virtual synchronous machine (VSM) as a prototypical GFM control, similar to [6]. We conjecture that the combination of GFM and GFL converters can constitute a resilient power grid, where GFL converters follow the frequencies established by GFM converters. In our previous work [6], we investigated the impact of GFM converters on the small signal stability of PLL-integrated power systems. We demonstrated that replacing existing GFL converters with GFM converters is equivalent to increasing the grid strength. However, it still remains unclear how to configure newly installed GFM converters in the grid and how to decide their capacities (or equivalently, how many GFM converters we will need) to ensure the stability of the grid. This letter takes a step forward to answer the above question. Firstly, we review the relationship between the power grid strength and small signal stability. 
Then, by explicitly deriving how the integration of GFM converters affects the power grid strength, we link the capacity of GFM converters to the stability of a GFM-GFL hybrid system. On this basis, we give recommendations for the capacity ratio between GFM and GFL converters to satisfy a (prescribed) desired stability margin. Our analysis sheds some light on the question of how many GFM converters we will need from the perspective of power grid strength and small signal stability. ## II Power Grid Strength and Stability It has been widely recognized in power system studies that the system stability is strongly related to the power grid strength, especially when large-scale GFL converters are integrated into the grid. In a single-device- infinite-bus system, the power grid strength can be effectively characterized by SCR, which reflects the distance between the device and the infinite bus (an ideal voltage source). The characterization of power grid strength becomes nontrivial in a multi-device system. In our previous work [11], we rigorously showed that in terms of small signal stability, the power grid strength can be characterized by the so-called generalized short-circuit ratio (gSCR). We briefly review this concept in what follows. Figure 1: A power grid integrated with multiple wind farms. Though our approach is general, to illustrate the point we consider the integration of $n$ wind farms (Nodes $1\sim n$) into a power network, as shown in Fig. 1. The power network has $m\;(\geq 0)$ interior nodes ($n+1\sim n+m$) and $k\;(\geq 1)$ infinite buses ($n+m+1\sim n+m+k$). The infinite buses can be used to represent some large-capacity synchronous generators or other areas. Let $B_{ij}$ be the susceptance between Node $i$ and Node $j$ in per- unit values ($B_{ii}=0$), where the per-unit calculation is based on a global capacity $S_{\rm global}$. The susceptance matrix of the network is denoted by ${\bf B}\in\mathbb{R}^{(n+m)\times(n+m)}$, where ${\bf B}_{ij}=-B_{ij}\;(i\neq j)$ and ${\bf B}_{ii}=\sum_{j=1}^{n+m+k}B_{ij}$. The interior nodes can be eliminated by Kron reduction [12], and the Kron-reduced susceptance matrix is ${\bf B}_{\rm r}={\bf B}_{1}-{\bf B}_{2}{\bf B}_{4}^{-1}{\bf B}_{3}$, where ${\bf B}=:\begin{bmatrix}{\bf B}_{1}\in\mathbb{R}^{n\times n}&{\bf B}_{2}\in\mathbb{R}^{n\times m}\\\ {\bf B}_{3}\in\mathbb{R}^{m\times n}&{\bf B}_{4}\in\mathbb{R}^{m\times m}\end{bmatrix}$. ###### Definition II.1 (gSCR). The $\rm gSCR$ of the system in Fig. 1 is defined as the smallest eigenvalue of ${\bf S}_{\rm B}^{-1}{\bf B}_{\rm r}$, where ${\bf S}_{\rm B}\in\mathbb{R}^{n\times n}$ is a diagonal matrix whose $i$-th diagonal element is the ratio between the $i$-th wind farm’s capacity $S_{i}$ and the base capacity of per-unit calculation $S_{\rm global}$, i.e., ${\bf S}_{{\rm B}ii}=S_{i}/S_{\rm global}$. ###### Proposition II.2 (gSCR and stability [11]). When all the wind farms in Fig. 1 adopt GFL control and have homogeneous dynamics, the multi-wind-farm system is (small-signal) stable if and only if ${\rm gSCR}>{\rm CgSCR}$. Here ${\rm CgSCR}$ denotes the critical $\rm gSCR$, defined as the value of SCR that renders a wind farm critically stable in a single-wind-farm-infinite-bus system. A larger gSCR indicates a larger stability margin. We refer the interested readers to [11] for the rigorous proof of Proposition II.2. The gSCR mathematically reflects the connectivity of the power network (i.e., power grid strength). 
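As a minimal numerical illustration of Definition II.1 (not part of the original letter), the following sketch builds the susceptance matrix of a small made-up network, performs the Kron reduction, and evaluates the gSCR as the smallest eigenvalue of ${\bf S}_{\rm B}^{-1}{\bf B}_{\rm r}$; the topology, susceptance values, and capacity ratios are illustrative assumptions only.

```python
import numpy as np

# Assumed toy network: nodes 0-1 are wind farms (n = 2), node 2 is an
# interior node (m = 1), node 3 is an infinite bus (k = 1).
n, m = 2, 1
B_branch = np.zeros((4, 4))
for (i, j), b in {(0, 2): 5.0, (1, 2): 4.0, (1, 3): 2.0, (2, 3): 8.0}.items():
    B_branch[i, j] = B_branch[j, i] = b   # per-unit branch susceptances B_ij

# B in R^{(n+m)x(n+m)}: off-diagonal entries -B_ij, diagonal entries equal to
# the sum of susceptances to all neighbours, including the infinite buses.
B = -B_branch[: n + m, : n + m]
np.fill_diagonal(B, B_branch.sum(axis=1)[: n + m])

# Kron reduction of the interior nodes: B_r = B1 - B2 B4^{-1} B3.
B1, B2, B3, B4 = B[:n, :n], B[:n, n:], B[n:, :n], B[n:, n:]
B_r = B1 - B2 @ np.linalg.solve(B4, B3)

# Definition II.1: gSCR is the smallest eigenvalue of S_B^{-1} B_r.
S_B = np.diag([0.5, 1.0])                 # assumed capacity ratios S_i / S_global
gSCR = np.linalg.eigvals(np.linalg.solve(S_B, B_r)).real.min()
print(f"gSCR = {gSCR:.2f}")
```

By Proposition II.2, the resulting number would then be compared with the CgSCR of the (homogeneous) GFL dynamics to judge small signal stability.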
Moreover, it dramatically simplifies the small signal stability analysis of large-scale power systems, as one can focus only on the network part instead of directly calculating the eigenvalues of a large-scale dynamical system. The intuition behind this is that the power network should be strong enough, or equivalently, the sources/generators should be close enough to the ideal voltage sources (infinite buses), such that the GFL control can effectively follow the established frequency and voltage. From the power network perspective, one can increase the gSCR (and thus improve the stability) by connecting more infinite buses to the network, especially to the nodes that are far away from the existing infinite buses. In the next section, we will show that the integration of GFM converters has a similar effect to installing ideal voltage sources (i.e., infinite buses) in the network, and we will investigate how large the capacity should be to meet certain stability margins. ## III GFM Converters and Power Grid Strength Although GFM control has many superiorities over GFL control (e.g., voltage source behaviors, natural inertia emulation), it also has shortcomings. For instance, due to the current limitation of power converters, GFM converters have much more complicated transient behaviors than GFL converters under large disturbances [13, 14], and it may be challenging to ensure the transient stability of the grid when it has a large number of GFM converters. Actually, one open question is: do we need to operate all the converters in a power grid as GFM converters? In our opinion, operating some of them as GFM converters would be enough in terms of ensuring the small signal stability of the system. We justify this thought below. Figure 2: Each wind farm is equipped with a GFM converter. Consider a wind farm (with GFL control) that is equipped with an (aggregated) GFM converter, as shown in Fig. 2. Since we focus on the power grid strength at the transmission level, the interaction among different wind turbines inside the wind farm is ignored. Hence, we use an aggregated wind turbine to represent its dynamics, which is connected to the high-voltage grid via two series step-up transformers. The GFM converter is usually an energy storage system, which, on the one hand, can be used to compensate for the fluctuations of wind power and, on the other hand, enhances the power grid strength. This setting has been widely accepted in industry and is aligned with many ongoing real-world GFM demonstration projects. The GFM converter is also connected to the high-voltage grid via two series step-up transformers. Let $Y_{\rm local}$ ($Z_{\rm local}$) be the per-unit susceptance (reactance) between the internal voltage of the GFM converter and Node $i$ (see Fig. 2), which can include the converter’s internal impedance. Here the per-unit calculation is based on the GFM converter’s capacity so that $Y_{\rm local}$ ($Z_{\rm local}$) remains the same for wind farms of different capacities. In fact, GFM converters have voltage source behaviors in terms of small signal dynamics. Therefore, their integration should have a similar effect to connecting infinite buses to the power network, thereby improving the stability. This intuition was theoretically confirmed in [6]. It was proved that changing the converters’ control scheme from GFL to GFM is equivalent to increasing the power grid strength thanks to the voltage source behaviors, which justifies the assumption below. ###### Assumption 1. 
In terms of small signal dynamics, a GFM converter can be approximated as an ideal voltage source (i.e., infinite bus) behind its internal impedance. We are now ready to present the connection between the capacity of GFM converters and the grid strength (i.e., gSCR). ###### Proposition III.1 (gSCR and capacity of GFM converters). Consider the power network with multiple wind farms in Fig. 1, where each wind farm (with GFL control) is equipped with a GFM converter, and the identical capacity ratio between the GFM converter and the GFL wind farm is $\gamma$. If Assumption 1 holds, then the gSCR of the system is ${\rm gSCR}={\rm gSCR}_{0}+\gamma Y_{\rm local}\,,$ (1) where ${\rm gSCR}_{0}$ is the gSCR value without GFM converters. ###### Proof. The per-unit susceptance between the GFM converter and Node $i$ becomes ${\bf S}_{{\rm B}ii}\gamma Y_{\rm local}$ when we use the global capacity $S_{\rm global}$ for per-unit calculations. With Assumption 1, the integration of a GFM converter in the $i$-th wind farm is equivalent to adding a branch between Node $i$ and an infinite bus with susceptance ${\bf S}_{{\rm B}ii}\gamma Y_{\rm local}$. Then, we have $\begin{split}{\rm gSCR}=&\lambda_{1}[{\bf S}_{\rm B}^{-1}({\bf B}_{\rm r}+\gamma Y_{\rm local}{\bf S}_{\rm B})]=\lambda_{1}({\bf S}_{\rm B}^{-1}{\bf B}_{\rm r}+\gamma Y_{\rm local}I)\\\ =&\lambda_{1}({\bf S}_{\rm B}^{-1}{\bf B}_{\rm r})+\gamma Y_{\rm local}={\rm gSCR}_{0}+\gamma Y_{\rm local}\,,\end{split}$ where $\lambda_{1}(\cdot)$ denotes the smallest eigenvalue of a matrix, and $I$ is the identity matrix. This completes the proof. ∎ Proposition III.1 indicates that the installation of GFM converters in the wind farms increases the gSCR and thus the power grid strength. Moreover, the gSCR is a linear function of the capacity ratio $\gamma$, with the slope being $Y_{\rm local}$. In practice, a typical value of $Z_{\rm local}$ is $0.16{\rm(p.u.)}$ if two step-up transformers are used, as shown in Fig. 2. We notice that sometimes there is only one step-up transformer between the GFM converter and the transmission line, and in this case, a typical value of $Z_{\rm local}$ is $0.08{\rm(p.u.)}$. With these typical values, we can then calculate the desired capacity ratio $\gamma$ using (1). Note that once $\gamma$ is obtained, we can decide the number of GFM converters to satisfy this capacity ratio, based on the typical capacity of a GFM converter provided by the manufacturers. ###### Example 1. In practice, a lot of wind farms are integrated in weak grids, where the wind turbines often have to face a low SCR, e.g., ${\rm SCR}=1$ at their terminals (690V bus), which can only guarantee the power transmission, or equivalently, ${\rm gSCR}=1.2$ at the transmission level (110kV). However, currently many manufacturers design their (GFL) wind turbines to operate stably with SCR larger than 1.5 at their terminals (690V bus), i.e., ${\rm gSCR}\geq 2$ at 110kV. This gap (between 1.2 and 2) can be filled using GFM converters without further investment in enhancing the power network. According to (1), this requires the capacity ratio $\gamma\geq{\bf 12.8\%}$ if $Z_{\rm local}=0.16{\rm(p.u.)}$: the gap $\gamma Y_{\rm local}\geq 2-1.2=0.8$ together with $Y_{\rm local}=1/Z_{\rm local}=6.25$ gives $\gamma\geq 0.8/6.25=12.8\%$. ## IV Simulation Results Figure 3: A power grid that is integrated with four wind farms, with the capacity matrix being ${\bf S}_{\rm B}={\rm diag}(0.5,1.0,1.5,0.5)$. Figure 4: Time-domain responses of wind farm 1 with different capacity ratios (the other wind farms have similar damping ratios). Consider a power grid with four wind farms, as shown in Fig. 
3, where each wind farm adopts the setting in Fig. 2. We consider direct-drive wind turbines that rely on GFL converters for grid connection. The main parameters of this test system are the same as those in [6]. We consider the scenario where the system is unstable with ${\rm gSCR}={\rm gSCR}_{0}=1.2$ (i.e., $\gamma=0$). Rather than changing the power network, we use GFM converters to improve the power grid strength and stabilize the system according to Proposition III.1. Fig. 4 shows the responses of the system with different capacity ratios $\gamma$. It can be seen that the damping ratio of the system is improved when a larger $\gamma$ is adopted (i.e., with more grid-forming converters), and the system has satisfactory performance with $\gamma=12.8\%$ (aligned with Example 1). The simulation results are fully consistent with our analysis in the previous sections. ## V Conclusions This letter focused on how to determine the capacity/number of GFM converters in a power grid. We explicitly derived the relationship between the capacity ratio of GFM converters and the power grid strength (characterized by the so- called gSCR), and proved that the installation of GFM converters improves the power grid strength and thus the overall small signal stability. Our analysis suggests that in terms of improving small signal stability, it is not necessary to operate all the converters in a power grid in GFM mode. For instance, our Example 1 and simulation results showed that a capacity ratio around $\bf 12.8\%$ can already increase the stability margin significantly. Future work can include how to configure GFM converters in the power grid considering frequency stability, transient stability, and small signal stability simultaneously. ## References * [1] F. Milano, F. Dörfler, G. Hug, D. J. Hill, and G. Verbič, “Foundations and challenges of low-inertia systems,” in _2018 power systems computation conference (PSCC)_. IEEE, 2018, pp. 1–25. * [2] X. Wang, M. G. Taul, H. Wu, Y. Liao, F. Blaabjerg, and L. Harnefors, “Grid-synchronization stability of converter-based resources—an overview,” _IEEE Open J. Ind. Appl._ , vol. 1, pp. 115–134, 2020. * [3] L. Fan and Z. Miao, “Wind in weak grids: 4 hz or 30 hz oscillations?” _IEEE Trans. Power Systems_ , vol. 33, no. 5, pp. 5803–5804, 2018. * [4] L. Huang, H. Xin, Z. Li, P. Ju, H. Yuan, Z. Lan, and Z. Wang, “Grid-synchronization stability analysis and loop shaping for pll-based power converters with different reactive power control,” _IEEE Trans. Smart Grid_ , vol. 11, no. 1, pp. 501–516, 2019. * [5] Y. Gu and T. C. Green, “Power system stability with a high penetration of inverter-based resources,” _Proceedings of the IEEE_ , 2022. * [6] C. Yang, L. Huang, H. Xin, and P. Ju, “Placing grid-forming converters to enhance small signal stability of pll-integrated power systems,” _IEEE Trans. Power Systems_ , vol. 36, no. 4, pp. 3563–3573, 2020. * [7] B. Johnson, S. Dhople, A. Hamadeh, and P. Krein, “Synchronization of parallel single-phase inverters with virtual oscillator control,” _IEEE Trans. Power Electronics_ , vol. 29, no. 11, pp. 6124–6138, 2013. * [8] D. Groß, M. Colombino, J.-S. Brouillon, and F. Dörfler, “The effect of transmission-line dynamics on grid-forming dispatchable virtual oscillator control,” _IEEE Trans. Control of Network Systems_ , vol. 6, no. 3, pp. 1148–1160, 2019. * [9] S. D’Arco, J. A. Suul, and O. B. Fosso, “A virtual synchronous machine implementation for distributed control of power converters in smartgrids,” _Elect. Power Syst. 
Res._ , vol. 122, pp. 180–197, 2015. * [10] Q.-C. Zhong and G. Weiss, “Synchronverters: Inverters that mimic synchronous generators,” _IEEE Trans. industrial electronics_ , vol. 58, no. 4, pp. 1259–1267, 2010. * [11] W. Dong, H. Xin, D. Wu, and L. Huang, “Small signal stability analysis of multi-infeed power electronic systems based on grid strength assessment,” _IEEE Trans. Power Systems_ , vol. 34, no. 2, pp. 1393–1403, 2018. * [12] F. Dorfler and F. Bullo, “Kron reduction of graphs with applications to electrical networks,” _IEEE Trans. Circuits and Systems I: Regular Papers_ , vol. 60, no. 1, pp. 150–163, 2012. * [13] L. Huang, H. Xin, Z. Wang, L. Zhang, K. Wu, and J. Hu, “Transient stability analysis and control design of droop-controlled voltage source converters considering current limitation,” _IEEE Trans. Smart Grid_ , vol. 10, no. 1, pp. 578–591, 2017. * [14] W. Du, R. H. Lasseter, and A. S. Khalsa, “Survivability of autonomous microgrid during overload events,” _IEEE Trans. Smart Grid_ , vol. 10, no. 4, pp. 3515–3524, 2018.
# Types of Transients in the Centers of Post-starburst and Quiescent Balmer- strong Galaxies Iair Arcavi The School of Physics and Astronomy, Tel Aviv University, Tel Aviv 69978, Israel CIFAR Azrieli Global Scholars program, CIFAR, Toronto, Canada Irura Nyiha Massachusetts Institute of Technology, Cambridge, MA 02139, USA K. Decker French Department of Astronomy, University of Illinois, 1002 W. Green St., Urbana, IL 61801, USA Center for Astrophysical Surveys, National Center for Supercomputing Applications, Urbana, IL, 61801, USA Iair Arcavi <EMAIL_ADDRESS> ###### Abstract Tidal Disruption Events (TDEs) have been found to show a preference for post- starburst (PS) and quiescent Balmer-strong (QBS) galaxies. This preference can be used to help find TDEs in transient surveys. But what other transients might “contaminate” such a search, and by how much? We examine all reported transients coincident with the centers of galaxies in the French & Zabludoff (2018) catalog of spectroscopically confirmed PS and QBS galaxies and photometrically identified PS and QBS galaxy candidates. We find that TDEs and Type Ia supernovae (SNe) are the only types of transients classified in the centers of these galaxies (aside from one active galactic nucleus flare), with Type Ia SNe being $8.3\pm 0.2$ times more prevalent than TDEs ($1\sigma$ confidence bounds). This factor is $\sim$2.7 times lower than in a control sample of quiescent galaxies. Narrowing the sample to spectroscopically confirmed QBS galaxies does not change these statistics much. In spectroscopically confirmed PS galaxies, however, TDEs are the ones that outnumber Type Ia SNe $2\pm 0.6$ to $1$. Unfortunately, there are few such galaxies in the catalog. By classifying transients from the entire catalog, three times more TDEs are expected to be found, but with a $\sim$16-times larger Type Ia SN contamination. We use the public ZTF photometric archive to search for possibly missed TDEs in the French & Zabludoff (2018) galaxies. We find three unclassified clear transients – none of which are likely missed TDEs based on their light-curve colors. E+A galaxies (424), Supernovae (1668), Tidal disruption (1696) ## 1 Introduction Tidal disruption events (TDEs), caused when a star is torn apart by tidal forces around a supermassive black hole (Rees, 1988), can generate observable flares. Such events are rare (e.g. Wang & Merritt, 2004; Stone & Metzger, 2016), but they are unique tools for learning about the population of otherwise quiescent supermassive black holes, accretion physics, strong gravity, and more. Therefore, finding them in transient surveys is desirable. However, such surveys are already producing orders of magnitude more transient candidates than can be vetted and classified spectroscopically. Any preference of TDEs for specific host galaxy types could be used not only to learn about the dynamical processes driving TDE rates, but also to help narrow the search for such events. Table 1: Sources of the Galaxies Consolidated from FZ18. FZ18 | Source Name | No. of | No. of Unique ---|---|---|--- Table No. 
| | Galaxies | Galaxiesa 1 | Spectroscopically Identified QBS Galaxies from SDSS | 19,514 | 19,514 2 | Spectroscopically Identified PS Galaxies from SDSS | 1683 | 50b 5 | Pan-STARRS + WISE Photometrically Identified QBS Galaxies | 57,299 | 57,254 6 | DES + WISE Photometrically Identified QBS Galaxies | 9337 | 9296 7 | SDSS + WISE Photometrically Identified QBS Galaxies | 848 | 832 8 | Pan-STARRS + WISE Photometrically Identified PS Galaxies | 9690 | 750 9 | DES + WISE Photometrically Identified PS Galaxies | 753 | 44 10 | SDSS + WISE Photometrically Identified PS Galaxies | 117 | 8 Total | | | 87,748 Arcavi et al. (2014) discovered that optical TDEs occur preferentially in post-starburst (PS; also known as “E+A”) galaxies. French et al. (2016) later quantified this preference, expanding the definition of the preferred hosts to include also quiescent Balmer-strong (QBS) galaxies. The reason that TDEs prefer PS and QBS galaxies is not yet fully understood (see French et al. 2020 for a recent review); however it can still be leveraged to help identify promising TDE candidates in transient surveys. French & Zabludoff (2018, hereafter FZ18) define QBSs as having a Lick H$\delta_{A}$ index $>1.3\,{\textrm{\AA}}$ in absorption and an H$\alpha$ equivalent width $<5\,{\textrm{\AA}}$ in emission. The PS galaxies are a subset of these, defined with H$\delta_{A}$ $>4\,{\textrm{\AA}}$ in absorption and H$\alpha$ equivalent width $<3\,{\textrm{\AA}}$ in emission. Unfortunately, spectra are not available for most galaxies. Thus, FZ18 use spectroscopically confirmed QBS and PS galaxies from the Sloan Digital Sky Survey (SDSS; York et al., 2000) Data Release (DR) 12 main galaxy survey (Strauss et al., 2002; Alam et al., 2015) to train a machine-learning algorithm to identify QBS and PS galaxies from photometry alone. They then run this algorithm on a combination of Panoramic Survey Telescope and Rapid Response System (Pan-STARRS; Chambers et al., 2016) and Wide-field Infrared Survey Explorer (WISE; Wright et al., 2010) data, Dark Energy Survey (DES; Abbott et al., 2018) and WISE data, and SDSS and WISE data to identify several tens of thousands of new QBS and PS galaxy candidates. Here, we search the the Transient Name Server (TNS)111http://www.wis-tns.org database and the Zwicky Transient Facility (ZTF; Bellm, 2014; Graham et al., 2019) public photometric data for transients coincident with the centers of galaxies in the FZ18 catalog. Our goals are to (1) measure the relative observed fractions of different types of transients occurring in the centers of such galaxies from the TNS data, and (2) check if any unclassified transients in these galaxies could have been missed TDEs, using the ZTF photometric data. We adopt the nine-year Wilkinson Microwave Anisotropy Probe (WMAP) cosmology (Hinshaw et al., 2013) throughout. ## 2 Consolidating the FZ18 Galaxy Catalog The FZ18 catalog is divided into eight subcatalogs, depending on how each galaxy was selected (hereafter we refer to these subcatalogs as “sources”). We number each source according to its table number in FZ18 and list them here in Table 1. We consolidate the galaxies identified by FZ18 from all sources into one master catalog. Since there is some overlap between galaxies in different sources, we note in the last column of Table 1 the number of new galaxies in each source that were not already included in the sources from the previous rows. 
In total, we obtain 87,748 unique galaxies222We find 50 galaxies in Source 2 (spectroscopically identified PS galaxies) that are not in Source 1 (spectroscopically identified QBS galaxies), even though Source 2 should be a subset of Source 1. Indeed, these galaxies were omitted from Source 1 in FZ18 by mistake. Here, we include them also as members of Source 1 for the rest of the analysis.. The redshift distribution of the spectroscopically identified galaxies (Sources 1 and 2) is plotted in the top panel of Figure 1. Since the FZ18 catalog was compiled from SDSS DR12 data, we check which galaxies in the photometrically selected catalog (i.e. sources 5-10) of FZ18 have since been observed spectroscopically by SDSS in DR16. We find that 3309 galaxies in the photometrically selected catalog have SDSS DR16 spectra. Of these, 41 have a “Quasi Stellar Object” (QSO) classification, and 12 have a “Star” classification (the rest all have a “Galaxy” classification). We remove from our sample the 53 galaxies with a spectrum having either a “QSO” or “Star” classification. Our final galaxy catalog thus consists of 87,695 galaxies. Figure 1: Redshift distributions of the galaxies and classified transients coincident with their centers for the FZ18 catalog (top; only galaxies from Sources 1 and 2 are shown) and the control catalog (bottom). ## 3 The Control Galaxy Catalog For a control sample we compile a catalog of quiescent galaxies (which are not necessarily Balmer-strong). We select all SDSS DR16 spectroscopically observed galaxies with an H$\alpha$ equivalent width $<3\,{\textrm{\AA}}$ in emission, as used for the FZ18 PS cut (i.e. Source 2). We also require (as done in FZ18) that the redshift of each galaxy be $>0.01$ to avoid aperture bias, that the median signal to noise ratio of the spectrum be $>10$, and that the h_alpha_eqw_err parameter be $>-1$ (i.e. no error flags were reported in the equivalent-width measurement). These are the same cuts used for the FZ18 Source 2 galaxies, just without the H$\delta_{A}$ absorption requirement. We find 297,284 such galaxies, which we designate as our control sample (of these, 13,213 are also in the FZ18 catalog). Their redshift distribution is very similar to that of the spectroscopically identified FZ18 galaxies, and is shown in the bottom panel of Figure 1. ## 4 Searching for Transients Table 2: Number of Transients in the TNS (and Their Reported TNS Classifications) within 1″ of a Galaxy in the Control Sample, in the FZ18 Catalogs, and in Each of Its Subcatalogs (or “Sources”). 
Source | Total | Not | SN Ia | TDE | AGN | Galaxy ---|---|---|---|---|---|--- | Transients | Classified | | | | Control Sample Control Catalog | 726 | 577 | 136 | 6 | 1 | 6 Percentage of All Transients | | 79% | 19% | 1% | 0% | 1% Percentage of Classified Transients | | | 91% | 4% | 1% | 4% FZ18 Catalog All FZ18 Galaxies | 101 | 71 | 25 | 3 | 1 | 1 Percentage of All Transients | | 70% | 25% | 3% | 1% | 1% Percentage of Classified Transients | | | 83% | 10% | 3% | 3% 1: SDSS Spec Identified QBSs | 74 | 50 | 20 | 3 | 0 | 1 Percentage of All Transients | | 68% | 27% | 4% | 0 | 1% Percentage of Classified Transients | | | 83% | 12% | 0 | 4% 2: SDSS Spec Identified PSs | 10 | 7 | 1 | 2 | 0 | 0 Percentage of All Transients | | 70% | 10% | 20% | 0 | 0 Percentage of Classified Transients | | | 33% | 67% | 0 | 0 5: Pan-STARRS+WISE Phot Identified QBSs | 22 | 17 | 4 | 0 | 1 | 0 Percentage of All Transients | | 77% | 18% | 0 | 5% | 0 Percentage of Classified Transients | | | 80% | 0 | 20% | 0 6: DES+WISE Phot Identified QBSs | 2 | 2 | 0 | 0 | 0 | 0 Percentage of All Transients | | 100% | 0 | 0 | 0 | 0 Percentage of Classified Transients | | | 0 | 0 | 0 | 0 7: SDSS+WISE Phot Identified QBSs | 3 | 2 | 1 | 0 | 0 | 0 Percentage of All Transients | | 67% | 33% | 0 | 0 | 0 Percentage of Classified Transients | | | 100% | 0 | 0 | 0 We next perform an archival search for transients coincident to within 1″ with the centers of galaxies in both the FZ18 and the control catalogs. This angular-separation cut is used to account for possible inaccuracies in transient or galaxy localizations. It corresponds to $\sim$2 kiloparsecs at the mean galaxy redshift of the FZ18 spectroscopic sample, and $\sim$1 kiloparsec at the mean transient redshift of matched events. ### 4.1 TNS Search Figure 2: TNS classifications of transients coincident with the centers of galaxies in the control sample, the entire FZ18 galaxy catalog, and in the spectroscopically identified QBS and PS galaxies. The share of TDEs among classified transients is larger in the FZ18 catalog compared to the control catalog. Specifically, in spectroscopically identified PS galaxies, most classified transients turn out to be TDEs (though the number of transients there is small). TNS is the official International Astronomical Union (IAU) service for reporting transient events. It incorporates all events reported to the IAU through circulars before the TNS existed in its current form. Public spectroscopic classifications of transients are also reported to the TNS. We search the complete TNS database up to 2021 August 8 for transients with positions within 1″ of objects in the FZ18 catalog. We find 101 such objects333Of these, 37 were identified by ZTF, 18 by ATLAS, 10 by Pan-STARRS, 3 by iPTF, 3 by Gaia, and 2 by ASAS-SN. (Table LABEL:tab:tns in the Appendix), of which 30% are spectroscopically classified (their redshift distribution is shown in the top panel of Figure 1). Of those, 83% are Type Ia SNe444Here, we do not distinguish between the different subtypes of SNe Ia., and 10% are TDEs. One event was an active galactic nucleus (AGN) flare and one is classified as “Galaxy” (i.e. only galaxy light was visible in the classification spectrum). This could mean that the event was not real, or that it faded before the spectrum was obtained. In the latter case, it could be a missed, rapidly evolving transient, hence we keep it in the sample. 
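As an aside, the 1″ coincidence cut and the quoted physical scales are straightforward to reproduce; the sketch below (with placeholder coordinates and an assumed redshift, not the actual FZ18 or TNS catalogs) uses astropy for the sky matching and the adopted nine-year WMAP cosmology for the angular scale.

```python
import astropy.units as u
from astropy.coordinates import SkyCoord
from astropy.cosmology import WMAP9

# Placeholder positions (degrees); in practice these would be the FZ18 galaxy
# centers and the reported transient coordinates.
galaxies = SkyCoord(ra=[123.3206, 151.7120] * u.deg, dec=[22.6483, 1.6928] * u.deg)
transients = SkyCoord(ra=[123.32061, 200.0] * u.deg, dec=[22.64834, -5.0] * u.deg)

# Nearest-galaxy match for each transient, keeping pairs separated by < 1".
idx, sep2d, _ = transients.match_to_catalog_sky(galaxies)
matched = sep2d < 1.0 * u.arcsec
for i, sep in zip(idx[matched], sep2d[matched]):
    print(f"transient matched to galaxy {i} at {sep.to(u.arcsec):.2f}")

# Physical scale of 1" at an assumed illustrative redshift.
z = 0.1
print(WMAP9.kpc_proper_per_arcmin(z).to(u.kpc / u.arcsec))  # roughly 2 kpc per arcsec
```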
All TDEs are found within 0$\farcs$5 of their host, while Type Ia SNe show a uniform host-offset distribution out to our cut of 1″. A cut of 0$\farcs$5 would decrease the Type Ia SN fraction to 70% and increase the TDE fraction to 15%. However, we are dealing with small absolute numbers. A larger sample is required to more accurately analyze class fraction trends with measured host separations. Here, we keep the cut at 1″ to avoid biases related to position measurement accuracy. In the control catalog of quiescent galaxies we find 726 transients coincident to within 1″ of a galaxy. Here, only 20% of the transients are spectroscopically classified (their redshift distribution is shown in the bottom panel of Figure 1). Of those, 91% are Type Ia SNe, and only 4% are TDEs (half of which are in galaxies included also in the FZ18 catalog). The rest are classified as AGN or “Galaxy”555One event, SN 2018aii, has an ambiguous classification as either a Type Ia or Type Ic SN. Given the quiescent host galaxy, it is more likely to be a Type Ia SN, and we thus include it in that count. In any case, either option has a negligible effect on our statistical results.. If we remove from the control sample the 13,213 quiescent Balmer-strong galaxies that are also in the FZ18 sample, we find 669 transients, of which 19% are spectroscopically classified. Of those, 93% are Type Ia SNe. Because half of the TDEs in the quiescent sample are also in the quiescent Balmer- strong sample, removing them lowers the fraction of spectroscopically confirmed TDEs even further to 2%. The rest of the classified transients are AGN or “Galaxy”, as in the full control catalog. Table 2 lists the number of events and their classifications for the full control catalog, the entire FZ18 catalog, and per the FZ18 sources (no transients were reported in the centers of galaxies from Sources 8–10). Figure 2 presents the distribution of classes of transients coincident with the center of a galaxy in the control sample, the entire FZ18 catalog, and for transients only in the spectroscopically identified PS and QBS galaxies. ### 4.2 ZTF Search We next search the ZTF public alert stream for transient candidates with positions within 1″ of objects in the FZ18 galaxy catalog to check for any possible missed TDEs that were not reported to the TNS or were not classified there. We do this by using the “E+A Galaxies” watchlist666https://lasair.roe.ac.uk/watchlist/321/ on the Lasair Broker (Smith et al., 2019). We find 395 ZTF events as of 2021 August 8 coincident with a galaxy in the FZ18 catalog (Table LABEL:tab:ztf)777Here we removed 7 events which are in the Lasair watchlist, but are in galaxies identified by SDSS DR16 as “QSO” or “Star”.. Of those, 69 were reported to the TNS (and are therefore also included in Table LABEL:tab:tns), and 25 have classifications on the TNS. We wish to check for missed TDEs among the unclassified events using their publicly available light curves. To do this, we retrieve the ZTF photometry of unclassified events with at least 20 detections, using the ALeRCE broker (Förster et al., 2021) client888https://alerce.readthedocs.io/en/latest/. We divide the light curves qualitatively into three groups: “Gold” – those that are clearly transient showing a coherent rise and fall (three objects; Fig. 3), “Silver” – those that are clearly variable (one object; Fig. 
5), and “Silver” – hmm
Therefore it is highly nontrivial, if not impossible, to translate these fractions into intrinsic rates (but see Roth et al., 2021). These observed fractions do, however, reflect the current prospects of community classification results when following up discoveries in galaxy centers. Naively, spectroscopically identified PS galaxies would thus be the best galaxies to focus a TDE search on, since a transient in such a galaxy is roughly 16 times more likely to be a TDE than a Type Ia SN than if it were in a random galaxy in the FZ18 catalog (and about 37 times more likely than if it were in a random quiescent galaxy). Unfortunately, there are only 1683 spectroscopically confirmed PS galaxies in the FZ18 catalog, constituting just 2% of it (middle panel of Figure 4). Hence, in absolute numbers, searching for transients in the full catalog will provide roughly 3 times more TDEs, but at the price of having to classify $\sim$8 Type Ia SNe per confirmed TDE (bottom panel of Figure 4). Of course, one can also employ photometric classification criteria to newly discovered transients in order to try to reduce Type Ia SN contamination before obtaining spectra. ### 5.2 ZTF Search: Light Curves of Unclassified Events An important parameter in trying to determine whether an unclassified transient could have been a TDE is the absolute magnitude of its light curve. We search SDSS DR16 and the 2dF Galaxy Redshift Survey (2dFGRS; Colless et al., 2003) for host galaxy redshifts of the ZTF photometrically selected events coincident with the center of a galaxy in the FZ18 catalog. Our findings are presented in Table LABEL:tab:ztf. For ZTF20abxphdt, a “Gold” event, we obtained our own spectrum of the host galaxy and measured the redshift to be 0.0675 from narrow Ca II H+K and Na I D absorption features (Fig. 7)999Our spectrum was obtained with the Floyds spectrograph mounted on the Las Cumbres Observatory 2-meter telescope in Haleakala, Hawaii (Brown et al., 2013), and was reduced using the floydsspec custom pipeline, which performs flux and wavelength calibration, cosmic-ray removal, and spectrum extraction. The pipeline is available at https://github.com/svalenti/FLOYDS_pipeline/blob/master/bin/floydsspec/.. For each event with a determined redshift, we include an absolute magnitude scale in its light curve in Figures 3, 5 and 6. The “Silver” and “Bronze” light curves do not have TDE-like transient behavior (per definition, these are events with no clear rise and decline as seen in optical TDEs; van Velzen et al., 2020). To determine whether any of the “Gold” events might have been a missed TDE, we compare in Figure 3 each the light curves to those of the prototypical optical TDE PS1-10jh (Gezari et al., 2012) and the faint rapidly evolving optical TDE iPTF16fnl (Blagorodnova et al., 2017; Brown et al., 2018), whose light curves we obtain from the Open TDE Catalog101010https://tde.space/. These two events roughly span the range of known optical TDE light-curve luminosities and time scales (see van Velzen et al. 2020 for a review). While all of the “Gold” light curves have peak absolute luminosities and time scales in the correct range, their $g$–$r$ colors are much redder than those of TDEs. We conclude that none of these events are likely missed TDEs, but transients of some other nature. 
## 6 Summary and Conclusions Figure 4: Top: observed TDE to Type Ia SN ratio of transients in the centers of galaxies drawn from different galaxy catalogs analyzed here (1$\sigma$ Clopper–Pearson confidence bounds are shown but are sometimes smaller than the marker size). Middle: number of galaxies in each catalog. Bottom: total number of TDEs and Type Ia SNe expected in each galaxy catalog, normalized to the number of TDEs in spectroscopically identified PS galaxies. While the ratio of TDEs to Type Ia SNe is largest there, the small number of such galaxies in the catalog means that in absolute numbers, more TDEs can be discovered by using the entire FZ18 catalog, but at the price of having $\sim$16 times more Type Ia SNe per TDE. We quantify the chances of a transient discovered in the center of a galaxy from a catalog of likely TDE hosts to be a TDE or a Type Ia SN (no other types of true transients were classified in these hosts) by searching the classifications of all transients discovered in the centers of these galaxies. The catalog is made up of galaxies selected in different ways, with the bulk being photometrically selected. The catalog reduces the contamination of Type Ia SNe by a factor of roughly 2.7 compared to a control sample of quiescent galaxies. The lowest contamination of Type Ia SNe exists in the spectroscopically identified PS subcatalog, but it constitute only 2% of the entire catalog. By classifying transients from the entire catalog, three times more TDEs are expected to be found, but with a roughly 16 times larger Type Ia SN contamination. We have not identified any transients coincident with the center of a galaxy in the catalog as likely missed TDEs. We thank O. Yaron for assistance in obtaining TNS data, C. Pellegrino for reducing the ZTF20abxphdt host galaxy spectrum, and M. Nicholl for implementing the FZ18 galaxy catalog as a watchlist on Lasair. I.A. is a CIFAR Azrieli Global Scholar in the Gravity and the Extreme Universe Program and acknowledges support from that program, from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (grant agreement number 852097), from the Israel Science Foundation (grant number 2752/19), from the United States – Israel Binational Science Foundation (BSF), and from the Israeli Council for Higher Education Alon Fellowship. I.N. acknowledges funding from the MIT International Science and Technology Initiatives (MISTI) Israel Program. ## References * Abbott et al. (2018) Abbott, T. M. C., Abdalla, F. B., Allam, S., & et al. 2018, ApJS, 239, 18, doi: 10.3847/1538-4365/aae9f0 * Alam et al. (2015) Alam, S., Albareti, F. D., Allende Prieto, C., & et al. 2015, ApJS, 219, 12, doi: 10.1088/0067-0049/219/1/12 * Arcavi (2016) Arcavi, I. 2016, Hydrogen-Rich Core-Collapse Supernovae, ed. A. W. Alsabti & P. Murdin (Cham: Springer International Publishing), 1–38, doi: 10.1007/978-3-319-20794-0_39-1 * Arcavi et al. (2014) Arcavi, I., Gal-Yam, A., Sullivan, M., et al. 2014, ApJ, 793, 38, doi: 10.1088/0004-637X/793/1/38 * Bellm (2014) Bellm, E. 2014, in The Third Hot-wiring the Transient Universe Workshop, ed. P. R. Wozniak, M. J. Graham, A. A. Mahabal, & R. Seaman, 27–33. https://arxiv.org/abs/1410.8185 * Blagorodnova et al. (2017) Blagorodnova, N., Gezari, S., Hung, T., et al. 2017, ApJ, 844, 46, doi: 10.3847/1538-4357/aa7579 * Brown et al. (2018) Brown, J. S., Kochanek, C. S., Holoien, T. W. S., et al. 2018, MNRAS, 473, 1130, doi: 10.1093/mnras/stx2372 * Brown et al. (2013) Brown, T. 
M., Baliber, N., Bianco, F. B., et al. 2013, PASP, 125, 1031, doi: 10.1086/673168 * Chambers et al. (2016) Chambers, K. C., Magnier, E. A., Metcalfe, N., et al. 2016, arXiv e-prints, arXiv:1612.05560. https://arxiv.org/abs/1612.05560 * Clopper & Pearson (1934) Clopper, C. J., & Pearson, E. S. 1934, Biometrika, 26, 404, doi: 10.1093/biomet/26.4.404 * Colless et al. (2003) Colless, M., Peterson, B. A., Jackson, C., et al. 2003, arXiv e-prints, astro. https://arxiv.org/abs/astro-ph/0306581 * Förster et al. (2021) Förster, F., Cabrera-Vives, G., Castillo-Navarrete, E., et al. 2021, AJ, 161, 242, doi: 10.3847/1538-3881/abe9bc * French et al. (2016) French, K. D., Arcavi, I., & Zabludoff, A. 2016, ApJ, 818, L21, doi: 10.3847/2041-8205/818/1/L21 * French et al. (2020) French, K. D., Wevers, T., Law-Smith, J., Graur, O., & Zabludoff, A. I. 2020, Space Sci. Rev., 216, 32, doi: 10.1007/s11214-020-00657-y * French & Zabludoff (2018) French, K. D., & Zabludoff, A. I. 2018, ApJ, 868, 99, doi: 10.3847/1538-4357/aaea64 * Gehrels (1986) Gehrels, N. 1986, ApJ, 303, 336, doi: 10.1086/164079 * Gezari et al. (2012) Gezari, S., Chornock, R., Rest, A., et al. 2012, Nature, 485, 217, doi: 10.1038/nature10990 * Graham et al. (2019) Graham, M. J., Kulkarni, S. R., Bellm, E. C., et al. 2019, PASP, 131, 078001, doi: 10.1088/1538-3873/ab006c * Hinshaw et al. (2013) Hinshaw, G., Larson, D., Komatsu, E., et al. 2013, ApJS, 208, 19, doi: 10.1088/0067-0049/208/2/19 * Maguire (2016) Maguire, K. 2016, Type Ia Supernovae, ed. A. W. Alsabti & P. Murdin (Cham: Springer International Publishing), 1–24, doi: 10.1007/978-3-319-20794-0_36-1 * Pian & Mazzali (2016) Pian, E., & Mazzali, P. A. 2016, Hydrogen-Poor Core-Collapse Supernovae, ed. A. W. Alsabti & P. Murdin (Cham: Springer International Publishing), 1–16, doi: 10.1007/978-3-319-20794-0_40-1 * Rees (1988) Rees, M. J. 1988, Nature, 333, 523, doi: 10.1038/333523a0 * Roth et al. (2021) Roth, N., van Velzen, S., Cenko, S. B., & Mushotzky, R. F. 2021, ApJ, 910, 93, doi: 10.3847/1538-4357/abdf50 * Smith et al. (2019) Smith, K. W., Williams, R. D., Young, D. R., et al. 2019, Research Notes of the American Astronomical Society, 3, 26, doi: 10.3847/2515-5172/ab020f * Smith et al. (2020) Smith, K. W., Smartt, S. J., Young, D. R., et al. 2020, PASP, 132, 085002, doi: 10.1088/1538-3873/ab936e * Stone & Metzger (2016) Stone, N. C., & Metzger, B. D. 2016, MNRAS, 455, 859, doi: 10.1093/mnras/stv2281 * Strauss et al. (2002) Strauss, M. A., Weinberg, D. H., Lupton, R. H., et al. 2002, AJ, 124, 1810, doi: 10.1086/342343 * van Velzen et al. (2020) van Velzen, S., Holoien, T. W. S., Onori, F., Hung, T., & Arcavi, I. 2020, Space Sci. Rev., 216, 124, doi: 10.1007/s11214-020-00753-z * Wang & Merritt (2004) Wang, J., & Merritt, D. 2004, ApJ, 600, 149, doi: 10.1086/379767 * Wright et al. (2010) Wright, E. L., Eisenhardt, P. R. M., Mainzer, A. K., et al. 2010, AJ, 140, 1868, doi: 10.1088/0004-6256/140/6/1868 * York et al. (2000) York, D. G., Adelman, J., Anderson, John E., J., et al. 2000, AJ, 120, 1579, doi: 10.1086/301513 ## Appendix A TNS Events We list in Table LABEL:tab:tns the full set of TNS events coincident with the center of a FZ18 galaxy. For each event we note, among other information, its type, if it is spectroscopically classified on the TNS, its additional survey names, if they were reported to the TNS, and its separation from the host center, according to the TNS coordinates of the event and host coordinates in FZ18. 
Table 3: Events in the TNS Coincident within 1″ with the Center of a Galaxy in the FZ18 Catalog. IAU Name | RA | Dec | Classification | Other Name(s) | FZ18 Table(s) | Separation From Host ---|---|---|---|---|---|--- | [deg] | [deg] | | | | [″] SN 1999du | 16.77475 | -0.13161 | SN Ia | | 5 | 0.95 | | SN 2001G | 137.38825 | 50.28092 | SN Ia | | 1 | 0.80 | | SN 2006ae | 222.09696 | 21.79764 | SN Ia | | 5 | 0.15 | | SN 2007mj | 53.68517 | 0.35553 | SN Ia | | 1 | 0.80 | | SN 2009fx | 253.29700 | 23.96525 | SN Ia | | 1 | 0.43 | | SN 2016aud | 228.47623 | 4.75726 | SN Ia | | 1,2 | 0.43 | | AT 2016sq | 195.87621 | 19.77461 | Not Classified | PS16ug | 1,2 | 0.83 | | AT 2017akd | 161.02167 | 20.57111 | Not Classified | PS17aql | 5 | 0.79 | | AT 2017bcg | 211.52899 | 42.67363 | Not Classified | iPTF17bcg | 1 | 0.76 | | AT 2017br | 128.71599 | 44.24706 | Not Classified | iPTF17br | 1 | 0.06 | | AT 2017brz | 152.29958 | 1.63526 | Not Classified | iPTF17brz | 1 | 0.68 | | AT 2017bs | 128.82731 | 44.02291 | Not Classified | | 1 | 0.18 | | AT 2017cks | 162.49356 | 38.09399 | Not Classified | | 1 | 0.08 | | AT 2017eu | 142.32666 | 62.35766 | Not Classified | | 1 | 0.68 | | AT 2017hvp | 178.31441 | 15.06375 | Not Classified | | 1 | 0.55 | | AT 2018ahh | 161.20269 | 51.95058 | Not Classified | ATLAS18mlv, PS18amw | 1 | 0.39 | | AT 2018ail | 196.08738 | 1.15093 | Galaxy | | 1 | 0.26 | | AT 2018bpb | 213.65545 | 61.12649 | Not Classified | ATLAS18opw | 5 | 0.33 | | AT 2018bvy | 180.51065 | 15.01067 | Not Classified | ZTF18aastwrz | 1 | 0.18 | | AT 2018cqh | 38.44554 | -1.02455 | Not Classified | | 1 | 0.08 | | SN 2018ffi | 325.04707 | 21.55839 | SN Ia | | 5 | 0.69 | | AT 2018fyv | 244.06311 | 21.00398 | Not Classified | PS18bnd | 5 | 0.11 | | SN 2018gvb | 249.64330 | 31.87302 | SN Ia | ZTF18abwmuua, ATLAS18vsb | 1 | 0.42 | | AT 2018hym | 228.45355 | 45.82884 | Not Classified | ATLAS18sjn | 1 | 0.19 | | AT 2018hyz | 151.71196 | 1.69280 | TDE | ASASSN-18zj, ZTF18acpdvos, ATLAS18bafs | 1,2 | 0.10 | | AT 2018jkl | 153.45776 | 39.64185 | Not Classified | ZTF18acsoyiv | 1 | 0.08 | | AT 2018kmp | 202.17649 | 55.36520 | Not Classified | | 1 | 0.64 | | AT 2019aale | 88.35786 | 11.38557 | AGN | | 5 | 0.07 | | AT 2019azh | 123.32061 | 22.64834 | TDE | ASASSN-19dj, ZTF17aaazdba, Gaia19bvo, ZTF18achzddr | 1,2 | 0.16 | | AT 2019bvk | 192.01691 | 18.97612 | Not Classified | | 1 | 0.82 | | SN 2019bxh | 222.97121 | 55.40762 | SN Ia | | 1 | 0.86 | | AT 2019cxz | 161.54031 | 24.26684 | Not Classified | ZTF19aaoxijx, ATLAS19ghr | 1 | 0.22 | | AT 2019dec | 186.84407 | 30.09247 | Not Classified | | 1 | 0.26 | | AT 2019ev | 106.68529 | 18.52649 | Not Classified | | 5 | 0.69 | | AT 2019gq | 120.01922 | 58.76122 | Not Classified | | 5 | 0.25 | | SN 2019kcj | 242.67834 | 15.15591 | SN Ia | | 1 | 0.83 | | AT 2019lwx | 184.82792 | 15.77210 | Not Classified | | 1 | 0.12 | | AT 2019mds | 248.50573 | 77.60043 | Not Classified | | 5 | 0.69 | | AT 2019nks | 97.10029 | -17.28797 | Not Classified | Gaia19dmk | 5 | 0.13 | | SN 2019nvh | 237.65586 | 23.04686 | SN Ia | ZTF19abpuikr | 1 | 0.23 | | SN 2019oc | 120.01929 | 40.07781 | SN Ia | | 1 | 0.43 | | AT 2019ofz | 126.61545 | 8.74346 | Not Classified | ZTF18acidntq | 1 | 0.17 | | AT 2019rru | 117.01817 | 22.80833 | Not Classified | ZTF19acbmotn | 1 | 0.07 | | AT 2019rvr | 222.22627 | 31.65809 | Not Classified | | 1,2 | 0.37 | | SN 2019vtx | 124.39083 | 56.04854 | SN Ia | ATLAS19bbyw, ZTF19acxlcdz | 1 | 0.26 | | SN 2019vva | 346.50381 | 0.31130 | SN Ia | ATLAS19bcew, ZTF19acxgwid | 1 | 0.41 | | 
AT 2019xdn | 325.53655 | -7.33813 | Not Classified | ZTF19adakvxy | 1,2 | 0.32 | | AT 2019xeg | 148.11258 | 36.71760 | Not Classified | ZTF19aczjwxq | 1 | 0.46 | | SN 2020K | 147.52712 | 5.72366 | SN Ia | ZTF20aaawbkz | 1 | 0.33 | | AT 2020aajc | 98.97280 | 36.69855 | Not Classified | | 5 | 0.64 | | AT 2020abg | 154.10578 | 42.09326 | Not Classified | | 1 | 0.46 | | AT 2020adfy | 136.58307 | 52.36386 | Not Classified | | 1,2 | 0.13 | | AT 2020aqw | 52.20150 | -4.21910 | Not Classified | | 5 | 0.19 | | AT 2020as | 157.73045 | -19.45978 | Not Classified | | 5 | 0.45 | | AT 2020bub | 123.50916 | 21.36741 | Not Classified | | 1 | 0.72 | | AT 2020bwk | 229.53838 | 45.17097 | Not Classified | | 7,10 | 0.56 | | AT 2020cj | 192.39751 | 47.83122 | Not Classified | ATLAS20ank | 1 | 0.40 | | AT 2020fwm | 229.41903 | 13.85177 | Not Classified | ZTF20aauolxq | 1,2 | 0.21 | | AT 2020hpd | 252.51063 | 26.42812 | Not Classified | | 1 | 0.27 | | SN 2020ism | 208.28775 | 47.29950 | SN Ia | ZTF20aavynba, ATLAS20lvt | 1 | 0.55 | | AT 2020iwa | 241.15838 | 26.61666 | Not Classified | | 1 | 0.79 | | AT 2020jeu | 156.96197 | 2.60946 | Not Classified | PS20ctp | 1 | 0.54 | | SN 2020jfn | 138.76949 | 10.25872 | SN Ia | | 1 | 0.53 | | AT 2020jgq | 237.06576 | 21.02477 | Not Classified | PS20cuf | 1 | 0.32 | | SN 2020jny | 180.74371 | 20.08543 | SN Ia | | 1 | 0.93 | | AT 2020kxs | 331.24181 | 1.00131 | Not Classified | | 1 | 0.13 | | AT 2020lce | 313.23469 | 3.42653 | Not Classified | PS20dki | 5 | 0.97 | | AT 2020nmi | 191.82891 | 44.45004 | Not Classified | | 1 | 0.54 | | SN 2020rba | 23.97291 | 39.95597 | SN Ia | | 5 | 0.13 | | AT 2020skf | 39.25393 | 26.62204 | Not Classified | | 5 | 0.10 | | AT 2020vao | 252.52531 | 32.30183 | Not Classified | ATLAS20bcop, ZTF20abimsxj | 1 | 0.29 | | SN 2020vf | 182.90418 | 0.32232 | SN Ia | ZTF20aafcjln, ATLAS20auq, PS20qz | 1 | 0.89 | | AT 2020vgn | 231.77848 | 34.06877 | Not Classified | ZTF19aanleed | 1 | 0.24 | | AT 2020wey | 136.35783 | 61.80255 | TDE | | 1 | 0.11 | | AT 2020xna | 93.25584 | 70.34574 | Not Classified | ZTF20aclflri | 5 | 0.99 | | SN 2020xsr | 142.20817 | 46.65334 | SN Ia | | 1 | 0.07 | | AT 2020ybp | 244.22378 | 31.31074 | Not Classified | ZTF18aavkrsj | 1 | 0.81 | | AT 2020ygl | 138.30031 | 5.13456 | Not Classified | | 1 | 0.23 | | AT 2020yln | 30.90734 | 0.35947 | Not Classified | PS20kme | 1,2 | 0.85 | | AT 2020zet | 155.16050 | -3.72955 | Not Classified | Gaia20fdo | 5 | 0.07 | | AT 2021abe | 79.57412 | 58.44900 | Not Classified | | 5 | 0.34 | | AT 2021bwg | 225.31005 | 13.02212 | Not Classified | ZTF21aahfizx, ATLAS21egb | 1 | 0.76 | | SN 2021cky | 229.39751 | 18.11807 | SN Ia | ATLAS21eev, ZTF21aahfjbs | 1 | 0.04 | | AT 2021duz | 207.86825 | 40.44792 | Not Classified | ZTF18aaviokz | 1 | 0.26 | | AT 2021ee | 1.42135 | 4.13466 | Not Classified | ZTF21aaaqmuf | 6 | 0.91 | | SN 2021hmc | 148.62746 | 53.16909 | SN Ia | ZTF18acsremz | 1 | 0.39 | | AT 2021igj | 223.81404 | 13.65745 | Not Classified | ATLAS21kcw | 1,2 | 0.64 | | AT 2021ksh | 199.28991 | 59.75154 | Not Classified | ZTF21aawehzm | 1 | 0.90 | | SN 2021kun | 166.41407 | 19.46029 | SN Ia | ZTF21aaxxjen, ATLAS21ofj | 1 | 0.83 | | AT 2021kxv | 123.96870 | 29.80600 | Not Classified | ZTF21aaycwpu | 1 | 0.58 | | AT 2021lkq | 316.71028 | 10.73572 | Not Classified | ZTF21aazmjaf, ATLAS21ovp | 1 | 0.71 | | AT 2021lml | 215.61896 | 31.75931 | Not Classified | ZTF21aaxyfzb | 1 | 0.67 | | AT 2021mkd | 233.68061 | 56.99324 | Not Classified | ZTF21abbsncs | 1 | 0.67 | | AT 2021no | 70.98989 | -38.21690 | 
Not Classified | | 6 | 0.83 | | AT 2021nzg | 216.72033 | 23.16105 | Not Classified | ZTF21abcpqpv | 1 | 0.91 | | AT 2021osm | 330.12554 | 21.05220 | Not Classified | ZTF21abebkok | 7 | 0.44 | | SN 2021qus | 196.26836 | 60.77193 | SN Ia | ZTF21abhqqwq | 7,10 | 0.16 | | AT 2021rw | 78.70566 | 48.03333 | Not Classified | | 5 | 0.37 | | AT 2021spt | 240.54084 | 6.64751 | Not Classified | ZTF21abkayhx | 1 | 0.57 | | AT 2021uhf | 233.22212 | 13.10198 | Not Classified | PS21iil, ATLAS21bdxd | 5 | 0.61 | | AT 2021vp | 187.14868 | 27.67625 | Not Classified | | 1 | 0.63 | | ## Appendix B ZTF Events We list in Table LABEL:tab:ztf the full set of events in the ZTF public alert stream coincident with an FZ18 galaxy, retrieved from the Lasair broker. For each event, we note, among other information, its Sherlock (Lasair machine- learning contextual) classification, its TNS name and spectroscopic classification (if they exist), and its redshift (from the host galaxy spectrum, if available, or from other sources - see main text for details). We also note its light curve “rank” if it has sufficient detections (see main text for details). We also provide the light curves, as retrieved from the ALeRCE broker, for the “silver” (Fig. 5) and “bronze” (Fig. 6) sets of events, and the spectrum we obtained of the host galaxy of ZTF20abxphdt used to determine its redshift (Fig. 7). Table 4: Events in the ZTF Public Alert Stream Coincident within 1″ With the Center of a Galaxy in the FZ18 Catalog. ZTF Name | RA | Dec | No. of | Sherlock | TNS | TNS | FZ18 | Rank | Redshift ---|---|---|---|---|---|---|---|---|--- | | | Detections | Classa | Name | Class | Table(s) | | ZTF17aaajpbn | 176.80933 | 14.14249 | 4 | NT | | | 1 | | | | ZTF17aaazdba | 123.32063 | 22.64830 | 111 | NT | AT 2019azh | TDE | 1,2 | | | | ZTF17aabwnst | 189.25383 | 29.66105 | 2 | NT | | | 1 | | | | ZTF17aabxbwj | 204.63031 | 33.17805 | 2 | NT | | | 1 | | | | ZTF17aaclxhm | 43.78316 | -3.69214 | 2 | SN | | | 6 | | | | ZTF17aacvwvn | 131.40772 | 36.93465 | 3 | NT | | | 1 | | | | ZTF18aaajljy | 184.82790 | 15.77213 | 2 | NT | AT 2019lwx | | 1 | | | | ZTF18aaapdih | 197.89169 | 18.33195 | 3 | NT | | | 1 | | | | ZTF18aaapdnx | 177.43238 | 23.44518 | 5 | NT | | | 1 | | | | ZTF18aabduzj | 194.59098 | 27.96778 | 4 | AGN | | | 5 | | | | ZTF18aabdvaj | 194.61838 | 27.55939 | 2 | NT | | | 1,2 | | | | ZTF18aabdvak | 194.32420 | 27.81093 | 2 | NT | | | 1 | | | | ZTF18aabdvqf | 183.38759 | 28.86269 | 2 | NT | | | 1 | | | | ZTF18aabeibv | 204.80101 | 28.40470 | 4 | NT | | | 1,2 | | | | ZTF18aabkjjg | 147.64402 | 44.32343 | 4 | NT | | | 1,5 | | | | ZTF18aabvmer | 179.28363 | 26.49880 | 4 | NT | | | 1 | | | | ZTF18aabvotu | 187.14849 | 27.67622 | 6 | NT | AT 2021vp | | 1 | | | | ZTF18aacbnkp | 183.11203 | 29.14921 | 4 | NT | | | 1 | | | | ZTF18aacbvcr | 198.16849 | 18.01383 | 2 | NT | | | 1 | | | | ZTF18aaccrsz | 199.42068 | 17.69778 | 4 | NT | | | 1,2 | | | | ZTF18aadurgi | 98.97258 | 36.69853 | 9 | SN | AT 2020aajc | | 5 | | | | ZTF18aafomqo | 106.03674 | 21.89812 | 2 | VS | | | 5,8 | | | | ZTF18aaggkrv | 154.38140 | 46.98529 | 2 | NT | | | 1 | | | | ZTF18aaguaep | 187.01362 | 19.29261 | 2 | NT | | | 1,2 | | | | ZTF18aagvaym | 196.24128 | 53.78086 | 10 | NT | | | 1 | | | | ZTF18aagydai | 213.87528 | 41.01925 | 2 | VS | | | 5 | | | | ZTF18aahhpyk | 142.55552 | 49.48830 | 2 | AGN | | | 1 | | | | ZTF18aahhvnr | 192.25703 | 28.14522 | 4 | NT | | | 1 | | | | ZTF18aahitda | 165.28990 | 51.36865 | 573 | AGN | | | 1 | Bronze | 0.252 | | ZTF18aahjcqm | 209.15875 | 41.69028 
| 13 | NT | | | 1,2 | | | | ZTF18aahlyfs | 176.60789 | 11.12072 | 2 | NT | | | 1 | | | | ZTF18aahpyvk | 158.09435 | 40.16202 | 4 | NT | | | 1 | | | | ZTF18aahpyvz | 158.81903 | 39.89783 | 2 | NT | | | 1 | | | | ZTF18aahruqt | 194.24561 | 47.15973 | 2 | NT | | | 1,2 | | | | ZTF18aahrytm | 191.49862 | 40.77485 | 15 | NT | | | 1 | | | | ZTF18aahsldb | 179.93421 | 25.62104 | 4 | NT | | | 1 | | | | ZTF18aahugzi | 202.42121 | 36.90520 | 2 | AGN | | | 1 | | | | ZTF18aahvlpf | 194.87449 | 31.33566 | 2 | NT | | | 1 | | | | ZTF18aahvqfu | 199.61583 | 41.23478 | 3 | NT | | | 5,8 | | | | ZTF18aaiaasw | 230.50746 | 43.53232 | 9 | NT | | | 1,2 | | | | ZTF18aaierdy | 190.18980 | 49.99357 | 4 | NT | | | 1 | | | | ZTF18aaigdlt | 170.53431 | 19.54339 | 3 | NT | | | 1,2 | | | | ZTF18aaigdoo | 170.53412 | 19.65605 | 2 | NT | | | 5 | | | | ZTF18aaiidgi | 191.61182 | 50.79206 | 7 | NT | | | 1,2 | | | | ZTF18aaijdfe | 207.13159 | 26.94995 | 5 | NT | | | 1 | | | | ZTF18aailgpx | 207.24917 | 57.28680 | 2 | NT | | | 1 | | | | ZTF18aaitirs | 193.99900 | 27.95508 | 2 | NT | | | 1 | | | | ZTF18aaitisu | 194.41423 | 27.60608 | 2 | NT | | | 5 | | | | ZTF18aaititf | 194.44987 | 27.72655 | 2 | NT | | | 1 | | | | ZTF18aaititz | 194.55010 | 27.12764 | 2 | NT | | | 1 | | | | ZTF18aaitiuh | 194.89960 | 27.26185 | 4 | NT | | | 1 | | | | ZTF18aaiuidq | 205.06954 | 30.13916 | 2 | NT | | | 1 | | | | ZTF18aaiumcj | 207.87447 | 36.97036 | 4 | NT | | | 1 | | | | ZTF18aaivfcz | 220.38522 | 46.67626 | 2 | NT | | | 1,2 | | | | ZTF18aaizcpj | 227.22954 | 37.55827 | 2 | NT | | | 1 | | | | ZTF18aajhtpx | 161.77724 | 29.21479 | 2 | NT | | | 1 | | | | ZTF18aajiwps | 182.65186 | 37.09352 | 2 | NT | | | 1 | | | | ZTF18aajjcxo | 179.82603 | 56.01073 | 3 | NT | | | 1 | | | | ZTF18aakclqr | 169.97609 | 33.09011 | 2 | NT | | | 1 | | | | ZTF18aakdcaa | 180.83726 | 32.47185 | 4 | NT | | | 1 | | | | ZTF18aakearo | 203.34913 | 53.35837 | 3 | NT | | | 1,2 | | | | ZTF18aakeljn | 207.45322 | 54.61891 | 9 | NT | | | 1 | | | | ZTF18aakexgg | 179.82740 | 29.76499 | 2 | NT | | | 1 | | | | ZTF18aakexzw | 173.41881 | 61.88803 | 6 | NT | | | 1 | | | | ZTF18aakkvsz | 194.63185 | 28.46500 | 8 | AGN | | | 1 | | | | ZTF18aakkxxq | 207.33247 | 26.66142 | 2 | NT | | | 1 | | | | ZTF18aaklbpu | 194.63964 | 27.26884 | 2 | NT | | | 5 | | | | ZTF18aaklbrg | 194.64070 | 27.67467 | 2 | NT | | | 1 | | | | ZTF18aaklevm | 179.48817 | 26.64495 | 2 | NT | | | 1 | | | | ZTF18aakmawv | 180.15680 | 24.03613 | 5 | NT | | | 1 | | | | ZTF18aakocga | 182.02554 | 24.64104 | 4 | NT | | | 1 | | | | ZTF18aakopca | 181.78593 | 25.50764 | 4 | NT | | | 1 | | | | ZTF18aakqyen | 176.73658 | 55.48222 | 2 | NT | | | 1 | | | | ZTF18aaktrha | 202.97896 | 47.88283 | 2 | NT | | | 1,2 | | | | ZTF18aakycgz | 240.93534 | 52.40347 | 3 | NT | | | 1 | | | | ZTF18aaloxok | 166.69943 | 24.92974 | 2 | NT | | | 1 | | | | ZTF18aalpnoq | 172.95417 | 15.89748 | 2 | NT | | | 5 | | | | ZTF18aalpymq | 196.35761 | 53.59176 | 9 | NT | | | 1,2 | | | | ZTF18aalqeng | 184.26493 | 46.36008 | 2 | NT | | | 1 | | | | ZTF18aalqgpx | 184.66142 | 44.53526 | 2 | NT | | | 1 | | | | ZTF18aalvtaz | 169.37998 | 37.04431 | 2 | NT | | | 1,2 | | | | ZTF18aamtgdp | 195.58759 | 47.63087 | 2 | NT | | | 1 | | | | ZTF18aamtwrd | 215.59798 | 61.69753 | 9 | NT | | | 1 | | | | ZTF18aamvcmk | 223.73102 | 45.52405 | 3 | NT | | | 1,2 | | | | ZTF18aamvuds | 226.15446 | 48.73879 | 2 | NT | | | 1 | | | | ZTF18aamzhyk | 233.73798 | 58.49971 | 2 | NT | | | 1 | | | | ZTF18aanajij | 204.49158 | 65.73624 | 5 | NT | | | 1,2 | | | | ZTF18aancdpi | 
219.52642 | 30.50827 | 2 | NT | | | 1 | | | | ZTF18aanyflw | 258.52783 | 57.99358 | 4 | NT | | | 5 | | | | ZTF18aaobdql | 175.74104 | 54.95524 | 4 | NT | | | 1 | | | | ZTF18aaodxwb | 192.30108 | 30.49497 | 2 | NT | | | 1 | | | | ZTF18aaoszhl | 202.17625 | 55.36532 | 4 | NT | AT 2018kmp | | 1 | | | | ZTF18aaozhtf | 252.51060 | 26.42819 | 2 | NT | AT 2020hpd | | 1 | | | | ZTF18aaqccbr | 161.31841 | 39.38040 | 2 | NT | | | 1 | | | | ZTF18aaqdfzb | 189.28210 | 10.70564 | 2 | NT | | | 1,5 | | | | ZTF18aaqjlvs | 178.85462 | 56.74740 | 2 | NT | | | 1 | | | | ZTF18aaqjuut | 197.79692 | 39.31070 | 2 | NT | | | 1 | | | | ZTF18aaqjvng | 199.61664 | 32.53776 | 2 | NT | | | 1 | | | | ZTF18aaqjxfc | 188.13463 | 40.01290 | 3 | NT | | | 1 | | | | ZTF18aaqkryj | 188.58514 | 65.50501 | 3 | NT | | | 1 | | | | ZTF18aaqldph | 199.80184 | 31.64632 | 2 | NT | | | 1 | | | | ZTF18aaqmphu | 219.43947 | 51.40579 | 69 | NT | | | 1,2 | Bronze | 0.148 | | ZTF18aaqqgit | 161.20284 | 51.95051 | 242 | NT | AT 2018ahh | | 1 | Bronze | 0.064 | | ZTF18aaqrssj | 218.44611 | 54.66503 | 2 | NT | | | 1 | | | | ZTF18aaqsbte | 238.66114 | 55.91026 | 2 | NT | | | 1 | | | | ZTF18aarbklo | 158.63712 | 19.22056 | 8 | NT | | | 1 | | | | ZTF18aarcjhu | 181.14671 | 30.10879 | 2 | NT | | | 1 | | | | ZTF18aardnpw | 165.15395 | 44.09749 | 2 | NT | | | 1 | | | | ZTF18aarfvib | 233.69244 | 31.57719 | 2 | NT | | | 1 | | | | ZTF18aaricta | 256.48927 | 63.01544 | 2 | NT | | | 1,2 | | | | ZTF18aariyxn | 207.02868 | 26.69441 | 2 | NT | | | 1 | | | | ZTF18aarlbsg | 176.73969 | 31.21330 | 2 | NT | | | 1,2 | | | | ZTF18aarlbvb | 176.72379 | 31.02253 | 2 | NT | | | 1 | | | | ZTF18aarlbvk | 175.92728 | 30.64308 | 2 | NT | | | 1 | | | | ZTF18aarlieh | 201.56987 | 58.68326 | 2 | AGN | | | 1 | | | | ZTF18aarlqcb | 179.38820 | 32.61041 | 2 | NT | | | 1 | | | | ZTF18aarzxlx | 201.31726 | 52.45665 | 5 | NT | | | 1 | | | | ZTF18aastwrz | 180.51064 | 15.01072 | 6 | AGN | AT 2018bvy | | 1 | | | | ZTF18aasvckm | 197.94258 | 14.57041 | 2 | NT | | | 5 | | | | ZTF18aasxihq | 197.96059 | 19.34811 | 84 | AGN | | | 1 | Bronze | 0.398 | | ZTF18aasxikm | 197.96059 | 19.34811 | 2 | AGN | | | 1 | | | | ZTF18aasyrnk | 219.41440 | 29.28828 | 7 | NT | | | 5 | | | | ZTF18aaszvlm | 221.06435 | 55.37334 | 2 | NT | | | 1 | | | | ZTF18aathzlp | 218.72835 | 8.05178 | 2 | NT | | | 1,2 | | | | ZTF18aauvezc | 166.83199 | 46.38329 | 2 | NT | | | 5 | | | | ZTF18aauxynp | 221.28056 | 52.15134 | 2 | NT | | | 1,2 | | | | ZTF18aavdqoy | 278.84087 | 53.45010 | 103 | VS | | | 5 | Bronze | | | ZTF18aaviokz | 207.86833 | 40.44796 | 12 | NT | AT 2021duz | | 1 | | | | ZTF18aavrofn | 165.17610 | 40.34667 | 2 | NT | | | 1 | | | | ZTF18aawamsd | 299.38607 | 36.07132 | 6 | VS | | | 5,8 | | | | ZTF18aawjgxa | 247.47959 | 24.02946 | 2 | NT | | | 1 | | | | ZTF18aawmnql | 135.30920 | 51.20681 | 17 | NT | | | 1 | | | | ZTF18aawnjqz | 167.62140 | 30.69830 | 2 | NT | | | 5 | | | | ZTF18aawnqoc | 171.99696 | 35.77907 | 2 | NT | | | 1 | | | | ZTF18aawohpc | 177.52612 | 26.58845 | 2 | NT | | | 1 | | | | ZTF18aawolyo | 154.21407 | 22.49742 | 2 | NT | | | 1 | | | | ZTF18aawpght | 169.73594 | 23.66164 | 2 | AGN | | | 1 | | | | ZTF18aawqupi | 229.53917 | 35.85220 | 4 | NT | | | 1 | | | | ZTF18aawxmbu | 172.67019 | 56.48612 | 2 | NT | | | 1,2 | | | | ZTF18aaxeoqk | 216.65238 | 15.40177 | 8 | NT | | | 1 | | | | ZTF18aaxlucv | 298.53403 | 21.56921 | 19 | VS | | | 5 | | | | ZTF18aaxqtyr | 156.72910 | 45.69590 | 2 | NT | | | 1 | | | | ZTF18aaxzlva | 172.15612 | 50.82742 | 7 | NT | | | 1 | | | | ZTF18aaybewz | 
214.32642 | 46.69823 | 2 | NT | | | 1 | | | | ZTF18aaybkrg | 262.05487 | 51.26042 | 17 | SN | | | 5 | | | | ZTF18aayijie | 195.59515 | 31.96619 | 3 | NT | | | 5 | | | | ZTF18aayimoj | 196.32216 | 32.17940 | 2 | NT | | | 1 | | | | ZTF18aayinmz | 183.41029 | 40.33776 | 2 | AGN | | | 1 | | | | ZTF18aazjsus | 222.49990 | 49.86213 | 2 | NT | | | 1 | | | | ZTF18aazogjs | 200.51138 | 25.48533 | 3 | NT | | | 5 | | | | ZTF18abaeuzc | 158.76575 | 52.15126 | 40 | NT | | | 1 | Bronze | 0.143 | | ZTF18abakxep | 186.84405 | 30.09254 | 17 | NT | AT 2019dec | | 1 | | | | ZTF18abalpqy | 183.13028 | 15.27034 | 6 | AGN | | | 1 | | | | ZTF18abawbfn | 227.65716 | 61.74347 | 3 | NT | | | 1 | | | | ZTF18abbiyrc | 249.29601 | 25.45319 | 3 | NT | | | 1 | | | | ZTF18abbjazr | 232.32219 | 30.87345 | 2 | NT | | | 1 | | | | ZTF18abbmmqp | 284.48631 | 29.20436 | 4 | NT | | | 5 | | | | ZTF18abdmqna | 218.83266 | 30.09610 | 2 | NT | | | 1 | | | | ZTF18abeaizq | 241.56769 | 21.06364 | 5 | NT | | | 5 | | | | ZTF18abeakzq | 239.22698 | 25.54267 | 18 | AGN | | | 1 | | | | ZTF18abguhwj | 173.80969 | 45.03930 | 4 | NT | | | 1 | | | | ZTF18abhaejc | 355.91228 | 54.50881 | 2 | VS | | | 5 | | | | ZTF18abjpdmt | 16.65585 | 58.54964 | 66 | VS | | | 5 | Bronze | | | ZTF18abjyiua | 235.22248 | 9.74992 | 2 | NT | | | 1 | | | | ZTF18abkefdc | 336.05717 | 32.34489 | 2 | VS | | | 5 | | | | ZTF18ablpfdj | 274.47118 | 5.82349 | 13 | VS | | | 5,8 | | | | ZTF18abmhkff | 297.39815 | 13.01845 | 2 | VS | | | 5 | | | | ZTF18abmrhom | 325.04727 | 21.55846 | 13 | NT | SN 2018ffi | SN Ia | 5 | | | | ZTF18abmrlqs | 329.47782 | 55.16463 | 5 | VS | | | 8 | | | | ZTF18abmwycg | 233.82758 | 26.64937 | 2 | NT | | | 1 | | | | ZTF18abnvgif | 24.63887 | 54.75873 | 2 | NT | | | 5 | | | | ZTF18aboswes | 24.48174 | 56.08004 | 4 | SN | | | 5 | | | | ZTF18abottlo | 17.40277 | -15.98532 | 4 | NT | | | 6 | | | | ZTF18abpeoqr | 20.35424 | -16.01233 | 3 | VS | | | 6 | | | | ZTF18abqjfnx | 240.11169 | 53.25051 | 2 | NT | | | 1 | | | | ZTF18abrgace | 287.00244 | -8.07820 | 4 | VS | | | 5 | | | | ZTF18abryusp | 34.40855 | 29.74434 | 13 | VS | | | 5 | | | | ZTF18absgoav | 350.16196 | 42.86538 | 2 | NT | | | 5 | | | | ZTF18abtafdw | 321.34911 | 55.98211 | 4 | SN | | | 5 | | | | ZTF18abteulk | 330.57046 | 52.49530 | 2 | VS | | | 5 | | | | ZTF18abtfwhe | 15.17917 | -0.48611 | 4 | NT | | | 5 | | | | ZTF18abtgunq | 38.44556 | -1.02454 | 279 | AGN | AT 2018cqh | | 1 | Silver | 0.049 | | ZTF18abtmtit | 36.64124 | -1.10785 | 231 | NT | | | 1 | Bronze | 0.096 | | ZTF18abtsxba | 34.77290 | -17.42028 | 10 | AGN | | | 6 | | | | ZTF18abttdtx | 97.60642 | 63.67812 | 15 | AGN | | | 5 | | | | ZTF18abugoat | 332.14588 | 55.09960 | 13 | VS | | | 5 | | | | ZTF18abupbwb | 27.78328 | -7.53520 | 10 | NT | | | 6 | | | | ZTF18abupcpv | 28.82121 | -6.18079 | 3 | NT | | | 6 | | | | ZTF18abvttfg | 137.03671 | 50.15561 | 8 | NT | | | 1 | | | | ZTF18abwmuua | 249.64342 | 31.87296 | 4 | NT | SN 2018gvb | SN Ia | 1 | | | | ZTF18abwnzru | 305.44510 | 33.88024 | 3 | VS | | | 5 | | | | ZTF18abxmzjw | 26.52993 | 23.44960 | 2 | SN | | | 5 | | | | ZTF18abxpfzq | 21.75555 | 0.94317 | 2 | NT | | | 6 | | | | ZTF18abxplzo | 317.17288 | 54.14834 | 5 | VS | | | 5 | | | | ZTF18abxrrtb | 40.18780 | -20.61313 | 3 | NT | | | 6 | | | | ZTF18abxrsex | 32.08180 | -7.18915 | 9 | NT | | | 6 | | | | ZTF18abxrxwj | 27.27226 | -12.21063 | 13 | NT | | | 6 | | | | ZTF18abxrzrl | 26.62387 | -3.91350 | 2 | NT | | | 6 | | | | ZTF18abxtzvo | 52.46168 | -13.38191 | 6 | NT | | | 5 | | | | ZTF18abxudzf | 26.20205 | -6.18832 | 2 | NT 
| | | 5 | | | | ZTF18acalhmo | 26.60055 | -15.86588 | 2 | SN | | | 6 | | | | ZTF18acbxqdr | 354.67484 | -10.71099 | 2 | NT | | | 1 | | | | ZTF18accvowz | 169.60798 | 36.87661 | 2 | VS | | | 5 | | | | ZTF18accvpqx | 170.04222 | 38.10219 | 2 | NT | | | 1 | | | | ZTF18acdmrhz | 350.00448 | 55.48388 | 7 | VS | | | 5 | | | | ZTF18aceajmw | 60.78337 | -5.76494 | 2 | NT | | | 1 | | | | ZTF18acepkas | 136.84486 | 44.11084 | 2 | NT | | | 1 | | | | ZTF18acerrai | 174.62396 | 32.17043 | 6 | NT | | | 1 | | | | ZTF18acfvwnv | 40.04359 | -12.50032 | 2 | SN | | | 6 | | | | ZTF18acgunuw | 178.93276 | 80.21923 | 3 | NT | | | 5,8 | | | | ZTF18acgwntr | 147.59309 | 25.79599 | 2 | NT | | | 5,8 | | | | ZTF18achzddr | 123.32063 | 22.64830 | 6 | NT | AT 2019azh | TDE | 1,2 | | | | ZTF18aciblji | 174.01353 | 17.35970 | 2 | NT | | | 1 | | | | ZTF18acidntq | 126.61549 | 8.74349 | 21 | AGN | AT 2019ofz | | 1 | Bronze | 0.082 | | ZTF18acjvwsu | 10.41444 | 1.06842 | 5 | NT | | | 1,2,6 | | | | ZTF18acmtyar | 94.94340 | 19.15672 | 4 | VS | | | 5 | | | | ZTF18acpdhit | 140.95394 | 24.81479 | 5 | NT | | | 1 | | | | ZTF18acpdvos | 151.71195 | 1.69278 | 92 | NT | AT 2018hyz | TDE | 1,2 | | | | ZTF18acpeeih | 196.08742 | 1.15098 | 26 | NT | AT 2018ail | Galaxy | 1 | | | | ZTF18acpljng | 140.28578 | 44.91415 | 77 | AGN | | | 1 | Bronze | 0.156 | | ZTF18acpmhuj | 170.08733 | 32.24833 | 2 | NT | | | 1 | | | | ZTF18acpntil | 153.16353 | 46.52167 | 3 | NT | | | 1 | | | | ZTF18acpoxaq | 116.43683 | 46.25113 | 9 | NT | | | 1,2 | | | | ZTF18acqeuaj | 131.11845 | 19.09025 | 2 | NT | | | 1 | | | | ZTF18acqptsi | 213.99632 | 38.89357 | 2 | NT | | | 1 | | | | ZTF18acqyart | 126.61549 | 8.74349 | 38 | AGN | AT 2019ofz | | 1 | Bronze | 0.082 | | ZTF18acrmldj | 152.56724 | -0.07478 | 2 | NT | | | 1,2 | | | | ZTF18acsoyiv | 153.45777 | 39.64187 | 3 | NT | AT 2018jkl | | 1 | | | | ZTF18acsremz | 148.62736 | 53.16918 | 49 | NT | SN 2021hmc | SN Ia | 1 | | | | ZTF18acsripv | 151.42136 | 37.62661 | 2 | NT | | | 1 | | | | ZTF18acusldo | 156.19250 | 10.91342 | 2 | NT | | | 1 | | | | ZTF18acvgzlg | 198.24895 | 18.17804 | 44 | VS | | | 5 | Bronze | | | ZTF18acvimfo | 223.28041 | 3.53817 | 3 | AGN | | | 1 | | | | ZTF18acybdqr | 170.35937 | 28.23510 | 2 | NT | | | 1 | | | | ZTF18aczenvx | 163.73024 | 27.80237 | 12 | NT | | | 1 | | | | ZTF18adbditf | 16.65585 | 58.54964 | 43 | VS | | | 5 | Bronze | | | ZTF18adblgvo | 305.14883 | 19.80043 | 2 | SN | | | 5 | | | | ZTF18adcassp | 241.98193 | 9.46367 | 2 | NT | | | 1 | | | | ZTF19aaabslc | 14.14184 | -3.34520 | 2 | NT | | | 5 | | | | ZTF19aaadfcp | 120.01926 | 58.76128 | 6 | SN | AT 2019gq | | 5 | | | | ZTF19aaakdpz | 78.52702 | 2.51732 | 4 | NT | | | 5 | | | | ZTF19aabpdck | 110.59545 | -10.43971 | 2 | VS | | | 5 | | | | ZTF19aabybwz | 127.48938 | 44.94016 | 4 | NT | | | 1 | | | | ZTF19aacsofi | 120.01938 | 40.07791 | 7 | NT | SN 2019oc | SN Ia | 1 | | | | ZTF19aadufvo | 268.54761 | 13.90573 | 4 | NT | | | 5 | | | | ZTF19aadymum | 200.74947 | 27.11643 | 3 | NT | | | 1 | | | | ZTF19aafncky | 240.76515 | 21.69747 | 2 | NT | | | 1,2 | | | | ZTF19aafzmzl | 132.92260 | 48.57251 | 2 | NT | | | 1 | | | | ZTF19aagrxjd | 140.77516 | 24.02489 | 2 | NT | | | 1,2 | | | | ZTF19aakmdrn | 37.43868 | 53.88640 | 2 | VS | | | 5,8 | | | | ZTF19aalfugu | 248.50624 | 77.60058 | 3 | NT | AT 2019mds | | 5 | | | | ZTF19aamhhgu | 222.97147 | 55.40781 | 22 | NT | SN 2019bxh | SN Ia | 1 | | | | ZTF19aamohnt | 192.01674 | 18.97596 | 2 | NT | AT 2019bvk | | 1 | | | | ZTF19aamrjve | 250.26490 | 32.26472 | 2 | NT | | | 1,2 | | | | 
ZTF19aanleed | 231.77844 | 34.06871 | 2 | NT | AT 2020vgn | | 1 | | | | ZTF19aaoxijx | 161.54037 | 24.26686 | 9 | NT | AT 2019cxz | | 1 | | | | ZTF19aapatmf | 258.89200 | 36.17269 | 2 | SN | | | 5 | | | | ZTF19aarepdu | 88.35785 | 11.38558 | 99 | NT | AT 2019aale | AGN | 5 | Bronze | | | ZTF19aarsqmc | 120.73166 | 67.93110 | 5 | UNCLEAR | | | 5 | | | | ZTF19aavlnpb | 211.64991 | 8.75663 | 2 | NT | | | 1,2 | | | | ZTF19aavocqz | 221.02966 | 18.01263 | 5 | NT | | | 1 | | | | ZTF19aavowrh | 246.27715 | 46.48351 | 5 | NT | | | 1 | | | | ZTF19aayozuj | 230.99805 | 25.97714 | 4 | NT | | | 1 | | | | ZTF19aayrzba | 320.87642 | -7.74598 | 12 | NT | | | 1 | | | | ZTF19aazylqj | 326.96732 | 45.48559 | 4 | VS | | | 5 | | | | ZTF19abaevrx | 156.56979 | 53.41965 | 7 | NT | | | 1 | | | | ZTF19abakysz | 234.26447 | 10.50281 | 2 | NT | | | 1,2 | | | | ZTF19abboojm | 316.42663 | -5.62293 | 2 | NT | | | 1 | | | | ZTF19abcupln | 242.67822 | 15.15571 | 12 | NT | SN 2019kcj | SN Ia | 1 | | | | ZTF19abdkfbr | 24.81343 | 2.01678 | 95 | NT | | | 6 | Bronze | | | ZTF19abdsrof | 159.24873 | 25.92185 | 4 | NT | | | 1 | | | | ZTF19abeyuen | 319.63187 | 10.11603 | 13 | NT | | | 1 | | | | ZTF19abgwjfa | 298.35551 | 70.35697 | 5 | VS | | | 5 | | | | ZTF19abixawb | 2.56205 | 0.13911 | 20 | AGN | | | 1 | Bronze | 0.102 | | ZTF19abktbzk | 246.29572 | 46.44702 | 3 | NT | | | 1 | | | | ZTF19abmonpc | 352.73518 | 13.83201 | 7 | NT | | | 1 | | | | ZTF19abnktav | 249.48949 | 11.60236 | 6 | NT | | | 1 | | | | ZTF19abofgnr | 305.79420 | 15.39329 | 3 | VS | | | 5 | | | | ZTF19abpuikr | 237.65588 | 23.04692 | 5 | NT | SN 2019nvh | SN Ia | 1 | | | | ZTF19absrfvd | 352.59499 | 32.49691 | 2 | VS | | | 5 | | | | ZTF19abssxkv | 341.57856 | 37.14076 | 2 | VS | | | 5,8 | | | | ZTF19abtladp | 43.82673 | 20.12041 | 2 | VS | | | 5 | | | | ZTF19abtrvsq | 28.43509 | -23.61405 | 9 | NT | | | 6 | | | | ZTF19abuoqzz | 339.86966 | 16.62459 | 45 | NT | | | 5 | Bronze | | | ZTF19abxbybm | 285.89145 | -16.15508 | 2 | VS | | | 5 | | | | ZTF19abymgox | 58.46971 | -26.41953 | 26 | VS | | | 5,8 | Bronze | | | ZTF19abymuda | 97.10032 | -17.28794 | 36 | SN | AT 2019nks | | 5 | Bronze | | | ZTF19abzlmxk | 84.34894 | 51.63366 | 30 | NT | | | 5,8 | Bronze | | | ZTF19acaheuq | 29.05425 | 1.03518 | 2 | NT | | | 1 | | | | ZTF19acajpme | 222.22615 | 31.65806 | 2 | NT | AT 2019rvr | | 1,2 | | | | ZTF19acbkqdn | 320.21980 | 11.12030 | 5 | NT | | | 1 | | | | ZTF19acbmotn | 117.01818 | 22.80831 | 20 | NT | AT 2019rru | | 1 | Gold | 0.075 | | ZTF19ackilif | 121.39812 | 24.48656 | 3 | NT | | | 1 | | | | ZTF19acudnoy | 342.21106 | -0.49405 | 2 | NT | | | 1 | | | | ZTF19acuhpoi | 0.86881 | 0.45835 | 2 | NT | | | 1 | | | | ZTF19acumsmk | 34.09070 | -2.94162 | 2 | NT | | | 6 | | | | ZTF19acvtuva | 155.16052 | -3.72954 | 117 | NT | AT 2020zet | | 5 | Silver | 0.173 | | ZTF19acxgwid | 346.50389 | 0.31121 | 9 | NT | SN 2019vva | SN Ia | 1 | | | | ZTF19acxlcdz | 124.39072 | 56.04858 | 39 | NT | SN 2019vtx | SN Ia | 1 | | | | ZTF19acxyvwp | 170.21654 | -1.22918 | 2 | NT | | | 1 | | | | ZTF19aczjwxq | 148.11244 | 36.71754 | 3 | NT | AT 2019xeg | | 1 | | | | ZTF19adakvxy | 325.53655 | -7.33822 | 2 | NT | AT 2019xdn | | 1,2 | | | | ZTF19adbwqgp | 121.65541 | 25.43951 | 2 | NT | | | 5 | | | | ZTF19adcdxlv | 169.88449 | 18.69028 | 46 | NT | | | 8 | Bronze | | | ZTF20aaapfih | 115.67725 | 40.35026 | 3 | NT | | | 1 | | | | ZTF20aaawbkz | 147.52718 | 5.72373 | 10 | NT | SN 2020K | SN Ia | 1 | | | | ZTF20aabpmlx | 157.73044 | -19.45966 | 4 | NT | AT 2020as | | 5 | | | | ZTF20aadxwuh | 
182.45354 | 22.74239 | 32 | AGN | | | 1 | Bronze | 0.026 | | ZTF20aaenbrp | 158.53777 | 4.35856 | 3 | NT | | | 1 | | | | ZTF20aaenpjf | 169.59660 | 3.43240 | 2 | NT | | | 1 | | | | ZTF20aaerplp | 227.14104 | 25.48306 | 2 | NT | | | 1 | | | | ZTF20aaerpyl | 228.53823 | 32.67463 | 3 | AGN | | | 1 | | | | ZTF20aafcjln | 182.90434 | 0.32250 | 42 | NT | SN 2020vf | SN Ia | 1 | | | | ZTF20aafjjfv | 134.13665 | 35.10125 | 2 | NT | | | 1 | | | | ZTF20aagffdu | 154.10567 | 42.09316 | 17 | NT | AT 2020abg | | 1 | | | | ZTF20aagiirb | 237.51859 | 39.25033 | 8 | NT | | | 1 | | | | ZTF20aagoidp | 355.28357 | 49.83056 | 2 | VS | | | 5 | | | | ZTF20aagxiaq | 170.87678 | 29.22858 | 2 | NT | | | 1 | | | | ZTF20aahgpaj | 164.55025 | 11.88881 | 2 | NT | | | 1 | | | | ZTF20aahjzhk | 131.86646 | 3.66901 | 2 | AGN | | | 1 | | | | ZTF20aahmzxd | 112.52822 | -5.41436 | 59 | AGN | | | 5,8 | Bronze | | | ZTF20aajbdvu | 132.65559 | 39.45760 | 5 | NT | | | 1 | | | | ZTF20aajccqq | 229.53817 | 45.17101 | 16 | NT | AT 2020bwk | | 7 | | | | ZTF20aajoyjf | 123.50894 | 21.36742 | 18 | NT | AT 2020bub | | 1 | | | | ZTF20aaozcrd | 192.59831 | -4.25625 | 11 | NT | | | 5,8 | | | | ZTF20aaqsrdk | 177.63922 | 17.85810 | 2 | NT | | | 1 | | | | ZTF20aaqtmxr | 232.53647 | 18.81420 | 5 | NT | | | 5,8 | | | | ZTF20aaraxda | 180.71940 | 27.17761 | 2 | NT | | | 5 | | | | ZTF20aatpsar | 190.93488 | 53.31267 | 2 | NT | | | 1,2 | | | | ZTF20aauolxq | 229.41904 | 13.85171 | 11 | NT | AT 2020fwm | | 1,2 | | | | ZTF20aavevtg | 130.23473 | 10.80814 | 2 | NT | | | 1 | | | | ZTF20aavtydb | 119.34340 | 52.60993 | 2 | NT | | | 1 | | | | ZTF20aavynba | 208.28754 | 47.29955 | 18 | NT | SN 2020ism | SN Ia | 1 | | | | ZTF20aawjeyc | 146.51510 | 31.73927 | 5 | NT | | | 5 | | | | ZTF20aawjwkh | 136.80276 | 15.60691 | 2 | NT | | | 1 | | | | ZTF20aayngca | 138.76935 | 10.25877 | 6 | NT | SN 2020jfn | SN Ia | 1 | | | | ZTF20aazfhyf | 331.24184 | 1.00129 | 3 | NT | AT 2020kxs | | 1 | | | | ZTF20aazgtmp | 180.74383 | 20.08520 | 20 | NT | SN 2020jny | SN Ia | 1 | | | | ZTF20aazmsko | 147.48496 | -0.23136 | 2 | NT | | | 1 | | | | ZTF20abblwrx | 234.53291 | 46.68459 | 2 | NT | | | 1 | | | | ZTF20abbvaqi | 261.04990 | 31.77627 | 5 | NT | | | 1,2 | | | | ZTF20abfrbyv | 155.54340 | 80.20390 | 2 | VS | | | 5 | | | | ZTF20abhuzpg | 233.94868 | 24.57072 | 2 | NT | | | 1 | | | | ZTF20abidglv | 42.66854 | 41.67134 | 3 | NT | | | 5 | | | | ZTF20abimsxj | 252.52541 | 32.30181 | 14 | NT | AT 2020vao | | 1 | | | | ZTF20abismcv | 163.71303 | 33.15199 | 2 | NT | | | 1 | | | | ZTF20abkxrun | 183.02606 | 40.82678 | 4 | NT | | | 1 | | | | ZTF20ablgxjx | 235.15980 | 47.55920 | 3 | NT | | | 1 | | | | ZTF20abmhfza | 260.79061 | 61.37880 | 2 | NT | | | 1 | | | | ZTF20aboquqi | 8.90010 | 31.67528 | 8 | VS | | | 5 | | | | ZTF20abqosnh | 23.97295 | 39.95597 | 70 | SN | SN 2020rba | SN Ia | 5 | | | | ZTF20abrclgl | 11.26834 | 17.69454 | 33 | SN | | | 5 | Silver | | | ZTF20abtjhgf | 31.43169 | 19.16404 | 40 | SN | | | 5 | Bronze | | | ZTF20abujvgp | 49.25103 | 40.72814 | 2 | NT | | | 5 | | | | ZTF20abxphdt | 39.25395 | 26.62206 | 45 | SN | AT 2020skf | | 5 | Gold | 0.068 | | ZTF20abzvzkn | 341.00105 | 39.13234 | 9 | SN | | | 5 | | | | ZTF20acaecko | 354.30551 | -10.09510 | 2 | NT | | | 1,2 | | | | ZTF20acbeztg | 22.15925 | -17.21419 | 28 | VS | | | 6,9 | Bronze | | | ZTF20achuvja | 75.33495 | -22.19069 | 53 | NT | | | 6 | Bronze | | | ZTF20acimzuq | 115.75596 | 41.71005 | 9 | NT | | | 1 | | | | ZTF20acitpfz | 136.35777 | 61.80255 | 30 | NT | AT 2020wey | TDE | 1 | | | | ZTF20ackhzba | 
142.20816 | 46.65336 | 27 | NT | SN 2020xsr | SN Ia | 1 | | | | ZTF20ackryyi | 150.52813 | 10.61114 | 2 | AGN | | | 5 | | | | ZTF20aclflri | 93.25657 | 70.34562 | 9 | SN | AT 2020xna | | 5 | | | | ZTF20aclgfji | 138.30025 | 5.13455 | 18 | NT | AT 2020ygl | | 1 | | | | ZTF20aclzyyl | 9.82111 | -17.20901 | 2 | NT | | | 5,8 | | | | ZTF20acselme | 134.13635 | 39.14106 | 6 | NT | | | 1 | | | | ZTF20adafhdh | 158.59053 | 16.46315 | 6 | NT | | | 1 | | | | ZTF21aaaqmuf | 1.42153 | 4.13448 | 3 | SN | AT 2021ee | | 6 | | | | ZTF21aaaqxuf | 4.53661 | 2.51476 | 2 | NT | | | 5,8 | | | | ZTF21aabasub | 30.22086 | -4.77991 | 14 | NT | | | 5 | | | | ZTF21aacdjdi | 78.70575 | 48.03341 | 11 | SN | AT 2021rw | | 5 | | | | ZTF21aacilcv | 156.72314 | 0.55806 | 4 | NT | | | 1 | | | | ZTF21aadmkrm | 79.57394 | 58.44900 | 17 | SN | AT 2021abe | | 5 | | | | ZTF21aagjlpo | 166.83199 | 46.38329 | 3 | NT | | | 5 | | | | ZTF21aahfizx | 225.30988 | 13.02201 | 10 | NT | AT 2021bwg | | 1 | | | | ZTF21aahfjbs | 229.39751 | 18.11808 | 30 | NT | SN 2021cky | SN Ia | 1 | | | | ZTF21aahuvvx | 82.80987 | -18.43544 | 2 | SN | | | 6 | | | | ZTF21aaiahsu | 165.25533 | 8.08728 | 29 | NT | | | 1 | Gold | 0.088 | | ZTF21aakjhgt | 250.27260 | 25.48488 | 2 | NT | | | 1 | | | | ZTF21aambjvl | 114.78316 | 41.75661 | 3 | NT | | | 5 | | | | ZTF21aandeyw | 224.67104 | 9.18540 | 2 | NT | | | 1 | | | | ZTF21aaqhung | 176.49295 | 19.64975 | 2 | NT | | | 1,2 | | | | ZTF21aaridnv | 203.02324 | 1.59735 | 2 | NT | | | 1 | | | | ZTF21aatrajq | 217.69093 | 57.08014 | 2 | NT | | | 1 | | | | ZTF21aauufbz | 160.22916 | 24.74712 | 6 | NT | | | 1 | | | | ZTF21aaxxjen | 166.41431 | 19.46035 | 22 | NT | SN 2021kun | SN Ia | 1 | | | | ZTF21aaxyfzb | 215.61877 | 31.75940 | 8 | NT | AT 2021lml | | 1 | | | | ZTF21aaycwpu | 123.96885 | 29.80591 | 8 | NT | AT 2021kxv | | 1 | | | | ZTF21aaydzrb | 200.83530 | -1.72122 | 12 | NT | | | 1 | | | | ZTF21aazmjaf | 316.71008 | 10.73577 | 7 | NT | AT 2021lkq | | 1 | | | | ZTF21abbsncs | 233.68031 | 56.99315 | 2 | NT | AT 2021mkd | | 1 | | | | ZTF21abcpqpv | 216.72060 | 23.16110 | 4 | NT | AT 2021nzg | | 1 | | | | ZTF21abebkok | 330.12547 | 21.05209 | 5 | SN | AT 2021osm | | 7 | | | | ZTF21abhqqwq | 196.26842 | 60.77196 | 7 | VS | SN 2021qus | SN Ia | 7 | | | | Figure 5: ZTF light curve of our “Silver” unclassified transient coincident with the center of a galaxy in the FZ18 catalog (triangles denote 5$\sigma$ non-detection upper limits). This event appears to be variable rather than transient. Figure 6: ZTF light curves of our “Bronze” set of unclassified transients coincident with the center of a galaxy in the FZ18 catalog (triangles denote 5$\sigma$ non-detection upper limits). These events either show a rise followed by constant brightness, or have upper-limits intertwined with the detections, indicating they are either subtraction artifacts or flaring Galactic sources. Figure 7: Floyds spectrum of the host galaxy of ZTF20abxphdt used to determine its redshift from the Ca II H+K and Na I D lines (marked). The original spectrum is shown in gray, and a binned spectrum in black.
# TCM-ICP: Transformation Compatibility Measure for Registering Multiple LIDAR Scans

Aby Thomas, Adarsh Sunilkumar, Shankar Shylesh, Aby Abahai T., Subhasree Methirumangalath, Dong Chen and Jiju Peethambaran

Aby Thomas, Department of Computer Science and Engineering, National Institute of Technology, Calicut, Kerala, India; Adarsh Sunilkumar, Department of Computer Science and Engineering, National Institute of Technology, Calicut, Kerala, India; Shankar Shylesh, Department of Computer Science and Engineering, National Institute of Technology, Calicut, Kerala, India; Aby Abahai T., Department of Computer Science and Engineering, National Institute of Technology, Calicut, Kerala, India; Subhasree Methirumangalath, Associate Professor, Department of Computer Science and Engineering, National Institute of Technology, Calicut, Kerala, India; Dong Chen, Associate Professor, College of Civil Engineering, Nanjing Forestry University, Nanjing, China; Jiju Peethambaran, Assistant Professor, Department of Math & Computing Science, Saint Mary’s University, Halifax, Canada

###### Abstract

Rigid registration of multi-view and multi-platform LiDAR scans is a fundamental problem in 3D mapping, robotic navigation, and large-scale urban modeling applications. Data acquisition with LiDAR sensors involves scanning multiple areas from different points of view, thus generating partially overlapping point clouds of the real world scenes. Traditionally, the ICP (Iterative Closest Point) algorithm is used to register the acquired point clouds into a single point cloud that captures the scanned real world scene. Conventional ICP suffers from local minima issues and often needs a coarse initial alignment to converge to the optimum. In this work, we present an algorithm for registering multiple, overlapping LiDAR scans. We introduce a geometric metric called the Transformation Compatibility Measure (TCM) which aids in choosing the most similar point clouds for registration in each iteration of the algorithm. The LiDAR scan most similar to the reference LiDAR scan is then transformed using the simplex technique. An optimization of the transformation using gradient descent and simulated annealing techniques is then applied to improve the resulting registration. We evaluate the proposed algorithm on four different real world scenes, and experimental results show that the registration performance of the proposed method is comparable or superior to the traditionally used registration methods. Further, the algorithm achieves superior registration results even when dealing with outliers.

###### Index Terms:

Point Clouds, Registration Methods, PCL, 3D Registration, ICP, Feature Based

## I Introduction

Over the last decade, Light Detection and Ranging (LiDAR) systems have emerged as a predominant tool for capturing outdoor scenes for various applications such as 3D mapping and navigation, urban modeling and simulation, and risk assessment of urban utilities such as powerlines. LiDAR sensors mounted on various platforms, including aircraft or unmanned aerial vehicles (UAVs), vehicles, satellites and tripods [12], collect the 3D data of outdoor scenes in the form of point clouds. Considering the operational efficiency and quality of scans, multiple scans of the same object or scene are acquired from various positions, which helps remove blind spots in the scan. Multiple scans are then stitched together to generate the final 3D scan of the scene.
In this paper, we focus on a hybrid solution for registering multiple LiDAR scans using statistical and geometrical constraints. In general, point cloud registration is the process of assigning correspondences between two sets of points and recovering the transformation that maps one point set to the other [8]. Point cloud registration frequently applies a coarse-to-fine registration strategy [2]. In coarse registration, the initial registration parameters for the rigid body transformation of two point clouds are mainly estimated using feature-based methods [3]. Registration based on point, line and surface features is included in the feature-based coarse registration methods. In fine registration, the main aim is to achieve maximum overlap of two point clouds, primarily using the iterative approximation method, the normal distribution transform method, the random sample consensus method or methods with auxiliary data [3].

Registration of noisy and overlapping LiDAR data of three-dimensional surfaces or scenes is a challenging problem [2]. The quality of the data provided is a key factor that affects the correctness of point cloud registration. The data obtained through LiDAR is sparse and non-homogeneous. All LiDAR registration mechanisms work on the principle of aligning the key points of the scanned object so as to combine multiple point clouds from various views and recreate a 3D model of the object. The presence of irrelevant and inconsistent data can thus cause complications in the registration process. Such irrelevant data points are commonly referred to as outliers, and removing them from the input becomes vital for building an efficient and accurate model. Only data that has been well preprocessed can be expected to yield promising results.

Many point cloud registration algorithms have been developed over the years. Among the fine registration methods, iterative approximation is widely used; it mainly refers to the ICP (Iterative Closest Point) algorithm proposed by Besl and McKay [1]. However, the iterative nature of the ICP algorithm makes it less efficient when dealing with high-density and large-scale point cloud scenes and also slow at finding corresponding points between two point clouds [3]. In order to overcome the problems of existing LiDAR registration methods, such as the need for auxiliary data or targets, the requirement of sufficient overlapping areas, and the difficulty of feature extraction and matching, we develop an algorithm called TCM ICP (Transformation Compatibility Measure for Registering Multiple LIDAR Scans). We make the following key contributions.

* • A geometrical and statistical metric called the Transformation Compatibility Measure (TCM), which is employed to select the point cloud that is most compatible with the reference/destination cloud for registration.
* • A simplex algorithm based registration method for LiDAR data that takes multiple LiDAR scans (point clouds) of a given object or scene, having variable densities and coordinate frames, as input and generates a 3D model of the object or scene by transforming the various point clouds into a global coordinate system.
* • An optimization method that combines gradient descent and simulated annealing techniques with the transformation matrix calculation to improve the results.
## II Related Work

LiDAR point cloud data sets tend to have non-uniform point distributions and differing coordinate axes, which lead to challenges in registering the points to 3D models. Various coarse registration and fine registration techniques have been developed over the years for registering LiDAR point clouds. In general, point cloud registration can be classified into two categories: feature based registration and iterative approximation methods such as the ICP algorithm. The removal of outliers is one of the important steps of point cloud registration.

### II-A Feature-based Registration

Feature-based transformation models, such as point-based, line-based and plane-based 3D transformation models, have been studied and improved widely. Over the years, these techniques have been used extensively for point cloud registration. The accuracy of the registered points depends on the techniques used for feature extraction [3]. Feature based registration is a point cloud registration mechanism in which key features of the scanned object, such as corners and edges, are extracted from the point clouds and corresponding pairs of features are matched among the point clouds. Jaw and Chuang (2008) [4] described a feature based transformation technique that uses point-based, line-based and plane-based features. This 3D transformation was mathematically described by a 7-parameter spatial similarity transformation [4]. The technique showed promising results with high degrees of flexibility and accuracy when working with datasets of nominal sizes. The transformations are modeled mathematically and the parameters are approximated using techniques such as least squares distance approximation. The method takes a pair of point clouds, applies the necessary transformations and combines the two point clouds according to the matching results. The main drawback of the technique comes from the fact that taking random point clouds and joining their transformed results leads to propagation of errors to neighbouring point clouds. The method performs better than those which use a single feature for registration. Even though it provides a certain amount of robustness, the results proved to be inferior to those of other iterative methods of registration. Forstner and Khoshelham [9] proposed a method for point cloud registration based on plane-to-plane correspondences. This method works by extracting planar regions from the point clouds; the extraction process uses maximum likelihood estimation to extract planes with some degree of uncertainty. Three direct solutions, namely the direct algebraic solution, the direct whitened algebraic solution and the single-iteration ML-solution, were applied on the extracted features in order to guarantee convergence of the feature matching process. These methods are accurate and time efficient to some extent but prove to be statistically suboptimal compared to some iterative approximation methods.

### II-B ICP-based Registration

A major challenge in the registration of LiDAR point clouds using iterative approximation methods is finding a fast and efficient method for obtaining matching point pairs and devising a feasible algorithm for translating and rotating one point cloud onto another reference point cloud. This process is vital in transforming all the point clouds to a single, global coordinate system [13]. Only after the necessary preprocessing, transformation and error checks can the point clouds be registered to produce a 3D model.
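To make the basic iterative scheme concrete, the following is a minimal sketch of one classical point-to-point ICP loop, included for context only; it is neither the proposed TCM ICP nor the PCL implementation. It assumes numpy and scipy are available, finds correspondences with a k-d tree, and recovers the best rigid transform of the matched pairs with the standard SVD-based solution.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    # Least-squares rotation R and translation t mapping src onto dst (SVD/Kabsch).
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

def icp(source, reference, max_iter=50, tol=1e-6):
    # source, reference: (N,3) and (M,3) arrays; returns aligned source and accumulated R, t.
    tree = cKDTree(reference)
    src = source.copy()
    prev_err = np.inf
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(max_iter):
        dist, idx = tree.query(src)            # closest-point correspondences
        R, t = best_rigid_transform(src, reference[idx])
        src = src @ R.T + t                    # apply the incremental transform
        R_total, t_total = R @ R_total, R @ t_total + t
        err = np.sqrt(np.mean(dist ** 2))      # RMS of the matched distances
        if abs(prev_err - err) < tol:          # stop when the error stabilizes
            break
        prev_err = err
    return src, R_total, t_total
```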
Besl and McKay (1992) [1] introduced the iterative closest point (ICP) algorithm. Given an initial guess of the rigid body transformation, the algorithm proved to be successful in a variety of applications related to aligning 3D models. The ICP algorithm fixes one point cloud as the reference point cloud and then runs iteratively on the other (source) point clouds; in each iteration the source point cloud is transformed so as to match the reference point cloud. A root mean square distance between the points is used to align each point in the source point cloud to its match found in the reference point cloud. The process is stopped when the error metric (usually calculated as the sum of distances between matched points) is within some threshold value or some predefined number of iterations is reached. Even though the method put forward by Besl and McKay gave promising results, it came with some drawbacks. It uses a point-to-point distance method for finding corresponding point pairs in the point clouds and removing them. This proved to be less efficient when dealing with datasets of large size. The preprocessing steps employed by this method also proved to be incapable of removing non-matching points and outliers from the point clouds. The need for an initial guess can also be considered a drawback of this algorithm.

Xin and Pu (2010) [1] examined the drawbacks of the existing ICP and came up with an improved ICP algorithm. This method used the center of gravity of the matching pairs as the reference point. This reference point and a combination of orientation constraints are together used to remove false point pairs. The point pair distance constraints used in this method also improved the performance. This modified algorithm uses the same preprocessing step as the conventional ICP algorithm. It introduces improvements in the second step of the algorithm: instead of a point-to-point distance method, this new algorithm uses point pair distance constraints and the centers of gravity of the point clouds as the reference pairs to reject false pairs. The more point pairs there are, the better the output quality of the improved algorithm put forward in [1]. Too few point pairs lead to failure in registration. A higher number of erroneous or non-matching point pairs leads to higher error rates and improper transformation of the point clouds. Matching pairs are highly sensitive to noise and contain a high percentage of false pairs at the early stages of alignment. The accuracy of registration, speed and convergence rate are the points to be improved in this version of ICP.

Go-ICP [14] is a global optimization method based on the well-established Branch-and-Bound (BnB) theory. However, selecting an appropriate domain parametrization to construct a tree structure in BnB and, more importantly, deriving efficient error bounds based on that parametrization remain challenging. In order to address the local minima problem, global registration methods have been investigated in Go-ICP. Here, local ICP is integrated into the BnB scheme, which speeds up the new method while guaranteeing global optimality. It is also possible to accelerate the closest point search using the fast global registration algorithm [16], which does not involve iterative sampling, model fitting, or local refinement. The algorithm does not require initialization and can align noisy, partially overlapping surfaces. It optimizes a robust objective defined densely over the surfaces.
Due to this dense coverage, the algorithm directly produces an alignment that is as precise as that computed by well-initialized local refinement algorithms. The optimization does not require closest-point queries in the inner loop. Another formulation of the ICP algorithm is registration using a sparsity-inducing norm [17], which avoids difficulties such as sensitivity to outliers and missing data. PointNet, which represents point clouds directly, can also be thought of as a learnable imaging function [18]. Here, classical vision algorithms for image alignment can be applied to point cloud registration.

### II-C Effect of Outliers on Registration

The presence of outliers or irrelevant data in the dataset can lead to improper matching of data points among the point clouds during the calculation of the transformation matrices. This in turn leads to lower accuracy and coarse edges in the generated 3D model [14]. Thus, outlier detection and removal becomes vital for producing good results. The most popular outlier detection techniques are distance based, density based and cluster based outlier detection [5]. Distance based outlier detection techniques work by calculating distances between data points: if a point has a small distance to its nearest neighbour, it is considered a normal point; otherwise, it is marked as an outlier. In density based outlier detection, every object in the data set is assigned a local outlier factor (LOF), defined as the ratio between the local density of an object and the average of the local densities of its k nearest neighbors [5]. In clustering based outlier detection, the given data points are clustered into groups, and similar or neighbouring data points are expected to end up in the same cluster [5]. Many clustering algorithms can be used, with the K-means algorithm being one of the most common choices. Statistical techniques such as weighted centre based methods are then applied on each cluster to detect and remove outliers [6].

## III Methodology

The input to our multi-scan registration system is a set of point clouds, $\Psi=\{P_{1},P_{2},..,P_{n}\}$. The proposed model for LiDAR registration (referred to as TCM ICP) has three steps. The first step is preprocessing to remove outliers from the input scans. The second step consists of determining the rotational and translational matrices for each input point cloud. To this end, a reference cloud is selected from the input point clouds based on a correspondence measure. In each iteration, the point cloud with the least TCM (Transformation Compatibility Measure) value is selected for registration. Simplex and gradient optimization techniques are employed to find the optimal rotational and translational matrices for registering the selected point cloud to the reference point cloud. In the final step, the actual transformation of the point cloud to the reference frame is performed.

Figure 1: A sample point cloud before and after outlier removal.

### III-A Preprocessing

In the preprocessing phase, we use a K-means clustering based technique to identify and remove outliers from each point cloud. First, a set of $k$ points is selected from each point cloud with the help of a K-D tree. To this end, each point cloud is embedded into a K-D tree and a centroid point is chosen at random.
All the points lying within a sphere of radius $r$ around it are then removed. This process is repeated until $k$ centroids have been selected. The selected $k$ points are then used as initial centroids in K-means clustering. Note that K-D tree based centroid selection ensures that every pair of centroids is spatially well separated. Further, the use of the K-D tree avoids the randomness in centroid selection and reduces the iteration count of the K-means algorithm. Outlier removal is performed using the $k$ clusters generated by K-means, i.e., points that lie beyond a fixed threshold from their cluster centroid are removed from the point cloud. An example of outlier removal is shown in Figure 1.

### III-B Selection of the Reference Point Cloud

The crucial step in the registration process is finding a rigid transformation that aligns an input point cloud to the reference point cloud. The reference point cloud is the input point cloud that exhibits good correspondence with all the remaining point clouds. Linear inequalities are formulated for all other source point clouds using the underlying concepts of the simplex method [11]. These inequalities are then solved to obtain the transformation matrices for each point cloud, as discussed in Section III-D. The reference point cloud has a significant influence on the quality of registration. Good correspondence between the reference cloud and the remaining point clouds is essential and greatly improves the computational performance in finding optimal transformations. To define the reference point cloud, the correspondence from a point cloud to every other point cloud is calculated. The correspondence of a point cloud $P_{i}$ to another point cloud $P_{j}$ is a closeness measure between $P_{i}$ and $P_{j}$. A threshold is imposed on the correspondence values, i.e., if the correspondence value between $P_{i}$ and $P_{j}$ is less than a threshold, then a connection is said to exist between $P_{i}$ and $P_{j}$. The point cloud with the highest number of direct connections is chosen as the reference point cloud, which will define the reference coordinate system for the final registration. If several point clouds meet this condition, the point cloud located in a central position in the list of files is arbitrarily defined as the reference data, assuming that the data have been acquired successively in the spatial distribution.

Figure 2: An example of a correspondence graph representing connections between point clouds. Note that the algorithm chooses node 7, which has the highest degree, as the reference point cloud for the registration.

### III-C Transformation Compatibility Measure ($\tau$)

The order in which the point clouds are merged using the simplex method can drastically affect the accuracy of the final output. We propose a metric called the Transformation Compatibility Measure (denoted by $\tau$) to impose an ordering on the point cloud selection and merging. The TCM measure (Equation 1) captures the compatibility among different point clouds, quantified through the inter- and intra-cluster distances and a normalization factor based on the sizes of the point clouds. The TCM measure ensures that the most similar point clouds are selected for transformation in each iteration of the registration process. This selection process is pivotal in determining the correct order and values of the point cloud transformations. A short illustrative sketch of the preprocessing and reference-selection steps described above is given below, before the formal definition of the measure.
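The following is a minimal Python sketch of the two steps just described (Sections III-A and III-B), assuming numpy and scipy are available; the correspondence function, the seeding radius $r$ and the per-cluster outlier threshold rule are illustrative placeholders rather than the exact quantities used in our implementation.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.cluster.vq import kmeans2

def remove_outliers(P, k=8, r=1.0, dist_factor=2.5):
    # Section III-A sketch: pick k well-separated seeds with a k-d tree,
    # run K-means, then drop points far from their cluster centroid.
    tree, seeds, mask = cKDTree(P), [], np.ones(len(P), dtype=bool)
    while len(seeds) < k and mask.any():
        idx = np.random.choice(np.flatnonzero(mask))
        seeds.append(P[idx])
        mask[tree.query_ball_point(P[idx], r)] = False   # carve out a sphere of radius r
    centroids, labels = kmeans2(P, np.asarray(seeds), minit='matrix')
    d = np.linalg.norm(P - centroids[labels], axis=1)
    keep = np.ones(len(P), dtype=bool)
    for c in range(len(centroids)):
        in_c = labels == c
        if in_c.any():
            keep[in_c] = d[in_c] <= dist_factor * d[in_c].mean()   # illustrative threshold rule
    return P[keep]

def pick_reference(clouds, correspondence, threshold):
    # Section III-B sketch: build the correspondence graph and return the
    # index of the cloud with the most direct connections.
    n = len(clouds)
    degree = np.zeros(n, dtype=int)
    for i in range(n):
        for j in range(i + 1, n):
            if correspondence(clouds[i], clouds[j]) < threshold:
                degree[i] += 1
                degree[j] += 1
    return int(np.argmax(degree))
```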
$\tau(P_{i},P_{j})=f_{ij}\,g_{ij}\,h_{ij}$ (1)

The inter-cluster distance between two point clouds $P_{i}$ and $P_{j}$ is computed by adding the minimum distances from the points of one cloud to the cluster centroid of the other point cloud, as given in Equations 2-4.

$f_{ij}=X_{i}+Y_{j}$ (2)

$X_{i}=\min(\parallel p_{k}-\mathcal{C}(P_{j})\parallel^{2}\mid\forall p_{k}\in P_{i},\,i\neq j)$ (3)

Here $\mathcal{C}(\cdot)$ denotes the centroid of the point set.

$Y_{j}=\min(\parallel p_{k}-\mathcal{C}(P_{i})\parallel^{2}\mid\forall p_{k}\in P_{j},\,i\neq j)$ (4)

We can observe that Equations 2-4 are a customized version of the Hausdorff metric (Equation 5),

$H(A,B)=\max(h(A,B),h(B,A))$ (5)

which defines the Hausdorff distance between two geometric shapes $A$ and $B$. The two distances $h(A,B)$ and $h(B,A)$ are sometimes termed the forward and backward Hausdorff distances from $A$ to $B$, where

$h(A,B)=\underset{a\in A}{\max}(\underset{b\in B}{\min}(\parallel a-b\parallel^{2}))$ (6)

and $a$ and $b$ are points of the sets $A$ and $B$, respectively. The lower the distance value, the better the matching between $A$ and $B$. The most important use of the Hausdorff metric is that it can be used to check whether a certain feature is present in a point cloud or not. This method gives interesting results even in the presence of noise or occlusion (when the target is partially hidden). In our case, the presence of noise is minimal due to the initial preprocessing. The intra-cluster distance is obtained by calculating the minimum of all point-to-point distances in each cloud and subtracting one cloud's value from the other's (Equations 7-9).

$g_{ij}=M_{i}-N_{j}$ (7)

$M_{i}=\min(\parallel p_{k}-p_{l}\parallel^{2}\mid\forall p_{k},p_{l}\in P_{i},\,k\neq l)$ (8)

$N_{j}=\min(\parallel p_{k}-p_{l}\parallel^{2}\mid\forall p_{k},p_{l}\in P_{j},\,k\neq l)$ (9)

Finally, to avoid the bias that may be induced by the varying numbers of points in the point clouds, we normalize the measure using Equation 10.

$h_{ij}=\frac{1}{|P_{i}||P_{j}|}$ (10)

We use the method of contradiction to establish the correctness of the transformation compatibility measure (refer to Lemma III.1).

###### LEMMA III.1.

The transformation compatibility metric (Equation 1) between the point clouds correctly aids in choosing the most similar point clouds in each iteration.

###### Proof.

The proof is by contradiction. We assume that TCM does not choose the most similar point clouds for transformation: during the $m^{th}$ iteration (for an arbitrary $m$), TCM chooses the point clouds $P_{i}$ and $P_{j}$ as the most similar pair instead of $P_{i}$ and $P_{k}$, which are actually the most similar in the $m^{th}$ iteration, so that
$\tau(P_{i},P_{j})\leq\tau(P_{i},P_{k}).$
TCM has two main components: the first is the customized version of the Hausdorff distance, and the other is the difference between the intra-cluster distances of the two point clouds. As per our assumption, the difference between the intra-cluster distances should be lower for $P_{i}$ and $P_{k}$ than for $P_{i}$ and $P_{j}$. That is, the first part of TCM should be higher for the pair $P_{i}$ and $P_{k}$. This implies that the customized Hausdorff distance between the most similar point clouds is large, which is obviously not correct. Further, the point clouds are obtained after K-D tree based clustering, which implies outliers have been removed, indicating that $\tau(P_{i},P_{j})>\tau(P_{i},P_{k})$. The point cloud pair with the least value of TCM is chosen during each iteration.
Since $\tau(P_{i},P_{k})$ is lower than $\tau(P_{i},P_{j})$, the algorithm would have originally chosen $P_{i}$ and $P_{k}$ for transformation. Thus our assumption that the wrong point cloud pair was chosen by TCM is contradicted, and hence the proof. ∎

### III-D Registration of Point Clouds

#### Simplex Algorithm for Registration

Consider two point clouds $P_{i}$ and $P_{j}$; our objective is to find the rigid transformation that aligns $P_{j}$ to $P_{i}$. The transformation of one point cloud to another involves a translation $T$ as well as a rotation $R$. Given $P_{i}$ and $P_{j}$, computing the unknown $T$ and $R$ effectively is the key aspect of point cloud registration. A point cloud $P$ transformed into $P^{\prime}$ can be expressed by Equation 11.

$P^{\prime}=R*P+T$ (11)

Solving this equation yields the rotation and translation matrices. In our problem, the number of constraints that can be formulated is less than the number of variables present and, as a consequence, most methods for solving linear equations fail to work here. The problem is thus modeled as an optimization problem with three possibilities:

$P^{\prime}>R*P+T$

$P^{\prime}<R*P+T$

$P^{\prime}=R*P+T$

The values of $R$ and $T$ with the least error, obtained by optimizing these inequalities using the simplex method [11], are used as the rotational and translational matrices for a point cloud. Though algebraic in nature, the underlying concepts of the simplex algorithm are geometric. The simplex algorithm tests adjacent vertices of the feasible set (which is a polytope) in sequence so that at each new vertex the objective function improves or is unchanged. The simplex method is very efficient in practice, generally taking at most $2m$ to $3m$ iterations (where $m$ is the number of equality constraints), and converging in expected polynomial time for certain distributions of random inputs. However, its worst-case complexity is exponential. The point cloud with the best transformation, that is, the point cloud that has the minimum deviation from the reference point cloud, is selected and merged with the reference point cloud after the transformation. This process of calculating translational and rotational matrices and merging point clouds is continued until a single point cloud in the reference coordinate system is obtained. The merging of point clouds is done with the expectation that future iterations of the process will lead to an improvement in the quality of the dataset.

#### Optimization

We combine gradient descent and simulated annealing techniques with the transformation matrix calculation to improve the results. The translation and rotation matrices calculated using the simplex algorithm are fed to the gradient descent method to calculate locally optimal values for the translation and rotation. Gradient descent is an optimization algorithm that minimizes a function by iteratively moving in the direction of steepest descent, as defined by the negative of the gradient. In our problem, gradient descent is used to update the parameters of the transformation, where the parameters refer to the coefficients of the rotational and translational matrices. All combinations of marginal increases and decreases of the parameter values are evaluated, and the combination that gives the least value of the TCM (Transformation Compatibility Measure) after point cloud transformation is used to transform the point cloud. The pseudo code for the entire registration method is given in Algorithm 1; a brief Python sketch of the TCM computation precedes it below.
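As a complement to Algorithm 1, the following is a minimal Python sketch of the TCM computation of Equations 1-10 and of the per-iteration selection of the cloud with the smallest $\tau$. It assumes numpy and scipy are available and is meant only to illustrate the measure, not to reproduce the exact implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def tcm(Pi, Pj):
    # Equations 2-4: inter-cluster term from minimum squared point-to-centroid distances.
    ci, cj = Pi.mean(axis=0), Pj.mean(axis=0)
    f = np.min(np.sum((Pi - cj) ** 2, axis=1)) + np.min(np.sum((Pj - ci) ** 2, axis=1))
    # Equations 7-9: intra-cluster term from minimum squared nearest-neighbour distances.
    def min_pair_dist_sq(P):
        d, _ = cKDTree(P).query(P, k=2)     # k=2: nearest neighbour other than the point itself
        return (d[:, 1] ** 2).min()
    g = min_pair_dist_sq(Pi) - min_pair_dist_sq(Pj)
    # Equation 10: size-based normalization.
    h = 1.0 / (len(Pi) * len(Pj))
    return f * g * h                        # Equation 1

def select_next_cloud(clouds, S):
    # Selection step of Algorithm 1: pick the unregistered cloud most compatible
    # with the current registered cloud S, i.e. the one with the smallest TCM value.
    values = [tcm(P, S) for P in clouds]
    q = int(np.argmin(values))
    return q, values[q]
```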
Input: Pre-processed set of point clouds $\Psi=\{P_{1},P_{2},..,P_{n}\}$
Output: Registered point cloud $S$
1. Construct the correspondence graph $G=(V,E)$, where $V$ is a set of vertices, one for each $P_{i}\in\Psi$, and $E$ consists of all the edges $e=(v_{i},v_{j})$ such that $correspondence(P_{i},P_{j})<threshold$;
2. Let $S=\{P_{parent}\in\Psi\mid v_{parent}$ is the maximum degree vertex in $V\}$;
3. Update $\Psi$, i.e., $\Psi=\Psi\setminus P_{parent}$;
4. Initialize $i=2$;
5. while $i\leq n$ do
6.   for each $P_{j}\in\Psi$ do
7.     Compute $tcm(P_{j},S)$;
8.   end for
9.   Let $P_{q}$ be the point cloud with minimum $tcm$;
10.   $[T,R]$ = Simplex($P_{q}$, $S$);
11.   $[T^{\prime},R^{\prime}]$ = Gradient_descent($T$, $R$);
12.   Transform $P_{q}$, i.e., $P_{q}^{\prime}$ = transform($P_{q},T^{\prime},R^{\prime}$);
13.   Update the registered point cloud $S=S\cup P_{q}^{\prime}$;
14.   Update the point set $\Psi=\Psi\setminus P_{q}$;
15.   Increment $i$;
16. end while
17. return $S$;

Algorithm 1 TCM-ICP($\Psi$)

Figure 3: Simple point cloud plots, in various colours, of Data set 1 before registration.

Figure 4: Gallery of registration results. The first two columns of the second row show the overlapped images of LiDAR scans obtained after registration of Data set 2 using TCM ICP, and ICP based and feature based registration methods, respectively. Note that the yellow scans in both results represent the output of TCM ICP and the red represents the other result.

## IV Experiments and Results

All the experiments were performed on a system with an Intel Xeon E5-2600 processor at 2.4 GHz and 32 GB of DDR4 RAM. The software used for point cloud processing was the Point Cloud Library (PCL) [19], the Computational Geometry Algorithms Library (CGAL) [20] and Cloud Compare [21]. The point cloud registration algorithms were tested with four LIDAR data sets.

1. Data Set 1: This data set was recorded with the intention of testing registration algorithm robustness in the context of navigation, with low sensor accuracy and motion during acquisition, and is localized in Clermont-Ferrand (France). It provides an urban scene consisting of buildings and trees.
2. Data Sets 2 & 4: These data sets are mobile LiDAR scenes of a road strip with buildings on either side.
3. Data Set 3: This data set provides an excellent mix of the conditions that a surveying and geo-spatial firm would be challenged with logistically. It is localized on the south shore of Lynx Lake in Prescott, United States, which is ideal because of the large amount of assets to map and an open tree canopy.

All the data sets stated above were registered using the conventional ICP based method [1], the feature based method [4] and the proposed TCM ICP. All four data sets consist of mobile LiDAR scans of various cities. The subsequent evaluation comprises qualitative and quantitative analyses of the outputs. These data sets contain sparse as well as dense areas. Further, the input scans contain isolated regions and overlapping areas. The data sets are therefore heterogeneous in nature and hence represent a good choice for testing the proposed registration method.

### IV-A Qualitative Analysis

Figure 3 is a plot of all the point clouds of Data set 1 before registration. Individual point clouds are shown in different colours. Figure 3 clearly shows that a pure 3D plot of the scanned points is not meaningful, with many misalignments between the individual scans. A simple visual inspection of Figure 4 suggests that the majority of the point clouds have been correctly placed in the reference coordinate system.
The first row of Figure 4 shows the outputs of feature based, ICP and TCM ICP registrations applied on Data set 1. Since the TCM ICP algorithm is not susceptible to isolated points, it identifies the overlapped regions and works well with both sparse and dense data sets. Hence the registration of Data Set 1 is done with minimum error compared to the other methods, as quantified in row 3 of Table I. The results given in the second row of Figure 4 show Data set 2 registered using various point cloud registration methods. Column 1, row 2 of Figure 4 is an overlapped image of 3D scans obtained by registration using the feature based method and TCM ICP. It is clear from this figure that some features, such as the buildings towards the end, were not properly registered by the feature based registration method. The second figure in the second row of Figure 4 shows the overlapped image of 3D scans obtained by registration using the ICP based method and TCM ICP. The points in red towards the outer edges of this figure represent outliers present in the ICP based registration result. These outliers have been successfully removed by TCM ICP. Similarly, the third and fourth rows of Figure 4 show the outputs of registration of Data Sets 3 and 4 using the feature based, ICP based and TCM ICP based registration methods, respectively. The registered 3D scan is that of a city street. All three registration methods produce outputs of comparable quality, but on a closer look, it can be seen that TCM ICP produced more accurate results, as evident from Section IV-B.

TABLE I: Performance of various registration algorithms on Data Set 1
Criteria | TCM ICP | ICP | Feature based
---|---|---|---
Time (min) | 3716 | 3700 | 3510
Iterations | 2940 | 3015 | 2081
RMS (50000 points) | 1.022 | 1.88 | 1.34
Error due to isolated points | 1.58 | 2.02 | 2.75
Error due to feature blurring | 2.85 | 3.02 | 2.95

### IV-B Quantitative Analysis

#### Criteria for Analysis

The registration results using different methods are compared using various criteria including time, number of iterations, RMS (Root Mean Square) value of 50000 points, error after adding isolated points, error after removing points, error after feature blurring, cloud-to-cloud distance and standard deviation. The sensitivity of the algorithm is measured by feature blurring, i.e., the process of adding additional points to conceal the features in the point cloud.

TABLE II: Performance of various registration algorithms on Data Set 2
Criteria | TCM ICP | ICP | Feature based
---|---|---|---
Time (min) | 4208 | 3542 | 4050
Iterations | 3528 | 4538 | 2938
RMS (50000 points) | 1.09 | 2.87 | 1.25
Error due to isolated points | 1.82 | 2.9 | 3.85
Error due to feature blurring | 3.74 | 4.53 | 3.22

#### Comparison

A tabular comparison of time, number of iterations, RMS (50000 points), error after isolated point addition and error after feature blurring of Data Set 1 is shown in Table I. TCM ICP showed better results compared to ICP based and feature based registration when exposed to noise, occlusion of features and feature blurring. However, TCM ICP lagged behind in terms of the time taken for completing the process of registration. The ICP and feature based algorithms are implemented using libraries like FLANN (Fast Library for Approximate Nearest Neighbours) [7], which consist of highly optimized heuristic implementations. Tables II-III report various performance attributes of the compared algorithms on Data Sets 2 and 3.
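The evaluation criteria used in these tables and in the comparisons below can be made concrete with a short sketch. The functions that follow are one plausible reading of the cloud-to-cloud distance, its mean and standard deviation, and the subsampled RMS; they are illustrative and not taken from the authors' code.

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud(P, Q):
    # Distance from every point of P to its nearest neighbour in Q.
    d, _ = cKDTree(Q).query(P)
    return d

def rms_error(P, Q, n_samples=50000, seed=0):
    # RMS of nearest-neighbour distances over a random subsample of P,
    # mirroring the "RMS (50000 points)" criterion.
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(P), size=min(n_samples, len(P)), replace=False)
    d = cloud_to_cloud(P[idx], Q)
    return np.sqrt(np.mean(d ** 2))

def mean_and_std(P, Q):
    # Mean cloud-to-cloud distance and its standard deviation,
    # the ground-truth criteria reported in Figures 5 and 6.
    d = cloud_to_cloud(P, Q)
    return d.mean(), d.std()

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.normal(size=(60000, 3))
    B = A + rng.normal(scale=0.01, size=A.shape)   # a slightly perturbed copy of A
    print(rms_error(A, B), mean_and_std(A, B))
```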
The results obtained are similar to those of Data Set 1, where TCM ICP outperforms the other two registration methods in terms of noise error, RMS and feature blurring, but trails behind in the case of computational time for the overall registration process. This is because of the simplex and gradient descent methods applied for optimization in Algorithm 1.

TABLE III: Performance of various registration algorithms on Data Set 3
Criteria | TCM ICP | ICP | Feature based
---|---|---|---
Time (min) | 5316 | 4339 | 4875
Iterations | 4250 | 4037 | 3405
RMS (50000 points) | 2.13 | 3.83 | 2.68
Error due to isolated points | 2.53 | 3.99 | 3.98
Error due to feature blurring | 3.85 | 4.30 | 3.96

Figure 5: Bar chart showing the performance comparison of various registration methods with respect to (a) mean cloud-to-cloud distances and standard deviations and (b) RMS values of Data Set 2.

Figure 6: Bar chart showing the performance comparison of various registration methods with respect to (a) mean cloud-to-cloud distances and standard deviations and (b) RMS values of Data Set 3.

#### Ground truth based Evaluation

Bar charts presented in Figures 5 and 6 show that TCM ICP performs better compared to ICP based and feature based registrations in terms of mean distance and standard deviation of the registered scans from the corresponding ground truths of Data Sets 2 and 3. The RMS value of TCM ICP is also lower than that of ICP based and feature based registration for both Data Sets 2 and 3, as shown in Figures 5 and 6. Though the RMS values of TCM ICP and feature based registration are comparable, TCM ICP slightly scores over the feature based registration (refer to Figures 5(b) & 6(b)). It is evident that the high quality registration generated by TCM ICP is mainly due to the preprocessing for outlier removal and a well designed point cloud selection method.

Figure 7: Graph showing the effect of (a) noise, (b) occlusion and (c) sensitivity of various registration algorithms on Data Set 2.

Figure 8: Graph showing the effect of (a) noise, (b) occlusion and (c) sensitivity of various registration algorithms on Data Set 3.

#### Robustness to Defect Laden Scans

To evaluate the performance of TCM ICP for defect laden LiDAR scans, we performed another experiment by manually introducing artifacts such as noise, occlusions and sparsity. Graphs plotted in Figures 7 and 8 represent the RMS values, or the sum of mean distance and standard deviation between clouds, against the percentage of noise added, the percentage of occlusion of features and the percentage of points removed (sensitivity). Addition of noise, occlusion of features and removal of points causes a monotonic increase in the RMS value for the test data, i.e., Data sets 2 and 3. It is clear that the RMS values of TCM ICP are always lower than those of ICP based and feature based registrations for the noise addition and point removal experiments. However, in the case of the sum of mean distance and standard deviation versus the percentage of occlusion of features, the curve representing feature based registration overtakes TCM ICP after a relatively high percentage of features are removed. This pattern is observed in both Data sets 2 and 3. ICP based registration fails to produce the expected result because ICP suffers from local minima issues and it always expects the clouds to have sufficient initial alignment before registration. This is the reason why a coarse initial alignment is provided before applying ICP [12].
## V Conclusion

We proposed an algorithm for registering multiple and partially overlapping LiDAR scans of an outdoor scene. The key enabler of the proposed multi-view registration is a geometrical and statistical measure called the transformation compatibility measure, which effectively identifies the point cloud most similar to the reference scan in each iteration of the algorithm. The method overcomes the problems of existing LiDAR registration methods such as the need for auxiliary data or targets, the requirement of sufficient overlapping areas, and difficulty in feature extraction and matching. The suggested method for point cloud registration shows promising results when tested against traditionally used methods using LiDAR scans of various outdoor scenes. The method also works effectively in removing outliers from the input point clouds, which in turn enhances the accuracy of the registered 3D scans. In the future, further steps are to be taken to integrate data sets from heterogeneous sources, e.g., airborne, mobile and terrestrial. The execution time of the method is a factor that needs to be looked into further. Using custom point cloud data structures and libraries for processing the point clouds would cut down the run time by a large factor. Data driven techniques such as geometric deep learning [13] and various fine tuning methods may be used to improve the performance of the registration process.

## Acknowledgment

We acknowledge the developers of PCL and Dr. Gilles Debunne, developer of QGLViewer, who provided invaluable insights in using the PCL and CGAL libraries.

## References

* [1] Wei Xin, Jiexin Pu, _An Improved ICP Algorithm for Point Cloud Registration_, International Conference on Computational and Information Sciences, 2010.
* [2] P. Dong, Q. Chen, _LiDAR Remote Sensing and Applications_, Boca Raton, Florida, United States: CRC Press, 2018.
* [3] L. Cheng, S. Chen, X. Liu, Hao Xu, Yang Wu, Manchun Li and Y. Chen, _Registration of Laser Scanning Point Clouds: A Review_, Sensors, vol. 18(5), 1641; doi:10.3390/s18051641, 2018.
* [4] J.J. Jaw and T.Y. Chuang, _Feature-Based Registration of terrestrial LIDAR point clouds_, International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 2010.
* [5] H.C. Mandhare, S.R. Idate, _A comparative study of cluster based outlier detection, distance based outlier detection and density based outlier detection techniques_, International Conference on Intelligent Computing and Control Systems, 2017.
* [6] J.J. Manoharan, S.H. Ganesh, J.G.R. Sathiaseelan, _Outlier detection using enhanced K-means Clustering Algorithm and weight based senter approach_, International Journal of Computer Science and Mobile Computing, 2016.
* [7] Point Cloud Library Documentation, http://docs.pointclouds.org. Last accessed 28 Jan 2019.
* [8] A. Myronenko, X. Song, _Point Set Registration: Coherent Point Drift_, IEEE Transactions on Software Engineering, 2010.
* [9] W. Forstner, K. Khoshelham, _Efficient and Accurate Registration of Point Clouds with Plane to Plane Correspondences_, IEEE International Conference on Computer Vision Workshops (ICCVW), 2017.
* [10] R. A. Brown, _Building a Balanced k-d Tree in O(kn log n) Time_, Journal of Computer Graphics Techniques, 2015.
* [11] F. S. Hillier, G. J. Lieberman, _Introduction to operations research_, McGraw Hill, New York, 2001.
* [12] Dirk Holz, Alexandru E. Ichim, Federico Tombari, Radu B.
Rusu, and Sven Behnke, _Registration With the Point Cloud Library PCL_, IEEE Robotics & Automation Magazine, Volume 22, Issue 4, pp. 110-124, December 2015.
* [13] Gil Elbaz, Tamar Avraham and Anath Fischer, _3D Point Cloud Registration for Localization using a Deep Neural Network Auto-Encoder_, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
* [14] Jiaolong Yang, Hongdong Li and Yunde Jia, _Go-ICP: Solving 3D Registration Efficiently and Globally Optimally_, 2013 IEEE International Conference on Computer Vision.
* [15] Jiaolong Yang, Hongdong Li, Dylan Campbell and Yunde Jia, _Go-ICP: A Globally Optimal Solution to 3D ICP Point-Set Registration_, IEEE Transactions on Pattern Analysis and Machine Intelligence (Volume: 38, Issue: 11, Nov. 1 2016).
* [16] Timothee Jost and Heinz Hugli, _Fast ICP Algorithms for the Registration of 3D Data_, Pattern Recognition - 24th DAGM Symposium, Zurich, Switzerland, pp. 91-99, September 16–18, 2002, Proceedings.
* [17] Sofien Bouaziz, Andrea Tagliasacchi and Mark Pauly, _Sparse Iterative Closest Point_, Volume 32 (2013), Number 5, Eurographics Symposium on Geometry Processing 2013.
* [18] Yasuhiro Aoki, Hunter Goforth, Rangaprasad Arun Srivatsan and Simon Lucey, _PointNetLK: Robust and Efficient Point Cloud Registration using PointNet_, Open Access Version (2019), Computer Vision Foundation.
* [19] Point Cloud Library, http://pointclouds.org/ [last accessed: 01-04-2020]
* [20] Computational Geometry Algorithms Library, https://www.cgal.org/ [last accessed: 12-15-2019]
* [21] Cloud Compare, https://www.danielgm.net/cc/ [last accessed: 01-04-2019]
${\left.{{\mathop{\rm cov}}\left({{\Phi_{m}}(x),{\Phi_{n}}(x)}\right)\,}\right|_{\,w(x)}}={\left.{E\left({{\Phi_{m}}(x){\Phi_{n}}(x)}\right)\,}\right|_{\,w(x)}}-{\left.{E\left({{\Phi_{m}}(x){\Phi_{0}}(x)}\right)\,}\right|_{\,w(x)}}{\left.{E\left({{\Phi_{n}}(x){\Phi_{0}}(x)}\right)\,}\right|_{\,w(x)}}\\\ ={\left.{E\left({\Phi_{n}^{2}(x)}\right)\,}\right|_{\,w(x)}}{\delta_{m,n}}={\left.{{\mathop{\rm cov}}\,\left({\theta(x)\,{g_{m}}(x)+{\beta_{m}},\,\,\theta(x)\,{g_{n}}(x)+{\beta_{n}}}\right)\,}\right|_{\,w(x)}}\\\ ={\left.{{\mathop{\rm cov}}\,\left({\theta(x)\,{g_{m}}(x),\theta(x)\,{g_{n}}(x)}\right)\,}\right|_{\,w(x)}}=\frac{{\int_{a}^{b}{w(x)\,{\theta^{2}}(x)\,dx}}}{{\int_{a}^{b}{w(x)\,dx}}}{\left.{{{{\mathop{\rm cov}}}_{1}}\left({{g_{m}}(x),\,{g_{n}}(x);\frac{1}{{\theta(x)}}}\right)\,}\right|_{\,w(x)\,{\theta^{2}}(x)}},$ leading to the following biorthogonality relation according to subsection 4.1, ${\left.{E\left({{g_{m}}(x)\left({{g_{n}}(x)-\frac{{E\left({{g_{n}}(x)/\theta(x)}\right)}}{{E\left({1/{\theta^{2}}(x)}\right)}}\frac{1}{{\theta(x)}}}\right)}\right)\,}\right|_{\,w(x)\,{\theta^{2}}(x)}}=\frac{{\int_{a}^{b}{w(x)\,dx}}}{{\int_{a}^{b}{w(x)\,{\theta^{2}}(x)\,dx}}}{\left.{E\left({\Phi_{n}^{2}(x)}\right)\,}\right|_{\,w(x)}}{\delta_{m,n}}.$ Also, if ${\Phi_{n}}(x)$ satisfies a second-order equation of the form $a(x)\,{\Phi^{\prime\prime}_{n}}(x)+b(x)\,{\Phi^{\prime}_{n}}(x)+{\lambda_{n}}u(x)\,{\Phi_{n}}(x)=0,$ then ${\\{{g_{n}}(x)\\}_{n=0}}$ will satisfy the equation (8.3) $\left({a(x)\theta(x)}\right)\,{g^{\prime\prime}_{n}}(x)+\left({2a(x)\theta^{\prime}(x)+b(x)\theta(x)}\right)\,{g^{\prime}_{n}}(x)\\\ +\left({a(x)\theta^{\prime\prime}(x)+b(x)\theta^{\prime}(x)+{\lambda_{n}}u(x)\theta(x)}\right)\,{g_{n}}(x)=-{\lambda_{n}}{\beta_{n}}u(x).$ In this sense, the second point is that the relation $\frac{{\,\int_{\,a}^{b}{w(x)\,\theta(x)\,{g_{n}}(x)\,dx}}}{{\,\int_{\,a}^{b}{w(x)\,dx}}}=\frac{{\,\int_{\,a}^{b}{w(x)\,\left({{\Phi_{n}}(x)-{\beta_{n}}}\right)\,dx}}}{{\,\int_{\,a}^{b}{w(x)\,dx}}}=-{\beta_{n}},$ will change equation (8.3) to $\left({a(x)\theta(x)}\right)\,{g^{\prime\prime}_{n}}(x)+\left({2a(x)\theta^{\prime}(x)+b(x)\theta(x)}\right)\,{g^{\prime}_{n}}(x)\\\ +\left({a(x)\theta^{\prime\prime}(x)+b(x)\theta^{\prime}(x)+{\lambda_{n}}u(x)\theta(x)}\right)\,{g_{n}}(x)={\lambda_{n}}u(x)\frac{{\,\int_{\,a}^{b}{w(x)\,\theta(x)\,{g_{n}}(x)\,dx}}}{{\,\int_{\,a}^{b}{w(x)\,dx}}},$ which is a particular case of equation (5.3) in theorem 5.2. Now, noting the two above points, for a real parameter $\lambda$ let ${\left\\{{{P_{n}}(x;\lambda)=\sum\limits_{k=0}^{n}{a_{k}^{(n)}{{(x-\lambda)}^{k}}}}\right\\}_{n=0}}$ be a sequence of polynomials orthogonal with respect to $w(x)$ on $[a,b]$ as (8.4) ${\left.{E\left({{P_{m}}(x;\lambda){P_{n}}(x;\lambda)}\right)\,}\right|_{w(x)}}={\left.{E\left({P_{n}^{2}(x;\lambda)}\right)\,}\right|_{w(x)}}\,{\delta_{m,n}}.$ It can be verified that the sequence (8.5) ${Q_{n}}(x;\lambda)=\frac{{{P_{n+1}}(x;\lambda)-{P_{n+1}}(\lambda;\lambda)}}{{x-\lambda}}=\sum\limits_{k=0}^{n}{a_{k+1}^{(n+1)}{{(x-\lambda)}^{k}}},$ is also a polynomial of degree $n$.
With reference to (8.2) and (8.4), the following equalities hold for the polynomial sequence (8.5), (8.6) $\displaystyle\frac{{\int_{a}^{b}{w(x)\,{{(x-\lambda)}^{2}}\,dx}}}{{\int_{a}^{b}{w(x)\,dx}}}{\left.{{{{\mathop{\rm cov}}}_{1}}\left({{Q_{m}}(x;\lambda),\,{Q_{n}}(x;\lambda);\frac{1}{{x-\lambda}}}\right)\,}\right|_{\,w(x)\,{{(x-\lambda)}^{2}}}}$ $\displaystyle\quad={\left.{{\mathop{\rm cov}}\left({(x-\lambda){Q_{m}}(x;\lambda),(x-\lambda){Q_{n}}(x;\lambda)}\right)\,}\right|_{\,w(x)}}$ $\displaystyle\quad={\left.{{\mathop{\rm cov}}\left({{P_{m+1}}(x;\lambda)-{P_{m+1}}(\lambda;\lambda),{P_{n+1}}(x;\lambda)-{P_{n+1}}(\lambda;\lambda)}\right)\,}\right|_{\,w(x)}}$ $\displaystyle\quad={\left.{{\mathop{\rm cov}}\left({{P_{m+1}}(x;\lambda),{P_{n+1}}(x;\lambda)}\right)\,}\right|_{\,w(x)}}={\left.{E\left({P_{n+1}^{2}(x;\lambda)}\right)\,}\right|_{\,w(x)}}{\delta_{m,n}}.$ ###### Corollary 8.1. From (8.6), the relation ${\left.{{{{\mathop{\rm cov}}}_{1}}\left({{Q_{m}}(x;\lambda),\,{Q_{n}}(x;\lambda);\frac{1}{{x-\lambda}}}\right)\,}\right|_{\,w(x)\,{{(x-\lambda)}^{2}}}}=\frac{{\int_{a}^{b}{w(x)\,P_{n+1}^{2}(x;\lambda)\,dx}}}{{\int_{a}^{b}{w(x)\,{{(x-\lambda)}^{2}}\,dx}}}\,\,{\delta_{m,n}},$ shows that the polynomial set ${\left\\{{{Q_{n}}(x;\lambda)}\right\\}_{n=0}}$ is a complete uncorrelated sequence with respect to the fixed function $z(x)=\frac{1}{{x-\lambda}}$ and the probability function ${P_{r}}(X=x)=\frac{{w(x)\,{{(x-\lambda)}^{2}}}}{{\int_{\,a}^{b}{w(x)\,{{(x-\lambda)}^{2}}dx}}}$ respectively. Also, relation (8.6) shows that the two defined sequences ${\left\\{{{P_{n}}(x;\lambda)=\sum\limits_{k=0}^{n}{a_{k}^{(n)}{{(x-\lambda)}^{k}}}}\right\\}_{n=0}}$ and ${\left\\{{{Q_{n}}(x;\lambda)=\sum\limits_{k=0}^{n}{a_{k+1}^{(n+1)}{{(x-\lambda)}^{k}}}}\right\\}_{n=0}}$ are biorthogonal with respect to the weight function $(x-\lambda)\,w(x)$ on $[a,b]$, as we have ${\left.{{{{\mathop{\rm cov}}}_{1}}\left({{Q_{m}}(x;\lambda),\,{Q_{n}}(x;\lambda);\frac{1}{{x-\lambda}}}\right)\,}\right|_{\,w(x)\,{{(x-\lambda)}^{2}}}}\\\ =E{\left.{\left({{Q_{m}}(x;\lambda)\left({{Q_{n}}(x;\lambda)-\frac{{E\left({{Q_{n}}(x;\lambda)/(x-\lambda)}\right)}}{{E\left({1/{{(x-\lambda)}^{2}}}\right)}}\frac{1}{{x-\lambda}}}\right)}\right)\,}\right|_{\,w(x)\,{{(x-\lambda)}^{2}}}},$ where ${Q_{n}}(x;\lambda)-\frac{{E\left({{Q_{n}}(x;\lambda)/(x-\lambda)}\right)}}{{E\left({1/{{(x-\lambda)}^{2}}}\right)}}\frac{1}{{x-\lambda}}=\frac{{{P_{n+1}}(x;\lambda)}}{{x-\lambda}}.$ There is a direct proof for this conclusion, too. 
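Before turning to the direct proof, Corollary 8.1 can be checked numerically in a concrete case. The sketch below (an illustration only, not part of the argument) takes $w(x)=1$ on $[-1,1]$, i.e. the Legendre polynomials, and $\lambda=1$; multiplying both sides of the corollary by $\int_{a}^{b}{w(x)\,{{(x-\lambda)}^{2}}\,dx}$ and using the identity above, the statement to verify becomes $\int_{a}^{b}{w(x)\,(x-\lambda)\,{Q_{m}}(x;\lambda)\,{P_{n+1}}(x;\lambda)\,dx}=\left({\int_{a}^{b}{w(x)\,P_{n+1}^{2}(x;\lambda)\,dx}}\right){\delta_{m,n}}.$

```python
import numpy as np
from numpy.polynomial import legendre as L

lam = 1.0                  # the parameter lambda; w(x) = 1 on [-1, 1] (Legendre case)
x, wq = L.leggauss(60)     # Gauss-Legendre quadrature nodes and weights

def P(n, t):
    # Legendre polynomial P_n(t), orthogonal with respect to w(x) = 1 on [-1, 1]
    c = np.zeros(n + 1)
    c[n] = 1.0
    return L.legval(t, c)

def Q(n, t):
    # Q_n(t; lam) = (P_{n+1}(t) - P_{n+1}(lam)) / (t - lam), as in (8.5)
    return (P(n + 1, t) - P(n + 1, lam)) / (t - lam)

N = 5
# G[m, n] = integral of (x - lam) * Q_m(x; lam) * P_{n+1}(x; lam) over [-1, 1]
G = np.array([[np.sum(wq * (x - lam) * Q(m, x) * P(n + 1, x)) for n in range(N)]
              for m in range(N)])
norms = np.array([np.sum(wq * P(n + 1, x) ** 2) for n in range(N)])
print(np.allclose(G, np.diag(norms)))   # True: diagonal with entries ||P_{n+1}||^2
```

The same check can be repeated for any weight function for which a quadrature rule is available.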
If we suppose ${\left\langle{{P_{m}}(x;\lambda),{P_{n}}(x;\lambda)}\right\rangle_{w(x)}}={\left\langle{{P_{n}}(x;\lambda),{P_{n}}(x;\lambda)}\right\rangle_{w(x)}}{\delta_{m,n}},$ then for every $m\in\mathbb{N}$ we have (8.7) $\displaystyle{\left\langle{{Q_{n}}(x;\lambda),{P_{m}}(x;\lambda)}\right\rangle_{(x-\lambda)\,w(x)}}={\left\langle{\frac{{{P_{n+1}}(x;\lambda)-{P_{n+1}}(\lambda;\lambda)}}{{x-\lambda}},{P_{m}}(x;\lambda)}\right\rangle_{(x-\lambda)\,w(x)}}$ $\displaystyle\qquad\qquad={\left\langle{{P_{n+1}}(x;\lambda),{P_{m}}(x;\lambda)}\right\rangle_{w(x)}}-{P_{n+1}}(\lambda;\lambda){\left\langle{1,{P_{m}}(x;\lambda)}\right\rangle_{w(x)}}$ $\displaystyle\qquad\qquad={\left\langle{{P_{n+1}}(x;\lambda),{P_{n+1}}(x;\lambda)}\right\rangle_{w(x)}}{\delta_{n+1,m}}.$ Using (8.7), one can create a biorthogonal approximation (or expansion) for any appropriate function, say $f(x)$, in terms of the uncorrelated polynomials ${\left\\{{{Q_{k}}(x;\lambda)}\right\\}_{k=0}}$ as follows $f(x)\cong\sum\limits_{k=0}^{n\to\infty}{{c_{k}}{Q_{k}}(x;\lambda)}=\sum\limits_{k=0}^{n\to\infty}{\frac{{{{\left\langle{f(x),{P_{k+1}}(x;\lambda)}\right\rangle}_{(x-\lambda)\,w(x)}}}}{{{{\left\langle{{P_{k+1}}(x;\lambda),{P_{k+1}}(x;\lambda)}\right\rangle}_{w(x)}}}}\frac{{{P_{k+1}}(x;\lambda)-{P_{k+1}}(\lambda;\lambda)}}{{x-\lambda}}},$ whose error is clearly minimized with respect to the fixed function $z(x)=\frac{1}{{x-\lambda}}$ in the sense of least 1-variances. In the sequel, since ${\left\\{{{P_{n}}(x;\lambda)}\right\\}_{n=0}}$ was assumed to be orthogonal, its monic type must satisfy a three term recurrence relation [4] of the form (8.8) ${\bar{P}_{n+1}}(x;\lambda)=(x-{B_{n}}){\bar{P}_{n}}(x;\lambda)-{C_{n}}{\bar{P}_{n-1}}(x;\lambda)\,\,\,\,\,\,\text{with}\,\,\,\,\,{\bar{P}_{0}}(x;\lambda)=1\,\,\,\,{\rm{and}}\,\,\,{\bar{P}_{1}}(x;\lambda)=x-{B_{1}}.$ After doing some computations in hand, substituting (8.5) into (8.8) gives (8.9) ${\bar{Q}_{n+1}}(x;\lambda)=(x-{B_{n+1}}){\bar{Q}_{n}}(x;\lambda)-{C_{n+1}}{\bar{Q}_{n-1}}(x;\lambda)+{\bar{P}_{n+1}}(\lambda;\lambda)\,\,\,\,\,\,\text{with}\,\,\,\,\,{\bar{Q}_{0}}(x;\lambda)=1.$ This type of recurrence relation in (8.9) helps us obtain an analogue of the well-known Christoffel-Darboux identity [4] as follows. 
We have respectively in (8.9), $\left({x{{\bar{Q}}_{n}}(x;\lambda)+{{\bar{P}}_{n+1}}(\lambda;\lambda)}\right){{\bar{Q}}_{n}}(t;\lambda)={{\bar{Q}}_{n+1}}(x;\lambda){{\bar{Q}}_{n}}(t;\lambda)\\\ +{B_{n+1}}{{\bar{Q}}_{n}}(x;\lambda){{\bar{Q}}_{n}}(t;\lambda)+{C_{n+1}}{{\bar{Q}}_{n-1}}(x;\lambda){{\bar{Q}}_{n}}(t;\lambda),$ and $\left({t\,{{\bar{Q}}_{n}}(t;\lambda)+{{\bar{P}}_{n+1}}(\lambda;\lambda)}\right){{\bar{Q}}_{n}}(x;\lambda)={{\bar{Q}}_{n+1}}(t;\lambda){{\bar{Q}}_{n}}(x;\lambda)\\\ +{B_{n+1}}{{\bar{Q}}_{n}}(t;\lambda){{\bar{Q}}_{n}}(x;\lambda)+{C_{n+1}}{{\bar{Q}}_{n-1}}(t;\lambda){{\bar{Q}}_{n}}(x;\lambda).$ Therefore, by defining the kernel ${G_{n}}(x,t)=\frac{1}{{\prod\limits_{j=1}^{n+1}{{C_{j}}}}}\frac{{{{\bar{Q}}_{n+1}}(x;\lambda){{\bar{Q}}_{n}}(t;\lambda)-{{\bar{Q}}_{n+1}}(t;\lambda){{\bar{Q}}_{n}}(x;\lambda)}}{{x-t}},$ we eventually obtain (8.10) $\sum\limits_{n=0}^{m}{\frac{1}{{\prod\limits_{j=1}^{n+1}{{C_{j}}}}}\left({{{\bar{Q}}_{n}}(x;\lambda){{\bar{Q}}_{n}}(t;\lambda)-{{\bar{P}}_{n+1}}(\lambda;\lambda)\frac{{{{\bar{Q}}_{n}}(x;\lambda)-{{\bar{Q}}_{n}}(t;\lambda)}}{{x-t}}}\right)}\\\ =\sum\limits_{n=0}^{m}{{G_{n}}(x,t)-{G_{n-1}}(x,t)}=\frac{1}{{\prod\limits_{j=1}^{m+1}{{C_{j}}}}}\frac{{{{\bar{Q}}_{m+1}}(x;\lambda){{\bar{Q}}_{m}}(t;\lambda)-{{\bar{Q}}_{m+1}}(t;\lambda){{\bar{Q}}_{m}}(x;\lambda)}}{{x-t}}.$ Let us introduce two uncorrelated polynomials of hypergeometric type here which are built based on Jacobi and Laguerre polynomials and then apply all above-mentioned results on them. ### 8.1. An uncorrelated sequence of hypergeometric polynomials of ${}_{3}F_{2}$ type As is known, the monic Jacobi polynomials [21] (8.11) $\bar{P}_{n}^{(\alpha,\beta)}(x)=\frac{{{2^{n}}{{(\alpha+1)}_{n}}}}{{{{(n+\alpha+\beta+1)}_{n}}}}{}_{2}{F_{1}}\left({\left.{\begin{array}[]{*{20}{c}}{-n,\,\,n+\alpha+\beta+1}\\\ {\alpha+1}\end{array}\,}\right|\,\frac{{1-x}}{2}}\right),$ satisfy the equation (8.12) $(1-{x^{2}})\frac{{{d^{2}}}}{{d{x^{2}}}}\bar{P}_{n}^{(\alpha,\beta)}(x)-\left({(\alpha+\beta+2)x+\alpha-\beta}\right)\frac{d}{{dx}}\bar{P}_{n}^{(\alpha,\beta)}(x)+n(n+\alpha+\beta+1)\bar{P}_{n}^{(\alpha,\beta)}(x)=0\,,$ and are orthogonal with respect to the weight function ${(1-x)^{\alpha}}{(1+x)^{\beta}}$ on $[-1,1]$ as (8.13) $\int_{-1}^{1}{{{(1-x)}^{\alpha}}{{(1+x)}^{\beta}}\bar{P}_{m}^{(\alpha,\beta)}(x)\bar{P}_{n}^{(\alpha,\beta)}(x)\,dx}\\\ =n!\,{2^{2n+\alpha+\beta+1}}\frac{{\Gamma(n+\alpha+\beta+1)\Gamma(n+\alpha+1)\Gamma(n+\beta+1)}}{{\Gamma(2n+\alpha+\beta+1)\Gamma(2n+\alpha+\beta+2)}}{\delta_{n,m}}\,.$ Since $\bar{P}_{n}^{(\alpha,\beta)}(-x)={(-1)^{n}}\bar{P}_{n}^{(\beta,\alpha)}(x)$, another representation is as (8.14) $\bar{P}_{n}^{(\alpha,\beta)}(x)=\frac{{{{(-1)}^{n}}{2^{n}}{{(\beta+1)}_{n}}}}{{{{(n+\alpha+\beta+1)}_{n}}}}{}_{2}{F_{1}}\left({\left.{\begin{array}[]{*{20}{c}}{-n,\,\,n+\alpha+\beta+1}\\\ {\beta+1}\end{array}\,}\right|\,\frac{{1+x}}{2}}\right).$ Also, they satisfy a three term recurrence relation in the form (8.15) $\bar{P}_{n+1}^{(\alpha,\beta)}(x)=\left({x-\frac{{{\beta^{2}}-{\alpha^{2}}}}{{(2n+\alpha+\beta)(2n+\alpha+\beta+2)}}}\right)\,\bar{P}_{n}^{(\alpha,\beta)}(x)\\\ \quad-4\frac{{n(n+\alpha)(n+\beta)(n+\alpha+\beta)}}{{(2n+\alpha+\beta+1){{(2n+\alpha+\beta)}^{2}}(2n+\alpha+\beta-1)}}\bar{P}_{n-1}^{(\alpha,\beta)}(x)\,.$ Representations (8.11) and (8.14) show that there are two specific values for $\lambda$ in (8.5), i.e. $\lambda=1$ and $\lambda=-1$. 
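As a quick numerical sanity check of the monic normalization in (8.11) and of the orthogonality relation (8.13), the sketch below evaluates the monic Jacobi polynomials through standard library routines and compares the resulting Gram matrix against the norm constants of (8.13); the values of $\alpha$ and $\beta$ are arbitrary illustrative choices.

```python
import numpy as np
from math import factorial
from scipy.special import roots_jacobi, eval_jacobi, gammaln, poch

alpha, beta = 0.5, -0.3                 # any alpha, beta > -1
x, w = roots_jacobi(60, alpha, beta)    # Gauss-Jacobi rule for (1-x)^alpha (1+x)^beta

def monic_jacobi(n, t):
    # Monic Jacobi polynomial of (8.11): the standard Jacobi polynomial divided
    # by its leading coefficient k_n = (n+alpha+beta+1)_n / (2^n n!).
    k = poch(n + alpha + beta + 1, n) / (2.0 ** n * factorial(n))
    return eval_jacobi(n, alpha, beta, t) / k

def norm_sq(n):
    # Right-hand side of the orthogonality relation (8.13), in log form for stability
    ln = (gammaln(n + 1) + (2 * n + alpha + beta + 1) * np.log(2.0)
          + gammaln(n + alpha + beta + 1) + gammaln(n + alpha + 1) + gammaln(n + beta + 1)
          - gammaln(2 * n + alpha + beta + 1) - gammaln(2 * n + alpha + beta + 2))
    return np.exp(ln)

N = 6
G = np.array([[np.sum(w * monic_jacobi(m, x) * monic_jacobi(n, x)) for n in range(N)]
              for m in range(N)])
print(np.allclose(G, np.diag([norm_sq(n) for n in range(N)])))   # True
```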
Noting that $\bar{P}_{n}^{(\alpha,\beta)}(1)=\frac{{{2^{n}}{{(\alpha+1)}_{n}}}}{{{{(n+\alpha+\beta+1)}_{n}}}},$ the first kind of uncorrelated polynomials is defined as (8.16) $\displaystyle\bar{Q}_{n}^{(\alpha,\beta)}(x;1)$ $\displaystyle=\frac{{\bar{P}_{n+1}^{(\alpha,\beta)}(1)-\bar{P}_{n+1}^{(\alpha,\beta)}(x)}}{{1-x}}$ (8.19) $\displaystyle=\frac{{(n+1)\,{2^{n}}{{(\alpha+2)}_{n}}}}{{{{(n+\alpha+\beta+3)}_{n}}}}{}_{3}{F_{2}}\left({\left.{\begin{array}[]{*{20}{c}}{-n,\,\,n+\alpha+\beta+3,\,\,1}\\\ {\alpha+2,\,\,2}\end{array}\,}\right|\,\frac{{1-x}}{2}}\right).$ Also, for $\lambda=-1$ the second kind is defined by (8.20) $\bar{Q}_{n}^{(\alpha,\beta)}(x;-1)=\frac{{\bar{P}_{n+1}^{(\alpha,\beta)}(x)-\bar{P}_{n+1}^{(\alpha,\beta)}(-1)}}{{x+1}}=\frac{{{{(-1)}^{n+1}}\bar{P}_{n+1}^{(\beta,\alpha)}(-x)-{{(-1)}^{n+1}}\bar{P}_{n+1}^{(\beta,\alpha)}(1)}}{{1-(-x)}}\\\ ={(-1)^{n}}\bar{Q}_{n}^{(\beta,\alpha)}(-x;1)=\frac{{(n+1){{(-2)}^{n}}{{(\beta+2)}_{n}}}}{{{{(n+\alpha+\beta+3)}_{n}}}}{}_{3}{F_{2}}\left({\left.{\begin{array}[]{*{20}{c}}{-n,\,\,n+\alpha+\beta+3,\,\,1}\\\ {\beta+2,\,\,2}\end{array}\,}\right|\,\frac{{1+x}}{2}}\right).$ Relation (8.20) shows that we shall deal with only one value, i.e. $\lambda=1$. If in (8.16), $\bar{P}_{n+1}^{(\alpha,\beta)}(x)=(x-1)\,\bar{Q}_{n}^{(\alpha,\beta)}(x;1)+\bar{P}_{n+1}^{(\alpha,\beta)}(1)$ is substituted into the differential equation (8.12), we obtain (8.21) ${(1-x)^{2}}(1+x)\frac{{{d^{2}}}}{{d{x^{2}}}}\bar{Q}_{n}^{(\alpha,\beta)}(x;1)-(1-x)\left({(\alpha+\beta+4)x+\alpha-\beta+2}\right)\frac{d}{{dx}}\,\bar{Q}_{n}^{(\alpha,\beta)}(x;1)\\\ +\left({n(n+\alpha+\beta+3)(1-x)+2\alpha+2}\right)\,\bar{Q}_{n}^{(\alpha,\beta)}(x;1)\\\ =(n+1)(n+\alpha+\beta+2)\bar{P}_{n+1}^{(\alpha,\beta)}(1)=\frac{{(n+1)(n+\alpha+\beta+2){2^{n}}{{(\alpha+1)}_{n}}}}{{{{(n+\alpha+\beta+1)}_{n}}}}.$ On the other hand, $\int_{-1}^{1}{{{(1-x)}^{\alpha+1}}{{(1+x)}^{\beta}}\bar{Q}_{n}^{(\alpha,\beta)}(x;1)\,dx}=\int_{-1}^{1}{{{(1-x)}^{\alpha}}{{(1+x)}^{\beta}}\left({\bar{P}_{n+1}^{(\alpha,\beta)}(1)-\bar{P}_{n+1}^{(\alpha,\beta)}(x)}\right)\,dx}\\\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,=\bar{P}_{n+1}^{(\alpha,\beta)}(1)\,\int_{-1}^{1}{{{(1-x)}^{\alpha}}{{(1+x)}^{\beta}}dx}=\bar{P}_{n+1}^{(\alpha,\beta)}(1)\,{2^{\alpha+\beta+1}}\frac{{\Gamma(\alpha+1)\Gamma(\beta+1)}}{{\Gamma(\alpha+\beta+2)}},$ changes equation (8.21) to (8.22) ${(1-x)^{2}}(1+x)\frac{{{d^{2}}}}{{d{x^{2}}}}\bar{Q}_{n}^{(\alpha,\beta)}(x;1)-(1-x)\left({(\alpha+\beta+4)x+\alpha-\beta+2}\right)\frac{d}{{dx}}\,\bar{Q}_{n}^{(\alpha,\beta)}(x;1)\\\ +\left({\gamma_{n}^{*}(1-x)+2\alpha+2}\right)\,\bar{Q}_{n}^{(\alpha,\beta)}(x;1)\\\ =\frac{{\Gamma(\alpha+\beta+2)(\gamma_{n}^{*}+\alpha+\beta+2)}}{{\,{2^{\alpha+\beta+1}}\Gamma(\alpha+1)\Gamma(\beta+1)}}\int_{-1}^{1}{{{(1-x)}^{\alpha+1}}{{(1+x)}^{\beta}}\bar{Q}_{n}^{(\alpha,\beta)}(x;1)\,dx}\,,$ in which $\gamma_{n}^{*}=n(n+\alpha+\beta+3)$. ###### Theorem 8.2. For every $\alpha,\,\beta>-1$, we have ${\left.{{{{\mathop{\rm cov}}}_{1}}\left({\bar{Q}_{m}^{(\alpha,\beta)}(x;1),\,\bar{Q}_{n}^{(\alpha,\beta)}(x;1);\frac{1}{{1-x}}}\right)\,}\right|_{{{(1-x)}^{\alpha+2}}{{(1+x)}^{\beta}}}}\\\ =n!\,{2^{2n-2}}\frac{{\Gamma(\alpha+\beta+4)\Gamma(n+\alpha+\beta+1)\Gamma(n+\alpha+1)\Gamma(n+\beta+1)}}{{\Gamma(\alpha+3)\Gamma(\beta+1)\Gamma(2n+\alpha+\beta+1)\Gamma(2n+\alpha+\beta+2)}}\,{\delta_{m,n}}.$ ###### Proof. 
We would like to prove this theorem via differential equation (8.22) so that if it is written in the self adjoint form $\frac{d}{{dx}}\left({{{(1-x)}^{\alpha+3}}{{(1+x)}^{\beta+1}}\frac{d}{{dx}}\bar{Q}_{n}^{(\alpha,\beta)}(x;1)}\right)\\\ +\left({\gamma_{n}^{*}{{(1-x)}^{\alpha+2}}{{(1+x)}^{\beta}}+\left({2\alpha+2}\right){{(1-x)}^{\alpha+1}}{{(1+x)}^{\beta}}}\right)\,\bar{Q}_{n}^{(\alpha,\beta)}(x;1)\\\ =\frac{{\Gamma(\alpha+\beta+2)(\gamma_{n}^{*}+\alpha+\beta+2)}}{{\,{2^{\alpha+\beta+1}}\Gamma(\alpha+1)\Gamma(\beta+1)}}\left({\int_{-1}^{1}{{{(1-x)}^{\alpha+1}}{{(1+x)}^{\beta}}\bar{Q}_{n}^{(\alpha,\beta)}(x;1)\,dx}}\right)\,{(1-x)^{\alpha+1}}{(1+x)^{\beta}},$ then $\left[{{{(1-x)}^{\alpha+3}}{{(1+x)}^{\beta+1}}\left({\bar{Q}_{m}^{(\alpha,\beta)}(x;1)\frac{d}{{dx}}\bar{Q}_{n}^{(\alpha,\beta)}(x;1)-\bar{Q}_{n}^{(\alpha,\beta)}(x;1)\frac{d}{{dx}}\bar{Q}_{m}^{(\alpha,\beta)}(x;1)}\right)}\right]_{\,-1}^{\,1}\\\ +\left({\gamma_{n}^{*}-\gamma_{m}^{*}}\right)\int_{\,-1}^{1}{{{(1-x)}^{\alpha+2}}{{(1+x)}^{\beta}}\,\bar{Q}_{n}^{(\alpha,\beta)}(x;1)\,\bar{Q}_{m}^{(\alpha,\beta)}(x;1)\,dx}\\\ =\frac{{\Gamma(\alpha+\beta+2)}}{{\,{2^{\alpha+\beta+1}}\Gamma(\alpha+1)\Gamma(\beta+1)}}\left({\gamma_{n}^{*}-\gamma_{m}^{*}}\right)\\\ \,\,\,\,\,\,\,\,\,\,\,\,\,\times\left({\int_{-1}^{1}{{{(1-x)}^{\alpha+1}}{{(1+x)}^{\beta}}\bar{Q}_{n}^{(\alpha,\beta)}(x;1)\,dx}}\right)\left({\int_{-1}^{1}{{{(1-x)}^{\alpha+1}}{{(1+x)}^{\beta}}\bar{Q}_{m}^{(\alpha,\beta)}(x;1)\,dx}}\right),$ leading to the result (8.23) $\int_{\,-1}^{1}{{{(1-x)}^{\alpha+2}}{{(1+x)}^{\beta}}\,\bar{Q}_{n}^{(\alpha,\beta)}(x;1)\,\bar{Q}_{m}^{(\alpha,\beta)}(x;1)\,dx}\\\ =\frac{{\left({\int_{-1}^{1}{{{(1-x)}^{\alpha+1}}{{(1+x)}^{\beta}}\bar{Q}_{n}^{(\alpha,\beta)}(x;1)\,dx}}\right)\left({\int_{-1}^{1}{{{(1-x)}^{\alpha+1}}{{(1+x)}^{\beta}}\bar{Q}_{m}^{(\alpha,\beta)}(x;1)\,dx}}\right)}}{{\int_{-1}^{1}{{{(1-x)}^{\alpha}}{{(1+x)}^{\beta}}\,dx}}}\\\ =\bar{P}_{n+1}^{(\alpha,\beta)}(1)\,\bar{P}_{m+1}^{(\alpha,\beta)}(1)\int_{-1}^{1}{{{(1-x)}^{\alpha}}{{(1+x)}^{\beta}}dx}\\\ ={2^{n+m+\alpha+\beta+1}}\frac{{\Gamma(\alpha+1)\Gamma(\beta+1)}}{{\Gamma(\alpha+\beta+2)}}\frac{{{{(\alpha+1)}_{n}}{{(\alpha+1)}_{m}}}}{{{{(n+\alpha+\beta+1)}_{n}}{{(m+\alpha+\beta+1)}_{m}}}}\Leftrightarrow n\neq m,$ which proves the first part. To obtain the variance value, i.e. for $n=m$ in (8.23), it is enough to refer to corollary 8.1 and then apply relation (8.13). 
∎ Since in (8.15), ${C_{j}}=\frac{1}{4}\frac{{j(j+\alpha)(j+\beta)(j+\alpha+\beta)}}{{(j+\frac{{\alpha+\beta+1}}{2}){{(j+\frac{{\alpha+\beta}}{2})}^{2}}(j+\frac{{\alpha+\beta-1}}{2})}},$ we have $\prod\limits_{j=1}^{m+1}{{C_{j}}}=\frac{1}{{{4^{m+1}}}}\frac{{{{(1)}_{m+1}}{{(\alpha+1)}_{m+1}}{{(\beta+1)}_{m+1}}{{(\alpha+\beta+1)}_{m+1}}}}{{{{(\frac{{\alpha+\beta+3}}{2})}_{m+1}}(\frac{{\alpha+\beta+2}}{2})_{m+1}^{2}{{(\frac{{\alpha+\beta+1}}{2})}_{m+1}}}}.$ Therefore, the identity (8.10) for the polynomials (8.16) takes the form $\sum\limits_{n=0}^{m}\begin{array}[]{l}\frac{{{4^{n+1}}{{(\frac{{\alpha+\beta+3}}{2})}_{n+1}}(\frac{{\alpha+\beta+2}}{2})_{n+1}^{2}{{(\frac{{\alpha+\beta+1}}{2})}_{n+1}}}}{{(n+1)!\,\,{{(\alpha+1)}_{n+1}}{{(\beta+1)}_{n+1}}{{(\alpha+\beta+1)}_{n+1}}}}\\\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,\times\left({\bar{Q}_{n}^{(\alpha,\beta)}(x;1)\,\bar{Q}_{n}^{(\alpha,\beta)}(t;1)\,-\frac{{{2^{n+1}}{{(\alpha+1)}_{n+1}}}}{{{{(n+\alpha+\beta+2)}_{n+1}}}}\frac{{\bar{Q}_{n}^{(\alpha,\beta)}(x;1)\,-\bar{Q}_{n}^{(\alpha,\beta)}(t;1)}}{{x-t}}}\right)\end{array}\\\ =\frac{{{4^{m+1}}{{(\frac{{\alpha+\beta+3}}{2})}_{m+1}}(\frac{{\alpha+\beta+2}}{2})_{m+1}^{2}{{(\frac{{\alpha+\beta+1}}{2})}_{m+1}}}}{{(m+1)!\,\,{{(\alpha+1)}_{m+1}}{{(\beta+1)}_{m+1}}{{(\alpha+\beta+1)}_{m+1}}}}\\\ \times\,\,\frac{{\bar{Q}_{m+1}^{(\alpha,\beta)}(x;1)\,\bar{Q}_{m}^{(\alpha,\beta)}(t;1)-\bar{Q}_{m+1}^{(\alpha,\beta)}(t;1)\,\bar{Q}_{m}^{(\alpha,\beta)}(x;1)}}{{x-t}}.$ #### 8.1.1. Some particular trigonometric cases There are four trigonometric cases of Jacobi polynomials which are known in the literature as the Chebyshev polynomials of the first, second, third and fourth kind. The main advantage of these polynomials is that their roots are explicitly known [4], see also [17]. Their monic forms are represented as (8.24) $\displaystyle{{\bar{T}}_{n}}(x)$ $\displaystyle=\bar{P}_{n}^{(-\frac{1}{2},-\frac{1}{2})}(x)=\frac{1}{{{2^{n-1}}}}\,\cos\left({n\arccos x}\right)=\prod\limits_{k=1}^{n}{(x-\cos\frac{{(2k-1)\pi}}{{2n}})},$ $\displaystyle{{\bar{U}}_{n}}(x)$ $\displaystyle=\bar{P}_{n}^{(\frac{1}{2},\frac{1}{2})}(x)=\frac{1}{{{2^{n}}\sqrt{1-{x^{2}}}}}\,\sin\left({(n+1)\arccos x}\right)=\prod\limits_{k=1}^{n}{(x-\cos\frac{{k\pi}}{{n+1}})},$ $\displaystyle{{\bar{V}}_{n}}(x)$ $\displaystyle=\bar{P}_{n}^{(-\frac{1}{2},\frac{1}{2})}(x)=\frac{1}{{{2^{n}}}}\sqrt{\frac{2}{{1+x}}}\,\cos((n+\frac{1}{2})\arccos x)=\prod\limits_{k=1}^{n}{(x-\cos\frac{{(2k-1)\pi}}{{2n+1}})},$ $\displaystyle{{\bar{W}}_{n}}(x)$ $\displaystyle=\bar{P}_{n}^{(\frac{1}{2},-\frac{1}{2})}(x)=\frac{1}{{{2^{n}}}}\sqrt{\frac{2}{{1-x}}}\,\sin((n+\frac{1}{2})\arccos x)=\prod\limits_{k=1}^{n}{(x-\cos\frac{{2k\pi}}{{2n+1}})}.$ Noting that ${T_{n}}(x)={2^{n-1}}{\bar{T}_{n}}(x),\,\,\,\,{U_{n}}(x)={2^{n}}{\bar{U}_{n}}(x),\,\,\,\,{V_{n}}(x)={2^{n}}{\bar{V}_{n}}(x)\,\,\,\,\text{and}\,\,\,\,{W_{n}}(x)={2^{n}}{\bar{W}_{n}}(x),$ they satisfy the following orthogonality relations (8.27) $\displaystyle\int_{-1}^{1}{{T_{n}}(x){T_{m}}(x)\frac{1}{{\sqrt{1-{x^{2}}}}}\,dx}=\left\\{\begin{array}[]{l}\frac{\pi}{2}\,{\delta_{n,m}},\\\ \pi\,\,\,\,{\rm{if}}\,\,\,n=m=0,\end{array}\right.$ $\displaystyle\int_{-1}^{1}{{U_{n}}(x){U_{m}}(x)\,\sqrt{1-{x^{2}}}\,dx}=\frac{\pi}{2}{\delta_{n,m}},$ $\displaystyle\int_{-1}^{1}{{V_{n}}(x)\,{V_{m}}(x)\sqrt{\frac{{1+x}}{{1-x}}}\,dx}=\pi\,{\delta_{n,m}},$ $\displaystyle\int_{-1}^{1}{{W_{n}}(x)\,{W_{m}}(x)\sqrt{\frac{{1-x}}{{1+x}}}\,dx}=\pi\,{\delta_{n,m}}.$ Now, we can use relations (8.24) and define four trigonometric uncorrelated sequences, according to
the main definition (8.16) as follows (8.28) ${\bar{T}_{n}}(x;1)=\bar{Q}_{n}^{(-\frac{1}{2},-\frac{1}{2})}(x;1)=\frac{{{{\bar{T}}_{n+1}}(1)-{{\bar{T}}_{n+1}}(x)}}{{1-x}}=\frac{{(n+1)\,{2^{n}}{{(3/2)}_{n}}}}{{{{(n+2)}_{n}}}}{}_{3}{F_{2}}\left({\left.{\begin{array}[]{*{20}{c}}{-n,\,\,n+2,\,\,1}\\\ {3/2,\,\,2}\end{array}\,}\right|\,\frac{{1-x}}{2}}\right)\\\ =\prod\limits_{k=1}^{n}{(x-\cos\frac{{2k\pi}}{{n+1}}}),$ ${\bar{U}_{n}}(x;1)=\bar{Q}_{n}^{(\frac{1}{2},\frac{1}{2})}(x;1)=\frac{{{{\bar{U}}_{n+1}}(1)-{{\bar{U}}_{n+1}}(x)}}{{1-x}}=\frac{{(n+1)\,{2^{n}}{{(5/2)}_{n}}}}{{{{(n+4)}_{n}}}}{}_{3}{F_{2}}\left({\left.{\begin{array}[]{*{20}{c}}{-n,\,\,n+4,\,\,1}\\\ {5/2,\,\,2}\end{array}\,}\right|\,\frac{{1-x}}{2}}\right),$ (8.29) ${\bar{V}_{n}}(x;1)=\bar{Q}_{n}^{(-\frac{1}{2},\frac{1}{2})}(x;1)=\frac{{{{\bar{V}}_{n+1}}(1)-{{\bar{V}}_{n+1}}(x)}}{{1-x}}=\frac{{(n+1)\,{2^{n}}{{(3/2)}_{n}}}}{{{{(n+3)}_{n}}}}{}_{3}{F_{2}}\left({\left.{\begin{array}[]{*{20}{c}}{-n,\,\,n+3,\,\,1}\\\ {3/2,\,\,2}\end{array}\,}\right|\,\frac{{1-x}}{2}}\right)\\\ =\prod\limits_{k=1}^{n}{(x-\cos\frac{{2k\pi}}{n}}),$ ${\bar{W}_{n}}(x;1)=\bar{Q}_{n}^{(\frac{1}{2},-\frac{1}{2})}(x;1)=\frac{{{{\bar{W}}_{n+1}}(1)-{{\bar{W}}_{n+1}}(x)}}{{1-x}}=\frac{{(n+1)\,{2^{n}}{{(5/2)}_{n}}}}{{{{(n+3)}_{n}}}}{}_{3}{F_{2}}\left({\left.{\begin{array}[]{*{20}{c}}{-n,\,\,n+3,\,\,1}\\\ {5/2,\,\,2}\end{array}\,}\right|\,\frac{{1-x}}{2}}\right).$ As we observe, only relations (8.28) and (8.29) are decomposable with multiple roots. In this direction, it is worth mentioning that there is a generic decomposable sequence as ${T_{n}}(x;\lambda)=\frac{{{T_{n+1}}(x)-{T_{n+1}}(\lambda)}}{{x-\lambda}}={2^{n}}\prod\limits_{k=1}^{n}{x-\left({\lambda\cos\frac{{2k\pi}}{{n+1}}-\sqrt{1-{\lambda^{2}}}\sin\frac{{2k\pi}}{{n+1}}}\right)}\,,$ satisfying the non-homogeneous differential equation $(x-\lambda)(1-{x^{2}})\frac{{{d^{2}}}}{{d{x^{2}}}}{T_{n}}(x;\lambda)+\left({-3{x^{2}}+\lambda x+2}\right)\frac{d}{{dx}}\,{T_{n}}(x;\lambda)\\\ +\left({{{(n+1)}^{2}}(x-\lambda)-x}\right)\,{T_{n}}(x;\lambda)=-{(n+1)^{2}}\cos((n+1)\arccos\lambda),$ and the recurrence relation ${T_{n+1}}(x;\lambda)=2x{T_{n}}(x;\lambda)-{T_{n-1}}(x;\lambda)+2\cos((n+1)\arccos\lambda).$ For instance ${T_{n}}(x;0)=\frac{{{T_{n+1}}(x)-{T_{n+1}}(0)}}{x}={2^{n}}\prod\limits_{k=1}^{n}{(x+\sin\frac{{2k\pi}}{{n+1}}}),$ is an uncorrelated polynomial with respect to the fixed function $z(x)=x^{-1}$ so that according to corollary 8.1 and relation (8.27) we have $\int_{-1}^{1}{\frac{{T_{n+1}^{2}(x)}}{{\sqrt{1-{x^{2}}}}}\,dx}=\int_{-1}^{1}{\frac{{{x^{2}}}}{{\sqrt{1-{x^{2}}}}}\,dx}=\frac{\pi}{2},$ and as a result ${\left.{{{{\mathop{\rm cov}}}_{1}}\left({{T_{m}}(x;0),\,{T_{n}}(x;0);\frac{1}{x}}\right)\,}\right|_{\frac{{{x^{2}}}}{{\sqrt{1-{x^{2}}}}}}}=\,\int_{-1}^{1}{\frac{{{x^{2}}}}{{\sqrt{1-{x^{2}}}}}\,{T_{m}}(x;0){T_{n}}(x;0)\,dx}\\\ -\frac{1}{\pi}\,\int_{-1}^{1}{\frac{x}{{\sqrt{1-{x^{2}}}}}\,{T_{m}}(x;0)\,dx}\,\int_{-1}^{1}{\frac{x}{{\sqrt{1-{x^{2}}}}}\,{T_{n}}(x;0)\,dx}={\delta_{m,n}}.$ ###### Remark 8.3.
If, in relation (8.24), we set $\arccos x=\theta$, the Chebyshev polynomials are transformed into four trigonometric sequences which are orthogonal with respect to the constant weight function on $[0,\pi]$ and are respectively represented as $\\{\cos n\theta\\}_{n=0}^{\infty}$, $\\{\sin(n+1)\theta\\}_{n=0}^{\infty}$, $\\{\cos(n+\frac{1}{2})\theta\\}_{n=0}^{\infty}$ and $\\{\sin(n+\frac{1}{2})\theta\\}_{n=0}^{\infty}$, satisfying the following orthogonality relations, according to the relations (8.27), $\displaystyle\int_{0}^{\pi}{\cos n\theta\,\cos m\theta\,d\theta}=\left\\{\begin{array}[]{l}\frac{\pi}{2}\,{\delta_{n,m}},\\\ \pi\,\,\,\,{\rm{if}}\,\,\,n=m=0,\end{array}\right.$ $\displaystyle\int_{0}^{\pi}{\sin(n+1)\theta\,\sin(m+1)\theta\,d\theta}=\frac{\pi}{2}\,{\delta_{n,m}},$ $\displaystyle\int_{0}^{\pi}{\cos(n+\frac{1}{2})\theta\,\cos(m+\frac{1}{2})\theta\,d\theta}=\frac{\pi}{2}\,{\delta_{n,m}},$ and $\int_{0}^{\pi}{\sin(n+\frac{1}{2})\theta\,\sin(m+\frac{1}{2})\theta\,d\theta}=\frac{\pi}{2}\,{\delta_{n,m}}.$ Here our goal is to consider such orthogonal sequences as initial data corresponding to the determinants (7.1) in order to see what the subsequent uncorrelated functions look like. Since the probability density function for all above-mentioned sequences is $w(\theta)=\frac{1}{\pi}$ on $[0,\pi]$, for the first sequence we obtain ${\mathop{\rm cov}}\,\left({\cos k\theta\,,\cos j\theta}\right)=E\left({\cos k\theta\cos j\theta}\right)-E\left({\cos k\theta}\right)E\left({\cos j\theta}\right)=0\Leftrightarrow k\neq j,$ which reveals that the initial data corresponding to the first orthogonal sequence do not generate uncorrelated trigonometric functions, because ${V_{0}}=\cos 0=1$. The story is somewhat different for the second sequence $\\{{V_{k}}=\sin(k+1)\theta\\}_{k=0}^{n}$, since we have ${\mathop{\rm cov}}\,\left({\sin(k+1)\theta\,,\sin(j+1)\theta}\right)=-\frac{1}{{{\pi^{2}}}}\frac{{(1+{{(-1)}^{k}})(1+{{(-1)}^{j}})}}{{(k+1)(j+1)\,}}+\frac{1}{2}{\delta_{k,j}},$ and for $k=j$, ${\mathop{\rm var}}\left({\sin(k+1)\theta}\right)=-\frac{2}{{{\pi^{2}}}}\frac{{1+{{(-1)}^{k}}}}{{{{(k+1)}^{2}}\,}}+\frac{1}{2}\,.$ Substituting the above data into (7.1), the monic type of the elements, e.g.
for $\\{{\bar{X}_{k}}={\bar{\Phi}_{k+1}}(\theta)\\}_{k=0}^{5}$ are derived as $\displaystyle{{\bar{\Phi}}_{1}}(\theta)$ $\displaystyle=\sin\theta,$ $\displaystyle{{\bar{\Phi}}_{2}}(\theta)$ $\displaystyle=\sin 2\theta,$ $\displaystyle{{\bar{\Phi}}_{3}}(\theta)$ $\displaystyle=\sin 3\theta+\frac{8}{3}\frac{1}{{{\pi^{2}}-8}}\sin\theta,$ $\displaystyle{{\bar{\Phi}}_{4}}(\theta)$ $\displaystyle=\sin 4\theta,$ $\displaystyle{{\bar{\Phi}}_{5}}(\theta)$ $\displaystyle=\sin 5\theta+\frac{{24}}{5}\frac{1}{{9{\pi^{2}}-80}}\sin 3\theta+\frac{8}{5}\frac{{9{\pi^{2}}-88}}{{(9{\pi^{2}}-80)({\pi^{2}}-8)}}\sin\theta,$ $\displaystyle{{\bar{\Phi}}_{6}}(\theta)$ $\displaystyle=\sin 6\theta,$ satisfying the uncorrelatedness condition (8.30) $\int_{\,0}^{\pi}{{{\bar{\Phi}}_{n}}(\theta)\,{{\bar{\Phi}}_{m}}(\theta)\,d\theta}=\frac{1}{\pi}\int_{\,0}^{\pi}{{{\bar{\Phi}}_{n}}(\theta)\,d\theta}\int_{\,0}^{\pi}{{{\bar{\Phi}}_{m}}(\theta)\,d\theta}\Leftrightarrow n\neq m.$ The samples show that the general structure of $\\{{\bar{\Phi}_{k}}(\theta)\\}_{k=1}^{n}$ is as follows ${\bar{\Phi}_{2k}}(\theta)=\sin 2k\theta\quad\text{and}\quad{\bar{\Phi}_{2k+1}}(\theta)=\sum\limits_{j=0}^{k}{{a_{2j+1}}\sin(2j+1)\theta}\quad\text{with}\quad{a_{2k+1}}=1.$ Also, if the change of variable $\theta=\arccos t$ is applied in (8.30), then $\int_{\,-1}^{1}{\frac{{{\Phi_{n}}(\arccos t)\,{\Phi_{m}}(\arccos t)}}{{\sqrt{1-{t^{2}}}}}\,dt}=\frac{1}{\pi}\int_{\,-1}^{1}{{\Phi_{n}}(\arccos t)\,dt}\int_{\,-1}^{1}{{\Phi_{m}}(\arccos t)\,dt}\Leftrightarrow n\neq m,$ where ${\Phi_{2k}}(\arccos t)=\sqrt{1-{t^{2}}}{U_{2k-1}}(t)\quad\text{and}\quad{\Phi_{2k+1}}(\arccos t)=\sqrt{1-{t^{2}}}\sum\limits_{j=0}^{k}{{a_{2j+1}}{U_{2j}}(t)}.$ The procedure for deriving two other sequences $\\{{V_{k}}=\cos(k+\frac{1}{2})\theta\\}_{k=0}^{n}$ and $\\{{V_{k}}=\sin(k+\frac{1}{2})\theta\\}_{k=0}^{n}$ is similar. For instance, we have ${\mathop{\rm cov}}\,\left({\sin(k+\frac{1}{2})\theta\,,\sin(j+\frac{1}{2})\theta}\right)=-\frac{4}{{{\pi^{2}}}}\frac{1}{{(2k+1)(2j+1)\,}}+\frac{1}{2}{\delta_{k,j}}.$ ### 8.2. 
An uncorrelated sequence of hypergeometric polynomials of ${}_{2}F_{2}$ type This turn, consider the monic type of the (generalized) Laguerre polynomials [4, 21] $\bar{L}_{n}^{(\alpha)}(x)={(-1)^{n}}{(\alpha+1)_{n}}{}_{1}{F_{1}}\left({\left.{\begin{array}[]{*{20}{c}}{-n}\\\ {\alpha+1}\end{array}\,}\right|\,x}\right),$ satisfying the equation $x\frac{{{d^{2}}}}{{d{x^{2}}}}\bar{L}_{n}^{(\alpha)}(x)+\left({\alpha+1-x}\right)\frac{d}{{dx}}\bar{L}_{n}^{(\alpha)}(x)+n\,\bar{L}_{n}^{(\alpha)}(x)=0\,,$ and orthogonal with respect to the weight function ${x^{\alpha}}{e^{-x}}$ on $[0,\infty)$ as $\int_{0}^{\infty}{{x^{\alpha}}{e^{-x}}\bar{L}_{m}^{(\alpha)}(x)\,\bar{L}_{n}^{(\alpha)}(x)\,dx}=n!\,\Gamma(n+\alpha+1)\,{\delta_{n,m}}\,.$ They also satisfy the recurrence relation (8.31) $\bar{L}_{n+1}^{(\alpha)}(x)=\left({x-2n-\alpha-1}\right)\,\bar{L}_{n}^{(\alpha)}(x)-n(n+\alpha)\bar{L}_{n-1}^{(\alpha)}(x)\,.$ Noting that $\bar{L}_{n}^{(\alpha)}(0)={(-1)^{n}}{(\alpha+1)_{n}},$ the uncorrelated polynomials based on Laguerre polynomials is defined as (8.32) $\bar{Q}_{n}^{(\alpha)}(x;0)=\frac{{\bar{L}_{n+1}^{(\alpha)}(x)-\bar{L}_{n+1}^{(\alpha)}(0)}}{x}={(-1)^{n}}(n+1){(\alpha+2)_{n}}\,{}_{2}{F_{2}}\left({\left.{\begin{array}[]{*{20}{c}}{-n,\,\,1}\\\ {\alpha+2,\,\,2}\end{array}\,}\right|\,x}\right).$ Similar to the previous example, for $\alpha>-1$ we can prove that ${\left.{{{{\mathop{\rm cov}}}_{1}}\left({\bar{Q}_{m}^{(\alpha)}(x;0),\,\bar{Q}_{n}^{(\alpha)}(x;0);\frac{1}{x}}\right)\,}\right|_{{x^{\alpha+2}}{e^{-x}}}}=\frac{{n!\,\,\Gamma(n+\alpha+1)}}{{\Gamma(\alpha+3)}}\,{\delta_{m,n}}.$ Also, since in (8.31), ${C_{j}}=j(j+\alpha)\,\,\,\,\text{and}\,\,\,\,\prod\limits_{j=1}^{m+1}{{C_{j}}}={(1)_{m+1}}{(\alpha+1)_{m+1}}=(m+1)!\,\frac{{\Gamma(\alpha+m+2)}}{{\Gamma(\alpha+1)}},$ the identity (8.9) for the monic polynomials (8.32) is derived as $\sum\limits_{n=0}^{m}{(n+1)!\,{{(\alpha+1)}_{n+1}}\left({\bar{Q}_{n}^{(\alpha)}(x;0)\,\bar{Q}_{n}^{(\alpha)}(t;0)\,-{{(-1)}^{n}}{{(\alpha+1)}_{n}}\frac{{\bar{Q}_{n}^{(\alpha)}(x;0)\,-\bar{Q}_{n}^{(\alpha)}(t;0)}}{{x-t}}}\right)}\\\ =(m+1)!\,{(\alpha+1)_{m+1}}\frac{{\bar{Q}_{m+1}^{(\alpha)}(x;0)\,\bar{Q}_{m}^{(\alpha)}(t;0)-\bar{Q}_{m+1}^{(\alpha)}(t;0)\,\bar{Q}_{m}^{(\alpha)}(x;0)}}{{x-t}}.$ ## 9\. A unified approach for the polynomials obtained in sections 6, 7 and 8 According to the distributions given in table 1, only beta and gamma weight functions can be considered for the non-symmetric infinite cases of uncorrelated polynomials, as the normal distribution is somehow connected to a special case of gamma distribution. In this direction, the general properties of two polynomials (6.15) and (7.3) reveal that the most general case of complete uncorrelated polynomials relevant to the beta weight function is when $w(x)={x^{a}}{(1-x)^{b}}$ and $w(x)\,z(x)={x^{c}}{(1-x)^{d}}$, i.e. $z(x)={x^{c-a}}{(1-x)^{d-b}}$ where $a,b,c,d\in\mathbb{R}$ and $x\in[0,1]$. 
Hence, if the corresponding uncorrelated polynomial is indicated as ${{\bf{P}}_{n}}(x;a,b,c,d)$, we have (9.1) $\displaystyle\int_{\,0}^{1}{{x^{a}}{{(1-x)}^{b}}{{\bf{P}}_{n}}(x;a,b,c,d)\,{{\bf{P}}_{m}}(x;a,b,c,d)\,dx}$ $\displaystyle-\frac{{\Gamma(2c+2d-a-b+2)}}{{\Gamma(2c-a+1)\Gamma(2d-b+1)}}\int_{\,0}^{1}{{x^{c}}{{(1-x)}^{d}}{{\bf{P}}_{n}}(x;a,b,c,d)\,dx}\,\int_{\,0}^{1}{{x^{c}}{{(1-x)}^{d}}{{\bf{P}}_{m}}(x;a,b,c,d)\,dx}$ $\displaystyle\qquad=\left(\int_{\,0}^{1}{{x^{a}}{{(1-x)}^{b}}{\bf{P}}_{n}^{2}(x;a,b,c,d)\,dx}\right.$ $\displaystyle\qquad\quad\left.-\frac{{\Gamma(2c+2d-a-b+2)}}{{\Gamma(2c-a+1)\Gamma(2d-b+1)}}{{\left({\int_{\,0}^{1}{{x^{c}}{{(1-x)}^{d}}{{\bf{P}}_{n}}(x;a,b,c,d)\,dx}}\right)}^{2}}\right){\delta_{m,n}},$ provided that $2c-a+1>0,\,\,\,2d-b+1>0\,\,\,\,\,{\rm{and}}\,\,\,\,b,d>-1.$ The components of the determinant (6.10) corresponding to this generic polynomial are computed as (9.2) ${\left.{{{{\mathop{\rm cov}}}_{1}}\,\left({{x^{i}},{x^{j}};{x^{c-a}}{{(1-x)}^{d-b}}}\right)\,}\right|_{w(x)={x^{a}}{{(1-x)}^{b}}}}=\\\ \Gamma(b+1)\,\frac{{\Gamma(a+i+j+1)}}{{\Gamma(a+b+i+j+2)}}-\frac{{\Gamma(2c+2d-a-b+2){\Gamma^{2}}(d+1)}}{{\Gamma(2c-a+1)\Gamma(2d-b+1)}}\,\frac{{\Gamma(c+i+1)\Gamma(c+j+1)}}{{\Gamma(c+d+i+2)\Gamma(c+d+j+2)}},$ in which $2c-a+1>0,\,\,\,2d-b+1>0$ and $b,d>-1$. According to the preceding information, the polynomials (6.15) can be represented as (9.3) ${}_{4}{F_{3}}\left({\left.{\begin{array}[]{*{20}{c}}{-n,\,\,n+1,\,-r,\,\,r+2}\\\ {1,\,\,-r+1,\,\,r+1}\end{array}\,}\right|\,x}\right)\,={{\bf{P}}_{n}}(x;0,0,r,0),$ and the polynomials (7.3) as (9.4) ${}_{4}{F_{3}}\left({\left.{\begin{array}[]{*{20}{c}}{-n,\,\,n+2r+1,\,r,\,\,r+2}\\\ {2r+1,\,\,r+1,\,\,r+1}\end{array}\,}\right|\,x}\right)={{\bf{P}}_{n}}(x;2r,0,r,0),$ and since in (8.23), $\int_{\,0}^{1}{{x^{\alpha+2}}{{(1-x)}^{\beta}}{}_{3}{F_{2}}\left({\left.{\begin{array}[]{*{20}{c}}{-n,\,\,n+\alpha+\beta+3,\,\,1}\\\ {\alpha+2,\,\,2}\end{array}\,}\right|\,x}\right){}_{3}{F_{2}}\left({\left.{\begin{array}[]{*{20}{c}}{-m,\,\,m+\alpha+\beta+3,\,\,1}\\\ {\alpha+2,\,\,2}\end{array}\,}\right|\,x}\right)\,dx}\\\ =\frac{1}{{\int_{\,0}^{1}{{x^{\alpha}}{{(1-x)}^{\beta}}dx}}}(\left({\int_{\,0}^{1}{{x^{\alpha+1}}{{(1-x)}^{\beta}}{}_{3}{F_{2}}\left({\left.{\begin{array}[]{*{20}{c}}{-n,\,\,n+\alpha+\beta+3,\,\,1}\\\ {\alpha+2,\,\,2}\end{array}\,}\right|\,x}\right)\,dx}}\right)\\\ \times\left({\int_{\,0}^{1}{{x^{\alpha+1}}{{(1-x)}^{\beta}}{}_{3}{F_{2}}\left({\left.{\begin{array}[]{*{20}{c}}{-m,\,\,m+\alpha+\beta+3,\,\,1}\\\ {\alpha+2,\,\,2}\end{array}\,}\right|\,x}\right)\,dx}}\right)\quad\Leftrightarrow\,\,n\neq m,$ the shifted polynomials (8.20) on $[0,1]$ are represented as ${}_{3}{F_{2}}\left({\left.{\begin{array}[]{*{20}{c}}{-n,\,\,n+\alpha+\beta+3,\,\,1}\\\ {\alpha+2,\,\,2}\end{array}\,}\right|\,x}\right)={{\bf{P}}_{n}}(x;\alpha+2,\beta,\alpha+1,\beta).$ Similarly, the most general case of complete uncorrelated polynomials relevant to the gamma weight function is when $w(x)={x^{a}}{e^{-bx}}$ and $w(x)\,z(x)={x^{c}}{e^{-d\,x}}$, i.e. $z(x)={x^{c-a}}{e^{-(d-b)\,x}}$ where $a,b,c,d\in\mathbb{R}$ and $x\in\mathbb{R}^{+}$. 
Therefore, if the corresponding uncorrelated polynomial is indicated as ${{\bf{Q}}_{n}}(x;a,b,c,d)$, then (9.5) $\int_{\,0}^{\infty}{{x^{a}}{e^{-bx}}\,{{\bf{Q}}_{n}}(x;a,b,c,d)\,{{\bf{Q}}_{m}}(x;a,b,c,d)\,dx}\\\ -\frac{{\Gamma(2c-a+1)}}{{{{(2d-b)}^{2c-a+1}}}}\int_{\,0}^{\infty}{{x^{c}}{e^{-d\,x}}\,{{\bf{Q}}_{n}}(x;a,b,c,d)\,dx}\,\int_{\,0}^{\infty}{{x^{c}}{e^{-d\,x}}\,{{\bf{Q}}_{m}}(x;a,b,c,d)\,dx}\\\ =\left({\int_{\,0}^{\infty}{{x^{a}}{e^{-bx}}\,{\bf{Q}}_{n}^{2}(x;a,b,c,d)\,dx}-\frac{{\Gamma(2c-a+1)}}{{{{(2d-b)}^{2c-a+1}}}}{{\left({\int_{\,0}^{\infty}{{x^{c}}{e^{-d\,x}}\,{{\bf{Q}}_{n}}(x;a,b,c,d)\,dx}}\right)}^{2}}}\right){\delta_{m,n}},$ provided that $2c-a+1>0,\,\,\,2d-b>0\,\,\,\,{\rm{and}}\,\,\,\,b,d>0.$ The components of the determinant (6.10) corresponding to this second generic polynomial are computed as (9.6) ${\left.{{{{\mathop{\rm cov}}}_{1}}\,\left({{x^{i}},{x^{j}};{x^{c-a}}{e^{-(d\,-b)x}}}\right)\,}\right|_{w(x)={x^{a}}{e^{-bx}}}}\\\ =\frac{{\Gamma(a+i+j+1)}}{{{b^{a+i+j+1}}}}-\frac{{\Gamma(2c-a+1)}}{{{{(2d-b)}^{2c-a+1}}}}\,\frac{{\Gamma(c+i+1)\Gamma(c+j+1)}}{{{d^{2c+i+j+2}}}},$ in which $2c-a+1>0,\,\,\,2d-b>0\,\,\,\,{\rm{and}}\,\,\,\,b,d>0$. As a sample, the polynomials (8.32) can be represented as ${}_{2}{F_{2}}\left({\left.{\begin{array}[]{*{20}{c}}{-n,\,\,1}\\\ {\alpha+2,\,\,2}\end{array}\,}\right|\,x}\right)={{\bf{Q}}_{n}}(x;\alpha+2,1,\alpha+1,1)\,.$ ### 9.1. A basic example of uncorrelated hypergeometric polynomials of ${}_{4}F_{3}$ type In this section, we are going to obtain the explicit form of ${{\bf{P}}_{n}}(x;a,0,c,0)$ using an interesting technique. Since $b=d=0$, so $w(x)=x^{a}$ and $z(x)=x^{c-a}$ defined on $[0,1]$. In the first step, we can simplify (9.2) for $b=d=0$ as ${\left.{{{{\mathop{\rm cov}}}_{1}}\,\left({{x^{i}},{x^{j}};{x^{c-a}}}\right)\,}\right|_{w(x)={x^{a}}}}=\frac{{(c-a-i)(c-a-j)}}{{(i+c+1)(j+c+1)(i+j+a+1)}}\,,$ where $2c-a+1>0,\,\,\,a\neq c\,\,\,\,{\rm{and}}\,\,\,\,a,c>-1$. Referring to the results (9.3) and (9.4) and this fact that ${{\bf{P}}_{n}}(x;a,0,c,0)$ is a generalization of both of them, we can imagine that it is of ${}_{4}F_{3}$ type without loss of generality. 
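Before carrying out the determinant computation described next, the simplified entries above can be checked symbolically. The following is a minimal sketch, assuming sympy is available; the chosen values of $a$ and $c$ and the helper name `cov1` are ours, introduced only for illustration.

```python
# Sketch: symbolic check of the entries cov_1(x^i, x^j; x^(c-a)) for w(x) = x^a on [0,1].
# The values of a and c below are arbitrary choices with 2c-a+1 > 0, a != c and a, c > -1.
import sympy as sp

x = sp.symbols('x', positive=True)
a, c = sp.Rational(13, 10), sp.Rational(2, 5)
i, j = 2, 5   # two fixed exponents

def cov1(f, g, z, w):
    # the 1-covariance convention used in (9.9): int(w*f*g) - int(w*f*z)*int(w*g*z)/int(w*z^2) over [0,1]
    I = lambda h: sp.integrate(w * h, (x, 0, 1))
    return I(f * g) - I(f * z) * I(g * z) / I(z * z)

lhs = cov1(x**i, x**j, x**(c - a), x**a)
rhs = (c - a - i) * (c - a - j) / ((i + c + 1) * (j + c + 1) * (i + j + a + 1))
print(sp.simplify(lhs - rhs))   # prints 0
```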
Since in a general ${}_{4}F_{3}$ polynomial of the form ${}_{4}{F_{3}}\left({\left.{\begin{array}[]{*{20}{c}}{-n,\,\,{a_{2}},\,{a_{3}},\,\,{a_{4}}}\\\ {{b_{1}},\,\,{b_{2}},\,\,{b_{3}}}\end{array}\,}\right|\,x}\right)=\sum\limits_{k=0}^{n}{{u_{k}}\,{x^{k}}}\,\,\,\,{\rm{where}}\,\,\,\,\,{u_{k}}=\frac{{{{(-n)}_{k}}{{({a_{2}})}_{k}}{{({a_{3}})}_{k}}{{({a_{4}})}_{k}}}}{{{{({b_{1}})}_{k}}{{({b_{2}})}_{k}}{{({b_{3}})}_{k}}k!}},$ we have (9.7) $\frac{{{u_{n-1}}}}{{{u_{n}}}}=-\frac{{n\,(n+{b_{1}}-1)(n+{b_{2}}-1)(n+{b_{3}}-1)}}{{(n+{a_{2}}-1)(n+{a_{3}}-1)(n+{a_{4}}-1)}},$ which is to be equal to the minus of the coefficient of $x^{n-1}$ in the determinant of the monic polynomial ${{\bf{\bar{P}}}_{n}}(x;a,0,c,0)$ in (6.10), if for simplicity we set ${a_{i,j}}=\frac{{(c-a-i)(c-a-j)}}{{(i+c+1)(j+c+1)(i+j+a+1)}}\,\,\,\,\,\,{\rm{for}}\,\,\,i=0,1,...,n-1\,\,\,{\rm{and}}\,\,\,j=0,1,...,n,$ to achieve our goal, we should therefore compute the two following determinants (according to the main determinant (6.10)), $M_{n}=\begin{vmatrix}{{a_{0,0}}}&{{a_{0,1}}}&\cdots&{a_{0,n-2}}&{a_{0,n-1}}\\\ {{a_{1,0}}}&{{a_{1,1}}}&\cdots&{a_{1,n-2}}&{a_{1,n-1}}\\\ \vdots&\vdots&\vdots&\vdots&\vdots\\\ {{a_{n-1,0}}}&{{a_{n-1,1}}}&\cdots&{a_{n-1,n-2}}&{a_{n-1,n-1}}\end{vmatrix},$ and $N_{n}=\begin{vmatrix}{{a_{0,0}}}&{{a_{0,1}}}&\cdots&{a_{0,n-2}}&{a_{0,n}}\\\ {{a_{1,0}}}&{{a_{1,1}}}&\cdots&{a_{1,n-2}}&{a_{1,n}}\\\ \vdots&\vdots&\vdots&\vdots&\vdots\\\ {{a_{n-1,0}}}&{{a_{n-1,1}}}&\cdots&{a_{n-1,n-2}}&{a_{n-1,n}}\end{vmatrix},$ and then obtain the quotient $-{N_{n}}/{M_{n}}$ (with the aid of advanced mathematical software) as $-\frac{{{N_{n}}}}{{{M_{n}}}}=-\frac{{n\,(n+a)(n+a-c)(n+c)}}{{(2n+a)(n+a-c-1)(n+c+1)}},$ and compare it with (9.7) to finally obtain the explicit form of the polynomials as (9.8) ${{\bf{P}}_{n}}(x;a,0,c,0)={}_{4}{F_{3}}\left({\left.{\begin{array}[]{*{20}{c}}{-n,\,\,n+a+1,\,a-c,\,\,c+2}\\\ {a+1,\,\,a-c+1,\,\,c+1}\end{array}\,}\right|\,x}\right)\,,$ satisfying the uncorrelatedness condition (9.1) with $b=d=0$, i.e. (9.9) $\int_{\,0}^{1}{{x^{a}}\,{{\bf{P}}_{n}}(x;a,0,c,0)\,{{\bf{P}}_{m}}(x;a,0,c,0)\,dx}\\\ -(2c-a+1)\,\int_{\,0}^{1}{{x^{c}}\,{{\bf{P}}_{n}}(x;a,0,c,0)\,dx}\,\int_{\,0}^{1}{{x^{c}}\,{{\bf{P}}_{m}}(x;a,0,c,0)\,dx}\\\ =\left({\int_{\,0}^{1}{{x^{a}}\,{\bf{P}}_{n}^{2}(x;a,0,c,0)\,dx}-(2c-a+1){{\left({\int_{\,0}^{1}{{x^{c}}\,{{\bf{P}}_{n}}(x;a,0,c,0)\,dx}}\right)}^{2}}}\right){\delta_{m,n}}\\\ \Leftrightarrow 2c-a+1>0,\,\,a\neq c\,\,\,{\rm{and}}\,\,\,a,c>-1\,.$ For the limit case $2c-a+1=0$ in (9.9), the polynomials (9.8) would reduce to the well-known special case of shifted Jacobi polynomials defined on $[0,1]$, i.e. 
${{\bf{P}}_{n}}(x;a,0,\frac{{a-1}}{2},0)=P_{n,+}^{(0,a)}(x)={}_{2}{F_{1}}\left({\left.{\begin{array}[]{*{20}{c}}{-n,\,\,n+a+1}\\\ {a+1}\end{array}\,}\right|\,x}\right)\,\,\,\,\,\,\,\,\,(a>-1).$ Also, we have $\displaystyle\mathop{\lim}\limits_{c\to\infty}{{\bf{P}}_{n}}(x;a,0,c,0)$ $\displaystyle=\mathop{\lim}\limits_{c\to\infty}{}_{4}{F_{3}}\left({\left.{\begin{array}[]{*{20}{c}}{-n,\,\,n+a+1,\,a-c,\,\,c+2}\\\ {a+1,\,\,a-c+1,\,\,c+1}\end{array}\,}\right|\,x}\right)\,$ $\displaystyle={}_{2}{F_{1}}\left({\left.{\begin{array}[]{*{20}{c}}{-n,\,\,n+a+1}\\\ {a+1}\end{array}\,}\right|\,x}\right)=P_{n,+}^{(0,a)}(x),$ and $\mathop{\lim}\limits_{a\to\infty}{{\bf{P}}_{n}}(x;a,0,c,0)={}_{2}{F_{1}}\left({\left.{\begin{array}[]{*{20}{c}}{-n,\,\,c+2}\\\ {c+1}\end{array}\,}\right|\,x}\right)\,\,\,\,\,\,\,\,\,(c>-1).$ It seems that the results are straightforward for $a=c=0$ and $b=d=s$ in (9.5) and (9.6), as they take the Laplace transform form $L\left({f(x)}\right)=\int_{\,0}^{\infty}{{e^{-sx}}f(x)\,dx}\,\,\,\,\,\,(s>0),$ and (9.5) becomes $L\Big{(}{{{\bf{Q}}_{n}}(x;0,s,0,s)\,{{\bf{Q}}_{m}}(x;0,s,0,s)}\Big{)}-\frac{1}{s}L\Big{(}{{{\bf{Q}}_{n}}(x;0,s,0,s)}\Big{)}L\Big{(}{{{\bf{Q}}_{m}}(x;0,s,0,s)}\Big{)}\\\ =\left({L\Big{(}{{\bf{Q}}_{n}^{2}(x;0,s,0,s)}\Big{)}-\frac{1}{s}{L^{2}}\Big{(}{{{\bf{Q}}_{n}}(x;0,s,0,s)}\Big{)}}\right){\delta_{m,n}}.$ As we mentioned, the polynomials (9.8) are a generalization of (6.15) or (9.3) for $(a,c)=(0,r)$ and a generalization of (7.3) or (9.4) for $(a,c)=(2r,r)$. In order to evaluate the existing integrals in (9.9), we first have (9.10) $\int_{\,0}^{1}{{x^{c}}\,{{\bf{P}}_{n}}(x;a,0,c,0)\,dx}=\sum\limits_{k=0}^{n}{\frac{{{{(-n)}_{k}}{{(n+a+1)}_{k}}{{(a-c)}_{k}}{{(c+2)}_{k}}}}{{{{(a+1)}_{k}}{{(a-c+1)}_{k}}{{(c+1)}_{k}}k!\,\,(c+1+k)}}\,}\\\ \,\,\,\,\,\,\,\,\,\,\,\,\,=\frac{1}{{c+1}}{}_{3}{F_{2}}\left({\left.{\begin{array}[]{*{20}{c}}{-n,\,\,n+a+1,\,a-c}\\\ {a+1,\,\,a-c+1}\end{array}\,}\right|\,\,1}\right)=\frac{1}{{c+1}}\frac{{n!\,\,{{(c+1)}_{n}}}}{{{{(a+1)}_{n}}\,{{(a+1-c)}_{n}}}},$ and (9.11) $\int_{\,0}^{1}{{x^{a}}\,{{\bf{P}}_{n}}(x;a,0,c,0)\,{{\bf{P}}_{m}}(x;a,0,c,0)\,dx}\\\ =\frac{1}{{a+1}}\sum\limits_{k=0}^{n}\frac{{{{(-n)}_{k}}{{(n+a+1)}_{k}}{{(a-c)}_{k}}{{(c+2)}_{k}}}}{{{{(a+2)}_{k}}{{(a-c+1)}_{k}}{{(c+1)}_{k}}k!\,}}\\\ \times{}_{5}{F_{4}}\left({\left.{\begin{array}[]{*{20}{c}}{-m,\,\,m+a+1,\,a-c,\,\,c+2,\,a+1+k}\\\ {a+1,\,\,a-c+1,\,c+1,\,\,a+2+k}\end{array}\,}\right|\,\,1}\right)\,,$ which simplifies the left side of (9.9) as (9.12) $\displaystyle\int_{\,0}^{1}{{x^{a}}\,{{\bf{P}}_{n}}(x;a,0,c,0)\,{{\bf{P}}_{m}}(x;a,0,c,0)\,dx}$ $\displaystyle\qquad-(2c-a+1)\,\int_{\,0}^{1}{{x^{c}}\,{{\bf{P}}_{n}}(x;a,0,c,0)\,dx}\,\int_{\,0}^{1}{{x^{c}}\,{{\bf{P}}_{m}}(x;a,0,c,0)\,dx}$ $\displaystyle\quad=\frac{1}{{a+1}}\sum\limits_{k=0}^{n}\frac{{{{(-n)}_{k}}{{(n+a+1)}_{k}}{{(a-c)}_{k}}{{(c+2)}_{k}}}}{{{{(a+2)}_{k}}{{(a-c+1)}_{k}}{{(c+1)}_{k}}k!\,}}$ (9.15) $\displaystyle\qquad\times{}_{5}{F_{4}}\left({\left.{\begin{array}[]{*{20}{c}}{-m,\,\,m+a+1,\,a-c,\,\,c+2,\,a+1+k}\\\ {a+1,\,\,a-c+1,\,c+1,\,\,a+2+k}\end{array}\,}\right|\,\,1}\right)\,$ $\displaystyle\qquad-\frac{{2c-a+1}}{{{{(c+1)}^{2}}}}\frac{{n!\,\,{{(c+1)}_{n}}}}{{{{(a+1)}_{n}}\,{{(a+1-c)}_{n}}}}\frac{{m!\,\,{{(c+1)}_{m}}}}{{{{(a+1)}_{m}}\,{{(a+1-c)}_{m}}}}.$ To calculate (9.10), we have again used the recurrence relations technique. 
Since $N(n)={}_{3}{F_{2}}\left({\left.{\begin{array}[]{*{20}{c}}{-n,\,\,n+a+1,\,a-c}\\\ {a+1,\,\,a-c+1}\end{array}\,}\right|\,\,1}\right),$ satisfies the first order relation $N(n+1)=\frac{{(n+1)(n+c+1)}}{{(n+a+1)(n+a-c+1)}}N(n),$ so ${}_{3}{F_{2}}\left({\left.{\begin{array}[]{*{20}{c}}{-n,\,\,n+a+1,\,a-c}\\\ {a+1,\,\,a-c+1}\end{array}\,}\right|\,\,1}\right)=\frac{{n!\,\,{{(c+1)}_{n}}}}{{{{(a+1)}_{n}}\,{{(a+1-c)}_{n}}}}.$ However, note that all results obtained in (9.10), (7.6) and (6.17) could also be derived by the Saalschutz theorem [11], which says if $c^{*}$ is a negative integer and ${a^{*}}+{b^{*}}+{c^{*}}+1={d^{*}}+{e^{*}}$ then ${}_{3}{F_{2}}\left({\left.{\begin{array}[]{*{20}{c}}{\begin{array}[]{*{20}{c}}{{a^{*},}}&{{b^{*},}}&{{c^{*}}}\end{array}}\\\ {\begin{array}[]{*{20}{c}}{{d^{*},}}&{{e^{*}}}\end{array}}\end{array}\,}\right|\,\,1}\right)=\frac{{{{({d^{*}}-{a^{*}})}_{|{c^{*}}|}}{{({d^{*}}-{b^{*}})}_{|{c^{*}}|}}}}{{{{({d^{*}})}_{|{c^{*}}|}}{{({d^{*}}-{a^{*}}-{b^{*}})}_{|{c^{*}}|}}}}.$ Similarly, to calculate ${}_{5}{F_{4}}(.)$ in (9.11), we can apply a recurrence technique as follows. Since $M(m)={}_{5}{F_{4}}\left({\left.{\begin{array}[]{*{20}{c}}{-m,\,\,m+a+1,\,a-c,\,\,c+2,\,a+1+k}\\\ {a+1,\,\,a-c+1,\,c+1,\,\,a+2+k}\end{array}\,}\right|\,\,1}\right),$ satisfies the second order equation $(2m+a+2)(m+a+2)(m+a+1)(m+a+2-c)(m+a+3+k)\,M(m+2)\\\ -(2m+a+3)(m+a+1)(m+2)\left({2m(m+a+3)+(2c-a)k+(a+2)(c+2)}\right)\,M(m+1)\\\ +(2m+a+4)(m+2)(m+1)(m+1+c)(m-k)\,M(m)=0,$ having two independent solutions ${M_{1}}(m)=\frac{{m!\,\,{{(c+1)}_{m}}}}{{{{(a+1)}_{m}}{{(a+1-c)}_{m}}}},$ and ${M_{2}}(m)=\frac{{m!\,\,{{(-k)}_{m}}}}{{{{(a+1)}_{m}}{{(a+2+k)}_{m}}}},$ so (9.18) $\displaystyle{}_{5}{F_{4}}\left({\left.{\begin{array}[]{*{20}{c}}{-m,\,\,m+a+1,\,a-c,\,\,c+2,\,a+1+k}\\\ {a+1,\,\,a-c+1,\,c+1,\,\,a+2+k}\end{array}\,}\right|\,\,1}\right)$ $\displaystyle\quad=\frac{{m!\,}}{{(c+1)(c+1+k){{(a+1)}_{m}}}}$ $\displaystyle\qquad\times\left({(2c-a+1)(a+1+k)\frac{{{{(c+1)}_{m}}}}{{{{(a+1-c)}_{m}}}}+\,(a-c)(a-c+k)\frac{{{{(-k)}_{m}}}}{{{{(a+2+k)}_{m}}}}}\right).$ Hence, in order to compute ${{\mathop{\rm var}}_{1}}\left({{{\bf{P}}_{n}}(x;a,0,c,0);\,{x^{c-a}}}\right)\\\ =\int_{\,0}^{1}{{x^{a}}\,{\bf{P}}_{n}^{2}(x;a,0,c,0)\,dx}-(2c-a+1){\left({\int_{\,0}^{1}{{x^{c}}\,{{\bf{P}}_{n}}(x;a,0,c,0)\,dx}}\right)^{2}},$ we first suppose in (9.12) that $m=n$ and then refer to (9.18) to arrive at $\displaystyle\left({a+1}\right){M^{*}}$ $\displaystyle=\sum\limits_{k=0}^{n}\frac{{{{(-n)}_{k}}{{(n+a+1)}_{k}}{{(a-c)}_{k}}{{(c+2)}_{k}}}}{{{{(a+2)}_{k}}{{(a-c+1)}_{k}}{{(c+1)}_{k}}k!\,}}$ $\displaystyle\qquad\times{}_{5}{F_{4}}\left({\left.{\begin{array}[]{*{20}{c}}{-n,\,\,n+a+1,\,a-c,\,\,c+2,\,a+1+k}\\\ {a+1,\,\,a-c+1,\,c+1,\,\,a+2+k}\end{array}\,}\right|\,\,1}\right)$ $\displaystyle=\frac{{(a+1)(2c-a+1)}}{{{{(c+1)}^{2}}}}\frac{{{{(n!)}^{2}}(c+1)_{n}^{2}}}{{(a+1)_{n}^{2}(a-c+1)_{n}^{2}}}$ $\displaystyle\quad+\frac{{(a+1){{(a-c)}^{2}}}}{{{{(c+1)}^{2}}}}\frac{{n!}}{{(n+a+1)(a+1)_{n}^{2}}}\sum\limits_{k=0}^{n}{\frac{{{{(-n)}_{k}}{{(n+a+1)}_{k}}{{(-k)}_{n}}}}{{{{(n+a+2)}_{k}}k!\,}}}.$ Again, noting that ${(-k)_{n}}=0$ for any $k<n$, the following final result will be derived. ###### Corollary 9.1. 
If $2c-a+1>0,\,\,a\neq c\,\,\,{\rm{and}}\,\,\,a,c>-1$, then $\displaystyle\int_{\,0}^{1}{{x^{a}}\,{\bf{P}}_{n}^{2}(x;a,0,c,0)\,dx}-(2c-a+1){\left({\int_{\,0}^{1}{{x^{c}}\,{{\bf{P}}_{n}}(x;a,0,c,0)\,dx}}\right)^{2}}$ $\displaystyle\qquad\qquad\qquad\qquad\qquad=\frac{1}{{2n+a+1}}\left({\frac{{(a-c)\,n!}}{{(c+1)\,{{(a+1)}_{n}}}}}\right)^{2}.$ Moreover, (9.12) is simplified as $\int_{\,0}^{1}{{x^{a}}\,{{\bf{P}}_{n}}(x;a,0,c,0)\,{{\bf{P}}_{m}}(x;a,0,c,0)\,dx}\\\ -(2c-a+1)\,\int_{\,0}^{1}{{x^{c}}\,{{\bf{P}}_{n}}(x;a,0,c,0)\,dx}\,\int_{\,0}^{1}{{x^{c}}\,{{\bf{P}}_{m}}(x;a,0,c,0)\,dx}\\\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,=\frac{{{{(a-c)}^{2}}}}{{{{(c+1)}^{2}}}}\frac{{m!}}{{(m+a+1)(a+1)_{m}^{2}\,}}\sum\limits_{k=0}^{n}{\frac{{{{(n+a+1)}_{k}}{{(-n)}_{k}}}}{{{{(m+a+2)}_{k}}}}\frac{{{{(-k)}_{m}}}}{{k!}}}=0\Leftrightarrow m\neq n,$ leading to the same as relation (7.12) for $2r=\alpha>-1$. ###### Corollary 9.2. Using the latter corollary, one can now construct an optimized polynomial approximation (or expansion) for $f(x)$ whose error 1-variance is minimized as follows (9.19) $f(x)=\sum\limits_{k=0}^{n\to\infty}{\,\frac{{{A_{k}}}}{{{B_{k}}}}}{}_{4}{F_{3}}\left({\left.{\begin{array}[]{*{20}{c}}{-k,\,\,k+a+1,\,a-c,\,\,c+2}\\\ {a+1,\,\,a-c+1,\,\,c+1}\end{array}\,}\right|\,x}\right)\,,$ in which (9.22) $\displaystyle{A_{k}}$ $\displaystyle=\int_{\,0}^{1}{{x^{a}}\,f(x)\,{}_{4}{F_{3}}\left({\left.{\begin{array}[]{*{20}{c}}{-k,\,\,k+a+1,\,a-c,\,\,c+2}\\\ {a+1,\,\,a-c+1,\,\,c+1}\end{array}\,}\right|\,x}\right)\,dx}$ (9.25) $\displaystyle\quad-\left({2c-a+1}\right)\int_{\,0}^{1}{{x^{c}}f(x)\,dx}\,\int_{\,0}^{1}{{x^{c}}{}_{4}{F_{3}}\left({\left.{\begin{array}[]{*{20}{c}}{-k,\,\,k+a+1,\,a-c,\,\,c+2}\\\ {a+1,\,\,a-c+1,\,\,c+1}\end{array}\,}\right|\,x}\right)\,dx}$ $\displaystyle=\sum\limits_{j=0}^{k}{\frac{{{{(-k)}_{j}}{{(k+a+1)}_{j}}{{(a-c)}_{j}}{{(c+2)}_{j}}}}{{{{(a+1)}_{j}}{{(a-c+1)}_{j}}{{(c+1)}_{j}}\,j!}}\int_{\,0}^{1}{{x^{a+j}}f(x)\,dx}}$ $\displaystyle\qquad-\frac{{2c-a+1}}{{{{(c+1)}^{2}}}}\frac{{k!\,\,{{(c+1)}_{k}}}}{{{{(a+1)}_{k}}\,{{(a+1-c)}_{k}}}}\,\int_{\,0}^{1}{{x^{c}}f(x)\,dx},$ and (9.26) ${B_{k}}=\frac{1}{{2k+a+1}}{\left({\frac{{(a-c)\,k!}}{{(c+1)\,{{(a+1)}_{k}}}}}\right)^{2}}.$ We clearly observe in the polynomial type approximation (9.19) that its basis do not satisfy an orthogonal condition but a complete uncorrelatedness condition. However, if we define the non-polynomial sequence (9.29) $\displaystyle{{\bf{G}}_{n}}(x;a,c)={}_{4}{F_{3}}\left({\left.{\begin{array}[]{*{20}{c}}{-n,\,\,n+a+1,\,a-c,\,\,c+2}\\\ {a+1,\,\,a-c+1,\,\,c+1}\end{array}\,}\right|\,x}\right)$ $\displaystyle\qquad\qquad\qquad-\frac{{2c-a+1}}{{{{(c+1)}^{2}}}}\frac{{n!\,\,{{(c+1)}_{n}}}}{{{{(a+1)}_{n}}\,{{(a+1-c)}_{n}}}}{x^{c-a}},$ satisfying the orthogonality condition (9.30) $\int_{\,0}^{1}{{x^{a}}\,{{\bf{G}}_{n}}(x;a,c)\,{{\bf{G}}_{m}}(x;a,c)\,dx}=\frac{1}{{2n+a+1}}{\left({\frac{{(a-c)\,n!}}{{(c+1)\,{{(a+1)}_{n}}}}}\right)^{2}}{\delta_{m,n}}\\\ \Leftrightarrow 2c-a+1>0,\,\,a\neq c\,\,\,{\rm{and}}\,\,\,a,c>-1,$ we then obtain a non-polynomial type approximation (or expansion) as follows (9.31) $f(x)=\sum\limits_{k=0}^{n\to\infty}{\,\frac{{A_{k}^{*}}}{{B_{k}^{*}}}}\,{{\bf{G}}_{k}}(x;a,c)\,,$ in which $A_{k}^{*}={A_{k}}\,\,\,\,{\rm{and}}\,\,\,B_{k}^{*}={B_{k}}$ according to the remark 4.2, i.e. as the same forms as (9.22) and (9.26). Once again, the orthogonal sequence (9.29) can be generated only if we have previously obtained the uncorrelated polynomial sequence (9.8). 
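As a sanity check of this construction, the uncorrelatedness condition (9.9) and the 1-variance predicted by Corollary 9.1 can be verified numerically from the ${}_{4}F_{3}$ representation (9.8). The following is a rough sketch, assuming mpmath is available; the parameter values and helper names are ours and serve only as an illustration.

```python
# Sketch: numerical check of (9.9) and of Corollary 9.1 for P_n(x; a,0,c,0) in (9.8).
# The values of a and c are arbitrary with 2c-a+1 > 0, a != c and a, c > -1.
from mpmath import mp, mpf, hyper, quad, rf, factorial

mp.dps = 30
a, c = mpf('1.3'), mpf('0.4')

def P(n, x):
    # the 4F3 representation (9.8)
    return hyper([-n, n + a + 1, a - c, c + 2], [a + 1, a - c + 1, c + 1], x)

def cov1(n, m):
    # left-hand side of (9.9)
    I_nm = quad(lambda x: x**a * P(n, x) * P(m, x), [0, 1])
    I_n = quad(lambda x: x**c * P(n, x), [0, 1])
    I_m = quad(lambda x: x**c * P(m, x), [0, 1])
    return I_nm - (2*c - a + 1) * I_n * I_m

print(cov1(3, 5))                      # ~ 0 for n != m
n = 4                                  # Corollary 9.1: the 1-variance of P_n
predicted = ((a - c) * factorial(n) / ((c + 1) * rf(a + 1, n)))**2 / (2*n + a + 1)
print(cov1(n, n), predicted)           # the two values should agree
```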
Also, the non- polynomial approximation (9.31) is optimized in the sense of ordinary least squares while the polynomial type approximation (9.19) is optimized in the sense of least 1-variances. #### 9.1.1. A special property for ${{\bf{P}}_{n}}(x;a,0,c,0)$ Let us study the polynomials (9.8) from a differential equations point of view. In general, it is known that the hypergeometric series $y(x)={}_{4}{F_{3}}\left({\left.{\begin{array}[]{*{20}{c}}{{a_{1}},\,\,{a_{2}},{a_{3}},\,\,{a_{4}}}\\\ {{b_{1}},\,{b_{2}},\,\,{b_{3}}}\end{array}\,}\right|\,x}\right),$ satisfies a fourth order equation as (9.32) ${x^{3}}(1-x)\,{y^{(4)}}(x)+{x^{2}}\Big{(}{{s_{1}}+3-({s_{2}}+6)x}\Big{)}{y^{(3)}}(x)\\\ +x\Big{(}{{s_{1}}+1+{s_{3}}-\left({3\,{s_{2}}+7+\,{s_{4}}}\right)x}\Big{)}y^{\prime\prime}(x)\\\ +\Big{(}{{b_{1}}{b_{2}}{b_{3}}-\,\left({{s_{2}}+1+{s_{4}}+{s_{5}}}\right)x}\Big{)}y^{\prime}(x)-{a_{1}}\,{a_{2}}\,{a_{3}}\,{a_{4}}\,y(x)=0,$ in which $\displaystyle{s_{1}}$ $\displaystyle={b_{1}}+{b_{2}}+{b_{3}},$ $\displaystyle{s_{2}}$ $\displaystyle={a_{1}}+{a_{2}}+{a_{3}}+{a_{4}},$ $\displaystyle{s_{3}}$ $\displaystyle={b_{1}}{b_{2}}+{b_{1}}{b_{3}}+{b_{2}}{b_{3}},$ $\displaystyle{s_{4}}$ $\displaystyle={a_{1}}{a_{2}}+{a_{1}}{a_{3}}+{a_{1}}{a_{4}}+{a_{2}}{a_{3}}+{a_{2}}{a_{4}}+{a_{3}}{a_{4}},$ $\displaystyle{s_{5}}$ $\displaystyle={a_{1}}{a_{2}}{a_{3}}+{a_{1}}{a_{2}}{a_{4}}+{a_{1}}{a_{3}}{a_{4}}+{a_{2}}{a_{3}}{a_{4}},$ with a general solution in the special form (9.33) $y(x)={c_{1}}\,{x^{-{a_{1}}}}{}_{4}{F_{3}}\left({\left.{\begin{array}[]{*{20}{c}}{{a_{1}},\,\,{a_{1}}-{b_{1}}+1,{a_{1}}-{b_{2}}+1,\,\,{a_{1}}-{b_{3}}+1}\\\ {{a_{1}}-{a_{2}}+1,{a_{1}}-{a_{3}}+1,\,\,{a_{1}}-{a_{4}}+1}\end{array}\,}\right|\,\frac{1}{x}}\right)\\\ +{c_{2}}\,{x^{-{a_{2}}}}{}_{4}{F_{3}}\left({\left.{\begin{array}[]{*{20}{c}}{{a_{2}},\,\,{a_{2}}-{b_{1}}+1,{a_{2}}-{b_{2}}+1,\,\,{a_{2}}-{b_{3}}+1}\\\ {{a_{2}}-{a_{1}}+1,{a_{2}}-{a_{3}}+1,\,\,{a_{2}}-{a_{4}}+1}\end{array}\,}\right|\,\frac{1}{x}}\right)\\\ +{c_{3}}\,{x^{-{a_{3}}}}{}_{4}{F_{3}}\left({\left.{\begin{array}[]{*{20}{c}}{{a_{3}},\,\,{a_{3}}-{b_{1}}+1,{a_{3}}-{b_{2}}+1,\,\,{a_{3}}-{b_{3}}+1}\\\ {{a_{3}}-{a_{1}}+1,{a_{3}}-{a_{2}}+1,\,\,{a_{3}}-{a_{4}}+1}\end{array}\,}\right|\,\frac{1}{x}}\right)\\\ +{c_{4}}\,{x^{-{a_{4}}}}{}_{4}{F_{3}}\left({\left.{\begin{array}[]{*{20}{c}}{{a_{4}},\,\,{a_{4}}-{b_{1}}+1,{a_{4}}-{b_{2}}+1,\,\,{a_{4}}-{b_{3}}+1}\\\ {{a_{4}}-{a_{1}}+1,{a_{4}}-{a_{2}}+1,\,\,{a_{4}}-{a_{3}}+1}\end{array}\,}\right|\,\frac{1}{x}}\right),$ provided that ${a_{1}}-{a_{2}},\,\,{a_{1}}-{a_{3}},\,\,{a_{1}}-{a_{4}},\,\,{a_{2}}-{a_{3}},\,\,{a_{2}}-{a_{4}}\,\,{\rm{and}}\,\,\,{a_{3}}-{a_{4}}\notin\mathbb{Z}.$ The above information can be found in mathematical sites, e.g. mathworld.wolfram.com. 
According to (9.32) and (9.33), the polynomials $y={{\bf{P}}_{n}}(x;a,0,c,0)$ satisfy the differential equation (9.34) ${x^{3}}(1-x)\,{y^{(4)}}(x)+{x^{2}}\Big{(}{2a+6-(2a+9)x}\Big{)}{y^{(3)}}(x)\\\ +x\Big{(}{{a^{2}}+(c+6)a+7-{c^{2}}+\big{(}{n(n+a+1)+{c^{2}}+(2-a)c-{a^{2}}-11a-18}\big{)}x}\Big{)}y^{\prime\prime}(x)\\\ +\Big{(}{(a+1)(c+1)(a-c+1)+\big{(}{n(n+a+1)(a+3)-(a+2)(c+3)(a-c+1)}\big{)}x}\Big{)}y^{\prime}(x)\\\ +n(n+a+1)\,(a-c)(c+2)\,y(x)=0,$ with the general solution (9.35) $y(x)={c_{1}}\,{}_{4}{F_{3}}\left({\left.{\begin{array}[]{*{20}{c}}{-n,\,\,n+a+1,\,a-c,\,\,c+2}\\\ {a+1,\,\,a-c+1,\,\,c+1}\end{array}\,}\right|\,x}\right)+{c_{3}}\,{x^{c-a}}\\\ \,\,\,\,\,\,\,\,\,\,\,+{c_{2}}\,{x^{-n-a-1}}{}_{4}{F_{3}}\left({\left.{\begin{array}[]{*{20}{c}}{n+a+1,\,\,n+1,\,n+c+1,\,\,n+a-c+1}\\\ {2n+a+2,\,\,n+c+2,\,\,n+a-c}\end{array}\,}\right|\,\frac{1}{x}}\right)\\\ \,\,\,\,\,\,\,\,\,\,\,+{c_{4}}\,{x^{-c-2}}{}_{4}{F_{3}}\left({\left.{\begin{array}[]{*{20}{c}}{c+2,\,\,c-a+2,\,2c-a+2,\,\,2}\\\ {n+c+3,\,\,-n+c-a+2,\,\,2c-a+3}\end{array}\,}\right|\,\frac{1}{x}}\right),$ where the fixed function $z(x)={x^{c-a}}$ and polynomials (9.8) both appeared in the basis solutions. Note in (9.35) and subsequently (9.33) that ${}_{4}{\bar{F}_{3}}\left({\left.{\begin{array}[]{*{20}{c}}{-n,\,\,n+a+1,\,a-c,\,\,c+2}\\\ {a+1,\,\,a-c+1,\,\,c+1}\end{array}\,}\right|\,x}\right)\\\ ={x^{n}}{}_{4}{\bar{F}_{3}}\left({\left.{\begin{array}[]{*{20}{c}}{-n,\,\,-n-a,\,-n-a+c,\,\,-n-c}\\\ {-2n-a,\,\,-n-a+c+1,\,\,-n-c-1}\end{array}\,}\right|\,\frac{1}{x}}\right),$ and ${}_{4}{F_{3}}\left({\left.{\begin{array}[]{*{20}{c}}{a-c,\,\,-c,\,0,\,\,a-2c}\\\ {n+a-c+1,\,\,-n-c,\,\,a-2c-1}\end{array}\,}\right|\,\frac{1}{x}}\right)=1.$ Now if in (9.35), ${c_{1}}=1,\,\,{c_{3}}=-\frac{{2c-a+1}}{{{{(c+1)}^{2}}}}\frac{{n!\,\,{{(c+1)}_{n}}}}{{{{(a+1)}_{n}}\,{{(a+1-c)}_{n}}}}\quad\text{and}\quad{c_{2}}={c_{4}}=0,$ the orthogonal sequence ${{\bf{G}}_{n}}(x;a,c)$ in (9.29) will appear that clearly satisfies the same as differential equation (9.34). 
The important point is that for $y={{\bf{G}}_{n}}(x;a,c)$ equation (9.34) can be rewritten as (9.36) $\displaystyle\quad{x^{2}}\frac{{{d^{2}}}}{{d{x^{2}}}}\Big{(}x(1-x)\,{\bf{G}}^{\prime\prime}_{n}(x;a,c)+(a+1-(a+2)x)\,{\bf{G}}^{\prime}_{n}(x;a,c)$ $\displaystyle\qquad+\left({n(n+a+1)+(2c-a+1){x^{-1}}}\right){{\bf{G}}_{n}}(x;a,c)\Big{)}$ $\displaystyle\qquad+(a+3)x\frac{d}{{dx}}\Big{(}x(1-x)\,{\bf{G}}^{\prime\prime}_{n}(x;a,c)+(a+1-(a+2)x)\,{\bf{G}}^{\prime}_{n}(x;a,c)$ $\displaystyle\qquad+\left({n(n+a+1)+(2c-a+1){x^{-1}}}\right){\bf{G}_{n}}(x;a,c)\Big{)}$ $\displaystyle\qquad+(a-c)(c+2)\Big{(}x(1-x)\,{\bf{G}}^{\prime\prime}_{n}(x;a,c)+(a+1-(a+2)x)\,{\bf{G}}^{\prime}_{n}(x;a,c)$ $\displaystyle\qquad+\left({n(n+a+1)+(2c-a+1){x^{-1}}}\right){{\bf{G}}_{n}}(x;a,c)\Big{)}$ $\displaystyle=(2c-a+1)(a-c-1)(c+1)\,{x^{-1}}\,{{\bf{G}}_{n}}(x;a,c)\,.$ If for simplicity we assume that (9.37) $x(1-x)\,{\bf{G}}^{\prime\prime}_{n}(x;a,c)+(a+1-(a+2)x)\,{\bf{G}}^{\prime}_{n}(x;a,c)\\\ +\left({n(n+a+1)+(2c-a+1){x^{-1}}}\right){{\bf{G}}_{n}}(x;a,c)={{\bf{R}}_{n}}(x;a,c),$ then (9.36) changes to (9.38) ${x^{2}}\,{\bf{R}}^{\prime\prime}_{n}(x;a,c)+(a+3)x\,{\bf{R}}^{\prime}_{n}(x;a,c)+(a-c)(c+2)\,{{\bf{R}}_{n}}(x;a,c)\\\ =(2c-a+1)(a-c-1)(c+1)\,{x^{-1}}\,{{\bf{G}}_{n}}(x;a,c)\,,$ which is a non-homogenous second order linear equation with the analytical solution (9.39) ${{\bf{R}}_{n}}(x;a,c)=\frac{{(2c-a+1)(a-c-1)(c+1)}}{{(2c-a+2)}}\\\ \times\left({{x^{c-a}}\int{{x^{a-c-2}}{{\bf{G}}_{n}}(x;a,c)\,dx}-{x^{-c-2}}\int{{x^{c}}{{\bf{G}}_{n}}(x;a,c)\,dx}}\right),$ where ${y_{1}}(x)={x^{c-a}}\,\,\,{\rm{and}}\,\,\,{y_{2}}(x)={x^{-c-2}}$ are two basis solutions of equation (9.38). The two integrals in (9.39) can be simplified as follows $\displaystyle\int{{x^{a-c-2}}{{\bf{G}}_{n}}(x;a,c)\,dx}$ $\displaystyle=\int{{x^{a-c-2}}{}_{4}{F_{3}}\left({\left.{\begin{array}[]{*{20}{c}}{-n,\,\,n+a+1,\,a-c,\,\,c+2}\\\ {a+1,\,\,a-c+1,\,\,c+1}\end{array}\,}\right|\,x}\right)\,dx}$ $\displaystyle\quad-\frac{{2c-a+1}}{{{{(c+1)}^{2}}}}\frac{{n!\,\,{{(c+1)}_{n}}}}{{{{(a+1)}_{n}}\,{{(a+1-c)}_{n}}}}\int{{x^{-2}}\,dx}$ $\displaystyle=\frac{{{x^{a-c-1}}}}{{a-c-1}}{}_{4}{F_{3}}\left({\left.{\begin{array}[]{*{20}{c}}{-n,\,\,n+a+1,\,a-c-1,\,\,c+2}\\\ {a+1,\,\,a-c+1,\,\,c+1}\end{array}\,}\right|\,x}\right)$ $\displaystyle\quad+\frac{{2c-a+1}}{{{{(c+1)}^{2}}}}\frac{{n!\,\,{{(c+1)}_{n}}}}{{{{(a+1)}_{n}}\,{{(a+1-c)}_{n}}}}{x^{-1}},$ and $\int{{x^{c}}{{\bf{G}}_{n}}(x;a,c)\,dx}=\int{{x^{c}}{}_{4}{F_{3}}\left({\left.{\begin{array}[]{*{20}{c}}{-n,\,\,n+a+1,\,a-c,\,\,c+2}\\\ {a+1,\,\,a-c+1,\,\,c+1}\end{array}\,}\right|\,x}\right)\,dx}\\\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,-\frac{{2c-a+1}}{{{{(c+1)}^{2}}}}\frac{{n!\,\,{{(c+1)}_{n}}}}{{{{(a+1)}_{n}}\,{{(a+1-c)}_{n}}}}\int{{x^{2c-a}}\,dx}\\\ =\frac{{{x^{c+1}}}}{{c+1}}{}_{3}{F_{2}}\left({\left.{\begin{array}[]{*{20}{c}}{-n,\,\,n+a+1,\,a-c}\\\ {a+1,\,\,a-c+1}\end{array}\,}\right|\,x}\right)-\frac{1}{{{{(c+1)}^{2}}}}\frac{{n!\,\,{{(c+1)}_{n}}}}{{{{(a+1)}_{n}}\,{{(a+1-c)}_{n}}}}{x^{2c-a+1}}.$ Therefore (9.40) ${{\bf{R}}_{n}}(x;a,c)=(2c-a+1)\,{x^{-1}}{}_{3}{F_{2}}\left({\left.{\begin{array}[]{*{20}{c}}{-n,\,\,n+a+1,\,a-c-1}\\\ {a+1,\,\,a-c+1}\end{array}\,}\right|\,x}\right)\\\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,+\frac{{(2c-a+1)(a-c-1)}}{{c+1}}\frac{{n!\,\,{{(c+1)}_{n}}}}{{{{(a+1)}_{n}}\,{{(a+1-c)}_{n}}}}{x^{c-a-1}}.$ Relations (9.27), (9.28) and (9.25) now show that in addition to (9.24), ${{\bf{G}}_{n}}(x;a,c)$ satisfies the equation (9.41) $\displaystyle 
x(1-x)\,{\bf{G}}^{\prime\prime}_{n}(x;a,c)+(a+1-(a+2)x)\,{\bf{G}}^{\prime}_{n}(x;a,c)+\left({n(n+a+1)+(2c-a+1){x^{-1}}}\right){{\bf{G}}_{n}}(x;a,c)$ $\displaystyle\qquad=\frac{{(2c-a+1)(a-c-1)(c+1)}}{{(2c-a+2)}}$ $\displaystyle\qquad\qquad\times\left({{x^{c-a}}\int{{x^{a-c-2}}{{\bf{G}}_{n}}(x;a,c)\,dx}-{x^{-c-2}}\int{{x^{c}}{{\bf{G}}_{n}}(x;a,c)\,dx}}\right)$ (9.44) $\displaystyle\qquad=(2c-a+1)\,{x^{-1}}{}_{3}{F_{2}}\left({\left.{\begin{array}[]{*{20}{c}}{-n,\,\,n+a+1,\,a-c-1}\\\ {a+1,\,\,a-c+1}\end{array}\,}\right|\,x}\right)$ $\displaystyle\qquad\qquad+\frac{{(2c-a+1)(a-c-1)}}{{c+1}}\frac{{n!\,\,{{(c+1)}_{n}}}}{{{{(a+1)}_{n}}\,{{(a+1-c)}_{n}}}}{x^{c-a-1}}.$ Let us write equation (9.41) in a self-adjoint form as (9.45) ${\left({{x^{a+1}}(1-x)\,{\bf{G}}^{\prime}_{n}(x;a,c)}\right)^{\prime}}+\left({n(n+a+1){x^{a}}+(2c-a+1){x^{a-1}}}\right){{\bf{G}}_{n}}(x;a,c)\\\ ={x^{a}}\,{{\bf{R}}_{n}}(x;a,c),$ and for the index $m$ as (9.46) ${\left({{x^{a+1}}(1-x)\,{\bf{G}}^{\prime}_{m}(x;a,c)}\right)^{\prime}}+\left({m(m+a+1){x^{a}}+(2c-a+1){x^{a-1}}}\right){{\bf{G}}_{m}}(x;a,c)\\\ ={x^{a}}\,{{\bf{R}}_{m}}(x;a,c).$ On multiplying by ${{\bf{G}}_{m}}(x;a,c)$ in (9.45) and ${{\bf{G}}_{n}}(x;a,c)$ in (9.46) and subtracting we get (9.47) $\Big{[}{x^{a+1}}(1-x)\left({\bf{G}}_{m}(x;a,c){\bf{G}}^{\prime}_{n}(x;a,c)-{{\bf{G}}_{n}}(x;a,c){\bf{G}}^{\prime}_{m}(x;a,c)\right)\Big{]}_{0}^{1}\\\ +\big{(}{n(n+a+1)-m(m+a+1)}\big{)}\int_{0}^{1}{{x^{a}}\,{{\bf{G}}_{n}}(x;a,c){{\bf{G}}_{m}}(x;a,c)\,dx}\\\ =\int_{0}^{1}{{x^{a}}\,\Big{(}{{{\bf{G}}_{m}}(x;a,c){{\bf{R}}_{n}}(x;a,c)-{{\bf{G}}_{n}}(x;a,c){{\bf{R}}_{m}}(x;a,c)}\Big{)}\,dx}\,.$ Since in (9.47), $\int_{0}^{1}{{x^{a}}\,{{\bf{G}}_{n}}(x;a,c){{\bf{G}}_{m}}(x;a,c)\,dx}=\\\ \Big{[}{{x^{a+1}}(1-x)\left({{{\bf{G}}_{m}}(x;a,c){\bf{G}}^{\prime}_{n}(x;a,c)-{{\bf{G}}_{n}}(x;a,c){\bf{G}}^{\prime}_{m}(x;a,c)\,}\right)}\Big{]}_{0}^{1}=0\Leftrightarrow n\neq m,$ if we take ${\bf{A}}(n,m)=\int_{0}^{1}{{x^{a}}\,{{\bf{G}}_{n}}(x;a,c)\,{{\bf{R}}_{m}}(x;a,c)\,dx}\,,$ relation (9.47) shows that for any $n\neq m$ we finally have ${\bf{A}}(m,n)={\bf{A}}(n,m).$ ## 10\. p-uncorrelated vectors with respect to a fixed vector The concept of p-uncorrelatedness can also be employed in vector spaces. Let ${\vec{A}_{m}}=({a_{1}},{a_{2}},...,{a_{m}})$ and ${\vec{B}_{m}}=({b_{1}},{b_{2}},...,{b_{m}})$ be two arbitrary vectors and ${\vec{I}_{m}}=(1,1,...,1)$ denote a unit vector. Also let ${\vec{Z}_{m}}=({z_{1}},{z_{2}},...,{z_{m}})$ be a fixed and predetermined vector. 
Recalling the definition of the inner product of two vectors as ${\vec{A}_{m}}.{\vec{B}_{m}}=\sum\limits_{k=1}^{m}{{a_{k}}{b_{k}}},$ it is not difficult to verify that (10.1) $\displaystyle\left({{{\vec{A}}_{m}}-(1-\sqrt{1-p})\frac{{{{\vec{A}}_{m}}.{{\vec{Z}}_{m}}}}{{{{\vec{Z}}_{m}}.{{\vec{Z}}_{m}}}}{{\vec{Z}}_{m}}}\right).\left({{{\vec{B}}_{m}}-(1-\sqrt{1-p})\frac{{{{\vec{B}}_{m}}.{{\vec{Z}}_{m}}}}{{{{\vec{Z}}_{m}}.{{\vec{Z}}_{m}}}}{{\vec{Z}}_{m}}}\right)$ $\displaystyle\qquad\qquad\qquad\qquad\qquad={\vec{A}_{m}}.{\vec{B}_{m}}-p\frac{{({{\vec{A}}_{m}}.{{\vec{Z}}_{m}})({{\vec{B}}_{m}}.{{\vec{Z}}_{m}})}}{{{{\vec{Z}}_{m}}.{{\vec{Z}}_{m}}}}.$ For instance, if ${\vec{Z}_{m}}={\vec{I}_{m}}$, then $\displaystyle\left({{{\vec{A}}_{m}}-(1-\sqrt{1-p})\frac{{{{\vec{A}}_{m}}.{{\vec{I}}_{m}}}}{m}{{\vec{I}}_{m}}}\right).\left({{{\vec{B}}_{m}}-(1-\sqrt{1-p})\frac{{{{\vec{B}}_{m}}.{{\vec{I}}_{m}}}}{m}{{\vec{I}}_{m}}}\right)$ $\displaystyle\qquad\qquad\qquad\qquad\qquad={\vec{A}_{m}}.{\vec{B}_{m}}-\frac{p}{m}({\vec{A}_{m}}.{\vec{I}_{m}})({\vec{B}_{m}}.{\vec{I}_{m}}),$ where ${\vec{I}_{m}}.{\vec{I}_{m}}=m$ and $p\in[0,1]$. Relation (10.1) shows that the two vectors ${\vec{A}_{m}}\,\,\,{\rm{and}}\,\,\,{\vec{B}_{m}}$ are p-uncorrelated with respect to the fixed vector ${\vec{Z}_{m}}$ if $p\,({\vec{A}_{m}}.{\vec{Z}_{m}})({\vec{B}_{m}}.{\vec{Z}_{m}})=({\vec{A}_{m}}.{\vec{B}_{m}})({\vec{Z}_{m}}.{\vec{Z}_{m}})\,.$ Also, the notions of p-covariance and p-variance can be defined for these two vectors (with respect to ${\vec{Z}_{m}}$) as follows (10.2) ${{\mathop{\rm cov}}_{p}}({\vec{A}_{m}},{\vec{B}_{m}};\,{\vec{Z}_{m}})=\frac{1}{m}\left({{{\vec{A}}_{m}}.{{\vec{B}}_{m}}-p\frac{{({{\vec{A}}_{m}}.{{\vec{Z}}_{m}})({{\vec{B}}_{m}}.{{\vec{Z}}_{m}})}}{{{{\vec{Z}}_{m}}.{{\vec{Z}}_{m}}}}}\right),$ and (10.3) ${{\mathop{\rm var}}_{p}}({\vec{A}_{m}};\,{\vec{Z}_{m}})=\frac{1}{m}\left({{{\vec{A}}_{m}}.{{\vec{A}}_{m}}-p\frac{{{{({{\vec{A}}_{m}}.{{\vec{Z}}_{m}})}^{2}}}}{{{{\vec{Z}}_{m}}.{{\vec{Z}}_{m}}}}}\right)\geq 0.$ Referring to the basic representation (3.38) and definitions (10.2) and (10.3), we can establish a set of p-uncorrelated vectors in terms of the parameter $p\in[0,1]$ if and only if $m=n+1$ and the finite set of initial vectors are linearly independent. 
Under such conditions we have (10.4) $\Delta_{n-1}^{(p)}\left({\\{{{\vec{V}}_{k,m}}\\}_{k=0}^{n-1};{{\vec{Z}}_{m}}}\right){\vec{X}_{n,m}}(p)\\\ =\begin{vmatrix}{{{{\mathop{\rm var}}}_{p}}\,({{\vec{V}}_{0,m}};{{\vec{Z}}_{m}})}&{{{{\mathop{\rm cov}}}_{p}}\,({{\vec{V}}_{0,m}},{{\vec{V}}_{1,m}};{{\vec{Z}}_{m}})}&\cdots&{{{{\mathop{\rm cov}}}_{p}}\,({{\vec{V}}_{0,m}},{{\vec{V}}_{n,m}};{{\vec{Z}}_{m}})}\\\ {{{{\mathop{\rm cov}}}_{p}}\,({{\vec{V}}_{1,m}},{{\vec{V}}_{0,m}};{{\vec{Z}}_{m}})}&{{{{\mathop{\rm var}}}_{p}}\,({{\vec{V}}_{1,m}};{{\vec{Z}}_{m}})}&\cdots&{{{{\mathop{\rm cov}}}_{p}}\,({{\vec{V}}_{1,m}},{{\vec{V}}_{n,m}};{{\vec{Z}}_{m}})}\\\ \vdots&\vdots&\vdots&\vdots\\\ {{{{\mathop{\rm cov}}}_{p}}\,({{\vec{V}}_{n-1,m}},{{\vec{V}}_{0,m}};{{\vec{Z}}_{m}})}&{{{{\mathop{\rm cov}}}_{p}}\,({{\vec{V}}_{n-1,m}},{{\vec{V}}_{1,m}};{{\vec{Z}}_{m}})}&\cdots&{{{{\mathop{\rm cov}}}_{p}}\,({{\vec{V}}_{n-1,m}},{{\vec{V}}_{n,m}};{{\vec{Z}}_{m}})}\\\ {{\vec{V}}_{0,m}}&{{\vec{V}}_{1,m}}&\cdots&{{\vec{V}}_{n,m}}\end{vmatrix},$ where (10.5) $\Delta_{n-1}^{(p)}\left({\\{{{\vec{V}}_{k,m}}\\}_{k=0}^{n-1};{{\vec{Z}}_{m}}}\right)\\\ =\begin{vmatrix}{{{{\mathop{\rm var}}}_{p}}\,({{\vec{V}}_{0,m}};{{\vec{Z}}_{m}})}&{{{{\mathop{\rm cov}}}_{p}}\,({{\vec{V}}_{0,m}},{{\vec{V}}_{1,m}};{{\vec{Z}}_{m}})}&\cdots&{{{{\mathop{\rm cov}}}_{p}}\,({{\vec{V}}_{0,m}},{{\vec{V}}_{n,m}};{{\vec{Z}}_{m}})}\\\ {{{{\mathop{\rm cov}}}_{p}}\,({{\vec{V}}_{1,m}},{{\vec{V}}_{0,m}};{{\vec{Z}}_{m}})}&{{{{\mathop{\rm var}}}_{p}}\,({{\vec{V}}_{1,m}};{{\vec{Z}}_{m}})}&\cdots&{{{{\mathop{\rm cov}}}_{p}}\,({{\vec{V}}_{1,m}},{{\vec{V}}_{n,m}};{{\vec{Z}}_{m}})}\\\ \vdots&\vdots&\vdots&\vdots\\\ {{{{\mathop{\rm cov}}}_{p}}\,({{\vec{V}}_{n-1,m}},{{\vec{V}}_{0,m}};{{\vec{Z}}_{m}})}&{{{{\mathop{\rm cov}}}_{p}}\,({{\vec{V}}_{n-1,m}},{{\vec{V}}_{1,m}};{{\vec{Z}}_{m}})}&\cdots&{{{{\mathop{\rm var}}}_{p}}\,({{\vec{V}}_{n-1,m}};{{\vec{Z}}_{m}})}\end{vmatrix},$ and $\Delta_{-1}^{(p)}(.)=1$. To better clarify the issue, here we consider a specific example. ###### Example 10.1. For $m=3$, given three initial orthogonal vectors ${\vec{V}_{0,3}}(1,0,0),\,\,{\vec{V}_{1,3}}(0,1,0),\,\,{\vec{V}_{2,3}}(0,0,1),$ together with the fixed vector ${\vec{Z}_{3}}(1,2,3)$. 
Substituting them into (10.4) and (10.5) eventually yields $\displaystyle{{\vec{X}}_{0,3}}(p)$ $\displaystyle=(1,0,0),$ (10.6) $\displaystyle{{\vec{X}}_{1,3}}(p)$ $\displaystyle=(\frac{{2p}}{{14-p}},1,0),$ $\displaystyle{{\vec{X}}_{2,3}}(p)$ $\displaystyle=(\frac{{3p}}{{14-5p}},\frac{{6p}}{{14-5p}},1),$ which satisfy the conditions ${{\mathop{\rm cov}}_{p}}\Big{(}{\vec{X}_{0,3}}(p),{\vec{X}_{1,3}}(p);{\vec{Z}_{3}}\Big{)}={{\mathop{\rm cov}}_{p}}\Big{(}{\vec{X}_{0,3}}(p),{\vec{X}_{2,3}}(p);{\vec{Z}_{3}}\Big{)}={{\mathop{\rm cov}}_{p}}\Big{(}{\vec{X}_{1,3}}(p),{\vec{X}_{2,3}}(p);{\vec{Z}_{3}}\Big{)}=0.$ According to theorem 4.1, every arbitrary vector of dimension 3 can be expanded in terms of the above vectors so that we have (10.7) $\vec{A}=(a,b,c)=\frac{{{{{\mathop{\rm cov}}}_{p}}\,({{\vec{X}}_{0,3}}(p),\vec{A};{{\vec{Z}}_{3}})}}{{{{{\mathop{\rm var}}}_{p}}\,({{\vec{X}}_{0,3}}(p);{{\vec{Z}}_{3}})}}(1,0,0)+\frac{{{{{\mathop{\rm cov}}}_{p}}\,({{\vec{X}}_{1,3}}(p),\vec{A};{{\vec{Z}}_{3}})}}{{{{{\mathop{\rm var}}}_{p}}\,({{\vec{X}}_{1,3}}(p);{{\vec{Z}}_{3}})}}(\frac{{2p}}{{14-p}},1,0)\\\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,+\frac{{{{{\mathop{\rm cov}}}_{p}}\,({{\vec{X}}_{2,3}}(p),\vec{A};{{\vec{Z}}_{3}})}}{{{{{\mathop{\rm var}}}_{p}}\,({{\vec{X}}_{2,3}}(p);{{\vec{Z}}_{3}})}}(\frac{{3p}}{{14-5p}},\frac{{6p}}{{14-5p}},1),$ in which $\displaystyle\frac{{{{{\mathop{\rm cov}}}_{p}}\,({{\vec{X}}_{0,3}}(p),\vec{A};{{\vec{Z}}_{3}})}}{{{{{\mathop{\rm var}}}_{p}}\,({{\vec{X}}_{0,3}}(p);{{\vec{Z}}_{3}})}}=a-\frac{{2p}}{{14-p}}b-\frac{{3p}}{{14-p}}c,$ (10.8) $\displaystyle\frac{{{{{\mathop{\rm cov}}}_{p}}\,({{\vec{X}}_{1,3}}(p),\vec{A};{{\vec{Z}}_{3}})}}{{{{{\mathop{\rm var}}}_{p}}\,({{\vec{X}}_{1,3}}(p);{{\vec{Z}}_{3}})}}=b-\frac{{6p}}{{14-5p}}c,$ $\displaystyle\frac{{{{{\mathop{\rm cov}}}_{p}}\,({{\vec{X}}_{2,3}}(p),\vec{A};{{\vec{Z}}_{3}})}}{{{{{\mathop{\rm var}}}_{p}}\,({{\vec{X}}_{2,3}}(p);{{\vec{Z}}_{3}})}}=c.$ For $p=0$, the finite expansion (10.7) would reduce to an ordinary orthogonal expansion, while there is an important point for the case $p=1$. We observe that replacing $p=1$ in the last item of (10.6) gives ${\vec{X}_{2,3}}(1)=\frac{1}{3}(1,2,3)=\frac{1}{3}{\vec{Z}_{3}}.$ Hence, in (10.7) (and subsequently (10.1)), ${{\mathop{\rm cov}}_{1}}\,({\vec{X}_{2,3}}(1),\vec{A};{\vec{Z}_{3}})={{\mathop{\rm cov}}_{1}}\,({\vec{X}_{2,3}}(1),\vec{A};3{\vec{X}_{2,3}}(1))=0,$ and ${{\mathop{\rm var}}_{1}}\,({\vec{X}_{2,3}}(1);{\vec{Z}_{3}})={{\mathop{\rm var}}_{1}}\,(\frac{1}{3}{\vec{Z}_{3}};{\vec{Z}_{3}})=0,$ which shows that the expansion (10.7) is not valid for the sole case $p=1$, although it is valid for any other $p\in[0,1)$. This is one of the reasons why we have considered this theory for every arbitrary parameter $p\in[0,1]$. For a better analysis, see figure 1 again. 
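The vectors (10.6) can also be reproduced by a Gram-Schmidt-type elimination with respect to the bilinear form underlying ${{\mathop{\rm cov}}_{p}}(\cdot,\cdot;{\vec{Z}_{3}})$. The following is a small symbolic sketch, assuming sympy; the helper name `ip` is ours.

```python
# Sketch: reproducing the p-uncorrelated vectors (10.6) of Example 10.1 by a
# Gram-Schmidt-type process with the bilinear form from (10.1)/(10.2).
import sympy as sp

p = sp.symbols('p', nonnegative=True)
Z = sp.Matrix([1, 2, 3])
V = [sp.Matrix(v) for v in ([1, 0, 0], [0, 1, 0], [0, 0, 1])]

def ip(A, B):
    # cov_p(A, B; Z) up to the constant factor 1/m, cf. (10.2)
    return A.dot(B) - p * A.dot(Z) * B.dot(Z) / Z.dot(Z)

X = []
for v in V:
    x = v
    for u in X:
        x = x - (ip(v, u) / ip(u, u)) * u
    X.append(x.applyfunc(sp.simplify))

for x in X:
    print(list(x))   # agrees with (10.6): (1,0,0), (2p/(14-p),1,0), (3p/(14-5p),6p/(14-5p),1)
print([sp.simplify(ip(X[r], X[s])) for r in range(3) for s in range(r)])   # all zero
```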
In general, we conjecture that (10.9) ${\vec{X}_{n,m}}(1)=\frac{{\det\,({{\vec{V}}_{0,m}},{{\vec{V}}_{1,m}},...,{{\vec{V}}_{n-1,m}},{{\vec{V}}_{n,m}})}}{{\det\,({{\vec{V}}_{0,m}},{{\vec{V}}_{1,m}},...,{{\vec{V}}_{n-1,m}},{{\vec{Z}}_{m}})}}\,\,{\vec{Z}_{m}}\,\,\,\,\,\,\,\,\,\,\,(m=n+1),$ as for the above-mentioned example $\det\,({\vec{V}_{0,3}},{\vec{V}_{1,3}},{\vec{V}_{2,3}})=\left|{\,\begin{array}[]{*{20}{c}}1&0&0\\\ 0&1&0\\\ 0&0&1\end{array}\,}\right|=1\,\,\,\,\,\,\,{\rm{and}}\,\,\,\,\,\,\,\det\,({\vec{V}_{0,3}},{\vec{V}_{1,3}},{\vec{Z}_{3}})=\left|{\,\begin{array}[]{*{20}{c}}1&0&1\\\ 0&1&2\\\ 0&0&3\end{array}\,}\right|=3.$ We symbolically examined such a conjecture (10.9) for the particular cases $n=2,3$ and the results were true. Now that the explicit forms of the p-uncorrelated vectors (10.6) are available, we can make three parametric orthogonal vectors based on them as follows (10.10) $\displaystyle{{\vec{V}}_{0,3}}(p)$ $\displaystyle={{\vec{X}}_{0,3}}(p)-(1-\sqrt{1-p})\frac{{{{\vec{X}}_{0,3}}(p).{{\vec{Z}}_{3}}}}{{{{\vec{Z}}_{3}}.{{\vec{Z}}_{3}}}}{{\vec{Z}}_{3}}$ $\displaystyle=\frac{1}{{14}}(13+\sqrt{1-p},-2+2\sqrt{1-p},-3+3\sqrt{1-p}),$ $\displaystyle{{\vec{V}}_{1,3}}(p)$ $\displaystyle=\frac{1}{{14-p}}(2p-2+2\sqrt{1-p},-p+10+4\sqrt{1-p},-6+6\sqrt{1-p}),$ $\displaystyle{{\vec{V}}_{2,3}}(p)$ $\displaystyle=\frac{1}{{14-5p}}(3p-3+3\sqrt{1-p},6p-6+6\sqrt{1-p},-5p+5+9\sqrt{1-p}).$ Note in the above vectors that ${\vec{V}_{0,3}}(0)={\vec{V}_{0,3}},\,\,\,{\vec{V}_{1,3}}(0)={\vec{V}_{1,3}}\,\,\,\,{\rm{and}}\,\,\,\,\,{\vec{V}_{2,3}}(0)={\vec{V}_{2,3}},$ while for $p=1$ we have ${\vec{V}_{0,3}}(1)=\frac{1}{{14}}(13,-2,-3),\,\,\,\,\,\,{\vec{V}_{1,3}}(1)=\frac{3}{{13}}(0,3,-2)\,\,\,\,\,\,\text{and}\,\,\,\,\,{\vec{V}_{2,3}}(1)=(0,0,0),$ which confirms that it is not valid for choosing in the orthogonal vectors (10.10). ## 11\. An upper bound for 1-covariances As the optimized case is $p=1$, in this part we are going to obtain an upper bound for ${{\mathop{\rm cov}}_{1}}\,(X,Y;Z)$. 
Let ${m_{X}},{m_{Y}},{M_{X}}$ and $M_{Y}$ be real numbers such that (11.1) ${m_{X}}\,Z\leq X\leq{M_{X}}\,Z\quad\text{and}\quad{m_{Y}}\,Z\leq Y\leq{M_{Y}}\,Z.$ It can be verified that the following identity holds true (11.2) ${{\mathop{\rm var}}_{1}}(X;Z)=\\\ \frac{1}{{E({Z^{2}})}}\Big{(}{{M_{X}}E({Z^{2}})-E(XZ)}\Big{)}\Big{(}{E(XZ)-{m_{X}}E({Z^{2}})}\Big{)}-E\Big{(}{\left({{M_{X}}Z-X}\right)\left({X-{m_{X}}Z}\right)}\Big{)}.$ Noting the conditions (11.1) and this fact that $E\Big{(}{\left({{M_{X}}\,Z-X}\right)\left({X-{m_{X}}\,Z}\right)}\Big{)}\,\geq 0\,,$ equality (11.2) leads to the inequality (11.3) ${{\mathop{\rm var}}_{1}}(X;Z)\leq\frac{1}{{E({Z^{2}})}}\Big{(}{{M_{X}}E({Z^{2}})-E(XZ)}\Big{)}\Big{(}{E(XZ)-{m_{X}}E({Z^{2}})}\Big{)}\\\ \leq\frac{1}{{4E({Z^{2}})}}{\Big{(}{{M_{X}}E({Z^{2}})-{m_{X}}E({Z^{2}})}\Big{)}^{2}}=\frac{{E({Z^{2}})}}{4}{\left({{M_{X}}-{m_{X}}}\right)^{2}}.$ On the other side, following (11.3) in the well-known inequality ${\mathop{\rm cov}}_{1}^{2}(X,Y;Z)\leq{{\mathop{\rm var}}_{1}}\,(X;Z)\,\,{\rm var}_{1}(Y;Z),$ gives (11.4) $\displaystyle{\mathop{\rm cov}}_{1}^{2}(X,Y;Z)$ $\displaystyle\leq\frac{1}{{{E^{2}}({Z^{2}})}}\Big{(}{{M_{X}}E({Z^{2}})-E(XZ)}\Big{)}\Big{(}{E(XZ)-{m_{X}}E({Z^{2}})}\Big{)}$ $\displaystyle\times\,\,\Big{(}{{M_{Y}}E({Z^{2}})-E(YZ)}\Big{)}\Big{(}{E(YZ)-{m_{Y}}E({Z^{2}})}\Big{)}$ $\displaystyle\leq\frac{{{E^{2}}({Z^{2}})}}{{16}}{\left({{M_{X}}-{m_{X}}}\right)^{2}}{\left({{M_{Y}}-{m_{Y}}}\right)^{2}}.$ One of the direct consequences of (11.4) is that (11.5) $\left|{\,{{{\mathop{\rm cov}}}_{1}}(X,Y;Z)\,}\right|\leq\frac{{E({Z^{2}})}}{4}\left({{M_{X}}-{m_{X}}}\right)\left({{M_{Y}}-{m_{Y}}}\right),$ where the constant $1/4$ in (11.5) is the best possible number in the sense that it cannot be replaced by a smaller quantity. As a particular case, if in (11.5) we take ${P_{r}}(X=x)=\frac{{w(x)}}{{\int_{\alpha}^{\beta}{w(x)\,dx}}},\quad X=f(x),\,\,Y=g(x)\,\,\,{\rm{and}}\,\,\,\,Z=z(x)=1,$ it will reduce to the weighted Grüss inequality [19, 20] $\left|{\frac{{\int_{\alpha}^{\beta}{w(x)f(x)g(x)\,dx}}}{{\int_{\alpha}^{\beta}{w(x)\,dx}}}-\frac{{\left({\int_{\alpha}^{\beta}{w(x)f(x)\,dx}}\right)\,\left({\int_{\alpha}^{\beta}{w(x)g(x)\,dx}}\right)}}{{{{(\int_{\alpha}^{\beta}{w(x)\,dx})}^{2}}}}\,}\right|\leq\frac{1}{4}({M_{f}}-{m_{f}})({M_{g}}-{m_{g}}),$ in which ${m_{f}}\leq f(x)\leq{M_{f}}\quad\text{and}\quad{m_{g}}\leq g(x)\leq{M_{g}}\quad\text{for all}\quad x\in[\alpha,\beta].$ ## 12\. An approximation for p-variances using quadrature rules We begin this section with a general $n$-point (weighted) quadrature rule as (12.1) $\int_{\,a}^{\,b}{w(x)\,f(x)\,dx}=\sum\limits_{k=1}^{n}{\,{w_{k}}\,f({x_{k}})}+{R_{n}}[f],$ in which $w(x)$ is positive on $[a,b]$, $\\{{x_{k}}\\}_{k=1}^{n}$ and $\\{{w_{k}}\\}_{k=1}^{n}$ are respectively nodes and weight coefficients and ${R_{n}}[f]$ is the corresponding error, see e.g. [14, 16]. If ${\mathbf{\Pi}}_{d}$ denotes the set of all algebraic polynomials of degree at most $d$, the rule (12.1) has degree of exactness $d$ if for every $p\in{\mathbf{\Pi}}_{d}$ we have ${R_{n}}[p]=0$. Moreover, if ${R_{n}}[p]\neq 0$ for some $p\in{\mathbf{\Pi}}_{d+1}$, formula (12.1) has precise degree of exactness $d$. It is well known that for given $n$ mutually different nodes $\\{{x_{k}}\\}_{k=1}^{n}$ we can always achieve a degree of exactness $d=n-1$ by interpolating at these nodes and integrating the interpolated polynomial instead of $f$. 
Namely, taking the node polynomial ${N_{n}}(x)=\prod\limits_{k=1}^{n}{(x-{x_{k}})},$ and integrating from the Lagrange interpolation formula (12.2) $f(x)=\sum\limits_{k=1}^{n}{f({x_{k}})\,L(x\,;\,{x_{k}})}+\frac{1}{{n!}}{f^{(n)}}({\xi_{x}})\,{N_{n}}(x)\,,$ where $L(x\,;\,{x_{k}})=\frac{{{N_{n}}(x)}}{{{{N^{\prime}}_{n}}({x_{k}})(x-{x_{k}})}}\,,$ we obtain (12.1), with ${w_{k}}=\frac{1}{{{{N^{\prime}}_{n}}({x_{k}})}}\,\,\int_{\,a}^{\,b}{\frac{{{N_{n}}(x)\,w(x)}}{{x-{x_{k}}}}\,dx}\,,$ and (12.3) ${R_{n}}[f]=\,\frac{1}{{n!}}\int_{\,a}^{\,b}{{f^{(n)}}({\xi_{x}})\,{N_{n}}(x)\,w(x)\,dx}\,.$ It is clear in (12.3) that if $f\in{\mathbf{\Pi}}_{n-1}$ then ${R_{n}}[f]=0$. To approximate the p-variance values (12.4) ${{\mathop{\rm var}}_{p}}(f(x);z(x))=\frac{{\int_{\,a}^{b}{w(x)\,{f^{2}}(x)\,dx}}}{{\int_{\,a}^{b}{w(x)\,dx}}}-p\frac{{{{(\int_{\,a}^{b}{w(x)\,f(x)\,z(x)\,dx})}^{2}}\,}}{{\,\int_{\,a}^{b}{w(x)\,dx}\,\int_{\,a}^{b}{w(x)\,{z^{2}}(x)\,dx}}},$ we can similarly follow the above-mentioned approach. For this purpose, it is just sufficient to apply ${{\mathop{\rm cov}}_{p}}\left({f(x),\,(.);z(x)}\right)$ on both sides of (12.2) to get $\displaystyle{{\mathop{\rm var}}_{p}}\left({f(x);z(x)}\right)={{\mathop{\rm cov}}_{p}}\left({f(x),\sum\limits_{k=1}^{n}{f({x_{k}})\,L(x\,;\,{x_{k}})};z(x)}\right)$ $\displaystyle\qquad\qquad\qquad\qquad+{{\mathop{\rm cov}}_{p}}\left({f(x),\frac{1}{{n!}}{f^{(n)}}({\xi_{x}})\,{N_{n}}(x);z(x)}\right)$ $\displaystyle=\sum\limits_{k=1}^{n}{f({x_{k}}){{{\mathop{\rm cov}}}_{p}}\left({f(x),L(x\,;\,{x_{k}});z(x)}\right)\,}+\frac{1}{{n!}}{{\mathop{\rm cov}}_{p}}\left({f(x),{f^{(n)}}({\xi_{x}})\,{N_{n}}(x);z(x)}\right)\,,$ which gives the right side of quadrature formula (12.1) with (12.5) ${w_{k}}={{\mathop{\rm cov}}_{p}}\left({f(x),L(x\,;\,{x_{k}});z(x)}\right)\,,$ and (12.6) ${R_{n}}[f]=\frac{1}{{n!}}\,{{\mathop{\rm cov}}_{p}}\left({f(x),{f^{(n)}}({\xi_{x}})\,{N_{n}}(x);z(x)}\right)\,.$ We observe in (12.6) that for any $f\in{\mathbf{\Pi}}_{n-1}$ we automatically have ${R_{n}}[f]=0$. Of course, another method to obtain the coefficients (12.5) is to use the undetermined coefficient method via solving the linear system (12.7) $\left[{\begin{array}[]{*{20}{c}}{\begin{array}[]{*{20}{c}}1\\\ {{x_{1}}}\\\ \vdots\\\ {x_{1}^{n-1}}\end{array}}&{\begin{array}[]{*{20}{c}}1\\\ {{x_{2}}}\\\ \vdots\\\ {x_{2}^{n-1}}\end{array}}&{\begin{array}[]{*{20}{c}}\cdots\\\ \cdots\\\ \vdots\\\ \cdots\end{array}}&{\begin{array}[]{*{20}{c}}1\\\ {{x_{n}}}\\\ \vdots\\\ {x_{n}^{n-1}}\end{array}}\end{array}}\right]\left[{\begin{array}[]{*{20}{c}}{{w_{1}}}\\\ {{w_{2}}}\\\ \vdots\\\ {{w_{n}}}\end{array}}\right]=\left[{\begin{array}[]{*{20}{c}}{{{{\mathop{\rm var}}}_{p}}\,(1;z(x))}\\\ {{{{\mathop{\rm var}}}_{p}}\,(x;z(x))}\\\ \vdots\\\ {{{{\mathop{\rm var}}}_{p}}\,({x^{n-1}};z(x))}\end{array}}\right].$ For example, the two-point approximate formula for evaluating (12.4) via (12.7) is $\displaystyle{{\mathop{\rm var}}_{p}}(f(x);z(x))$ $\displaystyle\cong\frac{{{x_{2}}{{{\mathop{\rm var}}}_{p}}(1;z(x))-{{{\mathop{\rm var}}}_{p}}(x;z(x))}}{{{x_{2}}-{x_{1}}}}\,f({x_{1}})$ $\displaystyle\,\,\,-\frac{{{x_{1}}{{{\mathop{\rm var}}}_{p}}(1;z(x))-{{{\mathop{\rm var}}}_{p}}(x;z(x))}}{{{x_{2}}-{x_{1}}}}\,f({x_{2}})\,.$ ## 13\. 
On improving the approximate solutions of over-determined systems For $n>m$, consider the linear system of equations (13.1) $\sum\limits_{j=1}^{m}{{a_{i,j}}\,{x_{j}}}={b_{i}}\,\,\,\,\,\,(i=1,2,...,n),$ whose matrix representation is ${A_{n\times m}}{X_{m\times 1}}={B_{n\times 1}}$ where $A=\left[{\begin{array}[]{*{20}{c}}{\begin{array}[]{*{20}{c}}{{a_{11}}}\\\ {{a_{21}}}\\\ \vdots\\\ {{a_{n1}}}\end{array}}&{\begin{array}[]{*{20}{c}}{{a_{12}}}\\\ {{a_{22}}}\\\ \vdots\\\ {{a_{n2}}}\end{array}}&{\begin{array}[]{*{20}{c}}\cdots\\\ \cdots\\\ \vdots\\\ \cdots\end{array}}&{\begin{array}[]{*{20}{c}}{{a_{1m}}}\\\ {{a_{2m}}}\\\ \vdots\\\ {{a_{nm}}}\end{array}}\end{array}}\right],\,\,\,\,X=\left[{\begin{array}[]{*{20}{c}}{{x_{1}}}\\\ {{x_{2}}}\\\ \vdots\\\ {{x_{m}}}\end{array}}\right]\,\,\,\,\,{\rm{and}}\,\,\,\,\,B=\,\left[{\begin{array}[]{*{20}{c}}{{b_{1}}}\\\ {{b_{2}}}\\\ \vdots\\\ {{b_{n}}}\end{array}}\right].$ As mentioned in the introduction, the linear system (13.1) is called an over-determined system since the number of equations exceeds the number of unknowns. Such systems usually have no exact solution, and the goal is instead to find an approximate solution for the unknowns $\\{{x_{j}}\\}_{j=1}^{m}$ that fits the equations in the sense of solving the problem (13.2) $\mathop{\min}\limits_{\\{{x_{j}}\\}}{E_{m,n}}({x_{1}},...,{x_{m}})=\mathop{\min}\limits_{\\{{x_{j}}\\}}\,\sum\limits_{i=1}^{n}\Big{(}\sum\limits_{j=1}^{m}{{a_{i,j}}\,{x_{j}}}-{b_{i}}\Big{)}^{2}.$ It has been proved [2] that the minimization problem (13.2) has a unique vector solution provided that the $m$ columns of the matrix $A$ are linearly independent, given by solving the normal equations ${A^{T}}A\,\tilde{X}\,={A^{T}}B,$ where $A^{T}$ indicates the matrix transpose of $A$ and $\tilde{X}$ is the approximate solution of the least squares type expressed by $\tilde{X}={({A^{T}}A)^{-1}}{A^{T}}B.$ Instead of considering the problem (13.2), we would now like to consider the minimization problem (13.3) $\mathop{\min}\limits_{\\{{x_{j}}\\}}{V_{m,n}}(\left.{{x_{1}},...,{x_{m}}}\right|\\{{z_{i}}\\}_{i=1}^{n})=\mathop{\min}\limits_{\\{{x_{j}}\\}}\,\,\sum\limits_{i=1}^{n}\Big{(}\sum\limits_{j=1}^{m}{{a_{i,j}}\,{x_{j}}}-{b_{i}}\Big{)}^{2}-\frac{p}{{\sum\limits_{i=1}^{n}{z_{i}^{2}}}}{\left({\sum\limits_{i=1}^{n}{{z_{i}}\,(\sum\limits_{j=1}^{m}{{a_{i,j}}\,{x_{j}}}-{b_{i}})}}\right)^{2}},$ based on the fixed vector $Z_{1\times n}^{T}=[{z_{1}},{z_{2}},...,{z_{n}}]$, where the quantity (13.3) clearly does not exceed the quantity (13.2) for any arbitrary selection of $\\{{z_{i}}\\}_{i=1}^{n}$.
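Before deriving the closed-form solution of this problem below, the claim can be illustrated numerically by minimizing (13.3) directly and comparing it with the least-squares objective (13.2). The following rough sketch assumes numpy and scipy; the data $A$, $B$ and the fixed vector $Z$ are arbitrary illustrative choices of ours.

```python
# Sketch: comparing the objectives (13.2) and (13.3) on a random over-determined system.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, m, p = 8, 3, 1.0
A = rng.normal(size=(n, m))
B = rng.normal(size=n)
Z = np.ones(n)                  # the fixed vector

def E(x):                       # objective (13.2)
    r = A @ x - B
    return r @ r

def V(x):                       # objective (13.3)
    r = A @ x - B
    return r @ r - p * (Z @ r) ** 2 / (Z @ Z)

x_ls = np.linalg.lstsq(A, B, rcond=None)[0]     # minimizer of E
x_v = minimize(V, x_ls).x                       # minimizer of V
print(V(x_v), V(x_ls), E(x_ls))                 # V(x_v) <= V(x_ls) <= E(x_ls)
```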
In this direction, $\frac{{\partial{V_{m,n}}(\left.{{x_{1}},...,{x_{m}}}\right|\\{{z_{i}}\\}_{i=1}^{n})}}{{\partial{x_{k}}}}\\\ =2\sum\limits_{i=1}^{n}{{a_{i,k}}(\sum\limits_{j=1}^{m}{{a_{i,j}}\,{x_{j}}}-{b_{i}})}-\frac{{2p}}{{\sum\limits_{i=1}^{n}{z_{i}^{2}}}}\left({\sum\limits_{i=1}^{n}{{a_{i,k}}{z_{i}}}}\right)\left({\sum\limits_{i=1}^{n}{{z_{i}}\,(\sum\limits_{j=1}^{m}{{a_{i,j}}\,{x_{j}}}-{b_{i}})}}\right)=0,$ leads to the linear system (13.4) $\sum\limits_{j=1}^{m}{\left({\sum\limits_{i=1}^{n}{{a_{i,k}}{a_{i,j}}}-p\frac{{\sum\limits_{i=1}^{n}{{a_{i,k}}{z_{i}}}\sum\limits_{i=1}^{n}{{a_{i,j}}\,{z_{i}}}}}{{\sum\limits_{i=1}^{n}{z_{i}^{2}}}}}\right)\,{x_{j}}}\\\ =\sum\limits_{i=1}^{n}{{a_{i,k}}{b_{i}}}-p\frac{{\sum\limits_{i=1}^{n}{{a_{i,k}}{z_{i}}}\sum\limits_{i=1}^{n}{{b_{i}}\,{z_{i}}}}}{{\sum\limits_{i=1}^{n}{z_{i}^{2}}}}\qquad(k=1,2,...,m),$ which can also be represented as the matrix form $\left({{A^{T}}A-\frac{p}{{{Z^{T}}Z}}{A^{T}}Z\,{Z^{T}}A}\right)\,{\tilde{X}_{p,Z}}={A^{T}}B-\frac{p}{{{Z^{T}}Z}}{A^{T}}Z\,{Z^{T}}B,$ with the solution (13.5) ${\tilde{X}_{p,Z}}={\left({{A^{T}}A-\frac{p}{{{Z^{T}}Z}}{A^{T}}Z\,{Z^{T}}A}\right)^{-1}}\left({{A^{T}}B-\frac{p}{{{Z^{T}}Z}}{A^{T}}Z\,{Z^{T}}B}\right).$ A simple case of the approximate solution (13.5) is when $Z_{1\times n}^{T}={I_{n}}=[1,1,...,1]$ and $p=1$, i.e. an ordinary least variance problem. In this case, (13.5) becomes ${\tilde{X}_{1,{I_{n}}}}={\left({{A^{T}}A-\frac{1}{n}{A^{T}}I_{n}^{T}{I_{n}}\,A}\right)^{-1}}\left({{A^{T}}B-\frac{1}{n}{A^{T}}I_{n}^{T}{I_{n}}\,B}\right).$ Let us consider a numeric example for the ordinary variances case. ###### Example 13.1. Suppose $m=2$, $Z_{1\times n}^{T}={I_{n}}=[1,1,...,1]$ and $p=1$. Then, the corresponding over-determined system takes the simple form ${a_{i,1}}\,{x_{1}}+{a_{i,2}}\,{x_{2}}={b_{i}}\,\,\,\,\,\,(i=1,2,...,n>2),$ and the problem (13.3) reduces to (13.6) $\mathop{\min}\limits_{\\{{x_{1}},{x_{2}}\\}}{V_{2,n}}(\left.{{x_{1}},{x_{2}}}\right|{I_{n}})=\mathop{\min}\limits_{\\{{x_{1}},{x_{2}}\\}}\,\,\sum\limits_{i=1}^{n}{{{({a_{i,1}}\,{x_{1}}+{a_{i,2}}\,{x_{2}}-{b_{i}})}^{2}}}-\frac{1}{n}{\left({\sum\limits_{i=1}^{n}{({a_{i,1}}\,{x_{1}}+{a_{i,2}}\,{x_{2}}-{b_{i}})}}\right)^{2}}.$ Hence, the explicit solutions of the system (13.4), i.e. 
$\left\\{\begin{array}[]{l}\left({\sum\limits_{i=1}^{n}{{{({a_{i,1}})}^{2}}}-\dfrac{1}{n}{\Big{(}\sum\limits_{i=1}^{n}{{a_{i,1}}}\Big{)}^{2}}}\right){x_{1}}+\left({\sum\limits_{i=1}^{n}{{a_{i,1}}{a_{i,2}}}-\dfrac{1}{n}\sum\limits_{i=1}^{n}{{a_{i,1}}}\sum\limits_{i=1}^{n}{{a_{i,2}}}}\right){x_{2}}\\\\[8.53581pt] =\sum\limits_{i=1}^{n}{{a_{i,1}}{b_{i}}}-\dfrac{1}{n}\sum\limits_{i=1}^{n}{{a_{i,1}}}\sum\limits_{i=1}^{n}{{b_{i}}},\\\\[8.53581pt] \left({\sum\limits_{i=1}^{n}{{a_{i,2}}{a_{i,1}}}-\dfrac{1}{n}\sum\limits_{i=1}^{n}{{a_{i,2}}}\sum\limits_{i=1}^{n}{{a_{i,1}}}}\right){x_{1}}+\left({\sum\limits_{i=1}^{n}{{{({a_{i,2}})}^{2}}}-\dfrac{1}{n}{\Big{(}\sum\limits_{i=1}^{n}{{a_{i,2}}}\Big{)}^{2}}}\right){x_{2}}\\\\[8.53581pt] =\sum\limits_{i=1}^{n}{{a_{i,2}}{b_{i}}}-\dfrac{1}{n}\sum\limits_{i=1}^{n}{{a_{i,2}}}\sum\limits_{i=1}^{n}{{b_{i}}},\end{array}\right.$ are respectively ${x_{1}}=\frac{\begin{array}[]{l}\left({\sum\limits_{i=1}^{n}{{a_{i,1}}{b_{i}}}-\frac{1}{n}\sum\limits_{i=1}^{n}{{a_{i,1}}}\sum\limits_{i=1}^{n}{{b_{i}}}}\right)\left({\sum\limits_{i=1}^{n}{{{({a_{i,2}})}^{2}}}-\frac{1}{n}{\Big{(}\sum\limits_{i=1}^{n}{{a_{i,2}}}\Big{)}^{2}}}\right)\\\ -\left({\sum\limits_{i=1}^{n}{{a_{i,2}}{b_{i}}}-\frac{1}{n}\sum\limits_{i=1}^{n}{{a_{i,2}}}\sum\limits_{i=1}^{n}{{b_{i}}}}\right)\left({\sum\limits_{i=1}^{n}{{a_{i,1}}{a_{i,2}}}-\frac{1}{n}\sum\limits_{i=1}^{n}{{a_{i,1}}}\sum\limits_{i=1}^{n}{{a_{i,2}}}}\right)\end{array}}{\begin{array}[]{l}\left({\sum\limits_{i=1}^{n}{{{({a_{i,1}})}^{2}}}-\frac{1}{n}{\Big{(}\sum\limits_{i=1}^{n}{{a_{i,1}}}\Big{)}^{2}}}\right)\left({\sum\limits_{i=1}^{n}{{{({a_{i,2}})}^{2}}}-\frac{1}{n}{\Big{(}\sum\limits_{i=1}^{n}{{a_{i,2}}}\Big{)}}^{2}}\right)\\\ -{{\left({\sum\limits_{i=1}^{n}{{a_{i,1}}{a_{i,2}}}-\frac{1}{n}\sum\limits_{i=1}^{n}{{a_{i,1}}}\sum\limits_{i=1}^{n}{{a_{i,2}}}}\right)}^{2}}\end{array}},$ and ${x_{2}}=\frac{\begin{array}[]{l}\left({\sum\limits_{i=1}^{n}{{a_{i,2}}{b_{i}}}-\frac{1}{n}\sum\limits_{i=1}^{n}{{a_{i,2}}}\sum\limits_{i=1}^{n}{{b_{i}}}}\right)\left({\sum\limits_{i=1}^{n}{{{({a_{i,1}})}^{2}}}-\frac{1}{n}{\Big{(}\sum\limits_{i=1}^{n}{{a_{i,1}}}\Big{)}^{2}}}\right)\\\ -\left({\sum\limits_{i=1}^{n}{{a_{i,1}}{b_{i}}}-\frac{1}{n}\sum\limits_{i=1}^{n}{{a_{i,1}}}\sum\limits_{i=1}^{n}{{b_{i}}}}\right)\left({\sum\limits_{i=1}^{n}{{a_{i,1}}{a_{i,2}}}-\frac{1}{n}\sum\limits_{i=1}^{n}{{a_{i,1}}}\sum\limits_{i=1}^{n}{{a_{i,2}}}}\right)\end{array}}{\begin{array}[]{l}\left({\sum\limits_{i=1}^{n}{{{({a_{i,1}})}^{2}}}-\frac{1}{n}{\Big{(}\sum\limits_{i=1}^{n}{{a_{i,1}}}\Big{)}^{2}}}\right)\left({\sum\limits_{i=1}^{n}{{{({a_{i,2}})}^{2}}}-\frac{1}{n}{\Big{(}\sum\limits_{i=1}^{n}{{a_{i,2}}}\Big{)}^{2}}}\right)\\\ -{{\left({\sum\limits_{i=1}^{n}{{a_{i,1}}{a_{i,2}}}-\frac{1}{n}\sum\limits_{i=1}^{n}{{a_{i,1}}}\sum\limits_{i=1}^{n}{{a_{i,2}}}}\right)}^{2}}\end{array}},$ while the approximate solutions corresponding to the well-known problem (13.2) are ${\tilde{x}_{1}}=\frac{{(\sum\limits_{i=1}^{n}{{a_{i,1}}{b_{i}}})\sum\limits_{i=1}^{n}{{{({a_{i,2}})}^{2}}}-(\sum\limits_{i=1}^{n}{{a_{i,2}}{b_{i}}})\sum\limits_{i=1}^{n}{{a_{i,1}}{a_{i,2}}}}}{{\sum\limits_{i=1}^{n}{{{({a_{i,1}})}^{2}}}\sum\limits_{i=1}^{n}{{{({a_{i,2}})}^{2}}}-{\Big{(}\sum\limits_{i=1}^{n}{{a_{i,1}}{a_{i,2}}}\Big{)}^{2}}}},$ and 
${\tilde{x}_{2}}=\frac{{(\sum\limits_{i=1}^{n}{{a_{i,2}}{b_{i}}})\sum\limits_{i=1}^{n}{{{({a_{i,1}})}^{2}}}-(\sum\limits_{i=1}^{n}{{a_{i,1}}{b_{i}}})\sum\limits_{i=1}^{n}{{a_{i,1}}{a_{i,2}}}}}{{\sum\limits_{i=1}^{n}{{{({a_{i,1}})}^{2}}}\sum\limits_{i=1}^{n}{{{({a_{i,2}})}^{2}}}-{\Big{(}\sum\limits_{i=1}^{n}{{a_{i,1}}{a_{i,2}}}\Big{)}^{2}}}}.$ Let us compare these solutions for a particular numerical case. If for example ${A_{4\times 2}}=\left[{\begin{array}[]{*{20}{c}}{\begin{array}[]{*{20}{c}}{-1}\\\ 2\\\ 1\\\ {-1}\end{array}}&{\begin{array}[]{*{20}{c}}1\\\ {-1}\\\ {-2}\\\ 2\end{array}}\end{array}}\right],\,\,\,\,{X_{2\times 1}}=\left[{\begin{array}[]{*{20}{c}}{{x_{1}}}\\\ {{x_{2}}}\end{array}}\right]\,\,\,\,\,{\rm{and}}\,\,\,\,\,{B_{4\times 1}}=\,\left[{\begin{array}[]{*{20}{c}}1\\\ 2\\\ 3\\\ 4\end{array}}\right],$ then the solutions of the ordinary least squares problem are $({\tilde{x}_{1}},{\tilde{x}_{2}})=(\frac{9}{7},\,1),$ while the solutions corresponding to the minimization problem (13.6) are $\left({{x_{1}}(p=1;{I_{4}}),{x_{2}}(p=1;{I_{4}})}\right)=(\frac{8}{{74}},\,\frac{{13}}{{74}}).$ By substituting such values into the remaining term ${V_{2,4}}(\left.{{x_{1}},{x_{2}}}\right|{I_{4}})=\,\sum\limits_{i=1}^{4}{{{({a_{i,1}}\,{x_{1}}+{a_{i,2}}\,{x_{2}}-{b_{i}})}^{2}}}-\frac{1}{4}{\left({\sum\limits_{i=1}^{4}{({a_{i,1}}\,{x_{1}}+{a_{i,2}}\,{x_{2}}-{b_{i}})}}\right)^{2}},$ we observe that ${V_{2,4}}\left({\left.{\frac{9}{7},\,1\,}\right|{I_{4}}}\right)=\frac{{53983}}{{7252}}\cong 7.4438,$ whereas ${V_{2,4}}\left({\left.{\frac{8}{{74}},\,\frac{{13}}{{74}}\,}\right|{I_{4}}}\right)=\frac{{35378}}{{7252}}\cong 4.8783.$ On the other hand, for the well-known remaining term ${E_{2,4}}({x_{1}},{x_{2}})=\,\sum\limits_{i=1}^{4}{{{({a_{i,1}}\,{x_{1}}+{a_{i,2}}\,{x_{2}}-{b_{i}})}^{2}}},$ we observe that ${E_{2,4}}\left({\frac{9}{7},\,1}\right)=\frac{{185}}{7}\cong 26.4285,$ whereas ${E_{2,4}}\left({\frac{8}{{74}},\,\frac{{13}}{{74}}}\right)=\frac{{80335}}{{2738}}\cong 29.3407.$ In conclusion, ${V_{2,4}}\left({\left.{\frac{8}{{74}},\,\frac{{13}}{{74}}\,}\right|{I_{4}}}\right)<{V_{2,4}}\left({\left.{\frac{9}{7},\,1\,}\right|{I_{4}}}\right)<{E_{2,4}}\left({\frac{9}{7},\,1}\right),$ which confirms inequality (2.4). ## 14\. On improving the Bessel inequality and Parseval identity Two cases can be considered for the aforesaid purpose. ### 14.1. First type of improvement Let $\\{{\Phi_{k}}(x)\\}_{k=0}^{\infty}$ be a sequence of continuous functions which are p-uncorrelated with respect to the fixed function $z(x)$ and the probability density function $w(x)/\int_{a}^{b}{w(x)\,dx}$ on $[a,b]$ as before. 
Then, according to (4.1), (14.1) $f(x)\sim\sum\limits_{k=0}^{\infty}{\frac{{{{{\mathop{\rm cov}}}_{p}}\,\left({{\Phi_{k}}(x),f(x);z(x)}\right)}}{{{{{\mathop{\rm var}}}_{p}}\left({{\Phi_{k}}(x);z(x)}\right)}}{\Phi_{k}}(x)}\,,$ denotes a p-uncorrelated expansion for $f(x)$ in which $\frac{{{{{\mathop{\rm cov}}}_{p}}\,\left({{\Phi_{k}}(x),f(x);z(x)}\right)}}{{{{{\mathop{\rm var}}}_{p}}\left({{\Phi_{k}}(x);z(x)}\right)}}=\\\ \frac{{\int_{\,a}^{b}{w(x)\,{z^{2}}(x)\,dx}\int_{\,a}^{b}{w(x)\,{\Phi_{k}}(x)f(x)\,dx}-p\int_{\,a}^{b}{w(x)\,{\Phi_{k}}(x)\,z(x)\,dx}\,\int_{\,a}^{b}{w(x)f(x)\,z(x)\,dx}}}{{\int_{\,a}^{b}{w(x)\,{z^{2}}(x)\,dx}\int_{\,a}^{b}{w(x)\,\Phi_{k}^{2}(x)\,dx}-p{{\left({\int_{\,a}^{b}{w(x)\,{\Phi_{k}}(x)\,z(x)\,dx}}\right)}^{2}}\,}}.$ Referring to corollary 4.5 and relation (4.16), the following inequality holds for the expansion (14.1): (14.2) $0\leq\sum\limits_{k=0}^{\infty}{\frac{{{\mathop{\rm cov}}_{p}^{2}\,\left({{\Phi_{k}}(x),f(x);z(x)}\right)}}{{{{{\mathop{\rm var}}}_{p}}\left({{\Phi_{k}}(x);z(x)}\right)}}}\leq{{\mathop{\rm var}}_{p}}\left({f(x);z(x)}\right).$ Also, according to the definition of convergence in p-variance, inequality (14.2) will be transformed to an equality if (14.3) $\mathop{\lim}\limits_{n\to\infty}\,\,{{\mathop{\rm var}}_{p}}\left({f(x)-\sum\limits_{k=0}^{n}{\frac{{{{{\mathop{\rm cov}}}_{p}}\,\left({{\Phi_{k}}(x),f(x);z(x)}\right)}}{{{{{\mathop{\rm var}}}_{p}}\left({{\Phi_{k}}(x);z(x)}\right)}}{\Phi_{k}}(x)}\,}\right)=0,$ which results in (14.4) $\sum\limits_{k=0}^{\infty}{\frac{{{\mathop{\rm cov}}_{p}^{2}\,\left({{\Phi_{k}}(x),f(x);z(x)}\right)}}{{{{{\mathop{\rm var}}}_{p}}\left({{\Phi_{k}}(x);z(x)}\right)}}}={{\mathop{\rm var}}_{p}}\left({f(x);z(x)}\right).$ If (14.3) or equivalently (14.4) is satisfied, the p-uncorrelated sequence $\\{{\Phi_{k}}(x)\\}_{k=0}^{\infty}$ is “complete” with respect to the fixed function $z(x)$ and the symbol “$\sim$” in (14.1) will change to the equality. Noting the above comments, now let $f,g$ be two expandable functions of type (14.1) and $\\{{\Phi_{k}}(x)\\}_{k=0}^{\infty}$ be a “complete” p-uncorrelated sequence. 
Since $f(x)=\sum\limits_{k=0}^{\infty}{\frac{{{{{\mathop{\rm cov}}}_{p}}\,\left({{\Phi_{k}}(x),f(x);z(x)}\right)}}{{{{{\mathop{\rm var}}}_{p}}\left({{\Phi_{k}}(x);z(x)}\right)}}{\Phi_{k}}(x)}\,,$ and $g(x)=\sum\limits_{k=0}^{\infty}{\frac{{{{{\mathop{\rm cov}}}_{p}}\,\left({{\Phi_{k}}(x),g(x);z(x)}\right)}}{{{{{\mathop{\rm var}}}_{p}}\left({{\Phi_{k}}(x);z(x)}\right)}}{\Phi_{k}}(x)}\,,$ thanks to the general identity ${{\mathop{\rm cov}}_{p}}\left({\sum\limits_{k=0}^{n}{{a_{k}}{\Phi_{k}}(x)},\,\sum\limits_{j=0}^{m}{{b_{j}}{\Phi_{j}}(x)};z(x)}\right)=\sum\limits_{k=0}^{n}{\sum\limits_{j=0}^{m}{{a_{k}}{b_{j}}\,{{{\mathop{\rm cov}}}_{p}}\,\left({{\Phi_{k}}(x),{\Phi_{j}}(x);z(x)}\right)}},$ and this fact that ${{\mathop{\rm cov}}_{p}}\,\left({{\Phi_{k}}(x),{\Phi_{j}}(x);z(x)}\right)={{\mathop{\rm var}}_{p}}\left({{\Phi_{k}}(x);z(x)}\right)\,{\delta_{k,j}},$ we obtain (14.5) ${{\mathop{\rm cov}}_{p}}\,\left({f(x),g(x);z(x)}\right)=\\\ {{\mathop{\rm cov}}_{p}}\left({\left({\sum\limits_{k=0}^{\infty}{\frac{{{{{\mathop{\rm cov}}}_{p}}\,\left({{\Phi_{k}}(x),f(x);z(x)}\right)}}{{{{{\mathop{\rm var}}}_{p}}\left({{\Phi_{k}}(x);z(x)}\right)}}{\Phi_{k}}(x)}}\right),\,\left({\sum\limits_{k=0}^{\infty}{\frac{{{{{\mathop{\rm cov}}}_{p}}\,\left({{\Phi_{k}}(x),g(x);z(x)}\right)}}{{{{{\mathop{\rm var}}}_{p}}\left({{\Phi_{k}}(x);z(x)}\right)}}{\Phi_{k}}(x)}}\right);z(x)}\right)\\\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,=\sum\limits_{k=0}^{\infty}{\frac{{{{{\mathop{\rm cov}}}_{p}}\,\left({{\Phi_{k}}(x),f(x);z(x)}\right)\,\,{{{\mathop{\rm cov}}}_{p}}\,\left({{\Phi_{k}}(x),g(x);z(x)}\right)}}{{{{{\mathop{\rm var}}}_{p}}\left({{\Phi_{k}}(x);z(x)}\right)}}}.$ which is an extension of the identity (14.4) for $f(x)=g(x)$. Also, for $p=0$, this important identity leads to the generalized Parseval identity [5] (14.6) $E\left({f(x)g(x)}\right)=\sum\limits_{k=0}^{\infty}{\frac{{E\left({f(x)\,{\Phi_{k}}(x)}\right)\,\,E\left({g(x)\,{\Phi_{k}}(x)}\right)}}{{E\left({\Phi_{k}^{2}(x)}\right)}}}.$ The finite type of (14.5) is when $f,g$ and $\\{{\Phi_{k}}(x)\\}_{k=0}^{\infty}$ are all polynomial functions. For example, let ${\Phi_{k}}(x)={{\bf{P}}_{k}}(x;a,0,c,0)$ denote the same as polynomials (9.8) satisfying $\int_{\,0}^{1}{{x^{a}}\,{{\bf{P}}_{n}}(x;a,0,c,0)\,{{\bf{P}}_{m}}(x;a,0,c,0)\,dx}\\\ -(2c-a+1)\,\int_{\,0}^{1}{{x^{c}}\,{{\bf{P}}_{n}}(x;a,0,c,0)\,dx}\,\int_{\,0}^{1}{{x^{c}}\,{{\bf{P}}_{m}}(x;a,0,c,0)\,dx}\\\ =\frac{1}{{2n+a+1}}{\left({\frac{{(a-c)\,n!}}{{(c+1)\,{{(a+1)}_{n}}}}}\right)^{2}}{\delta_{m,n}}\quad\Leftrightarrow\quad 2c-a+1>0,\,\,a\neq c\,\,\,{\rm{and}}\,\,\,a,c>-1\,.$ Also let ${Q_{m}}(x)=\sum\limits_{k=0}^{m}{{q_{k}}{x^{k}}}\,\,\,{\rm{and}}\,\,\,{R_{m}}(x)=\sum\limits_{k=0}^{m}{{r_{k}}{x^{k}}},$ be two arbitrary polynomials of the same degree. 
Since ${Q_{m}}(x)=\sum\limits_{k=0}^{m}(2k+a+1){{\left({\frac{{(c+1)\,{{(a+1)}_{k}}}}{{(a-c)\,k!}}}\right)}^{2}}\\\ \times{{{\mathop{\rm cov}}}_{1}}\,\left({{{\bf{P}}_{k}}(x;a,0,c,0),{Q_{m}}(x);{x^{c-a}}}\right){{\bf{P}}_{k}}(x;a,0,c,0)\,,$ and ${R_{m}}(x)=\sum\limits_{k=0}^{m}(2k+a+1){{\left({\frac{{(c+1)\,{{(a+1)}_{k}}}}{{(a-c)\,k!}}}\right)}^{2}}\\\ \times{{{\mathop{\rm cov}}}_{1}}\,\left({{{\bf{P}}_{k}}(x;a,0,c,0),{R_{m}}(x);{x^{c-a}}}\right){{\bf{P}}_{k}}(x;a,0,c,0)\,,$ according to (14.5) we have ${{\mathop{\rm cov}}_{1}}\,\left({{Q_{m}}(x),{R_{m}}(x);{x^{c-a}}}\right)=\sum\limits_{k=0}^{m}(2k+a+1){{\left({\frac{{(c+1)\,{{(a+1)}_{k}}}}{{(a-c)\,k!}}}\right)}^{2}}\\\ \times{{{\mathop{\rm cov}}}_{1}}\,\left({{{\bf{P}}_{k}}(x;a,0,c,0),{Q_{m}}(x);{x^{c-a}}}\right){{{\mathop{\rm cov}}}_{1}}\,\left({{{\bf{P}}_{k}}(x;a,0,c,0),{R_{m}}(x);{x^{c-a}}}\right).$ ### 14.2. Second type of improvement As inequality (4.14) is valid for any arbitrary selection of the coefficients $\\{{\alpha_{k}}\\}_{k=0}^{n}$, i.e. (14.7) $0\leq{{\mathop{\rm var}}_{p}}\,\left({Y-\sum\limits_{k=0}^{n}{{\alpha_{k}}{X_{k}}}\,;Z}\right)\leq E\,\left({{\Big{(}Y-\sum\limits_{k=0}^{n}{{\alpha_{k}}{X_{k}}}\Big{)}^{2}}}\right),$ such kind of inequalities can be applied for orthogonal expansions. Suppose that $\\{{\Phi_{k}}(x)\\}_{k=0}^{\infty}$ is a sequence of continuous functions orthogonal with respect to the weight function $w(x)$ on $[a,b]$. If $f(x)$ is a piecewise continuous function, then $f(x)\sim\sum\limits_{k=0}^{\infty}{{\alpha_{k}}{\Phi_{k}}(x)}\,\,\,\,\,\,\,\text{with}\,\,\,\,\,\,{\alpha_{k}}=\frac{{{{\left\langle{f,{\Phi_{k}}}\right\rangle}_{w}}}}{{{{\left\langle{{\Phi_{k}},{\Phi_{k}}}\right\rangle}_{w}}}},$ is known as its corresponding orthogonal expansion in which ${\left\langle{f,g}\right\rangle_{w}}=\int_{\,a}^{b}{w(x)\,f(x)g(x)\,dx}\,.$ The positive quantity (14.8) ${S_{n}}=\int_{\,a}^{b}{w(x)\,{{\left({\sum\limits_{k=0}^{n}{{\alpha_{k}}{\Phi_{k}}(x)}-f(x)}\right)}^{2}}dx}\,,$ will eventually lead to the Bessel inequality [26] $0\leq{S_{n}}={\left\langle{f,f}\right\rangle_{w}}-\sum\limits_{k=0}^{n}{\frac{{\left\langle{f,{\Phi_{k}}}\right\rangle_{w}^{2}}}{{{{\left\langle{{\Phi_{k}},{\Phi_{k}}}\right\rangle}_{w}}}}}\,.$ Now, noting (14.7) and (14.8), instead of $S_{n}$ we define the following positive quantity ${V_{n}}(p;z(x))={S_{n}}-R_{n}^{2}(p;z(x))=\int_{\,a}^{b}{w(x)\,{{\left({\sum\limits_{k=0}^{n}{{\alpha_{k}}{\Phi_{k}}(x)}-f(x)}\right)}^{2}}dx}\\\ -\frac{p}{{\int_{\,a}^{b}{w(x)\,{z^{2}}(x)dx}}}{\left({\int_{\,a}^{b}{w(x)\,z(x)\,\left({\sum\limits_{k=0}^{n}{{\alpha_{k}}{\Phi_{k}}(x)}-f(x)}\right)dx}}\right)^{2}}.$ It is clear that (14.9) $0\leq{V_{n}}(p;z(x))\leq{S_{n}}\,.$ Therefore $0\leq{V_{n}}(p;z(x))={\left\langle{f,f}\right\rangle_{w}}-\sum\limits_{k=0}^{n}{\frac{{\left\langle{f,{\Phi_{k}}}\right\rangle_{w}^{2}}}{{{{\left\langle{{\Phi_{k}},{\Phi_{k}}}\right\rangle}_{w}}}}}\,\\\ -\frac{p}{{{{\left\langle{z,z}\right\rangle}_{w}}}}\left({{{\left({\sum\limits_{k=0}^{n}{\frac{{{{\left\langle{f,{\Phi_{k}}}\right\rangle}_{w}}{{\left\langle{z,{\Phi_{k}}}\right\rangle}_{w}}}}{{{{\left\langle{{\Phi_{k}},{\Phi_{k}}}\right\rangle}_{w}}}}}}\right)}^{2}}+\left\langle{f,z}\right\rangle_{w}^{2}-2{{\left\langle{f,z}\right\rangle}_{w}}\sum\limits_{k=0}^{n}{\frac{{{{\left\langle{f,{\Phi_{k}}}\right\rangle}_{w}}{{\left\langle{z,{\Phi_{k}}}\right\rangle}_{w}}}}{{{{\left\langle{{\Phi_{k}},{\Phi_{k}}}\right\rangle}_{w}}}}}}\right)\,,$ can be re-written as (14.10) 
${\left\langle{z,z}\right\rangle_{w}}{\left\langle{f,f}\right\rangle_{w}}-p\left\langle{f,z}\right\rangle_{w}^{2}\geq{\left\langle{z,z}\right\rangle_{w}}\sum\limits_{k=0}^{n}{\frac{{\left\langle{f,{\Phi_{k}}}\right\rangle_{w}^{2}}}{{{{\left\langle{{\Phi_{k}},{\Phi_{k}}}\right\rangle}_{w}}}}}+p\,{\left({\sum\limits_{k=0}^{n}{\frac{{{{\left\langle{f,{\Phi_{k}}}\right\rangle}_{w}}{{\left\langle{z,{\Phi_{k}}}\right\rangle}_{w}}}}{{{{\left\langle{{\Phi_{k}},{\Phi_{k}}}\right\rangle}_{w}}}}}}\right)^{2}}\\\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,-2p{\left\langle{f,z}\right\rangle_{w}}\sum\limits_{k=0}^{n}{\frac{{{{\left\langle{f,{\Phi_{k}}}\right\rangle}_{w}}{{\left\langle{z,{\Phi_{k}}}\right\rangle}_{w}}}}{{{{\left\langle{{\Phi_{k}},{\Phi_{k}}}\right\rangle}_{w}}}}}\,.$ Inequality (14.10) is an improvement of the well-known Bessel inequality for every $p\in[0,1]$ with respect to the fixed function $z(x)$. For example, if ${\Phi_{k}}(x)=\sin(k+1)x$ for $x\in[0,\pi]$ and $w(x)=z(x)=1$ are replaced in (14.10), the Bessel inequality of the Fourier sine expansion will be improved as follows $\int_{0}^{\pi}{{f^{2}}(x)\,dx}-\frac{p}{\pi}{\left({\int_{0}^{\pi}{f(x)\,dx}}\right)^{2}}\geq\frac{2}{\pi}\sum\limits_{k=0}^{n}{{{\left({\int_{0}^{\pi}{f(x)\sin(k+1)x\,dx}}\right)}^{2}}}\\\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,+\frac{{4p}}{{{\pi^{3}}}}\,{\left({\sum\limits_{k=0}^{n}{\frac{{1+{{(-1)}^{k}}}}{{k+1}}\int_{0}^{\pi}{f(x)\sin(k+1)x\,dx}}}\right)^{2}}\\\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,-\frac{{4p}}{{{\pi^{2}}}}\left({\int_{0}^{\pi}{f(x)\,dx}}\right)\sum\limits_{k=0}^{n}{\frac{{1+{{(-1)}^{k}}}}{{k+1}}\int_{0}^{\pi}{f(x)\sin(k+1)x\,dx}}\,.$ Obviously (14.10) will remain an inequality if the sequence $\\{{\Phi_{k}}(x)\\}_{k=0}$ does not form a complete orthogonal system. On the other side, if they are a complete orthogonal set, inequality (14.10) becomes an equality when $n\rightarrow\infty$. In other words, suppose $\\{{\Phi_{k}}(x)\\}_{k=0}$ is a complete orthogonal set. Since $\mathop{\lim}\limits_{n\to\infty}{S_{n}}=\mathop{\lim}\limits_{n\to\infty}\int_{\,a}^{b}{w(x)\,{{\left({\sum\limits_{k=0}^{n}{{\alpha_{k}}{\Phi_{k}}(x)}-f(x)}\right)}^{2}}dx}\,=0,$ we directly conclude from (14.9) that $0\leq\mathop{\lim}\limits_{n\to\infty}{V_{n}}(p;z(x))\leq\mathop{\lim}\limits_{n\to\infty}{S_{n}}=0,$ and therefore $\mathop{\lim}\limits_{n\to\infty}\,\,{S_{n}}-R_{n}^{2}(p;z(x))=\mathop{\lim}\limits_{n\to\infty}\,{\left({\int_{\,a}^{b}{w(x)\,z(x)\,\left({\sum\limits_{k=0}^{n}{{\alpha_{k}}{\Phi_{k}}(x)}-f(x)}\right)dx}}\right)^{2}}=0,$ eventually yields $\sum\limits_{k=0}^{\infty}{\frac{{{{\left\langle{f,{\Phi_{k}}}\right\rangle}_{w}}{{\left\langle{z,{\Phi_{k}}}\right\rangle}_{w}}}}{{{{\left\langle{{\Phi_{k}},{\Phi_{k}}}\right\rangle}_{w}}}}}={\left\langle{f,\,z}\right\rangle_{w}},$ which is known in the literature as the inner product form of the generalized Parseval identity (14.6) In the next section, we will refer to the above-mentioned results in order to extend the presented theory in terms of a set of fixed mutually orthogonal variables. ## 15\. Least p-variances with respect to fixed orthogonal variables Since the parts of this section are somewhat similar to the previous sections, we just state basic concepts and related theorems without proof. 
Suppose $x,y$ and $\\{{z_{k}}\\}_{k=1}^{m}$ are elements of an inner product space $\mathbf{S}$ such that $\\{{z_{k}}\\}_{k=1}^{m}$ are mutually orthogonal as (15.1) $\left\langle{{z_{i}}\,,\,\,{z_{j}}}\right\rangle=\left\langle{{z_{j}}\,,\,\,{z_{j}}}\right\rangle\,{\delta_{i,j}}.$ Due to the orthogonality property (15.1), the following identity holds true (15.2) $\displaystyle\left\langle{x-(1-\sqrt{1-p})\sum\limits_{k=1}^{m}{\frac{{\left\langle{x,{z_{k}}}\right\rangle}}{{\left\langle{{z_{k}},{z_{k}}}\right\rangle}}\,{z_{k}}}\,,\,\,y-(1-\sqrt{1-p})\sum\limits_{k=1}^{m}{\frac{{\left\langle{y,{z_{k}}}\right\rangle}}{{\left\langle{{z_{k}},{z_{k}}}\right\rangle}}\,{z_{k}}}}\right\rangle$ $\displaystyle\qquad\qquad\qquad=\left\langle{x,y}\right\rangle-p\sum\limits_{k=1}^{m}{\frac{{\left\langle{x,{z_{k}}}\right\rangle\left\langle{y,{z_{k}}}\right\rangle}}{{\left\langle{{z_{k}},{z_{k}}}\right\rangle}}},$ and for $y=x$, gives (15.3) $\left\langle{x,x}\right\rangle-p\sum\limits_{k=1}^{m}{\frac{{{{\left\langle{x,{z_{k}}}\right\rangle}^{2}}}}{{\left\langle{{z_{k}},{z_{k}}}\right\rangle}}}\geq 0\,\,\,\,\,\,\,\,\,\,\,\forall p\in[0,1].$ The identity (15.2) and inequality (15.3) can again be employed in mathematical statistics. ###### Definition 15.1. Let $X,Y$ and $\\{{Z_{k}}\\}_{k=1}^{m}$ be arbitrary random variables such that $\\{{Z_{k}}\\}_{k=1}^{m}$ are mutually orthogonal, i.e. (15.4) $E({Z_{i}}\,{Z_{j}})=E(Z_{j}^{2})\,{\delta_{i,j}},\,\,\,\,\,\,\,\,\,\,i,j=1,2,...,m.$ Corresponding to (15.2) we define (15.5) $\displaystyle{{\mathop{\rm cov}}_{p}}(X,Y;\\{{Z_{k}}\\}_{k=1}^{m})$ $\displaystyle\quad=E\left({\left({X-(1-\sqrt{1-p})\sum\limits_{k=1}^{m}{\frac{{E(X{Z_{k}})}}{{E(Z_{k}^{2})}}\,{Z_{k}}}}\right)\left({Y-(1-\sqrt{1-p})\sum\limits_{k=1}^{m}{\frac{{E(Y{Z_{k}})}}{{E(Z_{k}^{2})}}\,{Z_{k}}}}\right)}\right)$ $\displaystyle\quad=E(XY)-p\sum\limits_{k=1}^{m}{\frac{{E(X{Z_{k}})E(Y{Z_{k}})}}{{E(Z_{k}^{2})}}}\,,$ and call it ”p-covariance of $X$ and $Y$ with respect to the fixed orthogonal variables $\\{{Z_{k}}\\}_{k=1}^{m}$”. For $Y=X$, (15.5) changes to (15.6) $\displaystyle{{\mathop{\rm var}}_{p}}(X;\\{{Z_{k}}\\}_{k=1}^{m})$ $\displaystyle=E\left({{{\left({X-(1-\sqrt{1-p})\sum\limits_{k=1}^{m}{\frac{{E(X{Z_{k}})}}{{E(Z_{k}^{2})}}\,{Z_{k}}}}\right)}^{2}}}\right)$ $\displaystyle=E({X^{2}})-p\sum\limits_{k=1}^{m}{\frac{{{E^{2}}(X{Z_{k}})}}{{E(Z_{k}^{2})}}}\geq 0\,,$ where $\\{E(Z_{k}^{2})\\}_{k=1}^{m}$ are all positive. Note in (15.5) that $\sum\limits_{k=1}^{m}{\frac{{E(X{Z_{k}})}}{{E(Z_{k}^{2})}}\,{Z_{k}}}=\sum\limits_{k=1}^{m}{{\rm{pro}}{{\rm{j}}_{\,{Z_{k}}}}X},$ and therefore e.g. for $p=1$, ${{\mathop{\rm cov}}_{1}}(X,Y;\\{{Z_{k}}\\}_{k=1}^{m})=E\left({(X-\sum\limits_{k=1}^{m}{{\rm{pro}}{{\rm{j}}_{\,{Z_{k}}}}X})\,(Y-\sum\limits_{k=1}^{m}{{\rm{pro}}{{\rm{j}}_{\,{Z_{k}}}}Y})}\right).$ Moreover, for orthogonal variables $\\{{Z_{k}}\\}_{k=1}^{m}$ and $p\in[0,1]$, we have (15.7) $0\leq{{\mathop{\rm var}}_{1}}(X;\\{{Z_{k}}\\}_{k=1}^{m})\leq{{\mathop{\rm var}}_{p}}(X;\\{{Z_{k}}\\}_{k=1}^{m})\leq{{\mathop{\rm var}}_{0}}(X;\\{{Z_{k}}\\}_{k=1}^{m})=E({X^{2}}).$ A remarkable point in (15.7) is that if $m,n$ are two natural numbers such that $n>m$, then $0\leq{{\mathop{\rm var}}_{p}}(X;\\{{Z_{k}}\\}_{k=1}^{n})\leq{{\mathop{\rm var}}_{p}}(X;\\{{Z_{k}}\\}_{k=1}^{m}),$ which can be proved directly via (15.6). 
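A small numerical check of this monotonicity (a sketch assuming numpy; the sample vectors below stand in for the random variables, with the expectation $E$ taken as a uniform sample mean and the second fixed variable orthogonalised so that $E(Z_{1}Z_{2})=0$ — none of these particular choices come from the text):

```python
import numpy as np

def E(v):                      # expectation taken as a uniform sample mean
    return np.mean(v)

def var_p(X, Zs, p):           # definition (15.6) with mutually orthogonal Z_k
    return E(X**2) - p * sum(E(X * Z)**2 / E(Z**2) for Z in Zs)

t = np.linspace(0.0, np.pi, 9)
X = np.exp(t)                                        # an arbitrary variable
Z1 = np.cos(t)
Z2 = np.sin(t) - Z1 * E(np.sin(t) * Z1) / E(Z1**2)   # makes E(Z1*Z2) = 0

for p in (0.0, 0.5, 1.0):
    v_two = var_p(X, [Z1, Z2], p)      # m = 2 fixed orthogonal variables
    v_one = var_p(X, [Z1], p)          # m = 1 fixed variable
    # adding a further orthogonal Z_k can only lower the p-variance
    assert 0.0 <= v_two <= v_one <= E(X**2)
```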
For instance, if $n=2$ and $m=1$, then $0\leq{{\mathop{\rm var}}_{1}}(X;{Z_{1}},{Z_{2}})\leq{{\mathop{\rm var}}_{p}}(X;{Z_{1}},{Z_{2}})\leq{{\mathop{\rm var}}_{p}}(X;{Z_{1}})\leq{{\mathop{\rm var}}_{0}}(X;{Z_{1}})=E({X^{2}}),$ in which $E({Z_{1}}\,{Z_{2}})=0.$ The following properties hold true for definitions (15.5) and (15.6) provided that the orthogonal condition (15.4) is satisfied: $\displaystyle b1)$ $\displaystyle\qquad\qquad\qquad{{\mathop{\rm cov}}_{p}}(X,Y;\\{{Z_{k}}\\}_{k=1}^{m})={{\mathop{\rm cov}}_{p}}(Y,X;\\{{Z_{k}}\\}_{k=1}^{m}).$ $\displaystyle b2)$ $\displaystyle\qquad\qquad{{\mathop{\rm cov}}_{p}}\,(\alpha X,\beta Y;\\{{Z_{k}}\\}_{k=1}^{m})=\alpha\beta\,\,{{\mathop{\rm cov}}_{p}}(X,Y;\\{{Z_{k}}\\}_{k=1}^{m})\,\,\,\,\,\,\,\,\,\,\,(\alpha,\beta\in\mathbb{R}).$ $\displaystyle b3)$ $\displaystyle\begin{array}[]{l}{{\mathop{\rm cov}}_{p}}\,(X+\alpha,Y+\beta;\\{{Z_{k}}\\}_{k=1}^{m})=\,\,{{\mathop{\rm cov}}_{p}}\,(X,Y;\\{{Z_{k}}\\}_{k=1}^{m})+\alpha\,{{\mathop{\rm cov}}_{p}}\,(1,Y;\\{{Z_{k}}\\}_{k=1}^{m})\\\\[8.53581pt] \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\qquad\qquad+\beta\,\,{{\mathop{\rm cov}}_{p}}\,(X,1;\\{{Z_{k}}\\}_{k=1}^{m})+\alpha\beta\,\,{{\mathop{\rm cov}}_{p}}\,(1,1;\\{{Z_{k}}\\}_{k=1}^{m}).\end{array}$ $\displaystyle b4)$ $\displaystyle\quad{{\mathop{\rm cov}}_{p}}\,\left({\sum\limits_{k=0}^{n}{{c_{k}}{X_{k}}},{X_{j}};\\{{Z_{k}}\\}_{k=1}^{m}}\right)=\sum\limits_{k=0}^{n}{{c_{k}}{{{\mathop{\rm cov}}}_{p}}\,({X_{k}},{X_{j}};\\{{Z_{k}}\\}_{k=1}^{m})}\,\,\,\,\,\,\,(\\{{c_{k}}\\}_{k=0}^{n}\in\mathbb{R}).$ $\displaystyle b5)$ $\displaystyle\begin{array}[]{l}{\rm var}_{p}(\alpha X+\beta Y;\\{{Z_{k}}\\}_{k=1}^{m})={\alpha^{2}}\,{{\mathop{\rm var}}_{p}}\,(X;\\{{Z_{k}}\\}_{k=1}^{m})+{\beta^{2}}{{\mathop{\rm var}}_{p}}\,(Y;\\{{Z_{k}}\\}_{k=1}^{m})\\\\[8.53581pt] \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\qquad\qquad+2\alpha\beta\,\,{{\mathop{\rm cov}}_{p}}\,(X,Y;\\{{Z_{k}}\\}_{k=1}^{m}).\end{array}$ ###### Definition 15.2. Based on definitions (15.5) and (15.6) and the orthogonality condition (15.4), we define (15.8) ${\rho_{p}}\,(X,Y;\\{{Z_{k}}\\}_{k=1}^{m})=\frac{{{{{\mathop{\rm cov}}}_{p}}\,(X,Y;\\{{Z_{k}}\\}_{k=1}^{m})}}{{\sqrt{{{{\mathop{\rm var}}}_{p}}\,(X;\\{{Z_{k}}\\}_{k=1}^{m}){\rm var}_{p}(Y;\\{{Z_{k}}\\}_{k=1}^{m})}}},$ and call it “p-correlation coefficient of $X$ and $Y$ with respect to the fixed orthogonal variables $\\{{Z_{k}}\\}_{k=1}^{m}$”. Clearly ${\rho_{p}}\,(X,Y;\\{{Z_{k}}\\}_{k=1}^{m})\in[-1,1],$ because if $U=X-(1-\sqrt{1-p})\sum\limits_{k=1}^{m}{\frac{{E(X{Z_{k}})}}{{E(Z_{k}^{2})}}\,{Z_{k}}}\,\,\,\,\,\,\,{\rm{and}}\,\,\,\,\,\,V=Y-(1-\sqrt{1-p})\sum\limits_{k=1}^{m}{\frac{{E(Y{Z_{k}})}}{{E(Z_{k}^{2})}}\,{Z_{k}}},$ are replaced in the Cauchy-Schwarz inequality (1.11), then ${\mathop{\rm cov}}_{p}^{2}(X,Y;\\{{Z_{k}}\\}_{k=1}^{m})\leq{{\mathop{\rm var}}_{p}}\,(X;\\{{Z_{k}}\\}_{k=1}^{m})\,\,{\rm var}_{p}(Y;\\{{Z_{k}}\\}_{k=1}^{m}).$ ###### Definition 15.3. If ${\rho_{p}}\,(X,Y;\\{{Z_{k}}\\}_{k=1}^{m})=0$ in (15.8), we say that $X$ and $Y$ are p-uncorrelated with respect to $\\{{Z_{k}}\\}_{k=1}^{m}$ and we have $E(XY)=p\sum\limits_{k=1}^{m}{\frac{{E(X{Z_{k}})E(Y{Z_{k}})}}{{E(Z_{k}^{2})}}\,}.$ ### 15.1. 
Least p-variance approximations based on fixed orthogonal variables Again, consider the approximation (2.1), $Y\cong\sum\limits_{k=0}^{n}{{c_{k}}{X_{k}}}\,,$ and define the p-variance of the remaining term $R({c_{0}},{c_{1}},...,{c_{n}})=\sum\limits_{k=0}^{n}{{c_{k}}{X_{k}}}-Y\,,$ with respect to the orthogonal variables $\\{{Z_{k}}\\}_{k=1}^{m}$ as follows (15.9) ${{\mathop{\rm var}}_{p}}\,\left({R({c_{0}},...,{c_{n}});\\{{Z_{k}}\\}_{k=1}^{m}}\right)\\\ =E\left({{{\left({R({c_{0}},...,{c_{n}})-(1-\sqrt{1-p})\sum\limits_{k=1}^{m}{\frac{{E({Z_{k}}R({c_{0}},...,{c_{n}}))}}{{E(Z_{k}^{2})}}\,{Z_{k}}}}\right)}^{2}}}\right).$ To minimize (15.9), the relations $\frac{{\partial{{{\mathop{\rm var}}}_{p}}\,\left({R({c_{0}},...,{c_{n}});\\{{Z_{k}}\\}_{k=1}^{m}}\right)}}{{\partial{c_{j}}}}=0\,\,\,\,\,{\rm{for}}\,\,\,j=0,1,\,...\,,\,n,$ eventually lead to the following linear system (15.27) $\displaystyle\left[{\begin{array}[]{*{20}{c}}{\begin{array}[]{*{20}{c}}{{{{\mathop{\rm var}}}_{p}}\,({X_{0}};\\{{Z_{k}}\\}_{k=1}^{m})}\\\ {{{{\mathop{\rm cov}}}_{p}}\,({X_{1}},{X_{0}};\\{{Z_{k}}\\}_{k=1}^{m})}\\\ \vdots\\\ {{{{\mathop{\rm cov}}}_{p}}\,({X_{n}},{X_{0}};\\{{Z_{k}}\\}_{k=1}^{m})}\end{array}}&{\begin{array}[]{*{20}{c}}{{{{\mathop{\rm cov}}}_{p}}\,({X_{0}},{X_{1}};\\{{Z_{k}}\\}_{k=1}^{m})}\\\ {{{{\mathop{\rm var}}}_{p}}\,({X_{1}};\\{{Z_{k}}\\}_{k=1}^{m})}\\\ \vdots\\\ {{{{\mathop{\rm cov}}}_{p}}\,({X_{n}},{X_{1}};\\{{Z_{k}}\\}_{k=1}^{m})}\end{array}}&{\begin{array}[]{*{20}{c}}\cdots\\\ \cdots\\\ \vdots\\\ \cdots\end{array}}&{\begin{array}[]{*{20}{c}}{{{{\mathop{\rm cov}}}_{p}}\,({X_{0}},{X_{n}};\\{{Z_{k}}\\}_{k=1}^{m})}\\\ {{{{\mathop{\rm cov}}}_{p}}\,({X_{1}},{X_{n}};\\{{Z_{k}}\\}_{k=1}^{m})}\\\ \vdots\\\ {{{{\mathop{\rm var}}}_{p}}\,({X_{n}};\\{{Z_{k}}\\}_{k=1}^{m})}\end{array}}\end{array}}\right]$ (15.36) $\displaystyle\qquad\qquad\qquad\qquad\qquad\qquad\times\left[{\begin{array}[]{*{20}{c}}{{c_{0}}}\\\ {{c_{1}}}\\\ \vdots\\\ {{c_{n}}}\end{array}}\right]=\left[{\begin{array}[]{*{20}{c}}{{{{\mathop{\rm cov}}}_{p}}\,({X_{0}},Y;\\{{Z_{k}}\\}_{k=1}^{m})}\\\ {{{{\mathop{\rm cov}}}_{p}}\,({X_{1}},Y;\\{{Z_{k}}\\}_{k=1}^{m})}\\\ \vdots\\\ {{{{\mathop{\rm cov}}}_{p}}\,({X_{n}},Y;\\{{Z_{k}}\\}_{k=1}^{m})}\end{array}}\right].$ Two continuous and discrete spaces can be considered for the system (15.27). #### 15.1.1. First case. If $\left\\{{{X_{k}}={\Phi_{k}}(x)}\right\\}_{k=0}^{n}$, $Y=f(x)$ and the orthogonal set $\left\\{{{Z_{k}}={z_{k}}(x)}\right\\}_{k=1}^{m}$ are defined in a continuous space with a probability density function as ${P_{r}}\left({X=x}\right)=\frac{{w(x)}}{{\int_{\,a}^{b}{w(x)\,dx}}},$ the elements of the system (15.27) appear as ${{\mathop{\rm cov}}_{p}}\,\left({{\Phi_{i}}(x),{\Phi_{j}}(x)\,;\left\\{{{z_{k}}(x)}\right\\}_{k=1}^{m}}\right)=\\\ \frac{1}{{\int_{\,a}^{b}{w(x)\,dx}}}\Big{(}\int_{\,a}^{b}{w(x)\,{\Phi_{i}}(x)\,{\Phi_{j}}(x)\,dx}-p\sum\limits_{k=1}^{m}{\frac{{\int_{\,a}^{b}{w(x)\,{\Phi_{i}}(x)\,{z_{k}}(x)\,dx}\,\int_{\,a}^{b}{w(x)\,{\Phi_{j}}(x)\,{z_{k}}(x)\,dx}}}{{\,\int_{\,a}^{b}{w(x)\,z_{k}^{2}(x)\,dx}}}}\Big{)},$ only if $\int_{\,a}^{b}{w(x)\,{z_{i}}(x)\,{z_{j}}(x)\,dx}=\left({\int_{\,a}^{b}{w(x)\,z_{j}^{2}(x)\,dx}}\right){\delta_{i,j}}.$ #### 15.1.2. Second case. 
If the above-mentioned variables are defined on a counter set, say ${A^{*}}=\\{{x_{k}}\\}_{k=0}^{m}$, with a discrete probability density function as ${P_{r}}\left({X=x}\right)=\frac{{j(x)}}{{\sum\limits_{x\in{A^{*}}}{j(x)}}},$ then $\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,{{\mathop{\rm cov}}_{p}}\,\left({{\Phi_{i}}(x),{\Phi_{j}}(x)\,;\\{{z_{k}}(x)\\}_{k=1}^{m}}\right)=\\\ \frac{1}{{\sum\limits_{x\in{A^{*}}}{j(x)}}}\Big{(}\sum\limits_{x\in{A^{*}}}{j(x){\Phi_{i}}(x)\,{\Phi_{j}}(x)}-p\sum\limits_{k=1}^{m}{\frac{{\sum\limits_{x\in{A^{*}}}{j(x){\Phi_{i}}(x)\,{z_{k}}(x)}\,\sum\limits_{x\in{A^{*}}}{j(x){\Phi_{j}}(x)\,{z_{k}}(x)}}}{{\,\sum\limits_{x\in{A^{*}}}{j(x)\,z_{k}^{2}(x)}}}}\Big{)},$ only if $\sum\limits_{x\in{A^{*}}}{j(x){z_{i}}(x)\,{z_{j}}(x)}=\left({\sum\limits_{x\in{A^{*}}}{j(x)z_{i}^{2}(x)}}\right){\delta_{i,j}}.$ For example, suppose $m=2$, $x\in[0,\pi]$ and ${P_{r}}\left({X=x}\right)=\frac{1}{\pi}$. In this case, it is well known that if ${z_{1}}(x)=\sin x$ and ${z_{2}}(x)=\cos x$, then $E\left({{z_{1}}(x){z_{2}}(x)}\right)=\frac{1}{\pi}\int_{\,0}^{\pi}{\sin x\,\cos x\,dx}=0,$ and therefore $\,{{\mathop{\rm cov}}_{p}}\,\left({{\Phi_{i}}(x),{\Phi_{j}}(x)\,;\sin x,\,\cos x}\right)=\frac{1}{\pi}\int_{\,0}^{\pi}{{\Phi_{i}}(x)\,{\Phi_{j}}(x)\,dx}\\\ -\frac{{2p}}{{{\pi^{2}}}}\left({\int_{\,0}^{\pi}{{\Phi_{i}}(x)\,\sin x\,dx}\int_{\,0}^{\pi}{{\Phi_{j}}(x)\,\sin x\,dx}+\int_{\,0}^{\pi}{{\Phi_{i}}(x)\,\cos x\,dx}\int_{\,0}^{\pi}{{\Phi_{j}}(x)\,\cos x\,dx}}\right).$ The weighted version of the above example can be considered in various cases. For example, if $w(x)={e^{x}},z_{1}(x)=e^{-x}\sin x$ and $z_{2}(x)=\cos x$ all defined on $[0,\pi]$, then $E\left({{z_{1}}(x){z_{2}}(x)}\right)=\frac{1}{{{e^{\pi}}-1}}\int_{\,0}^{\pi}{\sin x\,\cos x\,dx}=0,$ and consequently the corresponding space is defined by $\,\left({{e^{\pi}}-1}\right)\,\,{{\mathop{\rm cov}}_{p}}\,\left({{\Phi_{i}}(x),{\Phi_{j}}(x)\,;\sin x,\,\cos x}\right)=\int_{\,0}^{\pi}{{e^{x}}{\Phi_{i}}(x)\,{\Phi_{j}}(x)\,dx}\\\ -p\left({\frac{{\int_{\,0}^{\pi}{{\Phi_{i}}(x)\,\sin x\,dx}\int_{\,0}^{\pi}{{\Phi_{j}}(x)\,\sin x\,dx}}}{{\int_{\,0}^{\pi}{{e^{-x}}{{\sin}^{2}}x\,dx}}}+\frac{{\int_{\,0}^{\pi}{{\Phi_{i}}(x)\,{e^{x}}\cos x\,dx}\int_{\,0}^{\pi}{{\Phi_{j}}(x)\,{e^{x}}\cos x\,dx}}}{{\int_{\,0}^{\pi}{{e^{x}}{{\cos}^{2}}x\,dx}}}}\right),$ where $\int_{\,0}^{\pi}{{e^{-x}}{{\sin}^{2}}x\,dx}=\,\frac{2}{5}(1-{e^{-\pi}})\,\,\,\,\,\,\,{\rm{and}}\,\,\,\,\,\,\int_{\,0}^{\pi}{{e^{x}}{{\cos}^{2}}x\,dx}=\frac{3}{5}({e^{\pi}}-1).$ In the sequel, applying the uncorrelatedness condition (15.37) ${{\mathop{\rm cov}}_{p}}\,({X_{i}},{X_{j}};\\{{Z_{k}}\\}_{k=1}^{m})={{\mathop{\rm var}}_{p}}\,({X_{j}};\\{{Z_{k}}\\}_{k=1}^{m})\,{\delta_{i,j}}\,\,\,{\rm{for}}\,{\rm{every}}\,\,\,i,j=0,1,...,n,$ on the elements of the linear system (15.27), one can obtain the unknown coefficients as ${c_{k}}=\frac{{{{{\mathop{\rm cov}}}_{p}}\,({X_{k}},Y;\\{{Z_{k}}\\}_{k=1}^{m})}}{{{{{\mathop{\rm var}}}_{p}}\,({X_{k}};\\{{Z_{k}}\\}_{k=1}^{m})}}.$ In this case $Y\cong\sum\limits_{k=0}^{n}{\frac{{{{{\mathop{\rm cov}}}_{p}}({X_{k}},Y;\\{{Z_{k}}\\}_{k=1}^{m})}}{{{{{\mathop{\rm var}}}_{p}}\,({X_{k}};\\{{Z_{k}}\\}_{k=1}^{m})}}{X_{k}}}\,,$ is the best approximation in the sense of least p-variance of the error with respect to the fixed orthogonal variables $\\{{Z_{k}}\\}_{k=1}^{m}$. ###### Theorem 15.4. Any finite set of random variables satisfying the condition (15.37) is linearly independent. ###### Theorem 15.5. 
Let $\\{{V_{k}}\\}_{k=0}$ be a finite or infinite sequence of random variables such that any finite number of elements $\\{{V_{k}}\\}_{k=0}^{n}$ are linearly independent. One can find constants $\\{{a_{i,j}}\\}$ such that the elements $\displaystyle{X_{0}}$ $\displaystyle={V_{0}},$ $\displaystyle{X_{1}}$ $\displaystyle={V_{1}}+{a_{12}}{V_{0}},$ $\displaystyle{X_{2}}$ $\displaystyle={V_{2}}+{a_{22}}{V_{1}}+{a_{23}}{V_{0}},$ $\displaystyle\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\vdots$ $\displaystyle{X_{n}}$ $\displaystyle={V_{n}}+{a_{n2}}{V_{n-1}}+...+{a_{n,n+1}}{V_{0}}={V_{n}}-\sum\limits_{k=0}^{n-1}{\frac{{{{{\mathop{\rm cov}}}_{p}}({X_{k}},Y;\\{{Z_{k}}\\}_{k=1}^{m})}}{{{{{\mathop{\rm var}}}_{p}}\,({X_{k}};\\{{Z_{k}}\\}_{k=1}^{m})}}{X_{k}}},$ are mutually p-uncorrelated with respect to the fixed orthogonal variables $\\{{Z_{k}}\\}_{k=1}^{m}$. Also, there are constants $\\{{b_{i,j}}\\}$ such that $\displaystyle{V_{0}}$ $\displaystyle={X_{0}},$ $\displaystyle{V_{1}}$ $\displaystyle={X_{1}}+{b_{12}}{X_{0}},$ $\displaystyle{V_{2}}$ $\displaystyle={X_{2}}+{b_{22}}{X_{1}}+{b_{23}}{X_{0}},$ $\displaystyle\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\vdots$ $\displaystyle{V_{n}}$ $\displaystyle={X_{n}}+{b_{n2}}{X_{n-1}}+...+{b_{n,n+1}}{X_{0}},$ and ${{\mathop{\rm cov}}_{p}}\,({X_{n}},{V_{k}};\\{{Z_{k}}\\}_{k=1}^{m})=0\,\,\,\,\,\,\,\,{\rm{for}}\,\,\,k=0,1,...,n-1,$ provided that ${{\mathop{\rm cov}}_{p}}\,({X_{i}},{X_{j}};\\{{Z_{k}}\\}_{k=1}^{m})=0\,\,\,\,\,\,\,{\rm{for}}\,\,\,\,\,\,i\neq j.$ ### 15.2. A general representation for p-uncorrelated variables with respect to the fixed orthogonal variables Let $\\{{V_{k}}\\}_{k=0}$ be a finite or infinite sequence of random variables such that any finite number of its elements are linearly independent. As before, if $\\{{Z_{k}}\\}_{k=1}^{m}$ are orthogonal satisfying the condition (15.4), the following non-monic type determinant shows a general representation for the generic p-uncorrelated variable $X_{n}$: (15.38) ${X_{n}}=\begin{vmatrix}{{{{\mathop{\rm var}}}_{p}}\,({V_{0}};\\{{Z_{k}}\\}_{k=1}^{m})}&{{{{\mathop{\rm cov}}}_{p}}\,({V_{0}},{V_{1}};\\{{Z_{k}}\\}_{k=1}^{m})}&\ldots&{{{{\mathop{\rm cov}}}_{p}}\,({V_{0}},{V_{n}};\\{{Z_{k}}\\}_{k=1}^{m})}\\\ {{{{\mathop{\rm cov}}}_{p}}\,({V_{1}},{V_{0}};\\{{Z_{k}}\\}_{k=1}^{m})}&{{{{\mathop{\rm var}}}_{p}}\,({V_{1}};\\{{Z_{k}}\\}_{k=1}^{m})}&\ldots&{{{{\mathop{\rm cov}}}_{p}}\,({V_{1}},{V_{n}};\\{{Z_{k}}\\}_{k=1}^{m})}\\\ \vdots&\vdots&\vdots&\vdots\\\ {{{{\mathop{\rm cov}}}_{p}}\,({V_{n-1}},{V_{0}};\\{{Z_{k}}\\}_{k=1}^{m})}&{{{{\mathop{\rm cov}}}_{p}}\,({V_{n-1}},{V_{1}};\\{{Z_{k}}\\}_{k=1}^{m})}&\ldots&{{{{\mathop{\rm cov}}}_{p}}\,({V_{n-1}},{V_{n}};\\{{Z_{k}}\\}_{k=1}^{m})}\\\ {V_{0}}&{V_{1}}&\ldots&{V_{n}}\end{vmatrix}.$ ###### Theorem 15.6. Let $\\{{V_{k}}\\}_{k=0}^{n}$ be linearly independent variables and $\\{{X_{k}}\\}_{k=0}^{n}$ be their corresponding p-uncorrelated elements generated by $\\{{V_{k}}\\}_{k=0}^{n}$ in (15.38). If $\sum\limits_{k=0}^{n}{{a_{k}}{V_{k}}}={W_{n}}$ then ${W_{n}}=\sum\limits_{k=0}^{n}{\frac{{{{{\mathop{\rm cov}}}_{p}}\,({X_{k}},{W_{n}};\\{{Z_{k}}\\}_{k=1}^{m})}}{{{{{\mathop{\rm var}}}_{p}}\,({X_{k}};Z)}}{X_{k}}}\,.$ ###### Theorem 15.7. Let $\\{{X_{k}}\\}_{k=0}^{n}$ be p-uncorrelated variables satisfying the condition (15.37) and let $Y$ be arbitrary. 
Then ${{\mathop{\rm var}}_{p}}\,\left({Y-\sum\limits_{k=0}^{n}{\frac{{{{{\mathop{\rm cov}}}_{p}}\,({X_{k}},Y;\\{{Z_{k}}\\}_{k=1}^{m})}}{{{{{\mathop{\rm var}}}_{p}}\,({X_{k}};\\{{Z_{k}}\\}_{k=1}^{m})}}{X_{k}}}\,;\\{{Z_{k}}\\}_{k=1}^{m}}\right)\leq{{\mathop{\rm var}}_{p}}\,\left({Y-\sum\limits_{k=0}^{n}{{\alpha_{k}}{X_{k}}}\,;\\{{Z_{k}}\\}_{k=1}^{m}}\right),$ for any selection of constants $\\{{\alpha_{k}}\\}_{k=0}^{n}$. ###### Corollary 15.8. Under the conditions of the above theorem we have the following equality $\displaystyle{{\mathop{\rm var}}_{p}}\,\left({Y-\sum\limits_{k=0}^{n}{\frac{{{{{\mathop{\rm cov}}}_{p}}\,({X_{k}},Y;\\{{Z_{k}}\\}_{k=1}^{m})}}{{{{{\mathop{\rm var}}}_{p}}\,({X_{k}};\\{{Z_{k}}\\}_{k=1}^{m})}}{X_{k}}}\,;\\{{Z_{k}}\\}_{k=1}^{m}}\right)$ $\displaystyle\qquad\qquad\qquad\qquad={{\mathop{\rm var}}_{p}}\,(Y;\\{{Z_{k}}\\}_{k=1}^{m})-\sum\limits_{k=0}^{n}{\frac{{{\mathop{\rm cov}}_{p}^{2}({X_{k}},Y;\\{{Z_{k}}\\}_{k=1}^{m})}}{{{{{\mathop{\rm var}}}_{p}}\,({X_{k}};\\{{Z_{k}}\\}_{k=1}^{m})}}}$ $\displaystyle\qquad\qquad\qquad\qquad={{\mathop{\rm var}}_{p}}\,(Y;\\{{Z_{k}}\\}_{k=1}^{m})\left({1-\sum\limits_{k=0}^{n}{\rho_{p}^{2}\,({X_{k}},Y;\\{{Z_{k}}\\}_{k=1}^{m})}}\right),$ and as a result $\sum\limits_{k=0}^{n}{\rho_{p}^{2}\,({X_{k}},Y;\\{{Z_{k}}\\}_{k=1}^{m})}\leq 1.$ ###### Theorem 15.9. Let $\\{{V_{k}}\\}_{k=0}^{n}$ be linearly independent variables and $\\{{X_{k}}\\}_{k=0}^{n}$ be their corresponding p-uncorrelated elements generated by $\\{{V_{k}}\\}_{k=0}^{n}$ in (15.38). Then, for any selection of constants $\\{{\lambda_{k}}\\}_{k=0}^{n-1}$ we have ${{\mathop{\rm var}}_{p}}\,\left({{X_{n}}\,;\\{{Z_{k}}\\}_{k=1}^{m}}\right)\leq{{\mathop{\rm var}}_{p}}\,\left({{V_{n}}+\sum\limits_{k=0}^{n-1}{{\lambda_{k}}{V_{k}}}\,;\\{{Z_{k}}\\}_{k=1}^{m}}\right).$ ###### Corollary 15.10. Let $\\{{X_{k}}\\}_{k=0}^{n}$ be p-uncorrelated variables with respect to the fixed orthogonal variables $\\{{Z_{k}}\\}_{k=1}^{m}$. Then, for any variable $Y$ and every $j=0,1,...,n$ we have ${{\mathop{\rm cov}}_{p}}\,\left({Y-\sum\limits_{k=0}^{n}{\frac{{{{{\mathop{\rm cov}}}_{p}}\,({X_{k}},Y;\\{{Z_{k}}\\}_{k=1}^{m})}}{{{{{\mathop{\rm var}}}_{p}}\,({X_{k}};\\{{Z_{k}}\\}_{k=1}^{m})}}{X_{k}}},{X_{j}}\,;\\{{Z_{k}}\\}_{k=1}^{m}}\right)=0.$ ### 15.3. p-uncorrelated functions with respect to the fixed orthogonal functions Let $\left\\{{{X_{k}}={\Phi_{k}}(x)}\right\\}_{k=0}^{n}$, $Y=f(x)$ and the orthogonal set $\left\\{{{Z_{k}}={z_{k}}(x)}\right\\}_{k=1}^{m}$ be defined in a continuous space with a probability density function as ${P_{r}}\left({X=x}\right)=\frac{{w(x)}}{{\int_{\,a}^{b}{w(x)\,dx}}}.$ We say that $\left\\{{{\Phi_{k}}(x)}\right\\}_{k=0}^{\infty}$ are (weighted) p-uncorrelated functions with respect to the fixed orthogonal functions $\left\\{{{z_{k}}(x)}\right\\}_{k=1}^{m}$ if they satisfy the condition $\int_{\,a}^{b}{w(x)\,{\Phi_{i}}(x)\,{\Phi_{j}}(x)\,dx}-p\sum\limits_{k=1}^{m}{\frac{{\int_{\,a}^{b}{w(x)\,{\Phi_{i}}(x)\,{z_{k}}(x)\,dx}\,\int_{\,a}^{b}{w(x)\,{\Phi_{j}}(x)\,{z_{k}}(x)\,dx}}}{{\,\int_{\,a}^{b}{w(x)\,z_{k}^{2}(x)\,dx}}}}\\\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,=\left({\int_{\,a}^{b}{w(x)\,\Phi_{j}^{2}(x)\,dx}-p\sum\limits_{k=1}^{m}{\frac{{{{\left({\int_{\,a}^{b}{w(x)\,{\Phi_{j}}(x)\,{z_{k}}(x)\,dx}}\right)}^{2}}}}{{\,\int_{\,a}^{b}{w(x)\,z_{k}^{2}(x)\,dx}}}}}\right)\,{\delta_{i,j}},$ provided that $\int_{\,a}^{b}{w(x)\,{z_{i}}(x)\,{z_{j}}(x)\,dx}=\left({\int_{\,a}^{b}{w(x)\,z_{j}^{2}(x)\,dx}}\right){\delta_{i,j}}.$ ###### Remark 15.11. 
After deriving the above-mentioned p-uncorrelated functions, one will be able to define a sequence of orthogonal functions in the form ${\Phi_{n}}(x)-(1-\sqrt{1-p})\sum\limits_{k=1}^{m}{\frac{{\,\int_{\,a}^{b}{w(x)\,{\Phi_{n}}(x)\,{z_{k}}(x)\,dx}}}{{\,\int_{\,a}^{b}{w(x)\,z_{k}^{2}(x)\,dx}}}\,{z_{k}}(x)}={G_{n}}(x;p,\\{{z_{k}}(x)\\}_{k=1}^{m})\,,$ having the orthogonality property $\int_{\,a}^{b}{w(x)\,{G_{i}}(x;p,\\{{z_{k}}(x)\\}_{k=1}^{m}){G_{j}}(x;p,\\{{z_{k}}(x)\\}_{k=1}^{m})\,dx}\\\ =\left({\int_{\,a}^{b}{w(x)\,dx}}\right)\Big{(}{{{{\mathop{\rm var}}}_{p}}({\Phi_{j}}(x);\,\\{{z_{k}}(x)\\}_{k=1}^{m})}\Big{)}{\delta_{i,j}}.$ ### 15.4. p-uncorrelated vectors with respect to fixed orthogonal vectors Let ${\vec{A}_{m}}=({a_{1}},{a_{2}},...,{a_{m}})$ and ${\vec{B}_{m}}=({b_{1}},{b_{2}},...,{b_{m}})$ be two arbitrary vectors and $\left\\{{{{\vec{Z}}_{k,m}}=({z_{k,1}},{z_{k,2}},...,{z_{k,m}})}\right\\}_{k=1}^{l}$ be a set of fixed orthogonal vectors. It can be verified that $\displaystyle\left({{{\vec{A}}_{m}}-(1-\sqrt{1-p})\sum\limits_{k=1}^{l}{\frac{{{{\vec{A}}_{m}}.{{\vec{Z}}_{k,m}}}}{{{{\vec{Z}}_{k,m}}.{{\vec{Z}}_{k,m}}}}{{\vec{Z}}_{k,m}}}}\right).\left({{{\vec{B}}_{m}}-(1-\sqrt{1-p})\sum\limits_{k=1}^{l}{\frac{{{{\vec{B}}_{m}}.{{\vec{Z}}_{k,m}}}}{{{{\vec{Z}}_{k,m}}.{{\vec{Z}}_{k,m}}}}{{\vec{Z}}_{k,m}}}}\right)$ $\displaystyle\qquad\qquad\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,={{\vec{A}}_{m}}.{{\vec{B}}_{m}}-p\sum\limits_{k=1}^{l}{\frac{{({{\vec{A}}_{m}}.{{\vec{Z}}_{k,m}})({{\vec{B}}_{m}}.{{\vec{Z}}_{k,m}})}}{{{{\vec{Z}}_{k,m}}.{{\vec{Z}}_{k,m}}}}},$ only if ${\vec{Z}_{k,m}}.\,{\vec{Z}_{j,m}}=\left({{{\vec{Z}}_{j,m}}.\,{{\vec{Z}}_{j,m}}}\right)\,{\delta_{k,j}}.$ Final Remark. As we observed, this theory was established based on an algebraic inequality of type (1.6), i.e. $0\leq{{\mathop{\rm var}}_{1}}\,(X;Z)\leq{{\mathop{\rm var}}_{p}}\,(X;Z)\leq{{\mathop{\rm var}}_{0}}\,(X;Z)=E({X^{2}}).$ A notable point is that there is still another algebraic inequality that refines the above inequality and consequently improves the theory presented in this work. However, it contains various new definitions, concepts and calculations and therefore needs a much time to be completed. We hope inshallah to finish it soon. ## Acknowledgments This work has been supported by the Alexander von Humboldt Foundation under the grant number: Ref 3.4 - IRN - 1128637 - GF-E. ## References * [1] I. Area, Hypergeometric multivariate orthogonal polynomials, 2nd AIMS-Volkswagen Stiftung Workshop, (2020) 165-193, Springer International Publishing. * [2] A. Bjorck, (1996). Numerical methods for least squares problems. SIAM. * [3] C. Brezinski, (1991) Biorthogonality and its applications to numerical analysis, Taylor and Francis. * [4] T. S. Chichara (1978), An introduction to orthogonal polynomials, Gordon and Breach, New York. * [5] P. J. Davis, (1975), Interpolation and approximation, Dover books on advanced mathematics, Courier Corporation. * [6] S. van de Geer, A new approach to least squares estimation with applications, Annals of Statistics, 15 (1987) 587-602. * [7] L. Guo, A. Narayan, T. Zhou, Constructing least squares polynomial approximations, SIAM Review 62 (2020) 483-508. * [8] A. Iserles, S. P. Norsett, On the theory of biorthogonal polynomials, Transaction on. American Mathematical Society, 306 (1988) 455-474. * [9] T. Kariya, H. Kurata, (2004), Generalized least squares, Hoboken: Wiley. * [10] W. Koepf, Power series in computer algebra. 
Journal of Symbolic Computation, 13 (1992) 581-603. * [11] W. Koepf, (2014), Hypergeometric summation: An algorithmic approach to summation and special function identities, Springer-Verlag, London. * [12] P. Lancaster, K. Salkauskas, Surfaces generated by moving least squares methods, Mathematics of Computation, 37 (1981) 141-158. * [13] A. M. Legendre, Adrien-Marie (1805), Nouvelles méthodes pour la détermination des orbites des comètes: New methods for the determination of the orbits of comets (in French), Paris: F. Didot. * [14] M. Masjed-Jamei, G. V. Milovanovic, Weighted Hermite quadrature rules, Electronic Transactions on Numerical Analysis, 45 (2016) 476-498. * [15] M. Masjed-Jamei, A basic class of symmetric orthogonal polynomials using the extended Sturm–Liouville theorem for symmetric functions, Journal of Mathematical Analysis and Applications, 325 (2007) 753-775. * [16] M. Masjed-Jamei, On constructing new interpolation formulas using linear operators and an operator type of quadrature rules, Journal of Computational and Applied Mathematics, 216 (2008) 307-318. * [17] M. Masjed-Jamei, Biorthogonal exponential sequences with weight function $\exp(ax^{2}+ibx)$ on the real line and an orthogonal sequence of trigonometric functions, Proceedings of the American Mathematical Society, 136 (2008) 409-417. * [18] M. Masjed-Jamei, A functional generalization of the Cauchy-Schwarz inequality and some subclasses, Applied Mathematics Letters, 22 (2009) 1335-1339. * [19] M. Masjed-Jamei, A linear constructive approximation for integrable functions and a parametric quadrature model based on a generalization of Ostrowski-Grüss type inequalities, Electronic Transactions on Numerical Analysis, 38 (2011) 218-232. * [20] M. Masjed-Jamei, A certain class of weighted approximations for integrable functions and applications, Numerical Functional Analysis and Optimization, 34(2013) 1224-1244. * [21] M. Masjed-Jamei, (2020), Special functions and generalized Sturm-Liouville problems. Switzerland, Springer. * [22] G.V. Milovanovic, Some orthogonal polynomials on the finite interval and Gaussian quadrature rules for fractional Riemann-Liouville integrals, Mathematical Methods in the Applied Sciences 44 (2021) 493-516. * [23] G.V. Milovanovic, On the Markov extremal problem in the L2-norm with the classical weight functions, Journal of Nonlinear and Convex Analysis, 23 (2022), 1179-1212. * [24] G.V. Milovanovic, Orthogonality on the semicircle: old and new results, Electronic Transactions on Numerical Analysis, 59 (2023), 99-115. * [25] T. Park, G. Casella, The Bayesian Lasso, Journal of the American Statistical Association, 103 (2008) 681-686. * [26] M. J. D. Powell (1981), Approximation theory and methods, Cambridge Univ. Press, Cambridge. * [27] C. R. Rao, H. Toutenburg, et al. (2008), Linear models: least squares and alternatives. Springer Series in Statistics, Berlin: Springer. * [28] D. A. Shepard, Two-dimensional interpolation function for irregularly spaced data, Proceedings of the 1968, 23rd ACM National Conference, ACM Press, New York, (1968) 517-524. * [29] S. M. Stigler, Gauss and the invention of least squares, Annals of. Statistics. 9 (1981) 465-474. * [30] J. Wolberg, (2005), Data analysis using the method of least squares: extracting the most information from experiments. Berlin: Springer.
# Effect of Centrifugal Force on Transmission Spectroscopy of Exoplanet Atmospheres

Agnibha Banerjee,1 Joanna K. Barstow,1 Carole A. Haswell1 and Stephen R. Lewis1 1School of Physical Sciences, The Open University, Milton Keynes, MK7 6AA, UK E-mail<EMAIL_ADDRESS> (Accepted XXX. Received YYY; in original form ZZZ)

###### Abstract

Transmission spectroscopy is one of the most successful methods of learning about exoplanet atmospheres. The process of retrievals using transmission spectroscopy consists of creating numerous forward models and comparing them to observations to solve the inverse problem of constraining the atmospheric properties of exoplanets. We explore the impact of one simplifying assumption commonly employed by forward models of transiting exoplanets: namely that the planet can be treated as an isolated, non-rotating spherical body. The centrifugal acceleration due to a planet’s rotation opposes the gravitational pull on a planet’s atmosphere and increases its scale height. Conventional forward models used for retrievals generally do not include this effect. We find that atmospheric retrievals produce significantly different results for close-in planets with low gravity when this assumption is removed, e.g., differences between true and retrieved values of gas abundances greater than 1$\sigma$ for a simulated planet analogous to WASP-19 b. We recommend that the correction to the atmospheric scale height due to this effect be taken into account for the analysis of high precision transmission spectra of exoplanets in the future, most immediately JWST Cycle 1 targets WASP-19 b and WASP-121 b.

###### keywords: methods: analytical – techniques: spectroscopic – exoplanets – radiative transfer

## 1 Introduction

The study of exoplanet atmospheres through transmission spectroscopy using retrievals is now a well-established method to deduce their composition and thermal structure (Charbonneau et al., 2002; Tinetti et al., 2007; Sing et al., 2011; Kreidberg et al., 2014). Robust detections of atmospheric species such as Na, K, $\textrm{H}_{2}\textrm{O}$ (Wakeford et al., 2018a) have already been made with previous studies using space-based telescopes such as the Hubble Space Telescope and the Spitzer Space Telescope. Evidence of some carbon-bearing species such as $\textrm{CO}_{2}$ (Wakeford et al., 2018b) has also been found. Recently, definitive evidence has been found to confirm the presence of $\textrm{CO}_{2}$ in the atmosphere of WASP-39 b (The JWST Transiting Exoplanet Community Early Release Science Team et al., 2022) using JWST. With the increase in quality of spectral data owing to JWST, the analysis of much finer aspects of these atmospheres is now possible. This jump in signal-to-noise ratio will warrant the removal of assumptions that have so far been considered routine. In this letter, we explore one such assumption and evaluate the degree to which it impacts our inferences regarding atmospheric properties. When a planet spins, the centrifugal force due to rotation acts to oppose the gravitational force, resulting in a slight reduction in the effective gravitational acceleration acting on the atmosphere. The gravitational equipotentials and hence the effective gravity in a two-body system like a planet orbiting a star have been well studied in the context of close binary stars, see e.g. 
Frank et al. (2002), and this Roche geometry has been applied to exoplanets in Busuttil (2017) and Berardo & de Wit (2022). Atmospheric scale height is defined as the altitude at any point in the atmosphere over which the atmospheric pressure reduces by a factor of $e$, and is a quantity often used to measure the extent of an atmosphere. The scale height is inversely proportional to effective gravitational acceleration and thus increases when the centrifugal force is included in the effective gravity calculation. The modified scale height, in turn, affects the transmission spectra obtained from it by increasing the observable feature amplitudes. Whilst the effect of rotation on the effective gravity is well known and commonly accounted for in models of the fast-rotating Solar System giant planets (Lindal et al., 1985; Irwin, 2009), it has not generally been included in retrieval models for exoplanets. In this letter, we analyse the impact of reduced effective gravity on synthetic transmission spectra for a range of hypothetical planets and also examine the resulting influence on retrieved atmospheric properties. We first describe the atmospheric retrieval code, NEMESIS (Non-linear optimal Estimator for MultivariatE spectral analySIS) (Irwin et al., 2008). We then demonstrate the difference in transmission spectra with and without a latitudinally- averaged correction due to centrifugal forces, and perform two retrievals on the transmission spectrum generated with the correction included: one with an atmospheric model that includes the correction, and one without. We find that the retrieval model without the centrifugal correction is unable to recover the input atmospheric state to within 1$\sigma$. Finally, we perform a parameter space exploration of planetary bulk density, equilibrium temperature, stellar radius and orbital period to constrain the regions in which this correction creates a significant difference in the retrieval results. ## 2 Methods ### 2.1 The NEMESIS Code We use the NEMESIS algorithm to create synthetic spectra and also to generate forward models for the retrievals. Originally developed for the characterisation of the atmospheres of Solar System planets, NEMESIS has since been modified for use in studying exoplanet atmospheres (Lee et al., 2012). NEMESIS uses a correlated-k radiative transfer model (Lacis & Oinas, 1991). We use the nested sampling algorithm option (Krissansen-Totton et al., 2018) to perform retrievals as it allows the exploration of non-Gaussian posterior distributions. We use the pymultinest package (Buchner et al., 2014) based on the MultiNest (Skilling, 2004; Feroz et al., 2009) algorithm, which is widely used for retrieval of exoplanet transmission spectra. Figure 1: A schematic cross section of a planet along the plane of the terminator showing the accelerations acting on an atmospheric parcel (light grey). On the left half of the diagram, the atmospheric extent corresponding to the scenario with rotation included is illustrated by an orange annulus (outer). The atmospheric scale heights in the diagram have been exaggerated for visibility. The atmospheric extent corresponding to the scenario without the rotation included is illustrated by a blue annulus (inner). On the right half of the diagram, an exaggerated elliptical atmospheric annulus is shown. 
The centrifugal acceleration acting on the parcel is perpendicular to the axis of rotation and marked as $\omega^{2}R\cos\theta$, and the radial component of this acceleration which opposes the gravitational acceleration $g$ is marked as $\omega^{2}R\cos^{2}\theta$. In the quantitative parts of this paper, we approximate the atmospheric annulus to be circular. ### 2.2 Analytic Estimate of the Magnitude of the Centrifugal Effect As an initial assessment of the magnitude of the change in the transmission spectrum due to the scale height being modified by rotation, a synthetic spectrum was created using NEMESIS with the modified value of the acceleration due to gravity. Figure 2 demonstrates the difference between the simulated spectra with and without the effect of centrifugal acceleration included. The atmospheric model used is described in detail in Section 2.3. In transmission spectroscopy, we only sample the annulus of the atmosphere around the terminator of an exoplanet at superior conjunction of the planet, i.e. during planet transit. Though some close-in planets which are particularly attractive targets for transmission spectroscopy are significantly non-spherical (see e.g. Fig. 1 of Staab et al., 2017), the cross-section viewed during transit has minimal deviation from a circular cross-section. Thus we can treat the annulus as being bounded by concentric circles. We discuss the implications of non-sphericity further in Section 4.1. We also assume that the planet rotates as a rigid body. The acceleration caused by the apparent outward force due to rotation is perpendicular to the axis of rotation and is computed as $\omega^{2}R\cos\theta$, where $\omega$ is the angular velocity, $R$ is the radius of the planet, and $\theta$ is the latitude (Figure 1). The radially outwards component of this acceleration is, therefore $\omega^{2}R\cos^{2}\theta$. We assume that the planet is tidally locked, and thus the orbital period is the same as the rotation period, and the axis of rotation of the planet is perpendicular to the orbital plane (Heller et al., 2011). An average value of net gravitational acceleration over the annulus relevant to transmission spectroscopy is then obtained by integrating over latitudes from the equator to the pole and then dividing by the range of latitudes covered. $\begin{split}g^{\prime}(\theta,R^{\prime})&=g(R^{\prime})-\omega^{2}R^{\prime}\cos^{2}\theta\\\ g_{\rm{av}}^{\prime}(R^{\prime})&=\frac{\int_{0}^{\frac{\pi}{2}}g^{\prime}(\theta,R^{\prime})\textrm{d}\theta}{\int_{0}^{\frac{\pi}{2}}\textrm{d}\theta}\\\ g_{\rm{av}}^{\prime}(R^{\prime})&=\frac{\int_{0}^{\frac{\pi}{2}}\left(g(R^{\prime})-\omega^{2}R^{\prime}\cos^{2}\theta\right)\textrm{d}\theta}{\frac{\pi}{2}}\\\ &=g(R^{\prime})-\frac{2\omega^{2}R^{\prime}}{\pi}\int_{0}^{\frac{\pi}{2}}\cos^{2}\theta\textrm{d}\theta\\\ &=g(R^{\prime})-\frac{1}{2}\omega^{2}R^{\prime}\end{split}$ This corrected value of $g$ impacts the calculated atmospheric scale height in the forward model. Scale height $H$ is given by $H=\frac{kT}{\mu{}g_{\rm{av}}^{\prime}(R^{\prime})}$ where $k$ is the Boltzmann constant, $T$ is the atmospheric temperature and $\mu$ is the mean molecular weight of the atmosphere. Since the scale height varies as the inverse of gravitational acceleration, it is increased when the modified value of gravitational acceleration is used. The extent of atmospheric features in the transmission spectrum is proportional to $H$, so the increase in $H$ produces a stretch in the transmission spectrum (Figure 2). 
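To get a feel for the size of this correction, the short sketch below evaluates the latitudinally averaged effective gravity and the resulting fractional increase in scale height; it assumes numpy, standard physical constants, the WASP-19 b-like parameters quoted in Section 2.3, tidal locking, and takes the unmodified gravity simply as $GM_{\rm p}/R_{\rm p}^{2}$.

```python
import numpy as np

G = 6.674e-11                        # m^3 kg^-1 s^-2
R_JUP, M_JUP = 6.9911e7, 1.898e27    # m, kg

# WASP-19 b-like parameters (Section 2.3)
R_p = 1.415 * R_JUP                  # planetary radius
M_p = 1.154 * M_JUP                  # planetary mass
P_rot = 0.78 * 86400.0               # rotation period in s (tidally locked)

omega = 2.0 * np.pi / P_rot
g = G * M_p / R_p**2                 # gravity with no rotation
g_av = g - 0.5 * omega**2 * R_p      # latitudinally averaged effective gravity

# The scale height H varies as 1/g, so the fractional stretch of the spectrum is
frac = g / g_av - 1.0
print(f"g = {g:.2f} m s^-2, g_av = {g_av:.2f} m s^-2, dH/H = {100 * frac:.1f} %")
```

For these parameters the averaged correction lowers the effective gravity, and hence raises the scale height, at the level of a few per cent.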
### 2.3 Retrieval Set-Up

The NEMESIS model used assumes a cloud-free, isothermal, well-mixed atmosphere containing $\textrm{H}_{2}\textrm{O}$ and $\textrm{CO}_{2}$ as the spectrally active trace gases and a background made up of $\textrm{H}_{2}$ and He (He:$\textrm{H}_{2}$ = 0.17). The $\textrm{H}_{2}\textrm{O}$ k-table was obtained from Polyansky et al. (2018) and the $\textrm{CO}_{2}$ k-table was obtained from Yurchenko et al. (2020), both tables generated as in Chubb et al. (2021). Collision-induced absorption from $\textrm{H}_{2}$ and He is from Borysow (2002). A simplistic atmospheric model was adopted as we wish to isolate the effect of rotation from all other complexities. The model goes to a maximum pressure of 10 bar, and the planet radius is defined as the radius at 100 mbar. Priors and ranges for each parameter are included in Table 1. The planet used is an analogue of WASP-19 b, with a radius of 1.415 $\textrm{R}_{\textrm{Jup}}$, a mass of 1.154 $\textrm{M}_{\textrm{Jup}}$, an equilibrium temperature of 2113.0 K and an orbital period of 0.78 days (Cortés-Zuleta et al., 2020). It is used to demonstrate the impact of rotation on retrievals as it has a large atmospheric scale height. An error envelope of 30 ppm was used over the simulated spectrum which included the modification in $g$ due to rotation. A spectral range of 1.5 $\mu$m – 5.0 $\mu$m was used, with a spectral resolution ($\lambda/\Delta\lambda$) ranging between 60 and 200, corresponding to the binned resolution for the JWST NIRSpec PRISM instrument used in The JWST Transiting Exoplanet Community Early Release Science Team et al. (2022). No additional scatter noise was added to the simulated spectrum.

Figure 2: The difference between the simulated spectra with (orange) and without (blue) the effect of centrifugal acceleration included. The planet used is a WASP-19 b analogue, with parameters as described in Section 2.3. The forward model with this effect taken into consideration has a more extended atmosphere and, thus, a higher value for the observed planetary radius at each wavelength. The increase in transit depth is a scaled version of the base spectrum and, thus, is wavelength dependent.

Parameter | Prior Type | Prior Range | True Value | Retrieved Value (With Spin) | Retrieved Value (No Spin)
---|---|---|---|---|---
$\log(\textrm{H}_{2}\textrm{O})$ | log-uniform | -12, -1 | -2.00 | $-2.020^{+0.054}_{-0.061}$ | $-2.373^{+0.129}_{-0.229}$
$\log(\textrm{CO}_{2})$ | log-uniform | -12, -1 | -4.00 | $-4.024^{+0.065}_{-0.071}$ | $-4.380^{+0.142}_{-0.246}$
$\textrm{T}_{\textrm{iso}}$ (K) | uniform | 1600.0, 2400.0 | 2113.00 | $2108.95^{+27.55}_{-27.91}$ | $2108.35^{+27.51}_{-28.21}$
Radius ($\textrm{R}_{\textrm{Jup}}$) | uniform | 1.1, 1.8 | 1.415 | $1.415^{+0.001}_{-0.001}$ | $1.421^{+0.003}_{-0.002}$

Table 1: The true values used for the simulated spectrum and the retrieved values for the cases with and without planetary rotation, obtained from the sample retrieval. The values reported are the median and the $1\sigma$ errors.

## 3 Results

### 3.1 Retrieval

Two retrievals were performed on the synthetic spectrum including centrifugal effects, which is shown in orange in Fig. 2; one including centrifugal effects in the retrieval model, and one assuming zero rotation. Table 1 compares the retrieved values with the inputs for each parameter. 
Both retrievals produced an equally good fit to the input spectrum, but the retrieved parameters varied between them, and the retrieval excluding rotational effects does not correctly recover the input atmospheric state. The retrieved gas abundances are $\sim 0.3\textrm{dex}$ (or $\sim 3\sigma$) lower than their true values when the retrieval is performed without the corrections for spin. The posterior distributions for the retrieval excluding spin also have increased skewness. In the scenario without rotation included, a higher value is retrieved for the radius of the planet which compensates for the increased transit depth, actually caused by lower gravity and higher scale height due to rotation. However, increasing the radius lifts up the spectrum while also stretching feature amplitudes. The gas abundances in this scenario are retrieved to be lower than the truth to compensate for this effect and reduce the feature amplitudes to match the simulated spectrum. This test indicates that retrievals that do not include the centrifugal effect can be apparently successful, whilst producing spurious results. Figure 3: A corner plot showing the difference in retrieved parameters with (orange) and without (blue) the correction to $g$ due to planetary rotation included. A significant shift can be seen in the retrieved gas abundances and planetary radius. ### 3.2 Parameter Space Exploration We used a grid of planetary parameters to test which planets would be most affected by the inclusion of the modified effective gravity in the calculation of the scale height. The planetary parameters varied were the planet’s radius, equilibrium temperature, stellar radius and orbital period. The planetary radius was varied from 1.0 to 2.0 $\textrm{R}_{\textrm{Jup}}$, the equilibrium temperature was varied from 1000 K to 2000 K, the stellar radius was varied from 0.8 $\textrm{R}_{\sun}$ to 1.5 $\textrm{R}_{\sun}$, and the rotation period was varied from 0.6 days to 4.5 days. The planetary mass was set to 0.8 $\textrm{M}_{\textrm{Jup}}$ 111$\textrm{R}_{\textrm{Jup}}$ = 69911 km, $\textrm{M}_{\textrm{Jup}}$ = 1.898 $\times 10^{27}$ kg, $\textrm{R}_{\sun}$ = 695700 km. All other parameters used in the atmospheric forward model setup were kept the same, as described in Section 2.3. The equilibrium temperature divided by bulk density and the square of the stellar radius (Similar to the Transmission Spectroscopy Metric, see Kempton et al. (2018)) was used as a metric for how inflated the planet’s atmosphere is. Denser planets have a greater gravitational pull and combined with lower equilibrium temperatures, they are expected to have lower atmospheric scale heights, and vice versa. A greater value of stellar radius also contributes to a lower relative transit depth. In Figure 4, we illustrate the magnitude of the centrifugal correction over a range of planetary parameters. The synthetic spectra with and without rotation were compared at each grid point, and the average difference between them was calculated. Bounding curves to denote values of this difference at 30, 60 and 100 ppm (Barstow et al., 2017; Taylor, 2022) respectively, are plotted. We can distinguish the change in the transmission spectrum due to rotation from random noise at either 30, 60 or 100 ppm, for the planets that lie below and to the right of each bounding curve. Real planets are overplotted on this figure, with the planetary radius, mass, equilibrium temperature and host star radius for each obtained from the NASA Exoplanet Archive. 
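For orientation, the horizontal axis of Figure 4 can be reproduced with the simple metric described above (equilibrium temperature divided by bulk density and by the square of the stellar radius). The sketch below computes it for an arbitrary grid point; the normalisation and units are arbitrary, and the example values are placeholders rather than entries from the actual grid.

```python
import numpy as np

R_JUP, M_JUP, R_SUN = 69911e3, 1.898e27, 695700e3   # [m], [kg], [m]

def inflation_metric(T_eq, M_p_mjup, R_p_rjup, R_star_rsun):
    """T_eq / (bulk density * stellar radius^2); arbitrary normalisation."""
    R_p = R_p_rjup * R_JUP
    rho = M_p_mjup * M_JUP / (4.0 / 3.0 * np.pi * R_p**3)   # [kg m^-3]
    return T_eq / (rho * (R_star_rsun * R_SUN)**2)

# Placeholder grid point: an inflated planet around a Sun-like star
print(inflation_metric(T_eq=2000.0, M_p_mjup=0.8, R_p_rjup=1.8, R_star_rsun=1.0))
```

Planets with larger values of this metric and shorter orbital periods lie towards the lower right of Figure 4, where the rotational offset is largest.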
Two of the exoplanets which would show significant differences in retrieved parameters with rotation included are Cycle 1 JWST targets: WASP-19 b and WASP-121 b. The exact magnitude of the effect will depend not only on the bulk properties of the planet but also on its atmospheric composition, and therefore the location of a particular planet in Figure 4 is intended as a guide rather than an exact statement of how large the effect will be in each case. Figure 4: Orbital period versus a metric of planet atmosphere inflation with real planets plotted. Lines indicate the boundaries where the offset between transmission spectra with and without rotation reach values of 30 ppm, 60 ppm, and 100 ppm. The change in the transmission spectrum due to rotation is strongest towards the lower right. Where the effect of rotation exceeds the 30 ppm limit, the planets are marked with their names. Cycle 1 JWST targets WASP-19 b and WASP-121 b are also marked and highlighted in orange. ## 4 Discussion ### 4.1 Dependence of Centrifugal Acceleration on Latitude and Planetary Oblateness We have not accounted for the variation of centrifugal acceleration with latitude in this process. To demonstrate that taking the latitudinal average is a reasonable approximation, we generated a transmission spectrum for a planet segmented into 20 latitudinal sections. The effective gravitational acceleration for each segment increases towards both poles as the perpendicular distance from the rotation axis decreases. However, the difference between this transmission spectrum and one generated without considering the latitudinal variation for realistic values of the orbital period (1 day), planetary mass (1 $\textrm{M}_{\textrm{Jup}}$), planetary radius (1 $\textrm{R}_{\textrm{Jup}}$), and equilibrium temperature (2000 K) is of the order of 5 ppm. We have also not accounted for the oblateness of a planet. Recent works (Berardo & de Wit, 2022; Busuttil, 2017) have considered the effect of the oblateness of a planet on the density derived from the transit lightcurve. Because the planet is elongated in the direction of the star, along the line of sight at transit, the planet volume is greater than that inferred from the planet radius derived from transit light curves. This leads to the planet density, and hence the gravity acting on the annulus of atmosphere, being over-estimated. The most strongly affected exoplanets are those which are closest to Roche lobe filling, and tend to be short period. In addition, when taking the depth in full transit, the oblateness of the planet does not need to be considered; however, as discussed in Grant & Wakeford (2022), the oblateness of a planet does affect the shape of the transit curve at ingress and egress, especially at large impact parameters. Routines to fit the transmission spectrum as a function of $\theta$ (see Fig 1) have recently been developed (Grant & Wakeford, 2022), and the oblateness of the planet will need to be considered for this type of analysis. Accounting for the oblateness of the atmosphere would serve to further decrease the gravity from the value we calculate above. Our quantitative estimates of the magnitude of the effect of rotation are therefore conservative, although we do not expect them to deviate from reality by more than a few ppm. ### 4.2 Tidal Locking and Rotation Period The type of planets most affected by the centrifugal effect are ultra short period hot Jupiters. 
We have assumed that these planets are tidally locked, because for these planets, e.g., a Jupiter-like planet orbiting a Sun-like star with a 1 day orbit, the tidal locking timescale ($\sim$10 Myr) is much shorter than the estimated ages of these systems (>1 Gyr). ## 5 Conclusion In this work, we have analysed the effect of the rotation of an exoplanet on atmospheric retrievals using transmission spectroscopy. We find that for low- density, fast-spinning planets (planets on the lower right section of Figure 4), the centrifugal effect increases the spectroscopic transit depth on the order of 10—100ppm. The combined effect of decreased surface gravity and increased centrifugal acceleration causes the increase in scale height due to rotation to be most prominent on these planets. The difference in the transmission spectrum with and without rotation is more than 30 ppm for the planets labelled with their respective names in the lower right section. Ignoring this effect in our models may lead to retrieved values of gas abundances or temperatures that are significantly different from the true values. Table 1 demonstrates that the abundances derived for $\textrm{H}_{2}\textrm{O}$ and $\textrm{CO}_{2}$ are both displaced from their true values by $\sim 0.3\textrm{dex}$ (or $\sim 3\sigma$) when rotation is not included in the models used for retrieval. As the feature amplitudes are increased by the inclusion of the centrifugal forces, the resulting transmission spectrum mimics the properties of one produced by a lighter and hotter atmosphere without centrifugal forces. We also find that the effect, while mostly grey, is a scaled version of the actual transmission spectrum. We have not included effects such as superrotation or drag of the atmosphere caused by the spinning of the planet. As the quality of data and speed of computational methods improve, more realistic atmospheric models will be required to get a detailed understanding of exoplanet atmospheres. In conclusion, we recommend that our proposed correction to scale heights, which does not add any significant computational time to the retrievals, should be considered for forward models and atmospheric retrievals of gas- giant exoplanets with orbital periods under 3.0 days. ## Acknowledgements AB is supported by a PhD studentship funded by STFC and The Open University. JKB is supported by an STFC Ernest Rutherford Fellowship, grant number ST/T004479/1. CAH is supported by STFC grants ST/T000295/1 and ST/X001164/1. SRL is supported by UKSA grants ST/W002949/1 and ST/V005332/1. We thank the anonymous referee for a very helpful and constructive review. The NEMESIS code is open-source and can be found at: https://nemesiscode.github.io/. The k-tables used for retrievals can be obtained from the Exomol (Tennyson et al., 2016) database at: https://www.exomol.com/. This research has made use of the NASA Exoplanet Archive, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program. ## Data Availability The grid of forward models generated for the parameter space exploration (Figure 4) can be found at: https://github.com/riobanerjee/spinny-stuff. ## References * Barstow et al. (2017) Barstow J. K., Aigrain S., Irwin P. G. J., Sing D. K., 2017, The Astrophysical Journal, 834, 50 * Berardo & de Wit (2022) Berardo D., de Wit J., 2022, ApJ, 935, 178 * Borysow (2002) Borysow A., 2002, Astronomy & Astrophysics, 390, 779–782 * Buchner et al. 
(2014) Buchner J., et al., 2014, Astronomy & Astrophysics, 564, A125 * Busuttil (2017) Busuttil R., 2017, PhD thesis, Open University Milton Keynes, UK * Charbonneau et al. (2002) Charbonneau D., Brown T. M., Noyes R. W., Gilliland R. L., 2002, ApJ, 568, 377 * Chubb et al. (2021) Chubb K. L., et al., 2021, A&A, 646, A21 * Cortés-Zuleta et al. (2020) Cortés-Zuleta P., Rojo P., Wang S., Hinse T. C., Hoyer S., Sanhueza B., Correa-Amaro P., Albornoz J., 2020, A&A, 636, A98 * Feroz et al. (2009) Feroz F., Hobson M. P., Bridges M., 2009, MNRAS, 398, 1601 * Frank et al. (2002) Frank J., King A., Raine D., 2002, Accretion Power in Astrophysics, 3 edn. Cambridge University Press, doi:10.1017/CBO9781139164245 * Grant & Wakeford (2022) Grant D., Wakeford H. R., 2022, arXiv e-prints, p. arXiv:2212.07294 * Heller et al. (2011) Heller R., Barnes R., Leconte J., 2011, Origins of Life and Evolution of the Biosphere, 41, 539 * Irwin (2009) Irwin P. G. J., 2009, Giant Planets of Our Solar System. Springer Berlin, Heidelberg, doi:10.1007/978-3-540-85158-5 * Irwin et al. (2008) Irwin P., et al., 2008, Journal of Quantitative Spectroscopy and Radiative Transfer, 109, 1136 * Kempton et al. (2018) Kempton E. M. R., et al., 2018, PASP, 130, 114401 * Kreidberg et al. (2014) Kreidberg L., et al., 2014, Nature, 505, 69 * Krissansen-Totton et al. (2018) Krissansen-Totton J., Garland R., Irwin P., Catling D. C., 2018, AJ, 156, 114 * Lacis & Oinas (1991) Lacis A. A., Oinas V., 1991, J. Geophys. Res., 96, 9027 * Lee et al. (2012) Lee J. M., Fletcher L. N., Irwin P. G. J., 2012, MNRAS, 420, 170 * Lindal et al. (1985) Lindal G. F., Sweetnam D. N., Eshleman V. R., 1985, AJ, 90, 1136 * Polyansky et al. (2018) Polyansky O. L., Kyuberis A. A., Zobov N. F., Tennyson J., Yurchenko S. N., Lodi L., 2018, MNRAS, 480, 2597 * Sing et al. (2011) Sing D. K., et al., 2011, A&A, 527, A73 * Skilling (2004) Skilling J., 2004, AIP Conference Proceedings, 735, 395 * Staab et al. (2017) Staab D., Haswell C. A., Smith G. D., Fossati L., Barnes J. R., Busuttil R., Jenkins J. S., 2017, MNRAS, 466, 738 * Taylor (2022) Taylor J., 2022, Monthly Notices of the Royal Astronomical Society: Letters, 513, L20 * Tennyson et al. (2016) Tennyson J., et al., 2016, Journal of Molecular Spectroscopy, 327, 73 * The JWST Transiting Exoplanet Community Early Release Science Team et al. (2022) The JWST Transiting Exoplanet Community Early Release Science Team et al., 2022, Nature * Tinetti et al. (2007) Tinetti G., et al., 2007, Nature, 448, 169–171 * Wakeford et al. (2018a) Wakeford H. R., et al., 2018a, AJ, 155, 29 * Wakeford et al. (2018b) Wakeford H. R., et al., 2018b, AJ, 155, 29 * Yurchenko et al. (2020) Yurchenko S. N., Mellor T. M., Freedman R. S., Tennyson J., 2020, MNRAS, 496, 5282
# Diffuse $\gamma$-ray emission around the massive star forming region of Carina Nebula Complex Ting-Ting Ge1, Xiao-Na Sun1, Rui-Zhi Yang234, Yun-Feng Liang1, En-Wei Liang1 1Guangxi Key Laboratory for Relativistic Astrophysics, School of Physical Science and Technology, Guangxi University, Nanning 530004, China 2Department of Astronomy, School of Physical Sciences, University of Science and Technology of China, Hefei, Anhui 230026, China 3CAS Key Labrotory for Research in Galaxies and Cosmology, University of Science and Technology of China, Hefei, Anhui 230026, China 4School of Astronomy and Space Science, University of Science and Technology of China, Hefei, Anhui 230026, China <EMAIL_ADDRESS> ###### Abstract We report the Fermi Large Area Telescope (Fermi-LAT) detection of the $\gamma$-ray emission toward the massive star forming region of Carina Nebula Complex (CNC). Using the latest source catalog and diffuse background models, we found that the GeV $\gamma$-ray emission in this region can be resolved into three different components. The GeV $\gamma$-ray emission from the central point source is considered to originate from the $\eta$ Carina ($\eta$ Car). We further found the diffuse GeV $\gamma$-ray emission around the CNC which can be modelled by two Gaussian disks with radii of 0.4∘ (region A) and 0.75∘ (region B), respectively. The GeV $\gamma$-ray emission from both the regions A and B have good spatial consistency with the derived molecular gas in projection on the sky. The GeV $\gamma$-ray emission of region A reveals a characteristic spectral shape of the pion-decay process, which indicates that the $\gamma$-rays are produced by the interactions of hadronic cosmic rays with ambient gas. The $\gamma$-rays spectrum of region B has a hard photon index of 2.12$\pm 0.02$, which is similar to other young massive star clusters. We argue that the diffuse GeV $\gamma$-ray emission in region A and region B likely originate from the interaction of accelerated protons in clusters with the ambient gas. ###### keywords: cosmic rays – gamma-rays: ISM - open clusters and associations: individual: CNC ††pubyear: 2022††pagerange: Diffuse $\gamma$-ray emission around the massive star forming region of Carina Nebula Complex–References ## 1 Introduction The origin of cosmic rays (CRs) in the Milky Way is still a mystery. Supernova remnants (SNRs) have long been considered as the main acceleration sites of Galactic CRs (Baade & Zwicky, 1934). Moreover, growing evidence suggests that the young massive star clusters (YMCs) play an important role in accelerating Galactic CRs. Several such systems have been identified, e.g., Cygnus cocoon (Ackermann et al., 2011; Aharonian et al., 2019), Westerlund 1 (Abramowski et al., 2012), Westerlund 2 (Yang et al., 2018), NGC 3603 (Yang & Aharonian, 2017), 30 Dor C (H.E.S.S. Collaboration et al., 2015), RSGC 1 (Sun et al., 2020a), W40 (Sun et al., 2020b), Mc20 (Sun et al., 2022), and NGC 6618 (Liu et al., 2022). CNC is one of the most active and nearest massive star forming regions in our Galaxy. It is located in the Carina spiral arm (Vallée, 2014) with the distance of $\sim$2.3 kpc (Smith & Brooks, 2007). It contains 8 open clusters with more than 66 O-type stars, 3 Wolf-Rayet (WR) stars, and the peculiar object of the Luminous Blue Variable (LBV) $\eta$ Car (Smith, 2008). 
Potential particle acceleration sites in the CNC include massive binary systems (e.g., $\eta$ Car), massive star clusters (e.g., Tr 14, 15, 16), and possibly some unrecognized SNRs (Smith et al., 2000). The central region of CNC mainly consists of the young star clusters Tr 14, 15, and 16. The northwestern part of CNC contains the prominent ionized hydrogen (H ii) region Gum 31 around the very young ($\sim$1-2 Myr) stellar cluster NGC 3324, and the oldest ($\sim$8-10 Myr) cluster NGC 3293 (Göppl & Preibisch, 2022; Preibisch et al., 2017). Tr 14 is one of the most extensively studied young ($\sim$1 Myr) massive clusters in our Galaxy. It contains no less than 13 O-type stars, and its total mass is estimated to be $1\times 10^{4}$$M_{\odot}$ (Ascenso et al., 2007). Tr 14 may be 1-2 Myr younger than Tr 16 (Smith, 2006), and is closer to its associated molecular cloud than Tr 16 (Fujita et al., 2021). Tr 16 includes 42 O-type stars. It is well known for the presence of $\eta$ Car, which is a massive, variable binary star. $\eta$ Car is composed of a primary star (an LBV star with more than $90\mbox{$M_{\odot}$}$) and a companion star (an O or WR star with less than $30\mbox{$M_{\odot}$}$) (Hillier et al., 2001; Verner et al., 2005). Both stars in $\eta$ Car have strong winds and high mass-loss rates. The stellar mass-loss rate of the primary is $\dot{M_{\rm 1}}\approx 2.5\times 10^{-4}\mbox{$M_{\odot}$}\ \rm yr^{-1}$ with terminal velocity of 500 km $\rm s^{-1}$ (Pittard & Corcoran, 2002). The companion star has a stellar mass-loss rate of $\dot{M_{\rm 2}}\approx 10^{-5}\mbox{$M_{\odot}$}\ \rm yr^{-1}$ and a faster stellar wind with terminal velocity of 3000 km $\rm s^{-1}$ (Pittard & Corcoran, 2002; Parkin et al., 2009). In colliding-wind binaries (CWBs) or YMCs, strong shocks produced by the collision of their powerful stellar winds, and by the interaction of these winds with the interstellar medium, likely accelerate particles to very high energy (Del Valle & Romero, 2012; De Becker & Raucq, 2013). X-ray observations with $\it NuSTAR$ (Hamaguchi et al., 2018) and $\gamma$-ray observations (White et al., 2020) show that the accelerated particles produce non-thermal emission through collisions with the surrounding gas. The Astro-Rivelatore Gamma a Immagini Leggero (AGILE) (Tavani et al., 2009) and Fermi-LAT (Abdo et al., 2009a) have detected $\gamma$-ray emission from the direction of $\eta$ Car. Its $\gamma$-ray spectrum measured using Fermi-LAT can be described by two different components with a division at 10 GeV. The high-energy component is generally suggested to be of hadronic origin (Farnier et al., 2011; Reitberger et al., 2015; Balbo & Walter, 2017; White et al., 2020). The origin of the lower-energy component is still uncertain (Farnier et al., 2011; Gupta & Razzaque, 2017; Balbo & Walter, 2017; Ohm et al., 2015; White et al., 2020). Recently, a point-like VHE $\gamma$-ray source from the direction of $\eta$ Car was detected by HESS; its spectrum is described by a power law (H. E. S. S. Collaboration et al., 2020). For GeV $\gamma$-ray emission, $\eta$ Car has usually been treated as a point-like source. Recently, however, White et al. (2020) found significant extended $\gamma$-ray emission around $\eta$ Car, and they used a CO template of this region to model the extended emission. Yang et al. (2018) analyzed the origin of the $\gamma$-ray emission from FGES J1036.3-5833, which includes the regions of Westerlund 2 and CNC. FGES J1023.3-5747 (Ackermann et al., 2017) and HESS J1023-575 (Aharonian et al., 2007; H. E. S. S.
Collaboration et al., 2011) are the diffuse $\gamma$-ray emission seen from the vicinity of Westerlund 2 (Yang et al., 2018; Mestre et al., 2021) which seem to indicate that the YMC Westerlund 2 can provide sufficient non-thermal energy to account for the $\gamma$-ray emission. In this paper, we analyzed the $\gamma$-ray emission toward CNC taking advantage of more than 13 years of Fermi-LAT data, and tried to study the possible origin of CNC $\gamma$-ray emission. The paper is organized as follows. In Sect.2, we present the data set and the results of the data analysis. In Sect.3, we study the gas distributions in this region. In Sect.4, we investigate the possible origin of the $\gamma$-ray emissions. In Sect.5, the CR content around this region is discussed. Finally, In Sect.6, we discuss the implications of our results. ## 2 Fermi-LAT data analysis We selected the latest Fermi-LAT Pass 8 data around the CNC region from August 4, 2008 (MET 239557417) until September 25, 2021 (MET 654297860), and used the standard LAT analysis software package $\it v11r5p3$ 111https://fermi.gsfc.nasa.gov/ssc/data/analysis/software/. We chose a 10∘$\times$ 10∘ square region centered at the position of CNC (R.A. = 161.00∘, Dec. = -59.55∘) as the region of interest (ROI). The instrument response functions (IRFs) P8R3_SOURCE_V3 was selected to analyze the events in the ROI of evtype = 3 and evclass = 128. We also applied the recommended expression $\rm(DATA\\_QUAL>0)\&\&(LAT\\_CONFIG==1)$ to select the good time intervals (GTIs) based on the information provided in the spacecraft file. In order to reduce $\gamma$-ray contamination from the Earth’s albedo, only the events with zenith angles less than 90∘ are included for the analysis. We used the Python module that implements a maximum likelihood optimization technique for a standard binned analysis 222https://fermi.gsfc.nasa.gov/ssc/data/analysis/scitools/python_tutorial.html. In the background model, we included the recently released Fermi-LAT 10-year Source Catalog (4FGL-DR2, Ballet et al., 2020; Abdollahi et al., 2020) within the ROI enlarged by 5∘. The source model file was generated using the script make4FGLxml.py333https://fermi.gsfc.nasa.gov/ssc/data/analysis/user/, and all sources within 4.5∘ of center were set free. For the diffuse background components, we use the latest Galactic diffuse emission model gll_iem_v07.fits and isotropic extragalactic emission model iso_P8R3_SOURCE_V3_v1.txt444https://fermi.gsfc.nasa.gov/ssc/data/access/lat/BackgroundModels.html with their normalization parameters free. First, we used the events above 500 MeV to study the spatial distribution of the $\gamma$-ray emission near CNC. The $\gamma$-ray counts map in the $8\hbox{${}^{\circ}$}\times 8\hbox{${}^{\circ}$}$ region around CNC is shown in Fig.1. In our analysis, we found a new source of which the TS and corresponding position are (TS = 129; l = 285.29; b = 0.15). It is marked with green cross in Fig.1. We find no counterpart for this new source in other wavelengths. We put the new source in the spatial analysis model with a power- law spectrum. In addition, Martí-Devesa & Reimer (2021) found a new source (i.e. 4FGL J 1036.1-5934) identified as the nova that occurred in March 2018 (Jean et al., 2018). This new source is also included in the latest catalog. Thus, to prevent contamination at the position of CNC from the emission of nova, we excluded the data from MET 542636972 to 558588527. 
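For reproducibility, the event selection described above can be scripted with the gt_apps wrappers shipped with the Fermitools; a minimal sketch is shown below. The file names are placeholders, the acceptance-cone radius is an assumption chosen to enclose the 10∘ × 10∘ ROI, and the nova interval noted above would be removed in an additional time cut.

```python
# Minimal sketch of the Fermi-LAT event selection (file names are placeholders)
from gt_apps import filter, maketime

# gtselect: ROI centre, energy range, zenith cut and event class/type (Sect. 2)
filter['infile'] = '@photon_files.txt'
filter['outfile'] = 'CNC_filtered.fits'
filter['ra'], filter['dec'] = 161.00, -59.55
filter['rad'] = 10                            # cone enclosing the square ROI (assumption)
filter['emin'], filter['emax'] = 500, 200000  # MeV
filter['zmax'] = 90
filter['evclass'], filter['evtype'] = 128, 3
filter['tmin'], filter['tmax'] = 239557417, 654297860
filter.run()

# gtmktime: good time intervals from the spacecraft file
maketime['scfile'] = 'spacecraft.fits'
maketime['filter'] = '(DATA_QUAL>0)&&(LAT_CONFIG==1)'
maketime['roicut'] = 'no'
maketime['evfile'] = 'CNC_filtered.fits'
maketime['outfile'] = 'CNC_filtered_gti.fits'
maketime.run()
```

The binned exposure, source maps and likelihood fit then follow the standard binned-likelihood chain referenced above.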
We note that there are three 4FGL-DR2 catalog point sources: 4FGL J1045.1-5940, 4FGL J1048.5-5923, and 4FGL J1046.7-6010. 4FGL J1045.1-5940 was identified as the massive binary $\eta$ Car. Figure 1: Fermi-LAT counts map above 500 MeV in the $8^{\circ}\times 8^{\circ}$ region surrounding the CNC, with pixel size of $0.1^{\circ}\times 0.1^{\circ}$. All black pluses represent the 4FGL-DR2 sources within the region. The black circle shows the extended emission related to FGES J1036.4-5833. The green circle shows the YMC Westerlund 2. The crosses indicate the three point sources 4FGL J1045.1–5940, 4FGL J1048.5-5923, 4FGL J1046.7-6010 around the CNC. ### 2.1 Multi-point source model To study the excess $\gamma$-ray emission around CNC, we test several different models. The tested models are summarized in Table 1. We first determine a point source to be used to model the $\eta$ Car binary. To do that, we excluded these three point sources from our background model. We added a point-like source at the position of $\eta$ Car into our background model, and optimized the localization using the gtfindsrc tool. The best-fit position of the excess above 500 MeV is [RA = $161.30\hbox{${}^{\circ}$}$, Dec = $-59.70\hbox{${}^{\circ}$}$], with a $2\sigma$ error radius of $0.15\hbox{${}^{\circ}$}$. In the later analysis, the point source at this position (source $\eta$ Car) will always be included in the models and is used to represent the emission from $\eta$ Car. Our first model (model 1) uses three point sources (point source $\eta$ Car, 4FGL J1046.7-6010 and 4FGL J1048.5-5923) to model the excess $\gamma$-ray emission around CNC. Each point source has a LogParabola spectral shape. We performed a binned likelihood analysis to derive the likelihood value ($-\log({\cal L})$) and the Akaike information criterion (AIC; Akaike, 1974) value. The AIC is defined as AIC = $-2\log({\cal L})+2k$, where $k$ is the number of free parameters in the model. The derived $-\log({\cal L})$ and AIC for the multi-point source model are -1265307 and -2530414, respectively. ### 2.2 Spatial template for two Gaussian disks To further investigate the diffuse nature of the GeV $\gamma$-ray emission, we used a spatial template that consists of two regions (A, B). Each region is modelled as a Gaussian disk, and we varied the positions and sizes of the disks to find the best-fit parameters. The significance of the extended source is quantified by $\rm TS_{ext}=2\log({\cal L}_{ext}/{\cal L}_{ps})$, where $\rm{\cal L}_{ext}$ is the maximum likelihood for the extended source model, and $\rm{\cal L}_{ps}$ that for the point-like source model (Lande et al., 2012). For region A, we found that the $\gamma$-ray emission of $\eta$ Car is very strong. Hence, the center of the added Gaussian disk is fixed at the best-fit position derived above. The radius of the disk varies from $0.2\hbox{${}^{\circ}$}$ to $1\hbox{${}^{\circ}$}$ in steps of $0.05\hbox{${}^{\circ}$}$. We used this Gaussian disk to replace the spatial components of the two unassociated point sources, 4FGL J1046.7-6010 and 4FGL J1048.5-5923, in the multi-point source model. The likelihood ratio peaks for a Gaussian disk template with a radius of $0.4\hbox{${}^{\circ}$}\pm\ 0.02\hbox{${}^{\circ}$}$, which fits the $\gamma$-ray excess from the central part of the CNC (region A). It was found that extended residual $\gamma$-ray emission remains in the northwestern part of the CNC (region B) after this process.
Therefore, to find out whether the residual emission is extended or not, we used a point-like source or a Gaussian disk to model this residuals. The position of the added point source or the center of the Gaussian disk is set to the peak position of the residuals. The tested radius of the Gaussian disk varies from $0.2\hbox{${}^{\circ}$}$ to $1\hbox{${}^{\circ}$}$ with a step of 0.05 for region B. In the above test, we find that the Gaussian disks with the radii of 0.75∘ for the region B and 0.4∘ for the region A (model 2) can best fit the data. The derived -$\log({\cal L})$ and AIC for this model are -1265580 and -2530978 , respectively. We subtracted the $0.4\hbox{${}^{\circ}$}$ Gaussian disk and the $0.75\hbox{${}^{\circ}$}$ Gaussian disk from the model 2 and derived the residual map shown in Fig.2. The map revealed significant diffuse residuals in this region. In the following subsections, we test different spatial models of these diffuse emissions. Figure 2: Residual map above 500 MeV near the CNC after subtracting two Gaussian disks with a radii of 0.4∘ and 0.75∘ from the model 2. Details are given in Sect. 2.2. The map has a size of $5\hbox{${}^{\circ}$}\times\ 5\hbox{${}^{\circ}$}$ ($0.1\hbox{${}^{\circ}$}\times\ 0.1\hbox{${}^{\circ}$}$ pixelsize) and has been smoothed with a Gaussian kernel of 0.9∘. The regions A and B is marked with the red circles. The green crosses indicate the star clusters. The yellow asterisk indicates the position of $\eta$ Car. The white contours show the $\rm H_{2}$ column density derived in Sect. 3. ### 2.3 Spatial template for molecular hydrogen To determine whether the extended GeV $\gamma$-ray emission is correlated with the gas distribution, we considered a spatial template of H2. For the H2 template, we use the Carbon monoxide (CO) composite survey (see Sect. 3 for details) to produce. As shown in Fig.2, the white contours represent the H2 distribution from the CO measurements, which overlap well with the excess emission. We added the H2 template (with a power-law spectrum) into the multi- point source model to obtain our model 3. Through the binned likelihood analysis, the derived -$\log({\cal L})$ and the AIC for the H2 template are -1265540 and -2530890, respectively. ### 2.4 Spatial template for molecular and ionized hydrogen CNC contains one of the largest and most active H ii regions in our Galaxy. The derivation for the details of the H ii distribution is given in Sect. 3. We also note that the $\gamma$-ray emission has good spatially correlation with the H ii gas (see Sect. 3). Thus, we adopted a spatial template considering H ii \+ H2 gases. We summed the column density of H ii and H2 gases to generate the H2 \+ H ii template with a power-law spectral shape. The H2 \+ H ii template is added into the multi-point source model as a diffuse component (model 4). After performing the binned likelihood analysis, the derived -$\log({\cal L})$ and the AIC for the spatial model of H2 \+ H ii are -1265619 and -2531048, respectively. Table 1: Spatial analysis (>500MeV) results for the different models. Model | -$\log({\cal L})$ | $\rm TS_{ext}$ | d.o.f. 
| $\rm\Delta AIC$ ---|---|---|---|--- Model 1 (multi-point sources) | -1265277 | - | 93 | - Model 2 ($0.4\hbox{${}^{\circ}$}$ Gaussian disk + $0.75\hbox{${}^{\circ}$}$ Gaussian disk) | -1265580 | 606 | 91 | -610 Model 3 (multi-point sources + $\rm H_{2}$ template) | -1265540 | 526 | 95 | -522 Model 4 (multi-point sources + $\rm H_{2}$ \+ H ii template) | -1265619 | 684 | 95 | -680 Model 5 (multi-point sources + $0.4\hbox{${}^{\circ}$}$ Gaussian disk + $0.75\hbox{${}^{\circ}$}$ Gaussian disk) | -1265647 | 740 | 97 | -732 ### 2.5 Spatial template for two Gaussian disks and multi-point sources In the model 2, we removed the two point sources 4FGL J1046.7-6010 and 4FGL J1048.5-5923 from the background and accounted for the emissions close to CNC with extended components. Here we further test the inclusion of both the extended components and two point sources in the model (model 5). Thus, we added the two 4FGL point sources back into the model file based on the model 2. Each point source has a LogParabola spectral shape. We find that this model (multi-point sources + $0.4\hbox{${}^{\circ}$}$ \+ $0.75\hbox{${}^{\circ}$}$ Gaussian disks), can explain the observational data best among the 5 models we have considered. The derived -$\log({\cal L})$ for model 5 is equal to -1265647, and the AIC is equal to -2531100. To compare the goodness of the fit in the different models, we also calculated the $\rm\Delta AIC$, the AIC of the model 1 and the model 2-5. It is evident from Table 1 that the model with the two Gaussian disks and multi-point sources provides the highest $\rm TS_{ext}$ value and the minimum $\rm\Delta AIC$ value. Therefore, in the following analysis, we use the model 5 as the spatial template. The derived photon index for $0.4\hbox{${}^{\circ}$}$ Gaussian disk above 500 MeV is $2.36\pm 0.01$ and the total $\gamma$-ray flux can be estimated as $(2.65\pm 0.02)\times 10^{-8}\rm ph\ cm^{-2}\ s^{-1}$. Considering the distance of about 2.3 kpc, the total $\gamma$-ray luminosity is estimated to be $(1.34\pm 0.01)\times 10^{34}\rm erg\ s^{-1}$ above 500 MeV with the single power-law spectrum. The photon index of 0.75∘ Gaussian disk is $2.12\pm 0.02$ and the total flux is estimated as $(1.97\pm 0.02)\times 10^{-8}\rm ph\ cm^{-2}\ s^{-1}$, corresponding to $(9.95\pm 0.08)\times 10^{33}\rm erg\ s^{-1}$ above 500 MeV. We found that this model give the best log-likelihood value. We argue these additional point sources may represent the contamination from bright central source $\eta$ Car. In our following spectral analysis we derive the spectral energy distributions (SEDs) of the two Gaussian disks with and without the additional multiple point sources (model 2 and model 5), respectively. We found the results are consistent with each other within error bars. And we include the difference as systematic errors. ### 2.6 Spectral analyses We used the best-fit spatial template as the spatial model of the extended $\gamma$-ray emission, and assumed a power-law spectral shape to extract the SED. We divided the energy range 300 MeV-200 GeV into seven logarithmically spaced energy bins, and in each bin the SED flux is derived via the maximum- likelihood method. We calculated the upper limits within 3$\sigma$ for the energy bins with a significance lower than 2$\sigma$. Fig.3 shows the derived SEDs of the regions A (black) and B (red). The dashed line represents the predicted $\gamma$-ray emissions assuming the CR density in regions A and B, respectively. 
In the analysis, we estimated the uncertainties of SEDs system due to the Galactic diffuse emission model and the LAT effective area ($\rm A_{eff}$) by changing the normalization by $\pm 6\%$ from the best-fit value for each energy bins, and considered the maximum flux deviations of the source as the systematic error (Abdo et al., 2009b). Figure 3: SEDs of $\gamma$-ray emission in the region A (black point) and region B (red data) based on the model 5. The dashed curves represent the predicted $\gamma$-ray emission assuming the CR density in the two regions is the same as those measured locally by AMS-02 (Aguilar et al., 2015). Both the statistical and systematic errors are considered. For details, see the context in Sect. 2.6. ## 3 Gas content around CNC We investigated three different gas phases, i.e., the H2, the neutral atomic hydrogen (H i), and the H ii, in the vicinity of CNC region. The H i data is from the data-cube of the H i $\rm{4\pi}$ survey (HI4PI), which is a 21-cm all-sky database of Galactic H i (HI4PI Collaboration et al., 2016). We estimated the H i column density using the equation, $N_{\text{H\,{i}}}=-1.83\times 10^{18}T_{\rm s}\int\mathrm{d}v\ {\rm ln}\left(1-\frac{T_{\rm B}}{T_{\rm s}-T_{\rm bg}}\right),$ (1) where $T_{\rm bg}\approx 2.66\ \rm K$ is the brightness temperature of the cosmic microwave background radiation at 21 cm, and $T_{\rm B}$ is the brightness temperature of the H i emission. In the case when $T_{\rm B}>T_{\rm s}-5\ \rm K$, we truncate $T_{\rm B}$ to $T_{s}-5\ \rm K$; $T_{s}$ is chosen to be 150 K. The derived H i column map integrated in the velocity range $v_{\rm LSR}=[-32,-5]$ $\rm km\ s^{-1}$ (Seo et al., 2019; Rebolledo et al., 2021) is shown in the left panel of Fig.4. We also use this range to integrate the line emission of the CO in this velocity range. We use the CO composite survey (Dame et al., 2001) to trace the $\rm H_{2}$. The standard assumption of a linear relationship between the velocity- integrated brightness temperature of CO 2.6-mm line, $W_{\rm CO}$, and the column density of molecular hydrogen, $N(\rm H_{2})$, i.e., $N({\rm H_{2}})=X_{\rm CO}\times W_{\rm CO}$ (Lebrun et al., 1983). $X_{\rm CO}$ is the $\rm H_{2}/CO$ conversion factor that chosen to be $\rm 2.0\times 10^{20}\ cm^{-2}\ K^{-1}\ km^{-1}\ s$ as suggested by Dame et al. (2001) and Bolatto et al. (2013). The derived molecular gas column density is shown in the middle panel of Fig.4. Figure 4: Maps of gas column densities in three gas phases. Left shows the map of H i column density derived from 21-cm all-sky survey. Middle shows the H2 column density derived from the CO data. Right shows the H ii column density derived from the Planck free-free map assuming the effective density of electrons $n_{\rm e}=10~{}\rm cm^{-3}$. The white circles indicate the region A and region B, which is the same as the red circles in Fig.2. For details, see the context in Sect.3. CNC is one of the largest and most active H ii regions in the Galaxy. To obtain the H ii column density we used the Planck free-free map (Planck Collaboration et al., 2016). First, we transformed the emission measure (EM) into free-free intensity by using the conversion factor in Table 1 of Finkbeiner (2003). Then, we calculate the H ii column density from the intensity ($I_{\nu}$) of free-free emission by using Eq.(5) of Sodroski et al. 
(1997), $N_{\text{H\,{ii}}}=1.2\times 10^{15}\ {\rm cm^{-2}}\left(\frac{T_{\rm e}}{1\ \rm K}\right)^{0.35}\left(\frac{\nu}{1\ \rm GHz}\right)^{0.1}\left(\frac{n_{\rm e}}{1\ \rm cm^{-3}}\right)^{-1}\times\frac{I_{\nu}}{1\ \rm Jy\ sr^{-1}},$ (2) where $\nu=\rm 353\ GHz$ is the frequency and $T_{e}=\rm 8000\ K$ is the adopted electron temperature. The H ii column density is inversely proportional to the effective density of electrons $n_{\rm e}$. Thus, we adopted an effective density of $10\ \rm cm^{-3}$, which is the value suggested in Sodroski et al. (1997) for the region inside the solar circle. The derived H ii column density is also shown in the right panel of Fig.4. We note that the H ii gas distribution is similar to that of the YMCs NGC 3603 (Yang & Aharonian, 2017) and W40 (Sun et al., 2020b); in all of these systems the $\gamma$-ray emission region shows good spatial consistency with the H ii column density. Moreover, several YMCs are located here, i.e. Tr 14 and Tr 16, which have ionized the ambient medium in the CNC. Table 2: Gas total masses and number densities within the region A and region B. See Sect. 3 for details.

Tracer | Region | Mass ($\rm{10^{4}\mbox{$M_{\odot}$}}$) | Number density ($\rm{cm^{-3}}$)
---|---|---|---
H2 \+ H ii | A | 5.54 + 4.31 | 231
H2 \+ H ii | B | 16.18 + 3.99 | 72

The total mass within the cloud in each pixel can be calculated from the expression $M_{\rm H}=m_{\rm H}N_{\rm H}A_{\rm angular}d^{2}$ (3) where $m_{\rm H}$ is the mass of the hydrogen atom, $N_{\rm H}=N_{\text{H\,{ii}}}+2N_{\rm H_{2}}+N_{\text{H\,{i}}}$ is the total hydrogen column density in each pixel, $A_{\rm angular}$ is the angular area, and $d$ is the distance of CNC. The total mass in the GeV $\gamma$-ray emission region is estimated to be $\sim 5.54\times 10^{4}~{}\mbox{$M_{\odot}$}$ for region A and $\sim 1.62\times 10^{5}~{}\mbox{$M_{\odot}$}$ for region B. We assume the GeV $\gamma$-ray emitting regions A and B are spherical in geometry, with corresponding angular sizes of $0.4\hbox{${}^{\circ}$}$ and $0.75\hbox{${}^{\circ}$}$. The physical radius can then be estimated as $r_{\rm{A,B}}=d\times\theta_{\rm{A,B}}(\rm{rad})$, where $d$ is the distance to the region. The volume-averaged gas number density of region A is $\rm n_{gas}=231\ cm^{-3}$, while for region B the value is $\rm n_{gas}=72\ cm^{-3}$. Table 2 lists the total gas masses and number densities within region A and region B. ## 4 The origin of gamma-ray emission The massive binary $\eta$ Car lies at the center of region A. In the spatial analysis, we included $\eta$ Car in the background model as a point-like source. PSR J1048-5832 is located about 1.2∘ away from the center of the $\gamma$-ray emission region (as shown in Fig.1), which makes the association of the diffuse $\gamma$-ray emission with this pulsar unlikely (Danilenko et al., 2013). On the other hand, there are no known SNRs inside this region. ### 4.1 Region A White et al. (2020) found significant extended $\gamma$-ray emission in this region, suggesting an additional component associated with CR interactions. In our case, the YMCs Tr 14 and Tr 16 are the most promising sources of the CRs. Although we cannot rule out the possibility that $\eta$ Car is the source of the CRs that produced the diffuse $\gamma$-ray emissions, we postulate here that those clusters may be another natural acceleration site of the CRs.
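Before turning to the radiative modelling, the target densities in Table 2 that enter the fits below can be reproduced directly from Eq. (3) and the quoted masses; the following minimal sketch simply converts total mass and angular radius into a volume-averaged hydrogen number density for spherical regions at 2.3 kpc.

```python
import numpy as np

M_SUN = 1.989e33   # solar mass [g]
M_H = 1.6726e-24   # hydrogen atom mass [g]
PC = 3.086e18      # parsec [cm]
D = 2.3e3 * PC     # distance to the CNC [cm]

def mean_density(mass_1e4_msun, radius_deg):
    """Volume-averaged hydrogen number density of a spherical region [cm^-3]."""
    mass = mass_1e4_msun * 1e4 * M_SUN
    radius = D * np.radians(radius_deg)
    volume = 4.0 / 3.0 * np.pi * radius**3
    return mass / (M_H * volume)

# Region A: (5.54 + 4.31) x 1e4 Msun within 0.4 deg  -> ~230 cm^-3
# Region B: (16.18 + 3.99) x 1e4 Msun within 0.75 deg -> ~70 cm^-3
print(mean_density(5.54 + 4.31, 0.4), mean_density(16.18 + 3.99, 0.75))
```

The values agree with the 231 cm$^{-3}$ and 72 cm$^{-3}$ quoted in Table 2 to within rounding.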
We used Naima555https://naima.readthedocs.io/en/latest/index.html (Zabalza, 2015) to fit the SEDs. Naima is a numerical package that allows us to implement different functions and includes tools to perform Markov chain Monte Carlo (MCMC) fitting of nonthermal radiative processes to the data. We note that the extended GeV emission and the molecular hydrogen gas are spatially correlated. Thus, we assume the $\gamma$-rays are produced in the pion-decay process from the interaction of the CRs with the ambient molecular clouds. Because the low-energy data points are poorly constrained, we used a broken power-law spectrum $N(E)=\begin{cases}A(E/E_{0})^{-\alpha_{1}}&\mbox{: }E<E_{\mathrm{b}}\\\ A(E_{\mathrm{b}}/E_{0})^{(\alpha_{2}-\alpha_{1})}(E/E_{0})^{-\alpha_{2}}&\mbox{: }E>E_{\mathrm{b}}\end{cases},$ (4) to fit the GeV $\gamma$-ray data. The derived SED is shown in Fig.5. We treated A, $\alpha_{1}$, $\alpha_{2}$, $E_{\mathrm{b}}$ as free parameters for the fitting. The average number densities of the target protons for regions A and B are 231 $\rm cm^{-3}$ and 72 $\rm cm^{-3}$, respectively, as derived from the $\rm H_{2}$ \+ H ii gas distributions in Sect. 3. In Fig.5, we present the best-fitting results for region A. The maximum log-likelihood value is -0.1. The derived parameters are $\alpha_{1}=0.9\pm 0.3$, $\alpha_{2}=3.2^{+0.7}_{-0.3}$, $E_{\mathrm{b}}=21^{+4}_{-3}\rm GeV$ and the total energy is $W_{\rm p}=(1.11\pm 0.12)\times 10^{48}\ \rm erg$ for the protons above 2 GeV. The red dashed line in Fig.5 represents the predicted $\gamma$-ray flux based on the $\rm H_{2}$ \+ H ii column density map in region A, assuming the CR spectrum is the same as the local measurement by AMS-02 (Aguilar et al., 2015). We thus find a significant CR enhancement in this region, as expected near a CR acceleration site. Figure 5: SED of emission in the region A for a $0.4\hbox{${}^{\circ}$}$ Gaussian disk spatial model. The red dashed line represents the predicted $\gamma$-ray emissions assuming the CR density in this region is the same as those measured locally by AMS-02 (Aguilar et al., 2015). We also tested the leptonic scenario in which the $\gamma$-rays are generated via the inverse Compton (IC) scattering of relativistic electrons off the low-energy seed photons around this region. For the photon field of the IC calculations, we considered the cosmic microwave background (CMB) radiation field, the optical-UV radiation field from the star light, and the dust infrared radiation field based on the model by Popescu et al. (2017). We note that region A is located within H ii regions, where the ionizing massive stars increase the optical and UV fields significantly and thus produce additional IC emission (Liu & Yang, 2022). We calculated the IC spectrum using the formalism described in Khangulyan et al. (2014). To fit the lower-energy break in the $\gamma$-ray spectrum, we require a corresponding break in the spectrum of the parent electrons. Thus, we assumed a broken power-law distribution of the relativistic electrons. As is shown in Fig.5, the solid black curve represents the total predicted $\gamma$-ray emission from the IC upscattering of the seed photons by relativistic electrons. The derived parameters for the electrons are $\alpha_{1}=0.51\pm 0.1$, $\alpha_{2}=4.1^{+0.6}_{-0.4}$, $E_{\mathrm{b}}=15.3^{+3.0}_{-1.7}\ \rm GeV$ and the total energy of the electrons (>2 GeV) is $W_{e}=(7.8\pm 1.2)\times 10^{49}\ \rm erg$.
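For concreteness, a minimal naima sketch of the two radiative set-ups just described is shown below (naima's IC treatment follows Khangulyan et al. 2014, as used above). The particle-spectrum parameters are the best-fit values quoted in the text, the seed-photon fields are generic placeholders rather than the Popescu et al. (2017) field, and the snippet only evaluates the model curves rather than re-running the MCMC fit; exact argument names may differ between naima versions.

```python
import numpy as np
import astropy.units as u
from naima.models import BrokenPowerLaw, PionDecay, InverseCompton

photon_energy = np.logspace(-0.5, 2.5, 40) * u.GeV   # Fermi-LAT band

# Hadronic model: broken power-law protons (Eq. 4) radiating via pion decay
protons = BrokenPowerLaw(amplitude=1e34 / u.eV, e_0=10 * u.GeV,
                         e_break=21 * u.GeV, alpha_1=0.9, alpha_2=3.2)
pp = PionDecay(protons, nh=231 * u.cm**-3)       # target density from Table 2
pp.set_Wp(1.11e48 * u.erg, Epmin=2 * u.GeV)      # W_p(>2 GeV) from the fit
sed_pp = pp.sed(photon_energy, distance=2.3 * u.kpc)

# Leptonic model: broken power-law electrons upscattering seed photons
electrons = BrokenPowerLaw(amplitude=1e34 / u.eV, e_0=10 * u.GeV,
                           e_break=15.3 * u.GeV, alpha_1=0.51, alpha_2=4.1)
ic = InverseCompton(electrons, seed_photon_fields=['CMB', 'FIR', 'NIR'])
ic.set_We(7.8e49 * u.erg, Eemin=2 * u.GeV)       # W_e(>2 GeV) from the fit
sed_ic = ic.sed(photon_energy, distance=2.3 * u.kpc)
```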
The IC model can fit the observed data well, and the corresponding maximum-likelihood value is -0.91. We therefore cannot formally rule out a leptonic origin for this region. ### 4.2 Region B It should be noted that there is another possible particle accelerator located in the northwestern part of the CNC. Region B includes the bubble-shaped young H ii region Gum 31 around the young stellar cluster NGC 3324, and the cluster NGC 3293 (Göppl & Preibisch, 2022). Due to the good spatial correlation between the extended GeV $\gamma$-ray emission and the $\rm H_{2}$ gas, it is very probable that the $\gamma$-rays are related to the $\rm H_{2}$ gas. Thus, we used a hadronic scenario in which the high-energy $\gamma$-rays are produced in the pion-decay process following proton-proton inelastic interactions, using the parameterization of the cross-section by Kafexhiu et al. (2014). In this region, we used a single power-law spectrum for the parent proton distribution, $N(E)=A~{}E^{-\alpha},$ (5) treating $A$, $\alpha$ as free parameters for the fitting. In Fig.6, we present the best-fit results for region B. The maximum log-likelihood value is -1.87. The derived index for region B is $\alpha=2.27\pm 0.05$, with a total energy $W_{\rm p}=(5.0\pm 0.4)\times 10^{48}\ \rm erg$ for the protons above 2 GeV. In addition to the hadronic model, we also considered a leptonic model in which the $\gamma$-rays come from IC scattering. We assume that the electrons have a single power-law spectrum. The target photon fields for the relativistic electrons to scatter include the CMB, infrared, and optical fields adopted from the local interstellar radiation field calculated in Popescu et al. (2017). As shown in Fig.6, the leptonic model can also explain the observed $\gamma$-ray emission, with a maximum log-likelihood value of -2.17. The derived index of the electrons is $\alpha=3.05\pm 0.07$. However, for the IC model the derived energy budget of the relativistic electrons (>2 GeV) is as high as $(1.1\pm 0.3)\times 10^{50}\ \rm erg$, which is almost $10\%$ of the typical kinetic energy of a supernova explosion ($10^{51}\rm erg$). The total kinetic energy supplied by the stellar wind from a single massive star within $\sim$1 Myr is $\sim 3\times 10^{48}\ \rm erg$ (Ezoe et al., 2006). Since the CNC contains > 66 OB stars, the total stellar wind energy supplied by the CNC exceeds $2\times 10^{50}\rm erg$. If we consider the individual clusters, i.e. Tr 14 (20 OB stars), Tr 16 (21 OB stars) (Shull et al., 2021) and NGC 3324 (20 OB stars) (Bisht et al., 2021), we estimate stellar wind energies of $6\times 10^{49}\ \rm erg$, $6.3\times 10^{49}\ \rm erg$ and $6\times 10^{49}\ \rm erg$, respectively. Figure 6: SED of emission in the region B for a $0.75\hbox{${}^{\circ}$}$ Gaussian disk spatial model. The red dashed line represents the predicted $\gamma$-ray emissions assuming the CR density in this region is the same as those measured locally by AMS-02 (Aguilar et al., 2015). ## 5 CR content in the vicinity of the CNC The spatial distributions of CRs can provide key information about their injection history. In Westerlund 1, the Cygnus Cocoon (Aharonian et al., 2019), and Westerlund 2 (Yang et al., 2018), 1/r CR profiles are derived, which implies continuous injection and diffusion-dominated propagation of CRs from these massive star clusters (Yang & Liu, 2022). Here we study the propagated CR content in the vicinity of the CNC.
Because of the limited angular resolution and the size of this region, we chose only region A and region B, rather than annuli, to derive the CR density. In both regions, the gamma-ray flux above 5 GeV and the gas mass are derived separately. For the gas content, the spatial analysis in Sect. 2.4 has shown that the $\gamma$-ray emission region has good spatial consistency with the H2 \+ H ii template. Thus, we derived the corresponding gas mass for regions A and B, respectively. Then we calculated the CR density according to Eq. (A.4) of Huber et al. (2013). The CR densities derived from the two $\gamma$-ray emission regions and gas distributions are shown in Fig. 7. The first red point is related to region A and the second one to region B. As mentioned in Aharonian et al. (2019), the radial distribution of the CR density takes the form $w(r)=w_{0}(r/r_{0})^{-1}\ $ (6) where $r_{0}$ is assumed to be 10 pc, i.e. the CR proton density $w_{0}$ is normalised at a point outside, but not far from, the cluster. The total energy of CR protons within the volume of radius $R_{0}$ is $W_{\rm p}=4\pi\int_{0}^{R_{\rm 0}}w(r)r^{2}\,\mathrm{d}r\approx 2.7\times 10^{47}(w_{0}/1\ \rm eV/cm^{3})(R_{0}/10\ \rm pc)^{2}\ \rm erg\ .$ (7) The derived CR profile is shown in Fig. 7. We fit the density profile using a 1/r type distribution, which would result from a continuous injection of CRs from region A. We found no 1/r dependence, and the CR density of region B is significantly lower than that of region A. Such a profile suggests that the CR content in this system comes from a recent impulsive process rather than continuous injection. We note, however, that recent studies have shown that the CR radial profiles for Westerlund 1 (Aharonian et al., 2022) and the Cygnus Cocoon (Abeysekara et al., 2021) are not compatible with 1/r. Moreover, from a theoretical point of view, the 1/r profile is not the only possible outcome for the case of a steady central source. Actually, flatter spatial profiles are more favoured in the case of acceleration by the termination shock of stellar winds (Morlino et al., 2021). ## 6 Discussion and conclusion Figure 7: Derived CR density profile near the CNC. The data points are derived from the $\gamma$-ray emission above 5 GeV of the CNC. The solid curve is the 1/r profile, which is predicted by continuous injection. For details, see the context in Sect. 5. In this paper we analyzed the $\gamma$-ray emission from the massive star forming region of the CNC. We found that the $\gamma$-ray emission in this region can be resolved into three components. Besides the point-like source coinciding with the massive binary $\eta$ Car, we found two diffuse $\gamma$-ray emission components that are spatially correlated with the molecular and ionized gas. The $\gamma$-ray emission from the CNC (region A) can be modelled by a Gaussian disk with a radius of 0.4∘. The spectrum of this component can be described by a broken power-law function with an index of $2.36\pm 0.01$. The spectral shape of region A reveals a significant pion-bump feature, which indicates the $\gamma$-ray emissions are from the interaction of hadronic CRs with ambient gas. However, compared with the $\gamma$-ray emissions from other YMCs, the spectrum is significantly softer but is similar to the spectral shape of the Galactic diffuse emission and the emission from $\eta$ Car itself. So it is possible that the derived emission in this region is still significantly contaminated by $\eta$ Car, which is by far brighter.
Another possibility is that the CO gas and H ii gas in this region are illuminated by the soft CR component accelerated by $\eta$ Car. In the latter case, the derived $W_{\rm p}$ is $10^{48}~{}\rm erg$ for the protons above 2 GeV; considering a wind power of $\eta$ Car of $10^{38}~{}\rm erg/s$ and an acceleration efficiency of 10%, the CR injection power can be $P_{\rm CR}\sim 10^{37}~{}\rm erg/s$. Taking into account the size of region A of about $l=15~{}\rm pc$ ($0.4\hbox{${}^{\circ}$}$ at $2.3~{}\rm kpc$), the required diffusion coefficient in this region can be estimated as $D\sim\frac{l^{2}}{4T}$, where the confinement time $T$ can be estimated as $W_{\rm p}/P_{\rm CR}\sim 10^{11}\rm s$. The derived $D$ is $5\times 10^{27}~{}\rm cm^{2}/s$, which is one order of magnitude smaller than the average value in the Galactic plane. Such a suppression of the diffusion coefficient is also found in other regions near CR sources (Aharonian et al., 2019). We note that, besides $\eta$ Car, other massive stars in the young massive clusters Tr 14 and Tr 16 may also accelerate CRs, which would loosen our constraints on the diffusion coefficient. The GeV $\gamma$-ray emission from the northern part of the CNC (region B) can be modelled as a 0.75∘ Gaussian disk. The $\gamma$-ray emission has a hard spectrum with a spectral index of about $2.12\pm 0.02$. Although the spectrum of region B can be fit by both leptonic and hadronic scenarios, the high gas density and the good spatial correlation between the $\gamma$-ray emission and the molecular gas strongly favor a hadronic origin. In this case the hard $\gamma$-ray spectrum and the hard spectrum of the parent CRs are similar to those in other YMC systems. Natural acceleration sites of the CRs are the YMCs Tr 14 and Tr 16; in this case we plot the radial distribution of CRs in region A and region B, which we find is not consistent with the $1/r$ distributions measured near other YMCs (Aharonian et al., 2019). One possible explanation is that the CR content in this system comes from a recent impulsive process rather than continuous injection. In this case the harder spectrum in region B compared with region A can also be explained by energy-dependent propagation. Another possibility is that it is the young star cluster NGC 3324, or other unknown CR sources, rather than Tr 14 and Tr 16, that accelerates the CRs in region B. Then the different spectra in regions A and B can be attributed to different CR injection spectra in the two systems. Indeed, in Yang et al. (2018) the diffuse emission in this region is attributed to the CRs accelerated by the YMC Westerlund 2, which is located about $5~{}\rm kpc$ from the solar system. We note that even after the detailed analysis in this paper we still cannot formally rule out such a possibility, although in projection region B is significantly closer to $\eta$ Car and Tr 14/Tr 16. The gas distribution derived near Westerlund 2 also reveals a peak in coincidence with region B (Yang et al., 2018). Indeed, the velocity intervals used for the CNC ($v_{\rm LSR}=[-32,-5]$ $\rm km\ s^{-1}$) and Westerlund 2 ($v_{\rm LSR}=[-11,20]$ $\rm km\ s^{-1}$) overlap with each other, although their distances from the solar system are significantly different. More detailed studies of the gas distributions in this region may be required to pin down the origin of the diffuse $\gamma$-ray emission.
## 7 Acknowledgements

This work is supported by the National Natural Science Foundation of China (Grant No. 12133003, 12103011, and U1731239) and the Guangxi Science Foundation (grant No. AD21220075). Rui-zhi Yang is supported by the NSFC under grants 11421303, 12041305 and the national youth thousand talents program in China.

## 8 Data availability

The Fermi-LAT data used in this work are publicly available, provided online by the NASA-GSFC Fermi Science Support Center (https://fermi.gsfc.nasa.gov/ssc/data/access/lat/). We make use of the CO data (https://lambda.gsfc.nasa.gov/product/) to derive the H2. The data from the Planck legacy archive (http://pla.esac.esa.int/pla/#home) are used to derive the H ii. The H i data are from the HI4PI survey (http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/594/A116).

## References

* Abdo A. A., et al., 2009a, ApJS, 183, 46
* Abdo A. A., et al., 2009b, ApJ, 706, L1
* Abdollahi S., et al., 2020, ApJS, 247, 33
* Abeysekara A. U., et al., 2021, Nature Astronomy, 5, 465
* Abramowski A., et al., 2012, A&A, 537, A114
* Ackermann M., et al., 2011, Science, 334, 1103
* Ackermann M., et al., 2017, ApJ, 843, 139
* Aguilar M., et al., 2015, Phys. Rev. Lett., 114, 171103
* Aharonian F., et al., 2007, A&A, 467, 1075
* Aharonian F., Yang R., de Oña Wilhelmi E., 2019, Nature Astronomy, 3, 561
* Aharonian F., et al., 2022, arXiv e-prints, p. arXiv:2207.10921
* Akaike H., 1974, IEEE Transactions on Automatic Control, 19, 716
* Ascenso J., Alves J., Vicente S., Lago M. T. V. T., 2007, A&A, 476, 199
* Baade W., Zwicky F., 1934, Proceedings of the National Academy of Science, 20, 259
* Balbo M., Walter R., 2017, A&A, 603, A111
* Ballet J., Burnett T. H., Digel S. W., Lott B., 2020, arXiv e-prints, p. arXiv:2005.11208
* Bisht D., Zhu Q., Yadav R. K. S., Ganesh S., Rangwal G., Durgapal A., Sariya D. P., Jiang I.-G., 2021, MNRAS, 503, 5929
* Bolatto A. D., Wolfire M., Leroy A. K., 2013, ARA&A, 51, 207
* Bykov A. M., Ellison D. C., Gladilin P. E., Osipov S. M., 2015, MNRAS, 453, 113
* Dame T. M., Hartmann D., Thaddeus P., 2001, ApJ, 547, 792
* Damineli A., et al., 2008, MNRAS, 384, 1649
* Danilenko A., Kirichenko A., Sollerman J., Shibanov Y., Zyuzin D., 2013, A&A, 552, A127
* De Becker M., Raucq F., 2013, A&A, 558, A28
* Del Valle M. V., Romero G. E., 2012, A&A, 543, A56
* Ezoe Y., Kokubun M., Makishima K., Sekimoto Y., Matsuzaki K., 2006, ApJ, 638, 860
* Farnier C., Walter R., Leyder J. C., 2011, A&A, 526, A57
* Finkbeiner D. P., 2003, ApJS, 146, 407
* Fujita S., et al., 2021, PASJ, 73, S201
* Göppl C., Preibisch T., 2022, arXiv e-prints, p. arXiv:2201.09097
* Gupta N., Razzaque S., 2017, Phys. Rev. D, 96, 123017
* H. E. S. S. Collaboration et al., 2011, A&A, 525, A46
* H. E. S. S. Collaboration et al., 2020, A&A, 635, A167
* HESS Collaboration et al., 2012, MNRAS, 424, 128
* H.E.S.S. Collaboration et al., 2015, Science, 347, 406
* HI4PI Collaboration et al., 2016, A&A, 594, A116
* Hamaguchi K., et al., 2007, PASJ, 59, 151
* Hamaguchi K., et al., 2018, Nature Astronomy, 2, 731
* Hillier D. J., Davidson K., Ishibashi K., Gull T., 2001, ApJ, 553, 837
* Huber B., Tchernin C., Eckert D., Farnier C., Manalaysay A., Straumann U., Walter R., 2013, A&A, 560, A64
* Jean P., Cheung C. C., Ojha R., van Zyl P., Angioni R., 2018, The Astronomer's Telegram, 11546, 1
* Kafexhiu E., Aharonian F., Taylor A. M., Vila G. S., 2014, Phys. Rev. D, 90, 123014
* Kelner S. R., Aharonian F. A., Bugayov V. V., 2006, Phys. Rev. D, 74, 034018
* Khangulyan D., Aharonian F. A., Kelner S. R., 2014, ApJ, 783, 100
* Lande J., et al., 2012, ApJ, 756, 5
* Lebrun F., et al., 1983, ApJ, 274, 231
* Liu B., Yang R.-z., 2022, A&A, 659, A101
* Liu B., Yang R.-z., Chen Z., 2022, arXiv e-prints, p. arXiv:2205.06430
* Martí-Devesa G., Reimer O., 2021, A&A, 654, A44
* Mestre E., et al., 2021, MNRAS, 505, 2731
* Morlino G., Blasi P., Peretti E., Cristofari P., 2021, MNRAS, 504, 6096
* Ohm S., Zabalza V., Hinton J. A., Parkin E. R., 2015, MNRAS, 449, L132
* Parkin E. R., Pittard J. M., Corcoran M. F., Hamaguchi K., Stevens I. R., 2009, MNRAS, 394, 1758
* Pittard J. M., Corcoran M. F., 2002, A&A, 383, 636
* Planck Collaboration et al., 2016, A&A, 594, A10
* Popescu C. C., Yang R., Tuffs R. J., Natale G., Rushton M., Aharonian F., 2017, MNRAS, 470, 2539
* Preibisch T., et al., 2011, A&A, 530, A34
* Preibisch T., Flaischlen S., Gaczkowski B., Townsley L., Broos P., 2017, A&A, 605, A85
* Rebolledo D., Green A. J., Burton M. G., Breen S. L., Garay G., 2021, ApJ, 909, 93
* Reitberger K., Reimer A., Reimer O., Takahashi H., 2015, A&A, 577, A100
* Seo Y. M., et al., 2019, ApJ, 878, 120
* Shull M., Darling J., Danforth C., 2021, arXiv e-prints, p. arXiv:2103.07922
* Smith N., 2006, MNRAS, 367, 763
* Smith N., 2008, Nature, 455, 201
* Smith N., Brooks K. J., 2007, MNRAS, 379, 1279
* Smith N., Egan M. P., Carey S., Price S. D., Morse J. A., Price P. A., 2000, ApJ, 532, L145
* Sodroski T. J., Odegard N., Arendt R. G., Dwek E., Weiland J. L., Hauser M. G., Kelsall T., 1997, ApJ, 480, 173
* Sun X.-N., Yang R.-Z., Wang X.-Y., 2020a, MNRAS, 494, 3405
* Sun X.-N., Yang R.-Z., Liang Y.-F., Peng F.-K., Zhang H.-M., Wang X.-Y., Aharonian F., 2020b, A&A, 639, A80
* Sun X.-N., Yang R.-Z., Liang E.-W., 2022, A&A, 659, A83
* Tavani M., et al., 2009, ApJ, 698, L142
* Vallée J. P., 2014, ApJS, 215, 1
* Verner E., Bruhweiler F., Gull T., 2005, ApJ, 624, 973
* White R., Breuhaus M., Konno R., Ohm S., Reville B., Hinton J. A., 2020, A&A, 635, A144
* Yang R.-z., Aharonian F., 2017, A&A, 600, A107
* Yang R.-Z., Liu B., 2022, Science China Physics, Mechanics, and Astronomy, 65, 219511
* Yang R.-z., de Oña Wilhelmi E., Aharonian F., 2018, A&A, 611, A77
* Zabalza V., 2015, in 34th International Cosmic Ray Conference (ICRC2015). p. 922 (arXiv:1509.03319)
# Estimating Changepoints in Extremal Dependence, Applied to Aviation Stock Prices During COVID-19 Pandemic

Arnab Hazra and Shiladitya Bose

Contact: Arnab Hazra (Email: <EMAIL_ADDRESS>), Department of Mathematics and Statistics, Indian Institute of Technology Kanpur, Kanpur, India 208016.

###### Abstract

The dependence in the tails of the joint distribution of two random variables is generally assessed using the $\chi$-measure, the limiting conditional probability of one variable being extremely high given that the other variable is also extremely high. This work is motivated by the structural changes in the $\chi$-measure between the daily rate of return (RoR) of the two Indian airlines, IndiGo and SpiceJet, during the COVID-19 pandemic. We model the daily maximum and minimum RoR vectors (potentially transformed) using the bivariate Hüsler-Reiss (BHR) distribution. To estimate the changepoint in the $\chi$-measure of the BHR distribution, we explore two changepoint detection procedures based on the Likelihood Ratio Test (LRT) and the Modified Information Criterion (MIC). We obtain critical values and power curves of the LRT and MIC test statistics for low through high values of the $\chi$-measure. We also numerically explore the consistency of the changepoint estimators based on LRT and MIC. In our data application, for RoR maxima and minima, the most prominent changepoints detected by LRT and MIC are close to the announcements of the first phases of lockdown and unlock, respectively, which are realistic; thus, our study would be beneficial for portfolio optimization in the case of future pandemic situations.

###### keywords: Aviation Stock Prices; Bivariate Hüsler-Reiss Distribution; Changepoint; COVID-19; Extremal Dependence; Modified Information Criterion.

## 1 Introduction

The COVID-19 pandemic had a profound and disruptive impact on a global scale, causing not only a substantial loss of human lives but also significant economic turmoil. Nearly every industry faced adverse consequences as a result, including the aviation sector. The outbreak of the pandemic triggered a severe decline in air travel demand, as countries implemented travel restrictions and lockdown measures to curb the spread of the virus. Extensive research, such as the study conducted by the International Civil Aviation Organization [46] on the effects of COVID-19 on worldwide Civil Aviation, has examined these repercussions. Indian airlines encountered numerous challenges during this unprecedented period, as highlighted in [75] and [1], who examined the specific difficulties faced by Indian airlines due to the pandemic. The imposition of domestic and international flight suspensions for several months by the Government of India had a significant financial impact on airlines, leading to layoffs, salary cuts, and even bankruptcies [85].

The two airlines IndiGo and SpiceJet have played crucial roles in fostering the growth of the Indian aviation sector. These airlines have effectively increased the accessibility and affordability of air travel for a broader range of people. Operating in a highly competitive market, they have continuously expanded their flight routes, enhanced services, and introduced innovative features to attract passengers [2]. Thus, it is unavoidable that there exists an interdependence between their businesses. Analyzing the share prices of the two companies over a period of time allows one to gauge the market's perception of the companies and their financial performances.
There are several parameters to consider while using share prices for performance evaluation, such as historical share price trends, relative performance, market index comparisons, and the impact of dividends and stock splits. Traders emphasize the importance of buying low and selling high while aiming to generate profits. This principle underscores the significance of timing and strategic decision-making in the context of stock market investments [9]. During the first phase of the lockdown, India and the rest of the world witnessed a significant decline in the stock market for many shares, primarily caused by reduced investments; at the same time, once the lockdown period was over, the stock prices started recovering [35]. While analyzing daily average stock prices gives an understanding of the bulk of the distribution of the share prices, our goal is to focus on understanding the upper and lower tail behaviors of stock prices, which are more precise indicators of their interdependence at the sell and buy positions, respectively. Various companies employed diverse strategies to mitigate substantial losses during the pandemic [6]. As a result, the tail interdependence, i.e., the tendency of selling both SpiceJet and IndiGo stocks simultaneously in high amounts or buying both stocks simultaneously in low amounts, could have experienced a significant change during the COVID-19 pandemic. To analyze this effectively, we employ joint modeling in the context of extreme value theory (EVT) and changepoint analysis.

In the context of joint modeling, the dependence structure between two random variables is often described using bivariate copulas. They are mathematical functions providing a way to model the joint distribution of variables while separately modeling their marginal distributions [63]. Copulas offer flexibility in capturing various forms of dependence, including linear, nonlinear, and tail dependence, making them valuable in finance, insurance, and risk management [15, 50, 68]. While the Gaussian copula is the most commonly used copula, it is criticized in the context of analyzing tail-interdependence because of its thin joint tails [23]. Several alternatives like the Student-$t$ copula and other elliptical models have been explored in the literature [39]. In the context of investigating temporally-varying tail dependence between several crypto-currencies like Bitcoin and Ethereum, [34] propose a new flexible copula approach that allows both dependence and independence in the tails by varying the model parameters.

In recent years, EVT has emerged as one of the most crucial fields of study in different scientific disciplines [17, 22]. The main applications of EVT are in areas such as portfolio management in actuarial studies [43], financial risk assessment [10], telecom traffic prediction [61], and detection of meteorological changes [92]. In financial extremes, [71] showed that a portfolio is more affected by a few extreme movements in the market than by the sum of many small-scale shifts. The majority of EVT focuses on methods for analyzing univariate extremes; however, [79] and [81] introduced the statistical methodology for bivariate extremes and [18] further studied multivariate extremes. While the Pearson correlation coefficient measures dependence in the bulk of the joint distribution of two random variables, the $\chi$-measure [74] assesses the dependence in the tails.
A bivariate Gaussian distribution is the most common model for bivariate responses; however, its components are asymptotically independent for any non-trivial choice of the Pearson correlation coefficient [74]. In this context, [45] obtained the limiting distribution of block maxima or block minima of the bivariate Gaussian distribution under certain assumptions on the correlation between the components, and their bivariate model is known as the bivariate Hüsler-Reiss (BHR) distribution. The BHR distribution and its infinite-dimensional case, called the Brown-Resnick process [5], have been used in numerous studies in the context of bivariate, multivariate, as well as spatial extreme value analysis [51, 44]. Besides, the BHR distribution is a building block of the high-dimensional graphical models for extremes [30, 31].

A changepoint is a place or time where the statistical properties of a sequence of observations change; in other words, the observations before and after the changepoint follow different probability distributions [13, 52]. We can use this information for prediction, monitoring, and decision-making purposes. Changepoint estimation or changepoint mining is common in many fields such as financial time series data [80], environmental studies [70], genome research [62], signal processing [56], quality control [54], and medical research [4]. During the 1950s, [66] first proposed a methodology for detecting only one change in a one-parameter (location-type) model. For a univariate Gaussian distribution, [14] and [33] studied changepoint detection for the mean component, and [42] studied a similar problem for the variance. While their studies were limited to a single parameter and a single changepoint scenario, [40] proposed a novel approach for detecting a single shift in any known function of the unknown mean and covariance of an arbitrary multivariate distribution, and [47] proposed a multiple changepoint detection procedure for the variance of a Gaussian distribution using posterior odds in a Bayesian setting. [59] discussed techniques and results on changepoint estimation for the correlation coefficient of the bivariate Gaussian distribution.

There are various testing procedures available in the literature for the purpose of changepoint estimation. One of the most common and earliest proposed choices is the CUmulative SUM (CUSUM) procedure [67]. CUSUM is particularly useful for detecting changes in a process over time, such as changes in the mean, variance, or distribution of the data. It is widely used in quality control, signal processing, and environmental monitoring, among other fields, to identify when a process has changed significantly from its previous state. Besides, certain moving-window based methods like MOSUM [28] have also been proposed. [36] proposed generalized maximum likelihood asymptotic power one tests that aim to detect a single changepoint in the parameters of a logistic regression model. [37] considered distribution-free generalized changepoint detection policies where the data distribution under the null hypothesis is not necessarily the same as the data distribution before the changepoint under the alternative. [86] proposed and examined a class of generalized maximum likelihood asymptotic power one tests for detection of various types of changes in a linear regression model. The proposed retrospective tests are based on martingale-structured Shiryayev–Roberts statistics.
[87] showed that retrospective changepoint detection policies based on Shiryayev–Roberts statistics are non-asymptotically optimal in the context of most powerful testing. A detailed discussion of parametric and non-parametric changepoint detection approaches can be found in [20] and [13].

While the literature on changepoint estimation in the context of EVT is scarce, the most common choice of testing procedure is the Likelihood Ratio Test (LRT). In the context of changepoint estimation for a sequence of extremes, [57] and [29] studied theoretical properties of some test statistics. In a book chapter, [25] briefly discussed LRT for analyzing changes in extreme value copulas and illustrated it for the bivariate Gumbel copula. [49] discussed LRT for detecting a changepoint in the location parameter of annual maxima and minima series and described some methods for finding critical values. Instead of focusing on block maxima, [26] discussed LRT for detecting changes in the shape parameter of the generalized Pareto distribution, which is the only possible limiting distribution of high-threshold exceedances. While the previous approaches focused on a frequentist analysis of extreme values, [27] discussed a Bayesian method to identify changepoints in a sequence of extremes. [24] proposed a time-varying extreme value copula where the authors model its angular surface using Bernstein polynomials and allow for changepoints in the parameters of the copula model. For detecting changepoints in financial extremes, [55] discussed a Bayesian approach for analyzing threshold exceedances using a generalized Pareto distribution while modeling the bulk of the distribution using a finite mixture of gamma distributions.

Apart from the context of EVT, [88] examined the utilization of LRT to identify shifts or changes in the location of normal populations, while [89] focused on studying the power of LRT for detecting changes in binomial probabilities. Later, [76] investigated LRT for identifying changepoints in a multivariate setting, and [69] discussed the detection of epidemic changes using LRT for the exponential distribution. Further, [90] studied detecting changepoints in two-phase linear regression models using LRT, and [72] used LRT for detecting changepoints in a sequence of univariate skew-normal distributed random variables. Without imposing any parametric model assumption, [91] proposed a changepoint detection procedure based on the empirical likelihood, and its asymptotic properties are justified by the work of [65].

As an alternative to LRT, [12], [11], [38], [64], and [7] used the information criteria approach for changepoint detection. Specifically, [11] proposed the Modified Information Criterion (MIC) and showed that MIC has simple asymptotic behaviors and is consistent. While LRT generally performs better in detecting changepoints occurring near the middle of the data sequence, MIC includes some correction terms that allow better performance in detecting changepoints occurring at the very beginning or at the very end of the data. Among its applications, [73] and [82] used MIC for changepoint estimation in the case of skew-normal and Kumaraswamy distributions, respectively.

Similar to Pearson's correlation coefficient for analyzing dependence in the bulk of the joint distribution of two random variables, the strength of the dependence in the tails (both lower and upper) is measured using extremal dependence or the $\chi$-measure.
Our purpose is to identify structural changes in the $\chi$-measure between the daily stock rate of return (RoR) of the two largest Indian airlines, IndiGo and SpiceJet, during the COVID-19 pandemic. To accomplish this, we discuss necessary preprocessing steps, including transforming the daily maximum and minimum RoR series so that, for each of them, the IndiGo and SpiceJet components jointly follow the BHR distribution. The main objective of this paper is to estimate the point at which a change occurs in the $\chi$-measure of the BHR distribution separately for the upper and lower tails. Given the one-to-one correspondence between the dependence-related parameter of the BHR distribution and its $\chi$-measure, we explore a changepoint detection procedure utilizing LRT and MIC to achieve our goal. To assess the performance of these methods in practical settings with limited data samples, we numerically investigate their effectiveness. Since closed-form expressions for the finite sample distributions of LRT and MIC do not exist, we derive critical values and assess the power of the hypothesis testing problem across a range of low to high values of the $\chi$-measure. In our data application, we explore the likelihood of different changepoints in the upper and lower tails from the beginning of the COVID-19 pandemic until the end of the third wave based on both LRT and MIC.

We organize the paper as follows. In Section 2, we discuss a summary of univariate and bivariate EVT, quantification of extremal dependence, and the definition of the BHR distribution. Section 3 discusses the Indian aviation stock price dataset and some exploratory analyses. We describe LRT and MIC for detecting the most crucial changepoint in a sequence of BHR-distributed bivariate random vectors in Section 4. Section 5 discusses results on critical values and power comparison between LRT and MIC based on an extensive simulation study. In Section 6, we apply our methodology for joint analysis of the stock prices of SpiceJet and IndiGo during the COVID-19 pandemic. Section 7 concludes and discusses some scopes for future research.

## 2 Background on Extreme Value Theory

In this section, we summarize the theoretical background of univariate and bivariate Extreme Value Theory (EVT); these results justify the methodology we adopt for data pre-processing as well as for drawing inferences. Subsequently, before we discuss the changepoint estimation for the bivariate Hüsler-Reiss (BHR) distribution, we discuss how we obtain this distribution as a special case of a bivariate generalized extreme value distribution in Section 2.3; this justifies the importance of using the BHR distribution for drawing tail inferences in our context. We also briefly summarize the key measures of dependence in EVT that are necessary for understanding the features in the tail of the data. In the next section, we fit the univariate generalized extreme value distributions discussed here and explore the dependence in the tails of the data empirically using the measures described here.

A brief summary and motivation of the technical details in Sections 2.1 and 2.3 are as follows. The central limit theorem says that for any distributional assumption on the underlying sequence of (univariate) random variables, as long as the underlying regularity conditions hold, the only possible asymptotic distribution of the sample mean is normal.
Similarly, the Fisher-Tippett theorem says that for any distributional assumption on the underlying sequence of random variables, as long as the underlying regularity conditions hold and some non-degenerate limiting distribution exists, the only possible distribution of the block maxima/minima is the generalized extreme value distribution. In practice, we often observe only the sample mean (like average daily air temperature, instead of a full curve of air temperature throughout 24 hours) and hence a standard assumption is normality. Similarly, here we observe the daily maximum/minimum rate of return instead of the full curve of stock prices. At least, we do not have access to such functional data. Hence, a natural choice of the distribution of the daily maximum/minimum rate of return is the generalized extreme value distribution.

When we have bivariate observations (obtained from bivariate curves with components corresponding to the two airlines), the natural choice is the bivariate generalized extreme value distribution. Unlike a bivariate normal distribution, a bivariate generalized extreme value distribution can have different types of dependence structures. In practice, it is often difficult, if not impossible, to judge which class we should pick, and the common choice is to pick a specific class and draw inference based on it. Besides, there is a well-known result that if we assume the bivariate curves to be Gaussian processes, which is often a standard assumption for continuous stochastic processes across scientific disciplines, the only possible case of the bivariate generalized extreme value distribution is the BHR distribution. Hence, we model the daily maximum/minimum rates of return for the two airlines using that distribution.

In multivariate extreme value theory, the dependence between two variables is often measured in terms of the $\chi$-measure, which is essentially the (limiting) conditional probability of one variable being extremely large (small) given that the other variable is also extremely large (small). The difference with Pearson's correlation is that the $\chi$-measure does not measure linear association like Pearson's correlation and purely focuses on the tails. Hence, the bulk of the data does not interfere with drawing inferences about the strength of the dependence between two variables in the tails. Because of our focus on assessing the dependence structure within the extremes, i.e., the components of the daily maximum/minimum rate of return series, drawing inferences based on the $\chi$-measure is more appropriate than doing so based on Pearson's correlation.

### 2.1 Univariate and Bivariate Extreme Value Distributions

EVT deals with extreme observations; they are usually defined as block maxima/minima (e.g., annual maxima/minima of daily observations) or threshold exceedances (e.g., the observations above the $0.98^{th}$ data quantile or below the $0.02^{th}$ data quantile). We stick to the first type of definition in our case. Under both definitions, the important aspect of EVT lies in describing the tail behavior of a stochastic process. In mathematical notation, consider $X_{1},X_{2},\ldots$, a sequence of independent random variables, where the $X_{i}$'s are measured on a regular time scale, following a common continuous cumulative distribution function (CDF) $F_{0}(\cdot)$. The main goal of EVT is to study the asymptotic behavior of $M_{n}=\max\{X_{1},X_{2},\ldots,X_{n}\}$. The CDF of $M_{n}$ is given by $\textrm{P}(M_{n}\leq z)=F_{0}(z)^{n}$.
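This last identity, and the GEV approximation to it discussed next, can be checked with a tiny simulation. The sketch below assumes Gaussian daily observations purely for illustration and uses scipy; it is not part of the paper's analysis:

```python
import numpy as np
from scipy.stats import norm, gumbel_r

rng = np.random.default_rng(1)
n, blocks = 250, 2000                              # block length and number of blocks
M = rng.standard_normal((blocks, n)).max(axis=1)   # block maxima of iid N(0,1) data

z = 3.0
print(np.mean(M <= z), norm.cdf(z) ** n)           # empirical P(M_n <= z) vs exact F_0(z)^n

# Fisher-Tippett: a GEV fit (here Gumbel, i.e. xi = 0) approximates the same probability
loc, scale = gumbel_r.fit(M)
print(gumbel_r.cdf(z, loc, scale))
```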
But, in practice, $F_{0}(\cdot)$ is generally unknown. One approach would be estimating it from observed data in case the full sequence of observations is available; however, a small overestimation or underestimation usually influences the final inference substantially. Here, the asymptotic behavior of $\tilde{M}_{n}=\min_{n}\\{X_{1},X_{2},\ldots,X_{n}\\}$ is similar to that of $M_{n}$ as we can write $\tilde{M}_{n}=-\max_{n}\\{-X_{1},-X_{2},\ldots,-X_{n}\\}$. Thus, we focus on discussing the asymptotic behavior of $M_{n}$ only. In this context, the celebrated Fisher-Tippett theorem [32] states that for any arbitrary CDF $F_{0}(\cdot)$, if there exist sequences of real numbers $\\{a^{0}_{n}\\}$ and $\\{b^{0}_{n}\\}$ and a non-degenerate CDF $G_{0}(\cdot)$ satisfying $\lim_{n\rightarrow\infty}F_{0}^{n}(a^{0}_{n}x+b^{0}_{n})=G_{0}(x)$ pointwise, then $G_{0}(\cdot)$ must belong to either Gumbel, Fréchet, or Weibull families. Here, $F_{0}(\cdot)$ is said to belong to the Domain of Attraction of $G_{0}(\cdot)$, and $a^{0}_{n}\in\mathbb{R}^{+}$ and $b^{0}_{n}\in\mathbb{R}$ are normalizing constants. Combining these three categories, $G_{0}(\cdot)$ is the CDF of the generalized extreme value (GEV) distribution given by $G_{0}(y)\;=\;\exp\left[-\left\\{1\;+\;\xi\left(\frac{y-\mu}{\sigma}\right)\right\\}_{+}^{-1/\xi}\right],$ (1) where $\mu\in\mathbb{R}$, $\sigma\in\mathbb{R}^{+}$, and $\xi\in\mathbb{R}$ are the location, scale, and shape parameters of the GEV distribution, and $x_{+}=\max\\{x,0\\}$. Depending on whether the shape parameter $\xi$ is zero, positive, or negative, the GEV family is called the Gumbel family, Fréchet family, or Weibull family, respectively. We denote the distribution with CDF $G_{0}(\cdot)$ by $\textrm{GEV}(\mu,\sigma,\xi)$, and in practice, we assume $M_{n}\sim\textrm{GEV}(\mu,\sigma,\xi)$ in a limiting sense. The centering (through $b^{0}_{n}$) and scaling (through $a^{0}_{n}$) of $M_{n}$ in the Fisher-Tippett theorem are assumed to be adjusted by the parameters $\mu$ and $\sigma$, respectively. Multivariate extreme value theory focuses on the case where multiple sequences of random variables are available and we are interested in assessing the joint asymptotic behavior of the block maxima for all sequences. For our purpose, we stick to the bivariate extremes scenario. Suppose $\\{(X_{1},Y_{1}),(X_{2},Y_{2}),\ldots\\}$ is a sequence of IID bivariate random vectors having a common continuous CDF $F(\cdot,\cdot)$. The classical theory for characterizing the extremal behavior of bivariate extremes is based on the asymptotic behavior of the component-wise block maxima vector $\bm{M}_{n}=(M_{X,n},M_{Y,n})^{\prime}$, where $M_{X,n}=\max_{1\leq i\leq n}\\{X_{i}\\}$ and $M_{Y,n}=\max_{1\leq i\leq n}\\{Y_{i}\\}$. Extending the Fisher-Tippett theorem for bivariate cases, [8] stated that if there exist real sequences $\\{a_{n}\\}$, $\\{b_{n}\\}$, $\\{c_{n}\\}$ and $\\{d_{n}\\}$, where $a_{n},c_{n}\in\mathbb{R}^{+}$ for all $n$, and a bivariate non- degenerate CDF $G(\cdot,\cdot)$ that satisfy $\lim_{n\rightarrow\infty}F^{n}(a_{n}x+b_{n},c_{n}y+d_{n})=G(x,y)$ pointwise, then $G(\cdot,\cdot)$ is called the bivariate GEV distribution. Here, the standard Fisher-Tippett theorem applies to both $\\{X_{1},X_{2},\ldots\\}$ and $\\{Y_{1},Y_{2},\ldots\\}$, and the limiting CDFs of renormalized $M_{X,n}$ and $M_{Y,n}$ are of the form (1). Suppose, $M_{X,n}\sim\textrm{GEV}(\mu_{X},\sigma_{X},\xi_{X})$ and $M_{Y,n}\sim\textrm{GEV}(\mu_{Y},\sigma_{Y},\xi_{Y})$ in a limiting sense. 
The CDF $G(\cdot,\cdot)$ can be written as
$G(x,y)=\exp\{-V(\widetilde{x},\widetilde{y})\},$ (2)
where $\widetilde{x}=[1+\xi_{X}(x-\mu_{X})/\sigma_{X}]^{1/\xi_{X}}$ and $\widetilde{y}=[1+\xi_{Y}(y-\mu_{Y})/\sigma_{Y}]^{1/\xi_{Y}}$. Here, we assume $1+\xi_{X}(x-\mu_{X})/\sigma_{X}>0$, $1+\xi_{Y}(y-\mu_{Y})/\sigma_{Y}>0$ and $V(\widetilde{x},\widetilde{y})=2\int_{0}^{1}\max\left\{w/\widetilde{x},(1-w)/\widetilde{y}\right\}\mathrm{d}\widetilde{G}(w)$, where $\widetilde{G}(\cdot)$ is a CDF on $[0,1]$ satisfying the mean constraint $\int_{0}^{1}w\,\mathrm{d}\widetilde{G}(w)=1/2$. Here, $V(\cdot,\cdot)$ is called the exponent measure of $G(\cdot,\cdot)$.

### 2.2 Extremal dependence and F-madogram

For certain bivariate CDFs with finite second moments, even if the two components of the corresponding bivariate random vector are highly correlated, i.e., the dependence in the bulk of the joint distribution is strong, the dependence in the tails can be weak or negligible. Similar to Pearson's correlation for analyzing dependence in the bulk of the joint distribution, the most common metric for measuring dependence in the tails is called extremal dependence or the $\chi$-measure, introduced by [74]. This measure does not require the second moments to be necessarily finite. For a bivariate random vector with components $X$ and $Y$, and marginal CDFs $F_{X}(\cdot)$ and $F_{Y}(\cdot)$, the upper and lower tail $\chi$-measures at a quantile level $u$ are defined as
$\chi_{U}(u)=\textrm{P}\{F_{Y}(Y)>u\mid F_{X}(X)>u\},\qquad\chi_{L}(u)=\textrm{P}\{F_{Y}(Y)<u\mid F_{X}(X)<u\},\qquad u\in(0,1),$ (3)
while the limiting $\chi$-measures are defined as $\chi_{U}=\lim_{u\uparrow 1}\chi_{U}(u)$ and $\chi_{L}=\lim_{u\downarrow 0}\chi_{L}(u)$. Here, $\chi_{U}(u)$ and $\chi_{L}(u)$ are not uniquely defined and depend on $u$. Thus, for a unique measure of extremal dependence, we use $\chi_{U}$ and $\chi_{L}$ unless specified otherwise. Intuitively, a high value of $\chi_{U}$ indicates the tendency of $Y$ being extremely large given $X$ is extremely large, and similarly, a high value of $\chi_{L}$ indicates the tendency of $Y$ being extremely small given $X$ is extremely small. If $\chi_{U}\in(0,1]$, we call $X$ and $Y$ asymptotically dependent in the upper tail, while for $\chi_{U}=0$, $X$ and $Y$ are said to be asymptotically independent in the upper tail. Similarly, if $\chi_{L}\in(0,1]$, we call $X$ and $Y$ asymptotically dependent in the lower tail, while for $\chi_{L}=0$, $X$ and $Y$ are said to be asymptotically independent in the lower tail. In (3), $X$ and $Y$ are interchangeable. More details are in [17].

For a bivariate GEV distribution, (2) and (3) are linked through the equation
$\chi_{U}(u)=\chi_{L}(u)=2-V(1,1),$ (4)
uniformly for $u\in(0,1)$, and thus, $\chi=2-V(1,1)$; see Chapter 8 of [17]. In case IID replications of $(X,Y)^{\prime}$ are available, $\chi(u)$ can be computed empirically. However, the concept of the $\chi$-measure can be extended from a bivariate setting to a stochastic process setting. For a stationary extremal time series $\{Z_{1},Z_{2},\ldots\}$ with a common marginal CDF $F(\cdot)$, the $\chi$-measure at a temporal lag $h=1,2,\ldots$ can be investigated using the $F$-madogram [19] given by $\nu_{h}=E\left[|F(Z_{t+h})-F(Z_{t})|\right]/2$ for any $t$.
Then, the $\chi$-measure at lag $h$, say $\chi_{h}$, satisfies
$\chi_{h}=2-\left(\frac{1+2\nu_{h}}{1-2\nu_{h}}\right).$ (5)
In practice, for testing temporal extremal independence, we calculate $\chi_{h}$ empirically and check whether the values are close to zero. If the $\chi_{h}$ are negligible for all $h$, we can safely ignore extremal dependence and model $\{Z_{1},Z_{2},\ldots\}$ as IID observations.

### 2.3 Bivariate Hüsler-Reiss Distribution

For a sequence of independent random variables $X_{1},X_{2},\ldots$ measured on a regular time scale and with a common CDF $F_{0}(\cdot)$, if $F_{0}(\cdot)=\Phi(\cdot)$, the standard Gaussian CDF, then $G_{0}(\cdot)$ in (1) belongs to the Gumbel family [21], i.e., $\xi=0$ (defined in the limiting sense $\xi\rightarrow 0$), and we have $G_{0}(y)=\exp(-\exp[-y]),~y\in\mathbb{R}$. Here, for a sequence $\{b_{n}\}$ satisfying $b_{n}=n\phi(b_{n})$, where $\phi(\cdot)$ denotes the standard Gaussian density, we have $\lim_{n\uparrow\infty}\Phi^{n}\left(b_{n}+x/b_{n}\right)=\exp(-\exp[-x])$, the standard Gumbel distribution, for all $x\in\mathbb{R}$. Similarly, for a sequence of bivariate random vectors $\{(X_{1},Y_{1}),(X_{2},Y_{2}),\ldots\}$ with common CDF $F$, if $F=\Phi_{\rho}$, the bivariate standard Gaussian CDF with correlation $\rho$, the component-wise renormalized block maxima $M_{X,n}$ and $M_{Y,n}$ (with notations as in Section 2.1) follow the standard Gumbel distribution. Similarly, the component-wise renormalized block minima $\tilde{M}_{X,n}=\min_{1\leq i\leq n}\{X_{i}\}$ and $\tilde{M}_{Y,n}=\min_{1\leq i\leq n}\{Y_{i}\}$ also follow the same distribution.

For bivariate Gaussian distributions, [74] proved that the components of the corresponding random vector are asymptotically independent, i.e., $\chi=0$, for any value of the correlation coefficient $\rho$ less than one. In this context, [45] suggested an asymptotic formulation where the correlation coefficient $\rho\equiv\rho_{n}$ of a bivariate Gaussian distribution varies as the sample size $n$ increases, and they proved that the marginal maxima/minima are neither asymptotically independent nor completely dependent if $(1-\rho_{n})\log(n)$ converges to a positive constant as $n\uparrow\infty$; the limiting joint distribution is called the bivariate Hüsler-Reiss (BHR) distribution. More specifically, [45] proved that if $\lim_{n\uparrow\infty}(1-\rho_{n})\log(n)=\Lambda^{-2}\in[0,\infty]$, then $\forall~x,y\in\mathbb{R}$, $\lim_{n\uparrow\infty}\Phi_{\rho_{n}}^{n}\left(b_{n}+x/b_{n},b_{n}+y/b_{n}\right)=H_{\Lambda}(x,y)$, where
$H_{\Lambda}(x,y)=\exp\left[-\exp(-x)\,\Phi\left\{\frac{1}{\Lambda}+\frac{\Lambda}{2}(y-x)\right\}-\exp(-y)\,\Phi\left\{\frac{1}{\Lambda}+\frac{\Lambda}{2}(x-y)\right\}\right]$ (6)
is the CDF of the BHR distribution with dependence-related parameter $\Lambda$. Obtaining the exponent measure $V(\cdot,\cdot)$ for the BHR distribution from (2) and (6) is straightforward. Further, from (4), we obtain the (limiting) extremal dependence measure to be $\chi=2-V(1,1)=2\overline{\Phi}(1/\Lambda)$, where $\overline{\Phi}(\cdot)$ denotes the standard Gaussian survival function. Here, $\chi$ is monotonically increasing with $\Lambda$, where $\Lambda=0$ and $\Lambda=\infty$ imply independence and complete dependence between the components, respectively. For the likelihood-based testing procedures discussed in Section 4, the probability density function (PDF) is required.
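Before writing down that density, here is a quick numerical check of the $\Lambda\leftrightarrow\chi$ map just described. Python with scipy is an assumption made for illustration only; the paper's own computations use the R package evd:

```python
import numpy as np
from scipy.stats import norm

def chi_from_lambda(lam):
    """chi = 2 * (1 - Phi(1/Lambda)) for the BHR distribution."""
    return 2.0 * norm.sf(1.0 / lam)

def bhr_cdf(x, y, lam):
    """CDF H_Lambda(x, y) of the bivariate Huesler-Reiss distribution, eq. (6)."""
    a = norm.cdf(1.0 / lam + lam * (y - x) / 2.0)
    b = norm.cdf(1.0 / lam + lam * (x - y) / 2.0)
    return np.exp(-np.exp(-x) * a - np.exp(-y) * b)

for lam in (0.1, 1.0, 2.5):
    print(lam, chi_from_lambda(lam))   # ~1.5e-23, 0.3173, 0.6892 (values quoted in the text)
print(bhr_cdf(0.0, 0.0, 1.0))          # joint CDF at the origin for Lambda = 1
```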
From (6), the corresponding PDF is given by
$h_{\Lambda}(x,y)=\exp\left[-\left(\Psi(\Lambda)+\widetilde{\Psi}(\Lambda)\right)\right]\left[\frac{\Lambda}{2}\left\{\left(\psi(\Lambda)+\widetilde{\psi}(\Lambda)\right)+\frac{\Lambda}{2}\left(\psi^{(1)}(\Lambda)+\widetilde{\psi}^{(1)}(\Lambda)\right)\right\}+\left\{\Psi(\Lambda)+\frac{\Lambda}{2}\left(\psi(\Lambda)-\widetilde{\psi}(\Lambda)\right)\right\}\left\{\widetilde{\Psi}(\Lambda)+\frac{\Lambda}{2}\left(\widetilde{\psi}(\Lambda)-\psi(\Lambda)\right)\right\}\right],$ (7)
where
$\Psi(\Lambda)=e^{-x}\Phi\left(1/\Lambda+\Lambda(y-x)/2\right),\qquad\widetilde{\Psi}(\Lambda)=e^{-y}\Phi\left(1/\Lambda+\Lambda(x-y)/2\right),$
$\psi(\Lambda)=e^{-x}\phi\left(1/\Lambda+\Lambda(y-x)/2\right),\qquad\widetilde{\psi}(\Lambda)=e^{-y}\phi\left(1/\Lambda+\Lambda(x-y)/2\right),$
$\psi^{(1)}(\Lambda)=e^{-x}\frac{\partial}{\partial y}\left\{\phi\left(1/\Lambda+\Lambda(y-x)/2\right)\right\},\qquad\widetilde{\psi}^{(1)}(\Lambda)=e^{-y}\frac{\partial}{\partial y}\left\{\phi\left(1/\Lambda+\Lambda(x-y)/2\right)\right\}.$

Figure 1: Densities of the bivariate Hüsler-Reiss distribution for different values of the dependence-related parameter $\Lambda$. A large (small) value of $\Lambda$ induces a strong (weak) dependence.

For $\Lambda=0.1,1$, and $2.5$, we present the PDFs of the BHR distribution in Figure 1. For these choices of $\Lambda$, the values of the $\chi$-measure are $1.52\times 10^{-23}$, 0.3173, and 0.6892, respectively. The figure demonstrates that a large (small) value of $\Lambda$ induces a strong (weak) dependence. Interchanging the arguments ($x$ and $y$), the PDF (7) remains the same; thus, the components of the BHR distribution are exchangeable.

## 3 Aviation Stock Price Dataset: Exploration and Pre-Processing

### 3.1 Data Description

This article focuses on analyzing the daily stock prices of two prominent Indian aviation companies, IndiGo and SpiceJet. We obtain the related datasets from the website https://www.investing.com/. We consider the period from December 2, 2019, to May 31, 2022, which coincides with the global COVID-19 pandemic. According to the World Health Organization (https://covid19.who.int/region/searo/country/in), India experienced three severe waves of the pandemic during this time. The first wave lasted from mid-March 2020 to mid-January 2021, followed by the deadliest second wave from mid-March 2021 to mid-August 2021. Lastly, the third wave persisted from mid-December 2021 to mid-March 2022. These waves had significant repercussions on various sectors of society and the economy, including the aviation market. Due to widespread travel restrictions and reduced investor activity, share prices experienced declines during the lockdown periods. At the same time, during unlock phases, share prices exhibited sudden increments. Our analysis aims to explore how the COVID-19 waves influenced the dependence between the share prices of these two aviation companies, with a focus on detecting the most crucial changepoint in each of their upper and lower tail dependences, due to their link with simultaneous sell and buy positions, respectively.

### 3.2 Daily Maximum and Minimum Rates of Return

Instead of dealing with the stock prices in their original scales, we focus on analyzing their rates of return (RoR), which are more meaningful comparison metrics.
This rate signifies the percentage variation in the value of a particular stock investment over a specified period, indicating the level of gain or loss experienced by the investor on their stock holding. The RoR can be computed using the following formula:
$\textrm{RoR}=\frac{\textrm{Ending price}-\textrm{Beginning price}}{\textrm{Beginning price}}.$
Here, the ending price corresponds to the stock price after a specified period, while the beginning price represents the stock price at the beginning of the same period. The RoR can take both positive and negative values, indicating whether an investor gains a profit or incurs a loss on the investment, respectively. This measure holds great importance in evaluating the performance of a stock and comparing it to other investment options available. A standard period for calculating the RoR is 24 hours and we stick to it here.

Given our focus on the right and left tails of the joint distribution of RoR, we define the daily maximum and minimum RoR as follows. Suppose $Y_{t}(s)$ represents the value of the stock on a specific day $t$ at time $s$. We calculate the daily maximum and minimum RoR on day $t$, respectively, by
$R^{\textrm{max}}_{t}=\max_{s,s^{\prime}}\left\{\frac{Y_{t+1}(s)-Y_{t}(s^{\prime})}{Y_{t}(s^{\prime})}\right\}=\frac{\max_{s}Y_{t+1}(s)}{\min_{s^{\prime}}Y_{t}(s^{\prime})}-1,\qquad R^{\textrm{min}}_{t}=\min_{s,s^{\prime}}\left\{\frac{Y_{t+1}(s)-Y_{t}(s^{\prime})}{Y_{t}(s^{\prime})}\right\}=\frac{\min_{s}Y_{t+1}(s)}{\max_{s^{\prime}}Y_{t}(s^{\prime})}-1.$ (8)

The metrics in (8) are not common in the financial or statistical literature. However, we provide a philosophical justification for the two metrics in the following. Sudden changes in stock prices are often governed by news. For example, with the immediate announcement of a lockdown during COVID-19 at a particular time of the day, numerous investors sold stocks immediately. The stock prices may recover slightly afterwards, but the main impact on the market happens at the moment the news is broadcast. Thus, treating the stock price as a function of time throughout the day and looking at the rate of change within a timeframe beginning on the $t$-th day and ending on the $(t+1)$-th day, we get a better idea of the maximum or minimum possible variability across two consecutive days. A limitation of these metrics is that they do not capture the maximum/minimum possible change within a single day. The standard publicly available datasets include both the daily maximum and minimum stock prices $\max_{s}\,Y_{t}(s)$ and $\min_{s}\,Y_{t}(s)$ for different $t$'s. Hence, despite not having access to the curves $Y_{t}(\cdot)$ and $Y_{t+1}(\cdot)$, calculating $R^{\textrm{max}}_{t}$ and $R^{\textrm{min}}_{t}$ from the available data is straightforward.

The plots of the $R^{\textrm{max}}_{t}$'s and $R^{\textrm{min}}_{t}$'s for both IndiGo and SpiceJet are presented in Figure 2. During the initial days of the COVID-19 pandemic in India, the $R^{\textrm{max}}_{t}$ profiles attain negative values for both airlines; however, the lowest value of $R^{\textrm{max}}_{t}$ is more negative for SpiceJet than for IndiGo, and the subsequent $R^{\textrm{max}}_{t}$ values attain higher positive values for IndiGo than for SpiceJet. A clear nonstationary pattern is observable in both $R^{\textrm{max}}_{t}$ series.
The $R^{\textrm{min}}_{t}$ profiles generally remain negative for IndiGo during the initial lockdown period, but become positive in one instance, with a significant margin, for SpiceJet. Subsequently, for more than one month, $R^{\textrm{min}}_{t}$ attains negative values, with more negative values for SpiceJet compared to IndiGo. After that period, we observe fewer highly positive spikes in the $R^{\textrm{min}}_{t}$ profile for IndiGo than for SpiceJet. The largest spike in the maximum RoR for IndiGo matches the date when the first lockdown was declared. On the other hand, several spikes in the minimum RoR profile are visible for SpiceJet before the day the first unlock phase was declared by the Government of India.

Figure 2: Daily maximum (top) and minimum (bottom) rate of return series of IndiGo and SpiceJet airlines, as defined in (8). The two vertical lines denote the two dates March 25, 2020, the day the first lockdown was declared, and June 1, 2020, the day the first unlock phase was declared by the Government of India.

### 3.3 Data Preprocessing

While the Fisher-Tippett theorem described in Section 2 assumes the underlying sequence of random variables to be independently and identically distributed, the result also holds in the case of dependent sequences under some additional regularity conditions explained in [58]. Here, the original stock prices across different timestamps $s$ on a certain day $t$, denoted by $Y_{t}(s)$ in (8), are likely to be dependent. Assuming the regularity conditions of [58] hold, we assume that the $R^{\textrm{max}}_{t}$'s follow the GEV distribution in (1), characterized by three parameters: location ($\mu$), scale ($\sigma$), and shape ($\xi$). Because we do not have access to the curves $Y_{t}(\cdot)$ and $Y_{t+1}(\cdot)$, the only option is to assume that the regularity conditions of [58] hold and that the $R^{\textrm{max}}_{t}$'s and $R^{\textrm{min}}_{t}$'s follow the GEV distribution. However, such an assumption is common in the extreme value analysis literature. Therefore, to transform the $R^{\textrm{max}}_{t}$'s to standard Gumbel margins, our first data preprocessing step involves estimating these parameters. However, the bottom panels of Figure 2 show that assuming the GEV parameters to be constant across $t$ is questionable for both airlines. Hence, we assume the marginal model to be $R^{\textrm{max}}_{t}\sim\textrm{GEV}(\mu_{t},\sigma_{t},\xi_{t})$ separately for each company. Here, we estimate the temporally-varying GEV parameters using local probability-weighted moments [PWM, 41] estimation. We repeat the same procedure for the $R^{\textrm{min}}_{t}$'s.

In general, the term local likelihood, introduced by [84], refers to a method that estimates the parameters of a statistical model by considering local subsets of data instead of the entire dataset as a whole. The local likelihood approach involves placing a window around each observation and maximizing the likelihood function for the model's parameters within that window. In our case, we utilize a window size of 100 for each analysis and employ PWM estimation (see the sketch below). Ideally, the window size can be treated as a tuning parameter and chosen based on thorough cross-validation. While a too small value of the bandwidth might provide unreliable estimates, a too large value would oversmooth the parameter profiles.
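A minimal sketch of such a moving-window Gumbel fit is given below. It is an illustrative reconstruction rather than the authors' code: the window length of 100 matches the text, the window alignment (here centred) is not specified in the paper, and the PWM formulas in the comments are the standard Gumbel ones.

```python
import numpy as np

EULER_GAMMA = 0.5772156649015329

def gumbel_pwm(x):
    """Probability-weighted-moment estimates (mu, sigma) for a Gumbel sample.
    Uses b0 = sample mean and b1 = (1/n) * sum_j ((j-1)/(n-1)) * x_(j);
    for the Gumbel family, sigma = (2*b1 - b0)/log(2) and mu = b0 - gamma*sigma."""
    xs = np.sort(np.asarray(x, dtype=float))
    n = len(xs)
    b0 = xs.mean()
    b1 = np.sum((np.arange(n) / (n - 1.0)) * xs) / n
    sigma = (2.0 * b1 - b0) / np.log(2.0)
    mu = b0 - EULER_GAMMA * sigma
    return mu, sigma

def local_gumbel_pwm(series, window=100):
    """Moving-window PWM fit; returns (mu_t, sigma_t) at each admissible day t."""
    series = np.asarray(series, dtype=float)
    half = window // 2
    out = np.full((len(series), 2), np.nan)
    for t in range(half, len(series) - half):
        out[t] = gumbel_pwm(series[t - half:t + half])
    return out

# usage sketch: params = local_gumbel_pwm(r_max_indigo); then standardize as in eq. (9)
```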
Unlike the traditional method of moments, where we equate population moments to their sample counterparts with equal weights, PWM utilizes a set of weights that depends on the probability density function of the population. These weights are carefully selected to capture the distribution’s shape more accurately, and thus, the PWM method is used widely in extreme value analysis. Implementing local PWM estimation, we obtain the estimates of $\\{\mu_{t},\sigma_{t},\xi_{t}\\}$ separately for IndiGo and SpiceJet at each day $t$ during the COVID-19 pandemic. We obtained the shape parameter estimates for both $R^{\textrm{max}}_{t}$’s and $R^{\textrm{min}}_{t}$’s to be close to zero in almost all the cases, along with high standard errors. Thus, the shape parameter can be safely fixed to zero, i.e., the corresponding GEV distribution can be assumed to belong to the Gumbel class, for both airlines. We then apply the local PWM estimation procedure after setting $\xi_{t}=0$ for all cases. Further, suppose we denote the joint daily maximum rate of return vector for day $t$ by $\bm{R}^{\textrm{max}}_{t}=(R^{X}_{t},R^{Y}_{t})^{\prime}$, where $R^{X}_{t}$ and $R^{Y}_{t}$ denote the individual daily maximum rate of return on day $t$ for IndiGo and SpiceJet, respectively. Besides, let the corresponding local PWM estimates for the parameters of the underlying Gumbel distributions be given by $\\{\widehat{\mu}^{X}_{t},\widehat{\sigma}^{X}_{t}\\}$ and $\\{\widehat{\mu}^{Y}_{t},\widehat{\sigma}^{Y}_{t}\\}$, respectively. We transform $R^{X}_{t}$ and $R^{Y}_{t}$ to standard Gumbel margins following the location-scale transformations $\widetilde{R}^{X}_{t}=\frac{R^{X}_{t}-\widehat{\mu}^{X}_{t}}{\widehat{\sigma}^{X}_{t}},\quad\widetilde{R}^{Y}_{t}=\frac{R^{Y}_{t}-\widehat{\mu}^{Y}_{t}}{\widehat{\sigma}^{Y}_{t}},$ (9) and obtain the transformed random vectors $\widetilde{\bm{R}}^{\textrm{max}}_{t}=(\widetilde{R}^{X}_{t},\widetilde{R}^{Y}_{t})$. We repeat the same procedure for $R^{\textrm{min}}_{t}$’s and denote the joint daily minimum rate of return vector transformed to standard Gumbel margins by $\widetilde{\bm{R}}^{\textrm{min}}_{t}$’s. We explore the temporal extremal dependence within the time series $\\{\widetilde{R}^{X}_{t}\\}$ and $\\{\widetilde{R}^{Y}_{t}\\}$ for both $\widetilde{\bm{R}}^{\textrm{max}}_{t}$’s and $\widetilde{\bm{R}}^{\textrm{min}}_{t}$’s through $F$-madogram as in (5). For all series, we observe that the estimated temporal $\chi$-measures are close to zero across low through high lags, and thus, we safely assume each of $\\{\widetilde{R}^{X}_{t}\\}$ and $\\{\widetilde{R}^{Y}_{t}\\}$ to be independent across $t$ for both $\widetilde{\bm{R}}^{\textrm{max}}_{t}$’s and $\widetilde{\bm{R}}^{\textrm{min}}_{t}$’s. To explore the extremal dependence between the components of $\widetilde{\bm{R}}^{\textrm{max}}_{t}$’s and $\widetilde{\bm{R}}^{\textrm{min}}_{t}$’s, we use $F$-madogram similar to its use for exploring temporal extremal dependence. We obtain the empirical $\chi$-measure for $\widetilde{\bm{R}}^{\textrm{max}}_{t}$’s and $\widetilde{\bm{R}}^{\textrm{min}}_{t}$’s to be 0.3516 and 0.3914, respectively. Under the null hypothesis of independence between the components of $\widetilde{\bm{R}}_{t}$, the bootstrap-based critical value is 0.0646 based on a test of level 0.95. 
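A minimal sketch of how such an empirical $\chi$ estimate and its independence cutoff can be computed is given below. The rank-based $F$-madogram estimator mirrors relation (5); the permutation scheme for the cutoff is an assumption made for illustration, since the exact resampling design behind the 0.0646 value is not spelled out in the text.

```python
import numpy as np

def chi_fmadogram(u, v):
    """F-madogram estimate of chi for one bivariate sample (components u, v)."""
    fu = np.argsort(np.argsort(u)) / (len(u) - 1.0)   # empirical CDF transforms (ranks)
    fv = np.argsort(np.argsort(v)) / (len(v) - 1.0)
    nu = 0.5 * np.mean(np.abs(fu - fv))               # F-madogram
    return 2.0 - (1.0 + 2.0 * nu) / (1.0 - 2.0 * nu)  # same relation as eq. (5)

rng = np.random.default_rng(0)

def independence_cutoff(u, v, level=0.95, B=2000):
    """Approximate null quantile of chi-hat under independence, by permuting one margin."""
    stats = [chi_fmadogram(u, rng.permutation(v)) for _ in range(B)]
    return np.quantile(stats, level)

# usage sketch with the (hypothetical) transformed series R_x, R_y for the two airlines:
# chi_hat = chi_fmadogram(R_x, R_y)        # ~0.35 for the RoR maxima reported in the paper
# cutoff  = independence_cutoff(R_x, R_y)  # compare chi_hat against this value
```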
Thus, we can safely claim that there is strong extremal dependence between the components of each of the $\widetilde{\bm{R}}^{\textrm{max}}_{t}$'s and $\widetilde{\bm{R}}^{\textrm{min}}_{t}$'s, and further, we assume the joint distribution of the components of each of the $\widetilde{\bm{R}}^{\textrm{max}}_{t}$'s and $\widetilde{\bm{R}}^{\textrm{min}}_{t}$'s to be the BHR distribution with CDFs of the form (6).

## 4 Methodology

In this section, we discuss the Likelihood Ratio Test (LRT) and the Modified Information Criterion (MIC) to detect changepoints in the BHR distribution. From the data, we can estimate the model parameters based on different possible choices of the changepoint and subsequently, after plugging in the estimates within the likelihood function, we can calculate $Z^{\prime}_{T}$ and $S^{\prime}_{T}$. The tests are not distribution-free and, in a finite-sample setting, they depend on the sample size. However, as long as we can obtain the critical values empirically, we can use these tests. In Section 5.1, we discuss obtaining the critical values empirically. This strategy in the context of using LRT and MIC for changepoint estimation is not new. For example, [83] and [73] follow the same strategy for obtaining the critical values in the context of using LRT and related tests. By an abuse of notation, we denote both $\widetilde{\bm{R}}^{\textrm{max}}_{t}$ and $\widetilde{\bm{R}}^{\textrm{min}}_{t}$ by the generic notation $\widetilde{\bm{R}}_{t}$ in this section.

Let the transformed daily maximum/minimum rate of return vectors $\{\widetilde{\bm{R}}_{t}=(\widetilde{R}^{X}_{t},\widetilde{R}^{Y}_{t}),t=1,\ldots,T\}$ in (9) be a sequence of independent observations from the BHR distribution with dependence-related parameters $\{\Lambda_{t},t=1,\ldots,T\}$, respectively. We are interested in testing for changes in the parameters $\Lambda_{t}$, i.e., our null and alternative hypotheses of interest are
$H_{0}:\Lambda_{1}=\cdots=\Lambda_{T}=\Lambda,\qquad H_{A}:\Lambda_{1}=\cdots=\Lambda_{\tau}\neq\Lambda_{\tau+1}=\cdots=\Lambda_{T-1}=\Lambda_{T}$
for a changepoint $\tau$, if it exists. Then, under $H_{0}$, the log-likelihood is
$\log L_{H_{0}}(\Lambda)=\sum_{t=1}^{T}\log\left[h_{\Lambda}\left(\widetilde{R}^{X}_{t},\widetilde{R}^{Y}_{t}\right)\right],$ (10)
where $h_{\Lambda}(\cdot,\cdot)$ is as in (7), and the corresponding maximum likelihood estimate (MLE) of $\Lambda$, say $\widehat{\Lambda}$, is obtained by solving the score equation
$\frac{\partial}{\partial\Lambda}\left(\log L_{H_{0}}(\Lambda)\right)=0.$ (11)
Under the alternative hypothesis $H_{A}$, the log-likelihood function is
$\log L_{H_{A}}(\Lambda_{1},\Lambda_{T},\tau)=\sum_{t=1}^{\tau}\log\left[h_{\Lambda_{1}}\left(\widetilde{R}^{X}_{t},\widetilde{R}^{Y}_{t}\right)\right]+\sum_{t=\tau+1}^{T}\log\left[h_{\Lambda_{T}}\left(\widetilde{R}^{X}_{t},\widetilde{R}^{Y}_{t}\right)\right],$ (12)
and we obtain the MLEs of $\Lambda_{1}$ and $\Lambda_{T}$, say $\widehat{\Lambda}_{1}$ and $\widehat{\Lambda}_{T}$ respectively, by solving the score equations
$\frac{\partial}{\partial\Lambda_{1}}\left(\log L_{H_{A}}(\Lambda_{1},\Lambda_{T},\tau)\right)=0,\qquad\frac{\partial}{\partial\Lambda_{T}}\left(\log L_{H_{A}}(\Lambda_{1},\Lambda_{T},\tau)\right)=0.$ (13)
The score equations (11) and (13) require nonlinear optimization and we use the function fbvevd from the R package evd [77] to obtain the MLEs.

### 4.1 Likelihood Ratio Test (LRT)

The most commonly used test for changepoint detection problems is the LRT.
Here we assume that the underlying data are independent across time. The test works in the following way. Consider a timepoint $\tau$ between $1$ and $T$ at which a change occurs. We reject our null hypothesis, i.e., no changepoint, if we observe a high value of twice the log-likelihood ratio, for a fixed $\tau$, given by
$\textrm{LR}(\tau)=-2\left\{\log L_{H_{0}}(\widehat{\Lambda})-\log L_{H_{A}}(\widehat{\Lambda}_{1},\widehat{\Lambda}_{T})\right\},$ (14)
where $\log L_{H_{0}}$ and $\log L_{H_{A}}$ are given by (10) and (12), respectively. Further, considering a range of possible values of $\tau$, the alternative hypothesis is preferred if $\textrm{LR}(\tau)$ is high for any single $\tau$, and the related LRT statistic is given by $Z_{T}=\max_{1\leq\tau<T}\textrm{LR}(\tau)$. If $\tau$ is small, we do not have sufficient data to obtain the MLE $\widehat{\Lambda}_{1}$. Similarly, for $\tau$ close to $T$, $\widehat{\Lambda}_{T}$ would be unstable due to insufficient data. Thus, to avoid high uncertainty of the estimates, [64] suggested a modified version of $Z_{T}$, in other words the trimmed LRT statistic $Z^{\prime}_{T}$, given by
$Z^{\prime}_{T}=\max_{\tau_{0}<\tau<T-\tau_{0}}\textrm{LR}(\tau),\quad\textrm{where}\quad\tau_{0}=2\lfloor\log(T)\rfloor.$ (15)
While other choices of $\tau_{0}$ have been proposed in the literature (e.g., [60] suggested $\tau_{0}=\lfloor\log T\rfloor^{2}$), we stick to the choice in (15). Once we reject our null hypothesis, the estimated changepoint is
$\widehat{\tau}_{\textrm{LRT}}=\operatorname*{arg\,max}_{\tau_{0}<\tau<T-\tau_{0}}\textrm{LR}(\tau).$ (16)
For a given level of significance $\alpha$, we reject our null hypothesis $H_{0}$ if $Z^{\prime}_{T}>c_{\alpha,T}$, where $c_{\alpha,T}$ is the corresponding critical value. In Section 5, we numerically obtain $c_{\alpha,T}$ for different choices of $T$ and $\alpha$, and different true values of the dependence-related parameter $\Lambda$.

### 4.2 Modified Information Criterion (MIC)

The Modified Information Criterion (MIC) was proposed by [11]. [73] pointed out that MIC performs better than LRT in detecting changepoints occurring near the very beginning or the very end of the data sequence. Here again we assume that the underlying data are independent across time. The idea of MIC is as follows. Similar to the case of LRT, consider an integer $\tau$ between $1$ (included) and $T$. If a change occurs at any such $\tau$, we reject our null hypothesis $H_{0}$, i.e., no changepoint. Under $H_{0}$, we have $\tau=T$, and then the MIC is defined as
$\textrm{MIC}(T)=-2\log L_{H_{0}}(\widehat{\Lambda})+\textrm{dim}(\Lambda)\log(T),$
where $\widehat{\Lambda}$ is the solution of (11). For the BHR distribution, the dimension of the parameter space is $\textrm{dim}(\Lambda)=1$. For $1\leq\tau<T$, the MIC is defined as
$\textrm{MIC}(\tau)=-2\log L_{H_{A}}(\widehat{\Lambda}_{1},\widehat{\Lambda}_{T})+\left[2\,\textrm{dim}(\Lambda_{1})+\left(\frac{2\tau}{T}-1\right)^{2}\right]\log(T);$ (17)
here, for our model, $\textrm{dim}(\Lambda_{1})=1$. If $\textrm{MIC}(T)\geq\min_{1\leq\tau<T}\textrm{MIC}(\tau)$, we select the model with a changepoint. Then, the estimated changepoint $\widehat{\tau}$ satisfies the equality $\textrm{MIC}(\widehat{\tau})=\min_{1\leq\tau<T}\textrm{MIC}(\tau)$.
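The scan over candidate changepoints implied by (14)-(17) can be sketched as follows. This is an illustrative outline only: `nll_bhr` stands for an assumed helper that returns the negative log-likelihood of a BHR sample minimized over $\Lambda$ (e.g., wrapping the evd-type fits mentioned earlier), and the trimming of (15) is applied to both scans; the trimmed MIC test statistic discussed next uses the same ingredients.

```python
import numpy as np

def scan_changepoint(sample, nll_bhr):
    """Scan over candidate changepoints tau for the BHR dependence parameter.
    `sample`: (T, 2) array of Gumbel-margin pairs; `nll_bhr(x)`: negative log-likelihood
    of x minimized over Lambda (an assumed helper, e.g. wrapping an evd-type fit)."""
    T = len(sample)
    tau0 = 2 * int(np.floor(np.log(T)))          # trimming of eq. (15), applied to both scans
    nll_full = nll_bhr(sample)                   # -log L_H0 at the MLE, eqs. (10)-(11)
    taus = np.arange(tau0 + 1, T - tau0)
    lr = np.empty(len(taus))
    mic = np.empty(len(taus))
    for i, tau in enumerate(taus):
        nll_split = nll_bhr(sample[:tau]) + nll_bhr(sample[tau:])   # -log L_HA, eqs. (12)-(13)
        lr[i] = 2.0 * (nll_full - nll_split)                        # LR(tau), eq. (14)
        mic[i] = 2.0 * nll_split + (2.0 + (2.0 * tau / T - 1.0) ** 2) * np.log(T)  # eq. (17)
    return {"Z": lr.max(), "tau_LRT": int(taus[lr.argmax()]),       # eqs. (15)-(16)
            "MIC_min": mic.min(), "tau_MIC": int(taus[mic.argmin()])}
```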
[11] also suggested a test statistic $S_{T}$ based on $\textrm{MIC}(T)$ and $\textrm{MIC}(\tau)$ to detect one changepoint, as follows:
$S_{T}=\textrm{MIC}(T)-\min_{1\leq\tau<T}\textrm{MIC}(\tau)+\textrm{dim}(\Lambda)\log(T).$
For our model, $\textrm{dim}(\Lambda)=1$, and hence $S_{T}$ can be written for our case as
$S_{T}=-2\log L_{H_{0}}(\widehat{\Lambda})-\min_{1\leq\tau<T}\left[-2\log L_{H_{A}}(\widehat{\Lambda}_{1},\widehat{\Lambda}_{T})+\left\{\left(\frac{2\tau}{T}-1\right)^{2}-1\right\}\log(T)\right].$
If changepoints occur at the very beginning or the very end, then we do not have sufficient data to obtain the MLEs $\widehat{\Lambda}_{1}$ or $\widehat{\Lambda}_{T}$. In these cases, [64] suggested a trimmed version of $S_{T}$ as
$S^{\prime}_{T}=-2\log L_{H_{0}}(\widehat{\Lambda})-\min_{\tau_{0}<\tau<T-\tau_{0}}\left[-2\log L_{H_{A}}(\widehat{\Lambda}_{1},\widehat{\Lambda}_{T})+\left\{\left(\frac{2\tau}{T}-1\right)^{2}-1\right\}\log(T)\right],$ (18)
where $\tau_{0}=2\lfloor\log(T)\rfloor$. Applications of $S^{\prime}_{T}$ can be found in [83]. Once we reject our null hypothesis, the estimated changepoint is
$\widehat{\tau}_{\textrm{MIC}}=\operatorname*{arg\,min}_{\tau_{0}<\tau<T-\tau_{0}}\textrm{MIC}(\tau).$ (19)
We reject our null hypothesis if $S^{\prime}_{T}>c_{\alpha,T}$, where $c_{\alpha,T}$ is the critical value for a given level of significance $\alpha$. We numerically obtain $c_{\alpha,T}$ for different choices of $T$, $\alpha$, and the true dependence-related parameter $\Lambda$, and we tabulate them in Section 5.

## 5 Simulation Study

Here we discuss obtaining critical values for the LRT statistic $Z^{\prime}_{T}$ in (15) and the MIC statistic $S^{\prime}_{T}$ in (18) and compare their performances in terms of power. We derive the critical values and power numerically due to their intractable analytic expressions.

### 5.1 Critical Values

We numerically evaluate the critical values $c_{\alpha,T}$ for a few specific choices of the true parameter value $\Lambda$ of the BHR distribution, levels of significance $\alpha$, and sample sizes $T$, to illustrate the procedure for obtaining the critical values under a general set-up as well as to study their patterns. We consider three choices of $\Lambda\in\{0.5,2,4\}$ under the null distribution, four choices of $T\in\{50,100,150,200\}$, and $\alpha\in\{0.01,0.05,0.1\}$. For a choice of the true value of $\Lambda$ and $T$, we first obtain $B=10^{4}$ samples, each comprising $T$ IID observations from the BHR distribution with dependence-related parameter $\Lambda$, say $\mathcal{R}^{(b)}=\{\widetilde{\bm{R}}^{(b)}_{1},\ldots,\widetilde{\bm{R}}^{(b)}_{T}\}$ for $b=1,\ldots,B$. Then, we obtain $\widehat{\Lambda}^{(b)}$, the MLE of $\Lambda$ based on $\mathcal{R}^{(b)}$, following (11), and then calculate $\log L_{H_{0}}(\widehat{\Lambda}^{(b)})$ based on $\mathcal{R}^{(b)}$, following (10). Further, for each $\tau$ between $\tau_{0}=2\lfloor\log(T)\rfloor$ and $T-\tau_{0}$ (excluding the endpoints), we divide the sample $\mathcal{R}^{(b)}$ into two parts $\mathcal{R}^{(b,1)}=\{\widetilde{\bm{R}}^{(b)}_{1},\ldots,\widetilde{\bm{R}}_{\tau}^{(b)}\}$ and $\mathcal{R}^{(b,T)}=\{\widetilde{\bm{R}}_{\tau+1}^{(b)},\ldots,\widetilde{\bm{R}}^{(b)}_{T}\}$.
We then assume that the observations in $\mathcal{R}^{(b,1)}$ and $\mathcal{R}^{(b,T)}$ follow two different BHR distributions with parameters $\Lambda_{1}$ and $\Lambda_{T}$, respectively, and obtain the MLEs $\widehat{\Lambda}^{(b)}_{1}$ and $\widehat{\Lambda}^{(b)}_{T}$ following (13). Based on $\mathcal{R}^{(b,1)}$, $\mathcal{R}^{(b,T)}$, $\widehat{\Lambda}^{(b)}_{1}$, and $\widehat{\Lambda}^{(b)}_{T}$, we calculate $\log L_{H_{A}}(\widehat{\Lambda}^{(b)}_{1},\widehat{\Lambda}^{(b)}_{T})$ following (12). We repeat the above procedure for each $\tau_{0}<\tau<T-\tau_{0}$ and calculate $Z^{\prime}_{T}$ following (15); we call it $Z^{{}^{\prime}(b)}_{T}$. We repeat the whole procedure for each $b=1,\ldots,B$ and obtain $\mathcal{Z}_{T}=\\{Z^{{}^{\prime}(1)}_{T},\ldots,Z^{{}^{\prime}(B)}_{T}\\}$. Similar to LRT statistics, based on $\log L_{H_{0}}(\widehat{\Lambda}^{(b)})$ and $\log L_{H_{A}}(\widehat{\Lambda}^{(b)}_{1},\widehat{\Lambda}^{(b)}_{T})$ for all $\tau_{0}<\tau<T-\tau_{0}$, we calculate $S^{\prime}_{T}$ following (18); we call it $S^{{}^{\prime}(b)}_{T}$. We repeat the whole procedure for each $b=1,\ldots,B$ and obtain $\mathcal{S}_{T}=\\{S^{{}^{\prime}(1)}_{T},\ldots,S^{{}^{\prime}(B)}_{T}\\}$. Finally, the critical values $c_{\alpha,T}$ for LRT and MIC are obtained by the $100(1-\alpha)$-th percentiles of $\mathcal{Z}_{T}$ and $\mathcal{S}_{T}$, respectively. Due to the inherent randomness of the above-explained sampling- based procedure, we also calculate the standard error of $c_{\alpha,T}$ for LRT and MIC using a straightforward nonparametric bootstrap procedure from $\mathcal{Z}_{T}$ and $\mathcal{S}_{T}$, as otherwise repeating the parametric bootstrap procedure several times would be computationally challenging. The critical values for both LRT and MIC and their (Monte Carlo) standard errors (S.E.) are presented in Table 1. For smaller $\alpha$, $c_{\alpha,T}$ are naturally higher. For a specific $\Lambda$ and $\alpha$, $c_{\alpha,T}$ are generally higher as $T$ increases; however, the underlying high S.E.s indicate that such differences are generally insignificant. The S.E.s of $c_{\alpha,T}$ are high for small $\alpha$ for all $\Lambda$ and $T$ for both LRT and MIC. By construction, the $c_{\alpha,T}$ values are smaller for MIC than that for LRT for any $\Lambda$, $T$, and $\alpha$. Table 1: Critical values of LRT and MIC for different levels of significance $\alpha$ and different sample sizes $T$. LRT | | | | | | | | | | | ---|---|---|---|---|---|---|---|---|---|---|--- $T$ | $\Lambda$ | | $\alpha\,=\,0.01$ | $\alpha\,=\,0.05$ | $\alpha\,=\,0.1$ | $T$ | $\Lambda$ | | $\alpha\,=\,0.01$ | $\alpha\,=\,0.05$ | $\alpha\,=\,0.1$ 50 | 0.5 | Cutoff | 7.917 | 4.532 | 3.285 | 100 | 0.5 | Cutoff | 7.996 | 4.860 | 3.535 | | S.E. | 0.216 | 0.084 | 0.049 | | | S.E. | 0.236 | 0.084 | 0.052 | 2 | Cutoff | 8.737 | 5.617 | 4.335 | | 2 | Cutoff | 9.529 | 5.978 | 4.545 | | S.E. | 0.226 | 0.085 | 0.062 | | | S.E. | 0.177 | 0.097 | 0.078 | 4 | Cutoff | 8.580 | 5.589 | 4.238 | | 4 | Cutoff | 9.225 | 5.976 | 4.612 | | S.E. | 0.181 | 0.085 | 0.059 | | | S.E. | 0.231 | 0.089 | 0.053 150 | 0.5 | Cutoff | 7.966 | 4.948 | 3.654 | 200 | 0.5 | Cutoff | 8.036 | 4.968 | 3.698 | | S.E. | 0.185 | 0.086 | 0.053 | | | S.E. | 0.210 | 0.087 | 0.055 | 2 | Cutoff | 9.405 | 6.178 | 4.738 | | 2 | Cutoff | 9.676 | 6.155 | 4.807 | | S.E. | 0.278 | 0.087 | 0.069 | | | S.E. | 0.207 | 0.087 | 0.059 | 4 | Cutoff | 9.369 | 6.112 | 4.692 | | 4 | Cutoff | 9.588 | 6.263 | 4.786 | | S.E. | 0.171 | 0.095 | 0.067 | | | S.E. 
| 0.238 | 0.092 | 0.079 MIC | | | | | | | | | | | $T$ | $\Lambda$ | | $\alpha\,=\,0.01$ | $\alpha\,=\,0.05$ | $\alpha\,=\,0.1$ | $T$ | $\Lambda$ | | $\alpha\,=\,0.01$ | $\alpha\,=\,0.05$ | $\alpha\,=\,0.1$ 50 | 0.5 | Cutoff | 6.723 | 3.553 | 2.406 | 100 | 0.5 | Cutoff | 6.181 | 3.392 | 2.238 | | S.E. | 0.236 | 0.081 | 0.052 | | | S.E. | 0.191 | 0.099 | 0.044 | 2 | Cutoff | 7.510 | 4.540 | 3.292 | | 2 | Cutoff | 7.600 | 4.200 | 2.957 | | S.E. | 0.213 | 0.087 | 0.056 | | | S.E. | 0.257 | 0.082 | 0.053 | 4 | Cutoff | 7.385 | 4.559 | 3.273 | | 4 | Cutoff | 7.434 | 4.161 | 2.953 | | S.E. | 0.165 | 0.081 | 0.051 | | | S.E. | 0.167 | 0.061 | 0.056 150 | 0.5 | Cutoff | 6.016 | 3.271 | 2.179 | 200 | 0.5 | Cutoff | 5.440 | 3.008 | 2.017 | | S.E. | 0.193 | 0.064 | 0.045 | | | S.E. | 0.205 | 0.068 | 0.039 | 2 | Cutoff | 6.896 | 3.981 | 2.806 | | 2 | Cutoff | 7.031 | 3.747 | 2.508 | | S.E. | 0.252 | 0.073 | 0.051 | | | S.E. | 0.249 | 0.079 | 0.052 | 4 | Cutoff | 7.164 | 4.029 | 2.749 | | 4 | Cutoff | 6.778 | 3.743 | 2.535 | | S.E. | 0.200 | 0.083 | 0.060 | | | S.E. | 0.228 | 0.069 | 0.054 ### 5.2 Power Comparison In this subsection, we compare the power of LRT and MIC numerically under different scenarios of the alternative hypothesis $H_{A}$, i.e., we fix the changepoint $\tau$, the values of the dependence-related parameter of the BHR distribution before and after the changepoint, i.e., $\Lambda_{1}$ and $\Lambda_{T}$. We choose the sample sizes $T\in\\{50,200\\}$ and the levels of significance $\alpha\in\\{0.01,0.05,0.1\\}$. Then, for each $T$, we choose two different values of the changepoint $\tau=\lfloor\beta T\rfloor$, where $\beta\in\\{0.25,0.5\\}$. In each case, we consider the possible values of the dependence-related parameter before and after the changepoint as $\Lambda_{1}\in\\{0.5,2,4\\}$ and $\Lambda_{T}\in\\{0.5,1,1.5,\ldots,5\\}$, respectively. Under $H_{0}$, $\tau=T$ and the value of the dependence-related parameter under $H_{0}$ is the same as $\Lambda_{1}$. Hence, for the above choices, the critical values of LRT and MIC are obtained as in Section 5.1. For a choice of the true values of $\Lambda_{1}$, $\Lambda_{T}$, $\tau$, and $T$, we first obtain $B=10^{4}$ samples each comprising of $\tau$ IID observations from the BHR distribution with dependence-related parameter $\Lambda_{1}$, say $\mathcal{R}^{(b,1)}=\\{\widetilde{\bm{R}}^{(b)}_{1},\ldots,\widetilde{\bm{R}}_{\tau}^{(b)}\\}$ and another $T-\tau$ IID observations from the BHR distribution with parameter $\Lambda_{T}$, say $\mathcal{R}^{(b,T)}=\\{\widetilde{\bm{R}}_{\tau+1}^{(b)},\ldots,\widetilde{\bm{R}}^{(b)}_{T}\\}$, for $b=1,\ldots,B$. For each $b$, we thus obtain a combined sample of size $T$ given by $\mathcal{R}^{(b)}=\\{\widetilde{\bm{R}}^{(b)}_{1},\ldots,\widetilde{\bm{R}}^{(b)}_{T}\\}$. Under $H_{0}$, we assume the observations in $\mathcal{R}^{(b)}$ to be IID following a BHR distribution with a common parameter $\Lambda$. Following (11), we obtain the MLE of $\Lambda$ given by $\widehat{\Lambda}^{(b)}$ and then calculate $\log L_{H_{0}}(\widehat{\Lambda}^{(b)})$ based on $\mathcal{R}^{(b)}$, following (10). Subsequently, under $H_{A}$, we obtain the MLEs $\widehat{\Lambda}^{(b)}_{1}$ and $\widehat{\Lambda}^{(b)}_{T}$ following (13). Based on $\mathcal{R}^{(b,1)}$, $\mathcal{R}^{(b,T)}$, $\widehat{\Lambda}^{(b)}_{1}$, and $\widehat{\Lambda}^{(b)}_{T}$, we calculate $\log L_{H_{A}}(\widehat{\Lambda}^{(b)}_{1},\widehat{\Lambda}^{(b)}_{T})$ following (12). 
We repeat the above procedure for each $\tau_{0}<\tau<T-\tau_{0}$ and calculate $Z^{\prime}_{T}$ following (15); we call it $Z^{{}^{\prime}(b)}_{T}$. We repeat the whole procedure for each $b=1,\ldots,B$ and obtain $\mathcal{Z}_{T}=\\{Z^{{}^{\prime}(1)}_{T},\ldots,Z^{{}^{\prime}(B)}_{T}\\}$. Similar to LRT statistics, based on $\log L_{H_{0}}(\widehat{\Lambda}^{(b)})$ and $\log L_{H_{A}}(\widehat{\Lambda}^{(b)}_{1},\widehat{\Lambda}^{(b)}_{T})$ for all $\tau_{0}<\tau<T-\tau_{0}$, we calculate $S^{\prime}_{T}$ following (18); we call it $S^{{}^{\prime}(b)}_{T}$. We repeat the whole procedure for each $b=1,\ldots,B$ and obtain $\mathcal{S}_{T}=\\{S^{{}^{\prime}(1)}_{T},\ldots,S^{{}^{\prime}(B)}_{T}\\}$. Despite having a true value of $\tau$ under $H_{A}$, both LRT and MIC should be calculated based on maximizing the necessary terms in (15) and (18) over $\tau_{0}<\tau<T-\tau_{0}$. The critical values are obtained apriori following Section 5.1 and suppose we call the critical values $c_{\alpha,T}$ for LRT and MIC by $c^{\textrm{LRT}}_{\alpha,T}$ and $c^{\textrm{MIC}}_{\alpha,T}$, respectively. We thus can approximate their powers by $P_{\textrm{LRT}}=\frac{1}{B}\sum_{b=1}^{B}\textrm{I}\left(Z_{T}^{{}^{\prime}{(b)}}\geq c^{\textrm{LRT}}_{\alpha,T}\right),\quad P_{\textrm{MIC}}=\frac{1}{B}\sum_{b=1}^{B}\textrm{I}\left(S_{T}^{{}^{\prime}(b)}\geq c^{\textrm{MIC}}_{\alpha,T}\right),$ (20) where I($\cdot$) is an indicator function. Figure 3: Power curves of LRT and MIC under different choices of $T$, $\tau=\lfloor\beta T\rfloor$, $\Lambda_{1}$, $\Lambda_{T}$, and $\alpha$. The curves for LRT are presented by solid lines while the curves for MIC are presented by dashed lines. For each case, the curves from bottom to top, represented in black, red, and blue, are for $\alpha=0.01,0.05,0.1$, respectively. A higher value of power indicates a better testing procedure. We present the power curves in Figure 3. For both LRT and MIC, the power is higher for larger levels of significance $\alpha$ under all settings; this is obvious and follows directly from (20). When $T=50$, $\beta=0.25$, and $\Lambda_{1}=0.5$, the power curves of LRT and MIC for $\alpha=0.01$ are close to one only when $\Lambda_{T}\approx 3.5$. After changing $\beta=0.25$ to $\beta=0.5$ keeping other parameters fixed, both power curves for $\alpha=0.01$ are close to one when $\Lambda_{T}\approx 2.5$. Here, the availability of more observations for estimating $\Lambda_{1}$ allows higher power even when the difference between $\Lambda_{1}$ and $\Lambda_{T}$ remains the same. In general, the power of both LRT and MIC are higher for $\beta=0.5$ than for $\beta=0.25$ while keeping the other parameters fixed. This feature is more prominent when the sample size is small. Keeping other parameters fixed (with respect to the top-left panel), increasing the sample size from $T=50$ to $T=200$, the power increases significantly for both tests; both power curves for $\alpha=0.01$ are close to one when $\Lambda_{T}\approx 1.5$. A similar increasing pattern in power curves is visible for other values of $\Lambda_{1}$ and $\beta$. In all cases, naturally, the power is lowest when $\Lambda_{1}=\Lambda_{T}$. For $T=50$ and $\Lambda_{1}=2$, we observe that the power curves are asymmetric for $\Lambda_{T}<\Lambda_{1}$ and $\Lambda_{T}>\Lambda_{1}$. When $\Lambda_{T}<\Lambda_{1}$, the power values of LRT and MIC get closer to one faster than that in the case of $\Lambda_{T}>\Lambda_{1}$. For $T=200$, the power curves become more symmetric around $\Lambda_{1}$. 
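Before continuing with the comparison, we sketch the Monte Carlo machinery behind the critical values of Section 5.1 and the power approximation (20). Here `bhr_sampler` is a hypothetical helper returning an $n\times 2$ array of IID draws from the BHR distribution with a given dependence parameter (any exact simulation routine may be substituted), and `stat_fn` computes either $Z^{\prime}_{T}$ or $S^{\prime}_{T}$ from a sample; both names are illustrative and are not part of the paper's implementation.

```python
import numpy as np

def mc_critical_value(T, lam, alpha, bhr_sampler, stat_fn, B=10_000, seed=1):
    """c_{alpha,T}: the 100(1-alpha)-th percentile of the null Monte Carlo
    distribution of the chosen trimmed statistic (Z'_T or S'_T)."""
    rng = np.random.default_rng(seed)
    stats = np.array([stat_fn(bhr_sampler(T, lam, rng)) for _ in range(B)])
    return float(np.quantile(stats, 1.0 - alpha))

def mc_power(T, tau, lam1, lamT, bhr_sampler, stat_fn, crit, B=10_000, seed=2):
    """Empirical power as in (20): the fraction of simulated changepoint
    samples whose statistic reaches the null critical value `crit`."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(B):
        # first tau observations with Lambda_1, the remaining T - tau with Lambda_T
        data = np.vstack([bhr_sampler(tau, lam1, rng),
                          bhr_sampler(T - tau, lamT, rng)])
        hits += int(stat_fn(data) >= crit)
    return hits / B
```

The same pattern, with the sample drawn entirely under the null, yields the standard errors reported in Table 1 via a nonparametric bootstrap of the simulated statistics.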
Returning to Figure 3, the extent of the asymmetry also changes with the value of $\Lambda_{1}$; for example, when $\Lambda_{1}=0.5$ and $T=200$, the power curves reach near one when $\Lambda_{T}-\Lambda_{1}\approx 1$, while for $\Lambda_{1}=4$ and $T=200$, all the power curves become close to one only when $\Lambda_{1}-\Lambda_{T}\approx 2$. In general, MIC appears to be more powerful than LRT. This characteristic is more prominent when $T=200$. For both $T=50$ and $T=200$, the differences between the power curves of LRT and MIC are generally more prominent when $\beta=0.5$ compared to the case of $\beta=0.25$. ### 5.3 Consistency of the Estimator of Changepoint There is a vast literature on the asymptotic theoretical guarantees of both LRT [20] and MIC [11]. They hold in general under certain regularity conditions. However, despite the availability of asymptotic results, an analytical exposition of finite sample properties is often intricate because of the non-existence of any closed-form expression for the changepoint estimators $\widehat{\tau}_{\textrm{LRT}}$ in (16) and $\widehat{\tau}_{\textrm{MIC}}$ in (19). As a result, in this subsection, we numerically calculate the probabilities of the estimators $\widehat{\tau}_{\textrm{LRT}}$ and $\widehat{\tau}_{\textrm{MIC}}$ being within a small neighborhood of the true value of the changepoint $\tau$, i.e., $\textrm{P}(|\widehat{\tau}-\tau|\leq\delta)$ for $\widehat{\tau}=\widehat{\tau}_{\textrm{LRT}},\widehat{\tau}_{\textrm{MIC}}$ and $\delta=1,2,3$. We calculate this probability under different scenarios of the alternative hypothesis $H_{A}$, i.e., we fix the changepoint $\tau$ and the values $\Lambda_{1}$ and $\Lambda_{T}$ of the dependence-related parameter of the BHR distribution before and after the changepoint. We choose the sample sizes $T\in\\{50,200\\}$. Then, for each $T$, we choose two different values of the changepoint $\tau=\lfloor\beta T\rfloor$, where $\beta\in\\{0.25,0.5\\}$. In each case, we consider the possible values of the dependence-related parameter before and after the changepoint as $\Lambda_{1}\in\\{0.5,1,1.5,\ldots,5\\}$ and $\Lambda_{T}\in\\{0.5,2,4\\}$, respectively. We ignore the cases when $\Lambda_{1}=\Lambda_{T}$ as they correspond to no-changepoint scenarios. We present the curves of $\textrm{P}(|\widehat{\tau}-\tau|\leq\delta)$ in Figure 4. As $\delta$ increases, $\textrm{P}(|\widehat{\tau}-\tau|\leq\delta)$ naturally also increases for $\widehat{\tau}=\widehat{\tau}_{\textrm{LRT}},\widehat{\tau}_{\textrm{MIC}}$ under all settings. When $T=50,\beta=0.25$, and $\Lambda_{T}=0.5$ (top-left panel), the values of $\textrm{P}(|\widehat{\tau}-\tau|\leq\delta)$ are close to each other for LRT and MIC; for small differences between $\Lambda_{1}$ and $\Lambda_{T}$, LRT performs slightly better than MIC, and the difference fades away as the difference between $\Lambda_{1}$ and $\Lambda_{T}$ increases. Upon changing $\Lambda_{T}=0.5$ to $\Lambda_{T}=2$, the difference in $\textrm{P}(|\widehat{\tau}-\tau|\leq\delta)$ between LRT and MIC is more prominent, and LRT generally outperforms MIC. Further, changing $\Lambda_{T}=2$ to $\Lambda_{T}=4$, we see a less prominent difference in performance between LRT and MIC.
When $\beta=0.5$ (second row of Figure 4) instead of $\beta=0.25$ as in the previous cases, we notice an opposite pattern in terms of the performances of LRT and MIC, i.e., MIC performs better than LRT for small differences between $\Lambda_{1}$ and $\Lambda_{T}$, and for large values of $|\Lambda_{1}-\Lambda_{T}|$, both methods perform similarly. When $T=200$, both LRT and MIC perform equally in general, and for $\beta=0.5$, MIC outperforms LRT for small values of $|\Lambda_{1}-\Lambda_{T}|$. Theoretically, the probability $\textrm{P}(|\widehat{\tau}-\tau|\leq\delta)$ should converge to one as $T\uparrow\infty$ for any positive $|\Lambda_{1}-\Lambda_{T}|$; however, we see that for $|\Lambda_{1}-\Lambda_{T}|=0.5$, the probability is less than 0.5 for all settings we consider. However, corresponding to the bottom-left panel of Figure 4, for example, we observe that the inclusion probability is close to one for $\delta=3$ when $|\Lambda_{1}-\Lambda_{T}|=4.5$. This observation indicates that the convergence underlying the asymptotic consistency holds only slowly while increasing $T$, and the difference in $\Lambda_{1}$ and $\Lambda_{T}$ plays a crucial role here. We further study (not shown) the bias and mean squared error (MSE) in estimating the changepoint $\tau$ using $\widehat{\tau}_{\textrm{LRT}}$ and $\widehat{\tau}_{\textrm{MIC}}$. We observe that MIC produces significantly smaller values of MSE than LRT, and the difference is more prominent when $T=200$ compared to the case of $T=50$. Both bias and MSE are higher when the difference between $\Lambda_{1}$ and $\Lambda_{T}$ is small. We observe that the MSE decreases at a faster rate with $|\Lambda_{1}-\Lambda_{T}|$ when $\Lambda_{1}<\Lambda_{T}$ compared to the case when $\Lambda_{1}>\Lambda_{T}$. This pattern is consistent across the choices of $\beta$. For $\beta=0.25$, we observe positive biases for both LRT and MIC across all choices of $\Lambda_{1}$ and $\Lambda_{T}$. However, for $\beta=0.5$, we observe positive biases when $\Lambda_{1}>\Lambda_{T}$ and negative biases when $\Lambda_{1}<\Lambda_{T}$, irrespective of the method and the value of $T$. The absolute bias based on MIC is slightly higher than that for LRT when $\beta=0.25$, and the reverse is observable when $\beta=0.5$. Figure 4: The probabilities $\textrm{P}(|\widehat{\tau}-\tau|\leq\delta)$ for $\widehat{\tau}=\widehat{\tau}_{\textrm{LRT}},\widehat{\tau}_{\textrm{MIC}}$ and $\delta\in\\{1,2,3\\}$ under different choices of $T$, $\tau=\lfloor\beta T\rfloor$, $\Lambda_{1}$, and $\Lambda_{T}$. The curves for LRT are presented by solid lines while the curves for MIC are presented by dashed lines. For each case, the curves from bottom to top, represented in black, red, and blue, are for $\delta=1,2,$ and 3, respectively. A higher value of $\textrm{P}(|\widehat{\tau}-\tau|\leq\delta)$ indicates a better accuracy in the estimation of a changepoint. ## 6 Data Application ### 6.1 Local Probability Weighted Moments Estimation In Figure 2, we observe the nonstationary nature of the daily maximum/minimum rate of return (RoR) for both IndiGo and SpiceJet airlines, and in Section 3.3, we discuss considering the temporally-varying location and scale parameters of the underlying Gumbel distributions. 
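A minimal sketch of the marginal fitting and standardization step described next is given below. For a Gumbel sample, the probability weighted moments estimators are $\widehat{\sigma}=(2b_{1}-b_{0})/\log 2$ and $\widehat{\mu}=b_{0}-\gamma\widehat{\sigma}$, with $b_{0},b_{1}$ the first two PWMs and $\gamma$ the Euler–Mascheroni constant. The moving window of half-width `h` and the unweighted window are illustrative assumptions (the exact local scheme is specified in Section 3.3, not reproduced here), and the final step assumes that (9) is the usual location–scale transformation to standard Gumbel margins.

```python
import numpy as np

EULER_GAMMA = 0.5772156649

def gumbel_pwm(x):
    """Probability weighted moments fit (mu, sigma) for a Gumbel sample."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    b0 = x.mean()
    b1 = np.sum((np.arange(n) / (n - 1.0)) * x) / n   # first PWM, unbiased form
    sigma = (2.0 * b1 - b0) / np.log(2.0)
    mu = b0 - EULER_GAMMA * sigma
    return mu, sigma

def local_gumbel_standardize(x, h=30):
    """Moving-window PWM fits and transformation to standard Gumbel margins
    (the window half-width h is an illustrative choice)."""
    x = np.asarray(x, dtype=float)
    T = len(x)
    mu_t, sigma_t = np.empty(T), np.empty(T)
    for t in range(T):
        lo, hi = max(0, t - h), min(T, t + h + 1)
        mu_t[t], sigma_t[t] = gumbel_pwm(x[lo:hi])
    return (x - mu_t) / sigma_t, mu_t, sigma_t
```

Applying this to each of the four series (maximum/minimum RoR for each airline) yields the temporally varying location and scale profiles discussed next.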
We estimate these location and scale parameters using local probability weighted moments estimation and obtain $\\{\widehat{\mu}^{X}_{t},\widehat{\sigma}^{X}_{t}\\}$, the location and scale parameters for IndiGo, and $\\{\widehat{\mu}^{Y}_{t},\widehat{\sigma}^{Y}_{t}\\}$, the location and scale parameters for SpiceJet, respectively, for each of the daily maximum/minimum RoR series. We present the time series of those estimates in Figure 5. For both daily maximum/minimum RoR series, the estimated location and scale profiles are higher during the first half of the COVID-19 period and lower during the second half for both airlines. This observation indicates that the median daily maximum/minimum RoR and the variability are higher during the first half of the observation period, i.e., the overall volatility is high. Later, the median daily absolute maximum/minimum RoR and the variability drop. For the daily maximum rate of return, all profiles attain peaks during August 2020; however, we observe a decreasing trend afterward for both profiles of SpiceJet, while the profiles stabilize afterward for IndiGo. This pattern indicates that the median daily maximum RoR is unstable after a volatile first half of the COVID-19 period. Both estimated location profiles $\\{\widehat{\mu}^{X}_{t},\widehat{\mu}^{Y}_{t}\\}$ remain positive throughout the COVID-19 period. For the daily minimum rate of return, the estimated location profiles get closer to zero until July 2020 and then remain stable, while the profile for IndiGo remains closer to zero throughout the study period. The scale profiles for both airlines decrease until July 2020 and remain stable afterwards. Lower values of the scale profile for IndiGo after July 2020 indicate lower volatility. While our main focus in this paper and the next subsection is on the shifts in the dependence structure, the patterns observable in Figure 5 shed light on the marginal behavior of the daily maximum/minimum RoR. Figure 5: Local Probability Weighted Moments estimates of the temporally-varying location (left) and scale (right) parameters of the Gumbel distributions fitted to the daily maximum (top) and minimum (bottom) rates of return for IndiGo and SpiceJet airlines. ### 6.2 Changepoint Estimation Based on the estimated location and scale parameters of the marginal Gumbel distributions in Section 6.1, we transform the daily maximum/minimum RoR series to standard Gumbel margins according to (9). We denote the transformed data for both maximum/minimum RoR series by the generic notation $\mathcal{R}=\\{\widetilde{\bm{R}}_{t}=(\widetilde{R}^{X}_{t},\widetilde{R}^{Y}_{t}),t=1,\ldots,T\\}$. We first calculate the LRT and MIC statistics as follows. Under $H_{0}$, we assume the observations in $\mathcal{R}$ to be IID following a BHR distribution with a common parameter $\Lambda$. Following (11), we obtain the MLE of $\Lambda$ given by $\widehat{\Lambda}$ and then calculate $\log L_{H_{0}}(\widehat{\Lambda})$ based on $\mathcal{R}$, following (10). Subsequently, under $H_{A}$, for each possible value of the changepoint $\tau$ such that $\tau_{0}<\tau<T-\tau_{0}$, where $\tau_{0}=2\lfloor\log(T)\rfloor$, we obtain the MLEs $\widehat{\Lambda}_{1}$ and $\widehat{\Lambda}_{T}$ following (13), based on $\mathcal{R}^{(1)}=\\{\widetilde{\bm{R}}_{t},t=1,\ldots,\tau\\}$ and $\mathcal{R}^{(T)}=\\{\widetilde{\bm{R}}_{t},t=\tau+1,\ldots,T\\}$, respectively.
Based on $\mathcal{R}^{(1)}$, $\mathcal{R}^{(T)}$, $\widehat{\Lambda}_{1}$, and $\widehat{\Lambda}_{T}$, we calculate $\log L_{H_{A}}(\widehat{\Lambda}_{1},\widehat{\Lambda}_{T})$ following (12). Accordingly, we calculate $Z^{\prime}_{T}$ following (15); we call it $Z^{{}^{\prime}(obs)}_{T}$. Similarly, we calculate $S^{\prime}_{T}$ following (18); we call it $S^{{}^{\prime}(obs)}_{T}$. Here, we deal with a real dataset, and thus, the true parameter values of the underlying distribution are unknown. Hence, for determining the critical values for both tests and the corresponding $p$-values, we use parametric bootstrapping. For each $b=1,\ldots,B$, where we choose $B=2000$, we draw an IID sample of size $T$, say $\mathcal{R}^{(b)}$, from the BHR distribution with parameter $\widehat{\Lambda}$. We repeat the same procedure of obtaining $Z^{\prime}_{T}$ and $S^{\prime}_{T}$ based on $\mathcal{R}^{(b)}$ and suppose we call them by $Z^{{}^{\prime}(b)}_{T}$ and $S^{{}^{\prime}(b)}_{T}$, respectively. We repeat the whole procedure for each $b=1,\ldots,B$ and obtain $\mathcal{Z}_{T}=\\{Z^{{}^{\prime}(1)}_{T},\ldots,Z^{{}^{\prime}(B)}_{T}\\}$ and $\mathcal{S}_{T}=\\{S^{{}^{\prime}(1)}_{T},\ldots,S^{{}^{\prime}(B)}_{T}\\}$. We calculate the critical values based on the $100(1-\alpha)$-th percentiles of $\mathcal{Z}_{T}$ and $\mathcal{S}_{T}$, respectively. The corresponding $p$-values are obtained by $p_{\textrm{LRT}}=\frac{1}{B}\sum_{b=1}^{B}\textrm{I}\left(Z_{T}^{{}^{\prime}{(b)}}\geq Z^{{}^{\prime}(obs)}_{T}\right),\quad p_{\textrm{MIC}}=\frac{1}{B}\sum_{b=1}^{B}\textrm{I}\left(S_{T}^{{}^{\prime}(b)}\geq S^{{}^{\prime}(obs)}_{T}\right).$ (21) We first discuss the results for the daily maximum RoR series. In the case of LRT, we obtain a $p$-value of 0.027. Hence, we reject our null hypothesis $H_{0}$, i.e., no changepoint. We obtain the estimated changepoint to be $\widehat{\tau}^{\textrm{max}}_{\textrm{LRT}}=81$, which corresponds to the date March 26, 2020. Based on MIC, we obtain a $p$-value of 0.021. Hence, we also reject our null hypothesis $H_{0}$ based on MIC. We obtain the estimated changepoint to be $\widehat{\tau}^{\textrm{max}}_{\textrm{MIC}}=81$, which coincides with the result based on LRT. Top panels of Figure 6 present the profiles $\textrm{LR}(\tau)$ in (14) and $\textrm{MIC}(\tau)$ in (17) for the daily maximum RoR series. Apart from $\widehat{\tau}^{\textrm{max}}_{\textrm{LRT}}=\widehat{\tau}^{\textrm{max}}_{\textrm{MIC}}=81$, certain other peaks are also observable in both profiles $\textrm{LR}(\tau)$ and $\textrm{MIC}(\tau)$; the most prominent peaks, where $\textrm{LR}(\tau)$ is higher than 6, occur between $\tau=81$ and $\tau=94$. Corresponding to $\tau=81$, the MLE of $\Lambda_{1}$ and $\Lambda_{T}$ are $\widehat{\Lambda}_{1}=1.012$ and $\widehat{\Lambda}_{T}=1.540$, respectively. Ignoring the first and the last 50 days (due to the instability in estimation), for $\tau=81$, $\widehat{\Lambda}_{T}-\widehat{\Lambda}_{1}$ attains its highest value, which shows the correctness of the inference based on LRT and MIC. We next discuss the results for the daily minimum RoR series. In the case of LRT, we obtain a $p$-value of less than 0.001. Hence, we reject our null hypothesis $H_{0}$, i.e., no changepoint. We obtain the estimated changepoint to be $\widehat{\tau}^{\textrm{min}}_{\textrm{LRT}}=128$, which corresponds to the date June 08, 2020. Based on MIC, we again obtain a $p$-value of less than 0.001. Hence, we also reject our null hypothesis $H_{0}$ based on MIC. 
We obtain the estimated changepoint to be $\widehat{\tau}^{\textrm{min}}_{\textrm{MIC}}=128$, which coincides with the result based on LRT. Bottom panels of Figure 6 present the profiles $\textrm{LR}(\tau)$ in (14) and $\textrm{MIC}(\tau)$ in (17) for the daily minimum RoR series. Apart from $\widehat{\tau}^{\textrm{min}}_{\textrm{LRT}}=\widehat{\tau}^{\textrm{min}}_{\textrm{MIC}}=128$, certain other peaks are also observable in both profiles $\textrm{LR}(\tau)$ and $\textrm{MIC}(\tau)$; the most prominent peaks, where $\textrm{LR}(\tau)$ is higher than 18, occur between $\tau=128$ and $\tau=135$. Corresponding to $\tau=128$, the MLEs of $\Lambda_{1}$ and $\Lambda_{T}$ are $\widehat{\Lambda}_{1}=0.682$ and $\widehat{\Lambda}_{T}=1.003$, respectively. Figure 6: Temporal profiles of $\textrm{LR}(\tau)$ in (14) and $\textrm{MIC}(\tau)$ in (17), for the daily maximum (top) and minimum (bottom) rates of return series. According to [48], the Government of India imposed a countrywide lockdown on March 25, 2020, which is very close to our estimated changepoint of March 26, 2020, for the daily maximum RoR series. This lockdown phase continued for 21 days, and all the high peaks in $\textrm{LR}(\tau)$ indicate this period. Before this date, we see less dependence (small $\Lambda_{1}$) between the daily maximum RoR of IndiGo and SpiceJet airlines due to the pre-lockdown situation. After the announcement of the lockdown, the aviation sector faced an unprecedented interruption, and both RoR series behaved in a more similar fashion; this led to a higher $\Lambda_{T}$ than $\Lambda_{1}$. Despite the two companies focusing on different strategies to mitigate the challenges posed by COVID-19, as described in Section 1, the effect of the initial phase of the lockdown appears to be the strongest one. On the other hand, the first phase of unlocking started on June 1, 2020, and the estimated changepoint for the daily minimum RoR series, June 8, 2020, is also close to that date. From a financial perspective, our daily maximum RoR analysis demonstrates that, before the onset of the pandemic, investors used to distribute their investments across the two airlines with higher variability. After the onset of the pandemic, investors sold their stocks in both airlines, given the high vulnerability of the aviation industry to a pandemic-type situation; such a situation can occur in the future as well, and the same investment philosophy for both airlines would then be expected to persist. On the marginal scale, the RoR for IndiGo remained more stable than that for SpiceJet, but in terms of the dependence structure, the two series behave more similarly than before after the onset of the pandemic, which affects all members of a vulnerable industry simultaneously. ## 7 Discussions and Conclusions The literature on changepoint estimation in the context of extreme value analysis is scarce. Some recent publications [49, 26, 27, 55] focused on both frequentist and Bayesian perspectives of changepoint estimation for univariate block maxima and threshold exceedances; however, to the best of our knowledge, no literature so far has focused on estimating changepoints in extremal dependence structures or extreme value copulas, except for a book chapter by [25], where the authors focused on a specific copula, namely the bivariate Gumbel copula. Understanding structural changes in the extremal dependence structure is often crucial for analyzing stock price data because buy and sell positions generally occur when the prices reach high and low values, respectively.
Share prices of two companies doing similar businesses are naturally dependent, specifically when both companies do business in a volatile sector like aviation, where a pandemic like COVID-19 can impact businesses severely. Given that the bivariate Hüsler-Reiss distribution is the only possible limit of a sequence of bivariate Gaussian random variables, we explore different changepoint estimation strategies for this distribution. While the likelihood ratio test is the most popular approach in the literature, using simulation studies, we showcase that hypothesis testing based on the modified information criterion proposed by [11] is generally more powerful. The most crucial changepoint in the daily maximum rate of return of the IndiGo and SpiceJet airlines, identified based on the methodology discussed here, almost coincides with the declaration of the first phase of the lockdown in India during COVID-19. This observation showcases the effectiveness of the two hypothesis testing procedures discussed here. During the announcement of the first phase of the lockdown, the number of COVID-19 cases was lower compared to the peaks of the three waves until mid-2022. Thus, the number of COVID-19 cases per day is not a meaningful predictor of the changepoint; rather, information on precautionary measures like lockdowns is a significant factor when analyzing data related to the aviation industry. In this paper, we focused on identifying the most significant changepoints for the upper and lower joint tails, which convey ideas about the most important factors behind changes during a period of turmoil like COVID-19. There is also a vast literature on estimating multiple changepoints [3, 86, 53]. Among the algorithmic approaches, the two most common are binary segmentation [3] and Pruned Exact Linear Time [PELT, 53]. The binary segmentation method recursively partitions the data into segments, detecting a single changepoint in each iteration, while PELT aims to find the minimum-cost partitioning of the data into segments using dynamic programming. Binary segmentation is very popular due to its easy implementation and may work well in other applications, such as the identification of changes in the mean structure, but it requires more observations for identifying any change in the dependence-related parameter of the bivariate Hüsler-Reiss distribution. In our case, before the first changepoint for the sequences of the daily maximum rate of return, there are very few observations, and it is thus difficult to estimate a changepoint within this period with low uncertainty. Exploring PELT or other more efficient approaches, such as penalized regression methods or nonparametric methods, would be a future research direction. We explored the changepoint estimation focusing on the measures $\chi_{U}$ and $\chi_{L}$ in (2.2). Apart from them, [78] proposed the related measures $\bar{\chi}_{U}$ and $\bar{\chi}_{L}$ given by $\displaystyle\bar{\chi}_{U}(u)=2\log\textrm{P}\\{F_{X}(X)>u\\}/\log\textrm{P}\\{F_{X}(X)>u,F_{Y}(Y)>u\\}-1,\qquad\bar{\chi}_{L}(u)=2\log\textrm{P}\\{F_{X}(X)<u\\}/\log\textrm{P}\\{F_{X}(X)<u,F_{Y}(Y)<u\\}-1,~{}~{}~{}u\in(0,1),$ where the notations correspond to (2.2), and the limiting $\bar{\chi}$-measures are defined as $\bar{\chi}_{U}=\lim_{u\uparrow 1}\bar{\chi}_{U}(u)$ and $\bar{\chi}_{L}=\lim_{u\downarrow 0}\bar{\chi}_{L}(u)$.
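As a side note, both pairs of measures are straightforward to estimate empirically; the following rank-based plug-in for $\bar{\chi}_{U}(u)$ is a simple illustrative sketch of the displayed definition (it is not a routine used in the paper, and the rank-based pseudo-uniform margins are an assumption made for the sketch).

```python
import numpy as np

def chi_bar_upper(x, y, u):
    """Rank-based empirical plug-in for chi-bar_U(u) as displayed above.
    Assumes u is not so extreme that the joint exceedance probability is zero."""
    n = len(x)
    fx = (np.argsort(np.argsort(x)) + 1.0) / (n + 1.0)   # pseudo-uniform margins
    fy = (np.argsort(np.argsort(y)) + 1.0) / (n + 1.0)
    p_marg = np.mean(fx > u)
    p_joint = np.mean((fx > u) & (fy > u))
    return 2.0 * np.log(p_marg) / np.log(p_joint) - 1.0
```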
For the upper tail, there are two different extremal dependence regimes based on the two measures $\chi_{U}$ and $\bar{\chi}_{U}$; here $\chi_{L}$ and $\bar{\chi}_{L}$ similarly define two regimes in the lower tail. In the first regime, two random variables are called extremally dependent and here $\chi_{U}\in(0,1)$ and $\bar{\chi}_{U}=1$. In the second regime, two random variables are called extremally independent and here $\chi_{U}=0$ and $\bar{\chi}_{U}\in[-1,1)$. For the bivariate Hüsler-Reiss distribution, except for the trivial case of independence, the first regime holds, and as a result, $\bar{\chi}_{U}=\bar{\chi}_{L}=1$ both before and after the changepoint. Hence, a method of changepoint detection based on $\bar{\chi}_{U}$ or $\bar{\chi}_{L}$ cannot be used under the modeling assumption in this paper, i.e., the class of bivariate Hüsler-Reiss distributions. While we analyzed data for two specific airlines, IndiGo and SpiceJet, that acquired the highest market shares in the Indian aviation industry during COVID-19, our methodology can be adapted easily for analyzing data from other airlines, other countries, and other business sectors. Apart from a pandemic, one can also use our methodology in the case of a recession. From a methodological perspective, one can adapt our approach to a general multivariate extreme value analysis beyond the bivariate case and also for analyzing spatial extremes, using pairwise likelihood [44], where each component is a bivariate Hüsler-Reiss density. Further, graphical approaches for multivariate [30] and spatial extremes [16] use the bivariate Hüsler-Reiss densities for each edge of a graph. Extending our methodology for detecting changepoints in a sequence of multivariate or spatial extremes would be a future endeavor. Theoretical consistency results for the changepoint estimators under the bivariate Hüsler-Reiss distribution have not been explored in this paper, beyond the numerical study in Section 5.3. Exploring such properties theoretically would be an interesting future endeavor. ## Data availability statement The dataset analyzed in this article is available at https://www.investing.com/. ## Disclosure statement No potential conflict of interest was reported by the authors. ## References * [1] A. Agrawal, _Sustainability of airlines in India with COVID-19: Challenges ahead and possible way-outs_ , Journal of Revenue and Pricing Management 20 (2021), pp. 1–16. * [2] D. Banerji, P. Mukherjee, and N. Siroya, _Case study-soaring into the high skies_ , SSRN 2873325 (2016). * [3] D. Barry and J.A. Hartigan, _A Bayesian analysis for change point problems_ , Journal of the American Statistical Association 88 (1993), pp. 309–319. * [4] M. Bosc, F. Heitz, J.P. Armspach, I. Namer, D. Gounot, and L. Rumbach, _Automatic change detection in multimodal serial MRI: application to multiple sclerosis lesion evolution_ , NeuroImage 20 (2003), pp. 643–656. * [5] B.M. Brown and S.I. Resnick, _Extreme values of independent stochastic processes_ , Journal of Applied Probability 14 (1977), pp. 732–739. * [6] BT, _Coronavirus impact: How IndiGo is turning COVID crisis into an opportunity_ , Business Today (2020). Available at https://www.businesstoday.in/amp/bt-buzz/news/story/coronavirus-impact-how-indigo-is-turning-covid-crisis-into-an-opportunity-260283-2020-06-05. * [7] X. Cai, K.K. Said, and W. Ning, _Changepoint analysis with bathtub shape for the exponential distribution_ , Journal of Applied Statistics 43 (2016), pp. 2740–2750.
Available at https://doi.org/10.1080/02664763.2016.1143455. * [8] J.W. Campbell and C.P. Tsokos, _The asymptotic distribution of maxima in bivariate samples_ , Journal of the American Statistical Association 68 (1973), pp. 734–739. Available at http://www.jstor.org/stable/2284810. * [9] Á. Cartea, S. Jaimungal, and J. Ricci, _Buy low, sell high: A high frequency trading perspective_ , SIAM Journal on Financial Mathematics 5 (2014), pp. 415–444. * [10] V. Chavez-Demoulin, P. Embrechts, and M. Hofert, _An extreme value approach for modeling operational risk losses depending on covariates_ , The Journal of Risk and Insurance 83 (2016), pp. 735–776. Available at http://www.jstor.org/stable/43998282. * [11] J. Chen, A. Gupta, and J. Pan, _Information criterion and change point problem for regular models_ , Sankhyā: The Indian Journal of Statistics (2003-2007) 68 (2006). * [12] J. Chen and A.K. Gupta, _Testing and locating variance changepoints with application to stock prices_ , Journal of the American Statistical Association 92 (1997), pp. 739–747. Available at http://www.jstor.org/stable/2965722. * [13] J. Chen and A. Gupta, _Parametric Statistical Change Point Analysis: With Applications to Genetics, Medicine, and Finance_ , Springer, 2012. * [14] H. Chernoff and S. Zacks, _Estimating the current mean of a normal distribution which is subjected to changes in time_ , The Annals of Mathematical Statistics 35 (1964), pp. 999 – 1018. Available at https://doi.org/10.1214/aoms/1177700517. * [15] U. Cherubini, E. Luciano, and W. Vecchiato, _Copula methods in finance_ , John Wiley & Sons, 2004. * [16] D. Cisneros, A. Hazra, and R. Huser, _Spatial wildfire risk modeling using mixtures of tree-based multivariate Pareto distributions_ , arXiv preprint arXiv:2308.03870 (2023). * [17] S. Coles, _An introduction to statistical modeling of extreme vaues_ , Springer Series in Statistics, Springer, New York (2001), pp. XIV, 209. Available at https://doi.org/10.1007/978-1-4471-3675-0. * [18] S.G. Coles and J.A. Tawn, _Statistical methods for multivariate extremes: An application to structural design_ , Journal of the Royal Statistical Society. Series C (Applied Statistics) 43 (1994), pp. 1–48. Available at http://www.jstor.org/stable/2986112. * [19] D. Cooley, P. Naveau, and P. Poncet, _Variograms for spatial max-stable random fields_ , in _Dependence in probability and statistics_ , Springer, New York, NY, 2006, pp. 373–390. * [20] M. Csörgö and L. Horváth, _Limit theorems in change-point analysis_ , Wiley series in probability and statistics, John Wiley and Sons, Chichester, 1997. * [21] H.A. David and H.N. Nagaraja, _Order statistics_ , John Wiley and Sons, New Jersey, 2004. * [22] A.C. Davison and R. Huser, _Statistics of extremes_ , Annual Review of Statistics and its Application 2 (2015), pp. 203–235. * [23] A.C. Davison, R. Huser, and E. Thibaud, _Geostatistics of dependent and asymptotically independent extremes_ , Mathematical Geosciences 45 (2013), pp. 511–529. * [24] M. de Carvalho, M. Leonelli, and A. Rossi, _Tracking change-points in multivariate extremes_ , arXiv preprint arXiv:2011.05067 (2020). * [25] A.D.C. Dias and P. Embrechts, _Change-point analysis for dependence structures in finance and insurance_ , in _Risk Measures for the 21st Century_ , G.P. Szegö, ed., chap. 16, Wiley, Chichester, 2004, pp. 321–335. * [26] G. Dierckx and J.L. Teugels, _Changepoint analysis of extreme values_ , Environmetrics 21 (2010), pp. 661–686. * [27] F.F. do Nascimento and W.V.M. 
e Silva, _A Bayesian model for multiple change point to extremes, with application to environmental and financial data_ , Journal of Applied Statistics 44 (2017), pp. 2410–2426. * [28] P. Eiauer and P. Hackl, _The use of mosums for quality control_ , Technometrics 20 (1978), pp. 431–436. * [29] P. Embrechts, C. Klüppelberg, and T. Mikosch, _Modelling extremal events: for insurance and finance_ , Springer Science and Business Media, Berlin, 1997. * [30] S. Engelke and A.S. Hitz, _Graphical models for extremes_ , Journal of the Royal Statistical Society Series B: Statistical Methodology 82 (2020), pp. 871–932. * [31] S. Engelke and J. Ivanovs, _Sparse structures for multivariate extremes_ , Annual Review of Statistics and Its Application 8 (2021), pp. 241–270. * [32] R.A. Fisher and L.H.C. Tippett, _Limiting forms of the frequency distribution of the largest or smallest member of a sample_ , Mathematical Proceedings of the Cambridge Philosophical Society 24 (1928), pp. 180–190. * [33] L.A. Gardner, _On detecting changes in the mean of normal variates_ , The Annals of Mathematical Statistics 40 (1969), pp. 116 – 126. Available at https://doi.org/10.1214/aoms/1177697808. * [34] Y. Gong and R. Huser, _Asymmetric tail dependence modeling, with application to cryptocurrency market data_ , The Annals of Applied Statistics 16 (2022), pp. 1822–1847. * [35] N.J. Gormsen and R.S. Koijen, _Coronavirus: Impact on stock prices and growth expectations_ , The Review of Asset Pricing Studies 10 (2020), pp. 574–597. * [36] G. Gurevich and A. Vexler, _Change point problems in the model of logistic regression_ , Journal of Statistical Planning and Inference 131 (2005), pp. 313–331. * [37] G. Gurevich and A. Vexler, _Retrospective change point detection: from parametric to distribution free policies_ , Communications in Statistics—Simulation and Computation 39 (2010), pp. 899–920. * [38] A. Hasan, W. Ning, and A. Gupta, _An information-based approach to the change-point problem of the noncentral skew-t distribution with applications to stock market data_ , Sequential Analysis 33 (2014), pp. 458–474. * [39] E. Hashorva, _On the residual dependence index of elliptical distributions_ , Statistics & Probability Letters 80 (2010), pp. 1070–1078. * [40] D. Hawkins, _Detecting shifts in functions of multivariate location and covariance parameters_ , Journal of Statistical Planning and Inference 33 (1992), pp. 233–244. Available at https://www.sciencedirect.com/science/article/pii/0378375892900709. * [41] J.R.M. Hosking, J.R. Wallis, and E.F. Wood, _Estimation of the generalized extreme-value distribution by the method of probability-weighted moments_ , Technometrics 27 (1985), pp. 251–261. * [42] D. Hsu, _Tests for variance shift at an unknown time point_ , Journal of The Royal Statistical Society Series C-Applied Statistics 26 (1977), pp. 279–284. * [43] F. Huang, R. Maller, and X. Ning, _Modelling life tables with advanced ages: An extreme value theory approach_ , Insurance: Mathematics and Economics 93 (2020), pp. 95–115. Available at https://www.sciencedirect.com/science/article/pii/S0167668720300482. * [44] R. Huser and A.C. Davison, _Composite likelihood estimation for the Brown–Resnick process_ , Biometrika 100 (2013), pp. 511–518. * [45] J. Hüsler and R.D. Reiss, _Maxima of normal random vectors: Between independence and complete dependence_ , Statistics and Probability Letters 7 (1989), pp. 283–286. 
* [46] ICAO, _Effects of Novel Coronavirus (COVID-19) on Civil Aviation: Economic Impact Analysis_ , International Civil Aviation Organization Report (2020). Available at https://www.icao.int/sustainability/Documents/COVID-19/ICAO_Coronavirus_Econ_Impact.pdf". * [47] C. Inclán, _Detection of multiple changes of variance using posterior odds_ , Journal of Business and Economic Statistics 11 (1993), pp. 289–300. * [48] M. Jaiswal, _Coronavirus in India: 21-day lockdown begins; key highlights of PM Modi’s speech_ , Business Today (2020). Available at https://www.businesstoday.in/latest/economy-politics/story/coronavirus-in-india-21-day-lockdown-begins-key-highlights-of-pm-modi-speech-253038-2020-03-25. * [49] D. Jarušková and M. Rencová, _Analysis of annual maximal and minimal temperatures for some European cities by change point methods_ , Environmetrics 19 (2008), pp. 221–233. * [50] P. Jaworski, F. Durante, W.K. Hardle, and T. Rychlik, _Copula theory and its applications_ , Vol. 198, Springer, 2010. * [51] Z. Kabluchko, M. Schlather, and L. De Haan, _Stationary max-stable fields associated to negative definite functions_ , Ann. Probab. 37 (2009), pp. 2042––2065. * [52] R. Killick and I. Eckley, _changepoint: An R package for changepoint analysis_ , Journal of Statistical Software 58 (2014), pp. 1–19. * [53] R. Killick, P. Fearnhead, and I.A. Eckley, _Optimal detection of changepoints with a linear computational cost_ , Journal of the American Statistical Association 107 (2012), pp. 1590–1598. * [54] T.L. Lai, _Sequential changepoint detection in quality control and dynamical systems_ , Journal of the Royal Statistical Society: Series B (Methodological) 57 (1995), pp. 613–644. * [55] C. Lattanzi and M. Leonelli, _A change-point approach for the identification of financial extreme regimes_ , Brazilian Journal of Probability and Statistics 35 (2021), pp. 811–837. * [56] M. Lavielle, _Using penalized contrasts for the change-point problem_ , Signal Processing 85 (2005), pp. 1501–1510. * [57] M.R. Leadbetter, G. Lindgren, and H. Rootzén, _Extremes and related properties of random sequences and processes_ , Springer Science and Business Media, New York, 1983. * [58] M. Leadbetter and H. Rootzen, _Extremal theory for stochastic processes_ , The Annals of Probability (1988), pp. 431–478. * [59] Z. Lerman and E. Schechtman, _Detecting a change in the correlation coefficient in a sequence of bivariate normal variables_ , Communications in Statistics - Simulation and Computation 18 (1989), pp. 589–599. Available at https://doi.org/10.1080/03610918908812778. * [60] Z. Liu and L. Qian, _Changepoint estimation in a segmented linear regression via empirical likelihood_ , Communications in Statistics - Simulation and Computation 39 (2009), pp. 85–100. * [61] M. Molina-Garcia, A. Fernandez-Duran, and J.I. Alonso, _Application of extreme value distribution to model propagation fading in indoor mobile radio environments_ , in _2008 IEEE Radio and Wireless Symposium_. IEEE, 2008, pp. 97–100. * [62] V.M. Muggeo and G. Adelfio, _Efficient change point detection for genomic sequences of continuous measurements_ , Bioinformatics 27 (2011), pp. 161–166. * [63] R.B. Nelsen, _An introduction to copulas_ , Springer, 2006. * [64] G. Ngunkeng and W. Ning, _Information approach for the change-point detection in the skew normal distribution and its applications_ , Sequential Analysis 33 (2014), pp. 475–490. Available at https://doi.org/10.1080/07474946.2014.961845. * [65] A.B. 
Owen, _Empirical likelihood ratio confidence intervals for a single functional_ , Biometrika 75 (1988), pp. 237–249. * [66] E.S. Page, _A test for a change in a parameter occurring at an unknown point_ , Biometrika 42 (1955), pp. 523–527. Available at http://www.jstor.org/stable/2333401. * [67] E.S. Page, _Continuous inspection schemes_ , Biometrika 41 (1954), pp. 100–115. * [68] A.J. Patton, _A review of copula models for economic time series_ , Journal of Multivariate Analysis 110 (2012), pp. 4–18. * [69] A. Ramanayake and A.K. Gupta, _Tests for an epidemic change in a sequence of exponentially distributed random variables_ , Biometrical Journal 45 (2003), pp. 946–958. Available at https://onlinelibrary.wiley.com/doi/abs/10.1002/bimj.200390062. * [70] J. Reeves, J. Chen, X.L. Wang, R. Lund, and Q.Q. Lu, _A review and comparison of changepoint detection techniques for climate data_ , Journal of Applied Meteorology and Climatology 46 (2007), pp. 900–915. * [71] M. Rocco, _Extreme value theory in finance: A survey_ , Journal of Economic Surveys 28 (2014), pp. 82–108. Available at https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1467-6419.2012.00744.x. * [72] K.K. Said, W. Ning, and Y. Tian, _Likelihood procedure for testing changes in skew normal model with applications to stock returns_ , Communications in Statistics - Simulation and Computation 46 (2017), pp. 6790–6802. Available at https://doi.org/10.1080/03610918.2016.1212067. * [73] K.K. Said, W. Ning, and Y. Tian, _Modified information criterion for testing changes in skew normal model_ , Brazilian Journal of Probability and Statistics 33 (2019), pp. 280 – 300. Available at https://doi.org/10.1214/17-BJPS388. * [74] M. Sibuya, _Bivariate extreme statistics_ , Annals of the Institute of Statistical Mathematics 11 (1960), pp. 195–210. Available at https://doi.org/10.1007/BF01682329. * [75] P.K. Sidhu and R. Shukla, _Impact of the COVID-19 pandemic on the Indian domestic aviation industry_ , in _2021 Reconciling Data Analytics, Automation, Privacy, and Security: A Big Data Challenge (RDAAPS)_. IEEE, 2021, pp. 1–8. * [76] M. Srivastava and K.J. Worsley, _Likelihood ratio tests for a change in the multivariate normal mean_ , Journal of the American Statistical Association 81 (1986), pp. 199–204. * [77] A.G. Stephenson, _evd: Extreme value distributions_ , R news 2 (2002), pp. 31–32. * [78] J.H. Stuart Coles and J. Tawn, _Dependence measures for extreme value analyses_ , Extremes 2 (1999), pp. 339–365. Available at https://doi.org/10.1023/A:1009963131610. * [79] J.A. Tawn, _Bivariate extreme value theory: Models and estimation_ , Biometrika 75 (1988), pp. 397–415. Available at http://www.jstor.org/stable/2336591. * [80] S. Thies and P. Molnár, _Bayesian change point analysis of Bitcoin returns_ , Finance Research Letters 27 (2018), pp. 223–227. * [81] J. Tiago de Oliveira, _Statistical decision for bivariate extremes_ , in _Extreme Value Theory: Proceedings of a Conference held in Oberwolfach, Dec. 6–12, 1987_. Springer, 1989, pp. 246–261. * [82] W. Tian, L. Pang, C. Tian, and W. Ning, _Changepoint analysis for Kumaraswamy distribution_ , Mathematics 11 (2023). Available at https://www.mdpi.com/2227-7390/11/3/553. * [83] W. Tian and Y. Yang, _Changepoint analysis for weighted exponential distribution_ , Communications in Statistics - Simulation and Computation 0 (2022), pp. 1–13. Available at https://doi.org/10.1080/03610918.2021.2020288. * [84] R. Tibshirani and T. 
Hastie, _Local likelihood estimation_ , Journal of the American Statistical Association 82 (1987), pp. 559–567. * [85] TOI, _Lockdown impact: IndiGo reports INR 2,844 crore loss in April_ , Times of India (2020). Available at https://timesofindia.indiatimes.com/business/india-business/lockdown-impact-indigo-reports-rs-2844-crore-loss-in-april-june-quarter/articleshow/77240430.cms. * [86] A. Vexler, _Guaranteed testing for epidemic changes of a linear regression model_ , Journal of Statistical Planning and Inference 136 (2006), pp. 3101–3120. * [87] A. Vexler and C. Wu, _An optimal retrospective change point detection policy_ , Scandinavian Journal of Statistics 36 (2009), pp. 542–558. * [88] K.J. Worsley, _On the likelihood ratio test for a shift in location of normal populations_ , Journal of the American Statistical Association 74 (1979), pp. 365–367. * [89] K. Worsley, _The power of likelihood ratio and cumulative sum tests for a change in a binomial probability_ , Biometrika 70 (1983), pp. 455–464. * [90] H. Zhao, H. Chen, and W. Ning, _Changepoint analysis by modified empirical likelihood method in two-phase linear regression models_ , Open Journal of Applied Sciences 03 (2013), pp. 1–6. * [91] C. Zou, Y. Liu, P. Qin, and Z. Wang, _Empirical likelihood ratio test for the change-point problem_ , Statistics and Probability Letters 77 (2007), pp. 374–382. * [92] F.W. Zwiers and V.V. Kharin, _Changes in the extremes of the climate simulated by CCC GCM2 under CO2 doubling_ , Journal of Climate 11 (1998), pp. 2200 – 2222. Available at https://journals.ametsoc.org/view/journals/clim/11/9/1520-0442_1998_011_2200_citeot_2.0.co_2.xml.
# Solving Zebra Puzzles Using Constraint-Guided Multi-Agent Systems Shmuel Berman <EMAIL_ADDRESS> &Kathleen McKeown <EMAIL_ADDRESS> &Baishakhi Ray <EMAIL_ADDRESS> ###### Abstract Prior research has enhanced the ability of Large Language Models (LLMs) to solve logic puzzles using techniques such as chain-of-thought prompting or introducing a symbolic representation. These frameworks are still usually insufficient to solve complicated logical problems, such as Zebra puzzles, due to the inherent complexity of translating natural language clues into logical statements. We introduce a multi-agent system, ZPS, that integrates LLMs with an off-the-shelf theorem prover. This system tackles the complex puzzle-solving task by breaking down the problem into smaller, manageable parts, generating SMT (Satisfiability Modulo Theories) code to solve them with a theorem prover, and using feedback between the agents to repeatedly improve their answers. We also introduce an automated grid puzzle grader to assess the correctness of our puzzle solutions and show that the automated grader is reliable by evaluating it in a user study. Our approach shows improvement in all three LLMs we tested, with GPT-4 showing a 166% improvement in the number of fully correct solutions. ## 1 Introduction Automated problem solving has long been a major goal in the field of Artificial Intelligence. This task ranges from trivial problems, like simple arithmetic or string searches, to more complex ones, such as solving a chess position. However, unstructured problems presented in natural language introduce additional complications in modeling the problem accurately. Solving such problems has been extensively studied, from simple mathematical problems in the subfield of word problem solving to applications like automated code generation by Large Language Models (LLMs) (Mukherjee and Garain, 2009; Chen et al., 2021). These problems are particularly difficult because translating natural language into a precise logical or computational form requires sophisticated understanding and interpretation, making it a significant challenge in AI research. Figure 1: An Example Zebra Puzzle. In this paper, we focus on a particular type of unstructured natural language problem known as a logic grid problem, or colloquially, an Einstein or Zebra puzzle. A Zebra puzzle is a set of natural language assertions involving multiple entities that are linked by various attributes (Fig. 1 shows an example). To solve a puzzle, the user must correctly assign attributes to all of the entities. These attributes range from descriptions to relative ordering. Participants are provided with a series of clues in natural language, which they must use to deduce the correct relationships, using logical reasoning and adhering to implicit domain constraints. These puzzles require the solver to map from natural language to a structured space, understand implicit assumptions, and in some cases use domain-specific knowledge. For instance, as illustrated in Fig. 1, the solver must assign the correct attributes for three houses based on a series of interconnected clues. Zebra puzzles are particularly challenging due to: * • Complex Inferences: Each clue provides partial information that must be combined with others to deduce the solution.
* • High Interdependency: An error on one clue significantly impacts others, making the solution space highly interconnected. * • Natural Language (NL) Clues: Translating ambiguous NL clues into logical statements or formal representations is challenging. * • Large Solution Space: The solver needs to explore numerous possibilities and combinations to find the solution. * • Consistency Checking: Potential solutions must be checked against all clues, which is computationally intensive and requires sophisticated, domain-specific reasoning. The factors mentioned make Zebra puzzles difficult for both humans and AI systems due to the need for precise interpretation, inference, and logical reasoning. In Fig. 1, for example, a solver cannot simply map the spatial relationship between the Football house and the Red house. It must also encode additional constraints: the Football and Red houses occupy House 1 or House 3, respectively, and they must not be the same house. Encoding these constraints is non-trivial, as it requires detailed semantic interpretation of the clue’s subtext. Failure to accurately encode these subtleties usually renders the puzzle unsolvable. This complexity has empirically been shown to challenge puzzle-solving models significantly. Prior work often employed human-in-the-loop methods. Milicevic et al. (2012) translated puzzles into formal logic but required users to rephrase or rewrite ambiguous clues. Claes et al. (2019) developed ZebraTutor, which creates a puzzle-specific lexicon to formalize the problem but needed users to edit the lexicon for accuracy. Prior research using ChatGPT to solve Zebra puzzles reported a correctness rate of only 8.33% (Groza, 2023), with performance deteriorating significantly as the problem’s complexity increases. Due to their complexity, solving Zebra puzzles effectively requires the use of a constraint solver; a solver can efficiently determine the feasible and infeasible solution space within the given constraints. However, converting natural language clues into a formal representation suitable for a solver is a non-trivial task. This process often involves intricate interpretation of clues, which must be precise to ensure that the solver can operate correctly. Additionally, maintaining consistency across all clues requires iterative back-and-forth reasoning. To address the challenges inherent in solving Zebra puzzles, we introduce a multi-agent-based system, ZPS. This system decomposes the problem-solving process into discrete, manageable components, enhancing the handling of complex interdependencies and constraints. Each agent is responsible for a specific aspect of the problem, working collaboratively and using feedback loops to refine its answers and ensure consistency. In this framework, we conceptualize integrating Large Language Models (LLMs) with formal reasoning. First, an LLM agent decomposes a given puzzle into sub-problems. Then, another LLM agent interprets the NL clues of each sub-problem and generates SMT-LIB translations of the constraints and parameters. An off-the-shelf SMT solver then processes these translations to produce a model that corresponds to the solution. (Satisfiability Modulo Theories (SMT) is the decision problem of determining whether a given logical formula is satisfiable with respect to background theories such as arithmetic, arrays, and bit-vectors; SMT extends the concept of Boolean satisfiability (SAT) by incorporating these more complex theories.)
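As a minimal illustration of this solving step, the sketch below encodes two simplified house-puzzle constraints and asks the solver for a model. It uses the Z3 SMT solver through its Python bindings rather than raw SMT-LIB text (the actual system emits SMT-LIB), and the specific clue and the attribute values other than "red" and "football" are assumed placeholders in the spirit of Fig. 1, not the paper's generated encoding.

```python
from z3 import Int, Solver, Distinct, And, sat

# Positions 1..3 for each attribute value; three houses as in Fig. 1.
red, green, blue = Int("red"), Int("green"), Int("blue")
football, tennis, chess = Int("football"), Int("tennis"), Int("chess")

s = Solver()
for v in (red, green, blue, football, tennis, chess):
    s.add(And(v >= 1, v <= 3))             # every attribute value sits in some house
s.add(Distinct(red, green, blue))           # one colour per house
s.add(Distinct(football, tennis, chess))    # one sport per house

# Assumed example clue: "The football player lives directly to the left of the red house."
s.add(football == red - 1)

if s.check() == sat:
    m = s.model()
    print({str(v): m[v].as_long() for v in (red, green, blue, football, tennis, chess)})
else:
    print("unsat: the encoded clues are mutually inconsistent")
```

When the clues admit a unique assignment, the returned model corresponds directly to the filled grid; an unsatisfiable result is one kind of solver output that the feedback step described next can act on.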
The output, including the model and any syntactic errors, is fed back to the LLM which generates a new translation addressing syntactic and semantic errors, emulating back and forth reasoning. This continuous feedback refines the model’s predictions and ensures the translations are both syntactically correct and solvable. To this end, our approach demonstrates improvements across all three LLMs we tested, with GPT-4 showing up to a 166% increase in the number of fully correct solutions. The main contributions of our research are as follows: 1. 1. We demonstrate that combining a formal constraint solver with an LLM interpreter using an agent-based approach for solving Zebra Puzzles significantly improves upon existing baseline methodologies. 2. 2. We implement a plan generation and decomposition strategy, enabling step-by- step reasoning that enhances the solving process. 3. 3. We introduce an iterative conversation-based feedback mechanism that allows for continual refinement of solutions, adapting dynamically to the solving context. 4. 4. We incorporate an autograder within our system to evaluate the accuracy of solutions, ensuring reliability and precision in automated assessments. We also present the results of a user study showing that this autograder correlates very well with human graders. Our code is available at https://anonymous.4open.science/r/anon_emnlp-1AD0/README.md. ## 2 Methodology Figure 2: Logic Puzzle Solver Workflow Figure 3: Example Feedback Puzzle Solving Process. The puzzle is decomposed and then the LLM-agent attempts to translate it into a logical SMT formula. The theorem prover attempts to solve it, and the feedback is fed back into the LLM-agent so that it can modify its formal representation. We integrate LLMs with formal systems within a multi-agent framework to solve Zebra puzzles. The process involves a series of steps where the problem is decomposed, translated into a formal language (SMT-LIB), solved using a theorem prover, and iteratively refined based on feedback. This approach aims to leverage the strengths of both LLMs and formal solvers, ensuring robust problem-solving capabilities. ### 2.1 Multi-Agent Workflow The workflow, as illustrated in Figure 2, integrates multiple agents to transform a natural language puzzle into a logically solvable structure and then iteratively refines the solution. The process is initiated by the Decomposition Agent and continuously refined through a feedback loop that encompasses both the translation to SMT-LIB and the solving phases. Figure 3 shows a working example of puzzle solving by our method. ##### Decomposition The input puzzle, expressed in natural language, is first decomposed by the Decomposition LLM-Agent. This agent identifies and isolates key entities, attributes, and relationships, structuring them into smaller, systematically translatable components. This is a first step that ensures the puzzle is presented in a format amenable to formal processing. ##### Feedback Loop The core of our methodology lies in the feedback loop where continuous refinement of the solution occurs. This loop integrates the translation of decomposed components into SMT-LIB format by the Solver LLM-Agent and the subsequent problem solving using a theorem prover, whose output serves as feedback. Each iteration through the loop consists of the following steps: * • Translation to SMT-LIB: After decomposition, the puzzle components are systematically translated into SMT-LIB (Satisfiability Modulo Theories Library) by the Solver LLM-Agent. 
This format is essential for interfacing with theorem provers and ensures that logical constraints and relationships are accurately represented.
* • Solving with Theorem Prover: The SMT-LIB formatted components are then processed by the theorem prover (Z3 in our implementation). The theorem prover attempts to find a satisfying assignment that adheres to all given constraints.
* • Evaluate and Refine: The solution generated by the theorem prover is evaluated by the Solver LLM-Agent to determine if it meets the puzzle’s requirements. If the solution is deemed insufficient – either due to explicit errors or because of how the attributes are assigned – modifications are made to the translation of the SMT-LIB formalization of the puzzle and the cycle repeats. Otherwise, the LLM-agent submits its final answer.

This iterative process ensures that the Solver LLM-Agent and the Theorem Prover continually refine the solution until the Solver LLM-Agent is satisfied with the final assignments.

### 2.2 Modeling the Agent Environment

The feedback loop is how the agents engage with each other. This loop is mathematically modeled using a combination of evaluation functions and error detection mechanisms, which together guide the system towards a solution that optimally satisfies the problem constraints. More formally, let $\mathcal{D},\mathcal{G},\mathcal{T},\mathcal{E}$, and $\mathcal{F}$ represent the decomposition, translation to SMT-LIB, theorem solving, evaluation, and feedback functions, respectively. The feedback loop can be described by the following recursive function:

$S_{k+1}=\mathcal{F}(\mathcal{E}(\mathcal{T}(\mathcal{G}(\mathcal{D}(P)),S_{k})),S_{k})$

where
* • $P$: the initial puzzle in natural language.
* • $S_{k}$: the solution state at the $k$-th iteration.
* • $\mathcal{D}(P)$: decomposes $P$ into a structured format amenable to translation.
* • $\mathcal{G}$: translates this structure into the SMT-LIB format.
* • $\mathcal{T}$: applies the theorem prover to find a solution that satisfies the logical constraints.
* • $\mathcal{E}$: evaluates this solution to determine its adequacy in solving the puzzle’s clues.
* • $\mathcal{F}$: adjusts the translation based on the evaluation, aiming to correct any errors or optimize the solution.

##### Convergence Criteria

The convergence of this iterative process is governed by the Solver LLM-agent’s evaluation function $\mathcal{E}$, which assesses both the correctness of the solution against the domain-specific requirements and the presence of any syntactic or semantic errors detected by $\mathcal{T}$. We assume that $\mathcal{E}$ is a black-box function defined by the instructions given to the LLM. The loop terminates when $\mathcal{E}$ returns a value indicating that the solution $S_{k}$ sufficiently meets all puzzle requirements and contains no detectable errors, or when a maximum retry limit has been reached.

##### Optimization and Refinement

Each iteration through the feedback loop serves to progressively refine the solution, optimizing the representation and alignment with the puzzle’s constraints. This optimization process is critical for moving the solution towards a local optimum, where no further improvements can be detected by $\mathcal{E}$ or $\mathcal{F}$.

## 3 Experimental Setup

To comprehensively evaluate ZPS’s performance, we examine its effectiveness across 114 Zebra Puzzles. Our assessment emphasizes ZPS’s capability to solve the puzzles using the different agents.
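To make the translate-solve-evaluate cycle concrete, the sketch below shows one way the loop of Section 2.2 could be wired to Z3 in Python. It is illustrative rather than our exact implementation: `llm_translate` and `llm_evaluate` are hypothetical stand-ins for the Solver LLM-Agent and are not part of any real API, while the Z3 calls (`Solver.from_string`, `check`, `model`) are real z3py functions. The four-action cap mirrors the response limit described below.

```python
import z3

def solve_smt(smt2_text: str):
    """Run Z3 on SMT-LIB 2 assertions and return
    ("sat", model_text), ("unsat", None), or ("error", message)."""
    try:
        solver = z3.Solver()
        solver.from_string(smt2_text)          # raises Z3Exception on parse errors
        if solver.check() == z3.sat:
            return "sat", str(solver.model())
        return "unsat", None
    except z3.Z3Exception as exc:
        return "error", str(exc)

def zps_feedback_loop(puzzle: str, llm_translate, llm_evaluate, max_actions: int = 4):
    """Iterate translate -> solve -> evaluate, feeding solver output back to the agent."""
    feedback = ""
    for _ in range(max_actions):
        smt2 = llm_translate(puzzle, feedback)  # NL clues (+ prior feedback) -> SMT-LIB
        status, result = solve_smt(smt2)
        if status == "sat" and llm_evaluate(puzzle, result):
            return result                        # the agent accepts this model as its final answer
        feedback = f"{status}: {result}"         # errors or a rejected model become feedback
    return None                                  # action limit reached without an accepted solution
```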
### 3.1 Selection of Logic Puzzles We compiled two datasets to evaluate the problem-solving capabilities of our agent-centric approach. The first dataset, sourced from GitHub222https://github.com/ross-nordstrom/LogicSolver/tree/master/data, contains 59 Zebra puzzles involving entity-attribute matching. We further curated 55 additional puzzles from different sources from the Web and manually cross-checked them to determine they are valid zebra problems. ### 3.2 Agent Configuration We experimented with three different LLMs: GPT-4, GPT-3.5,and Llama3-8b . We used z3 as the automated theorem solver. Our total cost across all experiments was approximately 2500 USD. ##### Number of Retries In our experimental setup, we initially conduct the feedback loop once. To enhance performance and address syntactical errors in the final output, we implement an additional cold-start retry mechanism if we reach the action- limit without an error-free solution. This involves restarting the workflow from scratch with an increased temperature. ##### Response Limit To limit the conversation length and prevent hallucination, we define a maximum number of actions that the LLM-agent can take. All experiments performed allow the LLM to perform up to 4 actions; this limit is reset if the puzzle-solving task is retried. #### 3.2.1 Grading To assess solution accuracy, we created an autograder LLM-agent that provides a numeric grade to every solution generated by the solving agent. Each assignment is worth 1 point. In order to evaluate the reliability of this autograder, we also conducted a user study where a subset of the problems were regraded by humans and then compared to the autograder’s results. ##### Autograding GPT-4o was used for autograding. It received the ground-truth answer, final SMT-LIB output, and conversation history to assess the consistency and correctness of each solution. For each problem, the model compared the logical assignments produced by the solving agent against the reference assignments, producing a final accuracy score. To demonstrate the autograding process, consider a scenario where the output of the SMT-LIB solver is evaluated against a pre-defined answer key. The solver’s output and subsequent interpretation by the grader are detailed below. ###### SMT-LIB Solver Output Below is a sample SMT solution for the logic grid puzzle given in Fig. 1. ; 1 is Brazilian, 2 is German ; 3 is American (define-fun H1_Color () String "Blue") (define-fun H1_N () Int 1) (define-fun H1_Anml () String "Cats") (define-fun H1_Sp () String "Football") (define-fun H2_Color () String "Green") (define-fun H2_N () Int 3) (define-fun H2_Anml () String "Dogs") (define-fun H2_Sp () String "Basketball") (define-fun H3_Color () String "Red") (define-fun H3_N () Int 2) (define-fun H3_Anml () String "Fishes") (define-fun H3_Sp () String "Baseball") ###### Ground Truth Answer * • House 1: Blue, Brazilian, Fishes, Football * • House 2: Green, American, Cats, Baseball * • House 3: Red, German, Dogs, Basketball The autograder evaluates the solution by mapping the SMT-LIB output to the expected results either using contextual clues or an explicitly defined lookup table, which would be defined in the SMT-LIB comments, converting function definitions into comparable assignments, as in Table 1. 
Entity | Assignment | Result
---|---|---
House 1 | Color: Blue | ✓
House 1 | Nationality: Brazilian | ✓
House 1 | Animal: Cats | ✗
House 1 | Sport: Football | ✓
House 2 | Color: Green | ✓
House 2 | Nationality: American | ✓
House 2 | Animal: Dogs | ✓
House 2 | Sport: Basketball | ✗
House 3 | Color: Red | ✓
House 3 | Nationality: German | ✓
House 3 | Animal: Fishes | ✗
House 3 | Sport: Baseball | ✗

Table 1: Validation Results

###### Partial Scoring (PS)

Each correct match between the SMT-LIB output and the answer key earns a point. The autograder agent also calculates the total number of assignments, which is equal to the number of points it is possible to receive. In this example, 8 of the 12 assignments match the answer key, thus:

$\text{Partial Score}=\frac{\text{Correct Matches}}{\text{Total Matches}}=\frac{8}{12}\approx 0.67$

If the animals and sports had been chosen correctly, the score would be 1.

##### Manual User Study Grading

A separate user study manually graded 50 solutions from the state-of-the-art workflow, 35 solutions from a non-optimal variant, and 20 solutions from the naive approach. Though it was impractical to have all of the thousands of solutions that the LLM-agent generated be hand-graded, this user study allows us to quantify the correctness of our results and verify that our autograder correlates well with the ground-truth grades. The manual grading team included five undergraduate computer science students and one master’s student. We then used their manual grades to capture various statistical measures of similarity between human grading and LLM grading; these statistics are explained in the "Results" section. A large percentage of the attempted solutions included explicit lookup tables, making these solutions significantly more time-consuming to grade (see "SMT-LIB Solver Output" and "Answer Key" above). The lookup table could appear anywhere in the generated text, which comprises multiple blocks of SMT-LIB code, errors, and intermediate SMT models. We therefore do not include them in our user study.

## 4 Results

This analysis is structured around four key research questions: Firstly, we examine the baseline performance of different LLMs in solving logic puzzles without solver assistance to understand their intrinsic problem-solving capabilities. Secondly, we assess the improvements in accuracy and problem-solving completeness when integrating solver feedback, evaluating how external theorem provers enhance LLM effectiveness. Thirdly, we explore the impact of using a decomposition agent, analyzing whether segmenting puzzles into simpler components before solving improves overall solution quality. Fourthly, we conduct a user study to evaluate our LLM-Grader and substantiate the validity of our results.

### 4.1 ZPS Performance over Baselines

To establish a baseline, we first evaluate the performance of LLMs without the assistance of a solver by asking the LLM to solve the logic grid puzzle. This baseline configuration yields mediocre puzzle-solving accuracy, as detailed in Table 2. We report both the average partial score, given by the "Avg. PS" column, and the number of puzzles solved fully correctly, given by the "#Solved" column. For instance, GPT-4 under a baseline achieves an average partial score of 52.4% and solves 27/114 logic grid puzzles completely correctly. The effectiveness of the LLM-agent workflow increases markedly when solver feedback is incorporated.
As shown in Table 3, the integration of theorem prover feedback, without retries and under a deterministic generation setting (temperature = 0), increases GPT-4’s average partial score to 0.687 from a baseline of 0.524 ($\Delta=31.1$%). The inclusion of a decomposition agent further improves this to 0.700 ($\Delta=33.58$%). In terms of the total number of solutions that can be completely solved, GPT-4 with the solver solves up to 133.33% more problems than the baseline settings. GPT-3.5 shows a similar positive trend. Llama3’s improvement is more subtle; we believe this is because its smaller number of parameters limits its ability to generate syntactically correct SMT-LIB code. This theory is supported by the fact that in every Llama3 experiment, no less than 50 final solutions contained errors, whereas in every GPT-4 or GPT-3.5 experiment, the number was no more than 42. Nonetheless, Llama3 can also improve the total number of correct solutions by 50% over baseline.

### 4.2 ZPS Performance under Different Settings

For the variable temperature experiments, we set the model temperature to zero and increased it if the solution contained errors. While this approach provides the flexibility to bypass a solution if the deterministic solution is erroneous, it risks generating less stable solutions that may inadvertently replace syntactically incorrect yet valid solutions with syntactically correct but logically flawed ones. This phenomenon is particularly pronounced in models with fewer parameters, where the performance tends to decline with the introduction of retries. For example, under variable temperature conditions with retries, GPT-4 maintains a high accuracy rate of 76.1%, while Llama3’s accuracy degrades to 48.4%. The addition of a decomposition agent to the SMT-integrated LLM-agent yielded mixed results. For both GPT-4 and GPT-3.5, the average partial score and the number solved fully correctly slightly improved, in all cases by less than 5.5%. However, Llama3’s average partial score declined by less than 5% and it was able to solve 3 fewer problems than with just SMT integration. Because of the relatively small differences in all cases, more experimentation is needed to determine when decomposition increases performance.

Model | T | D | Avg. PS | #Solved
---|---|---|---|---
Llama3-8b | 0 | ✗ | 0.47 | 14 (12.3%)
GPT-3.5 | 0 | ✗ | 0.471 | 17 (15.0%)
GPT-4 | 0 | ✗ | 0.524 | 27 (23.7%)

Table 2: Baseline Performance of LLMs Without Solver Integration. The "D" column indicates if a decomposition agent was present in the workflow. The "T" column indicates Temperature.

Model | T | D | Avg. PS | #Solved | $\Delta\#$Solved
---|---|---|---|---|---
Llama3-8b | 0 | ✗ | 0.496 | 21 (18.4%) | 50.0%
GPT-3.5 | 0 | ✗ | 0.493 | 22 (19.3%) | 29.4%
GPT-4 | 0 | ✗ | 0.687 | 59 (51.8%) | 118.5%
Llama3-8b | Var. | ✗ | 0.436 | 15 (13.2%) | 7.1%
GPT-3.5 | Var. | ✗ | 0.484 | 24 (21.0%) | 41.2%
GPT-4 | Var. | ✗ | 0.761 | 72 (63.2%) | 166.7%
Llama3-8b | 0 | ✓ | 0.468 | 18 (15.8%) | 28.6%
GPT-3.5 | 0 | ✓ | 0.520 | 24 (21.0%) | 41.2%
GPT-4 | 0 | ✓ | 0.700 | 63 (55.3%) | 133.3%

Table 3: Enhancements from Solver Integration with Percentage Improvement Over Baseline. The "D" column indicates if a decomposition agent was present in the workflow. The "T" column indicates Temperature.

### 4.3 Manual Analysis of Grading

Based on our user study, our LLM-based grading systems demonstrate high accuracy across a variety of models and settings.
The system maintains consistent scoring accuracy, with exact match rates exceeding 78% across all tested scenarios. To evaluate the LLM-grader, we employed various statistical measures: (i) Avg. Abs. Diff.: the average magnitude of the difference between the partial score given by the LLM-grader and the human evaluator; (ii) Avg. Rel. Diff.: the expected percent difference between the partial score given by the LLM-grader and the human evaluator; the percentage of problems for which the LLM-grader (iii) overestimated and (iv) underestimated the partial score provided by the human evaluator; (v) the percentage of problems for which the LLM-grader and the human evaluator gave exactly the same partial score; and (vi) Joint Full Credit: the count of problems for which both the user and the LLM assigned full credit, normalized over the total number of problems that either party marked for full credit. This metric helps in understanding the extent of agreement in the grading of solutions between the human and machine evaluators.

The "Joint Full Credit" metric, which consistently registers above 85%, serves as a robust indicator of the LLM-grader’s capability to accurately assess fully correct solutions, as demonstrated by the level of agreement between the LLM and a human grader. Additionally, the analysis indicates a propensity for the grader to overestimate the score of the LLM without SMT integration, whereas the integration of SMT tends to result in slight underestimations by the grader. This observation suggests that the integration of SMT and a feedback-based loop may contribute more significantly to performance improvements than the raw grading differentials indicate.

 | GPT-4 | GPT-3.5 | GPT-4
 | SMT+D | Naive | SMT
Statistic | (50) | (20) | (35)
---|---|---|---
Exact Match (%) | 78.26 | 78.94 | 82.35
Avg. Abs. Diff | 0.056 | 0.040 | 0.117
Avg. Rel. Diff (%) | -3.547 | +13.8 | +2.916
LLM Overestimated (%) | 13.04 | 21.05 | 11.76
LLM Underestimated (%) | 8.70 | 0.00 | 5.88
Joint Full Credit (%) | 89.19 | 100 | 86.96
Spearman Correlation | 0.73 | 0.948 | 0.70

Table 4: User Study Statistics comparing different experimental setups. The first column is GPT-4 with SMT integration and the decomposition (D) agent over 50 problems. The second column is GPT-3.5 without SMT integration over 20 problems. The third column is GPT-4 with just SMT integration over 35 problems. All problems were graded by both the manual grader and the LLM.

## 5 Background and Related Work

The concept of agent-centric LLM agents, as discussed in recent literature, revolves around creating systems (usually backed by LLMs) that can act independently in diverse environments, both physical and virtual (Wang et al., 2024). This framework shifts the focus from passive systems to proactive entities capable of dynamic interaction and problem-solving. In these models, agents are designed to perceive and react to multi-modal data, integrating visual, auditory, and textual input to generate appropriate actions in real time. The most tangible benefit of this framework is feedback, which can take the form of a physical environment, error correction, or manual input (Durante et al., 2024). Significant work has been done to apply this framework to solving text-based puzzles. Zhou et al. (2023) used a process called Language Agent Tree Search (LATS), which integrates planning, reasoning, and acting within LLMs to decompose and solve a high-level reasoning task. Gao et al.
(2023) showed that generating intermediate representations as Python programs allowed small LLMs to outperform much larger ones. Logic-LM and SatLM both used LLMs to generate formal representations of general natural language problems and used off-the-shelf theorem provers to generate answers (Pan et al., 2023; Ye et al., 2023). While none of these approaches focus on Zebra puzzles, they each show that LLMs perform better when used as agents in a formally grounded system.

Research has shown that natural language cannot be mapped one-to-one with a formal space due to inherent ambiguities (Osama et al., 2020). For our approach, it was thus vital to create an agent that can take into account context and background knowledge to figure out the correct translation into a formal space. Even if the clues were perfectly translated as they are presented, a formal solver will not be able to generate a fully correct solution without additional encoding by the problem translator of this general context. Our approach is different from prior agent approaches in that we use a structured symbolic space (SMT-LIB) but use the syntactic and semantic feedback from an automated theorem prover for analysis in an LLM agent. We also provide a conceptual framework to understand LLM interaction with the automated theorem prover and its generated text as an agent.

## 6 Conclusion

This research shows the effectiveness of a multi-agent LLM and SMT framework that bolsters the performance of large language models (LLMs) in solving logic puzzles and other natural language tasks. Our work demonstrates the importance of integrating LLMs and SMT solvers in the task, boosting performance over an LLM alone. We also show that the inter-agent critique mechanism plays a crucial role. Through dialogues, agents critique and refine each other’s contributions, which leads to more accurate and consistent results. The development of an autograder, with a verified correlation to human evaluation, played a role in the feedback mechanism by indicating when a solution was not judged logically correct and also enabled iterative development of our approach. Our findings suggest that structured planning and agent feedback greatly enhance LLMs’ capability to solve logical problems.

Looking ahead, further research could optimize retry mechanisms for discovering more effective solutions, informed by approaches like Program-of-Thoughts and Graph-of-Thoughts-Rationale (Chen et al., 2023; Besta et al., 2024). Additionally, increasing the agent-environment size and the feedback loop length would enhance the solving agent’s self-correction capabilities by expanding the actual and effective context limits for remembering past strategies.

## 7 Limitations

This study, while advancing our understanding of LLMs in solving logic puzzles, has several limitations that warrant further investigation. Firstly, the experiments were confined to only three models: GPT-4, GPT-3.5, and Llama3-8b. Investigation of generalizability across different LLMs is warranted, especially because our performance gains occurred mainly in the GPT family. Secondly, our approach relied on specific prompt constructions for both the grader and the solver agents. There exists a possibility that alternative prompting strategies could yield more accurate or efficient problem-solving and grading results. Further research is needed to explore and optimize these prompts to fully leverage the potential of LLMs in this domain.
Additionally, our user study inherently carries some uncertainty regarding its correlation to actual problem-solving performance. Solutions involving complex lookup tables were excluded from the user study due to how time consuming they were to grade, which might affect the study’s comprehensiveness and the general applicability of our findings. Lastly, our bank of logic grid puzzles used in this study was somewhat limited in both size– we used 114 problems– and range of difficulty. The majority of puzzles used were subjectively rated as medium difficulty. Extending this research to include a larger and more varied dataset would verify the usefulness of our findings. ## 8 Ethics Statement Use of Generative AI. Generative models carry ethical risks, including the potential to produce harmful content or content that closely mirrors pre- training data. However, we are using the generative models to solve puzzles rather than showing their direct output, minimizing this risk. Compute. Employing deep learning models is computationally intensive and can have environmental implications. However, as no models were trained as part of this research, the computational impact remains relatively low. Human Evaluator. We use only 5 human evaluators who are undergraduate/masters students in the lab environment and were given full disclosure about the nature of the study and its unpaid nature. No ethical violations were committed in such setting. ## Acknowledgments ## References * Besta et al. (2024) Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Michal Podstawski, Lukas Gianinazzi, Joanna Gajda, Tomasz Lehmann, Hubert Niewiadomski, Piotr Nyczyk, and Torsten Hoefler. 2024. Graph of thoughts: Solving elaborate problems with large language models. _Proceedings of the AAAI Conference on Artificial Intelligence_ , 38(16):17682–17690. * Chen et al. (2021) Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Trottier, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. 2021. Evaluating large language models trained on code. _arXiv_. * Chen et al. (2023) Wenhu Chen, Xueguang Ma, Xinyi Wang, and William W. Cohen. 2023. Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks. _Preprint_ , arXiv:2211.12588. * Claes et al. (2019) Jens Claes, Bart Bogaerts, Rocsildes Canoy, and Tias Guns. 2019. User-oriented solving and explaining of natural language logic grid puzzles. In _The Third Workshop on Progress Towards the Holy Grail_ , volume 14. * Durante et al. (2024) Zane Durante, Qiuyuan Huang, Naoki Wake, Ran Gong, Jae Sung Park, Bidipta Sarkar, Rohan Taori, Yusuke Noda, Demetri Terzopoulos, Yejin Choi, Katsushi Ikeuchi, Hoi Vo, Li Fei-Fei, and Jianfeng Gao. 2024. 
Agent ai: Surveying the horizons of multimodal interaction. _arXiv_. * Gao et al. (2023) Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. 2023. Pal: Program-aided language models. In _International Conference on Machine Learning_ , pages 10764–10799. PMLR. * Groza (2023) Adrian Groza. 2023. Measuring reasoning capabilities of chatgpt. _arXiv_. * Milicevic et al. (2012) Aleksandar Milicevic, Joseph P Near, and Rishabh Singh. 2012. Puzzler: An automated logic puzzle solver. _Massachusetts Institute of Technology (MIT)_. * Mukherjee and Garain (2009) Anirban Mukherjee and Utpal Garain. 2009. A review of algorithms for solving mathematical word problems in natural language texts. _Artificial Intelligence Review_ , 32(4):285–298. * Osama et al. (2020) Mohamed Osama, Aya Zaki-Ismail, Mohamed Abdelrazek, John Grundy, and Amani Ibrahim. 2020. Score-based automatic detection and resolution of syntactic ambiguity in natural language requirements. In _2020 IEEE International Conference on Software Maintenance and Evolution (ICSME)_ , pages 651–661. * Pan et al. (2023) Shirui Pan, Yizhen Zheng, and Yixin Liu. 2023. Integrating graphs with large language models: Methods and prospects. _arXiv_. * Wang et al. (2024) Lei Wang, Chen Ma, Xueyang Feng, Zeyu Zhang, Hao Yang, Jingsen Zhang, Zhiyuan Chen, Jiakai Tang, Xu Chen, Yankai Lin, et al. 2024. A survey on large language model based autonomous agents. _Frontiers of Computer Science_ , 18(6):1–26. * Ye et al. (2023) Xi Ye, Qiaochu Chen, Isil Dillig, and Greg Durrett. 2023. Satlm: Satisfiability-aided language models using declarative prompting. In _Advances in Neural Information Processing Systems_ , volume 36, pages 45548–45580. Curran Associates, Inc. * Zhou et al. (2023) Andy Zhou, Kai Yan, Michal Shlapentokh-Rothman, Haohan Wang, and Yu-Xiong Wang. 2023. Language agent tree search unifies reasoning acting and planning in language models. _arXiv_. ## Appendix A Appendix ### A.1 Example Puzzle Workflow We begin by inputting the puzzle and its entities into the Solver LLM-Agent (in this case, GPT-4). first, second, third, fourthBridget, Kermit, Ophelia, Stretch#105, #118, #126, #128The ostrich that finished second was #128.First place was either #118 or #126.Third place was either #126 or Bridget.Ophelia finished second.Stretch finished 2 places after Kermit. 
The solver agent responds by attempting to translate the puzzle and its clues into SMT-LIB: (set-logic QF_LIA); Place of each ostrich(declare-const Bridget_Place Int)(declare-const Kermit_Place Int)(declare-const Ophelia_Place Int)(declare- const Stretch_Place Int); Number of each ostrich(declare-const Bridget_Number Int)(declare-const Kermit_Number Int)(declare-const Ophelia_Number Int)(declare-const Stretch_Number Int)(assert (and(or (= Bridget_Place 1)(= Bridget_Place 2)(= Bridget_Place 3) (= Bridget_Place 4))(or (= Kermit_Place 1)(= Kermit_Place 2)(= Kermit_Place 3) (= Kermit_Place 4))(or (= Ophelia_Place 1)(= Ophelia_Place 2)(= Ophelia_Place 3) (= Ophelia_Place 4))(or (= Stretch_Place 1)(= Stretch_Place 2)(= Stretch_Place 3) (= Stretch_Place 4))))(assert (and(or (= Bridget_Number 105)(= Bridget_Number 118)(= Bridget_Number 126)(= Bridget_Number 128))(or (= Kermit_Number 105)(= Kermit_Number 118)(= Kermit_Number 126)(= Kermit_Number 128))(or (= Ophelia_Number 105)(= Ophelia_Number 118)(= Ophelia_Number 126)(= Ophelia_Number 128))(or (= Stretch_Number 105)(= Stretch_Number 118)(= Stretch_Number 126)(= Stretch_Number 128)))); The ostrich that finished second was #128.(assert (or (and (= Ophelia_Place 2)(= Ophelia_Number 128)) (and (= Bridget_Place 2) (= Bridget_Number 128)) (and (= Kermit_Place 2) (= Kermit_Number 128)) (and (= Stretch_Place 2) (= Stretch_Number 128)))); The ostrich that finished first was either #118 or #126.(assert (or (and (= Ophelia_Place 1)(or (= Ophelia_Number 118)(= Ophelia_Number 126))) (and (= Bridget_Place 1) (or (= Bridget_Number 118) (= Bridget_Number 126))) (and (= Kermit_Place 1) (or (= Kermit_Number 118) (= Kermit_Number 126))) (and (= Stretch_Place 1) (or (= Stretch_Number 118) (= Stretch_Number 126))))); The runner that finished third was; either #126 or Bridget.(assert (or (and (= Ophelia_Place 3)(= Ophelia_Number 126)) (and (= Bridget_Place 3) (or (= Bridget_Number 126) (= Bridget_Number 105) (= Bridget_Number 118) (= Bridget_Number 128))) (and (= Kermit_Place 3) (= Kermit_Number 126)) (and (= Stretch_Place 3) (= Stretch_Number 126)))); Ophelia finished second.(assert (= Ophelia_Place 2)); Stretch finished 2 places after Kermit.(assert (= Stretch_Place (+ Kermit_Place 2))); Distinct places(assert (distinct Bridget_PlaceKermit_Place Ophelia_Place Stretch_Place)); Distinct numbers(assert (distinct Bridget_NumberKermit_Number Ophelia_Number Stretch_Number)) This SMT-LIB code is then fed into z3, which produces the following output: (error ""line 15column 0: invalid command, ’(’ expected"")(error ""line 15column 2: unexpected character"")(error ""line 15column 77: unexpected character"")(error ""line 15column 88: unexpected character"")(error ""line 15column 117: unexpected character"")(error ""line 15column 136: unexpected character"")(error ""line 15column 142: unexpected character"")(error ""line 31column 0: invalid command, ’(’ expected"")(error ""line 31column 7: unexpected character"")(error ""line 57column 0: invalid command, ’(’ expected"")(error ""line 57column 7: unexpected character"")(error ""line 65column 0: invalid command, ’(’ expected"")sat( (define-fun Ophelia_Number () Int 128) (define-fun Stretch_Place () Int 3) (define-fun Stretch_Number () Int 126) (define-fun Bridget_Place () Int 4) (define-fun Kermit_Number () Int 118) (define-fun Bridget_Number () Int 105) (define-fun Ophelia_Place () Int 2) (define-fun Kermit_Place () Int 1)) This output is fed back into the Solver LLM-Agent, which then produces the following modified SMT-LIB code: 
(set-logic QF_LIA); Place of each ostrich(declare-const Bridget_Place Int)(declare-const Kermit_Place Int)(declare-const Ophelia_Place Int)(declare- const Stretch_Place Int); Number of each ostrich(declare-const Bridget_Number Int)(declare-const Kermit_Number Int)(declare-const Ophelia_Number Int)(declare-const Stretch_Number Int); The places are in the set {1, 2, 3, 4}(assert (and(or (= Bridget_Place 1)(= Bridget_Place 2) (= Bridget_Place 3)(= Bridget_Place 4))(or (= Kermit_Place 1)(= Kermit_Place 2) (= Kermit_Place 3)(= Kermit_Place 4))(or (= Ophelia_Place 1)(= Ophelia_Place 2) (= Ophelia_Place 3)(= Ophelia_Place 4))(or (= Stretch_Place 1)(= Stretch_Place 2) (= Stretch_Place 3)(= Stretch_Place 4)))); Numbers are in the set {105, 118, 126, 128}(assert (and(or (= Bridget_Number 105)(= Bridget_Number 118) (= Bridget_Number 126)(= Bridget_Number 128))(or (= Kermit_Number 105)(= Kermit_Number 118) (= Kermit_Number 126)(= Kermit_Number 128))(or (= Ophelia_Number 105)(= Ophelia_Number 118) (= Ophelia_Number 126)(= Ophelia_Number 128))(or (= Stretch_Number 105)(= Stretch_Number 118) (= Stretch_Number 126)(= Stretch_Number 128)))); The ostrich that finished second was #128.(assert (or (and (= Ophelia_Place 2)(= Ophelia_Number 128)) (and (= Bridget_Place 2) (= Bridget_Number 128)) (and (= Kermit_Place 2) (= Kermit_Number 128)) (and (= Stretch_Place 2) (= Stretch_Number 128)))); The ostrich that finished first was; either #118 or #126.(assert (or (and (= Ophelia_Place 1)(or (= Ophelia_Number 118)(= Ophelia_Number 126))) (and (= Bridget_Place 1) (or (= Bridget_Number 118) (= Bridget_Number 126))) (and (= Kermit_Place 1) (or (= Kermit_Number 118) (= Kermit_Number 126))) (and (= Stretch_Place 1) (or (= Stretch_Number 118) (= Stretch_Number 126))))); The runner that finished third; was either #126 or Bridget.(assert (or (and (= Ophelia_Place 3)(= Ophelia_Number 126)) (and (= Bridget_Place 3) (or (= Bridget_Number 126) (= Bridget_Number 105) (= Bridget_Number 118) (= Bridget_Number 128))) (and (= Kermit_Place 3) (= Kermit_Number 126)) (and (= Stretch_Place 3) (= Stretch_Number 126)))); Ophelia finished second.(assert (= Ophelia_Place 2)); Stretch finished 2 places after Kermit.(assert (= Stretch_Place (+ Kermit_Place 2))); Distinct places(assert (distinctBridget_Place Kermit_PlaceOphelia_Place Stretch_Place)); Distinct numbers(assert (distinctBridget_Number Kermit_NumberOphelia_Number Stretch_Number))(check-sat)(get-model) Z3 produces the following output, which is then determined to be both error- free and consistent: sat ( (define-fun Ophelia_Number () Int 128) (define-fun Stretch_Place () Int 3) (define-fun Stretch_Number () Int 126) (define-fun Bridget_Place () Int 4) (define-fun Kermit_Number () Int 118) (define-fun Bridget_Number () Int 105) (define-fun Ophelia_Place () Int 2) (define-fun Kermit_Place () Int 1) ) ###### Ground Truth Answer * • Kermit: First, #118 * • Ophelia: Second, #128 * • Stretch: Third, #126 * • Bridget: Fourth, #105 The autograder evaluates the solution by mapping the SMT-LIB output to the expected results either using contextual clues or an explicitly defined lookup table, which would be defined in the SMT-LIB comments, converting function definitions into comparable assignments, as in Table 5. In this case, the solution gets full credit. 
Table 5: Validation Results Entity | Result ---|--- Kermit | Place: First (Correct) Kermit | Number: 118 (Correct) Ophelia | Place: Second (Correct) Ophelia | Number: 128 (Correct) Stretch | Place: Third (Correct) Stretch | Number: 126 (Correct) Bridget | Place: Fourth (Correct) Bridget | Number: 105 (Correct) ### A.2 User Study Instructions The following instructions were presented to our manual graders before they began grading. The full UI can be found at https://anonymous.4open.science/r/anon_emnlp-1AD0 by running the "autograder_flask.py" file.
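As a companion to the worked grading examples above, the following is a minimal sketch of the partial-scoring rule from Section 3.2.1: one point per correct entity/attribute assignment, normalized by the total number of assignments. The dictionary layout and the example values are illustrative assumptions, not the exact format produced by the agents or the autograder.

```python
def partial_score(predicted: dict, answer_key: dict) -> float:
    """One point per correct entity/attribute assignment,
    normalized by the total number of assignments in the answer key."""
    total = sum(len(attrs) for attrs in answer_key.values())
    correct = sum(
        1
        for entity, attrs in answer_key.items()
        for attribute, value in attrs.items()
        if predicted.get(entity, {}).get(attribute) == value
    )
    return correct / total

# Toy example in the spirit of Table 5 (a fully correct solution scores 1.0):
key = {"Kermit": {"Place": "First", "Number": 118},
       "Ophelia": {"Place": "Second", "Number": 128}}
print(partial_score(key, key))  # 1.0
```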
# Simultaneous Color Holography Eric Markley<EMAIL_ADDRESS>Reality Labs Research, MetaUSA , Nathan Matsuda Reality Labs Research, MetaUSA , Florian Schiffers Reality Labs Research, MetaUSA , Oliver Coissart Reality Labs Research, MetaUSA and Grace Kuo Reality Labs Research, MetaUSA ###### Abstract. Computer generated holography has long been touted as the future of augmented and virtual reality (AR/VR) displays, but has yet to be realized in practice. Previous high-quality, color holographic displays have either made a 3$\times$ sacrifice on frame rate by using a sequential illumination scheme or have made use of multiple spatial light modulators (SLM) and/or bulky, complex optical setups. The reduced frame rate of sequential color introduces distracting judder and color fringing in the presence of head motion while the form factor of current simultaneous color systems is incompatible with a head-mounted display. In this work, we propose a framework for simultaneous color holography that allows the use of the full SLM frame rate while maintaining a compact and simple optical setup. State-of-the-art hologram quality is achieved through a perceptual loss function, a physics-based neural network wavefront propagator, and a camera-calibrated forward model. We measurably improve hologram quality compared to other simultaneous color methods and move one step closer to the realization of color holographic displays for AR/VR. Figure 1. Simultaneous color holograms captured in experiment. Traditionally, color holograms are illuminated sequentially with a unique spatial light modulator (SLM) pattern for each color channel. In this work we outline a flexible framework that enables the use of a single SLM pattern for red-green- blue (RGB) holograms using simultaneous RGB illumination. We validate this framework experimentally on a simple and compact optical setup. ## 1\. Introduction Holographic displays are a promising technology for augmented and virtual reality (AR/VR). Such displays use a spatial light modulator (SLM) to shape an incoming coherent wavefront so that it appears as though the wavefront came from a real, three-dimensional (3D) object. The resulting image can have natural defocus cues, providing a path to resolve the vergence-accommodation conflict of stereoscopic displays (Kim et al., 2022b). Additionally, the fine- grain control offered by holography can also correct for optical aberrations, provide custom eyeglass prescription correction in software, and enable compact form-factors (Maimone et al., 2017), while improving light efficiency compared traditional LCD or OLED displays (Yin et al., 2022). Recent publications have demonstrated significant improvement in hologram image quality (Maimone et al., 2017; Peng et al., 2020; Choi et al., 2021a) and computation time (Shi et al., 2021; Eybposh et al., 2020), bringing holographic displays one step closer to practicality. However, color holography for AR/VR has remained an open problem. Traditionally, red-green-blue (RGB) holograms are created through field sequential color, where a separate hologram is computed for each of the three wavelengths and displayed in sequence and synchronized with the color of the illumination source. Due to persistence of vision, this appears as a single full color image if the update is sufficiently fast, enabling color holography for static displays. 
However, in a head-mounted AR/VR system displaying world- locked content, framerate requirements are higher to prevent noticeable judder (Van Waveren, 2016). Furthermore, field sequential color can lead to visible spatial separation of the colors, particularly when the user rotates their head while tracking a fixed object with their eyes (Riecke et al., 2006). Although these negative effects can be mitigated with high framerate displays, the most common SLM technology for holography, liquid-crystal-on-silicon (LCoS), is quite slow due to the physical response time of the liquid crystal (LC) layer (Zhang et al., 2014). Although most commercial LCoS SLMs can be driven at 60 Hz, at that speed the SLM will have residual artifacts from the prior frames (Haist and Osten, 2015). Micro-electro-mechanical system (MEMS) SLMs can be much faster (in the kilohertz range) but so far have larger pixels and limited bit depth (Duerr et al., 2021; Choi et al., 2022). In this work, we aim to display RGB holograms using only a single SLM pattern, enabling a $3\times$ increase in framerate compared to sequential color and removing color fringing artifacts in the presence of head motion. Our compact setup does not use a physical filter in the Fourier plane or bulky optics to combine color channels. Instead, the full SLM is simultaneously illuminated by an on-axis RGB source, and we optimize the SLM pattern to form the three color image. We design a flexible framework for end-to-end optimization of the digital SLM input from the target RGB intensity, allowing us to optimize for SLMs with extended phase range, and we develop a color-specific perceptual loss function which further improves color fidelity. Our method is validated experimentally on 2D and 3D content. Specifically, we make the following contributions: * • We introduce a novel algorithm for generating simultaneous color holograms which takes advantage of the extended phase range of the SLM in an end-to-end manner and uses a new loss function based on human color perception. * • We analyze the “depth replicas” artifact in simultaneous color holography and demonstrate how these replicas can be mitigated with extended phase range. * • We demonstrate high quality experimental simultaneous color holograms in both 2D and 3D using a custom camera-calibrated model. Figure 2. Hologram optimization framework. This figure illustrates the three key components of the simultaneous color optimization framework: an SLM model, a propagation model, and a perceptual loss function. The SLM model maps voltage values to a complex field using a learned cross-talk kernel and a linear lookup table. The complex wavefront from the SLM is then propagated to the sensor plane using a modified version of the model proposed by Gopakumar et al. (2021), which separates the zeroth and first diffraction orders and combines them through a U-Net. The output is then fed into the perceptual loss function, and gradients are calculated using Pytorch’s autograd implementation. The SLM voltages are then updated using these gradients. Rubik’s cube source image by Iwan Gabovitch (CC BY 2.0). ## 2\. 
Related Works #### Field Sequential Color The vast majority of color holographic displays use field sequential color in which the SLM is sequentially illuminated by red, green, and blue sources while the SLM pattern is updated accordingly (Maimone et al., 2017; Jang et al., 2018; Chakravarthula et al., 2019, 2020, 2022; Peng et al., 2020, 2021; Choi et al., 2021a, b; Shi et al., 2021; Yang et al., 2022; Li et al., 2016). Field sequential color is effective at producing full color holograms but reduces framerate by a factor of $3\times$. This is a challenge for LCoS SLMs where framerate is severely limited by the LC response time (Zhang et al., 2014). Although, SLMs based on MEMS technology can run at high framerates in the kilohertz range (Duerr et al., 2021), so far these modulators are maximum 4-bit displays, with most being binary (Choi et al., 2022; Kim et al., 2022b; Lee et al., 2022). Even with high framerate modulators, it may be worthwhile to maintain the full temporal bandwidth, since the extra bandwidth can be used to address other holography limitations. For example, speckle can be reduced through temporal averaging (Choi et al., 2022; Kim et al., 2022b; Lee et al., 2022), and limited etendue can be mitigated through pupil scanning (Jang et al., 2018; Kim et al., 2022a). #### Spatial Multiplexing An alternate approach is spatial multiplexing, which maintains the native SLM framerate by using different regions of the SLM for each color. Most prior works in this area use three separate SLMs and an array of optics to combine the wavefronts (Yaraş et al., 2009; Shiraki et al., 2009; Nakayama et al., 2010). Although this method produces high quality holograms, the resulting systems are bulky, expensive, and require precise alignment, making them poorly suited for near-eye displays. Spatial multiplexing can also be implemented with a single SLM split into sub-regions (Makowski et al., 2011, 2009); while less expensive, this approach still requires bulky combining optics and sacrifices space-bandwidth product (SBP), also known as etendue. Etendue is already a limiting factor in holographic displays (Kuo et al., 2020), and further reduction is undesirable as it limits the range of viewing angles or display field-of-view. #### Frequency Multiplexing Rather than split the physical extent of the SLM into regions, frequency multiplexing assigns each color a different region in the frequency domain, and the colors are separated with a physical color filter at the Fourier plane of a 4$f$ system (Makowski et al., 2010; Lin and Kim, 2017; Lin et al., 2019). A variation on this idea uses different angles of illumination for each color so that the physical filter in Fourier space is not color-specific (Xue et al., 2014). Frequency multiplexing can also be implemented with white light illumination, which reduces speckle noise at the cost of resolution (Kozacki and Chlipala, 2016; Yang et al., 2019). However, all of these techniques involve filtering in Fourier space, which sacrifices system etendue and requires a bulky 4$f$ system. #### Depth Division and Bit Division for Simultaneous Color The prior methods most closely related to our work also use simultaneous RGB illumination over the SLM, maintain the full SLM etendue, and don’t require a bulky 4$f$ system (Pi et al., 2022). We refer to the first method as depth division multiplexing which takes advantage of the ambiguity between color and propagation distance (explained in detail in Sec. 
3.1) and assigns each color a different depth (Makowski et al., 2008, 2010). After optimizing with a single color for the correct multiplane image, the authors show they can form a full color 2D hologram when illuminating in RGB. However, this approach does not account for wavelength dependence of the SLM response, and since it explicitly defines content at multiple planes, it translates poorly to 3D. Another similar approach is bit division multiplexing, which takes advantage of the extended phase range of LCoS SLMs (Jesacher et al., 2014). The authors calibrate an SLM lookup-table consisting of phase-value triplets (for RGB) as a function of digital SLM input, and they note that SLMs with extended phase range (up to $10\pi$) can create substantial diversity in the calibrated phase triplets. After pre-optimizing a phase pattern for each color separately, the lookup-table is used on a per-pixel basis to find the digital input that best matches the desired phase for all colors. In our approach, we also use an extended SLM phase range for the same reason, but rather than using a two-step process, we directly optimize the output hologram. This flexible framework also allows us to incorporate a perceptual loss function to further improve perceived image quality. #### Algorithms for Hologram Generation Our work builds on a body of literature applying iterative optimization algorithms to holographic displays. Perhaps most popular is the Gerchberg- Saxton (GS) method (Gerchberg, 1972), which is effective and easy to implement, but does not have an explicitly defined loss function, making it challenging to adapt to specific applications. Zhang et al. (2017) and Chakravarthula et al. (2019) were the first to explicitly formulate the hologram generation problem in an optimization framework. This framework has been very powerful, enabling custom loss functions (Choi et al., 2022) and flexible adaptation to new optical configurations (Choi et al., 2021b; Gopakumar et al., 2021). In particular, perceptual loss functions can improve the perceived image by taking aspects of human vision into account, such as human visual acuity (Kuo et al., 2020), foveated vision (Walton et al., 2022), and sensitivity to spatial frequencies during accommodation (Kim et al., 2022b). Like these prior works, we use an optimization-based framework which we adapt to account for the wavelength-dependence of the SLM; this also enables our new perceptual loss function for color, which is based on visual acuity difference between chrominance and luminance channels. #### Camera-Calibration of Holographic Displays When the model used for hologram generation does not match the physical system, deviations cause artifacts in the experimental holograms. Recently, several papers have addressed this issue using measurements from a camera in the system for calibration. Peng et al. (2020) proposed using feedback from the camera to update the SLM pattern for a particular image; although a single image can be improved, it does not extend to new content. A more flexible approach uses pairs of SLM patterns and camera captures to estimate the learnable parameters in a model, which is then used for offline hologram generation. Learnable parameters can be physically-based (Peng et al., 2020; Kavaklı et al., 2022; Chakrabarti, 2016), black box CNNs (Choi et al., 2021a), or a combination of both (Choi et al., 2022). 
The choice of learnable parameters effects the ability of the model to match the physical system; we introduce a new parameter for modeling SLM cross talk and tailor the CNN architecture for higher diffraction orders from the SLM. ## 3\. Simultaneous Color Holography A holographic image is created by a spatially coherent illumination source incident on an SLM. The SLM imparts a phase delay on the electric field; after light propagates some distance, the intensity of the electric field forms an image. Our goal in this work is to compute a single SLM pattern that simultaneously creates a three color RGB hologram. For instance, when the SLM is illuminated with a red source, the SLM forms a hologram of the red channel of an image; with a green source the same SLM pattern forms the green channel; and with the blue source it creates the blue channel. We propose a flexible optimization-based framework (Fig. 2) for generating simultaneous color holograms. We start with a generic model for estimating the hologram from the digital SLM pattern, $s$, as a function of illumination wavelength, $\lambda$: (1) $\displaystyle g_{\lambda}$ $\displaystyle=e^{i\phi_{\lambda}\left(s\right)}$ (2) $\displaystyle I_{z,\lambda}$ $\displaystyle=\left|f_{\text{prop}}\left(g_{\lambda},z,\lambda\right)\right|^{2}.$ Here, $\phi_{\lambda}$ is a wavelength-dependent function that converts the 8 bit digital SLM pattern to a phase delay, $g_{\lambda}$ is the electric field coming off the SLM, $f_{\text{prop}}$ represents propagation of the electric field, and $I_{z,\lambda}$ is the intensity a distance $z$ from the SLM. To calculate the SLM pattern, $s$, we can solve the following optimization problem (3) $\displaystyle\operatorname*{argmin}_{s}\sum_{z}\mathcal{L}\left(\hat{I}_{z,\lambda_{r}},I_{z,\lambda_{r}}\right)+\mathcal{L}\left(\hat{I}_{z,\lambda_{g}},I_{z,\lambda_{g}}\right)+\mathcal{L}\left(\hat{I}_{z,\lambda_{b}},I_{z,\lambda_{b}}\right),$ where $\hat{I}$ is the target image, $\mathcal{L}$ is a pixel-wise loss function such as mean-square error, and $\lambda_{r},\lambda_{g},\lambda_{b}$ are the wavelengths corresponding to red, green, and blue respectively. Since the model is differentiable, we solve Eq. 3 with gradient descent. Figure 3. Extended phase range reduces depth replicas in simulation. (A) Using an SLM with a uniform $2\pi$ phase range across all channels leads to strong depth replicas (top row), which reduce image quality at the target plane compared to the target (bottom row) and add in-focus content at depths that should be defocused. By using the extended phase Holoeye Pluto-2.1-Vis-016 SLM (with Red: $2.4\pi$, Green: $5.9\pi$, Blue: $7.4\pi$ phase ranges), depth replicas are significantly reduced (middle row), improving the quality of target plane holograms and creating defocused content at other depths. (B) Schematic illustrating the positions of the replicate planes and target plane. Note, this simulation was generated using RGB images and three color channels, but only the green and blue channels are displayed for clarity. (Rubik’s cube source image by Iwan Gabovitch (CC BY 2.0). ### 3.1. Color-Depth Ambiguity A common model for propagating electric fields is Fresnel propagation111Fresnel propagation is the paraxial approximation to the popular angular spectrum method (ASM). 
Since most commercial SLMs have pixel pitch greater than $3\,\mu\mathrm{m}$, resulting in a maximum diffraction angle under $5^{\circ}$ (well within the small angle approximation), Fresnel and ASM are almost identical for holography. (Goodman, 2005), which can be written in Fourier space as

(4) $f_{\text{fresnel}}(g,z,\lambda)=\mathcal{F}^{-1}\left\{\mathcal{F}\{g\}\cdot H(z,\lambda)\right\}$
(5) $H(z,\lambda)=\exp\left(i\pi\lambda z\left(f_{x}^{2}+f_{y}^{2}\right)\right)$

where $\mathcal{F}$ is a 2D Fourier transform, $H$ is the Fresnel propagation kernel, and $f_{x}$, $f_{y}$ are the spatial frequency coordinates. In Eq. 5, note that $\lambda$ and $z$ appear together, creating an ambiguity between wavelength and propagation distance. To see how this ambiguity affects color holograms, consider the case where $\phi_{\lambda}$ in Eq. 1 is independent of wavelength ($\phi_{\lambda}=\phi$). For example, this would be the case if the SLM had a linear phase range from 0 to $2\pi$ at every wavelength. Although this is unrealistic for most off-the-shelf SLMs, it is a useful thought experiment. Note that if $\phi$ is wavelength-independent, then so is the electric field off the SLM ($g_{\lambda}=g$). In this scenario, assuming $f_{\text{prop}}=f_{\text{fresnel}}$, the Fresnel kernel is the only part of the model affected by wavelength. Now assume that the SLM forms an image at distance $z_{0}$ under red illumination. From the ambiguity in the Fresnel kernel, we have the following equivalence:

(6) $H(z_{0},\lambda_{r})=H\left(\tfrac{\lambda_{r}}{\lambda_{g}}z_{0},\lambda_{g}\right)=H\left(\tfrac{\lambda_{r}}{\lambda_{b}}z_{0},\lambda_{b}\right).$

This means the same image formed in red at $z_{0}$ will also appear at $z=z_{0}\lambda_{r}/\lambda_{g}$ when the SLM is illuminated with green and at $z=z_{0}\lambda_{r}/\lambda_{b}$ when the SLM is illuminated with blue, since each pair has the same product $\lambda z$. We refer to these additional copies as “depth replicas,” and this phenomenon is depicted in Fig. 3. Note that depth replicas do not appear in sequential color holography, since the SLM pattern optimized for red is never illuminated with the other wavelengths. If we only care about the hologram at the target plane $z_{0}$, then the depth replicas are not an issue, and in fact, we can take advantage of the situation for hologram generation: The SLM pattern for an RGB hologram at $z_{0}$ is equivalent to the pattern that generates a three-plane red hologram where the RGB channels of the target are each at a different depth ($z_{0}$, $z_{0}\lambda_{g}/\lambda_{r}$, and $z_{0}\lambda_{b}/\lambda_{r}$ for RGB respectively). This is the basis of the depth division multiplexing approach of Makowski et al. (2008, 2010), where the authors optimize for this three-plane hologram in red, then illuminate in RGB. Although this makes the assumption that $\phi$ does not depend on $\lambda$, this connection between simultaneous color and multi-plane holography suggests simultaneous color should be possible for a single plane, since multi-plane holography has been successfully demonstrated in prior work. However, the ultimate goal of holography is to create 3D imagery, and the depth replicas could prevent us from placing content arbitrarily over the 3D volume. In addition, in-focus images can appear at depths that should be out-of-focus, which may prevent the hologram from successfully driving accommodation (Kim et al., 2022b).
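The wavelength/depth ambiguity is easy to verify numerically. The short NumPy sketch below builds the kernel of Eq. 5 on an illustrative frequency grid and checks that two wavelength and distance pairs with the same product $\lambda z$ give identical kernels; the pixel pitch, grid size, wavelengths, and target distance are assumptions chosen for illustration, not our experimental parameters.

```python
import numpy as np

def fresnel_kernel(z, lam, fx, fy):
    """Fresnel transfer function H(z, lambda) from Eq. (5)."""
    return np.exp(1j * np.pi * lam * z * (fx**2 + fy**2))

# Illustrative sampling: 8 um pixel pitch on a 1024 x 1024 grid (assumed values).
f = np.fft.fftfreq(1024, d=8e-6)
FX, FY = np.meshgrid(f, f)

lam_r, lam_g = 638e-9, 520e-9   # assumed red and green wavelengths
z0 = 0.10                        # assumed target plane 10 cm from the SLM

# The kernel depends on wavelength and depth only through the product lambda * z,
# so the red kernel at z0 is reproduced exactly by green at the rescaled depth.
z_replica = z0 * lam_r / lam_g
same = np.allclose(fresnel_kernel(z0, lam_r, FX, FY),
                   fresnel_kernel(z_replica, lam_g, FX, FY))
print(same)  # True
```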
We propose taking advantage of SLMs with extended phase range to mitigate the effects of depth replicas. ### 3.2. SLM Extended Phase Range In general, the phase $\phi_{\lambda}$ of the light depends on its wavelength, which was not considered in Sec. 3.1. Perhaps the most popular SLM technology today is LCoS, in which rotation of birefringent LC molecules causes a change in refractive index. The phase of light traveling through the LC layer is delayed by (7) $\displaystyle\phi_{\lambda}=\frac{2\pi d}{\lambda}n(s,\lambda),$ where $d$ is the thickness of the LC layer, and its refractive index, $n$, is controlled with the digital input $s$. $n$ also depends on $\lambda$ due to dispersion (Jesacher et al., 2014). The wavelength dependence of $\phi_{\lambda}$ presents an opportunity to reduce or remove the depth replicas. Even if the propagation kernel $H$ is the same for several $(\lambda,z)$ pairs, if the phase, and therefore the electric field off the SLM, changes with $\lambda$, then the output image intensity at the replica plane will also be different. As the wavelength-dependence of $\phi_{\lambda}$ increases, the replicas are diminished. We can quantify the degree of dependence on $\lambda$ by looking at the derivative $d\phi/{d\lambda}$, which informs us that a larger $n$ will give $\lambda$ more influence on the SLM phase. However, the final image intensity depends only on relative phase, not absolute phase; therefore, for the output image to have a stronger dependence on $\lambda$, we desire a larger $\Delta n=n_{\text{max}}-n_{\text{min}}$. In addition, $d\phi/{d\lambda}$ increases with $-dn/{d\lambda}$, suggesting that more dispersion is helpful for simultaneous color. Although $d\phi/{d\lambda}$ also depends on the absolute value of $\lambda$, we have minimal control over this parameter since there are limited wavelengths corresponding to RGB. In summary, this means we can reduce depth replicas in simultaneous color with a larger phase range on the SLM and higher dispersion. However, there is a trade-off: as the phase range increases, the limitations of the bit depth of the SLM become more noticeable, leading to increased quantization errors. We simulate the effect of quantization on hologram quality and find that PSNR and SSIM are almost constant for 6 bits and above. This suggests that each $2\pi$ range should have at least 6 bits of granularity. Therefore, we think that using a phase range of around $8\pi$ for an 8-bit SLM offers the best balance between replica reduction and maintaining accuracy for hologram generation. Figure 3 simulates the effect of extended phase range on depth replica removal. While holograms were calculated on full color images, only two color channels are shown for simplicity. In the first row of Fig. 3, we simulate an SLM with no wavelength dependence to $\phi$ (i.e., a 0–$2\pi$ phase range for each color). Consequently, perfect copies appear at the replica planes. In the second row, we simulate using the specifications of an extended phase range SLM (Holoeye Pluto-2.1-Vis-016), which has $2.4\pi$ range in red, $5.9\pi$ range in green, and $7.4\pi$ range in blue, demonstrating that replicas are substantially diminished with an extended phase range. By reducing the depth replicas, the amount of high-frequency, out-of-focus light at the sensor plane is reduced, leading to improved hologram quality. Figure 4. Perceptual loss improves color fidelity and reduces noise in simulation.
The first column of this figure depicts simulated holograms that were optimized with an RGB loss function (A) and our perceptual loss function (B). The same perceptual-loss filters were then applied to both of these simulated holograms as well as to the target image. Image metrics were calculated between the filtered holograms and the filtered target image (D). All image metrics are better for the perceptually optimized hologram (B). One should also note that the filtered target (D) and original target (C) are indistinguishable, suggesting that our perceptual loss function only removes information imperceptible to the human visual system, as intended. Figure 5. Comparison of bit division, depth division and our method of simultaneous color holography in simulation. Bit division (Col. 1) is noisier than our method but achieves comparable color fidelity, although more washed out. The depth division method (Col. 2) is also noisier than our method and has inferior color fidelity. Our method matches the target image well. Our method uses our perceptual loss function and a high-order angular spectrum propagation model with no learned components. Further implementation details for each method are available in the supplement. ### 3.3. Perceptual Loss Function Creating an RGB hologram with a single SLM pattern is an overdetermined problem, as there are $3\times$ more output pixels than degrees of freedom of the SLM. As a result, it may not be possible to exactly match the full RGB image, which can result in color deviations and de-saturation. To address this, we take advantage of color perception in human vision. There is evidence that the human visual system converts RGB images into a luminance channel (a grayscale image) and two chrominance channels, which contain information about the color (Wandell, 1995). The visual system is only sensitive to high-resolution features in the luminance channel, so the chrominance channels can be lower resolution with minimal impact on the perceived image (Wandell, 1995). This observation is used in JPEG compression (Pennebaker and Mitchell, 1992) and subpixel rendering (Platt, 2000), but to our knowledge, it has never been applied to holographic displays. By allowing the unperceived high-frequency chrominance and extremely high-frequency luminance features to be unconstrained, we can better use the degrees of freedom on the SLM to faithfully represent the rest of the image. Our flexible optimization framework allows us to easily change the RGB loss function in Eq. 3 to a perceptual loss. For each depth, we transform the RGB intensities of both $\hat{I}$ (the target image) and $I$ (the simulated image from the SLM) into opponent color space as follows: (8) $\begin{split}O_{1}&=0.299\cdot I_{\lambda_{r}}+0.587\cdot I_{\lambda_{g}}+0.114\cdot I_{\lambda_{b}}\\\ O_{2}&=I_{\lambda_{r}}-I_{\lambda_{g}}\\\ O_{3}&=I_{\lambda_{b}}-I_{\lambda_{r}}-I_{\lambda_{g}}\end{split}$ where $O_{1}$ is the luminance channel, and $O_{2}$, $O_{3}$ are the red-green and blue-yellow chrominance channels, respectively. We can then update Eq. 3 to (9) $\begin{split}\operatorname*{argmin}_{s}\sum_{z}\Big{[}&\mathcal{L}\left(\hat{O}_{1}*k_{1},{O}_{1}*k_{1}\right)+\mathcal{L}\left(\hat{O}_{2}*k_{2},{O}_{2}*k_{2}\right)+\\\ &\mathcal{L}\left(\hat{O}_{3}*k_{3},{O}_{3}*k_{3}\right)\Big{]},\end{split}$ where $*$ represents a 2D convolution with a low-pass filter ($k_{1}\ldots k_{3}$) for each channel in opponent color space.
$\hat{O}_{i}$ and $O_{i}$ are the $i$-th channel in opponent color space of $\hat{I}$ and $I$, respectively. In order to mimic the contrast sensitivity functions of the human visual system, we implement filters in the Fourier domain by applying a low-pass filter of 45% of the width of Fourier space to the chrominance channels ($O_{2}$, $O_{3}$) and a filter of 75% of the width of Fourier space to the luminance channel ($O_{1}$). These filter widths were heuristically determined. By de-prioritizing high frequencies in chrominance and extremely high frequencies in luminance, the optimizer is able to better match the low-frequency color, which is what is perceivable by the human visual system. Figure 4 depicts the hologram quality improvement obtained by optimizing with our perceptual loss function. The first column of Fig. 4 shows the perceptually filtered versions of simulated holograms generated using an RGB loss function (Fig. 4A) and our perceptual loss function (Fig. 4B). The second column displays the original target image (Fig. 4C) and the perceptually filtered target image (Fig. 4D). It can be observed that the two targets are indistinguishable, indicating that our perceptual filter choices align well with the human visual system. The PSNR and SSIM values are higher for the perceptually optimized hologram, and it also appears less noisy, with better color fidelity. This suggests that the loss function has effectively shifted most of the error into imperceptible regions of the opponent color space. We see an average PSNR increase of 6.4 dB and an average increase of 0.266 in SSIM across a test set of 294 images. ### 3.4. Simulation Comparisons In Figure 5 we compare the performance of our method to the depth division and bit division approaches to simultaneous color holography. Depth and bit division use only a single SLM, make use of the full space-time-bandwidth product of the SLM, and contain no bulky optics or filters, making these methods the most similar to ours. The holograms simulated with depth and bit division are much noisier and have lower color fidelity than our proposed method. The depth division simulated hologram has the worst color fidelity due to the replica planes discussed in Sec. 3.1 contributing defocused light at the target plane. Our method uses a perceptual loss function and the HOASM outlined by Gopakumar et al. (2021) to directly optimize the simultaneous color hologram, while the comparison methods optimize indirectly. This direct approach produces less noisy holograms with better color fidelity. ## 4\. Camera-Calibrated Model We’ve demonstrated that our algorithm can generate simultaneous color holograms in simulation. However, experimental holograms frequently do not match the quality of simulations due to mismatch between the physical system and the model used in optimization (Eqs. 1, 2). Therefore, to demonstrate simultaneous color experimentally, we need to calibrate the model to the experimental system. To do this, we design a model based on our understanding of the system’s physics, but we include several learnable parameters representing unknown elements. To fit the parameters, we capture a dataset of SLM patterns and camera captures and use gradient descent to estimate the learnable parameters based on the dataset. Next, we explain the model, which is summarized in Fig. 2. ### 4.1.
Learnable Parameters for Offline Calibration #### Lookup Table A key element in our optimization is $\phi_{\lambda}$, which converts the digital SLM input into the phase coming off the SLM, and it’s important that this function accurately matches the behavior of the real SLM. Many commercial SLMs ship with a lookup-table (LUT) describing $\phi_{\lambda}$; however, the manufacturer LUT is generally only calibrated at a few discrete wavelengths, which may not match the source used in the experiment. Therefore, we learn a LUT for each color channel as part of the model. Based on a pre-calibration of the LUT using the approach of Yang et al. (2015), we observe that the LUT is close to linear; we therefore parameterize the LUT with a linear model to encourage physically realistic solutions. #### SLM Crosstalk SLMs are usually modeled as having a constant phase over each pixel with sharp transitions at boundaries. However, in LCoS SLMs, elastic forces in the LC layer prevent sudden spatial variations, and the electric field that drives the pixels changes gradually over space. As a result, LCoS SLMs suffer from crosstalk, also called field-fringing, in which the phase is blurred (Apter et al., 2004; Moser et al., 2019; Persson et al., 2012). We model crosstalk with a convolution on the SLM phase. Combined with our linear LUT described above, we can describe the phase off the SLM as (10) $\displaystyle\phi_{\lambda}(s)=k_{\text{xt}}*(a_{1}\cdot s+a_{2})$ where $a_{1},a_{2}$ are the learned parameters of the LUT, and $k_{\text{xt}}$ is a learned $5\times 5$ convolution kernel representing crosstalk. Separate values of these parameters are learned for each color channel. #### Propagation with Higher Diffraction Orders The discrete pixel structure on the SLM creates higher diffraction orders that are not modeled well with ASM or Fresnel propagation. A physical aperture at the Fourier plane of the SLM can be used to block higher orders. However, accessing the Fourier plane requires a 4$f$ system, which adds significant size to the optical system, reducing the practicality for head-mounted displays. Therefore, we chose to avoid additional lenses after the SLM and instead account for higher orders in the propagation model. We adapt the higher-order angular spectrum model (HOASM) of Gopakumar et al. (2021). The zero-order diffraction, $G(f_{x},f_{y})$, and first-order diffraction, $G_{\text{1st order}}$, patterns are independently propagated with ASM to the plane of interest. Then the propagated fields are stacked and passed into a U-Net, which combines the zero and first orders and returns the image intensity: (11) $\displaystyle f_{\text{ASM}}(G,z)$ $\displaystyle=\mathcal{F}^{-1}\left\\{G\cdot H_{\text{ASM}}(z)\right\\}$ (12) $\displaystyle I_{z}$ $\displaystyle=\text{Unet}\left(f_{\text{ASM}}(G,z),\>f_{\text{ASM}}(G_{\text{1st order}},z)\right),$ where $H_{\text{ASM}}(z)$ is the ASM kernel. The U-Net architecture is detailed in the supplement; a separate U-Net for each color is learned from the data. The U-Net helps to address any unmodeled aspects of the system that may affect the final hologram quality, such as source polarization, SLM curvature, and beam profiles, and it better models the superposition of higher orders, allowing for more accurate compensation during SLM pattern optimization. Figure 8 compares ASM, HOASM, and our modified version with the U-Net. ## 5\.
Implementation #### Experimental Setup Our system starts with a fiber coupled RGB source, collimated with a $400\text{\,}\mathrm{mm}$ lens. The beam is aligned using two mirrors, passes through a linear polarizer and beamsplitter, reflects off the SLM (Holoeye-2.1-Vis-016), and passes through the beamsplitter a second time before directly hitting the color camera sensor with Bayer filter (FLIR GS3-U3-123S6C). As seen in Fig. 9, there’s no bulky 4$f$ system between the SLM and camera sensor, which allows the setup to be compact, but requires modeling of higher diffraction orders. The camera sensor is on a linear motion stage, enabling a range of propagation distances from $z=$80\text{\,}\mathrm{mm}$$ to $z=$130\text{\,}\mathrm{mm}$$. For our source, we use a superluminescent light emitting diode (SLED, Exalos EXC250011-00) rather than a laser due to its lower coherence, which has been demonstrated to reduce speckle in holographic displays (Deng and Chu, 2017). Although previous work showed state-of-the-art image quality by modeling the larger bandwidth of the SLED as a summation of coherent sources (Peng et al., 2021), we found the computational cost to be prohibitively high for our application due to GPU memory constraints. We achieved sufficient image quality while assuming a fully coherent model, potentially due to the U-net which is capable of simulating the additional blur we expect from a partially coherent source. #### Calibration Procedure We fit the learned parameters in our model (Eqs. 10 \- 12) using a dataset captured on the experimental system. We pre-calculate 882 SLM patterns from a personally collected dataset of images using a naive ASM propagation model. Each SLM pattern is captured in $5\text{\,}\mathrm{mm}$ increments from $z=$90\text{\,}\mathrm{mm}$$ to $120\text{\,}\mathrm{mm}$, resulting in a total of 6174 paired entries. The raw camera data is debayered and an affine transform is applied to align the image with the SLM (see Supplement for details). Model fitting is implemented in Pytorch using an L1 loss function between the model output and camera capture. To account for the camera color balance, we additionally learn a $3\times 3$ color calibration matrix from the RGB simulated intensities to the captured color image. We train until convergence, which is typically reached between 50 and 100 epochs (2-3 days on Nvidia A6000 GPU). #### Hologram Generation After training, we can generate holograms by solving Eq. 9 using the trained model for $I_{z,\lambda}$, implemented with Pytorch’s native auto- differentiation. The SLM pattern, $s$, is constrained to the range where the LUT is valid (for example, 0 - 255); the values outside that range are wrapped after every optimization step. On the Nvidia A6000 GPU, it takes about two minutes to optimize a 2D hologram. Computation time for the optimization of a 3D hologram scales proportional to the number of depth planes. ## 6\. Experimental Results #### 2-Dimensional Holograms We validate our simulation results by capturing holograms in experiment. The SLM patterns were optimized for a propagation distance of $120\text{\,}\mathrm{mm}$ using our perceptual loss function laid out in Section 3.3. A white border was added to each target image to improve the color fidelity by encouraging a proper white balance. After each hologram is captured, debayering is performed and a homography is applied to map from camera space to SLM space. The homography also downsamples the captured holograms to the same resolution as the SLM. 
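The capture post-processing just described (debayering followed by an affine camera-to-SLM mapping that also downsamples to SLM resolution) can be sketched as follows; the Bayer layout, interpolation choice, and helper names are illustrative assumptions rather than our exact code, and in the real pipeline a separate transform is used per color channel and depth.

```python
import cv2
import numpy as np

SLM_SHAPE = (1080, 1920)  # (rows, cols) of the SLM

def capture_to_slm_space(raw_bayer: np.ndarray, affine_2x3: np.ndarray) -> np.ndarray:
    """Debayer a raw camera frame and warp it into SLM coordinates (sketch)."""
    # Assumed RGGB Bayer layout; the actual sensor layout may differ.
    rgb = cv2.cvtColor(raw_bayer, cv2.COLOR_BayerRG2RGB)
    # Affine map from camera pixels to SLM pixels (fit from a dot-grid target);
    # warping onto the SLM grid also downsamples the capture to SLM resolution.
    rows, cols = SLM_SHAPE
    aligned = cv2.warpAffine(rgb, affine_2x3, (cols, rows), flags=cv2.INTER_LINEAR)
    return aligned.astype(np.float32) / 255.0
```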
The captured results are shown in Figure 6. The images match simulation well, validating our simultaneous color algorithm, although experimental results are noisier, with lower color fidelity, due to model mismatch. #### 3-Dimensional Holograms As mentioned earlier, a major appeal of holography is the ability to solve the vergence-accommodation conflict without the need for eye tracking. Consequently, we also validate our method for 3D scenes. A 4-plane focal stack was rendered with a blur radius of 0.5 pixels per millimeter of depth. Holograms were captured at distances from $90\text{\,}\mathrm{mm}$ to $120\text{\,}\mathrm{mm}$ in $10\text{\,}\mathrm{mm}$ increments. The results are displayed in Fig. 7, and the quality is similar to the 2D case. These experimental results demonstrate the ability to form 3D color holograms with natural defocus blur from a single SLM frame. ## 7\. Limitations While our method improves hologram quality for simultaneous illumination and is compatible with VR and AR displays, it does have limitations. First, our method is not equally effective for all images. Natural images with high levels of texture work best, as they have similarly structured color channels and contain high-frequency color information that is perceptually suppressible by our loss function. However, images with large flat areas may show noticeable artifacts, such as the cat paws image in Figure 6. Unnatural images often have more saturated color, creating the more difficult task of finding an SLM pattern that can produce three largely unique holograms. Additionally, our method takes on the order of minutes to calculate a single SLM pattern for a 2D image using an Nvidia A6000. This is incompatible with real-time displays. Recent work has shown that neural nets can produce SLM patterns for holography in near real time while maintaining hologram quality (Shi et al., 2021; Eybposh et al., 2020; Yang et al., 2022). A neural SLM pattern generator for simultaneous color holography is likely feasible, but has been left for future work. ## 8\. Conclusion In summary, we have developed a comprehensive framework for generating high-quality color holograms using simultaneous RGB illumination and a simple, compact optical setup. Our framework features a camera-calibrated, differentiable forward model that reduces model mismatch and allows for the use of custom loss functions. By employing a perceptual loss function, we have successfully addressed the difficult challenge of simultaneous color holography, as validated by experimental testing in 2D and 3D. Our work brings us closer to creating holographic near-eye displays. Figure 6. Experimentally captured 2D holograms. This figure depicts experimentally captured holograms at a depth of $120\text{\,}\mathrm{mm}$. Row one contains the experimentally captured images. Row two is the simulation output of the optimized SLM pattern. Row three contains the target images. While most of the captured holograms have good color fidelity, our method is least effective on highly saturated images with low texture, such as the cat paws in column 4, which represents a limitation of our method (see Sec. 7). Figure 7. Experimentally captured focal stack. This figure depicts a focal stack captured from $90\text{\,}\mathrm{mm}$ to $120\text{\,}\mathrm{mm}$ in $10\text{\,}\mathrm{mm}$ increments. Row one contains the experimentally captured images. Row two is the simulation output of the optimized SLM pattern. Row three contains the target images. Figure 8.
Comparison of different propagation methods for suppressing higher diffraction orders. The first column shows the results obtained using the traditional angular spectrum method (ASM) which doesn’t model higher diffraction orders. The second column shows the results obtained using HOASM which reduces the visibility of higher orders but fails to completely suppress them. The third column shows the results obtained using our proposed learned propagation method that includes a U-net, which largely suppresses the higher diffraction orders and results in a hologram with the fewest artifacts, suggesting the learned propagation model best matches the physical propagation. Figure 9. Experimental setup. (A) Top view of our system with labeled components and approximate beam path is drawn in green. (B) Side-view of the system. ## References * (1) * Apter et al. (2004) Boris Apter, Uzi Efron, and Eldad Bahat-Treidel. 2004\. On the fringing-field effect in liquid-crystal beam-steering devices. _Applied optics_ 43, 1 (2004), 11–19. * Chakrabarti (2016) Ayan Chakrabarti. 2016\. Learning sensor multiplexing design through back-propagation. _Advances in Neural Information Processing Systems_ Nips (2016), 3089–3097. * Chakravarthula et al. (2022) Praneeth Chakravarthula, Seung-Hwan Baek, Ethan Tseng, Andrew Maimone, Grace Kuo, Florian Schiffers, Nathan Matsuda, Oliver Cossairt, Douglas Lanman, and Felix Heide. 2022. Pupil-aware Holography. _arXiv preprint arXiv:2203.14939_ (2022). * Chakravarthula et al. (2019) Praneeth Chakravarthula, Yifan Peng, Joel Kollin, Henry Fuchs, and Felix Heide. 2019\. Wirtinger holography for near-eye displays. _ACM Transactions on Graphics_ 38, 6 (2019). https://doi.org/10.1145/3355089.3356539 * Chakravarthula et al. (2020) Praneeth Chakravarthula, Ethan Tseng, Tarun Srivastava, Henry Fuchs, and Felix Heide. 2020\. Learned hardware-in-the-loop phase retrieval for holographic near-eye displays. _ACM Transactions on Graphics_ 39, 6 (2020). https://doi.org/10.1145/3414685.3417846 * Choi et al. (2022) Suyeon Choi, Manu Gopakumar, Yifan Peng, Jonghyun Kim, Matthew O’Toole, and Gordon Wetzstein. 2022\. Time-multiplexed Neural Holography: A Flexible Framework for Holographic Near-eye Displays with Fast Heavily-quantized Spatial Light Modulators. (2022), 1–9. https://doi.org/10.1145/3528233.3530734 * Choi et al. (2021a) Suyeon Choi, Manu Gopakumar, Yifan Peng, Jonghyun Kim, and Gordon Wetzstein. 2021a. Neural 3D Holography: Learning Accurate Wave Propagation Models for 3D Holographic Virtual and Augmented Reality Displays. _ACM Transactions on Graphics_ 40, 6 (2021). https://doi.org/10.1145/3478513.3480542 * Choi et al. (2021b) Suyeon Choi, Jonghyun Kim, Yifan Peng, and Gordon Wetzstein. 2021b. Optimizing image quality for holographic near-eye displays with michelson holography. _Optica_ 8, 2 (2021), 143–146. * Deng and Chu (2017) Yuanbo Deng and Daping Chu. 2017. Coherence properties of different light sources and their effect on the image sharpness and speckle of holographic displays. _Scientific Reports_ 7, 1 (2017), 1–12. https://doi.org/10.1038/s41598-017-06215-x * Duerr et al. (2021) Peter Duerr, Andreas Neudert, Christoph Hohle, Hagen Stolle, Johannes Pleikies, and Hagen Sahm. 2021\. MEMS Spatial Light Modulators for Real Holographic 3D Displays. In _MikroSystemTechnik Congress 2021; Congress_. 1–4. * Eybposh et al. (2020) M Hossein Eybposh, Nicholas W Caira, Praneeth Chakravarthula, Mathew Atisa, and Nicolas C Pégard. 2020. 
High-speed computer-generated holography using convolutional neural networks. In _Optics and the Brain_. Optica Publishing Group, BTu2C–2. * Gerchberg (1972) Ralph W Gerchberg. 1972\. A practical algorithm for the determination of plane from image and diffraction pictures. _Optik_ 35, 2 (1972), 237–246. * Goodman (2005) Joseph W Goodman. 2005\. _Introduction to fourier optics_. Roberts & Company Publishers. * Gopakumar et al. (2021) Manu Gopakumar, Jonghyun Kim, Suyeon Choi, Yifan Peng, and Gordon Wetzstein. 2021. Unfiltered holography: optimizing high diffraction orders without optical filtering for compact holographic displays. _Optics Letters_ 46, 23 (2021), 5822. https://doi.org/10.1364/ol.442851 * Haist and Osten (2015) Tobias Haist and Wolfgang Osten. 2015. Holography using pixelated spatial light modulators—part 1: theory and basic considerations. _Journal of Micro/Nanolithography, MEMS, and MOEMS_ 14, 4 (2015), 041310\. * Jang et al. (2018) Changwon Jang, Kiseung Bang, Gang Li, and Byoungho Lee. 2018\. Holographic near-eye display with expanded eye-box. _ACM Transactions on Graphics (TOG)_ 37, 6 (2018), 1–14. * Jesacher et al. (2014) Alexander Jesacher, Stefan Bernet, and Monika Ritsch-Marte. 2014\. Colour hologram projection with an SLM by exploiting its full phase modulation range. _Optics Express_ 22, 17 (8 2014), 20530\. https://doi.org/10.1364/oe.22.020530 * Kavaklı et al. (2022) Koray Kavaklı, Hakan Urey, and Kaan Akşit. 2022\. Learned holographic light transport. _Applied Optics_ 61, 5 (2022), B50–B55. * Kim et al. (2022b) Dongyeon Kim, Seung-Woo Nam, Byounghyo Lee, Jong-Mo Seo, and Byoungho Lee. 2022b. Accommodative holography: improving accommodation response for perceptually realistic holographic displays. _ACM Transactions on Graphics (TOG)_ 41, 4 (2022), 1–15. * Kim et al. (2022a) Jonghyun Kim, Manu Gopakumar, Suyeon Choi, Yifan Peng, Ward Lopes, and Gordon Wetzstein. 2022a. Holographic glasses for virtual reality. In _ACM SIGGRAPH 2022 Conference Proceedings_. 1–9. * Kozacki and Chlipala (2016) Tomasz Kozacki and Maksymilian Chlipala. 2016. Color holographic display with white light LED source and single phase only SLM. _Optics Express_ 24, 3 (feb 2016), 2189\. https://doi.org/10.1364/oe.24.002189 * Kuo et al. (2020) Grace Kuo, Laura Waller, Ren Ng, and Andrew Maimone. 2020\. High resolution étendue expansion for holographic displays. _ACM Transactions on Graphics (TOG)_ 39, 4 (2020), 66–1. * Lee et al. (2022) Byounghyo Lee, Dongyeon Kim, Seungjae Lee, Chun Chen, and Byoungho Lee. 2022. High-contrast, speckle-free, true 3D holography via binary CGH optimization. _Scientific reports_ 12, 1 (2022), 1–12. * Li et al. (2016) Gang Li, Dukho Lee, Youngmo Jeong, Jaebum Cho, and Byoungho Lee. 2016. Holographic display for see-through augmented reality using mirror-lens holographic optical element. _Optics letters_ 41, 11 (2016), 2486–2489. * Lin et al. (2019) Shu-Feng Lin, Hong-Kun Cao, and Eun-Soo Kim. 2019. Single SLM full-color holographic three-dimensional video display based on image and frequency-shift multiplexing. _Opt. Express_ 27, 11 (may 2019), 15926–15942. https://doi.org/10.1364/OE.27.015926 * Lin and Kim (2017) Shu-Feng Lin and Eun-Soo Kim. 2017. Single SLM full-color holographic 3-D display based on sampling and selective frequency-filtering methods. _Opt. Express_ 25, 10 (may 2017), 11389–11404. https://doi.org/10.1364/OE.25.011389 * Maimone et al. (2017) Andrew Maimone, Andreas Georgiou, and Joel S Kollin. 2017\. 
Holographic near-eye displays for virtual and augmented reality. _ACM Transactions on Graphics (TOG)_ 36, 4 (2017), 85\. * Makowski et al. (2011) Michal Makowski, Izabela Ducin, Karol Kakarenko, Jaroslaw Suszek, Maciej Sypek, Andrzej Kolodziejczyk, P.-H Yao, C.-H Chen, J.-N 4 Kuo, and H.-W Wu. 2011\. _Simple holographic projection in color_. Technical Report. 7–9 pages. www.osiris-project.eu * Makowski et al. (2010) Michal Makowski, Izabela Ducin, Maciej Sypek, Agnieszka Siemion, Andrzej Siemion, Jaroslaw Suszek, and Andrzej Kolodziejczyk. 2010. _Color image projection based on Fourier holograms_. Technical Report 8. 1227 pages. * Makowski et al. (2009) Michal Makowski, Mciej Sypek, Izabela Ducin, Agnieszka Fajst, Andrzej Siemion, Jaroslaw Suszek, and Andrzej Kolodzijczyk. 2009. Experimental evaluation of a full-color compact lensless holographic display. _Opt. Express_ 17, 23 (2009), 20840–20846. https://doi.org/10.1364/OE.17.020840 * Makowski et al. (2008) Michal Makowski, Maciej Sypek, and Andrzej Kolodziejczyk. 2008\. _Colorful reconstructions from a thin multi-plane phase hologram_. Technical Report. * Moser et al. (2019) Simon Moser, Monika Ritsch-Marte, and Gregor Thalhammer. 2019\. Model-based compensation of pixel crosstalk in liquid crystal spatial light modulators. _Optics express_ 27, 18 (2019), 25046–25063. * Nakayama et al. (2010) Hirotaka Nakayama, Naoki Takada, Yasuyuki Ichihashi, Shin Awazu, Tomoyoshi Shimobaba, Nobuyuki Masuda, and Tomoyoshi Ito. 2010. _Real-time color electroholography using multiple graphics processing units and multiple high-definition liquid-crystal display panels_. Technical Report. * Peng et al. (2021) Yifan Peng, Suyeon Choi, Jonghyun Kim, and Gordon Wetzstein. 2021\. Speckle-free holography with partially coherent light sources and camera-in-the-loop calibration. _Science Advances_ 7, 46 (2021). https://doi.org/10.1126/sciadv.abg5040 * Peng et al. (2020) Yifan Peng, Suyeon Choi, Nitish Padmanaban, and Gordon Wetzstein. 2020. Neural holography with camera-in-the-loop training. _ACM Transactions on Graphics_ 39, 6 (11 2020). https://doi.org/10.1145/3414685.3417802 * Pennebaker and Mitchell (1992) William B Pennebaker and Joan L Mitchell. 1992. _JPEG: Still image data compression standard_. Springer Science & Business Media. * Persson et al. (2012) Martin Persson, David Engström, and Mattias Goksör. 2012\. Reducing the effect of pixel crosstalk in phase only spatial light modulators. _Optics express_ 20, 20 (2012), 22334–22343. * Pi et al. (2022) Dapu Pi, Juan Liu, and Yongtian Wang. 2022. Review of computer-generated hologram algorithms for color dynamic holographic three-dimensional display. https://doi.org/10.1038/s41377-022-00916-3 * Platt (2000) John C Platt. 2000\. Optimal filtering for patterned displays. _IEEE Signal Processing Letters_ 7, 7 (2000), 179–181. * Riecke et al. (2006) Bernhard E Riecke, Hans-Günther Nusseck, and Jörg Schulte-Pelkum. 2006. Selected technical and perceptual aspects of virtual reality displays. (2006). * Shi et al. (2021) Liang Shi, Beichen Li, Changil Kim, Petr Kellnhofer, and Wojciech Matusik. 2021. Towards real-time photorealistic 3D holography with deep neural networks. _Nature_ 591, 7849 (2021), 234–239. * Shiraki et al. (2009) Atsushi Shiraki, Naoki Takada, Masashi Niwa, Yasuyuki Ichihashi, Tomoyoshi Shimobaba, Nobuyuki Masuda, Tomoyoshi Ito, P S Hilaire, S A Benton, M Lucente, M L Jepsen, J Kollin, H Yoshikawa, and J Underkoffler. 2009\. 
_Simplified electroholographic color reconstruction system using graphics processing unit and liquid crystal display projector References and links_. Technical Report. http://www.opticsinfobase.org/oe/abstract.cfm?id=81092. * Van Waveren (2016) JMP Van Waveren. 2016\. The asynchronous time warp for virtual reality on consumer hardware. In _Proceedings of the 22nd ACM Conference on Virtual Reality Software and Technology_. 37–46. * Walton et al. (2022) David R Walton, Koray Kavaklı, Rafael Kuffner Dos Anjos, David Swapp, Tim Weyrich, Hakan Urey, Anthony Steed, Tobias Ritschel, and Kaan Akşit. 2022\. Metameric Varifocal Holograms. In _2022 IEEE Conference on Virtual Reality and 3D User Interfaces (VR)_. IEEE, 746–755. * Wandell (1995) Brian A Wandell. 1995\. _Foundations of vision_. Sinauer Associates. * Xue et al. (2014) Gaolei Xue, Juan Liu, Xin Li, Jia Jia, Zhao Zhang, Bin Hu, and Yongtian Wang. 2014\. Multiplexing encoding method for full-color dynamic 3D holographic display. _Opt. Express_ 22, 15 (Jul 2014), 18473–18482. https://doi.org/10.1364/OE.22.018473 * Yang et al. (2022) Daeho Yang, Wontaek Seo, Hyeonseung Yu, Sun Il Kim, Bongsu Shin, Chang-Kun Lee, Seokil Moon, Jungkwuen An, Jong-Young Hong, Geeyoung Sung, et al. 2022\. Diffraction-engineered holography: Beyond the depth representation limit of holographic displays. _Nature Communications_ 13, 1 (2022), 1–11. * Yang et al. (2015) Lei Yang, Jun Xia, Chenliang Chang, Xiaobing Zhang, Zhiming Yang, and Jianhong Chen. 2015\. Nonlinear dynamic phase response calibration by digital holographic microscopy. _Applied optics_ 54, 25 (2015), 7799–7806. * Yang et al. (2019) Xin Yang, Ping Song, HongBo Zhang, and Qiong-Hua Wang. 2019\. Full-color computer-generated holographic near-eye display based on white light illumination. _Optics Express_ 27, 26 (2019), 38236–38249. * Yaraş et al. (2009) Fahri Yaraş, Hoonjong Kang, and Levent Onural. 2009\. _Real-time phase-only color holographic video display system using LED illumination_. Technical Report. * Yin et al. (2022) Kun Yin, En Lin Hsiang, Junyu Zou, Yannanqi Li, Zhiyong Yang, Qian Yang, Po Cheng Lai, Chih Lung Lin, and Shin Tson Wu. 2022. Advanced liquid crystal devices for augmented reality and virtual reality displays: principles and applications. _Light: Science and Applications_ 11, 1 (2022). https://doi.org/10.1038/s41377-022-00851-3 * Zhang et al. (2017) Jingzhao Zhang, Nicolas Pégard, Jingshan Zhong, Hillel Adesnik, and Laura Waller. 2017\. 3D computer-generated holography by non-convex optimization. _Optica_ 4, 10 (2017), 1306–1313. * Zhang et al. (2014) Zichen Zhang, Zheng You, and Daping Chu. 2014. Fundamentals of phase-only liquid crystal on silicon (LCOS) devices. _Light: Science & Applications_ 3, 10 (2014), e213–e213. Supplementary Material – Simultaneous Color Holography ## S1. Additional Implementation Details #### Spatial Light Modulator For all simulations, a spatial light modulator (SLM) with $1920\times 1080$ pixels and a pixel size of $$8\text{\,}\mathrm{\SIUnitSymbolMicro m}$\times$8\text{\,}\mathrm{\SIUnitSymbolMicro m}$$ is used. The phase ranges of the red, green, and blue channels are $2.4\pi$, $5.9\pi$, and $7.4\pi$, respectively, unless otherwise noted. These values were experimentally calibrated for the Holoeye Pluto-2.1-Vis-016 SLM. The propagation distance of all simulated holograms is $100\text{\,}\mathrm{mm}$ unless otherwise noted. 
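Using the simulation settings above, the gradient-descent optimization of the main text (Eq. 3, or Eq. 9 with the perceptual loss) can be sketched as follows. This is a simplified illustration rather than our implementation: `propagate` stands in for the calibrated forward model, `loss_fn` for the chosen loss, the wavelengths are assumed values, and the phase conversion assumes a purely linear LUT.

```python
import torch

PHASE_RANGE_PI = {"r": 2.4, "g": 5.9, "b": 7.4}       # per-channel phase range (units of pi)
WAVELENGTH = {"r": 638e-9, "g": 520e-9, "b": 450e-9}   # assumed RGB wavelengths [m]

def optimize_slm(target_rgb, propagate, loss_fn, z=0.1, steps=1000, lr=0.05):
    """Sketch of simultaneous-color SLM optimization by gradient descent."""
    s = torch.rand(1080, 1920, requires_grad=True)     # normalized digital SLM pattern
    opt = torch.optim.Adam([s], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = 0.0
        for i, c in enumerate(("r", "g", "b")):
            phase = s * PHASE_RANGE_PI[c] * torch.pi   # crude linear-LUT assumption
            field = torch.exp(1j * phase)
            intensity = propagate(field, z, WAVELENGTH[c]).abs() ** 2
            loss = loss + loss_fn(intensity, target_rgb[i])
        loss.backward()
        opt.step()
        with torch.no_grad():
            s.copy_(s % 1.0)   # wrap back into the valid range after every step
    return s.detach()
```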
#### Modified High Order Angular Spectrum Method (HOASM) We implement a modified version of the High Order Angular Spectrum Method (HOASM) as described by Gopakumar et al. (2021). Instead of propagating the zero- and first-order together, we propagate them separately. The zero-order is propagated by performing the traditional angular spectrum method (ASM). To propagate the first-order, we pattern the zero-padded Fourier transform of the complex field to be propagated into a $3\times 3$ grid. The center Fourier transform of the grid is then zeroed out. The Fourier representation of the first-order is then weighted with a sinc function and propagated to the sensor plane using ASM. The field is then down-sampled and cropped. The complex fields of the zero- and first-orders are then split into real and imaginary parts and stacked before being fed into a U-Net. The U-Net consists of 4 downsampling layers, the number of channels increases from 4 to 32 during the first downsampling layer and doubles in each of the next 3 downsampling layers until there are 256 channels. Four upsampling layers are then applied, producing a single-channel output representing the intensity of the propagated wavefront. #### Camera Space to SLM Space Homography To perform either offline or active camera-in-the-loop optimization, the captured wavefront and SLM must be in the same space. This requires a transform and downsampling of the captured image to place it in the same coordinate system as the SLM pattern used to generate it. We opt to use an affine transform to perform this mapping. The affine transform is calculated as follows: First, an SLM pattern is calculated that produces a grid of dots. The dots are then detected on the sensor, and their centers are estimated in camera space coordinates. The centers of the dots are known in SLM space since the target image containing the dots is in SLM space for optimization. Finally, Python’s OpenCV package is used to produce the affine transform matrix that maps the captured dots to the SLM coordinate space. A unique homography is calculated for each depth location and color channel. #### Source Power Optimization Correctly setting the power of each color channel of the SLED for a given hologram is an important step to achieving good color fidelity. To achieve this, we use an active camera-in-the-loop based approach to optimize the power of the color channels. First, the power of the source is set to an arbitrary value less than 100% across all three color channels. A baseline reference image is captured, debayered, and mapped to the SLM space. Three learnable weighting parameters, one for each color channel, are initialized to unity and applied to the captured reference image. These weighting parameters serve as a proxy to optimizing the source power. An iterative process is then undertaken, where an image is captured on the camera, debayered, and mapped to the SLM space. The loss between this image and the target image is calculated, and then backpropagated using the computational graph of the weighting parameters applied to the reference image. The initial source power is then multiplied by the updated weighting parameters, and a new image is captured, restarting the iterative loop. This is done until the color weighting parameters have converged, usually taking between 15-30 iterations. 
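One iteration of this loop might look roughly like the sketch below; `capture_camera_frame` and `to_slm_space` are hypothetical helpers, and the straight-through construction is one possible way to realize the proxy gradient described above.

```python
import torch

def refine_source_power(target, reference, initial_power, capture_camera_frame,
                        to_slm_space, loss_fn, iters=30, lr=0.05):
    """Sketch of the camera-in-the-loop source power refinement (illustrative)."""
    weights = torch.ones(3, requires_grad=True)        # one learnable weight per color
    opt = torch.optim.Adam([weights], lr=lr)
    power = initial_power.clone()
    for _ in range(iters):
        capture = to_slm_space(capture_camera_frame(power))   # debayered, SLM-space capture
        opt.zero_grad()
        proxy = weights.view(3, 1, 1) * reference
        # The loss value uses the physical capture; gradients flow through the
        # proxy (the weights applied to the baseline reference capture).
        loss = loss_fn(capture + proxy - proxy.detach(), target)
        loss.backward()
        opt.step()
        power = initial_power * weights.detach()              # drive the source with updated power
    return power
```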
If the process fails to converge, or the initial source power multiplied by the weighting function becomes greater than 100%, the exposure time is increased and the source power optimization is restarted. Although we use camera feedback in this process, we note that the information needed for source power optimization is contained in the color balance of the image itself. We believe this step could be replaced with a precomputed source power that’s dependent on the image content. ## S2. Active Camera-in-the-Loop (CiTL) Active CiTL (Peng et al., 2020) is a special case of camera-calibrated models in which an image is displayed on the SLM, and camera captures are used to improve that particular image using the difference between the experimental capture and the target image. While active CiTL is incompatible with real-time displays, it does provide a useful proof of achievable hologram quality. Consequently, we implemented active CiTL for our system as follows. First, an SLM pattern is optimized using our learned simulation model, and the computational graph is retained. This SLM pattern is then displayed, and the resulting hologram is captured. A homography is applied to the captured hologram for each color channel to map it from camera space to simulation space. Our perceptual loss function is applied to the remapped captured hologram and target image. Backpropagation is performed using a computational graph saved during the forward pass, but the experimentally captured hologram is used in the loss function (instead of the simulated model output). This is, to our knowledge, the first time that active CiTL has been combined with a deep component in the forward model. Figure S1 shows reduced noise and improved color fidelity for holograms generated with active CiTL. Since active CiTL uses the difference between the experimental capture and the target, the alignment between the two must be precise. We find that improved alignment using a piecewise affine homography, rather than a global affine homography, dramatically improves color fidelity. A comparison of this case is shown in Figure S2. Figure S1. Active camera-in-the-loop (CiTL) reduces noise and improves color fidelity. The first column of this image depicts experimentally captured color holograms. The second column shows images that were iteratively improved with a camera in the system using the active CiTL algorithm of Peng et al. (2020). Figure S2. Piecewise affine homography improves color fidelity for active CiTL. The first column shows the target image. The second column shows the experimentally captured hologram optimized using active CiTL with a global affine homography. The third column depicts active CiTL with a piecewise affine homography, which reduces color artifacts and noise due to better alignment during optimization. Cat source image by Chris Erwin (CC-BY-2.0). ## S3. Additional Experimental Results and Failure Cases Figure S3 depicts additional captured results, which are intended to showcase a wider variety of scenes and include failure cases of our method. Our method has the most difficulty when the target has large, flat (i.e., textureless) areas of saturated color. Textureless targets lack high-frequency information that can be leveraged by our loss function, leading to substantial artifacts such as color non-uniformity and ringing. These artifacts are particularly apparent in the image of colored bars in Fig. S3.
Highly saturated images or “unnatural” images (like the colored bars) often fail due to disparate color channels, resulting in a single SLM pattern having to produce three holograms at the same plane with substantially different structures. In contrast, natural images typically have similarly structured color channels. Figure S3. Additional simultaneous color holograms captured in experiment. The first column depicts holograms captured in experiment. The second column shows the simulation output. The third column depicts the target image. Although our system performs well on most natural scenes, unnatural images such as the bars in the center row are more challenging for our algorithm. ## S4. Additional Perceptual Loss Function Details The filter sizes for our perceptual loss function were chosen heuristically, such that no visible change was noticeable between the target image and the target image with the perceptual filter applied. These filter sizes were kept constant regardless of the scene being optimized. These filters can be viewed in Fig. S4. To test the effectiveness of our perceptual loss function, we applied it to a personally captured dataset of 294 images. For each target image, an SLM pattern was optimized using both the traditional RGB loss function and our perceptual loss function. The resulting hologram was then simulated, and the perceptual filter was applied. The PSNR, SSIM, and NMSE were calculated for the filtered simulated holograms and the perceptually filtered target image. The average metrics over the entire dataset are provided in Table S1.

 | PSNR | SSIM | NMSE
---|---|---|---
RGB Loss Function | 20.11 | 0.603 | 0.010
Perceptual Loss Function | 26.58 | 0.869 | 0.003

Table S1. A comparison of the average PSNR, SSIM, and NMSE for holograms optimized with the traditional RGB loss function and perceptual loss function. The metrics were calculated between the perceptually filtered simulated holograms and the perceptually filtered target. The data set used was a personally captured set of 294 images of natural scenes. Figure S4. Perceptual loss function filters in Fourier opponent color space. The white areas of the pictured filters represent the pass band of the filter. The luminance channel has a filter width of 75% of Fourier space. Both chrominance channels (Red-Green, Blue-Yellow) have filter widths of 45% of Fourier space. ## S5. The Effect of Bit Depth on Hologram Quality The effect of quantization on hologram quality is an important consideration when choosing an extended phase SLM. We define the effective bit depth as the number of bits contained in a $2\pi$ interval of the extended range. For example, the effective bit depth of an 8-bit SLM with a phase range of $8\pi$ is 6 bits, as each $2\pi$ interval contains 64 discrete samples (i.e., 6 bits). To determine the minimum bit depth required for adequate image quality, we simulated holograms using an SLM with a $2\pi$ phase range and bit depths from 2 to 8 bits. Simulations are done by optimizing the hologram with gradient descent, then quantizing to the target bit depth. A significant drop-off in both PSNR and SSIM was observed between 5 and 6 bits, as depicted in Fig. S5. This suggests that the minimum effective bit depth required for an extended phase SLM is 6 bits. Since most commercially available SLMs are 8 bits, this suggests that the maximum phase range in any channel should be $8\pi$, which aligns well with the SLM used in our experiments (maximum phase range of $7.4\pi$ in the blue channel).
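The effective-bit-depth bookkeeping in this section can be illustrated with a tiny sketch (illustrative only; our simulations optimize the hologram first and then quantize, as described above):

```python
import numpy as np

def quantize(s01, bits):
    """Quantize a normalized SLM pattern in [0, 1] to 2**bits discrete levels."""
    levels = 2 ** bits
    return np.round(s01 * (levels - 1)) / (levels - 1)

def effective_bits(total_bits, phase_range_pi):
    """Bits available per 2*pi interval of an extended-range SLM."""
    return total_bits - np.log2(phase_range_pi / 2.0)

print(effective_bits(8, 8.0))   # 8-bit SLM over 8*pi -> 6.0 effective bits
print(effective_bits(8, 7.4))   # blue channel of our SLM -> about 6.1 effective bits
```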
Figure S5. An analysis of SLM bit depth on hologram quality in simulation. We simulate holograms using SLMs of 2 to 8 bits. The target image is pictured in the top left of the figure. One should note the sharp drop-off in both PSNR and SSIM between 5 and 6 bits. ## S6. Bit and Depth Division Implementation Details and Analysis In this section we provide our implementation details of bit and depth division holography. Additionally, we analyze the methods for SLMs of various phase ranges. We implement bit division largely as laid out by Jesacher et al. (2014). First, we calculate the SLM patterns for the three color channels using a modified Gerchberg-Saxton approach, assuming a $2\pi$ phase range in each color channel. Instead of using the Fourier transform for propagation as in Jesacher et al. (2014), we use ASM to match our other results. This is run until convergence, and 3 unique SLM patterns are produced. These SLM patterns are then combined via an optimization problem as described by Jesacher et al. (2014). We then use the combined SLM pattern to simulate a color hologram at the sensor plane. We choose to implement the depth division method using gradient-descent-based optimization rather than the modified Gerchberg-Saxton (GS) algorithm for multi-plane holograms originally proposed by Makowski et al. (2008, 2010) for depth division holography. Since we use gradient descent in our approach, we determined this was a fairer comparison. In our implementation, the SLM pattern is first converted to a complex field. The complex field is then propagated to $z=$68\text{\,}\mathrm{mm}$,$80\text{\,}\mathrm{mm}$,$100\text{\,}\mathrm{mm}$$ using the ASM kernel for the red color channel. These correspond to the replica planes. The intensities of the fields are then calculated at each target plane and compared to the blue, green, and red channels, respectively, using an L2 loss function. Backpropagation is then used to calculate the gradients of the loss function with respect to the SLM voltage values and to update these voltages (a rough sketch of this loop appears at the end of this section). We implement both the bit and depth division holography methods for 3 simulated SLMs. The first SLM has a uniform $2\pi$ phase range in each color channel. This phase range is optimal for depth division, but performs the worst of the simulated SLMs for bit division, demonstrating how bit division relies on an extended SLM phase range. The next simulated SLM is an arbitrary standard SLM, i.e., one without extended phase. We model this SLM to have a $2\pi$ phase range in red, $2.7\pi$ in green, and $3.4\pi$ in blue. The simulated holograms increase in quality from the $2\pi$ SLM for the bit division method, but decrease in quality for depth division. Finally, we simulate the Holoeye Pluto SLM used in our experimental setup. This SLM has a $2.4\pi$ phase range in red, $5.9\pi$ phase range in green, and $7.4\pi$ phase range in blue. The results for depth division continue to degrade with this SLM, since the depth division algorithm does not take into account the wavelength-dependent response of the SLM. The results improve for bit division with the additional extended phase. This suggests that phase diversity across channels provides the best performance for bit division holography, while phase uniformity across channels provides the best performance for depth division holography. The results of the outlined experiment can be found in Figs. S6 and S7. Figure S6. SLM phase range affects hologram quality for bit division holography.
Bit division takes advantage of the extended phase range of the SLM, so it does not perform well with an SLM with only a $2\pi$ phase range per channel (left column). With a “standard” SLM with realistic wavelength dependence to the phase, bit division performs better. It works best with the extended phase range of the simulated Holoeye Pluto that we use for our experiments. Figure S7. SLM phase range affects hologram quality for depth division holography. The depth division approach assumes no wavelength dependence of the SLM, which is simulated in the first column. With a standard SLM with $2\pi$ phase in red and realistic wavelength dependence (second column), the results are slightly degraded due to inaccurate modeling of the SLM response. Finally, with the extended phase range of the simulated Holoeye Pluto SLM, the results show significant color artifacts and noise.
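As referenced above, a rough sketch of the gradient-descent depth-division baseline of this section follows; `asm_propagate` is a stand-in for the ASM propagation we use, and the replica-plane distances follow the 68, 80, and 100 mm example above.

```python
import torch

def depth_division_baseline(target_rgb, asm_propagate, lam_red=638e-9,
                            steps=1000, lr=0.05):
    """Sketch of the depth-division comparison: one pattern, red illumination only,
    with each color channel of the target placed at its replica plane."""
    planes_m = (0.068, 0.080, 0.100)                 # blue, green, red planes [m]
    s = torch.rand(1080, 1920, requires_grad=True)
    opt = torch.optim.Adam([s], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        field = torch.exp(1j * 2 * torch.pi * s)     # assumes a 2*pi phase range in red
        loss = 0.0
        # Compare the propagated intensity to the blue, green, and red channels,
        # respectively (target_rgb is an RGB-ordered stack).
        for channel, z in zip((2, 1, 0), planes_m):
            intensity = asm_propagate(field, z, lam_red).abs() ** 2
            loss = loss + torch.mean((intensity - target_rgb[channel]) ** 2)
        loss.backward()
        opt.step()
    return s.detach()
```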
# Relationship Between the Prime-Counting Function and a Unique Prime Number Sequence Michael P. May 443 W 2825 S Perry, Utah 84302<EMAIL_ADDRESS>or<EMAIL_ADDRESS> ###### Abstract. In mathematics, the prime-counting function $\pi(x)$ is defined as the function yielding the number of primes less than or equal to a given number $x$. In this paper, we prove that the asymptotic limit of a summation operation performed on a unique subsequence of the prime numbers yields the prime-counting function $\pi(x)$ as $x$ approaches $\infty$. We also show that the prime number count $\pi(n)$ can be estimated with a notable degree of accuracy by performing the summation operation on the subsequence up to a limit $n$. MSC2020: 11A41, 11B05, 11K31. Key words and phrases: Prime-counting function, Prime numbers, Higher-order prime number sequences, Prime gaps. ## 1\. GENERATING $\mathnormal{\mathbb{P{{}^{\prime}}}}$ AND $\mathnormal{\mathbb{P{{}^{\prime\prime}}}}$ Consider the prime number subsequence of higher order [2] $\mathbb{P^{{}^{\prime}}}=\left\\{{p{{}^{\prime}}}\right\\}=\left\\{2,5,7,13,19,23,29,31,37,43,47,53,59,61,71,...\right\\}.$ It was discovered [3] that $\mathbb{P{{}^{\prime}}}$ can be generated via an alternating sum of the prime number subsequences of increasing order, i.e., $\mathnormal{\mathbb{P^{{}^{\prime}}}={\left\\{{(-1)^{n-1}}\left\\{{p^{(n)}}\right\\}\right\\}}_{n=1}^{\infty}}$ (1.1) where the right-hand side of Eq. 1.1 is an expression of the alternating sum $\left\\{{p^{(1)}}\right\\}-\left\\{{p^{(2)}}\right\\}+\left\\{{p^{(3)}}\right\\}-\left\\{{p^{(4)}}\right\\}+\left\\{{p^{(5)}}\right\\}-...\;.$ (1.2) The prime number subsequences of increasing order [4] in Expression 1.2 are defined as $\left\\{{p^{(1)}}\right\\}={\left\\{{p_{n}}\right\\}}_{n=1}^{\infty}=\left\\{2,3,5,7,11,13,17,19,23,29,31,37,41,43,...\right\\}$ $\left\\{{p^{(2)}}\right\\}={\left\\{{p_{p_{n}}}\right\\}}_{n=1}^{\infty}=\left\\{3,5,11,17,31,41,59,67,83,109,127,...\right\\}$ $\left\\{{p^{(3)}}\right\\}={\left\\{{p_{p_{p_{n}}}}\right\\}}_{n=1}^{\infty}=\left\\{5,11,31,59,127,179,277,331,...\right\\}$ $\left\\{{p^{(4)}}\right\\}={\left\\{{p_{p_{p_{p_{n}}}}}\right\\}}_{n=1}^{\infty}=\left\\{11,31,127,277,709,...\right\\}$ $\left\\{{p^{(5)}}\right\\}={\left\\{{p_{p_{p_{p_{p_{n}}}}}}\right\\}}_{n=1}^{\infty}=\left\\{31,127,709,...\right\\}$ and so on and so forth. It is noted for clarification that the operation performed on the right-hand side of Eq. 1.1 denotes an alternating sum of the entire sets of prime number subsequences of increasing order. The prime number subsequence of higher order $\mathbb{P^{{}^{\prime}}}$ can also be generated by performing an alternating sum of the individual elements across the sets.
To illustrate this, we arrange the subsequences in Expression 1.2 side-by-side and sum elements laterally across the rows to create the new $\mathbb{P^{{}^{\prime}}}$ subsequence term-by-term as follows: (row) | $+p^{(1)}$ | $-p^{(2)}$ | $+p^{(3)}$ | $-p^{(4)}$ | $+p^{(5)}$ | $-p^{(6)}$ | $...$ | $p{{}^{\prime}}$ ---|---|---|---|---|---|---|---|--- (1) | 2 | $\longrightarrow$ | $\longrightarrow$ | $\longrightarrow$ | $\longrightarrow$ | $\longrightarrow$ | $\longrightarrow$ | 2 (2) | 3 | 3 | $\longrightarrow$ | $\longrightarrow$ | $\longrightarrow$ | $\longrightarrow$ | $\longrightarrow$ | 0 (3) | 5 | 5 | 5 | $\longrightarrow$ | $\longrightarrow$ | $\longrightarrow$ | $\longrightarrow$ | 5 (4) | 7 | $\longrightarrow$ | $\longrightarrow$ | $\longrightarrow$ | $\longrightarrow$ | $\longrightarrow$ | $\longrightarrow$ | 7 (5) | 11 | 11 | 11 | 11 | $\longrightarrow$ | $\longrightarrow$ | $\longrightarrow$ | 0 (6) | 13 | $\longrightarrow$ | $\longrightarrow$ | $\longrightarrow$ | $\longrightarrow$ | $\longrightarrow$ | $\longrightarrow$ | 13 (7) | 17 | 17 | $\longrightarrow$ | $\longrightarrow$ | $\longrightarrow$ | $\longrightarrow$ | $\longrightarrow$ | 0 (8) | 19 | $\longrightarrow$ | $\longrightarrow$ | $\longrightarrow$ | $\longrightarrow$ | $\longrightarrow$ | $\longrightarrow$ | 19 (9) | 23 | $\longrightarrow$ | $\longrightarrow$ | $\longrightarrow$ | $\longrightarrow$ | $\longrightarrow$ | $\longrightarrow$ | 23 (10) | 29 | $\longrightarrow$ | $\longrightarrow$ | $\longrightarrow$ | $\longrightarrow$ | $\longrightarrow$ | $\longrightarrow$ | 29 (11) | 31 | 31 | 31 | 31 | 31 | $\longrightarrow$ | $\longrightarrow$ | 31 $\vdots$ | $\vdots$ | $\vdots$ | $\vdots$ | $\vdots$ | $\vdots$ | $\vdots$ | $\vdots$ | $\vdots$ Table 1. Alternating Sum of $p^{(n)}$ Thus, the infinite prime number subsequence $\mathbb{P^{{}^{\prime}}}$ of higher order [2] emerges in the rightmost column of Table 1: $\mathnormal{\mathbb{P^{{}^{\prime}}}=\left\\{{p{{}^{\prime}}}\right\\}}=\left\\{2,5,7,13,19,23,29,31,37,43,47,53,59,61,71,...\right\\}.$ The prime number subsequence of higher order $\mathbb{P^{{}^{\prime}}}$ can also be generated by the N-sieve [3]. We now demonstrate how that is accomplished. Starting with $n=1$, choose the prime number with subscript 1 (i.e., $p_{1}=2$) as the first term of the subsequence and eliminate that prime number from the natural number line. Then, proceed forward on $\mathbb{N}$ from $1$ to the next available natural number. Since $2$ was eliminated from the natural number line in the previous step, one moves forward to the next available natural number that has not been eliminated, which is $3$. $3$ then becomes the subscript for the next $\mathbb{P^{{}^{\prime}}}$ term which is $p_{3}=5$, and $5$ is then eliminated from the natural number line, and so on and so forth. Such a sieving operation has been carried out in the chart below for the natural numbers $1$ to $100$. 
1 \raisebox{2pt}{\normalsize2}⃝ 3 4 \raisebox{2pt}{\normalsize5}⃝ 6 \raisebox{2pt}{\normalsize7}⃝ 8 9 10 11 12 \raisebox{2pt}{\normalsize13}⃝ 14 15 16 17 18 \raisebox{2pt}{\normalsize19}⃝ 20 21 22 \raisebox{2pt}{\normalsize23}⃝ 24 25 26 27 28 \raisebox{2pt}{\normalsize29}⃝ 30 \raisebox{2pt}{\normalsize31}⃝ 32 33 34 35 36 \raisebox{2pt}{\normalsize37}⃝ 38 39 40 41 42 \raisebox{2pt}{\normalsize43}⃝ 44 45 46 \raisebox{2pt}{\normalsize47}⃝ 48 49 50 51 52 \raisebox{2pt}{\normalsize53}⃝ 54 55 56 57 58 \raisebox{2pt}{\normalsize59}⃝ 60 \raisebox{2pt}{\normalsize61}⃝ 62 63 64 65 66 67 68 69 70 \raisebox{2pt}{\normalsize71}⃝ 72 \raisebox{2pt}{\normalsize73}⃝ 74 75 76 77 78 \raisebox{2pt}{\normalsize79}⃝ 80 81 82 83 84 85 86 87 88 \raisebox{2pt}{\normalsize89}⃝ 90 91 92 93 94 95 96 \raisebox{2pt}{\normalsize97}⃝ 98 99 100 Thus, we may optionally designate $\mathbb{P{{}^{\prime}}}$, which has been created via the N-sieve operation above, by the following notation [3] to indicate that the natural numbers $\mathbb{N}$ have been sieved to produce this prime number subsequence: $\mathnormal{\left\lfloor{\raisebox{-0.3pt}{\\!{\dashuline{\begin{math}\,\mathbb{N}\end{math}}}}}\right\rfloor}=\mathnormal{\mathbb{P{{}^{\prime}}}}=\left\\{{2,5,7,13,19,23,29,31,37,43,47,53,59,61,71,...}\right\\}.$ Regardless of the method used to generate $\mathbb{P{{}^{\prime}}}$, when the prime numbers in this unique subsequence are applied as indexes to the set of all prime numbers $\mathbb{P}$, one obtains the next higher-order prime number subsequence $\mathnormal{\mathbb{P^{{}^{\prime\prime}}}=\left\\{{p{{}^{\prime\prime}}}\right\\}}=\left\\{3,11,17,41,67,83,109,127,157,191,211,241,...\right\\}.$ By definition [3], the sequence $\mathbb{P{{}^{\prime\prime}}}$ can be generated via the expression $\mathnormal{\mathbb{P^{{}^{\prime\prime}}}={\left\\{{(-1)^{n}}\left\\{{p^{(n)}}\right\\}\right\\}}_{n=2}^{\infty}}$ (1.3) where an expansion of the right-hand side of Eq. 1.3 is the alternating sum $\left\\{{p^{(2)}}\right\\}-\left\\{{p^{(3)}}\right\\}+\left\\{{p^{(4)}}\right\\}-\left\\{{p^{(5)}}\right\\}+\left\\{{p^{(6)}}\right\\}-...\;.$ (1.4) Further, it has been shown [3] that the subsequences $\mathbb{P^{{}^{\prime}}}$ and $\mathbb{P^{{}^{\prime\prime}}}$ when added together form the entire set of prime numbers $\mathbb{P}$: $\mathnormal{\mathbb{P{}}}=\mathnormal{\mathbb{P{{}^{\prime}}}+\mathbb{P{{}^{\prime\prime}}}}.$ (1.5) We sketch a short proof of Eq. 1.5 here: ###### Proof. It has been established [3] that $\mathnormal{\mathbb{P{{}^{\prime}}}}={\left\\{{(-1)^{n-1}}\left\\{{p^{(n)}}\right\\}\right\\}}_{n=1}^{\infty}=\left\\{{p^{(1)}}\right\\}-\left\\{{p^{(2)}}\right\\}+\left\\{{p^{(3)}}\right\\}-...$ and $\mathnormal{\mathbb{P{{}^{\prime\prime}}}}={\left\\{{(-1)^{n}}\left\\{{p^{(n)}}\right\\}\right\\}}_{n=2}^{\infty}=\left\\{{p^{(2)}}\right\\}-\left\\{{p^{(3)}}\right\\}+\left\\{{p^{(4)}}\right\\}-...\;.$ Then, $\displaystyle\mathnormal{\mathbb{P{{}^{\prime}}}+\mathbb{P{{}^{\prime\prime}}}}=$ $\displaystyle\left\\{{p^{(1)}}\right\\}-\left\\{{p^{(2)}}\right\\}+\left\\{{p^{(3)}}\right\\}-...$ $\displaystyle\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;+$ $\displaystyle\left\\{{p^{(2)}}\right\\}-\left\\{{p^{(3)}}\right\\}+\left\\{{p^{(4)}}\right\\}-...=\left\\{{p^{(1)}}\right\\}=\mathnormal{\mathbb{P{}}}.$ ∎ An interesting property was observed in the relationship between the set of all prime numbers $\mathbb{P{}}$ and the complement prime number sets $\mathbb{P{{}^{\prime}}}$ and $\mathbb{P{{}^{\prime\prime}}}$. 
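Before stating this property precisely, we note that the constructions above are easy to reproduce numerically. The following Python sketch (illustrative only, using sympy's `prime` for the $n$-th prime) generates the first terms of $\mathbb{P{{}^{\prime}}}$ by the N-sieve and then indexes into $\mathbb{P}$ to obtain $\mathbb{P{{}^{\prime\prime}}}$.

```python
from sympy import prime

def n_sieve_p_prime(count):
    """N-sieve sketch: each surviving natural number k indexes the k-th prime,
    which is appended to P' and struck from the number line."""
    struck, terms, k = set(), [], 1
    while len(terms) < count:
        if k not in struck:
            q = prime(k)          # the k-th prime (1-indexed)
            terms.append(q)
            struck.add(q)
        k += 1
    return terms

p_prime = n_sieve_p_prime(15)
p_double_prime = [prime(k) for k in p_prime]   # P'' = primes indexed by P'
print(p_prime)          # [2, 5, 7, 13, 19, 23, 29, 31, 37, 43, 47, 53, 59, 61, 71]
print(p_double_prime)   # [3, 11, 17, 41, 67, 83, 109, 127, ...]
```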
Since $\mathbb{P{{}^{\prime\prime}}}=\mathbb{P}_{\mathbb{P{{}^{\prime}}}}$, Eq. 1.5 can be restated as $\displaystyle\mathnormal{\mathbb{P{{}^{\prime\prime}}}}$ $\displaystyle\mathnormal{=\mathbb{P{}}-\left\\{2,5,7,13,19,23,29,...\right\\}}$ $\displaystyle\mathnormal{=\left\\{{p_{{}_{{}_{\mathnormal{{2}}}}},p_{{}_{{}_{{\mathnormal{5}}}}},p_{{}_{{}_{{\mathnormal{7}}}}},p_{{}_{{}_{{\mathnormal{13}}}}},p_{{}_{{}_{{\mathnormal{19}}}}},p_{{}_{{}_{{\mathnormal{23}}}}},p_{{}_{{}_{{\mathnormal{29}}}}},...}\right\\}=\mathbb{P}_{\mathbb{P{{}^{\prime}}}}}$ where the prime numbers of the subsequence $\mathbb{P{{}^{\prime}}}$ form the indexes for the complement set of primes $\mathbb{P{{}^{\prime\prime}}}$ such that $\mathnormal{\mathnormal{\mathbb{P{{}^{\prime\prime}}}}=\mathnormal{\mathbb{P}_{{\mathbb{P{{}^{\prime}}}}}=\\{p_{k}\mid k\in\mathbb{P{{}^{\prime}}}\\}}}.$ We now calculate the average gap size for $\mathbb{P{{}^{\prime}}}$ at $\infty$. ## 2\. AVERAGE GAP OF $\mathnormal{\mathbb{P{{}^{\prime}}}}$ We will now derive the asymptotic density for the prime number subsequence $\mathbb{P{{}^{\prime}}}$ assuming that $1/\ln{n}$ is the asymptotic density of the set of all prime numbers $\mathbb{P{}}$ at $\infty$. We approach this task via alternately adding and subtracting the prime number densities (or “probabilities” as they are also called) of the prime number subsequences of increasing order to arrive at a value for the density of $\mathbb{P{{}^{\prime}}}$. We begin by recalling [3] that the prime number subsequence $\mathbb{P{{}^{\prime}}}$ is formed by the alternating series $\mathnormal{\mathbb{P{{}^{\prime}}}}={\left\\{{(-1)^{n-1}}\left\\{{p^{(n)}}\right\\}\right\\}}_{n=1}^{\infty}=\left\\{{p^{(1)}}\right\\}-\left\\{{p^{(2)}}\right\\}+\left\\{{p^{(3)}}\right\\}-...$ where $\left\\{{p^{(k)}}\right\\}=\left\\{{\mathnormal{p_{p_{._{._{._{p_{n}}}}}}}}\right\\}\;\text{($p$ ``$k$" times)}.$ Broughan and Barnett [4] have shown that for the general case of higher-order superprimes $\mathnormal{p_{p_{._{._{._{p_{n}}}}}}}$, the asymptotic density is approximately $\mathnormal{\dfrac{n}{p_{p_{._{._{._{p_{n}}}}}}}\sim\dfrac{n}{n\,(\ln{n})^{k}}\sim\dfrac{1}{(\ln{n})^{k}}}\;\;\;$ for large $n\in\mathbb{N{}}$. Now, assuming that $\ln{n}$ is the asymptotic limit of the gap size for the set of all prime numbers $\mathbb{P{}}$ at $\infty$, we derive an expression for the density $d$ for the prime number subsequence $\mathbb{P{{}^{\prime}}}$ at $\infty$. We begin with the geometric series $\mathnormal{S=1-x+x^{2}-x^{3}+x^{4}-x^{5}+...=\dfrac{1}{1+x}\;\;\;(\lvert x\rvert<1)}.$ Then let $\displaystyle\mathnormal{T}\;\;$ $\displaystyle\mathnormal{=-S+1}$ so that $\displaystyle\mathnormal{T}\;\;$ $\displaystyle\mathnormal{=x-x^{2}+x^{3}-x^{4}+x^{5}+...}$ $\displaystyle\mathnormal{=\dfrac{-1}{1+x}+1}$ $\displaystyle\mathnormal{=\dfrac{x}{1+x}}.$ Now substitute $\dfrac{1}{\ln{n}}$ for $x$ to get $\mathnormal{\dfrac{\dfrac{1}{\ln{n}}}{1+\dfrac{1}{\ln{n}}}=\dfrac{1}{\ln{n}+1}}$ so that $\displaystyle\mathnormal{T}=\mathnormal{\mathnormal{{d_{\mathbb{P{{}^{\prime}}}}}}}$ $\displaystyle\mathnormal{\approx\dfrac{1}{\ln{n}}-\dfrac{1}{(\ln{n})^{2}}+\dfrac{1}{(\ln{n})^{3}}-\dfrac{1}{(\ln{n})^{4}}+...}$ (2.1) $\displaystyle\mathnormal{=\dfrac{1}{\ln{n}+1}}.$ (2.2) Based on our assumption that $1/\ln{n}$ is the asymptotic limit of the density of the set of all prime numbers $\mathbb{P{}}$ as $n\rightarrow\infty$, Eq. 
2.2 provides us with the density (or probability) of the occurrence of the primes $\mathbb{P{{}^{\prime}}}$ at $\infty$. Thus, the average gap $g$ between prime numbers in the subsequence $\mathbb{P{{}^{\prime}}}$ on the natural number line as $n\rightarrow\infty$ is the inverse of the density $d$ of $\mathbb{P{{}^{\prime}}}$, so that $\displaystyle\mathnormal{\mathnormal{g_{\mathbb{P{{}^{\prime}}}}}}=\mathnormal{\frac{1}{d_{\mathbb{P{{}^{\prime}}}}}}$ $\displaystyle\approx\mathnormal{\dfrac{1}{\dfrac{1}{\ln{n}}-\dfrac{1}{(\ln{n})^{2}}+\dfrac{1}{(\ln{n})^{3}}-\dfrac{1}{(\ln{n})^{4}}+...}}$ $\displaystyle=\mathnormal{\ln{n}+1}.$ Since it has been shown via the N-sieving operation [3] that the prime number subsequence $\mathbb{P{{}^{\prime}}}$ has fewer primes than the set of all prime numbers $\mathbb{P{}}$ at $\infty$, it intuitively follows that the average gap size for $\mathbb{P{{}^{\prime}}}$ will always be larger than the gap size for $\mathbb{P{}}$ at $\infty$.
## 3\. ESTIMATING $\mathnormal{\pi{(x)}}$ VIA $\mathnormal{\mathbb{P{{}^{\prime}}}}$
When we remove the prime number subsequence $\mathbb{P{{}^{\prime\prime}}}$ from the set of all prime numbers $\mathbb{P{}}$, we create the prime number subsequence $\mathbb{P{{}^{\prime}}}$ [3]. Further, it was shown in the previous section that the limit of the average gap size between the prime numbers $\mathbb{P{{}^{\prime}}}$ is $\ln{n}+1$ as $n$ tends toward $\infty$. Thus, the increase made to the average gap between the prime numbers $\mathbb{P{{}^{\prime}}}$ when discounting the primes $\mathbb{P{{}^{\prime\prime}}}$ on the natural number line is unity at $\infty$. This is equivalently stated by Eq. 3.1, wherein the average gap size for the set of all prime numbers $\mathbb{P{}}$ is subtracted from the average gap size for the subsequence $\mathbb{P{{}^{\prime}}}$ to yield the average contribution that the removal of the primes $\mathbb{P{{}^{\prime\prime}}}$ makes to the average gap size of $\mathbb{P{{}^{\prime}}}$ at $\infty$. $\mathnormal{\mathnormal{{g_{\mathbb{P{{}^{\prime}}}}-g_{\mathbb{P{}}}}}=1}.$ (3.1) Fig. 1 provides a visual representation of the operation of Eq. 3.1 on the natural number line by showing that, in every interval where an element of $\mathbb{P{{}^{\prime\prime}}}$ exists, removing that element and replacing it with a null integer placeholder iteratively increases the average gap size between the remaining prime numbers $\mathbb{P{{}^{\prime}}}$ on the natural number line by unity as the operation is carried out to $\infty$.
Fig. 1 – Prime Gaps on the Natural Number Line
And since $\mathnormal{\mathnormal{\sum\limits_{k=1}^{x}\left(p_{{}_{p^{\prime}_{k}}}-p_{{}_{({p^{\prime}_{k}-1})}}\right)}}$ represents the prime gaps [5] $\mathnormal{\left\\{p_{p^{\prime}_{n}}-p_{({p^{\prime}_{n}-1})}\right\\}=\left\\{1,4,4,4,6,4,2,14,6,10,12,2,...\right\\}}$ which have been counted as placeholders among the set of all prime numbers $\mathbb{P{}}$ (thereby increasing the gap size between the remaining prime numbers $\mathbb{P{{}^{\prime}}}$ from $\ln{n}$ to $\ln{n}+1$ at $\infty$), we arrive at the asymptotic limit $\mathnormal{\mathnormal{{\pi{(x)}}\sim{\sum\limits_{k=1}^{x}\left(p_{{}_{p^{\prime}_{k}}}-p_{{}_{({p^{\prime}_{k}-1})}}\right)}}},$ the sum of which approximates the prime number count $\pi{(x)}$ for the set of all prime numbers $\mathbb{P{}}$ at $\infty$.
###### Theorem 3.1.
The prime number counting function $\pi(x)$ is asymptotically equivalent to an operation performed on a unique subsequence of the prime numbers in that $\mathnormal{\mathnormal{{\pi{(x)}}\sim{\sum\limits_{k=1}^{x}\left(p_{{}_{p^{\prime}_{k}}}-p_{{}_{({p^{\prime}_{k}-1})}}\right)}}}$ which states that the magnitude of the gaps contributed by an operation performed on the unique prime number subsequence $p_{p^{\prime}}$ as $x$ approaches $\infty$ is asymptotically equivalent to the total number of primes counted by $\pi(x)$ as $x$ approaches $\infty$. ###### Proof. We begin with the asymptotic limit of the prime counting function [1] $\mathnormal{\pi{(x)}\sim\frac{x}{\ln{x}}}$ (3.2) to show that as $x\rightarrow\infty$, $\mathnormal{\lim_{x\to\infty}{\frac{\mathnormal{\pi{(x)}}}{\sum\limits_{k=1}^{x}\left(p_{{}_{p^{\prime}_{k}}}-p_{{}_{({p^{\prime}_{k}-1})}}\right)}=1}}.$ (3.3) In order to evaluate Limit 3.3, we need to express both the numerator and denominator in terms of $x$ and $\ln{x}$. The asymptotic limit of the prime counting function in terms of $x$ and $\ln{x}$ is defined in 3.2, and a careful examination of Fig. 1 reveals that the denominator of 3.3 can be expressed as $\mathnormal{x\left[1-\frac{\ln{x}}{\ln{x}+1}\right]}$ (3.4) so that $\mathnormal{\lim_{x\to\infty}{\frac{\mathnormal{\pi{(x)}}}{\sum\limits_{k=1}^{x}\left(p_{{}_{p^{\prime}_{k}}}-p_{{}_{({p^{\prime}_{k}-1})}}\right)}=\mathnormal{\frac{\frac{x}{\ln{x}}}{x\left[1-\frac{\ln{x}}{\ln{x}+1}\right]}}}=\frac{\ln{x}+1}{\ln{x}}}.$ And clearly, $\mathnormal{\lim_{x\to\infty}{\frac{\ln{x}+1}{\ln{x}}=1}}.$ ∎ We also show that the asymptotic limit of the ratio of $\pi{(x)}$ to the complement of the sum in the denominator of Limit 3.3, or $\mathnormal{\mathnormal{{x}-{\sum\limits_{k=1}^{x}\left(p_{{}_{p^{\prime}_{k}}}-p_{{}_{({p^{\prime}_{k}-1})}}\right)}}},$ (3.5) converges to zero at $\infty$, implying that the complement Expression 3.5 is infinitely larger than the count of prime numbers for infinitely large x. ###### Theorem 3.2. The prime number counting function $\pi(x)$ is asymptotically equivalent to zero when evaluated against the complement expression $\mathnormal{\mathnormal{{x}-{\sum\limits_{k=1}^{x}\left(p_{{}_{p^{\prime}_{k}}}-p_{{}_{({p^{\prime}_{k}-1})}}\right)}}}$ as $x$ approaches $\infty$. ###### Proof. We again begin with the asymptotic limit of the prime counting function [1] in 3.2 to show that as $x\rightarrow\infty$, $\mathnormal{\lim_{x\to\infty}{\frac{\mathnormal{\pi{(x)}}}{x-\sum\limits_{k=1}^{x}\left(p_{{}_{p^{\prime}_{k}}}-p_{{}_{({p^{\prime}_{k}-1})}}\right)}=0}}.$ (3.6) To transform the denominator of the Limit 3.6 to an expression in terms of $x$ and $\ln{x}$, we recognize that when one subtracts 3.4 from $x$, we have $\mathnormal{x-x\left[1-\frac{\ln{x}}{\ln{x}+1}\right]=x\left[1-\frac{1}{\ln{x}+1}\right]}$ so that $\displaystyle\mathnormal{\lim_{x\to\infty}{\frac{\mathnormal{\pi{(x)}}}{x-\sum\limits_{k=1}^{x}\left(p_{{}_{p^{\prime}_{k}}}-p_{{}_{({p^{\prime}_{k}-1})}}\right)}}}$ $\displaystyle=\mathnormal{\frac{\frac{x}{\ln{x}}}{x\left[1-\frac{1}{\ln{x}+1}\right]}}$ $\displaystyle\mathnormal{=\frac{\ln{x}+1}{(\ln{x})^{2}}}.$ And clearly, $\mathnormal{\lim_{x\to\infty}{\frac{\ln{x}+1}{(\ln{x})^{2}}=0}}.$ ∎ ## 4\. 
APPROXIMATION OF $\pi(n)$ FOR $n<\infty$
It was discovered using Mathematica that the prime number count can be estimated with a notable degree of accuracy (within the bounds of a multiplicative constant) by performing the aforementioned operation on the prime number subsequence of higher order up to a finite integer $p_{p^{\prime}_{N}}$, i.e., $\mathnormal{\mathnormal{\pi{(p_{p^{\prime}_{N}})}\approx C_{3}*{\sum\limits_{n=1}^{N}\left(p_{{}_{p^{\prime}_{n}}}-p_{{}_{({p^{\prime}_{n}-1})}}\right)}}}.$ (4.1) The results of Eq. 4.1 tabulated below begin at $p_{p^{\prime}_{N}}\approx 100$ and incrementally go up to $p_{p^{\prime}_{N}}\approx 10E6$:

$p_{p^{\prime}_{N}}$ | $\pi(p_{p^{\prime}_{N}})$ | ${\sum\limits_{n=1}^{N}p_{p^{\prime}_{n}}-p_{({p^{\prime}_{n}-1})}}$ | $C_{3}$
---|---|---|---
1E02 | 25 | 23 | 1.08696
1E03 | 168 | 187 | 0.89840
1E04 | 1,229 | 1,319 | 0.93177
1E05 | 9,592 | 10,651 | 0.90057
1E06 | 78,498 | 86,249 | 0.91013
2E06 | 148,933 | 165,133 | 0.90190
3E06 | 216,816 | 239,893 | 0.90380
4E06 | 283,146 | 312,563 | 0.90588
5E06 | 348,513 | 384,277 | 0.90693
6E06 | 412,849 | 455,401 | 0.90656
7E06 | 476,648 | 525,917 | 0.90632
8E06 | 539,777 | 595,285 | 0.90675
9E06 | 602,489 | 665,345 | 0.90553
10E6 | 664,579 | 733,389 | 0.90618

An inspection of the data in the table above reveals that the constant $C_{3}$ appears to oscillate rather tightly around the value $\mathnormal{\frac{\pi{}\sqrt{3}}{6}}$, which happens to be the densest packing density possible for identically sized circles in a plane. This would imply (at least within the range of $p_{p^{\prime}_{N}}$ in the table) that the ratio of the prime counting function $\pi{(p_{p^{\prime}_{N}})}$ to the sum of the gaps counted by $\mathnormal{{\sum\limits_{n=1}^{N}\left(p_{{}_{p^{\prime}_{n}}}-p_{{}_{({p^{\prime}_{n}-1})}}\right)}}$ closely approximates the density of identical circles packed as tightly as possible in a hexagonal packing arrangement in a plane. More study is needed to determine the convergence or divergence of the constant $C_{3}$.
## 5\. CONCLUSION
In this paper, we derived an expression for the asymptotic limit of the prime-counting function $\mathnormal{\pi{(x)}}$ as a function of the prime number subsequence of higher order $\mathbb{P{{}^{\prime}}}$. We further showed that the expression derived is a good approximation (to within a constant $C_{3}$) of the prime counting function $\pi(n)$ for $n$ up to $10E6$.
## References
* [1] Tom M. Apostol, Introduction to Analytic Number Theory, Springer-Verlag, 1976, pp. 92-94.
* [2] N. J. A. Sloane, Online Encyclopedia of Integer Sequences, A333242 by Michael P. May, March 2020.
* [3] Michael P. May, Properties of Higher-Order Prime Number Sequences, Missouri Journal of Mathematical Sciences, Volume 32, No. 2, 2020, pp. 158-170.
* [4] Kevin A. Broughan and A. Ross Barnett, On the Subsequence of Primes Having Prime Subscripts, Journal of Integer Sequences, Vol. 12, 2009, Article 09.2.3.
* [5] N. J. A. Sloane, Online Encyclopedia of Integer Sequences, A348677 by Michael P. May, October 2021.
# Stability analysis of heterogeneous oligopoly games of increasing players with quadratic costs
Xiaoliang Li (corresponding author)<EMAIL_ADDRESS>School of Finance and Trade, Dongguan City College, Dongguan, P. R. China
###### Abstract
In this discussion draft, we explore heterogeneous oligopoly games of increasing players with quadratic costs, where the market is assumed to have isoelastic demand. For each of the models considered in this draft, we analytically investigate the necessary and sufficient condition for the local stability of its positive equilibrium. Furthermore, we rigorously prove that the stability regions are enlarged as the number of involved firms increases.
## 1 General Assumptions
Motivated by [5], we consider a market served by firms with heterogeneous decision mechanisms producing homogeneous products. We use $q_{i}(t)$ to denote the output of firm $i$ at period $t$. The cost function of firm $i$ is supposed to be quadratic, i.e., $C_{i}(q_{i})=cq_{i}^{2}$. Note that $c$ is a positive parameter and identical for all firms. Furthermore, assume that the demand function of the market is isoelastic, which follows from the hypothesis that the consumers have a Cobb-Douglas utility function. Hence, the price of the product should be $p(Q)=\frac{1}{Q}=\frac{1}{\sum_{i}q_{i}},$ where $Q=\sum_{i}q_{i}$ is the total supply.
## 2 Game of Two Firms
First, let us consider a duopoly game, where the first firm adopts a so-called _gradient adjustment mechanism_ , while the second firm adopts the _best response mechanism_. Both of these two mechanisms are boundedly rational. To be exact, the first firm increases/decreases its output according to the information given by the marginal profit of the last period, i.e., at period $t+1$, $q_{1}(t+1)=q_{1}(t)+kq_{1}(t)\frac{\partial\Pi_{1}(t)}{\partial q_{1}(t)},$ (1) where $\Pi_{1}(t)=\frac{q_{1}(t)}{q_{1}(t)+q_{2}(t)}-cq_{1}^{2}(t)$ is the profit of firm 1 at period $t$, and $k>0$ is a parameter controlling the adjustment speed. It is worth noting that the adjustment speed depends upon not only the parameter $k$ but also the size of the firm $q_{1}(t)$. The second firm knows exactly the form of the price function and thus can estimate its profit at period $t+1$ to be $\Pi_{2}^{e}(t+1)=\frac{q_{2}(t+1)}{q_{1}^{e}(t+1)+q_{2}(t+1)}-cq_{2}^{2}(t+1),$ (2) where $q_{1}^{e}(t+1)$ is its expectation of the output of firm 1 at period $t+1$. It is realistic that firm 2 has no idea about its rival’s production plan for the present period. We suppose that firm 2 naively expects its competitor to produce the same quantity as in the last period, i.e., $q_{1}^{e}(t+1)=q_{1}(t)$. Hence, $\Pi_{2}^{e}(t+1)=\frac{q_{2}(t+1)}{q_{1}(t)+q_{2}(t+1)}-cq_{2}^{2}(t+1).$ (3) In order to maximize the expected profit, the second firm tries to solve the first-order condition $\partial\Pi_{2}^{e}(t+1)/\partial q_{2}(t+1)=0$, i.e., $q_{1}(t)-2\,cq_{2}(t+1)(q_{1}(t)+q_{2}(t+1))^{2}=0.$ (4) It should be noted that (4) is a cubic polynomial equation. Although a general cubic polynomial has at most three real roots, it is easy to see that (4) has exactly one real solution for $q_{2}(t+1)$, though its closed-form expression is particularly complex. However, we suppose that firm 2, by observing the rival’s output of the last period, has the computational ability to find the best response, which is denoted by $R_{2}(q_{1}(t))$. Therefore, the model can be described by the following discrete dynamic system.
$T_{GB}(q_{1},q_{2}):\left\\{\begin{split}&q_{1}(t+1)=q_{1}(t)+kq_{1}(t)\left[\frac{q_{2}(t)}{(q_{1}(t)+q_{2}(t))^{2}}-2\,cq_{1}(t)\right],\\\ &q_{2}(t+1)=R_{2}(q_{1}(t)).\end{split}\right.$ (5) By setting $q_{1}(t+1)=q_{1}(t)=q_{1}$ and $q_{2}(t+1)=q_{2}(t)=q_{2}$, the equilibrium can be identified by $\left\\{\begin{split}&q_{1}=q_{1}+kq_{1}\left(\frac{q_{2}}{(q_{1}+q_{2})^{2}}-2\,cq_{1}\right),\\\ &q_{2}=R_{2}(q_{1}),\end{split}\right.$ (6) where $q_{2}=R_{2}(q_{1})$ can be reformulated to $q_{1}-2\,cq_{2}(q_{1}+q_{2})^{2}=0$ according to (4). Thus, we have $\left\\{\begin{split}&kq_{1}\left(\frac{q_{2}}{(q_{1}+q_{2})^{2}}-2\,cq_{1}\right)=0,\\\ &q_{1}-2\,cq_{2}(q_{1}+q_{2})^{2}=0,\end{split}\right.$ (7) which could be solved by a unique solution $E_{GB}^{1}=\left(\frac{1}{\sqrt{8c}},\frac{1}{\sqrt{8c}}\right).$ It should be noted that $(0,0)$ is not an equilibrium for it is not defined for the iteration map (5). In order to investigate the local stability of an equilibrium $(q_{1}^{*},q_{2}^{*})$, we consider the Jacobian matrix of the form $J_{GB}(q_{1}^{*},q_{2}^{*})=\left[\begin{matrix}\frac{\partial q_{1}(t+1)}{\partial q_{1}(t)}\big{|}_{(q_{1}^{*},q_{2}^{*})}&\frac{\partial q_{1}(t+1)}{\partial q_{2}(t)}\big{|}_{(q_{1}^{*},q_{2}^{*})}\\\ \frac{\partial q_{2}(t+1)}{\partial q_{1}(t)}\big{|}_{(q_{1}^{*},q_{2}^{*})}&\frac{\partial q_{2}(t+1)}{\partial q_{2}(t)}\big{|}_{(q_{1}^{*},q_{2}^{*})}\\\ \end{matrix}\right].$ (8) It is easy to obtain that $\begin{split}&\frac{\partial q_{1}(t+1)}{\partial q_{1}(t)}\big{|}_{(q_{1}^{*},q_{2}^{*})}=1+kq_{2}^{*}\frac{q_{2}^{*}-q_{1}^{*}}{(q_{1}^{*}+q_{2}^{*})^{3}}-4\,ckq_{1}^{*},\\\ &\frac{\partial q_{1}(t+1)}{\partial q_{2}(t)}\Big{|}_{(q_{1}^{*},q_{2}^{*})}=kq_{1}^{*}\frac{q_{1}^{*}-q_{2}^{*}}{(q_{1}^{*}+q_{2}^{*})^{3}}.\end{split}$ (9) Furthermore, the derivative of $q_{2}(t+1)$ with respect to $q_{2}(t)$ is $0$ as $R_{2}$ does not involve $q_{2}$. However, the derivative of $q_{2}(t+1)$ with respect to $q_{1}(t)$ may not be directly obtained. By virtue of the method called implicit differentiation, it can be acquired that $\frac{\partial q_{2}(t+1)}{\partial q_{1}(t)}\Big{|}_{(q_{1}^{*},q_{2}^{*})}=-\frac{4\,cq_{1}^{*}q_{2}^{*}+4\,cq_{2}^{*2}-1}{2\,c(q_{1}^{*2}+4\,q_{1}^{*}q_{2}^{*}+3\,q_{2}^{*2})}.$ (10) At $E_{GB}^{1}=(1/\sqrt{8c},1/\sqrt{8c})$, we have that $J_{GB}(E_{GB}^{1})=\left[\begin{matrix}1-k\sqrt{2\,c}&0\\\ 0&0\\\ \end{matrix}\right].$ (11) Obviously, its eigenvalues are $\lambda_{1}=1-k\sqrt{2\,c}$ and $\lambda_{2}=0$. It is evident that $E_{GB}^{1}$ is locally stable if and only if $k\sqrt{c}<\sqrt{2}$. We summarize the above results in the following proposition. ###### Proposition 1. The $T_{GB}$ model described by (5) has a unique equilibrium $\left(\frac{1}{\sqrt{8c}},\frac{1}{\sqrt{8c}}\right),$ which is locally stable provided that $k\sqrt{c}<\sqrt{2}.$ (12) ## 3 Game of Three Firms In this section, we introduce a new boundedly rational player and add it to the model of the previous section. This player is assumed to take an _adaptive mechanism_ , which means that at each period $t+1$ it decides the quantity of production $q_{3}(t+1)$ according to the previous output $q_{3}(t)$ as well as its expectations of the other two competitors. It is also supposed that this player naively expects that at period $t+1$ firm 1 and 2 would produce the same quantity as at period $t$. Therefore, the third firm could calculate the best response $R_{3}(q_{1}(t),q_{2}(t))$ to maximize its expected profit. 
Similar as (4), $R_{3}(q_{1}(t),q_{2}(t))$ is the solution for $q_{3}^{\prime}(t+1)$ of the following equation. $q_{1}(t)+q_{2}(t)-2\,cq_{3}^{\prime}(t+1)(q_{1}(t)+q_{2}(t)+q_{3}^{\prime}(t+1))^{2}=0.$ (13) The adaptive decision mechanism for firm 3 is that it choose the output $q_{3}(t+1)$ proportionally to be $q_{3}(t+1)=(1-l)q_{3}(t)+lR_{3}(q_{1}(t),q_{2}(t)),$ where $l\in(0,1]$ is a parameter controlling the proportion. Hence, the triopoly can be described by $T_{GBA}(q_{1},q_{2},q_{3}):\left\\{\begin{split}&q_{1}(t+1)=q_{1}(t)+kq_{1}(t)\left[\frac{q_{2}(t)+q_{3}(t)}{(q_{1}(t)+q_{2}(t)+q_{3}(t))^{2}}-2\,cq_{1}(t)\right],\\\ &q_{2}(t+1)=R_{2}(q_{1}(t),q_{3}(t)),\\\ &q_{3}(t+1)=(1-l)q_{3}+lR_{3}(q_{1}(t),q_{2}(t)).\end{split}\right.$ (14) Similar to Section 2, the equilibria satisfy that $\left\\{\begin{split}&kq_{1}\left(\frac{q_{2}+q_{3}}{(q_{1}+q_{2}+q_{3})^{2}}-2\,cq_{1}\right)=0,\\\ &q_{1}+q_{3}-2\,cq_{2}(q_{1}+q_{2}+q_{3})^{2}=0,\\\ &q_{1}+q_{2}-2\,cq_{3}(q_{1}+q_{2}+q_{3})^{2}=0,\end{split}\right.$ (15) which could be solved by $\begin{split}E_{GBA}^{1}=&~{}\left(0,\frac{1}{\sqrt{8c}},\frac{1}{\sqrt{8c}}\right),\\\ E_{GBA}^{2}=&~{}\left(\frac{1}{\sqrt{9c}},\frac{1}{\sqrt{9c}},\frac{1}{\sqrt{9c}}\right).\end{split}$ (16) For an equilibrium $(q_{1}^{*},q_{2}^{*},q_{3}^{*})$, the Jacobian matrix of $T_{GBA}$ takes the form $J_{GBA}(q_{1}^{*},q_{2}^{*},q_{3}^{*})=\left[\begin{matrix}\frac{\partial q_{1}(t+1)}{\partial q_{1}(t)}\big{|}_{(q_{1}^{*},q_{2}^{*},q_{3}^{*})}&\frac{\partial q_{1}(t+1)}{\partial q_{2}(t)}\big{|}_{(q_{1}^{*},q_{2}^{*},q_{3}^{*})}&\frac{\partial q_{1}(t+1)}{\partial q_{3}(t)}\big{|}_{(q_{1}^{*},q_{2}^{*},q_{3}^{*})}\\\ \frac{\partial q_{2}(t+1)}{\partial q_{1}(t)}\big{|}_{(q_{1}^{*},q_{2}^{*},q_{3}^{*})}&\frac{\partial q_{2}(t+1)}{\partial q_{2}(t)}\big{|}_{(q_{1}^{*},q_{2}^{*},q_{3}^{*})}&\frac{\partial q_{2}(t+1)}{\partial q_{3}(t)}\big{|}_{(q_{1}^{*},q_{2}^{*},q_{3}^{*})}\\\ \frac{\partial q_{3}(t+1)}{\partial q_{1}(t)}\big{|}_{(q_{1}^{*},q_{2}^{*},q_{3}^{*})}&\frac{\partial q_{3}(t+1)}{\partial q_{2}(t)}\big{|}_{(q_{1}^{*},q_{2}^{*},q_{3}^{*})}&\frac{\partial q_{3}(t+1)}{\partial q_{3}(t)}\big{|}_{(q_{1}^{*},q_{2}^{*},q_{3}^{*})}\\\ \end{matrix}\right].$ (17) The first and the second rows of the matrix might be similarly computed as Section 2. For the third row, we have $\begin{split}\frac{\partial q_{3}(t+1)}{\partial q_{1}(t)}=&~{}l\frac{\partial R_{3}(q_{1}(t),q_{2}(t))}{\partial q_{1}(t)},\\\ \frac{\partial q_{3}(t+1)}{\partial q_{2}(t)}=&~{}l\frac{\partial R_{3}(q_{1}(t),q_{2}(t))}{\partial q_{2}(t)},\\\ \frac{\partial q_{3}(t+1)}{\partial q_{3}(t)}=&~{}1-l,\\\ \end{split}$ (18) where ${\partial R_{3}(q_{1}(t),q_{2}(t))}/{\partial q_{1}(t)}$ and ${\partial R_{3}(q_{1}(t),q_{2}(t))}/{\partial q_{2}(t)}$ can be acquired using the method of implicit differentiation. From an economic point of view, we only consider the positive equilibrium $E^{2}_{GBA}$, where the Jacobian matrix would be $J_{GBA}(E^{2}_{GBA})=\left[\begin{matrix}1-{10\,k\sqrt{c}}/{9}&-k\sqrt{c}/9&-k\sqrt{c}/9\\\ -{1}/{10}&0&-{1}/{10}\\\ -{l}/{10}&-{l}/{10}&1-l\end{matrix}\right].$ (19) Let $A$ be the characteristic polynomial of a Jacobian matrix $J$. The eigenvalues of $J$ are simply the roots of the polynomial $A$ for $\lambda$. So the problem of stability analysis can be reduced to that of determining whether all the roots of $A$ lie in the open unit disk $|\lambda|<1$. 
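Before turning to algebraic criteria, this root condition can be checked numerically for any concrete parameter choice. The following sketch (an illustration added here, not part of the original analysis; the parameter values are arbitrary) builds the Jacobian (19) at the positive equilibrium, tests whether its spectral radius is below one, and compares the outcome with the closed-form threshold stated in Proposition 3 below.

```python
# Numerical sanity check of the local stability of E^2_GBA for given (k, l, c).
import numpy as np

def jacobian_GBA(k, l, c):
    s = k * np.sqrt(c)
    return np.array([
        [1 - 10 * s / 9, -s / 9,  -s / 9],
        [-1 / 10,         0.0,    -1 / 10],
        [-l / 10,        -l / 10,  1 - l],
    ])

def spectral_radius(J):
    return max(abs(np.linalg.eigvals(J)))

k, l, c = 1.8, 0.5, 0.25                               # arbitrary illustrative parameters
rho = spectral_radius(jacobian_GBA(k, l, c))
bound = 9 * (101 * l - 200) / (2 * (252 * l - 505))    # threshold of Proposition 3 below
print(rho < 1, k * np.sqrt(c) < bound)                 # both flags print True for this choice
```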
To the best of our knowledge, in addition to the Routh-Hurwitz criterion [4] generalized from the corresponding criterion for continuous systems, there are two other criteria, the Schur-Cohn criterion [2, pp. 246–248] and the Jury criterion [3], available for discrete dynamical systems. In what follows, we provide a short review of the Schur-Cohn criterion. ###### Proposition 2 (Schur-Cohn Criterion). For a $n$-dimensional discrete dynamic system, assume that the characteristic polynomial of its Jacobian matrix is $A=\lambda^{n}+a_{n-1}\lambda^{n-1}+\cdots+a_{0}.$ Consider the sequence of determinants $D^{\pm}_{1}$, $D^{\pm}_{2}$, $\ldots$, $D^{\pm}_{n}$, where $\begin{split}D^{\pm}_{i}=&\left|\left(\begin{array}[]{ccccc}1&a_{n-1}&a_{n-2}&\cdots&a_{n-i+1}\\\ 0&1&a_{n-1}&\cdots&a_{n-i+2}\\\ 0&0&1&\cdots&a_{n-i+3}\\\ \vdots&\vdots&\vdots&\ddots&\vdots\\\ 0&0&0&\cdots&1\\\ \end{array}\right)\pm\left(\begin{array}[]{ccccc}a_{i-1}&a_{i-2}&\cdots&a_{1}&a_{0}\\\ a_{i-2}&a_{i-3}&\cdots&a_{0}&0\\\ \vdots&\vdots&\ddots&\vdots&\vdots\\\ a_{1}&a_{0}&\cdots&0&0\\\ a_{0}&0&\cdots&0&0\\\ \end{array}\right)\right|.\end{split}$ The characteristic polynomial $A$ has all its roots inside the unit open disk if and only if 1. 1. $A(1)>0$ and $(-1)^{n}A(-1)>0$, 2. 2. $D^{\pm}_{1}>0,D^{\pm}_{3}>0,\ldots,D^{\pm}_{n-3}>0,D^{\pm}_{n-1}>0$ (when $n$ is even), or $D^{\pm}_{2}>0,D^{\pm}_{4}>0,\ldots,D^{\pm}_{n-3}>0,D^{\pm}_{n-1}>0$ (when $n$ is odd). ###### Corollary 1. Consider a $3$-dimensional discrete dynamic system with the characteristic polynomial of its Jacobian matrix of the form $A=\lambda^{3}+a_{2}\lambda^{2}+a_{1}\lambda+a_{0}.$ An equilibrium $E$ is locally stable if and only if the following inequalities are satisfied at $E$. $\left\\{\begin{split}&1+a_{2}+a_{1}+a_{0}>0,\\\ &1-a_{2}+a_{1}-a_{0}>0,\\\ &-a_{0}^{2}-a_{0}a_{2}+a_{1}+1>0,\\\ &-a_{0}^{2}+a_{0}a_{2}-a_{1}+1>0.\end{split}\right.$ (20) For the $3$-dimensional discrete dynamic system (14), it is easy to verify that at the unique positive equilibrium $E_{GBA}^{2}$ the local stability condition (20) could be reformulated to $CD_{GBA}^{1}>0,~{}CD_{GBA}^{2}>0,~{}CD_{GBA}^{3}<0,~{}CD_{GBA}^{4}<0,$ (21) where $\begin{split}CD_{GBA}^{1}=&~{}kl\sqrt{c},\\\ CD_{GBA}^{2}=&~{}504\,kl\sqrt{c}-1010\,k\sqrt{c}-909\,l+1800,\\\ CD_{GBA}^{3}=&~{}324\,ck^{2}l^{2}-18360\,ck^{2}l+10100\,ck^{2}-16524\,kl^{2}\sqrt{c}-840420\,kl\sqrt{c}\\\ &+8181\,l^{2}+891000\,k\sqrt{c}+801900\,l-1620000,\\\ CD_{GBA}^{4}=&~{}36\,ck^{2}l^{2}+1960\,ck^{2}l+1764\,kl^{2}\sqrt{c}-1100\,ck^{2}+93420\,kl\sqrt{c}\\\ &-99000\,k\sqrt{c}-891\,l^{2}-89100\,l.\end{split}$ (22) It is obvious that $CD_{GBA}^{1}>0$ could be ignored as it is always true for all parameter values such that $k>0$, $c>0$ and $1\geq l>0$. A further question is whether the other three inequalities could be simplified. To answer this question, we might investigate the inclusion relations of these inequalities. It worth noticing that the surfaces $CD_{GBA}^{2}=0$, $CD_{GBA}^{3}=0$ and $CD_{GBA}^{4}=0$ divide the parameter space $\\{(k,l,c)\,|\,k>0,1\geq l>0,c>0\\}$ of our concern into a number of connected regions. Moreover, in a given region, the signs of $CD_{GBA}^{i}$ ($i=1,2,3,4$) would be invariant. This means that in each of these regions we could identify whether the inequalities in (21) are satisfied by checking them at a single sample point. For simple cases, the selection of sample points might be done by hand. 
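As an illustration of such a by-hand check (this snippet is an added sketch, not the paper's actual tooling), the closed-form expressions in (22) can be evaluated directly at candidate parameter points and their signs compared against (21):

```python
# Evaluate the stability conditions (21) via the expressions (22) at sample points.
import math

def cd_gba(k, l, c):
    s = math.sqrt(c)
    cd1 = k * l * s
    cd2 = 504*k*l*s - 1010*k*s - 909*l + 1800
    cd3 = (324*c*k**2*l**2 - 18360*c*k**2*l + 10100*c*k**2
           - 16524*k*l**2*s - 840420*k*l*s + 8181*l**2
           + 891000*k*s + 801900*l - 1620000)
    cd4 = (36*c*k**2*l**2 + 1960*c*k**2*l + 1764*k*l**2*s
           - 1100*c*k**2 + 93420*k*l*s - 99000*k*s
           - 891*l**2 - 89100*l)
    return cd1, cd2, cd3, cd4

def stable(k, l, c):
    cd1, cd2, cd3, cd4 = cd_gba(k, l, c)
    return cd1 > 0 and cd2 > 0 and cd3 < 0 and cd4 < 0

print(stable(455/256, 71/256, 1/4))   # sample point from Table 1 below -> True
print(stable(31/8,    71/256, 1/4))   # CD^2_GBA > 0 fails here         -> False
```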
Generally, however, the selection could be automated by using, e.g., the partial cylindrical algebraic decomposition (PCAD) method [1]. Table 1: Stability Condition of $T_{GBA}$ at Selected Sample Points sample point of $(k,l,c)$ | $CD_{GBA}^{1}>0$ | $CD_{GBA}^{2}>0$ | $CD_{GBA}^{3}<0$ | $CD_{GBA}^{4}<0$ ---|---|---|---|--- (455/256, 71/256, 1/4) | true | true | true | true (31/8, 71/256, 1/4) | true | false | true | true (601/128, 71/256, 1/4) | true | false | false | true (453/256, 183/256, 1/4) | true | true | true | true (1439/256, 183/256, 1/4) | true | false | true | true (1577/16, 183/256, 1/4) | true | false | false | true (49855/256, 183/256, 1/4) | true | false | true | true (25673/128, 183/256, 1/4) | true | false | true | false (451/256, 15/16, 1/4) | true | true | true | true (5237/256, 15/16, 1/4) | true | false | true | true (2425/64, 15/16, 1/4) | true | false | true | false In Table 1, we list all the selected sample points such that there is at least one point in each region. The four inequalities in (21) are verified at these sample points one by one, which are also given in Table 1. It is observed that at the sample points where $CD_{GBA}^{2}>0$ is true, the other three inequalities would also be true. Hence, if $CD_{GBA}^{2}>0$ is satisfied, then all the four inequalities in (21) would be satisfied definitely. In other words, only $CD_{GBA}^{2}>0$ is needed herein for the detection of the local stability. Furthermore, $CD_{GBA}^{2}>0$ is equivalent to $k\sqrt{c}<\frac{9(101\,l-200)}{2(252\,l-505)}.$ Therefore, we summarize the obtained results in the following proposition. ###### Proposition 3. The $T_{GBA}$ model described by (14) has a unique positive equilibrium $\left(\frac{1}{\sqrt{9c}},\frac{1}{\sqrt{9c}},\frac{1}{\sqrt{9c}}\right),$ which is locally stable provided that $k\sqrt{c}<\frac{9(101\,l-200)}{2(252\,l-505)}.$ (23) Furthermore, we have the following result. ###### Proposition 4. The stability region of the $T_{GBA}$ model is strictly larger than that of $T_{GB}$. ###### Proof. It suffices to prove that $\frac{9(101\,l-200)}{2(252\,l-505)}>\sqrt{2},$ which is equivalent to $9(101\,l-200)<2\sqrt{2}(252\,l-505)$ since $252\,l-505<0$. It is easy to see that the above inequality can be reformulated to $(909-504\sqrt{2})l<(1800-1010\sqrt{2}),$ which is true by checking at $l=0$ and $l=1$. This completes the proof. ∎ ## 4 Game of Four Firms In this section, we introduce an additional player. The fourth firm adopts the so-called _local monopolistic approximation_ (LMA) mechanism [6], which is also a boundedly rational adjustment process. In this process, the player just has limited knowledge of the demand function. To be exact, the firm can observe the current market price $p(t)$ and the corresponding total supply $Q(t)$ and is able to correctly estimate the slope $p^{\prime}(Q(t))$ of the price function around the point $(p(t),Q(t))$. Then, the firm uses such information to conjecture the demand function and expect the price at period $t+1$ to be $p^{e}(t+1)=p(Q(t))+p^{\prime}(Q(t))(Q^{e}(t+1)-Q(t)),$ where $Q^{e}(t+1)$ represents the expected aggregate production at period $t+1$. 
Moreover, firm $4$ is also assumed to use the naive expectations of its rivals, i.e., $Q^{e}(t+1)=q_{1}(t)+q_{2}(t)+q_{3}(t)+q_{4}(t+1).$ Thus, we have that $p^{e}(t+1)=\frac{1}{Q(t)}-\frac{1}{Q^{2}(t)}(q_{4}(t+1)-q_{4}(t)).$ The expected profit of the fourth firm is $\Pi^{e}_{4}(t+1)=p^{e}(t+1)q_{4}(t+1)-cq_{4}^{2}(t+1).$ To maximize the expected profit, firm $4$ chooses its output at period $t+1$ to be the solution of the first order condition $q_{4}(t+1)=\frac{2\,q_{4}(t)+q_{1}(t)+q_{2}(t)+q_{3}(t)}{2(1+c(q_{1}(t)+q_{2}(t)+q_{3}(t)+q_{4}(t))^{2})}.$ Therefore, the new model can be described by the following $4$-dimensional discrete dynamic system. $\begin{split}&T_{GBAL}(q_{1},q_{2},q_{3},q_{4}):\\\ &\left\\{\begin{split}&q_{1}(t+1)=q_{1}(t)+kq_{1}(t)\left[\frac{q_{2}(t)+q_{3}(t)+q_{4}(t)}{(q_{1}(t)+q_{2}(t)+q_{3}(t)+q_{4}(t))^{2}}-2\,cq_{1}(t)\right],\\\ &q_{2}(t+1)=R_{2}(q_{1}(t),q_{3}(t),q_{4}(t)),\\\ &q_{3}(t+1)=(1-l)q_{3}+lR_{3}(q_{1}(t),q_{2}(t),q_{4}(t)),\\\ &q_{4}(t+1)=\frac{2\,q_{4}(t)+q_{1}(t)+q_{2}(t)+q_{3}(t)}{2(1+c(q_{1}(t)+q_{2}(t)+q_{3}(t)+q_{4}(t))^{2})}.\end{split}\right.\end{split}$ (24) Similarly, we know that the equilibria are described by $\left\\{\begin{split}&kq_{1}\left(\frac{q_{2}+q_{3}+q_{4}}{(q_{1}+q_{2}+q_{3}+q_{4})^{2}}-2\,cq_{1}\right)=0,\\\ &q_{1}+q_{3}+q_{4}-2\,cq_{2}(q_{1}+q_{2}+q_{3}+q_{4})^{2}=0,\\\ &q_{1}+q_{2}+q_{4}-2\,cq_{3}(q_{1}+q_{2}+q_{3}+q_{4})^{2}=0,\\\ &q_{4}-\frac{2\,q_{4}+q_{1}+q_{2}+q_{3}}{2(1+c(q_{1}+q_{2}+q_{3}+q_{4})^{2})}=0,\end{split}\right.$ (25) which could be solved by two solutions $\begin{split}E_{GBAL}^{1}=&~{}\left(0,\frac{1}{\sqrt{9c}},\frac{1}{\sqrt{9c}},\frac{1}{\sqrt{9c}}\right),\\\ E_{GBAL}^{2}=&~{}\left(\sqrt{\frac{3}{32c}},\sqrt{\frac{3}{32c}},\sqrt{\frac{3}{32c}},\sqrt{\frac{3}{32c}}\right).\\\ \end{split}$ (26) Hence, there exists a unique positive equilibrium $E_{GBAL}^{2}$, where the Jacobian matrix of $T_{GBAL}$ should be $J_{GBAL}(E^{2}_{GBAL})=\left[\begin{matrix}1-{3\,k\sqrt{6c}}/8&-k\sqrt{6c}/{24}&-k\sqrt{6c}/{24}&-k\sqrt{6c}/{24}\\\ -{1}/{9}&0&-{1}/{9}&-{1}/{9}\\\ -{l}/{9}&-{l}/{9}&1-l&-{l}/{9}\\\ -{1}/{10}&-{1}/{10}&-{1}/{10}&{1}/{10}\\\ \end{matrix}\right].$ (27) By virtue of Proposition 2, we have the following corollary. ###### Corollary 2. Consider a $4$-dimensional discrete dynamic system with the characteristic polynomial of its Jacobian matrix of the form $A=\lambda^{4}+a_{3}\lambda^{3}+a_{2}\lambda^{2}+a_{1}\lambda+a_{0}.$ An equilibrium $E$ is locally stable if and only if the following inequalities are satisfied at $E$. 
$\left\\{\begin{split}&1+a_{3}+a_{2}+a_{1}+a_{0}>0,\\\ &1-a_{3}+a_{2}-a_{1}+a_{0}>0,\\\ &-a_{0}^{3}-a_{0}^{2}a_{2}+a_{0}a_{1}a_{3}+a_{0}a_{3}^{2}-a_{0}^{2}-a_{1}^{2}-a_{1}a_{3}+a_{0}+a_{2}+1>0,\\\ &a_{0}^{3}-a_{0}^{2}a_{2}+a_{0}a_{1}a_{3}-a_{0}a_{3}^{2}-a_{0}^{2}+2\,a_{0}a_{2}-a_{1}^{2}+a_{1}a_{3}-a_{0}-a_{2}+1>0,\\\ &1+a_{0}>0,\\\ &1-a_{0}>0.\end{split}\right.$ (28) For the $4$-dimensional discrete dynamic system (24), it is easy to verify that at the unique positive equilibrium $E_{GBAL}^{2}$ the above condition (28) could be reformulated to $\begin{split}CD_{GBAL}^{1}>0,~{}CD_{GBAL}^{2}>0,~{}CD_{GBAL}^{3}>0,\\\ CD_{GBAL}^{4}<0,~{}CD_{GBAL}^{5}<0,~{}CD_{GBAL}^{6}>0,\end{split}$ (29) where $\begin{split}CD_{GBAL}^{1}=&~{}kl\sqrt{32c/3},\\\ CD_{GBAL}^{2}=&~{}(512\,kl-1017\,k)\sqrt{32c/3}-3616\,l+7056,\\\ CD_{GBAL}^{3}=&~{}(28672\,k^{3}l^{3}-1062432\,k^{3}l^{2}+9180054\,k^{3}l-12603681\,k^{3})(\sqrt{32c/3})^{3}\\\ &+(-3777536\,k^{2}l^{3}+179157888\,k^{2}l^{2}-1194862752\,k^{2}l+945483840\,k^{2})(\sqrt{32c/3})^{2}\\\ &+(116054016\,kl^{3}-4248400896\,kl^{2}-5573546496\,kl+13237426944\,k)\sqrt{32c/3}\\\ &-566525952\,l^{3}+11952783360\,l^{2}+47066406912\,l-133145026560,\\\ CD_{GBAL}^{4}=&~{}(3616\,k^{3}l^{3}-132966\,k^{3}l^{2}-512973\,k^{3}l+1226907\,k^{3})(\sqrt{32c/3})^{3}\\\ &+(-472768\,k^{2}l^{3}+16419744\,k^{2}l^{2}+77813136\,k^{2}l-83525904\,k^{2})(\sqrt{32c/3})^{2}\\\ &+(-6484992\,kl^{3}+276668928\,kl^{2}+1145829888\,kl-1868106240\,k)\sqrt{32c/3}\\\ &+55148544\,l^{3}-1055932416\,l^{2}-6642155520\,l,\\\ CD_{GBAL}^{5}=&~{}(16\,kl-27\,k)\sqrt{32c/3}-96\,l-12816,\\\ CD_{GBAL}^{6}=&~{}(16\,kl-27\,k)\sqrt{32c/3}-96\,l+13104,\\\ \end{split}$ (30) Table 2: Stability Condition of $T_{GBAL}$ at Selected Sample Points sample point of $(k,l,c)$ | $CD_{GBAL}^{1}>0$ | $CD_{GBAL}^{2}>0$ | $CD_{GBAL}^{3}>0$ ---|---|---|--- (55/64, 109/256, 3/2) | true | true | true (243/128, 109/256, 3/2) | true | false | true (301/32, 109/256, 3/2) | true | false | false (271/16, 109/256, 3/2) | true | false | true (5725/64, 109/256, 3/2) | true | false | true (20771/128, 109/256, 3/2) | true | false | true (109/128, 119/128, 3/2) | true | true | true (1275/256, 119/128, 3/2) | true | false | true (35405/256, 119/128, 3/2) | true | false | true (34413/128, 119/128, 3/2) | true | false | true sample point of $(k,l,c)$ | $CD_{GBAL}^{4}<0$ | $CD_{GBAL}^{5}<0$ | $CD_{GBAL}^{6}>0$ ---|---|---|--- (55/64, 109/256, 3/2) | true | true | true (243/128, 109/256, 3/2) | true | true | true (301/32, 109/256, 3/2) | true | true | true (271/16, 109/256, 3/2) | true | true | true (5725/64, 109/256, 3/2) | false | true | true (20771/128, 109/256, 3/2) | false | true | false (109/128, 119/128, 3/2) | true | true | true (1275/256, 119/128, 3/2) | true | true | true (35405/256, 119/128, 3/2) | false | true | true (34413/128, 119/128, 3/2) | false | true | false In order to simplify condition (29), it is also helpful to explore the inclusion relations of these inequalities. Bear in mind that the surfaces $CD_{GBAL}^{i}=0$ ($i=1,\ldots,6$) divide the parameter space $\\{(k,l,c)\,|\,k>0,1\geq l>0,c>0\\}$ into regions, and in each of them the signs of $CD_{GBA}^{i}$ ($i=1,\ldots,6$) would be invariant. Similarly, we use the PCAD method to select at least one sample point from each region. Table 2 lists the selected sample points and shows the verification results of the six inequalities in (29) at these sample points. It is observed that at all the sample points where $CD_{GBAL}^{2}>0$ is true, the rest inequalities would also be true. 
In other words, if $CD_{GBAL}^{2}>0$, then the local stability condition (28) would be satisfied. Thus, condition (29) can be simplified to a single inequality. Furthermore, it is easy to see that $CD_{GBAL}^{2}>0$ is equivalent to $k\sqrt{c}<\frac{2\sqrt{6}(226\,l-441)}{512\,l-1017}.$ Therefore, we summarize the results in the following proposition.
###### Proposition 5.
The $T_{GBAL}$ model described by (24) has a unique positive equilibrium $\left(\sqrt{\frac{3}{32c}},\sqrt{\frac{3}{32c}},\sqrt{\frac{3}{32c}},\sqrt{\frac{3}{32c}}\right),$ which is locally stable provided that $k\sqrt{c}<\frac{2\sqrt{6}(226\,l-441)}{512\,l-1017}.$ Furthermore, we have the following result.
###### Proposition 6.
The stability region of the $T_{GBAL}$ model is strictly larger than that of $T_{GBA}$.
###### Proof.
It suffices to prove that $\frac{9(101\,l-200)}{2(252\,l-505)}<\frac{2\sqrt{6}(226\,l-441)}{512\,l-1017},$ which is equivalent to $9(101\,l-200)(512\,l-1017)<4\sqrt{6}(252\,l-505)(226\,l-441),$ and further to $(-227808\sqrt{6}+465408)l^{2}+(901048\sqrt{6}-1846053)l-890820\sqrt{6}+1830600<0.$ This inequality is satisfied for $0<l\leq 1$ since the left-hand side has a negative leading coefficient and has both of its roots greater than $1$, which completes the proof. ∎
## 5 Game of Five Firms
Finally, we introduce a special firm, which is a rational player, to the model of the previous section. A _rational player_ , quite different from the second player, not only knows clearly the form of the price function, but also has complete information about its rivals’ decisions. Having no such information about its rivals, firm 2 just naively expects that all its competitors produce the same amounts as in the last period. Thus, the expected profit of firm 2 at period $t+1$ would be $\Pi_{2}^{e}(t+1)=\frac{q_{2}(t+1)}{q_{1}(t)+q_{2}(t+1)+q_{3}(t)+q_{4}(t)+q_{5}(t)}-cq_{2}^{2}(t+1).$ In comparison, firm 5 has complete information and knows exactly the production plans of all its rivals. Hence, the expected profit of firm 5 would be the real profit, i.e., $\Pi_{5}^{e}(t+1)=\Pi_{5}(t+1)=\frac{q_{5}(t+1)}{q_{1}(t+1)+q_{2}(t+1)+q_{3}(t+1)+q_{4}(t+1)+q_{5}(t+1)}-cq_{5}^{2}(t+1).$ In order to maximize its profit, firm 5 needs to solve the first-order condition $\partial\Pi_{5}(t+1)/\partial q_{5}(t+1)=0$ for $q_{5}(t+1)$. We denote the solution as $q_{5}(t+1)=R_{5}(q_{1}(t+1),q_{2}(t+1),q_{3}(t+1),q_{4}(t+1)).$ It is worth noting that the form of the solution is similar to that of firm 2, but with variables replaced by the output quantities of the rivals at the present period.
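Although no closed-form expression of this best response is used, it can be evaluated numerically. The following sketch (an added illustration; the bisection approach is an assumption, not the paper's implementation) computes the unique positive root of the first-order condition shared by firms 2, 3 and 5, and checks it against the symmetric equilibrium value derived below.

```python
# Best response R_i: the unique positive root x of  Q - 2 c x (Q + x)^2 = 0,
# where Q is the (expected or observed) total output of firm i's rivals.
def best_response(Q, c, tol=1e-12):
    f = lambda x: Q - 2 * c * x * (Q + x) ** 2   # strictly decreasing for x > 0
    lo, hi = 0.0, 1.0
    while f(hi) > 0:                             # f(0) = Q > 0, so grow the bracket until f(hi) <= 0
        hi *= 2.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Sanity check against the symmetric equilibrium of the five-firm model:
# with c = 1 each firm should produce sqrt(2/25) = 0.28284...
q = (2 / 25) ** 0.5
print(best_response(4 * q, 1.0))   # ~0.28284, i.e. sqrt(2/25)
```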
In short, we have the $5$-dimensional iteration map $\begin{split}&T_{GBALR}(q_{1},q_{2},q_{3},q_{4},q_{5}):\\\ &\left\\{\begin{split}&q_{1}(t+1)=q_{1}(t)+kq_{1}(t)\left[\frac{q_{2}(t)+q_{3}(t)+q_{4}(t)+q_{5}(t)}{(q_{1}(t)+q_{2}(t)+q_{3}(t)+q_{4}(t)+q_{5}(t))^{2}}-2\,cq_{1}(t)\right],\\\ &q_{2}(t+1)=R_{2}(q_{1}(t),q_{3}(t),q_{4}(t),q_{5}(t)),\\\ &q_{3}(t+1)=(1-l)q_{3}(t)+lR_{3}(q_{1}(t),q_{2}(t),q_{4}(t),q_{5}(t)),\\\ &q_{4}(t+1)=\frac{2\,q_{4}(t)+q_{1}(t)+q_{2}(t)+q_{3}(t)+q_{5}(t)}{2(1+c(q_{1}(t)+q_{2}(t)+q_{3}(t)+q_{4}(t)+q_{5}(t))^{2})},\\\ &q_{5}(t+1)=R_{5}(q_{1}(t+1),q_{2}(t+1),q_{3}(t+1),q_{4}(t+1)).\end{split}\right.\end{split}$ (31) Therefore, the equilibria are described by $\left\\{\begin{split}&kq_{1}\left(\frac{q_{2}+q_{3}+q_{4}+q_{5}}{(q_{1}+q_{2}+q_{3}+q_{4}+q_{5})^{2}}-2\,cq_{1}\right)=0,\\\ &q_{1}+q_{3}+q_{4}+q_{5}-2\,cq_{2}(q_{1}+q_{2}+q_{3}+q_{4}+q_{5})^{2}=0,\\\ &q_{1}+q_{2}+q_{4}+q_{5}-2\,cq_{3}(q_{1}+q_{2}+q_{3}+q_{4}+q_{5})^{2}=0,\\\ &q_{4}-\frac{2\,q_{4}+q_{1}+q_{2}+q_{3}+q_{5}}{2(1+c(q_{1}+q_{2}+q_{3}+q_{4}+q_{5})^{2})}=0,\\\ &q_{1}+q_{2}+q_{3}+q_{4}-2\,cq_{5}(q_{1}+q_{2}+q_{3}+q_{4}+q_{5})^{2}=0,\end{split}\right.$ (32) which has two solutions $\begin{split}E_{GBALR}^{1}=&~{}\left(0,\sqrt{\frac{3}{32c}},\sqrt{\frac{3}{32c}},\sqrt{\frac{3}{32c}},\sqrt{\frac{3}{32c}}\right),\\\ E_{GBALR}^{2}=&~{}\left(\sqrt{\frac{2}{25c}},\sqrt{\frac{2}{25c}},\sqrt{\frac{2}{25c}},\sqrt{\frac{2}{25c}},\sqrt{\frac{2}{25c}}\right).\end{split}$ (33) For simplicity, we denote the first and the fourth equations in (31) by $q_{1}(t+1)=G_{1}(q_{1}(t),q_{2}(t),q_{3}(t),q_{4}(t),q_{5}(t))$ and $q_{4}(t+1)=L_{4}(q_{1}(t),q_{2}(t),q_{3}(t),q_{4}(t),q_{5}(t)),$ respectively. One may find that (31) can be reformulated as the following $4$-dimensional map.
$\begin{split}&T_{GBALR}(q_{1},q_{2},q_{3},q_{4}):\\\ &\left\\{\begin{split}&q_{1}(t+1)=G_{1}(q_{1}(t),q_{2}(t),q_{3}(t),q_{4}(t),R_{5}(q_{1}(t),q_{2}(t),q_{3}(t),q_{4}(t))),\\\ &q_{2}(t+1)=R_{2}(q_{1}(t),q_{3}(t),q_{4}(t),R_{5}(q_{1}(t),q_{2}(t),q_{3}(t),q_{4}(t))),\\\ &q_{3}(t+1)=(1-l)q_{3}(t)+lR_{3}(q_{1}(t),q_{2}(t),q_{4}(t),R_{5}(q_{1}(t),q_{2}(t),q_{3}(t),q_{4}(t))),\\\ &q_{4}(t+1)=L_{4}(q_{1}(t),q_{2}(t),q_{3}(t),q_{4}(t),R_{5}(q_{1}(t),q_{2}(t),q_{3}(t),q_{4}(t))).\end{split}\right.\end{split}$ (34) Hence, the analysis of the local stability is transformed to the investigation of the Jacobian matrix (34) of the form $J_{GBALR}=\left[\begin{matrix}\frac{\partial q_{1}(t+1)}{\partial q_{1}(t)}&\frac{\partial q_{1}(t+1)}{\partial q_{2}(t)}&\frac{\partial q_{1}(t+1)}{\partial q_{3}(t)}&\frac{\partial q_{1}(t+1)}{\partial q_{4}(t)}\\\ \frac{\partial q_{2}(t+1)}{\partial q_{1}(t)}&\frac{\partial q_{2}(t+1)}{\partial q_{2}(t)}&\frac{\partial q_{2}(t+1)}{\partial q_{3}(t)}&\frac{\partial q_{2}(t+1)}{\partial q_{4}(t)}\\\ \frac{\partial q_{3}(t+1)}{\partial q_{1}(t)}&\frac{\partial q_{3}(t+1)}{\partial q_{2}(t)}&\frac{\partial q_{3}(t+1)}{\partial q_{3}(t)}&\frac{\partial q_{3}(t+1)}{\partial q_{4}(t)}\\\ \frac{\partial q_{4}(t+1)}{\partial q_{1}(t)}&\frac{\partial q_{4}(t+1)}{\partial q_{2}(t)}&\frac{\partial q_{4}(t+1)}{\partial q_{3}(t)}&\frac{\partial q_{4}(t+1)}{\partial q_{4}(t)}\\\ \end{matrix}\right],$ (35) where $\begin{split}\frac{\partial q_{1}(t+1)}{\partial q_{i}(t)}=&~{}\frac{\partial G_{1}}{\partial q_{i}}+\frac{\partial G_{1}}{\partial q_{5}}\frac{\partial R_{5}}{\partial q_{i}},~{}~{}i=1,2,3,4,\\\ \frac{\partial q_{2}(t+1)}{\partial q_{i}(t)}=&~{}\frac{\partial R_{2}}{\partial q_{i}}+l\frac{\partial R_{2}}{\partial q_{5}}\frac{\partial R_{5}}{\partial q_{i}},~{}~{}i=1,3,4,\\\ \frac{\partial q_{2}(t+1)}{\partial q_{2}(t)}=&~{}\frac{\partial R_{2}}{\partial q_{5}}\frac{\partial R_{5}}{\partial q_{2}},\\\ \frac{\partial q_{3}(t+1)}{\partial q_{i}(t)}=&~{}l\frac{\partial R_{3}}{\partial q_{i}}+l\frac{\partial R_{3}}{\partial q_{5}}\frac{\partial R_{5}}{\partial q_{i}},~{}~{}i=1,2,4,\\\ \frac{\partial q_{3}(t+1)}{\partial q_{3}(t)}=&~{}(1-l)+l\frac{\partial R_{3}}{\partial q_{5}}\frac{\partial R_{5}}{\partial q_{3}},\\\ \frac{\partial q_{4}(t+1)}{\partial q_{i}(t)}=&~{}\frac{\partial L_{4}}{\partial q_{i}}+\frac{\partial L_{4}}{\partial q_{5}}\frac{\partial R_{5}}{\partial q_{i}},~{}~{}i=1,2,3,4.\\\ \end{split}$ (36) Likewise, we focus on the positive equilibrium $E^{2}_{GBALR}$, where the Jacobian matrix $J_{GBALR}$ becomes $J_{GBALR}(E^{2}_{GBALR})=\left[\begin{matrix}1-{31\,k\sqrt{2c}}/56&-3\,k\sqrt{2c}/{56}&-3\,k\sqrt{2c}/{56}&-3\,k\sqrt{2c}/{56}\\\ -{75}/{784}&9/784&-{75}/{784}&-{75}/{784}\\\ 0&0&1-25\,l/28&0\\\ -{5}/{56}&-{5}/{56}&-{5}/{56}&{13}/{168}\\\ \end{matrix}\right].$ (37) According to Corollary 2, the unique positive equilibrium $E_{GBALR}^{2}$ is locally stable if and only if the following condition is satisfied. 
$\begin{split}CD_{GBALR}^{1}>0,~{}CD_{GBALR}^{2}>0,~{}CD_{GBALR}^{3}<0,\\\ CD_{GBALR}^{4}<0,~{}CD_{GBALR}^{5}<0,~{}CD_{GBALR}^{6}>0,\end{split}$ (38) where $\begin{split}CD_{GBALR}^{1}=&~{}kl\sqrt{25c/2},\\\ CD_{GBALR}^{2}=&~{}(25\,l-56)(5737\,k\sqrt{25c/2}-50860),\\\ CD_{GBALR}^{3}=&~{}(3934321875\,k^{3}l^{3}-104905111500\,k^{3}l^{2}+1172129631120\,k^{3}l\\\ &-1186719653952\,k^{3})(\sqrt{25c/2})^{3}+(-439562531250\,k^{2}l^{3}+19054516460000\,k^{2}l^{2}\\\ &-144796527937600\,k^{2}l+134072666053760\,k^{2})(\sqrt{25c/2})^{2}+(19706242500000\,kl^{3}\\\ &-579386747450000\,kl^{2}-1721529608680000\,kl+3133067852544000\,k)\sqrt{25c/2}\\\ &-113004562500000\,l^{3}+1975821995000000\,l^{2}+12875890524000000\,l\\\ &-37485773024000000,\\\ CD_{GBALR}^{4}=&~{}(9423\,k^{2}(\sqrt{25c/2})^{2}-981050\,k\sqrt{25c/2}-33575000)((3375\,kl^{3}-89180\,kl^{2}\\\ &-629552\,kl+812224\,k)\sqrt{25c/2}\\\ &-22500\,l^{3}+343000\,l^{2}+3332000\,l)\\\ CD_{GBALR}^{5}=&~{}(225\,kl-252\,k)\sqrt{25c/2}-1500\,l-217840,\\\ CD_{GBALR}^{6}=&~{}(225\,kl-252\,k)\sqrt{25c/2}-1500\,l+221200.\\\ \end{split}$ (39) By observing Table 3, we have the following proposition. ###### Proposition 7. The $T_{GBALR}$ model described by (31) has a unique positive equilibrium $\left(\sqrt{\frac{2}{25c}},\sqrt{\frac{2}{25c}},\sqrt{\frac{2}{25c}},\sqrt{\frac{2}{25c}},\sqrt{\frac{2}{25c}}\right),$ which is locally stable provided that $k\sqrt{c}<\frac{10172\sqrt{2}}{5737}.$ Furthermore, the following result is acquired. ###### Proposition 8. The stability region of the $T_{GBALR}$ model is strictly larger than that of $T_{GBAL}$. ###### Proof. It suffices to prove that $\frac{2\sqrt{6}(226\,l-441)}{512\,l-1017}<\frac{10172\sqrt{2}}{5737},$ which is equivalent to $10172\sqrt{2}(1017-512\,l)-5737\times 2\sqrt{6}(441-226\,l)>0,$ which is true by checking at $l=0$ and $l=1$. ∎ Table 3: Stability Condition of $T_{GBALR}$ at Selected Sample Points sample point of $(k,l,c)$ | $CD_{GBALR}^{1}>0$ | $CD_{GBALR}^{2}>0$ | $CD_{GBALR}^{3}<0$ ---|---|---|--- (453/256, 61/128, 1/2) | true | true | true (1007/256, 61/128, 1/2) | true | false | true (7183/256, 61/128, 1/2) | true | false | false (6675/128, 61/128, 1/2) | true | false | true (10587/32, 61/128, 1/2) | true | false | true (9755/16, 61/128, 1/2) | true | false | true (453/256, 251/256, 1/2) | true | true | true (1567/256, 251/256, 1/2) | true | false | true (225/8, 251/256, 1/2) | true | false | false (12807/256, 251/256, 1/2) | true | false | true (91267/64, 251/256, 1/2) | true | false | true (89603/32, 251/256, 1/2) | true | false | true sample point of $(k,l,c)$ | $CD_{GBALR}^{4}<0$ | $CD_{GBALR}^{5}<0$ | $CD_{GBALR}^{6}>0$ ---|---|---|--- (453/256, 61/128, 1/2) | true | true | true (1007/256, 61/128, 1/2) | true | true | true (7183/256, 61/128, 1/2) | true | true | true (6675/128, 61/128, 1/2) | true | true | true (10587/32, 61/128, 1/2) | false | true | true (9755/16, 61/128, 1/2) | false | true | false (453/256, 251/256, 1/2) | true | true | true (1567/256, 251/256, 1/2) | true | true | true (225/8, 251/256, 1/2) | true | true | true (12807/256, 251/256, 1/2) | true | false | true (91267/64, 251/256, 1/2) | false | true | true (89603/32, 251/256, 1/2) | false | true | false ## 6 Concluding Remarks Figure 1: The stability regions of the models considered in the paper. The unique equilibrium of $T_{GB}$ is locally stable if and only if the parameters take values from the red region. 
The unique positive equilibrium of $T_{GBA}$ is locally stable if and only if the parameters take values from the red and yellow regions. By analogy, similar conclusions can be obtained for the $T_{GBAL}$ and $T_{GBALR}$ models. ## References * [1] G. E. Collins and H. Hong. Partial cylindrical algebraic decomposition for quantifier elimination. Journal of Symbolic Computation, 12(3):299–328, 1991. * [2] S. Elaydi. An Introduction to Difference Equations. Springer, 3rd edition, 2005. * [3] E. Jury, L. Stark, and V. Krishnan. Inners and stability of dynamic systems. IEEE Transactions on Systems, Man, and Cybernetics, (10):724–725, 1976. * [4] R. C. Oldenbourg and H. Sartorius. The Dynamics of Automatic Controls. American Society of Mechanical Engineers, 1948. * [5] F. Tramontana, A. Elsadany, B. Xin, and H. Agiza. Local stability of the Cournot solution with increasing heterogeneous competitors. Nonlinear Analysis: Real World Applications, 26:150–160, 2015. * [6] J. Tuinstra. A price adjustment process in a model of monopolistic competition. International Game Theory Review, 6(03):417–442, 2004.
# Learning Compact Compositional Embeddings via Regularized Pruning for Recommendation Xurong Liang† Tong Chen† Quoc Viet Hung Nguyen⟂ Jianxin Li§ Hongzhi Yin†∗ $~{}^{\dagger}$The University of Queensland, Australia, <EMAIL_ADDRESS> $~{}^{\perp}$Griffith University, Australia<EMAIL_ADDRESS> §Deakin University, Australia<EMAIL_ADDRESS> ###### Abstract Latent factor models are the dominant backbones of contemporary recommender systems (RSs) given their performance advantages, where a unique vector embedding with a fixed dimensionality (e.g., 128) is required to represent each entity (commonly a user/item). Due to the large number of users and items on e-commerce sites, the embedding table is arguably the least memory- efficient component of RSs. For any lightweight recommender that aims to efficiently scale with the growing size of users/items or to remain applicable in resource-constrained settings, existing solutions either reduce the number of embeddings needed via hashing, or sparsify the full embedding table to switch off selected embedding dimensions. However, as hash collision arises or embeddings become overly sparse, especially when adapting to a tighter memory budget, those lightweight recommenders inevitably have to compromise their accuracy. To this end, we propose a novel compact embedding framework for RSs, namely Compositional Embedding with Regularized Pruning (CERP). Specifically, CERP represents each entity by combining a pair of embeddings from two independent, substantially smaller meta-embedding tables, which are then jointly pruned via a learnable element-wise threshold. In addition, we innovatively design a regularized pruning mechanism in CERP, such that the two sparsified meta-embedding tables are encouraged to encode information that is mutually complementary. Given the compatibility with agnostic latent factor models, we pair CERP with two popular recommendation models for extensive experiments, where results on two real-world datasets under different memory budgets demonstrate its superiority against state-of-the-art baselines. The codebase of CERP is available in https://github.com/xurong-liang/CERP. ###### Index Terms: lightweight recommender systems, compositional embeddings, regularized pruning ††*Hongzhi Yin is the corresponding author. ## I Introduction The invention of recommender systems (RSs) greatly eases the difficulty of identifying and suggesting useful information or products from the sheer volume of data based on users’ preferences. Most RSs leverage collaborative filtering through latent factor models, in which all entities (i.e., users and items in most RSs) are mapped to distinct, real-valued dense vectors of a unified dimension. Then, based on these vector representations, i.e., embeddings, a pairwise similarity function (e.g., dot product [1], multi-layer perceptrons [2], graph neural networks [3], etc.) can be learned to rank each item’s relevance to a user. In latent factor-based collaborative filtering, all entities’ embeddings are hosted in an embedding table and can be efficiently drawn via a look-up operation. Given the large number of possible users and items in recommendation services, the embedding table is commonly the heaviest component in an RS in terms of parameter sizes [4, 5, 6, 7, 8, 9]. 
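To make the role of the embedding table concrete, the following minimal sketch (illustrative only; the table sizes and the dot-product scorer are assumptions rather than the models evaluated later in this paper) shows how a conventional full-table recommender stores and looks up entity embeddings:

```python
# A full embedding table: every user and item owns a distinct d-dimensional row,
# and relevance is scored by a simple dot product between the two looked-up rows.
import numpy as np

rng = np.random.default_rng(0)
num_users, num_items, d = 1_000, 5_000, 128

user_table = rng.normal(size=(num_users, d))   # (|U|, d) embedding table
item_table = rng.normal(size=(num_items, d))   # (|I|, d) embedding table

def score(user_id, item_id):
    return float(user_table[user_id] @ item_table[item_id])

# The embedding tables dominate the parameter count of the whole model:
print((num_users + num_items) * d)             # 768,000 embedding parameters
```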
Recently, with the frequently intersected needs for handling large-scale e-commerce data and deploying RSs on resource- constrained devices [10], the memory consumption of embedding tables has become the major bottleneck that prevents RSs from scaling up. Take an example of the _Amazon Product Reviews_ dataset [11] which includes $20.98$ million users and $9.35$ million items. If the embedding dimension is $128$, representing all these entities in a full embedding table incurs approximately $3.9$ billion parameters, translating into $31.2$ GB memory consumption for a double floating-point system. In comparison, the number of parameters used in the recommendation layer is almost negligible even for state-of-the-art RSs built upon deep neural networks (DNNs). Clearly, storing embedding vectors with a fixed dimension for all entities drastically escalates memory usage, making it intractable for RSs to scale to large datasets or support on-device applications. To this end, the urge for a lightweight recommender, created by utilizing a memory-efficient embedding structure, is raised. One naïve solution is to choose a small dimension size for all entities so that a low memory budget can be met. However, as the dimension size determines the ability to encode each entity’s information [12], this approach heavily impedes the expressiveness of embedding vectors and thus, the recommendation accuracy. To counter the inflexibility of fixed-size embeddings, one mainstream of recent memory-efficient RSs is to dynamically allocate different embedding sizes to entities. This is done by either constructing an automated search procedure to find the best embedding size for each entity from a set of predefined options [13, 14, 12, 15], or applying sparsification (i.e., pruning) on the full embedding table to zero-out less important dimensions in every entity embedding [16, 10, 17, 18]. Though introducing varying dimensions helps selectively preserve embedding expressiveness for important entities (e.g., popular items) when working toward a tight memory budget, the usable embedding dimensions in both search- and pruning-based approaches will decrease dramatically. Consequently, a substantial amount of embedded information is lost, sacrificing the accuracy when calculating user-item similarity. The key reason is that, these methods follow the conventional embedding table scheme, where every entity is still explicitly mapped to a unique embedding vector and no parameter sharing across different entities is allowed. Hence, existing dynamic embedding size allocation methods are essentially committed to reducing the average embedding dimension. However, the embedding parameter size is commonly dominated by the number of entities rather than the embedding dimension (e.g., a million users and items versus an embedding size of 128), the average dimension for each entity embedding has to drop significantly to meet a given memory budget. (a) The deactivated dimensions in $\mathbf{v}_{1}$, $\mathbf{v}_{2}$ complement each other, leading to a dense compositional embedding with higher expressiveness. (b) The deactivated dimensions in $\mathbf{v}_{1}$, $\mathbf{v}_{2}$ fully overlap, leading to a sparse compositional embedding with lower expressiveness. Figure 1: Illustration of the impact of complementary behavior between the $\mathbf{v}_{1}$ and $\mathbf{v}_{2}$. 
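The effect illustrated in Figure 1 can be reproduced with a toy computation (an added sketch under assumed pruning masks and sum pooling; it is not part of CERP itself): with the same per-codebook sparsity, complementary and fully overlapping masks yield very different numbers of usable dimensions in the composed embedding.

```python
# Sum pooling of a pruned meta-embedding pair under two mask patterns.
import numpy as np

d = 8
v1 = np.arange(1, d + 1, dtype=float)
v2 = np.arange(1, d + 1, dtype=float)

complementary = (np.arange(d) % 2 == 0)       # v1 keeps even dims, v2 keeps odd dims
overlapping   = (np.arange(d) < d // 2)       # both keep the same first half

for name, m1, m2 in [("complementary", complementary, ~complementary),
                     ("overlapping",   overlapping,    overlapping)]:
    v = v1 * m1 + v2 * m2                     # sum pooling of the pruned pair
    print(name, int(np.count_nonzero(v)))     # 8 vs. 4 usable dimensions
```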
Naturally, another line of research in lightweight RSs is to enable parameter sharing to lower the number of embedding vectors needed for representing all entities, and hashing-based methods [9, 19, 20] are the most representative ones. In a nutshell, those solutions need one or several meta-embedding tables (a.k.a. codebooks) consisting of fixed-size embedding vectors. By hashing each user/item ID into a combination of indexes, an entity embedding can be composed by merging all meta-embedding vectors (e.g., via sum pooling) drawn from these hashed indexes, which is also termed a compositional embedding. As such, the meta-embedding tables need to carry far fewer embedding vectors than a full embedding table, while still producing distinct representations for all entities. One example is that 10 meta-embeddings with dual hashing [19] can represent up to $C^{10}_{2}=45$ entities, with only $22\%$ of the parameters needed. While hashing-based methods can provide dense and fixed-size embeddings for all entities, the number of meta-embeddings allowed must be reduced for lower memory budgets. As a result, one meta-embedding has to be reused for a large number of compositional embeddings, e.g., each meta- embedding will appear in over one-fifth of the compositional embeddings in the previous example, diluting the uniqueness of information they carry and eventually hurting the recommendation effectiveness [21]. Moreover, given a target memory budget, hash collisions are inevitable when the combinations of meta-embeddings are exhausted for the entity size. In this case, many entities are forced to share one identical compositional embedding [21]. Compared with dynamic embedding size allocation that is prone to producing excessively compact embeddings, hashing-based methods bear the risk of weakening the distinguishability of entity embeddings, which also impairs the utility of entity embeddings in ranking tasks. To this end, we put forward our lightweight embedding framework for RSs, namely Compositional Embedding with Regularized Pruning (CERP). In CERP, instead of altering the usable embedding dimension or the number of total embeddings alone, we deploy a compositional embedding paradigm with two balanced codebooks, and design a pruning process to simultaneously sparsify them. On the one hand, with the same amount of parameters budgeted, sparsification allows to selectively switch off (i.e., zero-out) less informative dimensions for each meta-embedding, which essentially squeezes out additional parameters to allow more meta-embeddings to be used. Thus, each meta-embedding is then less frequently reused in different compositional embeddings, uplifting the uniqueness of entity representations. On the other hand, as pruning now starts from the inherently smaller codebooks rather than the full embedding table, a given memory budget can be met with far fewer pruned parameters for every meta-embedding. In return, this further brings substantially denser and more expressive compositional embeddings for all entities. However, another challenge associated with the pruning process has to be tackled before CERP can enjoy benefits from both ends. Given the compositional nature of entity embeddings, the behavior of the pruning algorithm has to be regularized to avoid imbalanced or homogenized sparsification patterns between two codebooks. 
Taking a pair of meta- embeddings $\mathbf{v}_{1},\mathbf{v}_{2}\in\mathbb{R}^{d}$ as an example, with the commonly used sum pooling $\mathbf{v}=\mathbf{v}_{1}+\mathbf{v}_{2}$, if half of their dimensions are deactivated, then the number of usable/non- zero embedding dimensions in the composed embedding $\mathbf{v}$ will range from $\frac{d}{2}$ to $d$. Ideally, if the deactivated dimensions in $\mathbf{v}_{1}$ and $\mathbf{v}_{2}$ complement each other, then the density of $\mathbf{v}$ is maximized (see Figure 1(a)). On the contrary, as demonstrated in Figure 1(b), if $\mathbf{v}_{1}$ fully overlaps with $\mathbf{v}_{2}$ on deactivated dimensions, despite consuming the same amount of $d$ parameters in $\mathbf{v}_{1}$ and $\mathbf{v}_{2}$, half of the dimensions in the resultant $\mathbf{v}$ do not carry any useful information, hence still being highly sparse. To alleviate this, we further design a pruning regularizer to facilitate complementary pruning so that the average size of entities’ final embedding vectors is not compromised amid robust pruning. We summarize our main contributions below: * • We innovatively propose to let the dynamic embedding size allocation and compositional embeddings complement each other, so as to facilitate memory- efficient and accurate lightweight recommendation. * • We design an embedding optimization framework CERP that utilizes two codebooks for generating compositional entity representations, where we further propose a novel pruning regularizer that coordinates joint pruning on the two codebooks. * • We conduct extensive experiments on real-world datasets to compare the performance of CERP with state-of-art embedding optimization methods. The results indicate that CERP achieves promising recommendation performance under strict memory budgets. ## II Related Work Current literature attempts to construct lightweight recommender systems by considering various approaches, but with a unanimous focus on the compression of the embedding layer. In this section, we analyze different existing technical pathways toward this goal. Binary Code Representation Learning. Since the conventional embedding table stores entity embeddings as real-valued dense vectors, which consumes significant storage space, early approaches target this by representing the embedding vectors using binary codes [22, 23, 24, 25, 26]. There is also a branch that adaptively learns entities’ binary hash code representations from real-valued dense embeddings [27, 28] via approximation. Although binarized embedding vectors utilize much less space, the binarization process often causes quantization loss [29, 30], vastly distorting the entity representations and severely hurting the recommendation performance. Automated Embedding Size Search. A large amount of work [13, 14, 12, 15, 31, 16, 10, 18] relies on automated machine learning (AutoML) for embedding optimization due to its convenience in training [32]. MDE [13] takes in human- defined heuristics for entity embedding dimension assignment. Inspired by Neural Architecture Search (NAS) [33], techniques that conduct automatic embedding selection from pre-defined search space [14, 34] were proposed. AutoEmb [12] and ESAPN [15] were further invented to learn suitable embedding dimensions based on the popularity of entities. AutoEmb [12] devises soft selection to express embedding vectors as the weighted sum of multiple embedding sizes. ESAPN [15] designs an automatic reinforcement learning (RL) agent for embedding size selection. 
Although these dimension search methods may find the appropriate dimension size for each entity, the slow training speed of NAS [35] throttles their training efficiency. In addition, for methods that use reinforcement learning for dimension search, the search space is normally enormous, which increases the difficulty for the learning agent to find the optimal embedding sizes. Embedding Pruning. As another alternative, the main idea of pruning-based methods is to automatically learn the importance of parameters in embeddings or components in models [32]. The unimportant or redundant ones are then pruned (i.e., deleted) to lower memory consumption. Techniques that belong to this category are PEP [16] and OptEmbed [18]. PEP [16] applies $L_{1}$ regularization on the full embedding table directly to increase the sparsity adaptively via a learnable pruning threshold. Despite the robustness of $L_{1}$ regularization in sparsification [36], under a tight memory budget and with each entity still assigned a unique embedding, the number of usable dimensions drops drastically, causing a loss of embedding fidelity and sacrificing system accuracy. OptEmbed [18] combines a trainable pruning threshold with a one-shot NAS dimension search to perform both row-wise and column-wise embedding optimization. However, the time-consuming nature of NAS [35] remains a bottleneck for its scalability. Compositional Embeddings. Compared with pruning, compositional embeddings [37, 19, 9, 20, 21, 38] are an ideal solution to lower the number of embedding rows while preserving dense embeddings. The main idea is to represent entities with a combination of meta-embedding vectors, where the meta-embeddings for entities are determined by hash functions and can be shared [37]. However, a tight memory constraint will also limit the number of usable meta-embeddings, bringing the risk of hash collisions. LCE [39] only explicitly trains user (or item) embeddings and applies a composition operator and a GNN to infer item (or user) embeddings in real time. However, this technique does not avoid memory exhaustion when handling an extreme number of users/items. Kang et al. [21] replace the embedding tables with a DNN-based embedding generator fed with carefully crafted hash codes for each user/item. Nevertheless, the quality of the generated embeddings relies on excessively long hash codes (e.g., 1,024 as in [21]) to compensate for the limited expressiveness of the DNN. It is also worth noting that there is another direction [40, 41, 42, 8] to utilize tensor train (TT) decomposition [43] to compress the embedding table as a sequence of TT-cores, which can be interpreted as a special case of compositional embeddings [10]. However, given the widely acknowledged computational overheads introduced by the sequential matrix operations [42], it is a less practical solution for large-scale RSs.
## III Compositional Embedding with Regularized Pruning
In this section, we present CERP, our proposed lightweight embedding approach for recommendation, by introducing the detailed design of its components.
### III-A Generating Compositional Embeddings
In a typical latent factor recommender, an entity (i.e., user/item) is represented by a distinct, $d$-dimensional embedding vector, contributing to an embedding table with $(|\mathcal{U}|+|\mathcal{I}|)\times d$ parameter consumption. $\mathcal{U}$ and $\mathcal{I}$ are the sets of all users and all items, respectively.
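To make this parameter consumption concrete, the full-table footprint quoted in Section I can be reproduced with a few lines of arithmetic. This is only a rough sketch using the Amazon Product Reviews figures; small rounding differences from the quoted 31.2 GB are expected.

```python
# Back-of-the-envelope size of a full embedding table
# (figures from the Amazon Product Reviews example in Section I).
num_users = 20_980_000      # 20.98 million users
num_items = 9_350_000       # 9.35 million items
dim = 128                   # embedding dimension d
bytes_per_value = 8         # double-precision floats

num_params = (num_users + num_items) * dim
print(f"parameters: {num_params / 1e9:.2f} billion")                # ~3.88 billion
print(f"memory:     {num_params * bytes_per_value / 1e9:.1f} GB")   # ~31 GB
```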
As described in Section I, to reduce the size of the full embedding table, each entity embedding in CERP is composed by combining a pair of meta-embeddings $\mathbf{p}$, $\mathbf{q}\in\mathbb{R}^{d}$ respectively drawn from two smaller embedding tables $\mathbf{P}$, $\mathbf{Q}\in\mathbb{R}^{b\times d}$ (i.e., codebooks). Each row in $\mathbf{P}$ and $\mathbf{Q}$ corresponds to a meta-embedding, and $b$ is the number of meta-embeddings in each codebook, also termed bucket size. For each entity, to generate its compositional embedding, we first retrieve one meta-embedding vector from each codebook, and then merge both vectors. In this case, a total of $b^{2}$ different combinations of $(\mathbf{p},\mathbf{q})$ can be guaranteed. As our setting allows $2b\ll|\mathcal{U}|+|\mathcal{I}|$ by several orders of magnitude while ensuring $b^{2}\geq|\mathcal{U}|+|\mathcal{I}|$, such a compositional paradigm can significantly cut the parameter consumption while preserving the uniqueness of all entity embeddings. If a full embedding table is in use, then each entity’s associated index $k\in[0,|\mathcal{U}|+|\mathcal{I}|-1]$ will point to its embedding stored at the corresponding row of the full embedding table. In our compositional embedding scheme, each entity now needs to have a pair of indexes $k_{p},k_{q}\in[0,b-1]$ for the two codebooks. Intuitively, this can be accomplished by applying some hash functions to map the original entity index $k$ to $(k_{p},k_{q})$. Meanwhile, in a recommendation setting, to obtain optimal expressiveness of the resulting entity embeddings, each meta-embedding needs to be prevented from being frequently reused for composing entity embeddings. Hence, we would also like to spread the hashed values as evenly as possible. To achieve this, we design a balanced hashing trick to assign each entity a unique combination without introducing additional learnable model parameters. Let all users/items be indexed with continuous integers $k\in[0,|\mathcal{U}|+|\mathcal{I}|-1]$, then the two hashed indexes are computed via:
$k_{p}=k\bmod b,\qquad k_{q}=k\;\mathrm{div}\left\lceil\frac{|\mathcal{U}|+|\mathcal{I}|}{b}\right\rceil,$ (1)
where $\mathrm{mod}$ and $\mathrm{div}$ are respectively the modulo and integer division operators. Essentially, with Equation 1, each $k_{p}$/$k_{q}$ value only appears $\left\lceil\frac{|\mathcal{U}|+|\mathcal{I}|}{b}\right\rceil$ times in all compositional embeddings. For the $k$-th entity, we identify the $k_{p}$-th and $k_{q}$-th meta-embeddings respectively from $\mathbf{P}$ and $\mathbf{Q}$, and compute its compositional embedding via sum pooling:
$\mathbf{e}_{k}=\mathbf{p}+\mathbf{q},\qquad\mathbf{p}=\mathbf{P}[k_{p}]^{\top},\;\;\mathbf{q}=\mathbf{Q}[k_{q}]^{\top}.$ (2)
Considering that $\mathbf{p}$ and $\mathbf{q}$ will be heavily sparsified in the pruning stage, the use of sum pooling produces denser compositional embeddings, especially compared with other vector combinatory operations like element-wise product and concatenation that will lead to more zero-valued entries in the resulting embeddings.
### III-B Regularized Embedding Pruning
If the memory budget is sufficient and a decent bucket size $b$ for both codebooks is used, the uniqueness of information encoded in each compositional embedding can be strengthened, since each meta-embedding will be shared by fewer entities.
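As a concrete illustration of Equations 1 and 2, the balanced lookup can be written as a small module. This is a PyTorch-style sketch; the class and variable names are ours rather than part of the original implementation.

```python
import math
import torch
import torch.nn as nn

class CompositionalEmbedding(nn.Module):
    """Two balanced codebooks P and Q of shape (b, d); entity k maps to P[k_p] + Q[k_q]."""

    def __init__(self, num_entities: int, bucket_size: int, dim: int):
        super().__init__()
        self.b = bucket_size
        # Divisor used by the quotient-style index k_q in Equation (1).
        self.div = math.ceil(num_entities / bucket_size)
        self.P = nn.Parameter(torch.empty(bucket_size, dim))
        self.Q = nn.Parameter(torch.empty(bucket_size, dim))
        nn.init.xavier_uniform_(self.P)
        nn.init.xavier_uniform_(self.Q)

    def forward(self, k: torch.Tensor) -> torch.Tensor:
        k_p = k % self.b                                      # k mod b
        k_q = torch.div(k, self.div, rounding_mode="floor")   # k div ceil((|U|+|I|)/b)
        return self.P[k_p] + self.Q[k_q]                      # Equation (2): sum pooling
```

Whenever $b^{2}\geq|\mathcal{U}|+|\mathcal{I}|$, distinct entity indices receive distinct $(k_{p},k_{q})$ pairs, so all entities keep unique compositional embeddings.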
However, when the memory budget shrinks and a small $b$ has to be used, it inevitably lowers the expressiveness of all entity embeddings. Hence, in CERP, we propose to sparsify the codebooks via pruning, such that the codebooks can retain a relatively larger bucket size under each given memory budget. During pruning, the less informative dimensions in each meta-embedding are masked by zeros, where the resultant codebooks can be efficiently handled by the latest sparse matrix storage techniques [44, 45] that bring negligible cost for storing zero-valued entries. Compared with methods that reduce embedding dimensions by iteratively searching for the best embedding size for each entity, pruning-based solutions have several advantages. Firstly, most search-based methods have to choose the optimal embedding size from a set of predefined discrete options (e.g., $\\{16,32,64,128\\}$), making the final embedding sizes far less refined than those of pruning methods, which can individually decide to block or keep each dimension of $\mathbb{R}^{d}$. Secondly, unlike pruning methods, search-based methods are heavily entangled with reinforcement learning due to the need to iteratively search for and evaluate different actions, which does not favor large-scale applications. Thirdly, while search-based methods will vary the dimensionality across different embeddings, all pruned meta-embeddings are still $d$-dimensional vectors that are partially masked, thus fully supporting the need for vector combinatory operations in compositional paradigms.
Figure 2: The overview of the main components and optimization process of CERP.
Considering the codebooks $\mathbf{P}$, $\mathbf{Q}$, the pruning objective is formulated in conjunction with the overall loss $\mathcal{L}$ (i.e., the recommendation loss with other optional side-task losses) as
$\min\;\mathcal{L},\;\;\;\mathrm{s.t.}\;||\mathbf{P}||_{0}+||\mathbf{Q}||_{0}\leq t,$ (3)
where $||\cdot||_{0}$ is the $L_{0}$-norm that counts the number of non-zero entries in a matrix, and $t\in\mathbb{N}$ is a predefined threshold indicating the maximum parameter number allowed. Unfortunately, straightforwardly optimizing Equation 3 is intractable, given the $L_{0}$-norm’s non-convexity and the NP-hard nature of such a combinatorial problem [46]. A common way to work around this is $L_{1}$ regularization, in which the original $L_{0}$-norm problem is projected onto the $L_{1}$-ball to make it end-to-end differentiable. Motivated by the effectiveness of $L_{1}$ convex relaxation [36], we reparameterize the pruning process [46, 16] of the two codebooks into the following functions to approximate the $L_{0}$-based sparsity constraint:
$\mathbf{\widehat{P}}=\mathrm{prune}(\mathbf{P},\mathbf{S}_{p})=\mathrm{sign}(\mathbf{P})\odot\mathrm{ReLU}(|\mathbf{P}|-\sigma(\mathbf{S}_{p})),\qquad\mathbf{\widehat{Q}}=\mathrm{prune}(\mathbf{Q},\mathbf{S}_{q})=\mathrm{sign}(\mathbf{Q})\odot\mathrm{ReLU}(|\mathbf{Q}|-\sigma(\mathbf{S}_{q})),$ (4)
where $\odot$ denotes element-wise multiplication, $\mathrm{ReLU}(\cdot)$ and $\sigma(\cdot)$ are respectively the rectified linear unit and sigmoid functions, and $\mathrm{sign}(\cdot)$ is the signum function that returns 1, -1, and 0 respectively for inputs above, below, or equal to 0. We introduce two learnable soft threshold matrices $\mathbf{S}_{p},\mathbf{S}_{q}\in\mathbb{R}^{b\times d}$ to control the sparsity of both codebooks in a fine-grained, element-wise manner.
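The reparameterization in Equation 4 is straightforward to express in code. The following is a PyTorch-style sketch with names of our choosing, not the authors' released implementation:

```python
import torch

def prune(weight: torch.Tensor, soft_threshold: torch.Tensor) -> torch.Tensor:
    """Equation (4): sign(W) * ReLU(|W| - sigmoid(S)), applied element-wise.

    `weight` is a codebook (P or Q) and `soft_threshold` the matching learnable
    matrix (S_p or S_q), both of shape (b, d). Autograd treats sign() as locally
    constant (zero gradient), which matches the subgradient treatment of the
    retained entries described below.
    """
    return torch.sign(weight) * torch.relu(weight.abs() - torch.sigmoid(soft_threshold))

# Sparsified codebooks used for embedding lookups while pruning:
# P_hat, Q_hat = prune(P, S_p), prune(Q, S_q)
```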
Taking codebook $\mathbf{P}$ as an example, $\mathrm{ReLU}(|\mathbf{P}|-\sigma(\mathbf{S}_{p}))$ will zero-out entries in $\mathbf{P}$ if the corresponding threshold elements in $\mathbf{S}_{p}$ receive a large value, while the multiplication with $\mathrm{sign}(\mathbf{P})$ helps recover the positivity/negativity of retained elements to ensure expressiveness. Notably, once the soft thresholds are learned, only the computed sparse codebooks $\mathbf{\widehat{P}}$ and $\mathbf{\widehat{Q}}$ will be kept and deployed for the recommender, thus avoiding unnecessary parameter consumption. As training proceeds, the soft thresholds $\mathbf{S}_{p}$ and $\mathbf{S}_{q}$ are to be updated in every optimization iteration, so will both sparsified codebooks $\mathbf{\widehat{P}}$ and $\mathbf{\widehat{Q}}$ until the desired parameter number $t$ is met. However, it is worth mentioning that function $\mathrm{sign}(\cdot)$ is non-differentiable at zero and has a zero-valued gradient at all other points, so we take advantage of subgradients [46] to facilitate end-to-end backpropagation. Taking codebook $\mathbf{P}$ as an example, the subgradients for optimizing $\mathbf{P}$ and the soft threshold $\mathbf{S}_{p}$ are respectively the following: $\displaystyle\nabla_{\mathbf{P}}$ $\displaystyle=\frac{\partial\mathcal{L}(\mathcal{D},\mathrm{prune}(\mathbf{P},\mathbf{S}_{p}))}{\partial\mathrm{prune}(\mathbf{P},\mathbf{S}_{p})}\odot\mathbf{1}\\{\mathrm{prune}(\mathbf{P},\mathbf{S}_{p})\neq 0\\},$ (5) $\displaystyle\nabla_{\mathbf{S}_{p}}$ $\displaystyle=-\frac{\partial\sigma(\mathbf{S}_{p})}{\partial\mathbf{S}_{p}}\cdot\bigg{\langle}\frac{\partial\mathcal{L}(\mathcal{D},\mathrm{prune}(\mathbf{P},\mathbf{S}_{p}))}{\partial\mathrm{prune}(\mathbf{P},\mathbf{S}_{p})},$ $\displaystyle\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\mathrm{sign}(\mathbf{P})\odot\mathbf{1}\\{\mathrm{prune}(\mathbf{P},\mathbf{S}_{p})\neq 0\\}\bigg{\rangle},$ where $\mathcal{D}$ is the full training set, $\langle\cdot,\cdot\rangle$ is the matrix inner product, and $\mathbf{1}\\{\cdot\\}$ is the indicator function that only keeps gradients for non-zero entries for gradient descent. Analogously, we can also derive the gradients for the other codebook $\mathbf{Q}$ and soft threshold $\mathbf{S}_{q}$. Now, following the same hashing rule in Equation 1, the entity embeddings are composed from the sparsified codebooks $\widehat{\mathbf{P}}$ and $\widehat{\mathbf{Q}}$, denoted by $\mathbf{e}_{k}=\widehat{\mathbf{p}}+\widehat{\mathbf{q}}$. As stated in Section I, given the compositional nature of entity embeddings, it is beneficial to enforce that the pruned dimensions in $\widehat{\mathbf{p}}$ and $\widehat{\mathbf{q}}$ are complementary rather than repetitive. Ideally, for all $k\in[0,|\mathcal{U}|+|\mathcal{I}|-1]$, we want to achieve this for every possible $(\mathbf{\hat{p}},\mathbf{\hat{q}})$ pair, leading to the following optimization goal: $\underset{\widehat{\mathbf{P}},\,\widehat{\mathbf{Q}}}{\mathrm{argmin}}-\sum_{k_{p}=1}^{b}\sum_{k_{q}=1}^{b}||\widehat{\mathbf{P}}[k_{p}]+\widehat{\mathbf{Q}}[k_{q}]||_{0},$ (6) which is essentially maximizing the total number of non-zero entries in all compositional entity embeddings. However, the non-convexity of $L_{0}$ norm and time-consuming iterative computation over $b^{2}$ combinations suggest that we need a more computationally feasible solution to regularize the pruning process. 
Thus, we design the following instance-level regularization term to approximate the desired pruning behavior in Equation 6:
$\mathcal{L}_{prune}=-\sum_{k\in\mathcal{B}}\|\tanh\left(\eta\mathbf{e}_{k}\right)\|^{2}_{2},$ (7)
where $\mathcal{B}$ is the current training batch. With a large positive temperature value $\eta$ (e.g., $10^{3}$), for any non-zero entry $e\in\mathbf{e}_{k}$ we have $\tanh(\eta e)\approx\pm 1$, while pruned entries remain exactly $0$. Rationale of the Proposed Pruning Regularizer. To better motivate the design of $\mathcal{L}_{prune}$, we use a toy example to showcase the relationship between $\mathcal{L}_{prune}$ and different pruning behaviors on $\widehat{\mathbf{p}}$ and $\widehat{\mathbf{q}}$ that compose $\mathbf{e}_{k}$. In a nutshell, with the rescaled $\tanh(\cdot)$, the squared $L_{2}$ norm counts the number of non-zero entries in each $\mathbf{e}_{k}$. Despite consuming the same amount of parameters (say, 6 in total), a pair of pruned meta-embeddings may correspond to three possible cases in which their non-zero dimensions completely overlap, are partially complementary, or are completely complementary. As the density of the resultant entity embedding grows, the squared norm returns a higher value, which translates into a lower $\mathcal{L}_{prune}$. Without the pruning regularizer, the sparsification focuses on lowering the parameter number only, and may result in suboptimal utility of the compositional entity embeddings. In contrast, $\mathcal{L}_{prune}$ advocates minimizing the amount of zero-valued entries in $\mathbf{e}_{k}$, which in fact rewards a mismatching pattern in the pruned dimensions between every pair of $(\widehat{\mathbf{p}},\widehat{\mathbf{q}})$. Furthermore, $\mathcal{L}_{prune}$ is fully differentiable (because we design $\mathcal{L}_{prune}$ only for regularization purposes, the rescaled $\tanh(\cdot)$ is a simple yet effective substitute for a non-differentiable, hard selection function like $\mathrm{sign}(\cdot)$ when counting non-zero entries), and can be efficiently appended to the mini-batch training process.
### III-C Joint Optimization of Recommendation and Pruning
We provide an overview of the optimization process of CERP in Figure 2. After obtaining the sparsified codebooks $\widehat{\mathbf{P}}$ and $\widehat{\mathbf{Q}}$, we may retrieve the sparsified entity embeddings. For a (user, item) pair indexed by $(u,i)$, with the retrieved user and item embeddings denoted as $\mathbf{e}_{u},\mathbf{e}_{i}$, the user-item affinity score for personalized item ranking is computed as follows:
$\hat{y}_{ui}=f_{rec}(\mathbf{e}_{u},\mathbf{e}_{i}),$ (8)
where $f_{rec}(\cdot,\cdot)$ is the base recommender that takes the sparsified user and item embeddings as its input, and estimates the pairwise user-item similarity $\hat{y}_{ui}$. The choice of $f_{rec}(\cdot,\cdot)$ can be an arbitrary latent factor model that requires embeddings. Since we are optimizing the quality of personalized recommendation ranking, we adopt the Bayesian personalized ranking (BPR) loss [47] for model parameter learning:
$\mathcal{L}_{BPR}=\sum_{(u,i^{+},i^{-})\in\mathcal{B}}-\ln{\sigma(\hat{y}_{ui^{+}}-\hat{y}_{ui^{-}})},$ (9)
where $\mathcal{B}$ is a training batch drawn from the full dataset, and each triplet $(u,i^{+},i^{-})$ contains the sampled user $u$’s positively rated item $i^{+}$ and a negatively rated/unvisited item $i^{-}$. For simplicity, we omit the $L_{2}$ penalization which is a common practice for preventing overfitting.
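Both loss terms introduced so far are simple to implement. The sketch below (a PyTorch-style illustration under our own naming, not the authors' code) shows the pruning regularizer of Equation 7 and the BPR loss of Equation 9 as they would appear in a mini-batch step:

```python
import torch
import torch.nn.functional as F

def pruning_regularizer(e_batch: torch.Tensor, eta: float = 1e3) -> torch.Tensor:
    """Equation (7): negative squared L2 norm of tanh(eta * e_k), summed over the batch.

    With a large eta, tanh(eta * e) is close to +/-1 for non-zero entries and stays 0
    for pruned entries, so the squared norm approximately counts usable dimensions.
    """
    return -(torch.tanh(eta * e_batch) ** 2).sum()

def bpr_loss(pos_scores: torch.Tensor, neg_scores: torch.Tensor) -> torch.Tensor:
    """Equation (9): negative sum over the batch of log sigmoid(y_ui+ - y_ui-)."""
    return -F.logsigmoid(pos_scores - neg_scores).sum()
```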
Finally, we define the joint loss to facilitate simultaneous optimization of both the recommendation and sparsification tasks:
$\mathcal{L}=\mathcal{L}_{BPR}+\gamma\mathcal{L}_{prune},$ (10)
where $\gamma\in[0,1]$ adjusts the weight of $\mathcal{L}_{prune}$ in the total loss $\mathcal{L}$. The higher the value of $\gamma$, the stronger the constraint on the pruning behavior of CERP, hence a slower pruning process may be observed. To coordinate pruning efficiency and embedding quality simultaneously, we additionally impose an exponential decay on $\gamma$. Rationale of Exponential Decay on $\gamma$. We first set $\gamma$ to an initial value; then, by the end of each pruning epoch, $\gamma$ is reduced by half. The motivation for having a gradually decreasing $\gamma$ value is that, at an early stage of the pruning process, we aim to retain valuable embedding elements so the embedding quality is preserved. When the embedding table becomes highly sparse as the pruning proceeds, instead of overly emphasizing where to prune, it is more beneficial to focus on optimizing the information stored in each sparse embedding by lowering the impact of $\mathcal{L}_{prune}$. From Figure 2, we may see that the recommendation loss and the pruning regularization loss are optimized at the same time. The joint training is terminated once the total parameter consumption of the two sparsified codebooks meets the target sparsity rate $s$:
$s=\left(1-\frac{||\widehat{\mathbf{P}}||_{0}+||\widehat{\mathbf{Q}}||_{0}}{(|\mathcal{U}|+|\mathcal{I}|)d}\right)\times 100\%,$
i.e., the fraction of parameters pruned away relative to a full embedding table with $|\mathcal{U}|+|\mathcal{I}|$ $d$-dimensional vectors.
### III-D Parameter Retraining
Work that utilizes pruning as the embedding optimization strategy typically requires a parameter retraining stage after the pruning stage is conducted [16, 18], in order to prevent the interference with embedding value learning caused by pruning itself. We follow this practice in our work as well. When the joint optimization described above is done, we can obtain the fixed pruning masks from both sparsified codebooks:
$\mathbf{\widehat{P}}_{mask}=\lvert\mathrm{sign}(\mathbf{\widehat{{P}}})\rvert,\,\,\,\,\mathbf{\widehat{Q}}_{mask}=\lvert\mathrm{sign}(\mathbf{\widehat{{Q}}})\rvert.$ (11)
It is clear that $\mathbf{\widehat{P}}_{mask},\mathbf{\widehat{Q}}_{mask}\in\\{0,1\\}^{b\times d}$ record whether an element in $\mathbf{{P}}$ and $\mathbf{{Q}}$ should be pruned or not. Then, element-wise multiplication is performed with $\mathbf{\widehat{P}}_{mask}$ and $\mathbf{\widehat{Q}}_{mask}$ respectively to mask out deactivated codebook entries during retraining. The retraining is summarized as follows:
$\underset{\mathbf{{P}},\mathbf{Q},\Theta}{\mathrm{argmin}}\,\mathcal{L}_{BPR}(\mathcal{D},\mathbf{{P}}\odot\mathbf{\widehat{P}}_{mask},\mathbf{{Q}}\odot\mathbf{\widehat{Q}}_{mask}),$ (12)
where $\Theta$ denotes all trainable parameters in the recommender $f_{rec}(\cdot,\cdot)$. When the retraining converges, only the sparse codebooks $\widehat{\mathbf{{P}}}=\mathbf{{P}}\odot\mathbf{\widehat{P}}_{mask}$ and $\widehat{\mathbf{{Q}}}=\mathbf{{Q}}\odot\mathbf{\widehat{Q}}_{mask}$ need to be kept as the embedding component for the recommendation model.
## IV Experiments
In this section, we organize experiments to evaluate the effectiveness of CERP.
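To close out the method description before the empirical study, the stopping criterion and the retraining masks above can be written as two small helpers. This is a sketch of our own; tensor names follow the notation of Section III.

```python
import torch

def sparsity_rate(P_hat: torch.Tensor, Q_hat: torch.Tensor,
                  num_entities: int, dim: int) -> float:
    """Fraction of a full (num_entities x dim) table pruned away by the two codebooks."""
    kept = (P_hat != 0).sum().item() + (Q_hat != 0).sum().item()
    return 1.0 - kept / (num_entities * dim)

def binary_masks(P_hat: torch.Tensor, Q_hat: torch.Tensor):
    """Equation (11): fixed 0/1 masks derived from the sparsified codebooks."""
    return (P_hat != 0).float(), (Q_hat != 0).float()
```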
Specifically, we are interested in answering the following research questions (RQs): * • RQ1: Does our framework work well compared to other baselines under various memory budgets? * • RQ2: What is the effect of CERP’s key components? * • RQ3: How do different hyperparameter settings affect CERP’s performance? ### IV-A Experimental Settings #### IV-A1 Datasets We use two publicly available benchmark datasets: Gowalla and Yelp2020. Gowalla dataset is available on LightGCN’s [3] official code repository222https://github.com/gusye1234/LightGCN-PyTorch, while Yelp2020 can be found on HGCF’s [48] official code repository333https://github.com/layer6ai-labs/HGCF. The detailed statistics of both datasets are summarized in Table I. We adopt train/test/validation splits protocol in [49] to randomly select 80% of interactions as the train set. The test set contains the remaining 20% of interactions and 10% of the train set was further picked to form the validation set. For each user-positive item interaction, we sample 5 negative items. TABLE I: Statistics of datasets used in our work. Dataset | #User | #Item | #Interaction | Density ---|---|---|---|--- Gowalla | 29,858 | 40,981 | 1,027,370 | 0.084% Yelp2020 | 71,135 | 45,063 | 1,782,999 | 0.056% TABLE II: Performance comparison $w.r.t.$ different embedding sparsity rates, where “Avg Dim” denotes the average of the actual embedding dimension achieved by each method. Percentages indicate the performance difference between CERP and the best baseline results. We use bold font to highlight the best result under each target sparsity rate $s$. | Gowalla | Yelp2020 ---|---|--- | MLP | LightGCN | MLP | LightGCN Method | Sparsity | | Avg --- Dim N@10 | R@10 | Sparsity | | Avg --- Dim N@10 | R@10 | Sparsity | | Avg --- Dim N@10 | R@10 | Sparsity | | Avg --- Dim N@10 | R@10 ESAPN | 87.89% | 15.5 | 0.0045 | 0.0051 | 89.14% | 13.9 | 0.0282 | 0.0292 | 89.83% | 13.02 | 0.0035 | 0.0040 | 90.35% | 12.35 | 0.0064 | 0.0078 AutoEmb | 89.84% | 13 | 0.0013 | 0.0012 | 89.84% | 13 | 0.0275 | 0.0275 | 89.84% | 13 | 0.0003 | 0.0003 | 89.84% | 13 | 0.0073 | 0.0090 OptEmbed | 89.97% | 12.84 | 0.0276 | 0.0276 | 89.34% | 13.64 | 0.0373 | 0.0369 | 88.31% | 14.96 | 0.0055 | 0.0068 | 87.96% | 15.41 | 0.0084 | 0.0089 DHE | 90.63% | 12 | 0.0081 | 0.0069 | 90.63% | 12 | 0.0046 | 0.0050 | 90.63% | 12 | 0.0017 | 0.0023 | 90.63% | 12 | 0.0013 | 0.0016 UD | 90% | 12 | 0.0265 | 0.0266 | 90% | 12 | 0.0875 | 0.0797 | 90% | 12 | 0.0049 | 0.0052 | 90% | 12 | 0.0211 | 0.0251 PEP | 12.79 | 0.0251 | 0.0252 | 12.79 | 0.0685 | 0.0607 | 12.80 | 0.0036 | 0.0046 | 12.78 | 0.0157 | 0.0186 QR | 128 | 0.0288 | 0.0273 | 128 | 0.0776 | 0.0724 | 128 | 0.0054 | 0.0062 | 128 | 0.0193 | 0.0225 CERP | 124.33 | 0.0312 | 0.0288 | 125.39 | 0.0965 | 0.0926 | 122.94 | 0.0061 | 0.0067 | 125.79 | 0.0230 | 0.0267 | | (+8.3%) | (+5.7%) | | (+10.3%) | (+16.2%) | | (+12.7%) | (+8.0%) | | (+9.0%) | (+6.4%) UD | 95% | 6 | 0.0248 | 0.0257 | 95% | 6 | 0.0617 | 0.0568 | 95% | 6 | 0.0042 | 0.0048 | 95% | 6 | 0.0161 | 0.0190 PEP | 6.39 | 0.0277 | 0.0268 | 6.39 | 0.0664 | 0.0589 | 6.40 | 0.0052 | 0.0055 | 6.39 | 0.0111 | 0.0121 QR | 128 | 0.0280 | 0.0254 | 128 | 0.0598 | 0.0546 | 128 | 0.0056 | 0.0063 | 128 | 0.0151 | 0.0166 CERP | 83.96 | 0.0307 | 0.0288 | 84.88 | 0.0913 | 0.0877 | 80.85 | 0.0057 | 0.0067 | 82.37 | 0.0206 | 0.0236 | | (+9.6%) | (+7.1%) | | (+37.6%) | (+48.8%) | | (+1.7%) | (+6.0%) | | (+28.0%) | (+24.3%) UD | 99% | 1 | 0.0273 | 0.0261 | 99% | 1 | 0.0416 | 0.0406 | 99% | 1 | 0.0043 | 0.0050 | 99% | 1 | 0.0092 | 
0.0104 PEP | 1.28 | 0.0264 | 0.0277 | 1.28 | 0.0423 | 0.0346 | 1.28 | 0.0052 | 0.0054 | 1.28 | 0.0042 | 0.0052 QR | 128 | 0.0222 | 0.0212 | 128 | 0.0575 | 0.0540 | 128 | 0.0058 | 0.0067 | 128 | 0.0071 | 0.0081 CERP | 18.30 | 0.0298 | 0.0279 | 17.99 | 0.0831 | 0.0816 | 18.66 | 0.0061 | 0.0071 | 18.61 | 0.0167 | 0.0190 | | (+9.0%) | (+0.7%) | | (+44.5%) | (+51.1%) | | (+5.1%) | (+5.6%) | | (+82.3%) | (+83.1%) #### IV-A2 Backbone Recommenders and Baselines CERP is model-agnostic to various latent factor recommenders for lowering the memory consumption of their embedding tables. To comprehensively evaluate our method’s efficacy and generalizability across different base recommenders, we pair CERP with two recommenders, namely the multi-layer percerptron (MLP) from neural collaborative filtering [2] (i.e., its deep component), and the light graph convolution network (LightGCN) [3], which are commonly used backbones [50, 51, 52]. Specifically, both recommenders will have their full embedding table replaced by the sparsified one in CERP, while all other structural designs remain unchanged. CERP is compared with the following lightweight embedding baselines, which are all model-agnostic: * • ESAPN [15]: An AutoML-based dimension search algorithm that utilizes reinforcement learning (RL) to solve the discrete embedding size selection problem. * • AutoEmb [12]: A differentiable, AutoML-based dimension search algorithm that performs soft selection by using the weighted sum on embedding vectors with different dimension sizes. * • PEP [16]: An automatic embedding sparsification technique that solely relies on $L_{1}$ regularized pruning to reach desired memory budget. * • QR [9]: A work that deploys compositional embedding structure via quotient- remainder hashing trick for meta-embedding indexing. * • OptEmbed [18]: The state-of-the-art embedding optimization framework that combines pruning and AutoML-based embedding dimension search. * • DHE [21]: A hashing-based technique that replaces the entire embedding table with fixed-length hash codes and a DNN network to compute unique embedding vectors. Besides, for both base recommenders MLP and LightGCN, we implement a vanilla baseline with a fixed embedding size for all users and items, where the fixed embedding size is set to the maximum integer allowed for each given sparsity rate. We term this baseline the unified dimensionality (UD) approach. #### IV-A3 Evaluation Metrics We follow the common practice in recommendation research [3, 53, 54, 49] to use $\textit{NDCG}@N$ [55] and $Recall@N$ as evaluation metrics and $N$ is fixed to $10$. For UD, PEP, QR and CERP, to testify their effectiveness under different memory budgets, we choose three embedding sparsity rates $s\in\\{90\%,95\%,99\%\\}$, where the compressed embedding table is guaranteed to have no more than $sd(|\mathcal{U}|+|\mathcal{I}|)$ parameters. Notably, ESAPN, AutoEmb, OptEmbed and DHE have a more performance-focused optimization objective and lack a mechanism to precisely control the resulting embedding sparsity, hence we tune them to obtain a sparsity as close to $90\%$ as possible and only report their performance under their final sparsity achieved. TABLE III: Performance comparison between default settings and settings with particular component modified. Default means settings with $\mathcal{L}_{prune}$ and exponential decay on $\gamma$ enabled, and the bucket size is balanced. 
Overlap rate in our context refers to the percentage of overlapping non-zero dimensions between the two meta-embeddings used for composing all entity embeddings. | | Gowalla | Yelp2020 ---|---|---|--- | | MLP | LightGCN | MLP | LightGCN Sparsity | Variant | | Avg --- Dim N@10 | | Overlap --- Rate | Avg --- Dim N@10 | | Overlap --- Rate | Avg --- Dim N@10 | | Overlap --- Rate | Avg --- Dim N@10 | | Overlap --- Rate 90% | Default | 124.33 | 0.0312 | 49.17% | 125.39 | 0.0965 | 47.89% | 122.94 | 0.0061 | 51.80% | 125.79 | 0.0230 | 49.85% w/o $\mathcal{L}_{prune}$ | 118.23 | 0.0310 | 53.40% | 119.34 | 0.0658 | 52.93% | 118.91 | 0.0049 | 54.62% | 118.70 | 0.0069 | 54.93% w/o $\gamma$ decay | 125.52 | 0.0302 | 49.07% | 125.93 | 0.0969 | 47.59% | 123.07 | 0.0053 | 47.58% | 126.01 | 0.0222 | 49.91% Imbalanced meta-embeddings | 128 | 0.0313 | 70.75% | 128 | 0.0815 | 70.79% | 128 | 0.0056 | 72.58% | 128 | 0.0200 | 72.60% 95% | Default | 83.96 | 0.0307 | 8.07% | 84.88 | 0.0913 | 6.80% | 80.85 | 0.0057 | 10.94% | 82.37 | 0.0206 | 9.89% w/o $\mathcal{L}_{prune}$ | 74.39 | 0.0295 | 14.83% | 78.99 | 0.0665 | 11.99% | 75.80 | 0.0057 | 14.40% | 73.40 | 0.0070 | 16.59% w/o $\gamma$ decay | | Sparsity stalls at 94.94% --- Sparsity stalls at 93.86% | 81.18 | 0.0061 | 4.16% | 82.58 | 0.0189 | 6.43% Imbalanced meta-embeddings | 128 | 0.0293 | 35.35% | 128 | 0.0743 | 35.40% | 128 | 0.0059 | 36.31% | 128 | 0.0191 | 36.31% 99% | Default | 18.30 | 0.0298 | 0.43% | 17.99 | 0.0831 | 0.76% | 18.66 | 0.0061 | 0.32% | 18.61 | 0.0167 | 0.41% w/o $\mathcal{L}_{prune}$ | 17.10 | 0.0287 | 1.26% | 17.24 | 0.0646 | 1.33% | 16.27 | 0.0058 | 2.00% | 15.39 | 0.0069 | 2.76% w/o $\gamma$ decay | | Sparsity stalls at 94.94% --- | Sparsity stalls at 93.86% --- | Sparsity stalls at 97.43% --- | Sparsity stalls at 97.00% --- Imbalanced meta-embeddings | 128 | 0.0280 | 7.01% | 127.99 | 0.0572 | 7.04% | 128 | 0.0058 | 7.21% | 128 | 0.0145 | 7.25% #### IV-A4 Implementation Details In our work, the full embedding size $d$ for $\mathbf{P}$ and $\mathbf{Q}$ is set to $128$. The bucket size $b$ is set to $5,000$ for Gowalla dataset and $8,000$ for Yelp2020 dataset. All trainable parameters are initialized via Xavier Initialization [56]. The pruning temperature $\eta$ is fixed at 100. We use Adam optimizer, with the optimal learning rate selected from $\\{{1}\mathrm{e}{-2},{1}\mathrm{e}{-3},{1}\mathrm{e}{-4}\\}$ and weight decay selected from $\\{{1}\mathrm{e}{-5},{1}\mathrm{e}{-6},{1}\mathrm{e}{-7}\\}$. For MLP recommender, the number of hidden layers is set to $3$. The number of message-passing layers in LightGCN is set to $4$. For the two search-based methods ESAPN and AutoEmb, we use smaller values in their candidate dimension (i.e., action) set, so as to let their final embeddings achieve comparable sparsity rates. For DHE, since it computes embedding vectors by inputting dense hash encodings to DNN, we set the hash code length to 12 to match the lowest sparsity rate of $90\%$. For PEP, we adopt its optimal settings reported in the paper [16]. For QR, we adjust the bucket size of the remainder embedding table so that the total parameter size of the two meta-embedding tables can match the budgeted number of parameters. (a) CERP with MLP (b) CERP with LightGCN Figure 3: Visualization of sampled entities embedding vectors with and without pruning regularizer in $99\%$ sparsity pruned embedding tables. (*A red cell means a non-zero value in the embedding vector, a light blue cell indicates otherwise.) 
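For reference, NDCG@10 and Recall@10 in the result tables follow their standard binary-relevance definitions; the authors' exact evaluation code is not given, so the following is only a conventional sketch of how these metrics are typically computed:

```python
import numpy as np

def recall_at_k(ranked_items, relevant, k=10):
    """Fraction of a user's held-out items that appear in the top-k ranking."""
    hits = sum(1 for item in ranked_items[:k] if item in relevant)
    return hits / max(len(relevant), 1)

def ndcg_at_k(ranked_items, relevant, k=10):
    """Binary-relevance NDCG@k: DCG of the top-k list divided by the ideal DCG."""
    dcg = sum(1.0 / np.log2(i + 2) for i, item in enumerate(ranked_items[:k]) if item in relevant)
    idcg = sum(1.0 / np.log2(i + 2) for i in range(min(len(relevant), k)))
    return dcg / idcg if idcg > 0 else 0.0
```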
### IV-B Overall Performance (RQ1)
The overall performance benchmark of all tested methods under different memory budgets is shown in Table II. We summarize our findings as follows. Recommendation Performance. Regardless of the choice of base recommender, our method outperforms all baselines on both datasets. While methods with the LightGCN base recommender perform better than those with the MLP base recommender across the three memory budgets, LightGCN is more sensitive to the memory budget under fixed-size settings or when implemented with PEP and QR. When implemented with CERP, its performance degradation under a tight memory budget is reduced significantly. This can be witnessed in the large relative performance gains on both datasets under the $99\%$ sparsity. As for methods that cannot precisely control the final embedding size, we find that none of them obtains results comparable to CERP under the $90\%$ sparsity memory budget. Among them, OptEmbed attains relatively better performance. The performance of both ESAPN and AutoEmb heavily depends on the choice of the base recommender. DHE is the weakest method on both datasets, indicating that using dense hash encodings with small dimension sizes does not avoid the expressiveness limitation of generated entity embeddings. Average Embedding Dimension. UD, PEP, QR and CERP can meet various memory budgets, while PEP squeezes the average entity embedding size to an extremely low 1.28 at $99\%$ sparsity, compared to a size of around 18 achieved by CERP. As a non-pruning method, QR uniformly assigns the full size to all embeddings but still underperforms due to the limited number of meta-embeddings. This demonstrates CERP’s strong capability in generating expressive embeddings under an extremely tight memory budget.
Figure 4: Performance of CERP w.r.t. different hyperparameter settings and backbone recommenders, where “soft thr init” is a shorthand for soft threshold initializer.
### IV-C Discussions on Key Model Components (RQ2)
To validate the performance gain from each key component of CERP, we carry out ablation studies on the pruning regularizer $\mathcal{L}_{prune}$, the exponential decay on $\gamma$, as well as the use of our balanced hashing trick. The performance benchmark used in this section is shown in Table III. #### IV-C1 Pruning Regularizer To study the impact of the pruning regularizer, we conduct pruning on each dataset with and without $\mathcal{L}_{prune}$, and then test their performance. It is discovered that the settings with the pruning regularizer always obtain a higher average embedding dimension than those without it on both datasets. The model accuracy of settings with the regularizer enabled also improves. In terms of non-zero value overlap rates between meta-embedding vectors, applying the pruning regularizer effectively reduces the collision of non-zero dimensions of two meta-embedding vectors, making them complementary. As a qualitative analysis, we sample $10$ entities in the Gowalla dataset and visualize their composed embeddings from $99\%$ pruned embedding tables in Figure 3. It is witnessed that with the regularizer, the pruned embeddings are denser for some particular entities, showing that CERP can assign more usable dimensions to important entity embeddings to retain their fidelity. #### IV-C2 Exponential Decay on $\gamma$ We switch on and off the exponential decay behavior on the control factor of the pruning regularizer $\gamma$ to examine its effects on the quality of the pruned embedding table and on pruning efficiency.
We set a maximum pruning epoch limit of $50$ in case there are settings that never terminate. In terms of pruning efficiency, the exponential decay does not affect the number of pruning epochs required to reach $90\%$ sparsity. However, for more rigorous sparsity targets, settings without exponential decay on $\gamma$ may fail to reach them. Regarding model accuracy, the $\textit{NDCG}@10$ scores of pruned embedding tables created by settings with exponential decay switched on are in general better than those without. Our experiments confirm the necessity of applying exponential decay on the pruning regularization control factor $\gamma$ so that CERP can shift its focus between preserving embedding quality and accelerating pruning. #### IV-C3 Balanced Codebook Hashing In CERP, we leverage two codebooks with the same bucket size $b$ for compositional embeddings, such that each meta-embedding is shared by as few entities as possible. Alternatively, Shi et al. [9] suggest an imbalanced bucket size arrangement scheme with a quotient-remainder hashing trick. In their proposal, the bucket size of the quotient embedding table fully depends on the bucket size of the remainder embedding table. Such an arrangement reduces the bucket size of the quotient embedding table significantly. We implement this bucket size arrangement in CERP as well and test its performance against our balanced setting. To make a fair comparison between the two bucket size schemes, we define the bucket size of the remainder embedding table in the imbalanced setting as $b^{\prime}=2\times b$. It follows that $b^{\prime}$ in Gowalla is $10,000$ and in Yelp2020 it is $16,000$. One noticeable change with the imbalanced bucket size scheme is that the retrieved embedding vectors barely contain zero values despite using the pruning technique. This is due to the fact that the bucket size of the quotient embedding table is only a fraction of the bucket size of the remainder embedding table, which makes the pruning algorithm consider every element in the quotient embedding table equally important and hence worth retaining. As shown in Table III, the consequence is a degradation in performance. A plausible cause is the high collision of non-zero values, which lowers the uniqueness of entity embeddings. ### IV-D Hyperparameter Sensitivity (RQ3) In this section, we explore our framework CERP’s sensitivity to three crucial hyperparameters, namely the initial value of $\gamma$, the bucket size of the two codebooks, as well as the initialization method of the soft threshold for pruning. We visualize the performance trends under the $99\%$ sparsity memory budget in Figure 4. #### IV-D1 Initial Value of $\gamma$ The set of initial $\gamma$ values for testing is $\\{{1}\mathrm{e}{-4},{1}\mathrm{e}{-3},{1}\mathrm{e}{-2},{1}\mathrm{e}{-1},1\\}$. Figure 4a and Figure 4b show the performance of MLP and LightGCN respectively. The MLP settings are generally less sensitive to the initial value of $\gamma$ than the LightGCN settings. This is especially true on the Gowalla dataset, indicating that the initial value of $\gamma$ is crucial to settings with the LightGCN base recommender, especially for datasets with relatively dense entity interactions. #### IV-D2 Bucket Size We choose the bucket size from $\\{$4,000; 5,000; 6,000; 7,000; 8,000$\\}$ for both codebooks to conduct this part of hyperparameter testing. The performance results on both datasets are shown in Figure 4c and Figure 4d.
It is discovered that blindly increasing the bucket size does not guarantee performance improvement. This is mainly because, under a fixed memory constraint, CERP faces a trade-off between embedding uniqueness and embedding fidelity. A higher bucket size yields sparser meta-embedding vectors for entities, and consequently sparser compositional embeddings, so the number of usable parameters per embedding is sacrificed. Using too few buckets, on the other hand, hurts embedding uniqueness. #### IV-D3 Soft Threshold Initialization To study the impact of the soft threshold’s initial values on embedding quality, we conduct experiments by setting all values in the soft threshold base matrix to one (termed “all ones” initialization) or randomizing it using the Uniform, Normal, Long-tail, or Xavier Uniform distribution [56]. The performance trend is depicted in Figure 4e and Figure 4f. We find that most settings using the long-tail distribution perform markedly better than the others. In settings with the LightGCN base recommender, the simple “all ones” initialization is sufficient for attaining results comparable to the setting that uses the long-tail distribution.
## V Conclusion
In this paper, we acknowledge the embedding table’s space efficiency dilemma in latent factor recommender models and identify the possible side effects of contemporary embedding optimization techniques. We propose a novel compact embedding framework, CERP, to overcome the recognized challenges. In CERP, we take advantage of the benefits of both dynamic embedding size allocation and compositional embeddings. We design an innovative regularizer to enforce complementary pruning behavior between the two codebooks. Our comprehensive experiments confirm CERP’s capability to obtain embedding tables that satisfy various memory budgets. The performance results also indicate the superiority of CERP over other embedding optimization baselines.
## VI Acknowledgments
This work is supported by the Australian Research Council under the streams of Future Fellowship (No. FT210100624), Discovery Project (No. DP190101985), and Discovery Early Career Research Award (No. DE200101465 and No. DE230101033).
## References
* [1] S. Rendle, “Factorization Machines,” in _ICDM_ , 2010, pp. 995–1000. * [2] X. He, L. Liao, H. Zhang, L. Nie, X. Hu, and T.-S. Chua, “Neural Collaborative Filtering,” in _WWW_ , 2017, pp. 173–182. * [3] X. He, K. Deng, X. Wang, Y. Li, Y. Zhang, and M. Wang, “LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation,” in _SIGIR_ , 2020, pp. 639–648. * [4] W.-C. Kang, D. Z. Cheng, T. Chen, X. Yi, D. Lin, L. Hong, and E. H. Chi, “Learning multi-granular quantized embeddings for large-vocab categorical features in recommender systems,” in _Companion Proceedings of the Web Conference 2020_ , 2020, pp. 562–566. * [5] Q. V. H. Nguyen, C. T. Duong, T. T. Nguyen, M. Weidlich, K. Aberer, H. Yin, and X. Zhou, “Argument discovery via crowdsourcing,” _The VLDB Journal_ , vol. 26, pp. 511–535, 2017. * [6] Y. Sun, F. Yuan, M. Yang, G. Wei, Z. Zhao, and D. Liu, “A generic network compression framework for sequential recommender systems,” in _SIGIR_ , 2020. * [7] S. Zheng, W. Wang, J. Qu, H. Yin, W. Chen, and L. Zhao, “Mmkgr: Multi-hop multi-modal knowledge graph reasoning,” in _ICDE_ , 2023. * [8] X. Xia, H. Yin, J. Yu, Q. Wang, G. Xu, and Q. V. H. Nguyen, “On-device next-item recommendation with self-supervised knowledge distillation,” in _SIGIR_ , 2022, pp. 546–555. * [9] H.-J. M.
Shi, D. Mudigere, M. Naumov, and J. Yang, “Compositional Embeddings Using Complementary Partitions for Memory-Efficient Recommendation Systems,” _SIGKDD_ , pp. 165–175, 2020. * [10] T. Chen, H. Yin, Y. Zheng, Z. Huang, Y. Wang, and M. Wang, “Learning elastic embeddings for customizing on-device recommenders,” in _SIGKDD_ , 2021, pp. 138–147. * [11] R. He and J. McAuley, “Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering,” in _WWW_ , 2016. * [12] X. Zhao, H. Liu, W. Fan, H. Liu, J. Tang, C. Wang, M. Chen, X. Zheng, X. Liu, and X. Yang, “Autoemb: Automated embedding dimensionality search in streaming recommendations,” in _ICDM_ , 2021, pp. 896–905. * [13] A. A. Ginart, M. Naumov, D. Mudigere, J. Yang, and J. Zou, “Mixed dimension embeddings with application to memory-efficient recommendation systems,” in _ISIT_ , 2021, pp. 2786–2791. * [14] M. R. Joglekar, C. Li, M. Chen, T. Xu, X. Wang, J. K. Adams, P. Khaitan, J. Liu, and Q. V. Le, “Neural Input Search for Large Scale Recommendation Models,” in _SIGKDD_ , 2020, pp. 2387–2397. * [15] H. Liu, X. Zhao, C. Wang, X. Liu, and J. Tang, “Automated Embedding Size Search in Deep Recommender Systems,” in _SIGIR_ , 2020. * [16] S. Liu, C. Gao, Y. Chen, D. Jin, and Y. Li, “Learnable Embedding Sizes for Recommender Systems,” _ICLR_ , 2021. * [17] L. Qu, Y. Ye, N. Tang, L. Zhang, Y. Shi, and H. Yin, “Single-shot embedding dimension search in recommender system,” in _SIGIR_ , 2022. * [18] F. Lyu, X. Tang, H. Zhu, H. Guo, Y. Zhang, R. Tang, and X. Liu, “OptEmbed: Learning Optimal Embedding Table for Click-through Rate Prediction,” in _CIKM_ , 2022, pp. 1399–1409. * [19] C. Zhang, Y. Liu, Y. Xie, S. I. Ktena, A. Tejani, A. Gupta, P. K. Myana, D. Dilipkumar, S. Paul, I. Ihara _et al._ , “Model Size Reduction Using Frequency Based Double Hashing for Recommender Systems,” in _RecSys_ , 2020, pp. 521–526. * [20] Y. Li, T. Chen, P.-F. Zhang, and H. Yin, “Lightweight Self-Attentive Sequential Recommendation,” in _CIKM_ , 2021, pp. 967–977. * [21] W.-C. Kang, D. Z. Cheng, T. Yao, X. Yi, T. Chen, L. Hong, and E. H. Chi, “Learning to Embed Categorical Features without Embedding Tables for Recommendation,” in _SIGKDD_ , 2021, pp. 840–850. * [22] K. Zhou and H. Zha, “Learning binary codes for collaborative filtering,” in _SIGKDD_ , 2012, pp. 498–506. * [23] D. Lian, R. Liu, Y. Ge, K. Zheng, X. Xie, and L. Cao, “Discrete content-aware matrix factorization,” in _SIGKDD_ , 2017, pp. 325–334. * [24] Y. Zhang, H. Yin, Z. Huang, X. Du, G. Yang, and D. Lian, “Discrete deep learning for fast content-aware recommendation,” in _WSDM_ , 2018. * [25] Y. Zhang, I. W. Tsang, H. Yin, G. Yang, D. Lian, and J. Li, “Deep pairwise hashing for cold-start recommendation,” _TKDE_ , 2020. * [26] D. Lian, H. Wang, Z. Liu, J. Lian, E. Chen, and X. Xie, “LightRec: A Memory and Search-Efficient Recommender System,” in _WWW_ , 2020\. * [27] W.-C. Kang and J. McAuley, “Candidate generation with binary codes for large-scale top-n recommendation,” in _CIKM_ , 2019, pp. 1523–1532. * [28] Q. Tan, N. Liu, X. Zhao, H. Yang, J. Zhou, and X. Hu, “Learning to hash with graph neural networks for recommender systems,” in _WWW_ , 2020, pp. 1988–1998. * [29] H. Zhang, F. Shen, W. Liu, X. He, H. Luan, and T.-S. Chua, “Discrete Collaborative Filtering,” in _SIGIR_ , 2016, pp. 325–334. * [30] H. Liu, X. He, F. Feng, L. Nie, R. Liu, and H. Zhang, “Discrete factorization machines for fast feature-based recommendation,” in _IJCAI_ , 2018, pp. 3449–3455. * [31] Y. 
Qu, T. Chen, X. Zhao, L. Cui, K. Zheng, and H. Yin, “Continuous input embedding size search for recommender systems,” in _SIGIR_ , 2023. * [32] R. Zheng, L. Qu, B. Cui, Y. Shi, and H. Yin, “Automl for deep recommender systems: A survey,” _TOIS_ , 2023. * [33] T. Elsken, J. H. Metzen, and F. Hutter, “Neural architecture search: A survey,” _JMLR_ , vol. 20, no. 1, pp. 1997–2017, 2019. * [34] W. Cheng, Y. Shen, and L. Huang, “Differentiable neural input search for recommender systems,” _arXiv preprint arXiv:2006.04466_ , 2020. * [35] H. Pham, M. Guan, B. Zoph, Q. Le, and J. Dean, “Efficient neural architecture search via parameters sharing,” in _ICML_ , 2018. * [36] C. Ramirez, V. Kreinovich, and M. Argaez, “Why l1 is a good approximation to l0: A geometric explanation,” 2013. * [37] K. Weinberger, A. Dasgupta, J. Langford, A. Smola, and J. Attenberg, “Feature hashing for large scale multitask learning,” in _ICML_ , 2009. * [38] A. Desai, Y. Pan, K. Sun, L. Chou, and A. Shrivastava, “Semantically constrained memory allocation (scma) for embedding in efficient recommendation systems,” _arXiv e-prints_ , pp. arXiv–2103, 2021. * [39] M. Hang, T. Schnabel, L. Yang, and J. Neville, “Lightweight compositional embeddings for incremental streaming recommendation,” _arXiv e-prints_ , pp. arXiv–2202, 2022. * [40] Q. Wang, H. Yin, T. Chen, Z. Huang, H. Wang, Y. Zhao, and N. Q. Viet Hung, “Next Point-of-Interest Recommendation on Resource-Constrained Mobile Devices,” in _WWW_ , 2020, pp. 906–916. * [41] X. Xia, J. Yu, Q. Wang, C. Yang, N. Q. V. Hung, and H. Yin, “Efficient on-device session-based recommendation,” _TOIS_ , vol. 41, no. 4, 2023. * [42] C. Yin, D. Zheng, I. Nisa, C. Faloutsos, G. Karypis, and R. Vuduc, “Nimble GNN Embedding with Tensor-Train Decomposition,” in _SIGKDD_ , 2022, pp. 2327–2335. * [43] I. V. Oseledets, “Tensor-train decomposition,” _SISC_ , 2011. * [44] N. Sedaghati, T. Mu, L.-N. Pouchet, S. Parthasarathy, and P. Sadayappan, “Automatic selection of sparse matrix representation on gpus,” in _ICS_ , 2015, pp. 99–108. * [45] P. Virtanen _et al._ , “Scipy 1.0: fundamental algorithms for scientific computing in python,” _Nature methods_ , 2020. * [46] A. Kusupati, V. Ramanujan, R. Somani, M. Wortsman, P. Jain, S. Kakade, and A. Farhadi, “Soft threshold weight reparameterization for learnable sparsity,” in _ICML_ , 2020, pp. 5544–5555. * [47] S. Rendle, C. Freudenthaler, Z. Gantner, and L. Schmidt-Thieme, “BPR: Bayesian Personalized Ranking from Implicit Feedback,” p. 10, 2009\. * [48] J. Sun, Z. Cheng, S. Zuberi, F. Pérez, and M. Volkovs, “Hgcf: Hyperbolic graph convolution networks for collaborative filtering,” in _WWW_ , 2021\. * [49] X. Wang, X. He, M. Wang, F. Feng, and T.-S. Chua, “Neural Graph Collaborative Filtering,” in _SIGIR_ , 2019, pp. 165–174. * [50] T. Chen, H. Yin, G. Ye, Z. Huang, Y. Wang, and M. Wang, “Try this instead: Personalized and interpretable substitute recommendation,” in _SIGIR_ , 2020, pp. 891–900. * [51] T. Chen, H. Yin, J. Long, Q. V. H. Nguyen, Y. Wang, and M. Wang, “Thinking inside the box: learning hypercube representations for group recommendation,” in _SIGIR_ , 2022, pp. 1664–1673. * [52] J. Yu, H. Yin, X. Xia, T. Chen, J. Li, and Z. Huang, “Self-supervised learning for recommender systems: A survey,” _TKDE_ , 2023. * [53] N. Q. V. Hung, H. H. Viet, N. T. Tam, M. Weidlich, H. Yin, and X. Zhou, “Computing crowd consensus with partial agreement,” _TKDE_ , 2017. * [54] H. Chen, Y. Li, X. Sun, G. Xu, and H. 
Yin, “Temporal meta-path guided explainable recommendation,” in _WSDM_ , 2021, pp. 1056–1064. * [55] Y. Wang, L. Wang, Y. Li, D. He, W. Chen, and T.-Y. Liu, “A theoretical analysis of ndcg ranking measures,” in _COLT_ , vol. 8, 2013, p. 6. * [56] X. Glorot and Y. Bengio, “Understanding the difficulty of training deep feedforward neural networks,” in _AISTATS_ , 2010, pp. 249–256.
# DBLP-QuAD: A Question Answering Dataset over the DBLP Scholarly Knowledge Graph
Debayan Banerjee (Universität Hamburg, Hamburg, Germany), Sushil Awale, Ricardo Usbeck, Chris Biemann (2022)
###### Abstract In this work we create a question answering dataset over the DBLP scholarly knowledge graph (KG). DBLP is an on-line reference for bibliographic information on major computer science publications that indexes over 4.4 million publications published by more than 2.2 million authors. Our dataset consists of 10,000 question answer pairs with the corresponding SPARQL queries which can be executed over the DBLP KG to fetch the correct answer. DBLP-QuAD is the largest scholarly question answering dataset. ###### keywords: Question Answering Scholarly Knowledge Graph DBLP Dataset
## 1 Introduction
Over the past decade, knowledge graphs (KG) such as Freebase [1], DBpedia [2], and Wikidata [3] have emerged as important repositories of general information. They store facts about the world in the linked data architecture, commonly in the format of <subject predicate object> triples. These triples can also be visualised as node-edge-node molecules of a graph structure. Much interest has been generated in finding ways to retrieve information from these KGs. Question Answering over Knowledge Graphs (KGQA) is one of the techniques used to achieve this goal. In KGQA, the focus is generally on translating a natural language question to a formal logical form. This task has, in the past, been achieved by rule-based systems [4]. More recently, neural network and machine learning based methods have gained popularity [5]. A scholarly KG is a specific class of KGs that contains bibliographic information. Some well known scholarly KGs are the Microsoft Academic Graph (https://www.microsoft.com/en-us/research/project/microsoft-academic-graph/), OpenAlex (http://openalex.org/), ORKG (https://orkg.org/) and DBLP (https://dblp.org/). DBLP caters specifically to the bibliography of computer science, and as a result, it is smaller in size than other scholarly KGs. We decided to build our KGQA dataset over DBLP due to its focused domain and manageable size so that we could concentrate on adding complexity to the composition of the KGQA dataset itself. Datasets are important, especially for ML-based systems, because such systems often have to be trained on a sample of data before they can be used on a similar test set. To this end, several KGQA datasets exist [6]. However, not all datasets contain a mapping of natural language questions to the logical form (e.g. SPARQL, $\lambda$-calculus, S-expression). Some simply contain the question and the eventual answer. Such datasets cannot be used to train models in the task of semantic parsing. In this work, we present a KGQA dataset called DBLP-QuAD, which consists of 10,000 questions with corresponding SPARQL queries. The question formation process begins with human-written templates, and later, we machine-generate more questions from these templates. DBLP-QuAD consists of a variety of simple and complex questions and also tests the compositional generalisation of the models.
DBLP-QuAD is the largest scholarly KGQA dataset being made available to the public (https://doi.org/10.5281/zenodo.7643971).
## 2 Related Work
The ORKG-QA benchmark [7] is the first scholarly KGQA dataset grounded to ORKG. The dataset was prepared using the ORKG API and focuses on the content of academic publications structured in comparison tables. The dataset is relatively small in size with only 100 question-answer pairs covering only 100 research publications. Several other QA datasets exist, both for IR-based QA [8, 9] and KGQA [10, 11] approaches. Several different approaches have been deployed to generate the KGQA datasets. These approaches range from manual to machine generation. However, most datasets lie in between and use a combination of manual and automated processes. A clear separation can be created between datasets that contain logical forms and those that do not. Datasets that do not require logical forms can be crowd-sourced and such datasets are generally large in size. Crowd sourcing is generally not possible for annotating logical forms because this task requires high domain expertise and it is not easy to find such experts on crowd sourcing platforms. We focus on datasets that contain logical forms. The Free917 and QALD [12, 13] datasets were created manually by domain experts; however, their sizes are relatively small (917 and 806 questions, respectively). WebQuestionsSP and ComplexWebQuestions [14, 15] were developed using existing datasets. WebQuestionsSP is a semantic parsing dataset developed by using questions from WebQuestions [16]. Yih et al. [14] developed a dialogue-like user interface which allowed five expert human annotators to annotate the data in stages. ComplexWebQuestions is a collection of 34,689 complex questions paired with answers and SPARQL queries grounded to the Freebase KG. The dataset builds on WebQuestionsSP by sampling question-query pairs from the dataset and automatically generating questions and complex SPARQL queries with composition, conjunction, superlative, and comparative functions. The machine-generated questions are manually rewritten into natural questions and validated by 200 AMT crowd workers. The OVERNIGHT (ON) approach is a semantic parsing dataset generation framework introduced by Wang et al. [17]. In this approach, the question-logical form pairs are collected with a three-step process. In the first step, the logical forms are generated from a KG. Secondly, the logical forms are converted automatically into canonical questions. These canonical questions are grammatically incorrect but successfully carry the semantic meaning. Lastly, the canonical questions are converted into natural forms via crowdsourcing. Following are some of the datasets developed using this approach. GraphQuestions [18] consists of 5,166 natural questions accompanied by two paraphrases of the original question, an answer, and a valid SPARQL query grounded against the Freebase KG. GraphQuestions uses a semi-automated three-step algorithm to generate the natural questions for the KG. LC-QuAD 1.0 [10] is another semantic parsing dataset for the DBpedia KG. LC-QuAD 1.0 is relatively larger in size with 5,000 natural language English questions and corresponding SPARQL queries. The generation process starts with the set of manually created SPARQL query templates, a list of seed entities, and a whitelist of predicates. Using the list of seed entities, two-hop subgraphs from DBpedia are extracted.
The SPARQL query templates consist of placeholders for both entities and predicates which are instantiated using triples from the subgraph. These SPARQL queries are then used to instantiate natural question templates which form the base for manual paraphrasing by humans. LC-QuAD 2.0 [19] is the second iteration of LC-QuAD 1.0 with 30,000 questions, their paraphrases and their corresponding SPARQL queries compatible with both Wikidata and DBpedia KGs. Similar to LC-QuAD 1.0, in LC-QuAD 2.0 a sub-graph is generated using seed entities and a SPARQL query template is selected based on whitelist predicates. Then, the query template is instantiated using the sub-graph. Next, a template question is generated from the SPARQL query which is then verbalised and paraphrased by AMT crowd workers. LC-QuAD 2.0 has more questions and more variation compared to LC-QuAD 1.0 with paraphrases to the natural questions. GrailQA [20] extends the approach in [18] to generate 64,331 question-S- expression pairs grounded to the Freebase Commons KG. Here, S-expression are linearized forms of graph queries. Query templates extracted from graph queries generated from the KG are used to generate canonical logical forms grounded to compatible entities. The canonical logic forms are then validated by a graduate student if they represent plausible user query or not. Next, another graduate student annotated the validated canonical logic form with a canonical question. Finally, 6,685 Amazon Mechanical Turk workers write five natural paraphrases for each canonical question which are further validated by multiple independent crowd workers. KQA Pro [21] is a large collection of 117,000 complex questions paired with SPARQL queries for the Wikidata KG. KQA Pro dataset also follows the OVERNIGHT approach where firstly facts from the KG are extracted. Next, canonical questions are generated with corresponding SPARQL queries, ten answer choices and a golden answer. The canonical questions are then converted into natural language with paraphrases using crowd sourcing. CFQ [22] (Compositional Freebase Questions) is a semantic parsing dataset developed completely using synthetic generation approaches that consists of simple natural language questions with corresponding SPARQL query against the Freebase KG. CFQ contains 239,357 English questions which are generated using hand-crafted grammar and inference rules with a corresponding logical form. Next, resolution rules are used to map the logical forms to SPARQL queries. The CFQ dataset was specifically designed to measure compositional generalization. In this work, we loosely follow the OVERNIGHT approach to create a large scholarly KGQA dataset for the DBLP KG. ## 3 DBLP KG Figure 1: Example of entries in the DBLP KG with its schema DBLP, which used to stand for Data Bases and Logic Programming666https://en.wikipedia.org/wiki/DBLP, was created in 1993 by Michael Ley at the University of Trier, Germany [23]. The service was originally designed as a bibliographic database for research papers and proceedings from the fields of database systems and logic programming. Over time, the service has grown in size and scope, and today includes bibliographic information on a wide range of topics within the field of computer science. The DBLP RDF data models a person-publication graph shown in Figure 1. The DBLP KG contains two main entities: Person and Publication, where as other metadata such as journal and conferences, affiliation of authors are currently only string literals. 
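To make the schema concrete, the following is a minimal sketch of how a query could be issued against a locally hosted copy of the DBLP KG. The endpoint URL, the namespace IRI and the predicate names are illustrative assumptions based on the person-publication schema of Figure 1, not an official DBLP API.

```python
# A sketch of querying a locally hosted DBLP KG (e.g. a Virtuoso endpoint loaded with
# the RDF dump). Endpoint URL, namespace IRI and predicate names are assumptions.
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "http://localhost:8890/sparql"   # assumed local Virtuoso endpoint

QUERY = """
PREFIX dblp: <https://dblp.org/rdf/schema#>
SELECT ?title WHERE {
  ?paper  dblp:authoredBy ?author ;
          dblp:title      ?title .
  ?author dblp:primaryAffiliation ?affiliation .
  FILTER(CONTAINS(?affiliation, "University of Trier"))
}
LIMIT 10
"""

sparql = SPARQLWrapper(ENDPOINT)
sparql.setQuery(QUERY)
sparql.setReturnFormat(JSON)
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["title"]["value"])
```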
Henceforth, we use the term person and creator interchangeably. At the time of its release, the RDF dump consisted of 2,941,316 person entities, 6,010,605 publication entities, and 252,573,199 RDF triples. DBLP currently does not provide a SPARQL endpoint but the RDF dump can be downloaded and a local SPARQL endpoint such as Virtuoso Server can be setup to run a SPARQL query against the DBLP KG. The live RDF data model on the DBLP website follows the schema shown in Figure 1. However, the RDF snapshots available for download have the coCreatorWith and authorOf predicates missing. Although these predicates are missing, the authoredBy predicate can be used to derive the missing relations. DBLP-QuAD is based on the DBLP KG schema of the downloadable RDF graph. ## 4 Dataset Generation Framework Figure 2: Motivating Example. The generation process starts with (1) selection of a template tuple followed by (2) subgraph generation. Then, literals in subgraph are (3) augmented before being used to (4) instantiate the selected template tuple. The generated data is (5) filtered based on if they produce answers or not. In this work, the aim is to generate a large variety of scholarly questions and corresponding SPARQL query pairs for the DBLP KG. Initially, a small set of templates $T$ containing a SPARQL query template $s_{t}$ and a few semantically equivalent natural language question templates $Q_{t}$ are created. The questions and query templates are created such that they cover a wide range of scholarly metadata user information need while also being answerable using a SPARQL query against the DBLP KG. Next, we synthetically generate a large set of question-query pairs $(q_{i},s_{i})$ suitable for training a neural network semantic parser. The core methodology of the dataset generation framework encompasses instantiating the templates using literals of subgraphs sampled from the KG. Moreover, to capture different representations of the literal values from a human perspective, we randomly mix in different augmentations of these textual representations. The dataset generation workflow is shown in Figure 2. ### 4.1 Templates The first step in the dataset generation process starts with the creation of a template set. After carefully analyzing the ontology of the DBLP KG, we manually wrote 98 pairs of valid SPARQL query templates and a set of semantically equivalent natural language question templates. The template set was written by one author and verified for correctness by another author. The query and question templates consist of placeholder markers instead of URIs, entity surface forms or literals. For example, in Figure 2 (Section $1$), the SPARQL query template includes the placeholders $?c1$ and $[VENUE]$ for DBLP person URI and venue literal respectively. Similarly, the question templates include placeholders $[CREATOR\\_NAME]$ and $[VENUE]$ for creator name and venue literal respectively. The template set covers the two entities creator and publication, and additionally the foreign entity bibtex type. Additionally, they also cover the $11$ different predicates of DBLP KG. The template set consists of template tuples. A template tuple $t=(s_{t},Q_{t},E_{t},P_{t})$ is composed of a SPARQL query template $s_{t}$, a set of semantically equivalent natural language question templates $Q_{t}$, a set of entity placeholders $E_{t}$ and a set of predicates $P_{t}$ used in $s_{t}$. 
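As an illustration, one such template tuple could be stored as the record below. The SPARQL text, question wordings and field names are hypothetical examples in the spirit of Figure 2, not entries copied from the released template set; the two boolean flags at the end are described in the sentences that follow.

```python
# A hypothetical template tuple t = (s_t, Q_t, E_t, P_t). All strings and field names
# are illustrative; they are not taken from the released templates.
template_tuple = {
    "sparql_template": (
        "SELECT DISTINCT ?answer WHERE { "
        "?answer dblp:authoredBy ?c1 . "
        "?answer dblp:publishedIn '[VENUE]' }"
    ),
    "question_templates": [                       # Q_t: semantically equivalent paraphrases
        "Which papers did [CREATOR_NAME] publish in [VENUE]?",
        "List the papers that [CREATOR_NAME] published in [VENUE].",
    ],
    "entity_placeholders": ["?c1", "[VENUE]"],    # E_t
    "predicates": ["authoredBy", "publishedIn"],  # P_t
    "temporal": False,                            # flag described in the following text
    "test_only": False,                           # withheld from train-set generation if True
}
```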
We also add a boolean indicating whether the query template is temporal or not and another boolean indicating whether to use or not use the template while generating $train$ dataset. Each template tuple contains between four and seven paraphrased question templates offering wide linguistic diversity. While most of the question templates use the "Wh-" question keyword, we also include instruction-style paraphrases. We group the template tuples as creator-focused or publication-focused $\epsilon$ and further group them by query types $\delta$. We have $10$ different query types and they include Single Fact, Multiple Facts, Boolean, Negation, Double Negation, Double Intent, Union, Count, Superlative/Comparative, and Disambiguation. The question types are discussed in Section 4.6 with examples. The distribution of templates per entity and query type is shown in Table 1. During dataset generation, for each data instance we sample a template tuple from the template set using stratified sampling maintaining equal distribution of entity types and query types. Query Type | Creator-focused | Publication-focused | Total ---|---|---|--- Single Fact | 5 | 5 | 10 Multiple Facts | 7 | 7 | 14 Boolean | 6 | 6 | 12 Negation | 4 | 4 | 8 Double Negation | 4 | 4 | 8 Double Intent | 5 | 4 | 9 Union | 4 | 4 | 8 Count | 6 | 5 | 11 Superlative/Comparative | 6 | 6 | 12 Disambiguation | 3 | 3 | 6 Total | 50 | 48 | 98 Table 1: Total number of template tuples per query type grouped by entity type ### 4.2 Subgraph generation The second part of the dataset generation framework is subgraph generation. Given a graph $G=(V,E)$ where $V$ are the vertices, and $E$ are edges, we draw a subgraph $g=(v,e)$ where $v\subset V$, $e\subset E$. For the DBLP KG, $V$ are the creator and publication entity URIs or literals, and the $E$ are the predicates of the entities. The subgraph generation process starts with random sampling of a publication entity $v_{i}$ from the DBLP KG. We only draw from the set of publication entities as the RDF snapshot available for download has $authorOf$ and $coCreatorWith$ predicates missing for creator entity. As such, a subgraph centered on a creator entity would not have end vertices that can be expanded further. With the sampled publication entity $v_{i}$, we iterate through all the predicates $e$ to extract creator entities $v^{\prime}$ as well as the literal values. We further, expand the creator entities and extract their literal values to form a two-hop subgraph $g=(v,e)$ as shown in Figure 2 (Section $2$). ### 4.3 Template Instantiation Using the generated subgraph and the sampled template tuple, the template tuple is instantiated with entity URIs and literal values from the subgraph. In the instantiation process, a placeholder marker in a string is replaced by the corresponding text representation. For the SPARQL query template $s_{t}$, we instantiate the creator/publication placeholder markers with DBLP creator/publication entity URIs or literal values for affiliation and conference or journals to create a valid SPARQL query $s$ that returns answers when run against the DBLP KG SPARQL endpoint. In case of natural language question templates, we randomly sample two from the set of question templates $q_{t}^{1},q_{t}^{2}\in Q_{T}$, and instantiate each using only the literal values from the subgraph to form one main natural language question $q^{1}$ and one natural language question paraphrase $q^{2}$. In natural language, humans can write the literal strings in various forms. 
Hence to introduce this linguistic variation, we randomly mix in alternate string representations of these literal values in both natural language questions. The data augmentation process allows us to add heuristically manipulated alternate literal representations to the natural questions. A example of an instantiated template is shown in Figure 2 (Section $3$). ### 4.4 Data Augmentation For the template instantiation process, we perform simple string manipulations to generate alternate literal representations. Then, we randomly select between the original literal representation and the alternate representation to instantiate the natural language questions. For each literal type, we apply different string manipulation techniques which we describe below. Names: For names we generate four different alternatives involving switching parts of names or keeping only initials of the names. Consider the name John William Smith for which we produce Smith, John William, J. William Smith, John W. Smith, and Smith, J. William. Venues: Venues can be represented using either its short form or its full form. For example, ECIR or European Conference on Information Retrieval. In DBLP venues are stored in its short form. We use a selected list of conference and journals777http://portal.core.edu.au/conf- ranks/?search=&by=all&source=CORE2021&sort=atitle&page=1 containing the short form and its equivalent full form to get the full venue names. Duration: About 20% of the templates contain temporal queries, and some of them require dummy numbers to represent duration. For example, the question "In the last five years, which papers did Mante S. Nieuwland publish?" uses the dummy value five. We randomly select between the numerical representation and the textual representation for the dummy duration value. Affiliation: In natural language questions, only the institution name is widely used to refer to the affiliation of an author. However, the DBLP KG uses the full address of an institution including city and country name. Hence, using RegeEx we extract the institution names and randomly select between the institution name and the full institution address in the instantiation process. Keywords: For disambiguation queries, we do not use the full title of a publication but rather a part of it by extracting keywords. For this purpose, we use SpaCy’s Matcher API888https://spacy.io/api/matcher/ to extract noun phrases from the title. GenerateDataset _$(T,x,N,G)$_ inputs : template set $T$; dataset set to generate $x$; size of dataset to generate $N$; KG to sample subgraphs from $G$; output : dataset $D$; $D\leftarrow\emptyset$; $n\leftarrow(N/|\epsilon|)/|\delta|$; foreach _$e\in\epsilon$_ do foreach _$s\in\delta$_ do $i\leftarrow 0$; $T_{es}\leftarrow T[e][s]$; if _$x==train$_ then $T_{es}\leftarrow Filter(T_{es},test\\_only==True)$ while _$i <n$_ do $g_{1},g_{2}\leftarrow SampleSubgraph(G,2)$; $t_{i}\leftarrow random.sample(T_{es})$; $d_{i}\leftarrow Instantiate(t_{i},g_{1},g_{2},x)$; $answer\leftarrow Query(d_{i})$; if _$answer$_ then $D\leftarrow d_{i}$; $i\leftarrow i+1$; return _D_ Algorithm 1 Dataset Generation Process ### 4.5 Dataset Generation For each data instance $d_{i}$, we sample $2$ subgraphs (SampleSubgraph(G,2)) and instantiate a template tuple $t_{i}$ (Instantiate($t_{i}$, $g_{1}$, $g_{2}$, x)). We sample $2$ subgraphs as some template tuples require to be instantiated with two publication titles. 
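The generation loop of Algorithm 1 can be sketched in Python as follows. The subgraph sampler, template instantiation and SPARQL execution are passed in as functions because their concrete implementations are not reproduced here, and the dictionary-based template representation is the same illustrative one sketched in Section 4.1.

```python
# A Python rendering of Algorithm 1 (a sketch, not the released implementation).
import random

ENTITY_GROUPS = ["creator", "publication"]                            # epsilon
QUERY_TYPES = ["single fact", "multiple facts", "boolean", "negation",
               "double negation", "double intent", "union", "count",
               "superlative/comparative", "disambiguation"]           # delta

def generate_dataset(templates, split, size, kg,
                     sample_subgraphs, instantiate, run_query):
    dataset = []
    per_bucket = (size // len(ENTITY_GROUPS)) // len(QUERY_TYPES)     # n = (N/|eps|)/|delta|
    for group in ENTITY_GROUPS:
        for qtype in QUERY_TYPES:
            candidates = templates[group][qtype]
            if split == "train":                      # withhold test-only templates
                candidates = [t for t in candidates if not t.get("test_only")]
            produced = 0
            while produced < per_bucket:
                g1, g2 = sample_subgraphs(kg, 2)      # two-hop subgraphs (Section 4.2)
                t = random.choice(candidates)
                d = instantiate(t, g1, g2, split)     # Sections 4.3 and 4.4
                if run_query(d["sparql"]):            # keep only answerable instances
                    dataset.append(d)
                    produced += 1
    return dataset
```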
Each data instance $d_{i}=(s_{i},q^{1}_{i},q^{2}_{i},E_{i},P_{i},y,z)$ comprises of a valid SPARQL query $s_{i}$, one main natural language question $q^{1}_{i}$, one semantically equivalent paraphrase of the main question $q^{2}_{i}$, a list of entities $E_{i}$ used in $s_{i}$, a list of predicates $P_{i}$ used in $s_{i}$, a Boolean indicating whether the SPARQL query is temporal or not $y$, and another Boolean informing whether the SPARQL query is found only in $valid$ and $test$ sets $z$. We generate an equal number $n$ of questions for each entity group $\epsilon$ equally divided for each query type $\delta$. To foster a focus on generalization ability, we manually marked $20$ template tuples to withhold during generation of the $train$ set. However, we use all the template tuples in the generation of $valid$ and $test$ sets. Furthermore, we also withhold $2$ question templates when generating $train$ questions but use all question templates when generating $valid$ and $test$ sets. This controlled generation process allows us to withhold some entity classes, predicates and paraphrases from $train$ set. Our aim with this control is to create a scholarly KGQA dataset that facilitates development of KGQA models that adhere to i.i.d, compositional, and zero-shot [20] generalization. Further, we validate each data instance $d_{i}$ by running the SPARQL query $s_{i}$ against the DBLP KG via a Virtuoso SPARQL endpoint999https://docs.openlinksw.com/virtuoso/whatisvirtuoso/. We filter out data instances for which the SPARQL query is invalid or generates a blank response. A SPARQL query may generate a blank response if the generated subgraphs have missing literal values. In the DBLP KG, some of the entities have missing literals for predicates such as primaryAffiliation, orcid, wikidata, and so on. Additionally, we also store the answers produced by the SPARQL query against the DBLP KG formatted according to https://www.w3.org/TR/sparql11-results-json/. The dataset generation process is summarized in Algorithm 1. ### 4.6 Types of Questions The dataset is composed of the following question types. The examples shown here are hand-picked from the dataset. * • Single fact: These questions can be answered using a single fact. For example, What year was SIRA: SNR-Aware Intra-Frame Rate Adaptation published? * • Multiple facts: These questions require connecting two or more facts to answer. For example, In SIGCSE, which paper written by Darina Dicheva with Dichev, Christo was published? * • Boolean: These questions answer where a given fact is true or false. We can also add negation keywords to negate the questions. For example, Does Szeider, Stefan have an ORCID? * • Negation: These questions require to negate the answer to the Boolean questions. For example, Did M. Hachani not publish in ICCP? * • Double negation: These questions require to negate the Boolean question answers twice which results. For example, Wasn’t the paper Multi-Task Feature Selection on Multiple Networks via Maximum Flows not published in 2014? * • Count: These questions pertain to the count of occurrence of facts. For example, Count the authors of Optimal Symmetry Breaking for Graph Problems who have Carnegie Mellon University as their primary affiliation. * • Superlative/Comparative: Superlative questions ask about the maximum and minimum for a subject and comparative questions compare values between two subjects. We group both types under one group. 
For example, Who has published the most papers among the authors of k-Pareto optimality for many-objective genetic optimization? * • Union questions cover a single intent but for multiple subjects at the same time. For example, List all the papers that Pitas, Konstantinos published in ICML and ISCAS. * • Double intent questions poses two user intentions, usually about the same subject. For example, In which venue was the paper Interactive Knowledge Distillation for image classification published and when? * • Disambiguation questions requires identifying the correct subject in the question. For example, Which author with the name Li published the paper about Buck power converters? ## 5 Dataset Statistics DBLP-QuAD consists of 10,000 unique question-query pairs grouped into train, valid and test sets with a ratio of 7:1:2. The dataset covers 13,348 creators and publications, and 11 predicates of the DBLP KG. For each query type in Table 1, the dataset includes 1,000 question-query pairs each of which is equally divided as creator-focused or publication-focused. Additionally, among the questions in DBLP-QuAD, 2,350 are temporal questions. Linguistic Diversity. In DBLP-QuAD, a natural language question has an average word length of 17.32 words and an average character length of 114.1 characters. Similarly, a SPARQL query has an average vocab length of 12.65 and an average character length of 249.48 characters. Between the natural language question paraphrases, the average Jaccard similarity for unigram and bigram are $0.62$ and $0.47$ (with standard deviations of $0.22$ and $0.24$) respectively. The average Levenshtein edit distance between them is $32.99$ (with standard deviation of $23.12$). We believe the metrics signify a decent level of linguistic diversity. Entity Linking. DBLP-QuAD also presents challenging entity linking with data augmentation performed on literals during the generation process. The augmented literals present more realistic and natural representation of the entity surface forms and literals compared to the entries in the KG. Generalization. In the valid set 18.9% and in the test set 19.3% of instances were generated using the withheld templates. Hence, these SPARQL query templates and natural language question templates are unique to the valid and test sets. Table 2 shows the percent of questions with different levels of generalization in the valid and test sets of the dataset. Dataset | I.I.D | Compositional | Zero-shot ---|---|---|--- Valid | 82.8% | 13.6% | 3.6% Test | 81.2% | 15.1% | 3.8% Table 2: Percent of questions with different levels of generalization in the valid and test sets of DBLP-QuAD ## 6 Semantic Parsing Baseline To lay the foundation for future work on DBLP-QuAD, we also release baselines using the recent work by Banerjee et al. [24], where a pre-trained T5 model is fine-tuned [25] on the LC-QuAD 2.0 dataset. Following Banerjee et al. [24], we assume the entities and the relations are linked, and only focus on query building. We formulate the source as shown in Figure 3, where for each natural language question a prefix parse text to SPARQL query: is added. The source string is further concatenated with entity URIs and relation schema URIs separated by a special token $[SEP]$. The target text is the corresponding SPARQL query which is padded with the tokens $<s></s>$. We also make use of the sentinel tokens provided by T5 to represent the DBLP prefixes e.g. <extra_id_1> denotes the prefix https://dblp.org/pid/, SPARQL vocabulary and symbols. 
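A minimal sketch of this source/target construction with Hugging Face Transformers is shown below; the concrete question, the URIs and the simplified sentinel substitution are illustrative assumptions rather than the exact preprocessing used by the baseline.

```python
# Sketch of the source/target formatting for the T5 baseline (Section 6).
from transformers import T5TokenizerFast, T5ForConditionalGeneration

question  = "Which papers did Darina Dicheva publish in SIGCSE?"          # illustrative
entities  = ["<https://dblp.org/pid/d/DarinaDicheva>"]                   # assumed pid
relations = ["<https://dblp.org/rdf/schema#authoredBy>"]

source = ("parse text to SPARQL query: " + question
          + " [SEP] " + " ".join(entities) + " [SEP] " + " ".join(relations))

target = ("SELECT DISTINCT ?answer WHERE { ?answer "
          "<https://dblp.org/rdf/schema#authoredBy> "
          "<https://dblp.org/pid/d/DarinaDicheva> }")
# replace the DBLP person prefix with a T5 sentinel token, as described above
target = target.replace("https://dblp.org/pid/", "<extra_id_1>")

tokenizer = T5TokenizerFast.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

inputs = tokenizer(source, max_length=512, truncation=True, return_tensors="pt")
labels = tokenizer(target, max_length=512, truncation=True, return_tensors="pt").input_ids
loss = model(**inputs, labels=labels).loss   # fine-tuning objective (lr 1e-4, 5 epochs, batch 4)
```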
This step helps the T5-tokenizer to correctly fragment the target text during inference. Figure 3: Representation of source and target text used to fine-tune the T5 model We fine-tune T5-Base and T5-Small on DBLP-QuAD train set with a learning rate of 1e-4 for $5$ epochs with an input as well as output text length of $512$ and batch size of $4$. ### 6.1 Experiment Results We report the performance of the baseline model on the DBLP-QuAD test set. Firstly, we report on the exact-match between the gold and the generated SPARQL query. For the exact-match accuracy we compare the generated and the gold query token by token after removing whitespaces. Next, for each SPARQL query on the test set, we run both the gold and and the query generated by the T5 baseline models using Virtuoso SPARQL endpoint to fetch answers from the DBLP KG. Based on the answers collected, we report on the F1 score. The results are reported on Table 3. Evaluation metrics | T5-Small | T5-Base ---|---|--- Exact-match Accuracy | 0.638 | 0.813 F1 Score | 0.721 | 0.868 Table 3: Evaluation results of fine-tuned T5 to DBLP-QuAD ## 7 Limitations One of the drawbacks of our dataset generation framework is that natural questions are synthetically generated. (CFQ [22] has a similar limitation.) Although the question templates were human-written, only two people (authors of the paper) worked on the creation of the question templates and was not crowd sourced from a group of researchers. Additionally, the questions are generated by drawing data from a KG. Hence, the questions may not perfectly reflect the distribution of user information need. However, the machine- generation process allows for programmatic configuration of the questions, setting question characteristics, and controlling dataset size. We utilize the advantage by programmatically augmenting text representations and generating a large scholarly KGQA with complex SPARQL queries. Second, in generating valid and test sets, we utilize additional 19 template tuples which account for about 20% of the template set. Therefore, the syntactic structure for 80% of the generated data in valid and test would already be seen in the train set resulting in test leakage. However, to limit the leakage on 80% of the data, we withhold $2$ question templates in generating the $train$ set. Moreover, the data augmentation steps carried out would also add challenges in the $valid$ and $test$ sets. Another shortcoming of DBLP-QuAD is that the paper titles do not perfectly reflect user behavior. When a user asks a question, they do not type in the full paper title and also some papers are popularly known by a different short name. For example, the papers Language Models are Few-shot Learners and BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding are also known as GPT-3 and BERT respectively. This is a challenging entity linking problem which requires further investigation. Despite the shortcomings, we feel the large scholarly KGQA dataset would ignite more research interest in scholarly KGQA. ## 8 Conclusion In this work, we presented a new KGQA dataset called DBLP-QuAD. The dataset is the largest scholarly KGQA dataset with corresponding SPARQL queries. The dataset contains a wide variety of questions and query types and we present the data generation framework and baseline results. We hope this dataset proves to be a valuable resource for the community. 
As future work, we would like to build a robust question answering system for scholarly data using this dataset. ## 9 Acknowledgements This research was supported by grants from NVIDIA and utilized NVIDIA 2 x RTX A5000 24GB. Furthermore, we acknowledge the financial support from the Federal Ministry for Economic Affairs and Energy of Germany in the project CoyPu (project number 01MK21007[G]) and the German Research Foundation in the project NFDI4DS (project number 460234259). This research is additonally funded by the “Idea and Venture Fund“ research grant by Universität Hamburg, which is part of the Excellence Strategy of the Federal and State Governments. ## References * Bollacker et al. [2008] K. Bollacker, C. Evans, P. Paritosh, T. Sturge, J. Taylor, Freebase: A Collaboratively Created Graph Database for Structuring Human Knowledge, in: Proceedings of the 2008 ACM SIGMOD international conference on Management of data, AcM, 2008, pp. 1247–1250. * Lehmann et al. [2015] J. Lehmann, R. Isele, M. Jakob, A. Jentzsch, D. Kontokostas, P. N. Mendes, S. Hellmann, M. Morsey, P. Van Kleef, S. Auer, et al., DBpedia – A Large-Scale, Multilingual Knowledge Base Extracted from Wikipedia, Semantic Web (2015). * Vrandečić, Denny and Krötzsch, Markus [2014] Vrandečić, Denny and Krötzsch, Markus, Wikidata: A Free Collaborative Knowledge Base, Communications of the ACM (2014). * Dubey et al. [2016] M. Dubey, S. Dasgupta, A. Sharma, K. Höffner, J. Lehmann, AskNow: A Framework for Natural Language Query Formalization in SPARQL, in: H. Sack, E. Blomqvist, M. d’Aquin, C. Ghidini, S. P. Ponzetto, C. Lange (Eds.), The Semantic Web. Latest Advances and New Domains, Springer International Publishing, Cham, 2016, pp. 300–316. * Chakraborty et al. [2019] N. Chakraborty, D. Lukovnikov, G. Maheshwari, P. Trivedi, J. Lehmann, A. Fischer, Introduction to Neural Network based Approaches for Question Answering over Knowledge Graphs, 2019. URL: https://arxiv.org/abs/1907.09361. doi:10.48550/ARXIV.1907.09361. * Perevalov et al. [2022] A. Perevalov, X. Yan, L. Kovriguina, L. Jiang, A. Both, R. Usbeck, Knowledge Graph Question Answering Leaderboard: A Community Resource to Prevent a Replication Crisis, in: Proceedings of the Thirteenth Language Resources and Evaluation Conference, European Language Resources Association, Marseille, France, 2022, pp. 2998–3007. URL: https://aclanthology.org/2022.lrec-1.321. * Jaradeh et al. [2020] M. Y. Jaradeh, M. Stocker, S. Auer, Question answering on scholarly knowledge graphs, in: International Conference on Theory and Practice of Digital Libraries, Springer, 2020, pp. 19–32. * Rajpurkar et al. [2018] P. Rajpurkar, R. Jia, P. Liang, Know what you don’t know: Unanswerable questions for SQuAD, arXiv preprint arXiv:1806.03822 (2018). * Kwiatkowski et al. [2019] T. Kwiatkowski, J. Palomaki, O. Redfield, M. Collins, A. Parikh, C. Alberti, D. Epstein, I. Polosukhin, J. Devlin, K. Lee, et al., Natural questions: a benchmark for question answering research, Transactions of the Association for Computational Linguistics 7 (2019) 453–466. * Trivedi et al. [2017] P. Trivedi, G. Maheshwari, M. Dubey, J. Lehmann, LC-QuAD: A Corpus for Complex Question Answering over Knowledge Graphs, in: C. d’Amato, M. Fernandez, V. Tamma, F. Lecue, P. Cudré-Mauroux, J. Sequeda, C. Lange, J. Heflin (Eds.), The Semantic Web – ISWC 2017, volume 10588, Springer International Publishing, Cham, 2017, pp. 210–218. doi:10.1007/978-3-319-68204-4_22. * Sen et al. [2022] P. Sen, A. F. Aji, A. 
Saffari, Mintaka: A Complex, Natural, and Multilingual Dataset for End-to-End Question Answering, arXiv preprint arXiv:2210.01613 (2022). * Cai and Yates [2013] Q. Cai, A. Yates, Large-scale semantic parsing via schema matching and lexicon extension, in: Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2013, pp. 423–433. * Usbeck et al. [2017] R. Usbeck, A.-C. N. Ngomo, B. Haarmann, A. Krithara, M. Röder, G. Napolitano, 7th Open Challenge on Question Answering over Linked Data (QALD-7), in: M. Dragoni, M. Solanki, E. Blomqvist (Eds.), Semantic Web Challenges, volume 769, Springer International Publishing, Cham, 2017, pp. 59–69. doi:10.1007/978-3-319-69146-6_6. * Yih et al. [2016] W.-t. Yih, M. Richardson, C. Meek, M.-W. Chang, J. Suh, The Value of Semantic Parse Labeling for Knowledge Base Question Answering, in: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), Association for Computational Linguistics, Berlin, Germany, 2016, pp. 201–206. doi:10.18653/v1/P16-2033. * Talmor and Berant [2018] A. Talmor, J. Berant, The Web as a Knowledge-base for Answering Complex Questions, 2018\. arXiv:1803.06643. * Berant et al. [2013] J. Berant, A. Chou, R. Frostig, P. Liang, Semantic Parsing on Freebase from Question-Answer Pairs, in: Proceedings of the 2013 conference on empirical methods in natural language processing, 2013, pp. 1533–1544. * Wang et al. [2015] Y. Wang, J. Berant, P. Liang, Building a semantic parser overnight, in: Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 2015, pp. 1332–1342. * Su et al. [2016] Y. Su, H. Sun, B. Sadler, M. Srivatsa, I. Gur, Z. Yan, X. Yan, On Generating Characteristic-rich Question Sets for QA Evaluation, in: Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics, Austin, Texas, 2016, pp. 562–572. doi:10.18653/v1/D16-1054. * Dubey et al. [2019] M. Dubey, D. Banerjee, A. Abdelkawi, J. Lehmann, LC-QuAD 2.0: A Large Dataset for Complex Question Answering over Wikidata and DBpedia, in: C. Ghidini, O. Hartig, M. Maleshkova, V. Svátek, I. Cruz, A. Hogan, J. Song, M. Lefrançois, F. Gandon (Eds.), The Semantic Web – ISWC 2019, volume 11779, Springer International Publishing, Cham, 2019, pp. 69–78. doi:10.1007/978-3-030-30796-7_5. * Gu et al. [2021] Y. Gu, S. Kase, M. Vanni, B. Sadler, P. Liang, X. Yan, Y. Su, Beyond I.I.D.: Three Levels of Generalization for Question Answering on Knowledge Bases, in: Proceedings of the Web Conference 2021, ACM, Ljubljana Slovenia, 2021, pp. 3477–3488. doi:10.1145/3442381.3449992. * Cao et al. [2022] S. Cao, J. Shi, L. Pan, L. Nie, Y. Xiang, L. Hou, J. Li, B. He, H. Zhang, KQA pro: A dataset with explicit compositional programs for complex question answering over knowledge base, in: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics, Dublin, Ireland, 2022, pp. 6101–6119. doi:10.18653/v1/2022.Association for Computational Linguistics-long.422. * Keysers et al. [2020] D. Keysers, N. Schärli, N. Scales, H. Buisman, D. Furrer, S. Kashubin, N. Momchev, D. Sinopalnikov, L. Stafiniak, T. Tihon, D. Tsarkov, X. Wang, M. van Zee, O. Bousquet, Measuring Compositional Generalization: A Comprehensive Method on Realistic Data, 2020. 
arXiv:1912.09713. * Ley [2002] M. Ley, The DBLP Computer Science Bibliography: Evolution, Research Issues, Perspectives, in: G. Goos, J. Hartmanis, J. van Leeuwen, A. H. F. Laender, A. L. Oliveira (Eds.), String Processing and Information Retrieval, volume 2476, Springer Berlin Heidelberg, Berlin, Heidelberg, 2002, pp. 1–10. doi:10.1007/3-540-45735-6_1. * Banerjee et al. [2022] D. Banerjee, P. A. Nair, J. N. Kaur, R. Usbeck, C. Biemann, Modern Baselines for SPARQL Semantic Parsing, in: Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2022, pp. 2260–2265. doi:10.1145/3477495.3531841. arXiv:2204.12793. * Raffel et al. [2020] C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, P. J. Liu, Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer, J. Mach. Learn. Res. 21 (2020) 1–67.
# High-fidelity quantum teleportation through noisy channels via weak measurement and environment-assisted measurement

Sajede Harraz, Department of Automation, University of Science and Technology of China, Hefei 230027; Jiao-Yang Zhang, Department of Automation, University of Science and Technology of China, Hefei 230027; Shuang Cong, Department of Automation, University of Science and Technology of China, Hefei 230027

###### Abstract A perfect teleportation protocol requires pure, maximally entangled shared states, while in reality the shared entanglement is drastically degraded due to the inevitable interaction with the noisy environment. Here, we propose a teleportation protocol to teleport an unknown qubit through amplitude damping channels with a fidelity up to one using a single copy of the entangled state. Our proposed teleportation protocol, while illustrated using the Bell and W entangled states as examples, can be utilized with any type of entangled state. In our protocol, we utilize environment-assisted measurement (EAM) during the entanglement distribution, and further modify the original teleportation protocol to apply weak measurement in the last step of teleportation. We find a balance between teleportation fidelity and success probability by varying the strength of the weak measurement. Furthermore, we investigate the protection of controlled teleportation protocols, where all the qubits of the entangled state pass through the amplitude damping channel. In particular, for the controlled teleportation with the W state, the decoherence of the shared entanglement can be totally suppressed by using EAM, hence no weak measurement is required to achieve an average teleportation fidelity of unity. The numerical simulation results reveal that our proposed teleportation protocol outperforms both the weak measurement-based probabilistic teleportation protocol and the original teleportation protocols without protection.

Quantum teleportation is a quantum communication task which sends an unknown qubit from a sender to a receiver by using shared entanglement and classical communications [1, 2]. The original protocol proposed by Bennett $et\leavevmode\nobreak\ al.$ [3] uses a Bell state as the shared entanglement. Other teleportation protocols have also been developed using multi-qubit entangled states such as GHZ states and W states [4, 5, 6, 7]. The W states are typically preferred due to their resilience against the loss of particles, i.e., if one of the particles in the W state is traced out, then there remains significant genuine entanglement between the remaining two. Typically, the sender or the receiver prepares the shared entangled state and sends it to the other party; this setting has been extensively studied [8, 9, 10, 11]. Conversely, in controlled teleportation, a crucial aspect of secure quantum communication, a third party prepares and sends the shared entangled state to both the sender and receiver [12, 13, 14, 15, 16]. As the controller of the whole teleportation protocol, the third party can terminate the teleportation process in time when he notices something aberrant or insecure.

In any realistic implementation of the teleportation protocol, noise is unavoidably present and affects the entangled state during its transmission to the teleportation parties [17, 18]. Therefore, the entanglement of the quantum channel is degraded, which seriously deteriorates the performance of the teleportation [19, 20].
Entanglement purification is proposed to improve the fidelity of teleportation in presence of noise, which enhances the entanglement of a pair of qubits at the expense of numerous identically prepared entangled qubits in combination with local operations and the classical communication [21, 22, 23, 24]. Quantum error correction is another method for protecting qubits as they are transmitted through a noisy channel, but it also requires a large number of redundant qubits to encode logical quantum information [25, 26, 27, 28, 29, 30, 31]. Recently, weak measurement- based decoherence control strategies have gained popularity and been verified both theoretically and experimentally; see Ref. [32] and the references therein. In Ref. [8], the teleportation process is analyzed in the framework of quantum measurement and its reversal (MR), where the well-designed weak measurement operators are applied in the last step of the teleportation protocol to overcome the effects of the noisy channel. However, the performance of the MR framework is dependent on the intensity of the noise, and the success probability of achieving high-fidelity teleportation in the presence of intense decay rates dramatically decreases. All the above schemes are applied on the state of the system, while there are some schemes which can effectively protect the system state by directly manipulating the noisy channel, such as environment-assisted error correction (EAEC) [33]. In the EAEC scheme, a measurement is applied to the noisy environment coupled to the system of interest, followed by restoration operations on the system conditioned on the results of the measurement on the environment. In this scheme, all the Kraus operators must be proportional to unitary operators; hence, by applying a reversal operation conditioned on the outcome of the measurement performed on the environment, the initial unknown state of the system is recovered. It is shown that the success probability and the fidelity of the recovered state in the EAEC scheme are always equal to 1. Later, a probabilistic extension of EAEC by combining environment measurement and weak measurement is proposed for the noisy channels with non-random unitary decompositions [34]. In this scheme, just some of the Kraus operators should be invertible instead of unitary. The idea is to perform a measurement to the environment coupled to the system of interest, and select the system states corresponding to the invertible Kraus operators of the noisy environment. Afterwards, by applying the designed weak measurement reversal operators, the initial state of the system is recovered [35, 36, 37]. In this article, we propose a teleportation protocol by utilizing EAM and WM (TP-EW), to transmit an unknown qubit through noisy channels via a single copy of an entangled state111Matlab codes for regenerating the results of this article are available from ¡https://github.com/Sjd-Hz/High-fidelity-TP-EW¿.. We provide the detailed procedure of the modified teleportation protocols with W and Bell entangled states, but it should be stressed that the proposed teleportation protocol is applicable to any type of entanglement. In addition, the TP-EW is applicable for teleportation through arbitrary decoherence channels with at least one invertible Kraus operator. In this article, we only consider the amplitude damping channel (ADC) and prove that the considered Kraus operators are the optimal decomposition in the sense of maximizing the success probability. 
First, we assume that Alice (sender) prepares the entangled state, and sends one qubit to Bob (receiver) through an ADC. The receiver applies EAM on the ADC during the entanglement distribution process. The teleportation process will not begin unless the ADC is detected to be in the unexcited state. Following a successful entanglement distribution via EAM, the teleportation process begins, with the receiver using the well-designed weak measurement operators in the last step to recover the input state at his end. To strike a balance between the average teleportation fidelity and the total teleportation success probability, we define the weak measurement strength as a variable and discuss teleportation performance for various amounts of weak measurement strength. We show that, by considering designed weak measurement operators, the proposed TP-EW is able to achieve teleportation fidelity equal to one independent of the intensity of the noise. Subsequently, we investigate the application of TP-EW to controlled teleportation with W and Bell states, where a third party (controller) prepares the entangled state and sends the relevant qubits to Alice and Bob through independent ADCs. Particularly, in the case of controlled teleportation with the W state through ADCs, we show that employing EAM during the entanglement distribution process is sufficient to attain an average teleportation fidelity of unity. The comparison results with the original teleportation protocol under no protection and a pioneer weak measurement- based teleportation protocol in the MR framework demonstrate that TP-EW achieves a higher average teleportation fidelity for all decaying rates with a competitive total success probability. The remainder of this article is organized as follows. In Section 2, we briefly review the EAM technique. In Section 3, we present the proposed TP-EW through noisy channels and analyze its performance in details. In Section 4, we investigate the application of TP-EW in controlled teleportation protocols. Finally, our conclusion is given in Section 5. ## 1 Environment-assisted measurement In this section, we briefly explain the fundamental concept of EAM, which is a key component of our teleportation protocols. According to the Schrödinger equation, the evolution of the total (system + environment) density matrix is given by ${\rho_{{\rm{tot}}}}(t)=U(t)\left({{\rho_{S}}(0)\otimes{\rho_{E}}}\right){U^{\dagger}(t)}$, where $U(t)=\exp\left({-{\rm{i}}{H_{{\rm{tot}}}}t}\right)$ is the total evolution operator with the total Hamiltonian ${H_{{\rm{tot}}}}$ including the interaction between system and environment. 
Assuming the environment is in a vacuum state $|0\rangle_{E}$, the evolution of the reduced density matrix of the system is then obtained by tracing over the environmental degrees of freedom as [38]

$\rho_{S}(t)=\mathrm{Tr}_{E}\left(\rho_{\mathrm{tot}}(t)\right)=\sum_{n}{}_{E}\langle\psi_{n}|U(t)|0\rangle_{E}\,\rho_{S}(0)\,{}_{E}\langle 0|U^{\dagger}(t)|\psi_{n}\rangle_{E}\triangleq\sum_{n}K_{n}\rho_{S}(0)K_{n}^{\dagger}$ (1)

where $K_{n}\triangleq{}_{E}\langle\psi_{n}|U(t)|0\rangle_{E}$ are the so-called Kraus operators, which depend on the initial state of the environment and on the choice of the complete environmental basis $\{|\psi_{n}\rangle_{E}\}$. Thus, changing the basis of the environment from $\{|\psi_{n}\rangle_{E}\}$ to $\{|\varphi_{m}\rangle_{E}\}$ with ${}_{E}\langle\psi_{n}|=\sum_{m}V_{n,m}\,{}_{E}\langle\varphi_{m}|$ leads to another set of Kraus operators $L_{m}\triangleq{}_{E}\langle\varphi_{m}|U(t)|0\rangle_{E}$, where $(V_{n,m})$ is a unitary matrix, and the relation between the two sets of Kraus operators is

$K_{n}=\sum_{m}V_{n,m}\,{}_{E}\langle\varphi_{m}|U(t)|0\rangle_{E}=\sum_{m}V_{n,m}L_{m}$ (2)

After one performs a measurement on the environment, it collapses into an eigenstate of the measured observable. Subsequently, the system will also be projected into a state corresponding to each environmental state after measurement, i.e., $\rho_{s,n}=K_{n}\rho_{S}(0)K_{n}^{\dagger}$ if the environment collapses into the $n^{\rm th}$ eigenstate. The Kraus decomposition in Eq. (1) is random unitary (RU) if $K_{n}=c_{n}U_{n}$ for each $n$, where $U_{n}^{\dagger}=U_{n}^{-1}$ and $\sum_{n}|c_{n}|^{2}=1$. Hence, according to the EAEC scheme [33], one can recover the damped state of the system by applying a reversal operation based on the environmental measurement outcome $n$, i.e., $\rho_{R}=R_{n}\rho_{s,n}R_{n}^{\dagger}=\rho_{S}(0)$, where $R_{n}=\left(1/c_{n}\right)U_{n}^{-1}$. However, if a Kraus operator is not of RU type, this scheme fails, since reversal operations are not available. Later, by combining environment measurement and weak measurement in Ref. [34], the authors restored quantum states in the presence of non-RU type noise with at least one invertible Kraus operator. Thus, after a measurement is applied on the environment, only the quantum trajectories corresponding to the invertible Kraus operator are considered, and others are discarded. In the end, the initial state of the system is recovered by a weak measurement operator that is defined as the inverse of the invertible Kraus operator,

$R_{n}=N_{n}K_{n}^{-1}$ (3)

where $N_{n}$ is the normalization factor given by

$N_{n}=\min\{\sqrt{\lambda_{i}}\}$ (4)

with $\lambda_{i}$'s being the eigenvalues of the matrix $K_{n}K_{n}^{\dagger}$.
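As a small numerical illustration of Eqs. (3)-(4), the sketch below constructs the reversal operator for an invertible Kraus operator; the example operator anticipates the amplitude damping channel of Eq. (7) with an arbitrarily chosen $r=0.3$.

```python
# Sketch of R_n = N_n K_n^{-1} (Eqs. (3)-(4)) for an invertible Kraus operator.
# K here is the unexcited-channel Kraus operator of an ADC with r = 0.3 (arbitrary).
import numpy as np

def reversal(K):
    eigvals = np.linalg.eigvalsh(K @ K.conj().T)
    N = np.sqrt(eigvals.min())            # Eq. (4): N_n = min sqrt(lambda_i)
    return N * np.linalg.inv(K)           # Eq. (3)

r = 0.3
K = np.array([[1.0, 0.0], [0.0, np.sqrt(1 - r)]])
R = reversal(K)
print(R)                                  # diag(sqrt(1-r), 1), cf. m_0 in Eq. (15)
# R is a valid measurement operator: I - R^dagger R is positive semidefinite
print(np.all(np.linalg.eigvalsh(np.eye(2) - R.conj().T @ R) >= -1e-12))
```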
## 2 Teleportation protocol through noisy channels by utilizing environment-assisted measurement and weak measurement

In this section, we present the details of our proposed teleportation protocol through noisy channels by utilizing EAM and weak measurement, where EAM is applied during the entanglement distribution process and weak measurement in the last step of the teleportation, both by the receiver. First, Alice prepares the entangled state. She keeps her part of the entangled state and sends one qubit to Bob through an ADC. Next, Bob performs a measurement on the noisy channel. His objective is to monitor the noisy channel, keep the system states corresponding to invertible Kraus operators of the channel, and discard the others corresponding to non-invertible Kraus operators. Hence, after a successful EAM, we can design weak measurement operators to be applied in the last step of the teleportation, to cancel the effects of the noise and retrieve the input state of the teleportation at the receiver's end. Our proposed teleportation protocol is applicable to quantum teleportation protocols with all kinds of shared entanglement. Here we consider teleportation protocols with W and Bell entangled states through an ADC, and demonstrate that the fidelity and success probability of the proposed TP-EW are the same for both types of entangled states. The schematic diagram of our proposed TP-EW is given in Fig. 1, and the detailed procedure is given as follows.

Figure 1: The schematic diagram of the proposed TP-EW. The double lines indicate the classical communications, and the dashed line indicates quantum entanglement.

### 2.1 Teleportation with W state

In what follows, we elaborate the details of the proposed protocol for W-type entangled states [39]. Alice has an unknown qubit that she wishes to teleport to Bob,

$|\psi_{\mathrm{in}}\rangle=\alpha|0\rangle+\beta|1\rangle$ (5)

where $|\alpha|^{2}+|\beta|^{2}=1$. Alice prepares a shared entangled state from a class of W states as [15]:

$|{\rm W}\rangle_{123}=\frac{1}{\sqrt{2+2n}}\left(|100\rangle_{123}+\sqrt{n}\,e^{i\gamma}|010\rangle_{123}+\sqrt{n+1}\,e^{i\delta}|001\rangle_{123}\right)$ (6)

where $n\in\mathbb{R}^{+}$ and $\gamma,\delta\in\mathbb{R}$ are phases. For the sake of simplicity, we set $n=1$ and $\gamma=\delta=0$ hereafter. She keeps the qubits 1 and 2, and sends the qubit 3 of the entangled shared state to Bob through an ADC.
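As a quick sanity check of Eq. (6), the following numpy sketch (using our own $|q_{1}q_{2}q_{3}\rangle$ basis ordering, which is a convention of this illustration) confirms that the state is normalised for any $n>0$.

```python
# Numerical check that the W-class state of Eq. (6) is normalised for any n > 0.
import numpy as np

def w_state(n=1.0, gamma=0.0, delta=0.0):
    """Return |W>_123 of Eq. (6) as a length-8 vector in the |q1 q2 q3> basis."""
    psi = np.zeros(8, dtype=complex)
    norm = 1.0 / np.sqrt(2 + 2 * n)
    psi[0b100] = norm                                        # |100>
    psi[0b010] = norm * np.sqrt(n) * np.exp(1j * gamma)      # |010>
    psi[0b001] = norm * np.sqrt(n + 1) * np.exp(1j * delta)  # |001>
    return psi

for n in (0.5, 1.0, 3.0):
    print(n, np.linalg.norm(w_state(n)))   # prints 1.0 (up to rounding) for every n
```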
The ADC is defined by the well-known Kraus operators as

$e_{0}=\begin{bmatrix}1&0\\ 0&\sqrt{1-r}\end{bmatrix},\qquad e_{1}=\begin{bmatrix}0&\sqrt{r}\\ 0&0\end{bmatrix}$ (7)

where $r\in[0,1]$ is the magnitude of the decoherence and represents the probability of decay from the upper level $|1\rangle$ to the lower level $|0\rangle$, with $r=1-\exp(-\Gamma t)$, in which $\Gamma$ is the energy relaxation rate and $t$ is the evolving time. The Kraus decomposition of the noisy channel is non-trivial, since it is closely related to the success probability of the proposed TP-EW. The proof of the optimality of this Kraus decomposition is given in Appendix A.

Then Bob performs the EAM on the ADC and tells the result to Alice. If the channel is in an unexcited state ($e_{0}$), the entanglement distribution is successfully done, and they can start the teleportation process. Otherwise, he discards the entanglement distribution at this time and restarts the process. Since only the third qubit of the shared entangled state passes through the noisy channel, the applied Kraus operators for the W state in Eq. (6) are constructed as

$E_{0}^{\rm W}=I_{2}\otimes I_{2}\otimes e_{0},\qquad E_{1}^{\rm W}=I_{2}\otimes I_{2}\otimes e_{1}$ (8)

where $I_{2}=[1,0;0,1]$ is the $2\times 2$ identity operator, and $e_{i}$'s are the Kraus operators of the ADC in Eq. (7). By considering three-qubit Kraus operators in Eq. (8), we only keep the quantum trajectories corresponding to $E_{0}^{\rm W}$. Hence, after a successful EAM, the quantum channel between the two partners has been constructed, and the normalized state of the shared entanglement is described as

$|{\rm W}^{E_{0}^{\rm W}}\rangle_{123}=\frac{E_{0}^{\rm W}|{\rm W}\rangle_{123}}{\sqrt{{}_{123}\langle{\rm W}|(E_{0}^{\rm W})^{\dagger}E_{0}^{\rm W}|{\rm W}\rangle_{123}}}=\frac{1}{\sqrt{4-2r}}\left(|100\rangle_{123}+|010\rangle_{123}+\sqrt{2(1-r)}\,|001\rangle_{123}\right)$ (9)

And the success probability of the entanglement distribution via EAM is

$g_{\rm EAM}^{\rm W}={}_{123}\langle{\rm W}|(E_{0}^{\rm W})^{\dagger}E_{0}^{\rm W}|{\rm W}\rangle_{123}=1-\frac{r}{2}$ (10)

Note that the teleportation process can be started as long as a successful entanglement distribution via EAM has been achieved. Hence, we use the normalized shared entangled state in Eq. (9).
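The EAM step of Eqs. (8)-(10) can be reproduced numerically; the sketch below uses our own $|q_{1}q_{2}q_{3}\rangle$ basis ordering and an arbitrarily chosen $r$.

```python
# Sketch of the EAM step for n = 1: keep only the trajectory of E_0^W = I x I x e_0
# and check that the success probability equals 1 - r/2 (Eq. (10)).
import numpy as np

r = 0.25
e0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1 - r)]])
E0 = np.kron(np.kron(np.eye(2), np.eye(2)), e0)   # acts on qubit 3 only, Eq. (8)

W = np.zeros(8)
W[[0b100, 0b010]] = 0.5                           # |100>, |010>
W[0b001] = np.sqrt(2) / 2                         # sqrt(n+1)/sqrt(2+2n) with n = 1

psi = E0 @ W
p_eam = np.vdot(psi, psi).real                    # Eq. (10)
print(p_eam, 1 - r / 2)                           # both 0.875

psi_shared = psi / np.sqrt(p_eam)                 # normalised state of Eq. (9)
```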
To start the teleportation, Alice interacts the input qubit with her qubits of the entangled shared state. Thus, the state of the total system consisting of the input qubit in Eq. (5) and the shared entanglement in Eq. (9) becomes

$\begin{aligned}|\psi_{\rm tot}^{E_{0}^{\rm W}}\rangle=&\;|\psi_{\rm in}\rangle\otimes|{\rm W}^{E_{0}^{\rm W}}\rangle_{123}\\ =&\;\frac{1}{\sqrt{4-2r}}\big[\alpha\,(|010\rangle_{{\rm in},1,2}+|001\rangle_{{\rm in},1,2})|0\rangle_{3}+\sqrt{2(1-r)}\,\alpha\,|000\rangle_{{\rm in},1,2}|1\rangle_{3}\\ &\;+\beta\,(|110\rangle_{{\rm in},1,2}+|101\rangle_{{\rm in},1,2})|0\rangle_{3}+\sqrt{2(1-r)}\,\beta\,|100\rangle_{{\rm in},1,2}|1\rangle_{3}\big]\\ \triangleq&\;\frac{1}{\sqrt{4-2r}}\big[|\eta_{1}\rangle_{{\rm in},1,2}(\alpha|0\rangle_{3}+\beta\sqrt{1-r}|1\rangle_{3})+|\eta_{2}\rangle_{{\rm in},1,2}(\alpha|0\rangle_{3}-\beta\sqrt{1-r}|1\rangle_{3})\\ &\;+|\eta_{3}\rangle_{{\rm in},1,2}(\beta|0\rangle_{3}+\alpha\sqrt{1-r}|1\rangle_{3})+|\eta_{4}\rangle_{{\rm in},1,2}(\beta|0\rangle_{3}-\alpha\sqrt{1-r}|1\rangle_{3})\big]\end{aligned}$ (11)

where the subscript "in" denotes the input qubit in Eq. (5), and $\{|\eta_{i}\rangle\mid i=1,2,3,4\}$ is a complete orthonormal basis with

$\begin{aligned}|\eta_{1}\rangle&=\tfrac{1}{2}\,(|010\rangle+|001\rangle+\sqrt{2}\,|100\rangle)\\ |\eta_{2}\rangle&=\tfrac{1}{2}\,(|010\rangle+|001\rangle-\sqrt{2}\,|100\rangle)\\ |\eta_{3}\rangle&=\tfrac{1}{2}\,(|110\rangle+|101\rangle+\sqrt{2}\,|000\rangle)\\ |\eta_{4}\rangle&=\tfrac{1}{2}\,(|110\rangle+|101\rangle-\sqrt{2}\,|000\rangle)\end{aligned}$ (12)

Next, Alice performs a joint measurement on her three qubits (qubits 1 and 2 of the shared entangled state, and the input state) in the basis of Eq. (12).
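The regrouping in Eq. (11) can be checked numerically. The sketch below uses our own $|{\rm in},q_{1},q_{2},q_{3}\rangle$ basis ordering and arbitrary test values for $\alpha$, $\beta$ and $r$; it confirms that the $|\eta_{i}\rangle$ of Eq. (12) are orthonormal and that the grouped form of Eq. (11) equals $|\psi_{\rm in}\rangle\otimes|{\rm W}^{E_{0}^{\rm W}}\rangle_{123}$.

```python
# Numerical check of Eqs. (11)-(12); alpha, beta, r are arbitrary test values.
import numpy as np

alpha, beta, r = 0.6, 0.8, 0.3

def ket3(bits):
    v = np.zeros(8)
    v[int(bits, 2)] = 1.0
    return v

eta = [
    0.5 * (ket3("010") + ket3("001") + np.sqrt(2) * ket3("100")),   # |eta_1>
    0.5 * (ket3("010") + ket3("001") - np.sqrt(2) * ket3("100")),   # |eta_2>
    0.5 * (ket3("110") + ket3("101") + np.sqrt(2) * ket3("000")),   # |eta_3>
    0.5 * (ket3("110") + ket3("101") - np.sqrt(2) * ket3("000")),   # |eta_4>
]
print(np.round([[ei @ ej for ej in eta] for ei in eta], 12))        # identity matrix

psi_in = np.array([alpha, beta])
w_damped = np.array([0, np.sqrt(2 * (1 - r)), 1, 0, 1, 0, 0, 0]) / np.sqrt(4 - 2 * r)
lhs = np.kron(psi_in, w_damped)                                     # |psi_in> x |W^{E_0}>

b = [np.array([alpha,  beta * np.sqrt(1 - r)]),
     np.array([alpha, -beta * np.sqrt(1 - r)]),
     np.array([beta,   alpha * np.sqrt(1 - r)]),
     np.array([beta,  -alpha * np.sqrt(1 - r)])]
rhs = sum(np.kron(e, bi) for e, bi in zip(eta, b)) / np.sqrt(4 - 2 * r)
print(np.allclose(lhs, rhs))                                        # True
```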
The measurement operators for the whole 4-qubit system in Eq. (11) are constructed as

$\phi_{i}=|\eta_{i}\rangle\langle\eta_{i}|\otimes I_{2},\quad i=1,2,3,4$ (13)

The occurrence probability of each measurement operator's outcome is calculated as

$P_{i}^{\rm{W}}=\langle\psi_{\rm{tot}}^{E_{0}^{\rm{W}}}|\phi_{i}^{\dagger}\phi_{i}|\psi_{\rm{tot}}^{E_{0}^{\rm{W}}}\rangle=\left\{\begin{array}{ll}\frac{1-|\beta|^{2}r}{4-2r},&i=1,2\\ \frac{1-|\alpha|^{2}r}{4-2r},&i=3,4\end{array}\right.$ (14)

After Bob receives Alice's measurement result, he applies the corresponding weak measurement operators to compensate for the effects of the ADC and recover the input state at his end. One weak measurement operator is defined as the inverse of the invertible Kraus operator $e_{0}$ of the ADC. According to Eq. (3), in the case of the ADC, the normalized weak measurement reversal is

$m_{0}=\begin{bmatrix}\sqrt{1-r}&0\\ 0&1\end{bmatrix}$ (15)

The weak measurement reversal $m_{0}$ belongs to the complete measurement set $\{m_{0},\bar{m}_{0}\}$ with $\bar{m}_{0}=[\sqrt{r},0;0,0]$. In our designed teleportation protocol, we only preserve the result of $m_{0}$, discard the result of $\bar{m}_{0}$, and normalize the final state at the end of the teleportation process. Generally, there is a trade-off between fidelity and success probability in weak measurement-based decoherence control schemes [8, 32]; hence we define the strength of the weak measurement as a variable $q\in[0,1]$ and incorporate the unitary operators from the last step of the original quantum teleportation protocol. The weak measurement reversal operator is therefore defined as

$M_{i}=U_{i}\begin{bmatrix}\sqrt{1-q}&0\\ 0&1\end{bmatrix},\quad i=1,2,3,4$ (16)

where $U_{1}=I_{2},\ U_{2}=\sigma_{z},\ U_{3}=\sigma_{x}$ and $U_{4}=\sigma_{x}\sigma_{z}$, with $\sigma_{i}\ (i=x,y,z)$ being Pauli operators; the choice of $U_{i}$ depends on the result of Alice's joint measurement. The weak measurement operators corresponding to Alice's possible measurement results, together with the non-normalized states of Bob's qubit, are given in Table 1.

Table 1: Alice's measurement results and corresponding Bob's weak measurement operators to recover damped states in TP-EW.
Alice's result | non-normalized state of Bob's qubit | Bob's weak measurement operator
---|---|---
$|\eta_{1}\rangle$ | $|\psi_{\eta_{1}}^{\rm{W}}\rangle=\frac{1}{\sqrt{4-2r}}(\alpha|0\rangle_{3}+\beta\sqrt{1-r}|1\rangle_{3})$ | $M_{1}=U_{1}[\sqrt{1-q}|0\rangle\langle 0|+|1\rangle\langle 1|]$
$|\eta_{2}\rangle$ | $|\psi_{\eta_{2}}^{\rm{W}}\rangle=\frac{1}{\sqrt{4-2r}}(\alpha|0\rangle_{3}-\beta\sqrt{1-r}|1\rangle_{3})$ | $M_{2}=U_{2}[\sqrt{1-q}|0\rangle\langle 0|+|1\rangle\langle 1|]$
$|\eta_{3}\rangle$ | $|\psi_{\eta_{3}}^{\rm{W}}\rangle=\frac{1}{\sqrt{4-2r}}(\beta|0\rangle_{3}+\alpha\sqrt{1-r}|1\rangle_{3})$ | $M_{3}=U_{3}[\sqrt{1-q}|0\rangle\langle 0|+|1\rangle\langle 1|]$
$|\eta_{4}\rangle$ | $|\psi_{\eta_{4}}^{\rm{W}}\rangle=\frac{1}{\sqrt{4-2r}}(\beta|0\rangle_{3}-\alpha\sqrt{1-r}|1\rangle_{3})$ | $M_{4}=U_{4}[\sqrt{1-q}|0\rangle\langle 0|+|1\rangle\langle 1|]$

Generally, the output states of Bob's qubit corresponding to Alice's different measurement outcomes can be described as

$\rho_{M_{i}}^{\rm{W}}=\frac{M_{i}|\psi_{\eta_{i}}^{\rm{W}}\rangle\langle\psi_{\eta_{i}}^{\rm{W}}|M_{i}^{\dagger}}{\langle\psi_{\eta_{i}}^{\rm{W}}|M_{i}^{\dagger}M_{i}|\psi_{\eta_{i}}^{\rm{W}}\rangle}\buildrel\Delta\over{=}\left\{\begin{array}{ll}\frac{1}{(4-2r)g_{M_{i}}^{\rm{W}}}\begin{bmatrix}|\alpha|^{2}(1-q)&\alpha\beta^{*}\sqrt{1-q}\sqrt{1-r}\\ \alpha^{*}\beta\sqrt{1-q}\sqrt{1-r}&|\beta|^{2}(1-r)\end{bmatrix},&i=1,2\\ \frac{1}{(4-2r)g_{M_{i}}^{\rm{W}}}\begin{bmatrix}|\alpha|^{2}(1-r)&\alpha\beta^{*}\sqrt{1-q}\sqrt{1-r}\\ \alpha^{*}\beta\sqrt{1-q}\sqrt{1-r}&|\beta|^{2}(1-q)\end{bmatrix},&i=3,4\end{array}\right.$ (17)

where the $|\psi_{\eta_{i}}^{\rm{W}}\rangle$ are the non-normalized states of Bob's qubit corresponding to Alice's different measurement results, the $M_{i}$ are the corresponding weak measurement operators given in Table 1, and $g_{M_{i}}^{\rm{W}}\buildrel\Delta\over{=}\langle\psi_{\eta_{i}}^{\rm{W}}|M_{i}^{\dagger}M_{i}|\psi_{\eta_{i}}^{\rm{W}}\rangle$ are the success probabilities of obtaining the states $\rho_{M_{i}}^{\rm{W}}$,

$g_{M_{i}}^{\rm{W}}=\left\{\begin{array}{ll}\frac{1}{4-2r}\left(|\alpha|^{2}(1-q)+|\beta|^{2}(1-r)\right),&i=1,2\\ \frac{1}{4-2r}\left(|\alpha|^{2}(1-r)+|\beta|^{2}(1-q)\right),&i=3,4\end{array}\right.$ (18)

Therefore, the total teleportation success probability of TP-EW can be defined as

$g_{\rm{tot}}^{\rm{TP-EW}}=\sum\limits_{i=1}^{4}g_{M_{i}}^{\rm{W}}=1-\frac{q}{2-r}$ (19)

To evaluate the performance of the proposed TP-EW, we also consider the fidelity between the input state of Eq. (5) and the output state of TP-EW in Eq. (17),

${\rm{fid}}_{i}^{\rm{W}}=\langle\psi_{\rm{in}}|\rho_{M_{i}}^{\rm{W}}|\psi_{\rm{in}}\rangle=\left\{\begin{array}{ll}\frac{|\beta|^{4}(1-r)+|\alpha|^{4}(1-q)+2|\alpha|^{2}|\beta|^{2}\sqrt{1-q}\sqrt{1-r}}{|\alpha|^{2}(1-q)+|\beta|^{2}(1-r)},&i=1,2\\ \frac{|\beta|^{4}(1-q)+|\alpha|^{4}(1-r)+2|\alpha|^{2}|\beta|^{2}\sqrt{1-q}\sqrt{1-r}}{|\alpha|^{2}(1-r)+|\beta|^{2}(1-q)},&i=3,4\end{array}\right.$ (20)

Hence, the average teleportation fidelity of the proposed TP-EW over all possible input states is

${\rm{Fid}}_{\rm{av}}^{\rm{TP-EW}}=\int_{0}^{1}\sum\limits_{i=1}^{4}P_{i}^{\rm{W}}\,{\rm{fid}}_{i}^{\rm{W}}\,{\rm{d}}|\alpha|^{2}$ (21)
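For concreteness, the short sketch below (illustrative values of $\alpha$, $\beta$, $r$ and $q$, not part of the original derivation) builds Bob's conditional state for the outcome $|\eta_{1}\rangle$, applies the reversal $M_{1}$, and checks the closed forms of Eqs. (17), (18) and (20).

```python
import numpy as np

alpha, beta, r, q = 0.6, 0.8, 0.3, 0.2          # illustrative parameters

# Non-normalized state of Bob's qubit for Alice's outcome |eta_1> (Table 1)
psi1 = np.array([alpha, beta * np.sqrt(1 - r)]) / np.sqrt(4 - 2 * r)
M1 = np.diag([np.sqrt(1 - q), 1.0])             # U_1 = I_2, Eq. (16)

num = M1 @ np.outer(psi1, psi1.conj()) @ M1.conj().T
g1 = np.real(np.trace(num))                     # success probability, Eq. (18)
rho1 = num / g1                                 # output state, Eq. (17)

g1_closed = (alpha**2 * (1 - q) + beta**2 * (1 - r)) / (4 - 2 * r)
assert np.isclose(g1, g1_closed)

psi_in = np.array([alpha, beta])
fid1 = np.real(psi_in.conj() @ rho1 @ psi_in)   # Eq. (20) for i = 1
fid1_closed = (beta**4 * (1 - r) + alpha**4 * (1 - q)
               + 2 * alpha**2 * beta**2 * np.sqrt((1 - q) * (1 - r))) \
              / (alpha**2 * (1 - q) + beta**2 * (1 - r))
assert np.isclose(fid1, fid1_closed)
print("g1 =", round(g1, 4), " fid1 =", round(fid1, 4))
```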
Here, let us examine the effects of the weak measurement strength on the performance of the proposed TP-EW according to Eqs. (19)–(21):

1) When $q=r$, the average teleportation fidelity ${\rm{Fid}}_{{\rm{av}}}^{{\rm{TP-EW}}}$ is equal to 1, with a corresponding total teleportation success probability of $g_{{\rm{tot}}}^{{\rm{TP-EW}}}=1-r/(2-r)$.

2) When $q=0$, no weak measurement is applied; hence, the teleportation protocol remains unchanged and the scheme becomes an entanglement distribution process followed by the deterministic standard teleportation. In other words, after a successful EAM during the entanglement distribution process, we proceed to the deterministic original teleportation protocol. The average teleportation fidelity ${\rm{Fid}}_{{\rm{av}}}^{{\rm{TP-EW}}}$ is always less than 1, while the corresponding total teleportation success probability is unity.

3) When $q\in(0,r)$, one can strike a balance between the average teleportation fidelity and the success probability, i.e., both ${\rm{Fid}}_{{\rm{av}}}^{{\rm{TP-EW}}}$ and $g_{{\rm{tot}}}^{{\rm{TP-EW}}}$ vary between their values at $q=0$ and those at $q=r$.

4) A value of $q$ within $(r,1]$ is not acceptable, since both the average teleportation fidelity ${\rm{Fid}}_{{\rm{av}}}^{{\rm{TP-EW}}}$ and the total teleportation success probability $g_{{\rm{tot}}}^{{\rm{TP-EW}}}$ are lower than those in the case of $q=r$.

For comparison, we consider the original teleportation protocol with the W or Bell state through an ADC under no protection. Its average teleportation fidelity is calculated as

${\rm{Fid}}_{\rm{av}}^{\rm{W/Bell(ori)}}=\frac{1}{30}\left(8\sqrt{1-r}+22-7r\right)$ (22)

The detailed calculation of the average teleportation fidelity of the original teleportation protocol with the W state through an ADC under no protection is given in Appendix B, and the corresponding result for the Bell state is available from Eq. (A6) of Ref. [10].
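These regimes can be checked numerically; the sketch below (an illustrative evaluation at $r=0.4$, approximating the integral in Eq. (21) by a Riemann sum) compares the no-protection baseline of Eq. (22) with TP-EW at $q=0$ and $q=r$.

```python
import numpy as np

def fid_av_tp_ew(r, q, n=4001):
    a2 = (np.arange(n) + 0.5) / n                    # grid of |alpha|^2 values on (0, 1)
    b2 = 1 - a2
    P12 = (1 - b2 * r) / (4 - 2 * r)                 # Eq. (14)
    P34 = (1 - a2 * r) / (4 - 2 * r)
    f12 = (b2**2 * (1 - r) + a2**2 * (1 - q)
           + 2 * a2 * b2 * np.sqrt((1 - q) * (1 - r))) / (a2 * (1 - q) + b2 * (1 - r))
    f34 = (b2**2 * (1 - q) + a2**2 * (1 - r)
           + 2 * a2 * b2 * np.sqrt((1 - q) * (1 - r))) / (a2 * (1 - r) + b2 * (1 - q))
    return np.mean(2 * P12 * f12 + 2 * P34 * f34)    # Eq. (21) as a Riemann sum

r = 0.4
baseline = (8 * np.sqrt(1 - r) + 22 - 7 * r) / 30    # Eq. (22), no protection
print("no protection :", round(baseline, 4), " success prob 1.0")
print("TP-EW, q = 0  :", round(fid_av_tp_ew(r, 0.0), 4), " success prob 1.0")
print("TP-EW, q = r  :", round(fid_av_tp_ew(r, r), 4), " success prob", round(1 - r / (2 - r), 4))
```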
For further comparison, we consider a probabilistic teleportation protocol based on weak measurement, namely the MR framework of teleportation [8]. In the MR framework, Bob applies designed weak measurement reversals instead of unitary operations to suppress the effects of the ADC. Considering the maximally entangled Bell state

$|\psi\rangle_{\rm{ab}}=\frac{1}{\sqrt{2}}\left(|0\rangle_{\rm{a}}|0\rangle_{\rm{b}}+|1\rangle_{\rm{a}}|1\rangle_{\rm{b}}\right)$ (23)

the average teleportation fidelity of the MR framework through an ADC over all possible input states is

${\rm{Fid}}_{\rm{av}}^{\rm{MR}}=\int_{0}^{1}\left(\frac{1+r|\alpha|^{2}|\beta|^{2}}{2(1+r|\beta|^{2})}+\frac{1+r|\alpha|^{2}|\beta|^{2}}{2(1+r|\alpha|^{2})}\right){\rm{d}}|\alpha|^{2}$ (24)

The MR framework of teleportation is probabilistic due to the incompleteness of the weak measurement reversal employed in the last step of the teleportation procedure. The total teleportation success probability of the MR framework is

$g_{\rm{tot}}^{\rm{MR}}=\frac{2-r-r^{2}}{2}$ (25)

For a fair comparison, we also investigate our proposed TP-EW with the maximally entangled Bell state and prove that all performance indicators, including the average teleportation fidelity and the total teleportation success probability, are exactly the same as those of the TP-EW with the W state. The detailed procedure of the TP-EW with the Bell state is given in Appendix C.

### 2.2 Numerical simulation and comparison

In Fig. 2(a), we plot the average teleportation fidelity of TP-EW, ${\rm{Fid}}_{{\rm{av}}}^{{\rm{TP-EW}}}$ in Eq. (21), the average teleportation fidelity of the MR framework, ${\rm{Fid}}_{{\rm{av}}}^{{\rm{MR}}}$ in Eq. (24) (the lavender plate), and the average teleportation fidelity of the original teleportation under no protection in Eq. (22) (the gray plate) as functions of the decaying rate $r$ and the weak measurement strength $q$. Moreover, the total teleportation success probability of TP-EW, $g_{{\rm{tot}}}^{{\rm{TP-EW}}}$ in Eq. (19), and the total teleportation success probability of the MR framework, $g_{{\rm{tot}}}^{{\rm{MR}}}$ in Eq. (25) (the lavender plate), are given as functions of the decaying rate $r$ and the weak measurement strength $q$ in Fig. 2(b).

Figure 2: (a) The average teleportation fidelity of TP-EW as a function of decaying rate $r$ and the weak measurement strength $q$. The lavender plate is the average teleportation fidelity of the MR framework, and the gray plate is the average fidelity of the original teleportation protocol under no protection. The black line indicates the maximum average teleportation fidelity of TP-EW. (b) The total teleportation success probability of TP-EW as a function of decaying rate $r$ and the weak measurement strength $q$. The lavender plate is the total teleportation success probability of the MR framework.

As Fig. 2 depicts, the proposed TP-EW significantly improves the average teleportation fidelity compared to the MR framework (the lavender plate) and the original teleportation under no protection (the gray plate). The average teleportation fidelity of TP-EW can be made higher than that of the MR framework for all decaying rates by choosing an appropriate weak measurement strength $q$.
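For reference, the following sketch (again a simple Riemann-sum evaluation, not the code used to produce Fig. 2) computes the MR-framework benchmarks of Eqs. (24) and (25) for a few decaying rates so they can be placed next to the TP-EW values.

```python
import numpy as np

def mr_metrics(r, n=4001):
    a2 = (np.arange(n) + 0.5) / n            # |alpha|^2 sampled uniformly on (0, 1)
    b2 = 1 - a2
    integrand = (1 + r * a2 * b2) / (2 * (1 + r * b2)) \
              + (1 + r * a2 * b2) / (2 * (1 + r * a2))
    fid_av = np.mean(integrand)              # Eq. (24)
    g_tot = (2 - r - r**2) / 2               # Eq. (25)
    return fid_av, g_tot

for r in (0.1, 0.3, 0.5, 0.7):
    fid_av, g_tot = mr_metrics(r)
    print(f"r={r:.1f}: Fid_av^MR={fid_av:.4f}, g_tot^MR={g_tot:.4f}")
```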
By contrasting Fig. 2(a) with Fig. 2(b) at the same $r$, it can be inferred that a smaller $q$ within the interval $[0,r]$ leads to a gentler improvement of the average teleportation fidelity together with a smaller decrease in the total teleportation success probability. The maximum average teleportation fidelity of TP-EW, illustrated by the black line, is equal to one in the case of $q=r$; however, as Fig. 2(b) illustrates, the total teleportation success probability of our proposed TP-EW is lower than that of the MR framework for smaller decaying rates. In particular, in the case of $q=0$ (the ridge lines of the TP-EW surfaces in Figs. 2(a) and 2(b)), the average teleportation fidelity is improved without loss of total teleportation success probability compared to the MR framework.

The comparison of three schemes—the TP-EW with $q=r$ and $q=0$, and the MR framework—is shown in Fig. 3. In Fig. 3(a), we plot 1) the average teleportation fidelity of TP-EW, ${\rm{Fid}}_{{\rm{av}}}^{{\rm{TP-EW}}}$ in Eq. (21), with $q=r$, which is the maximized average fidelity of TP-EW at its optimum weak measurement strength, 2) the average teleportation fidelity ${\rm{Fid}}_{{\rm{av}}}^{{\rm{TP-EW}}}$ in Eq. (21) when EAM is used but no weak measurement reversal is applied ($q=0$), and 3) the average teleportation fidelity of the MR framework, ${\rm{Fid}}_{{\rm{av}}}^{{\rm{MR}}}$ in Eq. (24). Moreover, Fig. 3(b) compares 1) the corresponding total teleportation success probability of TP-EW, $g_{{\rm{tot}}}^{{\rm{TP-EW}}}$ in Eq. (19), with $q=r$, 2) the total teleportation success probability of TP-EW in Eq. (19) with $q=0$, and 3) the total teleportation success probability of the MR framework in Eq. (25).

Figure 3: (a) Average teleportation fidelities vs. $r$. (b) Total teleportation success probabilities vs. $r$. The colored region represents average teleportation fidelities/total teleportation success probabilities of TP-EW obtained by varying the weak measurement strength $q$ within $[0,r]$.

By contrasting Fig. 3(a) with Fig. 3(b), it is inferred that an average teleportation fidelity equal to one is achieved by setting $q=r$ even for intense decaying rates, with a corresponding teleportation success probability higher than that of the MR framework. Also, when $q=0$ (the blue dotted line), the TP-EW improves the teleportation fidelity significantly compared to the MR framework, with a total teleportation success probability of unity for all decaying rates. In fact, the colored region is attainable by varying the weak measurement strength $q$ within the interval $[0,r]$, where the average teleportation fidelity increases and the total success probability decreases as the weak measurement strength is increased from 0 to $r$.

## 3 Controlled teleportation through noisy channels by utilizing weak measurement and environment-assisted measurement

In this section, we study controlled teleportation protocols through noisy channels by utilizing EAM and weak measurement. Different from Section 2, here we assume that the shared W state is prepared by a third party (Charlie), who delivers the first two qubits to Alice and the third qubit to Bob through independent ADCs. For simplicity, we assume that the decoherence process acts on all qubits of the shared entanglement locally and independently but with the same decay rate $r$. Both Alice and Bob perform the EAM and tell their results to Charlie.
If all ADCs are detected to be in the unexcited state, they are allowed to continue the teleportation process; otherwise, they have to discard the results and restart the entanglement distribution process. At the end of the teleportation process, Bob applies the weak measurement on his qubit if necessary. In the following subsections, we provide the detailed procedures of the proposed controlled teleportation with the W state and with the Bell state via EAM and the necessary weak measurement.

### 3.1 Controlled teleportation with W state

In this subsection, we investigate the controlled teleportation with the W state through ADCs. Charlie prepares the W state in Eq. (6) and delivers the first two qubits to Alice and the third qubit to Bob through independent ADCs. The applied Kraus operators for the shared entanglement are

$E_{i}^{\rm{CW}}=e_{j}\otimes e_{k}\otimes e_{l}\quad\text{for}\ j,k,l=0,1$ (26)

Due to the principle of EAM, we only keep the quantum trajectories corresponding to the invertible Kraus operator $E_{0}^{\rm{CW}}$ and discard the quantum trajectories corresponding to the other Kraus operators during the entanglement distribution process. In this way, the quantum channel between the two partners is successfully constructed, and the normalized state of the shared entanglement can be described as

$|{\rm{W}}^{E_{0}^{\rm{CW}}}\rangle_{123}=E_{0}^{\rm{CW}}|{\rm{W}}\rangle_{123}=\frac{1}{2\sqrt{1-r}}\big(\sqrt{1-r}|100\rangle_{123}+\sqrt{1-r}|010\rangle_{123}+\sqrt{2}\sqrt{1-r}|001\rangle_{123}\big)=\frac{1}{2}\big(|100\rangle_{123}+|010\rangle_{123}+\sqrt{2}|001\rangle_{123}\big)$ (27)

The success probability of entanglement distribution via EAM in controlled teleportation with the W state is calculated as

$g_{\rm{EAM}}^{\rm{CW}}={}_{123}\langle{\rm{W}}|(E_{0}^{\rm{CW}})^{\dagger}E_{0}^{\rm{CW}}|{\rm{W}}\rangle_{123}=1-r$ (28)

According to Eq. (27), after a successful entanglement distribution via EAM, no decoherence occurs to the W state, and the original teleportation protocol can be started, just as in the noise-free case.
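A direct check of Eqs. (27) and (28) is straightforward; the sketch below (with an illustrative $r$) applies $e_{0}\otimes e_{0}\otimes e_{0}$ to the W state and confirms that the renormalized state is again the W state, with success probability $1-r$.

```python
import numpy as np

def ket(bits):
    """Computational-basis ket |bits> as a 1-D numpy array."""
    v = np.array([1.0])
    for b in bits:
        v = np.kron(v, np.array([1.0, 0.0]) if b == '0' else np.array([0.0, 1.0]))
    return v

r = 0.35                                             # illustrative decay rate
e0 = np.diag([1.0, np.sqrt(1 - r)])                  # invertible ADC Kraus operator
E0_CW = np.kron(np.kron(e0, e0), e0)                 # Eq. (26) with j = k = l = 0

W = (ket('100') + ket('010') + np.sqrt(2) * ket('001')) / 2
damped = E0_CW @ W
g_EAM = np.vdot(damped, damped).real                 # Eq. (28)
assert np.isclose(g_EAM, 1 - r)
assert np.allclose(damped / np.sqrt(g_EAM), W)       # Eq. (27): the W state is unchanged
print("g_EAM^CW =", round(g_EAM, 4))
```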
To start the teleportation, Alice interacts the input qubit with her qubits of the shared entangled state, and the state of the total system becomes

$|\psi_{\rm{tot}}^{E_{0}^{\rm{CW}}}\rangle=(\alpha|0\rangle+\beta|1\rangle)_{\rm{in}}\otimes|{\rm{W}}^{E_{0}^{\rm{CW}}}\rangle_{123}=\frac{1}{2}\big[|\eta_{1}\rangle_{\rm{in,12}}(\alpha|0\rangle_{3}+\beta|1\rangle_{3})+|\eta_{2}\rangle_{\rm{in,12}}(\alpha|0\rangle_{3}-\beta|1\rangle_{3})+|\eta_{3}\rangle_{\rm{in,12}}(\beta|0\rangle_{3}+\alpha|1\rangle_{3})+|\eta_{4}\rangle_{\rm{in,12}}(\beta|0\rangle_{3}-\alpha|1\rangle_{3})\big]$ (29)

where the definitions of the $|\eta_{i}\rangle$ are the same as those in Eq. (12). According to Eq. (29), Bob can recover the exact input qubit by following the original teleportation protocol, without applying any weak measurement. Therefore, in controlled teleportation with the W state, the original teleportation protocol achieves a teleportation fidelity of unity for all possible input qubits after a successful entanglement distribution via EAM. Hence, the average teleportation fidelity of controlled teleportation with the W state via EAM is

${\rm{Fid}}_{\rm{av-W}}^{\rm{CTP-EAM}}=1$ (30)

This is because the considered W state subjected to the amplitude damping noise is completely symmetric after a successful EAM, which is favorable for quantum communication and computation. Note that the success probability in Eq. (28) is related to the entanglement distribution process, and we proceed to the original teleportation protocol with the W state only after a successful entanglement distribution. Therefore, the total teleportation success probability of the controlled teleportation with the W state via EAM is also equal to one.

For comparison, we also consider the controlled teleportation with the W state through ADCs under no protection, whose average teleportation fidelity is

${\rm{Fid}}_{\rm{av}}^{\rm{CW(ori)}}=1-\frac{11}{15}r$ (31)

### 3.2 Controlled teleportation with Bell state

In this subsection, we study the controlled teleportation protocol with the Bell state through ADCs by utilizing EAM and weak measurement. Charlie prepares the Bell state in Eq. (23) and delivers the first qubit to Alice and the second qubit to Bob through independent ADCs.
Since both qubits of the entangled pair pass through ADCs, the applied Kraus operators become

$E_{i}^{\rm{CB}}=e_{j}\otimes e_{k}\quad\text{for}\ j,k=0,1$ (32)

where only $E_{0}^{\rm{CB}}=e_{0}\otimes e_{0}$ is invertible; hence, after the EAM is applied to the ADCs, the quantum trajectories corresponding to $E_{0}^{\rm{CB}}$ are kept, and the quantum trajectories corresponding to the other Kraus operators are discarded. As a result, after a successful entanglement distribution via EAM, the normalized state of the shared entanglement between Alice and Bob is described as

$\rho_{\rm{ab}}^{E_{0}^{\rm{CB}}}=\frac{E_{0}^{\rm{CB}}|\psi\rangle_{\rm{ab}}{}_{\rm{ab}}\langle\psi|(E_{0}^{\rm{CB}})^{\dagger}}{{}_{\rm{ab}}\langle\psi|(E_{0}^{\rm{CB}})^{\dagger}E_{0}^{\rm{CB}}|\psi\rangle_{\rm{ab}}}\buildrel\Delta\over{=}\frac{1}{2g_{\rm{EAM}}^{\rm{CB}}}\begin{bmatrix}1&0&0&1-r\\ 0&0&0&0\\ 0&0&0&0\\ 1-r&0&0&(1-r)^{2}\end{bmatrix}$ (33)

where $g_{\rm{EAM}}^{\rm{CB}}\buildrel\Delta\over{=}{}_{\rm{ab}}\langle\psi|(E_{0}^{\rm{CB}})^{\dagger}E_{0}^{\rm{CB}}|\psi\rangle_{\rm{ab}}$ is the success probability of entanglement distribution via EAM in controlled teleportation with the Bell state,

$g_{\rm{EAM}}^{\rm{CB}}=\frac{1}{2}\left((1-r)^{2}+1\right)$ (34)

After the successful entanglement distribution via EAM, employing the shared entangled state in Eq. (33), Alice and Bob proceed to the teleportation protocol via weak measurement. Apart from the last step, the detailed steps of the modified quantum teleportation protocol are almost the same as those in Appendix C. However, it is noted that the probabilities of obtaining the measurement outcome corresponding to each measurement operator $B_{i}$ are now calculated as

$P_{B_{i}}^{\rm{CB}}={\rm{Tr}}\left(B_{i}(\rho_{\rm{in}}\otimes\rho_{\rm{ab}}^{E_{0}^{\rm{CB}}})B_{i}^{\dagger}\right)=\left\{\begin{array}{ll}\frac{|\beta|^{2}r^{2}-2|\beta|^{2}r+1}{2(r^{2}-2r+2)},&i=1,2\\ \frac{|\alpha|^{2}r^{2}-2|\alpha|^{2}r+1}{2(r^{2}-2r+2)},&i=3,4\end{array}\right.$ (35)

where the definitions of the $B_{i}$ are the same as those in Eq. (60) of Appendix C.
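The sketch below (illustrative $\alpha$, $\beta$, $r$; the Bell measurement operators anticipate Eq. (60) of Appendix C) constructs the post-EAM resource of Eq. (33) and reproduces the distribution success probability of Eq. (34) and the outcome probabilities of Eq. (35).

```python
import numpy as np

alpha, beta, r = 0.6, 0.8, 0.3                                # illustrative parameters
e0 = np.diag([1.0, np.sqrt(1 - r)])
E0_CB = np.kron(e0, e0)                                       # Eq. (32) with j = k = 0

bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)            # Eq. (23)
num = E0_CB @ np.outer(bell, bell) @ E0_CB.conj().T
g_EAM_CB = np.trace(num).real                                 # Eq. (34)
assert np.isclose(g_EAM_CB, ((1 - r)**2 + 1) / 2)
rho_ab = num / g_EAM_CB                                       # Eq. (33)

# Bell basis of Eq. (59) and operators B_i = |b_i><b_i| (x) I_2 of Eq. (60)
s = 1 / np.sqrt(2)
b = [np.array([s, 0, 0, s]), np.array([s, 0, 0, -s]),
     np.array([0, s, s, 0]), np.array([0, s, -s, 0])]
rho_in = np.outer([alpha, beta], [alpha, beta])
rho_tot = np.kron(rho_in, rho_ab)                             # ordering: (in, a, b)
for i, bi in enumerate(b, start=1):
    B = np.kron(np.outer(bi, bi), np.eye(2))
    P = np.trace(B @ rho_tot @ B.conj().T).real               # Eq. (35)
    print(f"P_B{i} = {P:.4f}")
# Expected: (|beta|^2 r^2 - 2|beta|^2 r + 1)/(2(r^2-2r+2)) for i = 1,2
# and the analogous expression with alpha for i = 3,4.
```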
After Alice applies a joint Bell state measurement on her two qubits and sends the measurement outcome to Bob through a classical channel, Bob knows that the non-normalized state of his qubit is now described as

$\rho_{B_{i}}^{\rm{CB}}={\rm{Tr}}_{\rm{in,a}}\big[B_{i}(\rho_{\rm{in}}\otimes\rho_{\rm{ab}}^{E_{0}^{\rm{CB}}})B_{i}^{\dagger}\big]=\left\{\begin{array}{ll}\frac{1}{2(r^{2}-2r+2)}\begin{bmatrix}|\alpha|^{2}&\alpha\beta^{*}(1-r)\\ \alpha^{*}\beta(1-r)&|\beta|^{2}(1-r)^{2}\end{bmatrix},&i=1\\ \frac{1}{2(r^{2}-2r+2)}\begin{bmatrix}|\alpha|^{2}&\alpha\beta^{*}(r-1)\\ \alpha^{*}\beta(r-1)&|\beta|^{2}(1-r)^{2}\end{bmatrix},&i=2\\ \frac{1}{2(r^{2}-2r+2)}\begin{bmatrix}|\beta|^{2}&\alpha\beta^{*}(1-r)\\ \alpha^{*}\beta(1-r)&|\alpha|^{2}(1-r)^{2}\end{bmatrix},&i=3\\ \frac{1}{2(r^{2}-2r+2)}\begin{bmatrix}|\beta|^{2}&\alpha\beta^{*}(r-1)\\ \alpha^{*}\beta(r-1)&|\alpha|^{2}(1-r)^{2}\end{bmatrix},&i=4\end{array}\right.$ (36)

To restore the state of Bob's qubit, weak measurement operators are applied in the last step of the controlled teleportation protocol,

$\tilde{M}_{i}^{\prime}=U_{i}\begin{bmatrix}1-q^{\prime}&0\\ 0&1\end{bmatrix},\quad i=1,2,3,4$ (37)

where $q^{\prime}\in[0,r]$ is the acceptable strength of the weak measurement reversal, and the definitions of the $U_{i}$ are the same as those in Eq. (16).
After Bob applies the weak measurement reversal in Eq. (37), the output state of his qubit becomes

$\rho_{M_{i}^{\prime}}^{\rm{CB}}=\frac{\tilde{M}_{i}^{\prime}\rho_{B_{i}}^{\rm{CB}}(\tilde{M}_{i}^{\prime})^{\dagger}}{{\rm{Tr}}\left(\tilde{M}_{i}^{\prime}\rho_{B_{i}}^{\rm{CB}}(\tilde{M}_{i}^{\prime})^{\dagger}\right)}\buildrel\Delta\over{=}\left\{\begin{array}{ll}\frac{1}{2(r^{2}-2r+2)g_{i}^{\rm{CB}}}\begin{bmatrix}|\alpha|^{2}(1-q^{\prime})^{2}&\alpha\beta^{*}(1-r)(1-q^{\prime})\\ \alpha^{*}\beta(1-r)(1-q^{\prime})&|\beta|^{2}(1-r)^{2}\end{bmatrix},&i=1,2\\ \frac{1}{2(r^{2}-2r+2)g_{i}^{\rm{CB}}}\begin{bmatrix}|\alpha|^{2}(1-r)^{2}&\alpha\beta^{*}(1-r)(1-q^{\prime})\\ \alpha^{*}\beta(1-r)(1-q^{\prime})&|\beta|^{2}(1-q^{\prime})^{2}\end{bmatrix},&i=3,4\end{array}\right.$ (38)

where $g_{i}^{\rm{CB}}\buildrel\Delta\over{=}{\rm{Tr}}\left(\tilde{M}_{i}^{\prime}\rho_{B_{i}}^{\rm{CB}}(\tilde{M}_{i}^{\prime})^{\dagger}\right)$ are the success probabilities of obtaining the states $\rho_{M_{i}^{\prime}}^{\rm{CB}}$,

$g_{i}^{\rm{CB}}=\left\{\begin{array}{ll}\frac{1}{2(r^{2}-2r+2)}\left(|\alpha|^{2}(1-q^{\prime})^{2}+|\beta|^{2}(1-r)^{2}\right),&i=1,2\\ \frac{1}{2(r^{2}-2r+2)}\left(|\alpha|^{2}(1-r)^{2}+|\beta|^{2}(1-q^{\prime})^{2}\right),&i=3,4\end{array}\right.$ (39)

Therefore, the total teleportation success probability of the controlled TP-EW with the Bell state is

$g_{\rm{tot-Bell}}^{\rm{CTP-EW}}=\sum\limits_{i=1}^{4}g_{i}^{\rm{CB}}=1-\frac{2q^{\prime}-(q^{\prime})^{2}}{r^{2}-2r+2}$ (40)
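As a sanity check of Eqs. (38)–(40), the sketch below (illustrative values; the unitaries $U_{i}$ are dropped, which leaves the success probabilities unchanged, and the fidelity is checked only for $i=1$, where $U_{1}=I_{2}$) applies the reversal of Eq. (37) to Bob's conditional states, recovers the total success probability of Eq. (40), and confirms that the fidelity reaches one at $q^{\prime}=r$.

```python
import numpy as np

def controlled_bell_outputs(alpha, beta, r, qp):
    denom = 2 * (r**2 - 2 * r + 2)
    # Non-normalized post-measurement states of Bob's qubit, Eq. (36), for i = 1 and i = 3
    rho_B1 = np.array([[alpha**2, alpha * beta * (1 - r)],
                       [alpha * beta * (1 - r), beta**2 * (1 - r)**2]]) / denom
    rho_B3 = np.array([[beta**2, alpha * beta * (1 - r)],
                       [alpha * beta * (1 - r), alpha**2 * (1 - r)**2]]) / denom
    D = np.diag([1 - qp, 1.0])        # reversal of Eq. (37) with the unitary U_i dropped
    out = []
    for rb in (rho_B1, rho_B3):
        num = D @ rb @ D
        g = np.trace(num)             # per-outcome success probability, Eq. (39)
        out.append((num / g, g))
    return out

alpha, beta, r = 0.6, 0.8, 0.4
(rho1, g1), (rho3, g3) = controlled_bell_outputs(alpha, beta, r, qp=r)
g_tot = 2 * (g1 + g3)                 # the four outcomes come in equal pairs, Eq. (40)
assert np.isclose(g_tot, 1 - (2 * r - r**2) / (r**2 - 2 * r + 2))

psi_in = np.array([alpha, beta])
fid1 = psi_in @ rho1 @ psi_in         # Eq. (41) for i = 1
print("g_tot =", round(g_tot, 4), "  fid(i=1, q'=r) =", round(fid1, 4))   # fidelity -> 1
```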
In the controlled TP-EW with the Bell state, the fidelity between the input state in Eq. (5) and the output state in Eq. (38) is calculated as

${\rm{fid}}_{i}^{\rm{CB}}=\langle\psi_{\rm{in}}|\rho_{M_{i}^{\prime}}^{\rm{CB}}|\psi_{\rm{in}}\rangle=\left\{\begin{array}{ll}\frac{(|\alpha|^{2}q^{\prime}+|\beta|^{2}r-1)^{2}}{|\alpha|^{2}(q^{\prime 2}-2q^{\prime})+|\beta|^{2}(r^{2}-2r)+1},&i=1,2\\ \frac{(|\beta|^{2}q^{\prime}+|\alpha|^{2}r-1)^{2}}{|\alpha|^{2}(r^{2}-2r)+|\beta|^{2}(q^{\prime 2}-2q^{\prime})+1},&i=3,4\end{array}\right.$ (41)

Thus, the average teleportation fidelity of the controlled TP-EW with the Bell state over all possible input states is

${\rm{Fid}}_{\rm{av-Bell}}^{\rm{CTP-EW}}=\int_{0}^{1}\sum\limits_{i=1}^{4}P_{B_{i}}^{\rm{CB}}\,{\rm{fid}}_{i}^{\rm{CB}}\,{\rm{d}}|\alpha|^{2}$ (42)

For comparison, we consider the original controlled teleportation protocol with the Bell state under no protection, whose average teleportation fidelity is

${\rm{Fid}}_{\rm{av}}^{\rm{CB(ori)}}=1-\frac{11}{15}r+\frac{7}{15}r^{2}$ (43)

### 3.3 Numerical simulation and comparison

In this subsection, we compare the average teleportation fidelities and total teleportation success probabilities of the modified controlled teleportation with the W and Bell states to those of the original teleportation protocols under no protection. In Fig. 4(a), we plot 1) the average teleportation fidelity of the modified controlled teleportation with the W state, ${\rm{Fid}}_{{\rm{av-W}}}^{{\rm{CTP-EAM}}}$ in Eq. (30), 2) the average teleportation fidelity of the controlled TP-EW with the Bell state, ${\rm{Fid}}_{{\rm{av-Bell}}}^{{\rm{CTP-EW}}}$ in Eq. (42), with $q^{\prime}=0$ and $q^{\prime}=r$, and 3) the average teleportation fidelities of the original controlled teleportation protocols with the W state, ${\rm{Fid}}_{{\rm{av}}}^{{\rm{CW(ori)}}}$ in Eq. (31), and with the Bell state, ${\rm{Fid}}_{{\rm{av}}}^{{\rm{CB(ori)}}}$ in Eq. (43), under no protection. Furthermore, in Fig. 4(b), we plot 1) the total teleportation success probability of controlled teleportation with the W state, $g_{{\rm{tot-W}}}^{{\rm{CTP-EAM}}}=1$, and 2) the total teleportation success probability of controlled TP-EW with the Bell state, $g_{{\rm{tot-Bell}}}^{{\rm{CTP-EW}}}$ in Eq. (40), with $q^{\prime}=0$ and $q^{\prime}=r$.

Figure 4: (a) Average teleportation fidelities vs. $r$. (b) Total teleportation success probabilities vs. $r$.

Fig. 4 demonstrates that the modified controlled teleportation with the W state achieves the best performance, i.e., both the average teleportation fidelity and the total teleportation success probability are equal to one.
The controlled TP-EW with the Bell state achieves a teleportation fidelity of unity when $q^{\prime}=r$, at the price of a lower total teleportation success probability in heavy damping cases. Also, when $q^{\prime}=0$, the average teleportation fidelities of the modified controlled teleportation are remarkably improved compared to the original protocols under no protection, and the corresponding total teleportation success probabilities are always equal to one. Additionally, it can be seen that the original controlled teleportation with the Bell state performs better than that with the W state in the absence of protection.

## 4 Conclusion

We proposed a high-fidelity teleportation protocol via EAM and weak measurement through noisy channels with a single copy of the entangled state. The proposed protocol consists of two parts: entanglement distribution via EAM, followed by a modified teleportation protocol that applies designed weak measurement operators in the last step. The EAM is applied in the entanglement distribution step to collect system states corresponding to invertible Kraus operators of the noisy channel. Afterwards, we designed weak measurement operators to be applied in the last step of teleportation to reverse the effects of the noise and obtained an average teleportation fidelity equal to one. The proposed teleportation protocol is applicable to any type of entangled state, but we only derived the final expressions of the average teleportation fidelity and total teleportation success probability for the W and Bell entangled states. Numerical simulation results demonstrated the significant performance improvement of our proposed TP-EW in comparison with the original teleportation protocols under no protection and the MR framework of teleportation. Furthermore, we investigated the application of TP-EW to controlled teleportation, in which all qubits in the shared entanglement pass through independent noisy channels with the same decay rate. The results revealed that, for controlled teleportation with the W state, just applying EAM during entanglement distribution attains the optimal average teleportation fidelity of unity without the need for weak measurement. In addition, by presenting a controlled teleportation protocol with the Bell state using both EAM and weak measurement, we achieved a significant improvement of the average teleportation fidelity in comparison with the original controlled teleportation under no protection. These results will contribute to the distribution of multi-qubit entanglement in noisy channels and the protection of quantum communication.

## Acknowledgments

This work was supported by the National Natural Science Foundation of China under Grants 61973290 and 61720106009, and Ministry of Science and Technology of P. R. China Program under the Grant QN2022200007L.

## Appendix A: Optimal Kraus decomposition of ADC

Only one Kraus operator of the ADC is invertible, which means that only one trajectory can be restored to the initial state after applying EAM on the ADC. Hence, it is worthwhile to explore how to select the Kraus decomposition to optimize the performance of the proposed TP-EW. This optimization problem can be quite challenging in general. However, in the following, we show how to simplify the problem and find the optimal Kraus decomposition of the ADC.
For each invertible Kraus operator $e_{i}$, we can define the weak measurement reversal operator as

$m_{i}=N_{i}e_{i}^{-1}$ (44)

where $N_{i}=\min\{\sqrt{\lambda_{i}}\}$ is the normalization factor, with $\lambda_{i}$ being the eigenvalues of the matrix $e_{i}e_{i}^{\dagger}$. Considering an arbitrary unknown state $|\phi\rangle$, we get

$m_{i}e_{i}|\phi\rangle=N_{i}|\phi\rangle,\quad\forall|\phi\rangle$ (45)

Hence, the unknown state $|\phi\rangle$ is faithfully recovered with success probability

$P_{e_{i}}=\sum\limits_{i}(N_{i})^{2}$ (46)

According to Eqs. (46) and (19), the total teleportation success probability is also determined by the Kraus decomposition. However, according to Theorem 8.2 of Ref. [40], the Kraus decomposition is not unique, i.e., an arbitrary linear combination of the Kraus operators in Eq. (7) is valid as long as it has the form

$F_{i}=\sum\limits_{j}v_{ij}e_{j}$ (47)

where the coefficients $v_{ij}$ form a unitary matrix $v\in{\bf{C}}^{2\times 2}$ and the $e_{j}$ are the Kraus operators of the ADC in Eq. (7). Generally, an arbitrary $2\times 2$ unitary matrix can be described as

$F_{\alpha,\beta,\gamma,\delta}=\begin{bmatrix}{\rm{e}}^{i(\alpha-\beta-\gamma)}\cos\delta&-{\rm{e}}^{i(\alpha-\beta+\gamma)}\sin\delta\\ {\rm{e}}^{i(\alpha+\beta-\gamma)}\sin\delta&{\rm{e}}^{i(\alpha+\beta+\gamma)}\cos\delta\end{bmatrix}$ (48)

According to Eq. (48), the success probability only depends on the eigenvalues of the matrix $F_{i}F_{i}^{\dagger}$; for instance,

$F_{1}F_{1}^{\dagger}=|v_{11}|^{2}e_{1}e_{1}^{\dagger}+|v_{12}|^{2}e_{2}e_{2}^{\dagger}+v_{11}v_{12}^{*}e_{1}e_{2}^{\dagger}+v_{12}v_{11}^{*}e_{2}e_{1}^{\dagger}$ (49)

To find the eigenvalues of $F_{1}F_{1}^{\dagger}$, we note that $\left|F_{1}F_{1}^{\dagger}-\lambda I_{2}\right|=\left(|v_{11}|^{2}+|v_{12}|^{2}|r|-\lambda\right)\left(|v_{11}|^{2}|1-r|-\lambda\right)-|v_{11}|^{2}|v_{12}|^{2}|r(1-r)|$ is independent of the phase factors $\alpha$, $\beta$ and $\gamma$; hence we only need to consider $\delta$.
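For the standard ADC decomposition (assuming the Kraus pair $e_{0}={\rm{diag}}(1,\sqrt{1-r})$ and $e_{1}=[0,\sqrt{r};0,0]$ of Eq. (7)), the sketch below evaluates Eqs. (44)–(46): only $e_{0}$ is invertible, its reversal reproduces $m_{0}$ of Eq. (15), and the recovery success probability is $1-r$.

```python
import numpy as np

r = 0.3                                              # illustrative decay rate
e0 = np.diag([1.0, np.sqrt(1 - r)])
e1 = np.array([[0.0, np.sqrt(r)], [0.0, 0.0]])       # not invertible: no reversal exists

lam = np.linalg.eigvalsh(e0 @ e0.conj().T)           # eigenvalues of e0 e0^dagger
N0 = np.sqrt(lam.min())                              # normalization factor in Eq. (44)
m0 = N0 * np.linalg.inv(e0)                          # Eq. (44); reproduces m0 of Eq. (15)
assert np.allclose(m0, np.diag([np.sqrt(1 - r), 1.0]))

phi = np.array([0.8, 0.6])                           # an arbitrary test state
assert np.allclose(m0 @ (e0 @ phi), N0 * phi)        # Eq. (45)
print("N0^2 =", round(N0**2, 4), " (recovery success probability, Eq. (46))")
```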
Next, we only consider the single-parameter transformation matrix

$F_{\delta}=\begin{bmatrix}\cos\delta&\sin\delta\\ \sin\delta&\cos\delta\end{bmatrix}$ (50)

We plot the total teleportation success probability for the different Kraus decompositions obtained from Eqs. (47) and (50), varying $r\in[0,1]$ and $\delta\in[0,2\pi]$, in Fig. 5.

Figure 5: Total teleportation success probability for all possible Kraus decompositions.

As Fig. 5 demonstrates, for all values of $r$ the highest success probability occurs at $\delta=n\pi/2$ with $n\in\bf{Z}$. Thus, it can be concluded that the Kraus decomposition in Eq. (7) is the optimal choice to achieve the highest teleportation success probability.

## Appendix B: Original teleportation protocol with W state through ADC under no protection

Here, we calculate the average teleportation fidelity of the original teleportation protocol with the W state through an ADC under no protection. In this case, the shared W entangled state in Eq. (6), after passing through the ADC, is

$|{\rm{W}}^{E_{0,1}^{\rm{W}}}\rangle_{123}=E_{0}^{\rm{W}}|{\rm{W}}\rangle_{123}+E_{1}^{\rm{W}}|{\rm{W}}\rangle_{123}$ (51)

where $E_{0}^{\rm{W}}$ and $E_{1}^{\rm{W}}$ are the applied Kraus operators given in Eq. (8). By following the original teleportation protocol with the W state and applying the unitary operators in the last step, the state of Bob's qubit corresponding to Alice's different measurement outcomes becomes

$\rho_{U_{i}}^{\rm{W}}=\left\{\begin{array}{ll}\frac{1}{4g_{U_{i}}^{\rm{W}}}\begin{bmatrix}|\alpha|^{2}+|\beta|^{2}r&\alpha\beta^{*}\sqrt{1-r}\\ \alpha^{*}\beta\sqrt{1-r}&|\beta|^{2}(1-r)\end{bmatrix},&i=1,2\\ \frac{1}{4g_{U_{i}}^{\rm{W}}}\begin{bmatrix}|\alpha|^{2}(1-r)&\alpha\beta^{*}\sqrt{1-r}\\ \alpha^{*}\beta\sqrt{1-r}&|\alpha|^{2}r+|\beta|^{2}\end{bmatrix},&i=3,4\end{array}\right.$ (52)

where $g_{U_{i}}^{\rm{W}}=1/4\ (i=1,2,3,4)$ are the probabilities of obtaining the output states $\rho_{U_{i}}^{\rm{W}}$.
In order to calculate the average teleportation fidelity, we also need the probabilities $P_{i}^{\rm{W(ori)}}$ of obtaining the measurement outcome corresponding to each measurement operator $\phi_{i}$ in Eq. (13). Since no result is discarded in the teleportation process, these probabilities are equal to those of obtaining the output states $\rho_{U_{i}}^{\rm{W}}$, i.e.,

$P_{i}^{\rm{W(ori)}}=g_{U_{i}}^{\rm{W}}\quad\text{for}\ i=1,2,3,4$ (53)

The teleportation fidelity corresponding to Alice's different measurement results is

${\rm{fid}}_{i}^{\rm{W(ori)}}=\langle\psi_{\rm{in}}|\rho_{U_{i}}^{\rm{W}}|\psi_{\rm{in}}\rangle=\left\{\begin{array}{ll}|\alpha|^{4}+|\beta|^{4}(1-r)+|\alpha|^{2}|\beta|^{2}(r+2\sqrt{1-r}),&i=1,2\\ |\beta|^{4}+|\alpha|^{4}(1-r)+|\alpha|^{2}|\beta|^{2}(r+2\sqrt{1-r}),&i=3,4\end{array}\right.$ (54)

Hence, the average teleportation fidelity of the original teleportation with the W state through an ADC under no protection is calculated as

${\rm{Fid}}_{\rm{av}}^{\rm{W(ori)}}=\int_{0}^{1}\sum\limits_{i=1}^{4}P_{i}^{\rm{W(ori)}}\,{\rm{fid}}_{i}^{\rm{W(ori)}}\,{\rm{d}}|\alpha|^{2}=\frac{1}{30}(8\sqrt{1-r}+22-7r)$ (55)

## Appendix C: TP-EW with Bell state through ADC

Since only the second qubit of the Bell state passes through the noisy channel, the applied Kraus operators for the whole 2-qubit shared entangled state are

$E_{0}^{\rm{Bell}}=I_{2}\otimes e_{0},\quad E_{1}^{\rm{Bell}}=I_{2}\otimes e_{1}$ (56)

where the $e_{i}$ are the Kraus operators of the ADC in Eq. (7). Bob now applies EAM to the ADC and informs Alice of the result.
If the result of the EAM corresponds to the invertible Kraus operator $E_{0}^{\rm{Bell}}$, the quantum channel between Alice and Bob is successfully established and is described by

$\rho_{\rm{ab}}^{E_{0}^{\rm{Bell}}}=\frac{E_{0}^{\rm{Bell}}|\psi\rangle_{\rm{ab}}{}_{\rm{ab}}\langle\psi|(E_{0}^{\rm{Bell}})^{\dagger}}{{}_{\rm{ab}}\langle\psi|(E_{0}^{\rm{Bell}})^{\dagger}E_{0}^{\rm{Bell}}|\psi\rangle_{\rm{ab}}}\buildrel\Delta\over{=}\frac{1}{2g_{\rm{EAM}}^{\rm{Bell}}}\begin{bmatrix}1&0&0&\sqrt{1-r}\\ 0&0&0&0\\ 0&0&0&0\\ \sqrt{1-r}&0&0&1-r\end{bmatrix}$ (57)

where $g_{\rm{EAM}}^{\rm{Bell}}\buildrel\Delta\over{=}{}_{\rm{ab}}\langle\psi|(E_{0}^{\rm{Bell}})^{\dagger}E_{0}^{\rm{Bell}}|\psi\rangle_{\rm{ab}}=1-r/2$ denotes the success probability of entanglement distribution before TP-EW with the Bell state; its value is equal to that before TP-EW with the W state in Eq. (10). Alice then interacts the input qubit with her half of the entangled pair. Thus, the state of the 3-qubit system consisting of the input qubit and the shared entanglement is described as

$\rho_{\rm{tot}}^{E_{0}^{\rm{Bell}}}=\rho_{\rm{in}}\otimes\rho_{\rm{ab}}^{E_{0}^{\rm{Bell}}}$ (58)

where $\rho_{\rm{in}}=|\psi_{\rm{in}}\rangle\langle\psi_{\rm{in}}|$ is the density matrix of the input state $|\psi_{\rm{in}}\rangle$ in Eq. (5). Then, she makes a joint Bell state measurement on her two qubits (the input state and her share of the entangled state) with measurement operators $|b_{i}\rangle\langle b_{i}|$, where the $b_{i}$ are defined as

$|b_{1}\rangle=\frac{1}{\sqrt{2}}(|00\rangle+|11\rangle),\quad|b_{2}\rangle=\frac{1}{\sqrt{2}}(|00\rangle-|11\rangle),\quad|b_{3}\rangle=\frac{1}{\sqrt{2}}(|01\rangle+|10\rangle),\quad|b_{4}\rangle=\frac{1}{\sqrt{2}}(|01\rangle-|10\rangle)$ (59)

and the measurement operators applied on the whole 3-qubit system in Eq. (58) are constructed as

$B_{i}=|b_{i}\rangle\langle b_{i}|\otimes I_{2},\quad i=1,2,3,4$ (60)

After applying the joint measurement, Alice sends the measurement result to Bob through a classical channel.
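The construction of Eqs. (57)–(60) can be checked directly; the sketch below (illustrative $\alpha$, $\beta$, $r$) builds the post-EAM resource, confirms $g_{\rm{EAM}}^{\rm{Bell}}=1-r/2$, and evaluates Alice's outcome probabilities, which coincide with the W-state expressions of Eq. (14).

```python
import numpy as np

alpha, beta, r = 0.6, 0.8, 0.3                         # illustrative parameters
e0 = np.diag([1.0, np.sqrt(1 - r)])
E0_Bell = np.kron(np.eye(2), e0)                       # Eq. (56)

bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
num = E0_Bell @ np.outer(bell, bell) @ E0_Bell.conj().T
g_EAM_Bell = np.trace(num).real
assert np.isclose(g_EAM_Bell, 1 - r / 2)
rho_ab = num / g_EAM_Bell                              # Eq. (57)

rho_tot = np.kron(np.outer([alpha, beta], [alpha, beta]), rho_ab)   # Eq. (58)
s = 1 / np.sqrt(2)
b = [np.array([s, 0, 0, s]), np.array([s, 0, 0, -s]),
     np.array([0, s, s, 0]), np.array([0, s, -s, 0])]               # Eq. (59)
for i, bi in enumerate(b, start=1):
    B = np.kron(np.outer(bi, bi), np.eye(2))                        # Eq. (60)
    print(f"P_{i} = {np.trace(B @ rho_tot @ B.conj().T).real:.4f}")
# Expected: (1 - |beta|^2 r)/(4 - 2r) for i = 1, 2 and (1 - |alpha|^2 r)/(4 - 2r) for i = 3, 4.
```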
Thus, Bob knows that the non-normalized state of his qubit is described as

$\rho_{{b_{i}}}^{{\rm{Bell}}}={\rm{T}}{{\rm{r}}_{{\rm{in,a}}}}\left({{B_{i}}\rho_{{\rm{tot}}}^{E_{0}^{{\rm{Bell}}}}B_{i}^{\dagger}}\right),\qquad i=1,2,3,4$ (61)

where ${\rm{T}}{{\rm{r}}_{{\rm{in,a}}}}(\bullet)$ denotes the partial trace over the input qubit and the first qubit of the shared entangled pair, and the occurrence probability of each measurement outcome is calculated as

$P_{i}^{{\rm{Bell}}}={\rm{Tr}}\left({{B_{i}}\rho_{{\rm{tot}}}^{E_{0}^{{\rm{Bell}}}}B_{i}^{\dagger}}\right)=\begin{cases}\dfrac{1-|\beta|^{2}r}{4-2r},&i=1,2\\[1.5ex] \dfrac{1-|\alpha|^{2}r}{4-2r},&i=3,4\end{cases}$ (62)

Finally, as shown in Table 2, Bob applies the corresponding weak measurement operators to his qubit according to Alice's measurement outcomes, where the definition of the $M_{i}$ is the same as that in Table 1.

Table 2: Alice's measurement results and corresponding Bob's weak measurement operators to recover damped states in TP-EW with the Bell state.
Alice's result | non-normalized state of Bob's qubit | Bob's weak measurement operator
---|---|---
$|{b_{1}}\rangle$ | $|\psi_{{b_{1}}}^{\rm{Bell}}\rangle=\frac{1}{\sqrt{4-2r}}(\alpha|0\rangle_{3}+\beta\sqrt{1-r}|1\rangle_{3})$ | ${M_{1}}={U_{1}}[\sqrt{1-q}|0\rangle\langle 0|+|1\rangle\langle 1|]$
$|{b_{2}}\rangle$ | $|\psi_{{b_{2}}}^{\rm{Bell}}\rangle=\frac{1}{\sqrt{4-2r}}(\alpha|0\rangle_{3}-\beta\sqrt{1-r}|1\rangle_{3})$ | ${M_{2}}={U_{2}}[\sqrt{1-q}|0\rangle\langle 0|+|1\rangle\langle 1|]$
$|{b_{3}}\rangle$ | $|\psi_{{b_{3}}}^{\rm{Bell}}\rangle=\frac{1}{\sqrt{4-2r}}(\beta|0\rangle_{3}+\alpha\sqrt{1-r}|1\rangle_{3})$ | ${M_{3}}={U_{3}}[\sqrt{1-q}|0\rangle\langle 0|+|1\rangle\langle 1|]$
$|{b_{4}}\rangle$ | $|\psi_{{b_{4}}}^{\rm{Bell}}\rangle=\frac{1}{\sqrt{4-2r}}(\beta|0\rangle_{3}-\alpha\sqrt{1-r}|1\rangle_{3})$ | ${M_{4}}={U_{4}}[\sqrt{1-q}|0\rangle\langle 0|+|1\rangle\langle 1|]$

Therefore, after applying the WM operators in Table 2, the output states of Bob's qubit corresponding to the different measurement outcomes of Alice are calculated as

$\rho_{{M_{i}}}^{{\rm{Bell}}}=\frac{{M_{i}}|\psi_{{b_{i}}}^{{\rm{Bell}}}\rangle\langle\psi_{{b_{i}}}^{{\rm{Bell}}}|M_{i}^{\dagger}}{\langle\psi_{{b_{i}}}^{{\rm{Bell}}}|M_{i}^{\dagger}{M_{i}}|\psi_{{b_{i}}}^{{\rm{Bell}}}\rangle}=\begin{cases}\dfrac{1}{|\alpha|^{2}(1-q)+|\beta|^{2}(1-r)}\left[{\begin{array}{cc}|\alpha|^{2}(1-q)&\alpha\beta^{*}\sqrt{1-q}\sqrt{1-r}\\ \alpha^{*}\beta\sqrt{1-q}\sqrt{1-r}&|\beta|^{2}(1-r)\end{array}}\right],&i=1,2\\[3ex] \dfrac{1}{|\alpha|^{2}(1-r)+|\beta|^{2}(1-q)}\left[{\begin{array}{cc}|\alpha|^{2}(1-r)&\alpha\beta^{*}\sqrt{1-q}\sqrt{1-r}\\ \alpha^{*}\beta\sqrt{1-q}\sqrt{1-r}&|\beta|^{2}(1-q)\end{array}}\right],&i=3,4\end{cases}$ (63)

where the $|\psi_{{b_{i}}}^{{\rm{Bell}}}\rangle$ are the non-normalized states of Bob's qubit corresponding to Alice's different measurement results, the $M_{i}$ are the corresponding weak measurement operators given in Table 2, and $g_{{M_{i}}}^{\rm{Bell}}\buildrel\Delta\over{=}\langle\psi_{{b_{i}}}^{\rm{Bell}}|M_{i}^{\dagger}{M_{i}}|\psi_{{b_{i}}}^{\rm{Bell}}\rangle$ is the success probability of obtaining the state $\rho_{{M_{i}}}^{{\rm{Bell}}}$,

$g_{{M_{i}}}^{\rm{Bell}}=\begin{cases}\dfrac{1}{4-2r}\left(|\alpha|^{2}(1-q)+|\beta|^{2}(1-r)\right),&i=1,2\\[1.5ex] \dfrac{1}{4-2r}\left(|\alpha|^{2}(1-r)+|\beta|^{2}(1-q)\right),&i=3,4\end{cases}$ (64)

Therefore, the total teleportation success probability of TP-EW with the Bell state can be defined as

$g_{{\rm{tot}}}^{{\rm{TP-EW}}}=\sum\limits_{i=1}^{4}{g_{{M_{i}}}^{\rm{Bell}}}=1-\frac{q}{2-r}$ (65)

Moreover, the fidelity between the input state in Eq. (5) and the output state of TP-EW with the Bell state in Eq.
(63) is

${\rm{fid}}_{i}^{\rm{Bell}}=\langle{\psi_{{\rm{in}}}}|\rho_{{M_{i}}}^{\rm{Bell}}|{\psi_{{\rm{in}}}}\rangle=\begin{cases}\dfrac{|\beta|^{4}(1-r)+|\alpha|^{4}(1-q)+2|\alpha|^{2}|\beta|^{2}\sqrt{1-q}\sqrt{1-r}}{|\alpha|^{2}(1-q)+|\beta|^{2}(1-r)},&i=1,2\\[2.5ex] \dfrac{|\beta|^{4}(1-q)+|\alpha|^{4}(1-r)+2|\alpha|^{2}|\beta|^{2}\sqrt{1-q}\sqrt{1-r}}{|\alpha|^{2}(1-r)+|\beta|^{2}(1-q)},&i=3,4\end{cases}$ (66)

Therefore, the average teleportation fidelity of TP-EW with the Bell state over all possible input states is defined as

${\rm{Fid}}_{{\rm{av}}}^{{\rm{TP-EW}}}=\int_{0}^{1}{\sum\limits_{i=1}^{4}{P_{i}^{\rm{Bell}}\,{\rm{fid}}_{i}^{\rm{Bell}}}\,{\rm{d}}|\alpha|^{2}}$ (67)

By comparing Eqs. (64)–(67) with Eqs. (18)–(21), it can be seen that all performance indicators of TP-EW with the Bell state are exactly the same as those of TP-EW with the W state.
# Markov Chain Approaches to Payoff Optimization in the Self-Organizing Network Coloring Game

Chen Zeyi <EMAIL_ADDRESS>OR<EMAIL_ADDRESS>Division of Mathematics School of Physical & Mathematical Sciences Nanyang Technological University Singapore

###### Abstract

The model of the Network Coloring Game (NCG) is used to simulate conflict-resolving and consensus-reaching procedures in social science. In this work, we apply Markov chain techniques to the investigation of the NCG. Firstly, when at least $\Delta+2$ colors are provided, we prove that the conflict resolving time has expectation $O(\log n)$ and variance $O((\log n)^{2})$, and is therefore $O_{p}(\log n)$, where $n$ is the number of vertices and $\Delta$ is the maximum degree of the network. This is done by introducing an absorbing Markov chain into the NCG. Secondly, we develop algorithms to reduce the network during post-conflict-resolution adjustments when a Borda rule is applied among players. Markov Chain Monte Carlo (MCMC) methods are employed to estimate both local and global optimal payoffs. Supporting experimental results are given to illustrate the corresponding procedures.

###### Index Terms:

Network Coloring Game, Social Choices, Absorbing Markov Chain, Monte Carlo Simulation, Color Sampling

## I INTRODUCTION

The setting of a Network Coloring Game (NCG) was first introduced by Kearns et al. [13]: with merely local information in hand, professors in a faculty need to adjust their choices until they agree on a non-conflicting classroom assignment, in which classes with timetable clashes are held in different venues. There is no administrative staff (no so-called central agent) in the setting, and no predetermined protocol or rules are assumed. The detailed terminology of this NCG is as follows. Regarding professors (or players) as vertices $V_{i}$ $(i=1,2,\cdots,n)$, an edge $e_{j}$ $(j=1,2,\cdots,m)$ is drawn between any two vertices that "conflict" (i.e. professors having classes with timetable clashes). Let $G(V,E)$ denote the graph or network, where $V$ = ($V_{1}$, $V_{2}$, $\cdots$, $V_{n}$) and $E$ = ($e_{1}$, $e_{2}$, $\cdots$, $e_{m}$). In total there are $q$ available colors, representing different venues. The game runs in discrete rounds; in each round, each player $i$ uniformly selects a color $l_{i}$ from the colors that were not used by his neighbors in the previous round. Denoting the neighborhood of player $i$ by $N(i)$, the payoff of $i$ is given by the indicator variable

$U_{i}=\begin{cases}1&\text{if $l_{i}\neq l_{j}$ for all $j\in N(i)$}\\ 0&\text{otherwise}\end{cases}$

and every individual aims to maximize his personal payoff, so that the entire system reaches a Nash equilibrium. The distinction between this NCG and classical graph coloring problems lies in the absence of a central agent who can search over all possible assignments before finding the optimal one (i.e. this NCG is self-organizing). The NCGs mentioned in this report all refer to the self-organizing ones. In fact, such a Network Coloring Game better imitates real-life cases in which individual agents pursue conflict resolution, or consensus, without an omniscient view. The game-theoretical nature is also reflected in the fact that each agent behaves greedily (exhausting all available colors as options in each round) and selfishly (caring only about his own payoff). In this work, we bring Markov chain models into the analysis of the NCG's dynamic features, in two main parts.
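To fix the dynamics concretely, the following minimal Python sketch simulates one synchronous round of the greedy and selfish strategy under one common reading of the rule (satisfied players keep their color, unsatisfied players re-pick uniformly among colors unused by their neighbors in the previous round); the toy graph and the stopping loop are illustrative assumptions only.

```python
import random

def play_round(adj, colors, q):
    """One synchronous NCG round under the greedy/selfish strategy."""
    new_colors = dict(colors)
    for v, nbrs in adj.items():
        if all(colors[v] != colors[u] for u in nbrs):
            continue  # payoff 1: no incentive to change
        forbidden = {colors[u] for u in nbrs}
        available = [c for c in range(q) if c not in forbidden]
        if available:  # non-empty whenever q >= deg(v) + 1
            new_colors[v] = random.choice(available)
    payoffs = {v: int(all(new_colors[v] != new_colors[u] for u in adj[v]))
               for v in adj}
    return new_colors, payoffs

# Toy run on a 4-cycle with q = Delta + 2 = 4 colors, starting from an
# all-conflicting assignment.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
colors, rounds = {v: 0 for v in adj}, 0
while not all(colors[v] != colors[u] for v in adj for u in adj[v]):
    colors, _ = play_round(adj, colors, 4)
    rounds += 1
print(rounds, colors)
```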
We first derive upper bounds for the expectation and variance of the conflict resolving time of the NCG (see Theorem 1 and Theorem 2) by introducing an absorbing Markov chain (AMC). Theorem 3 explains why the conflict resolving time is $O_{p}(\log n)$. This can be regarded as an extension of the result in [3] (Proposition 3). The other part is a discussion of the post-conflict-resolution adjustments made by players, for which MCMC-aided algorithms to reduce the network and to detect both "local" and "global" optimal situations are proposed. Several simulation experiments were conducted to validate the correctness and effectiveness of the algorithms.

## II LITERATURE REVIEW

### II-A Self-organizing Network Coloring Game

The topic of network (graph) coloring has a long history in graph theory, where the aim is to find the least number of colors, the so-called chromatic number, that admits a proper coloring of a graph. This problem is well known to be NP-hard. The concept of the previously introduced self-organizing Network Coloring Game (NCG) originated from an experimental study conducted by Kearns et al. [13] to model the class scheduling procedure among faculty members. It was later rigorously modelled as a combinatorial optimization problem by Chaudhuri et al. [3]. When the number of available colors $q$ is at least two more than the maximum degree $\Delta$, they proved that any individual player can resolve its conflict within two successive rounds with positive probability, under the greedy and selfish strategies. Moreover, the number of steps before the whole system reaches a proper coloring is upper bounded by $O(\log(\frac{n}{\epsilon}))$ for large $n$ and arbitrarily small $\epsilon$. A paper by Pelekis and Schauer [14] substantially improved the upper bound by introducing the idea of search games. More recently, Fryganiotis et al. [7] reduced the required number of colors $q$ to $\Delta+1$ by employing a modified version of the aforementioned greedy strategy. It is remarkable that a $q$-proper coloring provides a Nash equilibrium of the NCG when $q\geq\Delta+1$, in which case each player $i$ can choose a color different from those of its neighbors, thus achieving $U_{i}=1$. Therefore, a Nash equilibrium of the game must satisfy $U_{v}=1$ for $v=1,\cdots,n$, i.e. the color assignment must be a $q$-proper coloring.

### II-B Markov Chain (MC) in Graph Coloring Sampling

Aiming to approximately count the number of $k$-colorings of a graph, Mark Jerrum [11], as the pioneer, converted this counting problem into estimating the mixing time of a Markov chain. Inspired by the Glauber dynamics of statistical physics, he presented an approach to randomly sampling colorings of graphs with maximum degree $\Delta$ in $O(n\log n)$ time with at least $2\Delta+1$ colors provided. Vigoda [17] later proved that $\frac{11}{6}\Delta$ colors suffice for the $O(n\log n)$ bound. Further results were developed for graphs with special features ([9], [8], [4]). Most recently, Vigoda's result on general graphs was improved by Chen et al. [5], who proved that the chains are rapidly mixing in $O(n\log n)$ time when there are at least $(\frac{11}{6}-\epsilon_{0})\Delta$ colors, where $\epsilon_{0}$ is a small positive constant, using a linear programming approach.

### II-C Metropolis-Hastings Algorithm & Simulated Annealing

The Metropolis-Hastings algorithm was first introduced by N. Metropolis et al. and later generalized by W. K. Hastings [10].
It is an MCMC method that aims to approximately sample from some "target" distribution which is difficult to sample from directly, with the aid of some "proposal" distribution. The procedure is:

1. Identify a target distribution $\pi$ on the state space $\mathds{S}$.
2. Choose an irreducible transition matrix $Q$ (i.e. the proposal distribution) on the state space with probabilities $Q_{ij}$, $i,j=1,\cdots,m$.
3. Give the initial state $k$, $0\leq k\leq m$.
4. Define an acceptance rate $\alpha(i,j)=\frac{\pi_{j}Q_{ji}}{\pi_{i}Q_{ij}}$.
5. Define an adjusted transition matrix $P$, $P_{ij}=\begin{cases}0&\text{for }Q_{ij}=0,i\neq j\\ Q_{ij}\min[1,\alpha(i,j)]&\text{for }Q_{ij}\neq 0,i\neq j\\ 1-\sum_{j\neq i}P_{ij}&\text{for }i=j\end{cases}$
6. Running the chain with $P$ for a sufficiently large number of steps results in samples from the target distribution.

The simulated annealing method can be regarded as a variant of the Metropolis-Hastings algorithm [16]. Its objective is to conduct importance sampling from sets with large (small) objective values for some maximization (minimization) problem w.r.t. some $h:\mathds{S}\rightarrow\mathds{R}$. In other words, the transition should give larger probability to samples with large (small) objective values while giving small probability to small (large) ones. This is achieved by defining the target distribution as

$P_{\lambda}(x)=\frac{e^{\lambda h(x)}}{\sum_{t\in\mathds{S}}e^{\lambda h(t)}}$

and the acceptance rate as $r=\min(\alpha(i,j),1)$, where

$\alpha(i,j)=\frac{e^{\lambda h(j)}Q(j,i)}{e^{\lambda h(i)}Q(i,j)}$

Here $\lambda$ is a function over time, named the temperature parameter. There is a trade-off in the size of $\lambda$: if $\lambda$ is too small, the probability of sampling an optimal solution from the target distribution decreases; on the other hand, if $\lambda$ is too large, the process is likely to stay at some local extremum for a long time. For example, for a maximization problem, the process has an extremely low probability of overcoming a downhill state between some local maximum and the global optimal point. Therefore, appropriate schemes should be proposed for $\lambda$ to increase gradually with time.

## III Absorbing Markov Chain (AMC) in NCG

### III-A Preliminaries

In this section, we work with a special type of Markov chain called the absorbing Markov chain (AMC).

###### Definition 1 (Absorbing Markov Chain).

Given a Markov chain $(Z_{t})_{t\in\mathbb{N}}$, a state $i$ is called absorbing if $\Pr[Z_{t+k}=i\mid Z_{t}=i]=1$, $\forall k\in\mathbb{N}$; otherwise it is called transient. $(Z_{t})_{t\in\mathbb{N}}$ is defined to be absorbing if there exists at least one absorbing state that is accessible from any transient state.

In the context of the NCG, one may use a list of binary digits to represent the payoffs of the players after each round. The list has length $n$, so there are $2^{n}$ possible outcomes. Additionally, since it is impossible for all but one player to have payoff equal to 1, the total number of possible cases is reduced by $n$. The collection of these $2^{n}-n$ lists then forms a state space $S$ on which a Markov chain $(X_{t})_{t\in\mathbb{N}}$ can be run. Moreover, the following proposition shows that the chain is absorbing when the number of colours $q$ and the maximum degree of the graph $\Delta$ satisfy $q\geq\Delta+2$.

###### Proposition 1.

If $q\geq\Delta+2$, the Markov chain $(X_{t})_{t\in\mathbb{N}}$ defined in the NCG as above is absorbing.

###### Proof.

The proof is inspired by a comment in [3].
One may easily observe that the state $K=(1,1,\cdots,1)$ is absorbing, because nobody has any incentive to change his choice after being satisfied. All other states are transient because they may change in later rounds. Thus it suffices to prove that $K$ is accessible from every other state. Consider the probability of a payoff rise in successive rounds for a single vertex. Let $d$ denote the number of neighbors of an unsatisfied vertex $i$; thus at most $d$ colors become unavailable in the next round. Meanwhile, denote by $d_{1}$, $\cdots$, $d_{d}$ the numbers of neighbors of these neighbors. Suppose the vertex $i$ proposes the color $l$ in the second round; then only the neighbors that also have the color $l$ available for selection may keep $i$ from a payoff rise. Therefore, when $q\geq\Delta+2$,

$\begin{split}p_{i}&\geq\prod_{j=1}^{d}(1-\frac{1}{q-d_{j}})\\ &\geq(1-\frac{1}{q-\Delta})^{d}\\ &\geq(1-\frac{1}{q-\Delta})^{\Delta}\\ &\geq(1-\frac{1}{2})^{\Delta}\\ &>0\end{split}$ (1)

where $p_{i}=\Pr[U_{i}(t+1)=1\mid U_{i}(t)=0]$. Now we focus on the chain. Suppose $I$ and $\hat{I}$ differ only in vertex $i$: the former has $U_{i}$ = 0 while the latter has $U_{i}$ = 1. We call such a pair of lists "adjacent". By (1), we have

$\begin{split}\Pr[X_{t+1}=\hat{I}\mid X_{t}=I]&=p_{i}\prod_{j\neq i}(1-p_{j})\\ &>0\end{split}$ (2)

which gives a positive probability for the chain to land on an "adjacent" state with a payoff rise in a single element, wherever it starts. Therefore, every transient state has access to the absorbing state. ∎

By placing the transient states in front and the absorbing ones behind, the transition matrix of an AMC is often expressed in the following canonical form

$P=\begin{pmatrix}Q&R\\ 0&I\\ \end{pmatrix}$

where each row of the block matrix $Q$ corresponds to a transient state. In the setting of $(X_{t})_{t\in\mathbb{N}}$ in the NCG, $P$ is a $(2^{n}-n)\times(2^{n}-n)$ matrix, $I$ is the number 1, and $R$ is a $(2^{n}-n-1)\times 1$ vector with $R_{1}<R_{2}<\cdots<R_{2^{n}-n-1}$. As described, each state in $S$ is a binary list and thus represents a number. We construct the matrix $Q$ by ordering the states ascendingly in terms of the numbers they represent. For example, the 1st state is $(0,0,0,\cdots,0)$, the 5th state is $(0,0,\cdots,1,0,0)$, $\cdots$, and the $(2^{n}-n-1)$th state is $(1,1,\cdots,1,0,0)$. Note that this chain is not irreducible, because if a vertex has payoff 1 at round $t$, its payoff remains 1 in subsequent rounds. Therefore, it is impossible for the chain to jump from a "higher" state to any "lower" one. The structure of $Q$ then becomes

$\overset{*}{Q}=\begin{pmatrix}p_{1,1}&p_{1,2}&p_{1,3}&\cdots&p_{1,2^{n}-n-1}\\ 0&p_{2,2}&p_{2,3}&\cdots&p_{2,2^{n}-n-1}\\ 0&0&p_{3,3}&\cdots&p_{3,2^{n}-n-1}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 0&0&0&\cdots&p_{2^{n}-n-1,2^{n}-n-1}\end{pmatrix}$

which is upper-triangular. Then $P$ is upper-triangular as well. To see how the game converges, we are interested in the absorbing status of the chain in the long run.

###### Proposition 2.

If $q\geq\Delta+2$, then in the NCG where the greedy and selfish strategies are employed, all players will eventually have their payoffs equal to 1 and the game stops.

###### Proof.

Since $(1,1,\cdots,1)$ is the only absorbing state in the space $S$, it suffices to show that the chain will be absorbed and trapped at $(1,1,\cdots,1)$. This is shown in the following Lemma 1, which is a remarkable feature of general absorbing Markov chains. ∎

###### Lemma 1.
The process $(X_{t})_{t\in\mathbb{N}}$ with the transition matrix $P=\begin{pmatrix}Q&R\\\ 0&I\\\ \end{pmatrix}$ will be eventually absorbed when $t\to\infty$. ###### Proof. Suppose, from an initial state $i$, the chain requires at least $m_{i}$ rounds to reach an absorbing state and the corresponding probability is $p_{i}$ ($0<p_{i}<1$ because the chain is absorbing). Let $m=\max_{i}m_{i}$ and $p=\min_{i}p_{i}$. Then the probability that the chain does not access to the absorbing state after $km$ rounds is at most $(1-p)^{k}$, which converges to 0 when $k\to\infty$ (i.e. $\lim_{t\to\infty}Q^{t}=0$ ). Therefore, the chain will be absorbed in the long run. Additionally, since every row of $P^{n}$ sums up to 1, $\lim_{t\to\infty}P^{t}=\begin{pmatrix}0&0&0&\cdots&1\\\ 0&0&0&\cdots&1\\\ 0&0&0&\cdots&1\\\ \vdots&\vdots&\vdots&\vdots&\vdots\\\ 0&0&0&\cdots&1\\\ \end{pmatrix}$ ∎ Chaudhuri et al.[3] proposed and rigorously proved the $O(log(\frac{n}{\delta}))$ converging time of the conflict resolving in NCG, which are stated in the following Proposition 3. ###### Proposition 3 ([3]). For $q\geq\Delta+2$, an NCG will converge in $O(log(\frac{n}{\delta}))$ rounds with probability at least $1-\delta$, when the greedy and selfish strategies are applied. ###### Proof. Here we give the sketch of the proof given in [3]. The key idea is built on the result of Lemma 2, which shows the possibility for any individual conflict resolving in two successive steps. Then $\begin{split}\Pr[U_{i}(2\tau)=0\mid U_{i}(0)=0]&=\prod_{t=1}^{\tau}\Pr[U_{i}(2t)=0\mid\bigcap_{s=1}^{t-1}U_{i}(2s)=0]\\\ &\leq(1-c)^{\tau}\\\ &\leq e^{-c\tau}\end{split}$ (3) Let $\tau=\frac{1}{c}\ln(\frac{n}{\delta})$, $\Pr[U_{i}(2\tau)=1\mid U_{i}(0)=0]\geq 1-\frac{\delta}{n}$ (4) Taking an union bound over all vertices, we have $\begin{split}\Pr[T\leq 2\tau]&\geq(1-\frac{\delta}{n})^{n}\\\ &\geq 1-\delta\end{split}$ (5) where $T$ is the time to convergence in NCG, and $\Pr[T>2\tau]\leq\delta.$ ∎ ###### Lemma 2. For $q\geq\Delta+2$, $\Pr[U_{i}(t+2)=1\mid U_{i}(t)=0]\geq c$ (6) where $c=\frac{1}{1050e^{9}}$. The proof of Lemma 2 can be found in Lemma 3 of [3], where tools like Markov’s Inequality and the convexity of exponential function were used to construct inequalities. We omit the details here. Besides the upper bound of the total number of rounds before convergence, we continue to consider the expected number of rounds as well as the corresponding variation. In preparation for our final proposition, we introduce the following Proposition 4 describing partial properties of AMC with the canonical transition matrix, extracted from [12] . ###### Proposition 4. For an AMC with transition matrix $P$ in canonical form, then 1. 1. $(I-Q)^{-1}$ exists. 2. 2. $\lim_{t\to\infty}P^{t}=$ $\begin{pmatrix}0&(I-Q)^{-1}R\\\ 0&I\\\ \end{pmatrix}$ 3. 3. Suppose the number of states is $k$. Define a $k\times 1$ column vector $n$ whose i-th element $n_{i}$ denotes the expected number of steps before absorption, given the initial state $i$. Then $n=(I-Q)^{-1}\mathds{1}$ where $\mathds{1}=(1,1,\cdots,1)_{k}^{\intercal}$. ###### Proof. The first part of Proposition 4 is equivalent to "if $(I-Q)x=0$, then $x=0$". This is easy to prove as $(I-Q)x=0\iff x=Qx\iff x=\lim_{t\to\infty}Q^{t}x\iff x=0$ (7) Let $N=(I-Q)^{-1}$, then $N=\sum_{t=0}^{\infty}Q^{t}$ by Taylor’s formula. 
Notice that the $t$-th power of the transition matrix is $P^{t}=\begin{pmatrix}Q^{t}&(I+Q+\cdots+Q^{t-1})R\\ 0&I\\ \end{pmatrix}$, so that

$\lim_{t\to\infty}P^{t}=\begin{pmatrix}0&(\sum_{t=0}^{\infty}Q^{t})R\\ 0&I\\ \end{pmatrix}=\begin{pmatrix}0&NR\\ 0&I\\ \end{pmatrix}$ (8)

Denote by $X_{ij}$ the number of times the chain hits state $j$ starting from $i$, and by $X_{ij}^{(l)}$ the indicator of arriving at state $j$ in the $l$th step; then

$\begin{split}\mathds{E}(X_{ij})&=\sum_{l=0}^{\infty}\mathds{E}{(X_{ij}^{(l)})}\\ &=I_{ij}+Q_{ij}+Q_{ij}^{2}+Q_{ij}^{3}+\cdots\\ &=N_{ij}\end{split}$ (9)

Since $n_{i}=\mathds{E}(X_{i})$, where $X_{i}$ denotes the number of steps before absorption given initial state $i$, we have

$\begin{split}n_{i}&=\mathds{E}(\sum_{j=1}^{k}X_{ij})\\ &=\sum_{j=1}^{k}N_{ij}\\ &=[N\mathds{1}]_{i}\end{split}$ (10)

∎

Proposition 4 implies an approach to estimating the expected number of steps of the NCG using the probabilities of not being trapped in the absorbing state in each single round; motivated by this, we give the following results.

### III-B Main Results

The result given in [3] involves a probability when stating the upper bound, which leaves some possibility for the conflict-resolving time to exceed the bound. The following Theorem 1, though focusing on the expected number of steps, gives a more deterministic result.

###### Theorem 1.

For $q\geq\Delta+2$, the NCG is expected to converge in $O(\log(n))$ rounds, where $n$ is the number of vertices in the network.

###### Proof.

We first let the previously defined chain in the NCG jump two steps at a time and then double the number of steps. Let $\overset{*}{N}$ denote $(I-\overset{*}{Q^{2}})^{-1}$; then $\overset{*}{N}$ is also upper-triangular. By the third property in Proposition 4,

$2\overset{*}{N}\mathds{1}\geq\overset{*}{n}$ (11)

where $\overset{*}{n}=(\overset{*}{n_{1}},\overset{*}{n_{2}},\cdots,\overset{*}{n_{2^{n}-n-1}})^{\intercal}$, i.e.

$\begin{split}\overset{*}{n_{i}}&\leq 2\sum_{\tau=0}^{\infty}\sum_{j\geq i}^{2^{n}-n-1}{Q^{2}}_{ij}^{(\tau)}\\ &=2\sum_{\tau=0}^{\infty}\Pr[T>2\tau]\end{split}$ (12)

Following the notation of Proposition 3, we have $\delta=ne^{-c\tau}$ for large $\tau$ satisfying $\delta<1$ ($c$ is constant). Then

$\begin{split}\sum_{\tau=0}^{\infty}\Pr[T>2\tau]&\leq 1\times\lceil\frac{\ln{n}}{c}\rceil+\sum_{\tau=\lceil\frac{\ln{n}}{c}\rceil}^{\infty}ne^{-c\tau}\\ &=\lceil\frac{\ln{n}}{c}\rceil+n\sum_{\tau=\lceil\frac{\ln{n}}{c}\rceil}^{\infty}e^{-c\tau}\\ &\leq\lceil\frac{\ln{n}}{c}\rceil+n(\frac{1}{n}\frac{1}{1-e^{-c}})\\ &=O(\log(n))\end{split}$ (13)

Therefore, $\overset{*}{n_{i}}$ is $O(\log(n))$ for $i=1,2,\cdots,2^{n}-n-1$. ∎

Remark. The significance of Theorem 1 is that it removes the residual probability present in Proposition 3 when estimating the convergence (absorbing) time of the NCG (the NCG chain). As for the variance, we estimate it by conducting a first-step analysis on the first migration of the chain.

###### Theorem 2.

For $q\geq\Delta+2$, the number of rounds before convergence in the NCG has its variance bounded by $O((\log n)^{2})$, whatever the initial state is.

###### Proof.

Denote by $X_{i}$ the number of steps starting from the state $i$. Let $r=(Var(X_{1}),Var(X_{2}),\cdots,Var(X_{2^{n}-n-1}))^{\intercal}$.
Then

$\begin{split}r_{i}=Var(X_{i})&=\mathds{E}[X_{i}^{2}]-\mathds{E}^{2}[X_{i}]\\ &=\mathds{E}(X_{i}^{2})-\overset{*}{n_{i}}^{2}\end{split}$ (14)

By first-step analysis,

$\begin{split}\mathds{E}[X_{i}^{2}]&=\sum_{k=1}^{2^{n}-n}\overset{*}{P}_{ik}\mathds{E}[(1+X_{k})^{2}]\\ &=\sum_{k=1}^{2^{n}-n}\overset{*}{P}_{ik}\mathds{E}[X_{k}^{2}+2X_{k}+1]\\ &=1+2\sum_{k=1}^{2^{n}-n}\overset{*}{P}_{ik}\mathds{E}[X_{k}]+\sum_{k=1}^{2^{n}-n}\overset{*}{P}_{ik}\mathds{E}[X_{k}^{2}]\end{split}$ (15)

Since $\mathds{E}[X_{2^{n}-n}^{2}]=\mathds{E}[X_{2^{n}-n}]=0$, the column vector $\mathds{E}[X^{2}]=(\mathds{E}[X_{1}^{2}],\mathds{E}[X_{2}^{2}],\cdots,\mathds{E}[X_{2^{n}-n-1}^{2}])^{\intercal}$ can be expressed as

$\mathds{E}[X^{2}]=\mathds{1}+2\overset{*}{Q}\overset{*}{n}+\overset{*}{Q}\mathds{E}[X^{2}]$ (16)

Rearranging the terms, we have

$\begin{split}\mathds{E}[X^{2}]&=(I-\overset{*}{Q})^{-1}(\mathds{1}+2\overset{*}{Q}\overset{*}{n})\\ &=\overset{*}{n}+2(I-\overset{*}{Q})^{-1}\overset{*}{Q}\overset{*}{n}\\ &=\overset{*}{n}+2(\overset{*}{Q}+\overset{*}{Q^{2}}+\cdots)\overset{*}{n}\\ &=\overset{*}{n}+2[(I-\overset{*}{Q})^{-1}-I]\overset{*}{n}\\ &=[2(I-\overset{*}{Q})^{-1}-I]\overset{*}{n}\\ &\leq 2(I-\overset{*}{Q})^{-1}\mathds{1}O(\log(n))-\overset{*}{n}\\ &=(2O(\log(n))-1)\overset{*}{n}\end{split}$ (17)

Thus,

$Var(X_{i})<\mathds{E}[X^{2}]_{i}=O((\log(n))^{2})$ (18)

for any initial state $i$. ∎

###### Corollary 1.

For $q\geq\Delta+2$, the number of steps before convergence in the NCG is, with probability tending to 1, no larger than the number of vertices $n$ as $n$ grows arbitrarily large.

###### Proof.

By Theorem 1, say $\mathds{E}[X_{i}]\leq C\log(n)$ for some constant $C$; then

$\begin{split}\lim_{n\to\infty}\Pr[X_{i}\geq n]&=\lim_{n\to\infty}\Pr[X_{i}-C\log(n)\geq n-C\log(n)]\\ &\leq\lim_{n\to\infty}\Pr[X_{i}-\mathds{E}[X_{i}]\geq n-C\log(n)]\\ &\leq\lim_{n\to\infty}\frac{Var(X_{i})}{(n-C\log(n))^{2}}\\ &\leq\lim_{n\to\infty}\frac{D(\log(n))^{2}}{(n-C\log(n))^{2}}\\ &=0\end{split}$ (19)

where $D$ is a constant. The second inequality is a variant of Chebyshev's inequality. ∎

Remark. Corollary 1, to some extent, reflects the result of Proposition 3, since

$\lim_{n\to\infty}\frac{\log(n/\delta)}{n}=0$ (20)

even for arbitrarily small $\delta>0$. With $q\geq\Delta+2$ guaranteed, the conflict resolving steps for a given number of vertices $n$, denoted by $X^{(n)}$, can be regarded as a family of random variables. Theorem 1 and Theorem 2 imply the stochastic boundedness of $X^{(n)}$, as stated in the following Theorem 3.

###### Theorem 3.

Denote by $X^{(n)}$ the number of conflict resolving rounds given the number of vertices $n$. Then $X^{(n)}$ is stochastically bounded above and $X^{(n)}=O_{p}(\log n)$.

###### Proof.

We wish to show that for every $\epsilon>0$ there exists $M>0$ such that

$\Pr\left[\left|\frac{X^{(n)}}{\log n}\right|>M\right]<\epsilon\quad\text{for all sufficiently large }n$ (21)

By Chebyshev's inequality, $\forall k\geq 1$,

$\Pr[X^{(n)}-\mu_{n}\geq k\sigma_{n}]\leq\frac{1}{k^{2}}$ (22)

where $\mu_{n}=\mathds{E}(X^{(n)})$ and $\sigma_{n}=s.d.(X^{(n)})=\sqrt{Var(X^{(n)})}$. By Theorem 1 and Theorem 2, $\mathds{E}(X^{(n)})\leq C\log n$ and $Var(X^{(n)})\leq D(\log n)^{2}$ for all $n>n_{0}$ ($C$ and $D$ are constants).
Let $k=\frac{M\log n-\mu_{n}}{\sigma_{n}}$, where $M$ is a positive integer making $k\geq 1$ and $M>C$; we then have

$\Pr[X^{(n)}-\mu_{n}>M\log n-\mu_{n}]<\frac{D(\log n)^{2}}{2(M\log n-\mu_{n})^{2}}$ (23)

for all $n>n_{0}$. Taking $M=\frac{\sqrt{D}}{\sqrt{2\epsilon}}+C$, for every $\epsilon>0$ we obtain

$\Pr\left[\left|\frac{X^{(n)}}{\log n}\right|>M\right]<\epsilon$ (24)

for all $n>n_{0}$, and $X^{(n)}=O_{p}(\log n)$ by the definition of stochastic boundedness. ∎

## IV Optimal Assignment by Borda's Rule in NCG

In this section, we consider local adjustments of each player after a proper coloring has been reached. Such post-conflict-resolution dynamics, to the best of the author's knowledge, has not been explored in the literature. This topic is rather practical: for example, after the conflict is resolved in a classroom assignment, it is reasonable for each professor to develop personal preferences over the different venues after a semester's teaching and to modify his choice, while an effective proper coloring remains guaranteed. Our aim here is to find a locally optimal coloring assignment of the network (i.e. an assignment that gives maximal social welfare $\sum_{i\in G}u(i)$) by using some Markov Chain Monte Carlo (MCMC) methods. The assumptions we make are:

* • The Borda rule [6] is applied, i.e. each player gives 0 points to his bottom-preferred color, 1 point to the color that is second to last, and so on, and the payoff of a player is exactly the number of points he gives to the color he currently holds.
* • Only local information is shared between neighboring players, including the current color and whether the player stays in the game in the next round.
* • Once a player obtains his top-preferred color in the current situation (which may not be the top of his full preference list), he keeps the color in later rounds and thus leaves the game, since there is no incentive to change anymore. The network is then reduced accordingly. Figure 1 gives an example of network reduction.
* • Players choose only from the available colors that precede their current one in their respective preference lists. They choose uniformly, to avoid being trapped at the top choices and thus running into infinite loops of conflicts that are never resolved. They step forward only when the new color assignment is also proper, ensuring there is no conflict; otherwise, all remaining players restart their selection in that very round.

Note that it only makes sense to consider the Borda rule after an assignment without conflicts has been found. This is important because, under the greedy and selfish strategy, if all players had a hierarchical ranking of the colors at the initial stage, they would jump between their top-preferred colors and a proper coloring might never be reached, especially when the preferences are similar.

Figure 1: Network Reduction Example: If all vertices have the same preference list Green $\succ$ Red $\succ$ Blue $\succ$ Yellow and the current assignment is as shown, then the red vertices leave the game.

### IV-A Local Optimal Payoff Simulation

In this subsection, we are interested in simulating the unprompted adjustments across the whole network to reach a "local" optimal payoff, which, obviously, may not be the most fortunate outcome for the given initial color assignment. We represent the initial network by an adjacency matrix $A_{n\times n}$. Labelling the $q$ possible colors by $0,1,\cdots,q-1$, we denote the original proper coloring after conflict resolving by $L_{n\times 1}$.
Each player has an descending preference list $X-i$ which is a permutation of the $q$ colors, where the index number equals to the point given to the corresponding color. 1 Input : $A_{0}$, $X_{0}$, $L_{0}$. Output : $A$, $X$, $L$, $avaicolorlist$, $payoff$. 2 3 Function _RemoveVertex(_$A,x$_)_: 4 while _x < length(A) - 1_ do 5 for _$i\leftarrow 0$ to $length(A[0])-1$_ do 6 $A[x][i]=A[x+1][i];$ 7 8 for _$j\leftarrow 0$ to $i$_ do 9 $A[j][x]=A[j][x+1];$ 10 11 $x+=1;$ 12 Reduce the dimension of $A$ by one; 13 return _$A$_ 14 15Copy $A_{0}$, $X_{0}$, $L_{0}$ as $A$, $X$, $L$; 16 Initialize $count=1$, $avaicolorlist=[]$, $payoff=0$; /* Introduce the available colors of each vertex at the particular round. */ 17 18for _$i\leftarrow 0$ to $length(A)-1$_ do 19 Find the index $k$ for $X[i][k]==L[i]$, and append $X[i][:k+1]$ to $avaicolorlist$. 20 /* Remove vertices with optimal personal payoff at each round. */ 21 while _$count\neq 0$_ do 22 Initialize $count=0$; $removelist=[]$ 23 for _$i\leftarrow 0$ to $length(A)-1$_ do 24 if _$avaicolorlist[i][0]==L[i]$_ then 25 for _$j\leftarrow 0$ to $length(A)-1$_ do 26 if _A[i][j] == 1_ then 27 Update $payoff$ as 28 $payoff+=q-1-X_{0}[k].index(L[k])$ 29 Remove the color of vertex $j$ from the $avaicolorlist$ of vertex $i$. 30 else 31 $pass$ 32 33 Append $i$’s index to $removelist$ 34 35 else 36 $pass$ 37 38 Reverse the permutation in $removelist$; 39 /* Dimension Reduction. */ 40 for _$k$ in $removelist$_ do 41 Remove $k$’s information from $A$, $X$, $L$ and $avaicolorlist$. 42 43 for _$i\leftarrow 0$ to $length(A)-1$_ do 44 if _$avaicolorlist[m][0]==L[m]$_ then 45 $count=count+1$; 46 47 else 48 $pass$; 49 50 51return _$A$_ Algorithm 1 Network Reduction in NCG Due to the second and third assumptions we made, players with top preference will leave the game and this information will be known to all neighbors. Once the players currently holding the second preferred color get informed that their top-preferred color is occupied by the leaving neighbor, they will also leave the game since they have no better choices. This process continues till the reduced network is found unchanged, i.e. all remaining players wish to proceed to the next round. We developed Algorithm 1 for the network reduction purpose under this idea. Input : $A$, $L_{0}$, $avaicolorlist$, $m$. Output : $L_{1}$ 1 Function _isProper(_$A,L$_)_: 2 $isProper=1$ 3 for _$i\leftarrow 1$ to $length(A)$_ do 4 for _$j\leftarrow 1$ to $length(A)$_ do 5 if _$A[i][j]==1$_ then 6 if _$L[i]==L[j]$_ then 7 $isProper=0$ 8 $break$ 9 else 10 $pass$ 11 12 else 13 $pass$ 14 15 16 return _$isProper$_ 17 18Let $L=rep(0,m)$; 19 Initialize $L[0]=L_{0}$; 20 $C=avaicolorlist$; 21 $n=length(A);$ 22 for _$step\leftarrow 0$ to $m-1$_ do 23 $X=L[step]$ 24 Generate $V_{1}=Unif(0,1,\cdots,n-1)$. 25 Generate $V_{2}=Unif(C[V_{1}])$. 26 $X[V_{1}]=V_{2}$ 27 Denote $size=len(C[V_{1}])$ 28 Proposed: $Q=\frac{1}{n\times size}$ // $\alpha(i,j)=1$ if proper; $\alpha(i,j)=0$ otherwise. 
29 Adjusted: $P=isProper(A,X)\times Q$ 30 Generate $U=Unif(0,1)$; 31 if _$U <P$_ then 32 $L[step+1]=X$; 33 else 34 $L[step+1]=L[step]$; 35 36$L_{1}=L[m]$; 37 return _$L_{1}$_ Algorithm 2 Metropolis-Hasting Algorithm for Proper Coloring in NCG Algorithm 1 takes the original adjacency matrix $A_{0}$, the preference matrix $X_{0}$ and the initial color assignment $L_{0}$ as input, and produces the reduced adjacency matrix $A$, the reduced preference list $X$, the reduced color assignment $L$, the temporary available color lists $avaicolorlist$, and the payoffs gained by quitters $payoff$ as output. Leaving players (vertices) have their indices stored in $removelist$ in each round after being identified and finally get eliminated, in a "first in, last out" manner for the ease of dimension reduction. Details are given in the pseudocode and the complexity of Algorithm 1, with respect to $n$, is $O(n^{2})$. After a reduced network is obtained, the players staying in the game start their uniform selection among their respective available color sets to reach a new proper coloring. It is obvious that the social welfare is never decreased when a new color assignment is obtained since each player receives better payoff. In this fashion, all possible proper colorings consisting of respective available colors have equal probability to be accessed to thus follows a uniform distrbution. This can be regarded as a target distribution of an Metropolis-Hasting Algorithm [18]. The detailed procedure of reaching a better-payoff proper coloring is given in Algorithm 2. It takes the adjacency matrix $A$, an initial color assignment $L_{0}$, $avaicolorlist$, and the truncation steps $m$ as input, while giving a new proper coloring assignment $L_{1}$ eventually. In the proposal distribution, we consider color assignments with merely one element different from the previous one, say $i$. The new color of the particular vertex $i$ must be from the previously-defined $avaicolorlist[i]$ thus ensuring the increase in the total welfare. After sufficient number of truncated steps, the outcomes given by the algorithm should commit a uniform distribution on the target distribution, thus could be the new "equilibrium". Algorithm 1 and 2 are run alternately to constantly reduce the network size and achieve proper colorings with higher social welfare. The whole procedure continues until the network $A$ given by Algorithm 1 has zero size, i.e. all players quit the game. The final welfare is the sum of payoffs gained in each round by the quitters. Since an upper bound of $O(n^{2})$ for Algorithm 2 within large data was provided in [1] and the iterations to reduce network is proportional to $n$, the total complexity of the local payoff reaching procedure is $O(n^{3})$. ### IV-B Global Optimal Payoff by Simulated Annealing Based on the context of Borda rule, the payoff (or welfare) function $h\colon\mathds{N}^{n}\to\mathds{N}$ is $h(L)=(q-1)n-\sum_{k=0}^{n-1}X[k].index(L[k])$ where $L$ is a color assignment and $X$ is the preference matrix. Input : $A_{0}$, $X_{0}$, $L_{0}$, $q$, $m$ Output : $L$, $welfare$ 1 2 Function _welfareh(_$q$ , $X$, $L$_)_: 3 $n=len(L)$; 4 $idx=0$; 5 for _$k\leftarrow 0$ to $n-1$_ do 6 $idx=idx+X[k].index(L[k])$; 7 $welfare=(q-1)n-idx$; 8 return _$welfare$_ 9 10Let $L=[0]*(m+1)$; 11 Initialize $L[0]=L_{0}$, $n=len(A_{0})$, $C=[]$; 12 for _$i\leftarrow 0$ to $n-1$_ do 13 Find $k$ S.T. 
$X_{0}[i][k]==L_{0}[i]$; 14 Append $X_{0}[i][:k+1]$ to $C$; 15 16 Initialize $welfare=[welfareh(q,X_{0},L[0])]$; 17 for _$t\leftarrow 0$ to $m-1$_ do 18 Determine the temperature $\lambda_{t}$; 19 $x=L[t].copy()$; 20 Generate $V_{1}=Unif(0,1,\cdots,n-1)$; 21 Generate $V_{2}=Unif(C[V_{1}])$; 22 Denote $size=len(C[V_{1}])$; 23 $x[V_{1}]=V_{2}$; 24 $h_{1}=welfareh(q,X_{0},L[t])$; 25 $h_{2}=welfareh(q,X_{0},x)$; 26 27 Let $\alpha=isProper(A_{0},x)\times\frac{e^{\lambda_{t}h_{2}}}{e^{\lambda_{t}h_{1}}}$; 28 then $r=min(1,\alpha)$; 29 30 Proposed: $Q=1/(n\times size)$; 31 Adjusted: $P=Q\times r$; 32 33 Generate $U=Unif(0,1)$; 34 if _$U<P$_ then 35 $L[t+1]=x$; 36 else 37 $L[t+1]=L[t]$; 38 39 40 Append $welfareh(q,X_{0},L[t])$ to $welfare$; return _$L$, $welfare$_ Algorithm 3 Simulated Annealing for Payoff Maximization in NCG

Given an initial conflict-resolved coloring assignment, it is natural to consider the adjustments among players that lead to the global maximum social welfare (i.e. the total sum of personal payoffs). Notice that, by "global", we do not mean finding the most fortunate situation from the very beginning; we still assume that a proper coloring has already been reached using the binary-payoff strategies, otherwise the game could be endless, as stated in the last subsection. Since we focus on outcomes with high social welfare, the simulated annealing method can be employed to conduct importance sampling from the high-welfare situations. Algorithm 3 depicts the detailed procedure of finding the best attainable welfare and the corresponding color assignment. The input variables include the original adjacency matrix $A_{0}$, the preference matrix $X_{0}$, the initial color assignment $L_{0}$, the number of colors $q$, and the truncation steps $m$; the output is the optimal welfare $welfare$ as well as the list of corresponding color assignments $L$. A simple iterative algorithm was given in [15] to compute the optimal initial temperature w.r.t. the required acceptance probability. In terms of the "temperature scheme" (i.e. the evolution of $\lambda_{t}$), multiple schemes can be experimented with, for example the linear cooling schedule, the geometric cooling schedule, etc., according to a large body of literature [2]. The key idea is to raise $\lambda_{t}$ at an appropriate pace so that the chain can still pass through "downhill" states on its way to the global maximum. The complexity of Algorithm 3 is $O(n^{2})$, owing to the proper-coloring identification steps. One may run the simulated annealing algorithm multiple times to validate the highest produced welfare. The advantage of Algorithm 3 is that it enjoys lower time complexity and thus reaches the "highest point" more rapidly, compared to the local optimal payoff reaching process described in the last subsection. However, unlike the previous algorithms, where one can witness the unprompted self-organizing actions made by the players, Algorithm 3 does not give the detailed behaviors of the players from the initial color assignment to the one with optimal payoff.

### IV-C Simulation Experiments

This subsection provides some experiments simulating the post-resolution dynamics in the NCG, using the previous algorithms and techniques (the Python code for this part is available on GitHub at https://github.com/CHENZeyi1101/2122URECA.git).
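As a rough sketch of how such an experimental instance can be set up (the random seed, the sequential greedy construction of the initial proper coloring, and all variable names here are illustrative assumptions rather than the exact code used):

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)          # assumed seed for reproducibility
n, p = 20, 0.3
G = nx.gnp_random_graph(n, p, seed=0)   # Erdos-Renyi random network
A0 = nx.to_numpy_array(G, dtype=int)    # adjacency matrix A_0
Delta = max(dict(G.degree()).values())
q = Delta + 2                           # ensures q >= Delta + 2

# Each player's Borda preference list: a random permutation of the q colors.
X0 = np.array([rng.permutation(q) for _ in range(n)])

# One simple way to get an initial proper coloring L_0: color vertices
# sequentially, picking uniformly among colors unused by already-colored
# neighbors (always possible since q > Delta).
L0 = np.full(n, -1)
for v in G.nodes():
    used = {L0[u] for u in G.neighbors(v) if L0[u] >= 0}
    L0[v] = rng.choice([c for c in range(q) if c not in used])
```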
Using the networkx package in Python, we constructed a network $A_{0}$ with a $20\times 20$ adjacency matrix, where each pair of vertices is joined by an edge with probability 0.3 (Figure 2). The maximum degree was found to be 11, so we let the color set have size 13 to satisfy $q\geq\Delta+2$ (colors represented by $0,1,\cdots,12$). $X_{0}$ is a $20\times 13$ matrix consisting of the preference lists of all players, where the preferences were generated as random permutations of the colors. The initial proper color assignment $L_{0}$ was generated by uniformly selecting available colors for each vertex.

Figure 2: The network $A_{0}$

We employed Algorithm 1 and Algorithm 2 to alternately reduce the network and update the coloring assignment, until all players leave the game (no vertices left in the network). Figure 3 illustrates a possible process of reaching the best local payoff, which was found to be 209. Notice that there were rounds in which the payoff remained unchanged, which resulted from the absence of quitters after a specific network reduction.

Figure 3: (Local) Maximum Payoff Reaching Process

Figure 4: Expectation of Maximum Payoff

Figure 5: Simulated Annealing for Global Optimal Payoff

It is theoretically difficult to calculate the expected optimal payoff of the system; instead we may apply an MCMC method to estimate this expectation. We ran the maximum payoff reaching process $k$ times for large $k$ ($k=1000$ here), and the results are given in Figure 4. The average of the obtained values was then regarded as an unbiased and consistent estimator of the expected optimal payoff. In the case represented by Figure 4, the expected optimal payoff was 208.905 ($\approx 209$), and the maximum result over all iterations was 216.

TABLE I: Summary of results for different temperature schemes

temperature scheme | maximum payoff | reaching time
---|---|---
$\lambda_{t}^{(1)}=\log(1+t)$ | 217 | 94156
$\lambda_{t}^{(2)}=t$ | 216 | 179632
$\lambda_{t}^{(3)}=t^{2}$ | 214 | 183118

Next, we experimented with the simulated annealing algorithm to find the global optimal payoff. Three different temperature schemes were compared: $\lambda_{t}^{(1)}=\log(1+t),\lambda_{t}^{(2)}=t,\lambda_{t}^{(3)}=t^{2}$. Figure 5 shows the payoff reaching dynamics, where the number of iterations was $2\times 10^{5}$ in each case. A summary of the maximum payoff obtained and the corresponding reaching time is given in Table I. Based on the results, we inferred that the global maximum payoff after post-resolution adjustments could be around 217 in this example, which is reasonably consistent with the eventual optimal payoff in Figure 4. We ran and evaluated a few more cases to derive more general conclusions. Among the three candidate temperature parameters, $\lambda_{t}^{(1)}$ and $\lambda_{t}^{(2)}$ have relatively better performance in both the value reached and the efficiency, while $\lambda_{t}^{(3)}$ always led to difficulty in overcoming local optima. In spite of that, all parameters gave an optimal value between 215 and 220 when the process was iterated a sufficiently large number of times (e.g. $m>1\times 10^{6}$).

## V Conclusion

In this paper, we discussed two topics in the context of the self-organizing Network Coloring Game (NCG) from a Markov chain perspective. Inspired by properties of the absorbing Markov chain (AMC), we proved upper bounds on the expectation and variance of the number of conflict-resolving rounds, from which we concluded a stochastic upper bound.
We also stepped forward to consider after-game adjustments potentially made by players to optimize both their personal payoffs and the social welfare, which can be sampled by the proposed algorithms. Both discussions demonstrated the huge potential of Markov Chain tools in the related social network or graph topics, on which more extensions and explorations could be expected in future researches. ## VI Acknowledgement This research was under the Undergraduate Research Experience on Campus (URECA) program of Nanyang Technological University (NTU). The author would express my gratitude to my URECA supervisor, Assoc Prof Wu Guohua, for the flexibility and patience during the research project. He would also want to thank his girlfriend Ms Sun Chang for the encouragements, as well as his friend Mr Hu Kairui for his advice on algorithm complexity analysis in experiments. ## References * [1] A. Belloni and V. Chernozhukov. On the computational complexity of mcmc-based estimators in large samples. The Annals of Statistics, 37(4):2011–2055, 2004. * [2] W. Ben-Ameur. Computing the initial temperature of simulated annealing. Computational Optimization and Applications, 29(3):369–385, 2004\. * [3] K. Chaudhuri, C. Fang, and M. S. Jamall. A network coloring game. In International Workshop on Internet and Network Economics, pages 522–530, Springer, Berlin, Heidelberg, 2008. * [4] B. L. Chen and C. H. Yen. Equitable $\delta$-coloring of graphs. Discrete Mathematics, 312(9):1512–1517, 2012. * [5] S. Chen, M. Delcourt, A. Moitra, G. Perarnau, and L. Postle. Improved bounds for randomly sampling colorings via linear programming. In Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 2216–2234, 2019. * [6] P. Favardin, D. Lepelley, and J. Serais. Borda rule, copeland method and strategic manipulation. Review of Economic Design, 7(2):213–228, 2002. * [7] N. Fryganiotis, S. Papavassiliou, and C. Pelekis. A note on the network coloring game: A randomized distributed $(\Delta+1)$-coloring algorithm, 2021. * [8] T. P. Hayes, J. C. Vera, and E. Vigoda. Randomly coloring planar graphs with fewer colors than the maximum degree. Random Structures & Algorithms, 47(4):731–759, 2015. * [9] T. P. Hayes and E. Vigoda. Coupling with the stationary distribution and improved sampling for colorings and independent sets. The Annals of Applied Probability, 16(3):1297–1318, 2006. * [10] D. B. Hitchcock. A history of the metropolis–hastings algorithm. The American Statistician, 57(4):254–257, 2003. * [11] M Jerrum. A very simple algorithm for estimating the number of k-colorings of a low-degree graph. Random Structures & Algorithms, 7(2):157–165, 1995. * [12] A. Kassir. Absorbing Markov chains with random transition matrices and applications. PhD thesis, University of California, Irvine, 2018. * [13] M. Kearns, S. Suri, and N. Montfort. An experimental study of the coloring problem on human subject networks. Science, 313(5788):824–827, 2006. * [14] C. Pelekis and M. Schauer. Network coloring and colored coin games. Search Theory, pages 59–73, 2013. * [15] A. K. Peprah, S. K. Appiah, and S. K. Amponsah. An optimal cooling schedule using a simulated annealing based approach. Applied Mathematics, 8(8):1195, 2017. * [16] R. A. Rutenbar. Simulated annealing algorithms: An overview. IEEE Circuits and Devices magazine, 5(1):19–26, 1989. * [17] E. Vigoda. Improved bounds for sampling colorings. Journal of Mathematical Physics, 41(3):1555–1569, 2000. * [18] J. Zhang. 
# Dynamics of magnetic flux tubes in accretion disks of Herbig Ae/Be stars

Sergey A. Khaibrakhmanov, Alexander E. Dudorov

Chelyabinsk State University, 129 Br Kashirinykh str, Chelyabinsk 454001

###### Abstract

The dynamics of magnetic flux tubes (MFTs) in the accretion disk of a typical Herbig Ae/Be star with a fossil large-scale magnetic field is modeled taking into account the buoyant and drag forces, radiative heat exchange with the surrounding gas, and the magnetic field of the disk. The structure of the disk is simulated using our magnetohydrodynamic (MHD) model, taking into account the heating of the surface layers of the disk by the stellar radiation. The simulations show that MFTs periodically rise from the innermost region of the disk with speeds up to $10-12$ km s-1. MFTs experience decaying magnetic oscillations under the action of the external magnetic field near the disk's surface. The oscillation period increases with distance from the star and with the initial plasma beta of the MFT, ranging from several hours at $r=0.012$ au up to several months at $r=1$ au. The oscillations are characterized by pulsations of the MFT's characteristics, including its temperature. We argue that the oscillations can produce the observed IR-variability of Herbig Ae/Be stars, which would be more intense than in the case of T Tauri stars, since the disks of Herbig Ae/Be stars are hotter and denser and have a stronger magnetic field.

## 1 Introduction

Accretion disks are commonly observed around young stars. Analysis of contemporary observational data shows that accretion disks of young stars (ADYSs) evolve into protoplanetary disks (PPDs), in which conditions are favourable for planet formation.

Polarization mapping of accretion disks and PPDs shows that they have a large-scale magnetic field with complex geometry (Li et al., 2016). Outflows and jets, which are ubiquitous in ADYSs, are indirect signs of the large-scale magnetic field in the system (see review by Frank et al., 2014). Robust measurements of the magnetic field strength in ADYSs are still not possible. There are indications that the magnetic field can be dynamically strong near the inner edge of the disk (Donati et al., 2005). Analysis of the observational constraints on the magnetic field strength from measurements of the remnant magnetization of meteorites (Levi, 1978) and Zeeman splitting of the CN lines (Vlemmings et al., 2019) shows that the magnetic field strength decreases with distance from the star. The observational data confirm predictions of the theory of fossil magnetic field, according to which the large-scale magnetic field of the accretion disks of young stars is the fossil field of the parent protostellar clouds (Dudorov, 1995; Dudorov and Khaibrakhmanov, 2015).

MHD modeling of ADYSs has shown that a strong toroidal magnetic field is generated in the innermost region of the ADYS, where thermal ionization operates and the magnetic field is frozen into the gas (Dudorov and Khaibrakhmanov, 2014). Runaway generation of the magnetic field in this region can be balanced by magnetic buoyancy, leading to the formation of magnetic flux tubes (MFTs) that float from the disk and carry away the excess of its magnetic flux (Khaibrakhmanov and Dudorov, 2017). MFTs form through the magnetic buoyancy instability (also known as the Parker instability, Parker, 1979) in the stratified disk with a strong planar magnetic field.
The formation of MFTs has been found both in MHD simulations of the solar interior (Vasil and Brummell, 2008) and in simulations of accretion disks (Takasao et al., 2018). The Parker instability and rising MFTs can have different manifestations in accretion disks (see review in Dudorov et al., 2019). Khaibrakhmanov et al. (2018) and Dudorov et al. (2019) have shown that rising MFTs oscillate under certain conditions, and the oscillations can be the source of the infrared (IR) variability of accretion disks of T Tauri stars (TTSs). In this work, we further develop the approach of Dudorov and Khaibrakhmanov and model the dynamics of the MFT in the accretion disk of a typical Herbig Ae/Be star (HAeBeS).

The structure of the paper is as follows. In section 2, we outline the problem statement and describe our model of the dynamics of the MFT as well as the accretion disk model. In section 3.1, we present the results of the simulations of the accretion disk structure. The structure of the disk of the HAeBeS is compared with that of the TTS. Section 3.2 is devoted to the investigation of the dynamics of the MFT in the absence of an external magnetic field. The effect of the external magnetic field, leading to magnetic oscillations of the MFT, is investigated in section 3.3. We summarize and discuss our results in section 4.

## 2 Model

### 2.1 Problem statement

We consider a toroidal MFT formed inside the accretion disk in the region of effective generation of the magnetic field. The dynamics of a unit-length MFT is modeled in the slender flux tube approximation. Cylindrical coordinates are adopted, $(r,\,0,\,z)$, where $r$ is the radial distance from the center of the star and $z$ is the height above the midplane of the disk. The MFT is characterized by the radius vector $\mathbf{r}=(r,\,0,\,z)$, velocity vector $\mathbf{v}=(0,\,0,\,v)$, cross-section radius $a$, density $\rho$, temperature $T$, and internal magnetic field strength $B$. The disk has density $\rho_{\rm e}$, temperature $T_{\rm e}$, pressure $P_{\rm e}$ and magnetic field strength $B_{\rm e}$. The MFT starts its motion at some radial distance $r$ from the star and a height $z_{0}$ above the disk's midplane, $z=0$. The MFT moves in the $z$-direction under the action of the buoyant and drag forces.

### 2.2 Main equations

We follow Dudorov et al.
(2019) and use the system of equations describing the MFT dynamics taking into account the buoyant force, turbulent and aerodynamic drag, radiative heat exchange with the external gas, magnetic pressure of the disk, $\displaystyle\frac{{\rm d}{\bf v}}{{\rm d}t}$ $\displaystyle=$ $\displaystyle\left(1-\frac{\rho_{{\rm e}}}{\rho}\right)\mathbf{g}+\mathbf{f}_{{\rm d}},$ (1) $\displaystyle\frac{{\rm d}{\bf r}}{{\rm d}t}$ $\displaystyle=$ $\displaystyle\mathbf{v},$ (2) $\displaystyle M_{{\rm l}}$ $\displaystyle=$ $\displaystyle\rho\pi a^{2},$ (3) $\displaystyle\Phi$ $\displaystyle=$ $\displaystyle\pi a^{2}B,$ (4) $\displaystyle{\rm d}Q$ $\displaystyle=$ $\displaystyle{\rm d}U+P_{{\rm e}}{\rm d}V,$ (5) $\displaystyle P+\frac{B^{2}}{8\pi}$ $\displaystyle=$ $\displaystyle P_{{\rm e}},$ (6) $\displaystyle\frac{{\rm d}P_{{\rm e}}}{{\rm d}z}$ $\displaystyle=$ $\displaystyle-\rho_{{\rm e}}g_{z},$ (7) $\displaystyle U$ $\displaystyle=$ $\displaystyle\frac{P_{{\rm e}}}{\rho(\gamma-1)}+\frac{B^{2}}{8\pi\rho},$ (8) where $\mathbf{f}_{{\rm d}}$ is the drag force, $M_{{\rm l}}=$ const is the mass per unit length of the MFT, $\Phi=$ const is the magnetic flux of the MFT, $Q$ is the quantity of heat per unit mass of the MFT, $U$ is the energy of the MFT per unit mass, $g_{z}$ is the vertical component of stellar gravity, $\gamma$ is the adiabatic index. Equations of motion (1, 2) determine dependences ${\bf v}(t)$ and ${\bf r}(t)$. Differential equations describing evolution of the MFT's density and temperature can be deduced by taking time derivative of the energy equation (5) and pressure balance (6) and using the equation of the hydrostatic equilibrium of the disk (7). We define the rate of heat exchange as $h_{\rm c}=dQ/dt$ and estimate it in the diffusion approximation, $h_{{\rm c}}\simeq-\frac{4}{3\kappa_{{\rm R}}\rho^{2}}\frac{\sigma_{{\rm R}}T^{4}-\sigma_{{\rm R}}T_{{\rm e}}^{4}}{a^{2}}.$ (9) where $\kappa_{\rm R}$ is the Rosseland mean opacity adopted from Semenov et al. (2003), $\sigma_{\rm R}$ is the Stefan-Boltzmann constant. We introduce non-dimensional variables $\displaystyle u=v/v_{{\rm a}},$ $\displaystyle\tilde{z}=z/H,$ $\displaystyle\tilde{T}=T/T_{{\rm m}},$ $\displaystyle\tilde{\rho}=\rho/\rho_{{\rm m}},$ $\displaystyle\tilde{t}=t/t_{{\rm A}},$ $\displaystyle\tilde{h}_{{\rm c}}=h_{{\rm c}}/h_{{\rm m}},$ $\displaystyle\tilde{a}=a/H,$ $\displaystyle\tilde{B}=B/B_{{\rm e}},$ $\displaystyle\tilde{g}=g_{z}/f_{{\rm a}},$ $\displaystyle\tilde{f}_{{\rm d}}=f_{{\rm d}}/f_{{\rm a}},$ $\displaystyle\tilde{P}=P/(\rho_{m}v_{{\rm a}}^{2}),$ (10) where $v_{{\rm a}}$ is the Alfvén speed, $t_{{\rm A}}=H/v_{{\rm a}}$ is the Alfvén crossing time, $h_{{\rm m}}={\varepsilon}_{{\rm m}}/t_{{\rm A}}$, ${\varepsilon}_{{\rm m}}$ is the energy density of magnetic field, $f_{{\rm a}}=v_{{\rm a}}/t_{{\rm A}}$. All scales are defined at the midplane of the disk. 
Then the final equations of the MFT dynamics can be written as (tilde signs are omitted) $\displaystyle\frac{{\rm d}u}{{\rm d}t}$ $\displaystyle=$ $\displaystyle\left(1-\frac{\rho_{{\rm e}}}{\rho}\right)g+f_{{\rm d}},$ (11) $\displaystyle\frac{{\rm d}z}{{\rm d}t}$ $\displaystyle=$ $\displaystyle u,$ (12) $\displaystyle\frac{{\rm d}T}{{\rm d}t}$ $\displaystyle=$ $\displaystyle\frac{2\left(\gamma-1\right)}{\beta}\times$ (13) $\displaystyle\frac{h_{{\rm c}}\left(\dfrac{\beta}{2}T+C_{{\rm m}}\rho\right)+\rho_{e}gu\left(\dfrac{C_{{\rm m}}}{2}-\dfrac{P_{{\rm e}}}{\rho}\right)}{\dfrac{3-\gamma}{2}C_{{\rm m}}\rho+\dfrac{\beta}{2}T+\left(\gamma-1\right)\dfrac{P_{{\rm e}}}{\rho}},$ $\displaystyle\frac{{\rm d}\rho}{{\rm d}t}$ $\displaystyle=$ $\displaystyle-\frac{\rho_{{\rm e}}gu+(\gamma-1)h_{{\rm c}}\rho}{\dfrac{3-\gamma}{2}C_{{\rm m}}\rho+\dfrac{\beta}{2}T+\left(\gamma-1\right)\dfrac{P_{{\rm e}}}{\rho}},$ (14) $\displaystyle a$ $\displaystyle=$ $\displaystyle C_{{\rm a}}\rho^{-1/2},$ (15) $\displaystyle B$ $\displaystyle=$ $\displaystyle C_{{\rm B}}\rho,$ (16) where $\beta$ is the midplane plasma beta, $C_{\rm m}=B_{0}^{2}/4\pi\rho_{0}^{2}$, $C_{\rm a}=\tilde{a}_{0}\tilde{\rho}_{0}^{1/2}$, $C_{\rm B}=\tilde{B}_{0}/\tilde{\rho}_{0}$. Ordinary differential equations (11–14) together with the algebraic equations (15, 16) form closed system of equations describing the dynamics of the MFT. Equations (11–14) are supplemented by the initial conditions $u(t=0)=0$, $z(t=0)=z_{0}$, $T(t=0)=T_{\rm e}$, $\rho(T=0)=\rho_{0}$, $a(t=0)=a_{0}$, $B(t=0)=B_{0}$. Values $z_{0}$ and $a_{0}$ are the free parameters of the model, while the initial density $\rho_{0}$ is calculated from the pressure balance (6) at $t=0$. Initial magnetic field strength $B_{0}$ is specified through the initial plasma beta inside the MFT, $\beta_{0}$, which is also a free parameter. ### 2.3 Model of the disk The distributions of the density, temperature and magnetic field in the disk are calculated using our MHD model of the AD (Dudorov and Khaibrakhmanov, 2014; Khaibrakhmanov et al., 2017). The disk is considered to be geometrically thin and optically thick with respect to its own radiation. The mass of the disk is small compared to the stellar mass $M$. Inner radius of the disk is equal to the radius of stellar magnetosphere. Outer radius of the disk is determined as the contact boundary with the external medium. The model is the generalization of Shakura and Sunyaev (1973) model. In addition to the solution of Shakura and Sunyaev (1973) equations for the low- temperature opacities, we solve the induction equation for magnetic field taking into account Ohmic dissipation, magnetic ambipolar diffusion, magnetic buoyancy and the Hall effect. The ionization fraction is calculated following Dudorov and Sazonov (1987) taking into account thermal ionization, shock ionization by cosmic rays, X-rays and radionuclides, as well as radiative recombinations and recombinations onto dust grains. 
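To make the integration procedure concrete, the following is a minimal sketch (not the authors' code) of how the non-dimensional system (11)–(16) can be advanced in time with an adaptive Runge–Kutta solver. The external profiles $\rho_{\rm e}(z)$, $P_{\rm e}(z)$, $g_{z}(z)$, the drag law and the heat-exchange rate are placeholders that must be replaced by the disk model of section 2.3, and the constants are illustrative rather than fiducial values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Non-dimensional constants (illustrative values, not the fiducial ones):
gamma, beta = 7.0 / 5.0, 1.0    # adiabatic index and midplane plasma beta
C_m, C_a, C_B = 1.0, 0.1, 1.0   # C_m = B0^2/(4 pi rho0^2); C_a, C_B as in the text

# --- Placeholder external profiles: replace with the disk model of section 2.3 ---
def rho_e(z):            # external (disk) density; Gaussian-like stratification
    return np.exp(-z ** 2)

def P_e(z):              # external pressure (isothermal placeholder)
    return rho_e(z) / gamma

def g_z(z):              # vertical gravity, pointing towards the midplane (g_z < 0 for z > 0)
    return -z

def f_drag(u, a, rho):   # drag per unit mass (placeholder aerodynamic form)
    return -0.5 * rho_e(0.0) * u * abs(u) / (a * rho)

def h_c(T, T_ext, rho, a):   # radiative heat exchange, cf. Eq. (9), placeholder opacity
    return -4.0 / (3.0 * rho ** 2) * (T ** 4 - T_ext ** 4) / a ** 2

def rhs(t, y):
    """Right-hand side of Eqs. (11)-(14); a and B follow from Eqs. (15)-(16)."""
    u, z, T, rho = y
    a = C_a * rho ** -0.5                          # Eq. (15)
    re, pe, g = rho_e(z), P_e(z), g_z(z)
    hc = h_c(T, pe / re, rho, a)                   # external temperature ~ pe/re here
    # common denominator of Eqs. (13) and (14)
    D = 0.5 * (3.0 - gamma) * C_m * rho + 0.5 * beta * T + (gamma - 1.0) * pe / rho
    du = (1.0 - re / rho) * g + f_drag(u, a, rho)  # Eq. (11)
    dT = (2.0 * (gamma - 1.0) / beta) * (hc * (0.5 * beta * T + C_m * rho)
          + re * g * u * (0.5 * C_m - pe / rho)) / D   # Eq. (13)
    drho = -(re * g * u + (gamma - 1.0) * hc * rho) / D  # Eq. (14)
    return [du, u, dT, drho]                       # dz/dt = u, Eq. (12)

# u = 0 and z0 = 0.5 H as in the runs; T and rho here are placeholders
# (the paper takes T = T_e and derives rho from the pressure balance (6) at t = 0).
y0 = [0.0, 0.5, 1.0, 0.7]
sol = solve_ivp(rhs, (0.0, 20.0), y0, method="RK45", rtol=1e-6, atol=1e-9)
print("final height:", sol.y[1, -1], " final field (Eq. 16):", C_B * sol.y[3, -1])
```

In practice an event function of `solve_ivp` can be used to stop the integration once the cross-section radius grows comparable to the disk half-thickness, which is the dissipation criterion discussed in section 3.2.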
Vertical structure of the disk is determined from the solution of the hydrostatic equilibrium equation (7) for polytropic dependence of the gas pressure on density, $\displaystyle\rho_{{\rm e}}(z)$ $\displaystyle=$ $\displaystyle\rho_{{\rm m}}\left[1-\left(\frac{z}{H_{\rm k}}\right)^{2}\right]^{\frac{1}{k-1}},$ (17) $\displaystyle T_{{\rm e}}(z)$ $\displaystyle=$ $\displaystyle T_{{\rm m}}\left[1-\left(\frac{z}{H_{\rm k}}\right)^{2}\right],$ (18) where $H_{\rm k}=\sqrt{\frac{2k}{k-1}}H,$ (19) $\rho_{{\rm m}}=\rho_{{\rm e}}(z=0)$, $T_{{\rm m}}=T_{{\rm e}}(z=0)$ are the density and temperature in the midplane of the disk, $k=1+1/n$, $n$ is the polytropic index, scale height $H=v_{{\rm s}}/\Omega_{{\rm k}}$, $\Omega_{{\rm k}}=\sqrt{\frac{GM_{\star}}{r^{3}}}$ (20) is the Keplerian angular velocity. We consider that there is an optically thin hydrostatic corona above the optically thick disk. The corona's temperature is determined by heating due to absorption of stellar radiation, $T_{\rm c}=185\left(\frac{f}{0.05}\frac{L}{1\,L_{\odot}}\right)^{1/4}\left(\frac{r}{1\,\rm{au}}\right)^{-1/2}\,\rm{K},$ (21) where $f$ is the fraction of the stellar radiation flux intercepted by the disk, $L$ is the stellar luminosity (see Akimkin et al., 2012). Transition from the disk to corona is characterized by an exponential change in temperature over the local scale height $H$ in accordance with the results of detailed modeling of the vertical structure of the accretion disks (see Vorobyov and Pavlyuchenkov, 2017). The model of the disk has two main parameters: turbulence parameter $\alpha$ and mass accretion rate $\dot{M}$. ### 2.4 Model parameters and solution method Ordinary differential equations (11–14) of the model are solved with the Runge–Kutta scheme of the 4th order with step size control. Initially the MFT is in thermal equilibrium with external gas at $z_{0}=0.5\,H$. We performed a set of simulation runs for various initial radii of the MFT $a_{0}$, plasma beta $\beta_{0}$ and radial distances from the star $r$. Adopted ranges of the initial parameters are listed in Table 1. Adopted fiducial value of $r$ corresponds to the dust sublimation zone, where gas temperature is of $1500$ K. Table 1: Model parameters: radial distance from the star, initial cross-section radius and plasma beta of the MFT. quantity | range of values | fiducial value ---|---|--- (1) | (2) | (3) $r$ | $0.012-1$ au | $0.5$ au $a_{0}$ | $0.01-0.4\leavevmode\nobreak\ H$ | $0.1\,H$ $\beta_{0}$ | $0.01-10$ | $1$ We consider the accretion disk of Herbig Ae/Be star with mass $2\,M_{\odot}$, radius $1.67\,R_{\odot}$, luminosity $11.2\,L_{\odot}$, surface magnetic field strength $1$ kG, accretion rate $\dot{M}=10^{-7}\,M_{\odot}/\rm{yr}$, and turbulence parameter $\alpha=0.01$. Adopted parameters correspond to the star MWC 480 (Donehew et al., 2011; Hubrig et al., 2011). Ionization and magnetic diffusivity parameters are adopted from the fiducial run in Khaibrakhmanov et al. (2017). ## 3 Results Fig. 1: Radial profiles of the midplane temperature (a), surface density (b), midplane ionization fraction (c) and midplane magnetic field strength (d) in the accretion disks of typical T Tauri star (yellow lines) and Herbig Ae/Be star (blue lines). Empty circle markers show the points at which the modeling of the dynamics of the MFT was performed. . 
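For reference, the vertical profiles (17)–(18) and the corona temperature (21) used in the model are simple enough to evaluate directly; a short helper is sketched below. The polytropic index value and the smooth disk-to-corona transition are not specified in this sketch and should be treated as assumptions.

```python
import numpy as np

def vertical_structure(z, rho_m, T_m, H, n=1.5):
    """Polytropic vertical profiles of Eqs. (17)-(18); n is the polytropic index (assumed)."""
    k = 1.0 + 1.0 / n
    H_k = np.sqrt(2.0 * k / (k - 1.0)) * H           # Eq. (19)
    x = np.clip(1.0 - (z / H_k) ** 2, 0.0, None)     # zero outside the polytrope
    rho = rho_m * x ** (1.0 / (k - 1.0))
    T = T_m * x
    return rho, T

def corona_temperature(r_au, L_solar, f=0.05):
    """Corona temperature of Eq. (21), in Kelvin."""
    return 185.0 * (f / 0.05 * L_solar) ** 0.25 * r_au ** -0.5

# Corona above r = 0.5 au for the adopted luminosity L = 11.2 L_sun
print(corona_temperature(0.5, 11.2))   # about 475-480 K, cf. section 3.2
```

With the adopted $L=11.2\,L_{\odot}$ and $f=0.05$, equation (21) gives a corona temperature of roughly $475$ K at $r=0.5$ au, consistent with the value quoted in section 3.2.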
### 3.1 Radial structure of the disk First of all, let us consider the structure of the accretion disk of HAeBeS in comparison with the structure of the disk of typical TTS according to our simulations. Detailed discussion of the structure of TTS disks can found in our previous papers (Dudorov and Khaibrakhmanov, 2014; Khaibrakhmanov et al., 2017). In Figure 1, we plot the radial profiles of midplane temperature $T_{\rm m}$, gas surface density $\Sigma$, midplane ionization fraction $x$ and magnetic field strength $B_{z}$. Figure 1 shows that the structures of the accretion disks of HAeBeS and TTS are qualitatively similar. Temperature and surface density are decreasing functions of distance, which can be represented as piece-wise power law profiles. The local slopes of the $T_{\rm m}(r)$ and $\Sigma(r)$ profiles are determined by the parameters of opacity dependence on gas density and temperature (see analysis of the analytical solution of model equations by Dudorov and Khaibrakhmanov (2014)). The ionization fraction profiles $x(r)$ is non-monotonic and have minimum at $r_{\rm min}\approx 0.3$ au in the case of TTS and $1$ au in the case of HAeBeS. The ionization fraction is higher closer to the star, $r<r_{\rm min}$, due to thermal ionization of alkali metals and hydrogen. Growth of the $x$ further from the minimum, $r>r_{\rm min}$, is explained by decrease in gas density and corresponding increase in the intensity of ionizing radiation by external sources. Local peak in the $x(r)$ profiles at $r\approx 1$ (TTS) and $4$ au (HAeBeS) is due to evaporation of icy mantles of dust grains. Intensity of the vertical component of the magnetic field $B_{z}$ generally decreases with distance. In the region of thermal ionization, $r<r_{\rm min}$, the magnetic field is frozen into gas and $B_{z}\propto\Sigma$. In the outer region, $r>r_{\rm min}$ magnetic ambipolar diffusion reduces magnetic field strength by 1–2 orders of magnitude as compared to the frozen-in magnetic field. For example, magnetic field strength is of $0.1$ G near the ionization minimum. Comparison of the simulation results for HAeBeS and TTS shows that the accretion disk is hotter and denser in the former case at any given $r$. This is because the disk of HAeBeS has higher accretion rate, which leads to more intensive turbulent heating of the gas in the disk. As a consequence, the size of the innermost region, where runaway growth of the magnetic field is possible due to high ionization level, is more extended in the case of the HAeBeS. For adopted parameters, this region ranges from the inner boundary of the disk, $r_{\rm in}=0.012$ au, up to $r\approx r_{\rm min}=1$ au. Magnetic field strength is greater in the case of HAeBeS. ### 3.2 MFT dynamics without external magnetic field In this section we study the dynamics of the MFT in the disk of HAeBeS in absence of the magnetic field outside the MFT. In Figure 2, we plot dependences of the MFT's speed, density, radius and temperature on the $z$-coordinate at $r=0.5$ au for different initial cross- section radii, $a_{0}=0.01$, $0.1$, $0.2$, and $0.4$ $H$. Fig. 2: Dynamics of the MFTs of various initial cross-section radii $a_{0}$ in absence of the external magnetic field. Dependences of the MFT's speed (panel a), density (b), cross-section radius (c) and temperature (d) on the $z$-coordinate are shown. Vertical lines show the surface of the disk. Grey dashed lines in panels (b) and (d) delineate corresponding profiles of the disk's density and temperature. 
Initial parameters of the MFT: $r=0.5$ au, $z_{0}=0.5\,H$, and $\beta_{0}=1$.

Figure 2(a) shows that the thinner MFT, $a_{0}=0.01\,H$, is characterized by three stages of evolution. First the MFT accelerates inside the disk, then it rapidly decelerates near the surface, $z\approx 2.6\,H$, and after that it again rises with acceleration in the corona of the disk. Finally, the MFT dissipates in the corona, in the sense that its radius grows quickly and becomes comparable with the half-thickness of the disk, as Figure 2(c) shows. Hence, MFTs will form an outflowing magnetized corona of the disk, as in the case of TTS discussed by Dudorov et al. (2019). The MFTs of intermediate initial radii, $a_{0}\sim 0.1-0.2\,H$, rise with higher speed, $v\approx 1-2$ km s-1, and dissipate right after rising into the corona without proceeding to the stage of further acceleration. The thick MFT with $a_{0}=0.4\,H$ floats with the highest speeds, up to $3$ km s-1, and dissipates near the surface of the disk.

Upward motion of the MFT is caused by the buoyancy force, which depends on the difference between the internal and external densities, $\Delta\rho=\rho_{\rm e}-\rho$. Figure 2(b) shows that $\Delta\rho>0$ and therefore the buoyant force is positive in all considered cases. The MFT expands and its density decreases during its motion in order to sustain the pressure balance. Near the surface of the disk and in the corona, $\Delta\rho$ approaches zero and therefore the MFT's speed decreases. The abrupt deceleration of the MFT after passing the surface of the disk is caused by the abrupt drop of the disk's density in this region of transition from the disk to the corona.

The MFT stays in thermal equilibrium, $T\approx T_{\rm e}$, during its upward motion inside the disk, as Figure 2(d) shows. This is due to fast radiative heat exchange with the external gas. A departure from thermal equilibrium is observed only for the MFTs with $a_{0}=0.1-0.2\,H$ after their rising from the disk into the transition region, where $T_{\rm e}$ grows up to the corona's temperature of $475$ K.

In Figure 3, we plot the dependence of the MFT's speed on the $z$-coordinate at $r=0.5$ au for $z_{0}=0.5\,H$, $a_{0}=0.1\,H$, and various initial plasma beta $\beta_{0}$. Figure 3 shows that the MFTs with stronger magnetic field accelerate to greater speeds. The maximum speed of $7-8$ km s-1 is achieved by the MFT with $\beta_{0}=0.01$. The increase of the MFT's speed with decreasing $\beta_{0}$ is explained by the fact that the stronger the initial magnetic field of the MFT, the larger the initial $\Delta\rho$ and, correspondingly, the stronger the buoyant force. Closer to the star, $r<0.5$ au, the maximum speed is of $10-12$ km s-1, according to our simulations.

Fig. 3: Dependence of the MFT's speed on the $z$-coordinate for various initial plasma beta $\beta_{0}$ in runs without external magnetic field. Vertical line delineates the surface of the disk. Initial parameters: $r=0.5$ au, $z_{0}=0.5\,H$, and $a_{0}=0.1\,H$.

### 3.3 Magnetic oscillations

In this section we investigate how the magnetic pressure outside the MFT influences its dynamics. In this case, the external pressure $P_{\rm e}$ in (6) is a sum of the gas pressure and the magnetic pressure $B_{\rm e}^{2}/8\pi$. We assume that $B_{\rm e}$ is constant with $z$ and has the magnitude of $B_{z}$. In Figure 4, we present simulation results for the MFT at $r=0.5$ au with fiducial $z_{0}=0.5\,H$, $a_{0}=0.1\,H$ and various initial plasma beta, $\beta_{0}$. The dependences of the MFT's speed, density, radius and temperature on the $z$-coordinate are depicted.
The magnetic field strength is of $8$ G at the considered $r$. Figure 4 shows that the dynamics of the MFT differs from the case with zero external magnetic field. The MFT floats with acceleration up to some height $z_{\rm o}$ near the surface of the disk, and then its motion becomes oscillatory: the MFT moves vertically up and down around the point $z_{\rm o}$. According to Figure 4(a), $z_{\rm o}\approx 2.2$, $2.5$, and $2.6$ $H$ for $\beta_{0}=1$, $0.1$, and $0.01$, respectively. Hence, the smaller $\beta_{0}$, i.e. the stronger the initial magnetic field of the MFT, the higher the point $z_{\rm o}$ around which the MFT oscillates. The magnitude of the MFT's speed decreases during the oscillations as the MFT loses its kinetic energy due to the friction with the external gas.

When the MFT starts to oscillate, its expansion stops at some characteristic cross-section radius $a_{\rm o}$. In the considered case, this radius is of $0.3$ $H$, according to Figure 4(c). The radius of the MFT periodically increases and decreases with respect to $a_{\rm o}$ during the oscillations, i.e. the MFT pulsates. The magnitude of the radius variations decreases, i.e. the pulsations decay with time.

Figure 4(b) shows that the point $z_{\rm o}$ is a point of zero buoyancy, such that $\Delta\rho=\rho_{\rm e}-\rho>0$ at $z<z_{\rm o}$ and $\Delta\rho<0$ at $z>z_{\rm o}$. This effect is caused by the contribution of the magnetic pressure outside the MFT to the overall pressure balance (6). The external magnetic field $B_{\rm e}$ is constant with $z$, while the density of the disk $\rho_{\rm e}$ decreases exponentially. As a consequence, the contribution of the magnetic pressure $B_{\rm e}^{2}/8\pi$ to $P_{\rm e}$ increases in comparison with the gas pressure. At $z\geq z_{\rm o}$, the external magnetic field becomes stronger than the magnetic field of the MFT, and consequently the MFT becomes heavier than the external gas. This result is similar to that found by Dudorov et al. (2019) for the MFT in the accretion disks of TTS.

The beginning of the magnetic oscillations is characterized by a violation of the thermal balance, $T\neq T_{\rm e}$, as Figure 4(d) shows. This means that the rate of radiative heat exchange is smaller than the rate of the MFT's cooling due to adiabatic expansion. During the oscillations, the MFT's pulsations decay and radiative heat exchange ultimately equalizes the temperatures $T$ and $T_{\rm e}$.

Fig. 4: Dynamics of MFTs with various initial plasma beta $\beta_{0}$ in presence of external magnetic field. Dependences of the MFT's speed (panel a), density (b), cross-section radius (c) and temperature (d) on the $z$-coordinate are plotted. Vertical lines show the surface of the disk. Grey dashed lines in panels (b) and (d) delineate corresponding profiles of the disk density and temperature. Initial parameters: $r=0.5$ au, $z_{0}=0.5\,H$, and $a_{0}=0.1\,H$.

In Figure 5, we plot the corresponding dependences of the MFT's temperature on time. Figure 5 clearly demonstrates the periodic changes in the MFT's temperature during the magnetic oscillations. The period of the oscillations increases with $\beta_{0}$ and lies in the range from $0.5$ month for $\beta_{0}=0.01$ to $2$ months for $\beta_{0}=1$. The picture of the MFT's thermal evolution in the case $\beta_{0}=1$ can be described as simple decaying oscillations. In this case, the oscillations take place under the surface of the disk (see Figure 4(a)), and the MFT's temperature follows the polytropic temperature profile of the disk during its periodic upward and downward motion.
The MFTs with $\beta_{0}\leq 0.1$ exhibit more complex behaviour characterized by non-harmonic oscillations of the temperature. In the case $\beta_{0}=0.01$, the maximum of each temperature pulsation is characterized by a constant $T=475$ K during a time interval of $0.1-0.5$ months. Such a behaviour is explained by the fact that the MFTs with $\beta_{0}\leq 0.1$ oscillate near the surface of the disk. The vertical profile of the disk's temperature is non-monotonic in this region, with a minimum at $z_{\rm s}$, according to Figure 4(d). Therefore, the maximum $T$ in the oscillating MFT corresponds to the corona's temperature of $475$ K, while the minimum $T$ is achieved at some point below the surface of the disk.

Fig. 5: Dependence of the MFT's temperature on time for various initial plasma beta $\beta_{0}$ in presence of external magnetic field. Horizontal dashed line delineates the temperature of the disk's corona. Initial parameters: $r=0.5$ au, $z_{0}=0.5\,H$, and $a_{0}=0.1\,H$.

In order to investigate the characteristic time scales of this process, we plot the dependence of the $z$-coordinate, temperature and magnetic field strength of the MFT on time in Figure 6. The results for the MFT with fiducial parameters $z_{0}=0.5\,H$, $a_{0}=0.1\,H$, and $\beta_{0}=1$ at various radial distances from the star are shown. The considered radial distances are marked with empty circles in the $T(r)$, $\Sigma(r)$, $x(r)$, and $B(r)$ profiles in Figure 1.

Figures 6(a, b, c) show that the magnetic oscillations take place beneath the surface of the disk, typically at $z\sim 2-2.5\,H$. The oscillations are found at every radial distance in the considered range, $r=0.012-1$ au. The period of the oscillations $P_{\rm o}$ increases with $r$, such that $P_{\rm o}\approx 0.2$ d $\approx 5$ hrs at $r=0.012$ au, $2$ months at $r=0.5$ au, and $5$ months at $r=1$ au. The amplitude of the upward and downward motion decreases with time.

According to Figures 6(d, e, f), the magnetic oscillations are accompanied by corresponding periodic changes in the MFT's temperature. The MFT heats up during the periods of downward motion and cools down during its upward motion. These changes reflect the $z$-distribution of the disk's temperature $T_{\rm e}$, since the radiative heat exchange tends to keep the MFT in thermal balance with the external gas. The magnitude of the temperature fluctuations decreases with $r$. At maximum, it ranges from $\Delta T\sim 3000$ K at $r=0.012$ au to $300$ K at $r=1$ au.

The dependences $B(t)$ depicted in Figures 6(d, e, f) show that the MFT's magnetic field strength decreases during its upward motion up to the point of zero buoyancy. This decrease reflects the expansion of the MFT, $B\propto a^{-2}$, according to magnetic flux conservation. During the following magnetic oscillations, the magnetic field strength periodically increases and decreases with respect to the value $B_{\rm e}$. This picture confirms the above discussion that the magnetic oscillations arise due to the effect of the external magnetic pressure near the point of zero buoyancy, which is characterized by the equality $B\approx B_{\rm e}$.

Fig. 6: Dynamics of the MFTs at different $r$ in presence of external magnetic field. Top row: the dependence of the $z$-coordinate of the MFT on time during its motion inside the disk at various radial distances $r=0.012$, $0.15$ and $1$ au (panels from left to right). Horizontal lines show the surface of the disk.
Bottom row: corresponding dependences of temperature (left $y$-axis, black lines) and magnetic field strength (right $y$-axis, blue lines) on time. Horizontal dashed blue lines correspond to the external magnetic field $B_{\rm e}$. Initial radius and plasma beta of the MFT are $a_{0}=0.1\,H$ and $\beta_{0}=1$, respectively.

## 4 Conclusions and discussion

We numerically modeled the dynamics of MFTs in the accretion disk of a typical HAeBeS. The simulations were carried out in the frame of the slender flux tube approximation using the model developed by Dudorov et al. (2019). This model allows us to investigate the motion of the MFT in the direction perpendicular to the disk's plane, taking into account the buoyant and drag forces, the radiative heat exchange of the MFT with the external gas, and the magnetic pressure of the disk.

The structure and characteristics of the accretion disk were calculated using our MHD model of accretion disks (see Khaibrakhmanov et al., 2017), which is based on the model of Shakura and Sunyaev (1973). The vertical structure of the disk at each radial distance $r$ is calculated from the solution of the hydrostatic equilibrium equation for the case of a polytropic gas. It is considered that turbulent friction is the main heating mechanism inside the disk. There is an optically thin corona above the optically thick disk. The temperature of the corona is determined by the heating of the gas by the incident stellar radiation. We adopted that the fraction of the radiation flux intercepted by the disk's surface is constant, $f=0.05$, at every $r$. The transition from the disk to its corona is treated as a hydrostatic region with exponential growth of the gas temperature over the local scale height of the disk.

We adopted the parameters of the star and its accretion disk corresponding to the star MWC 480. This is a typical `isolated' HAeBeS, which was investigated in detail in different spectral ranges (see Sitko et al, 2008; Mendigutía et al., 2013; Tambovtseva et al., 2016; Fernandez et al., 2018).

Our simulations have shown that the accretion disk of the HAeBeS is in general larger, denser and hotter than the accretion disk of a typical TTS. This is because the disk in the former case is characterized by a larger accretion rate. As a consequence, the magnetic field in the disk of the HAeBeS is stronger than in the disk of the TTS. The innermost region of the disk, where the temperature is high enough for thermal ionization of alkali metals and hydrogen and where the magnetic field is frozen into the gas, is more extended in the case of the HAeBeS. This region ranges from $0.012$ au up to $r=1$ au in the radial direction, under the adopted parameters.

We modeled the dynamics of the MFT for various initial cross-section radii, $a_{0}$, and plasma beta, $\beta_{0}$, at several radial distances $r$ in the range $0.012-1$ au. The simulations have shown that the dynamics of the MFT in the accretion disk of the HAeBeS is in general qualitatively similar to the case of a typical TTS. In the absence of the external magnetic field, MFTs rise from the disk with typical speeds up to $10-12$ km s-1 and form an outflowing magnetized corona of the disk. Radiative heat exchange rapidly equalizes the temperatures inside and outside the MFT, $T\approx T_{\rm e}$, under all considered parameters. We did not find thermal oscillations of the MFT caused by adiabaticity, unlike the case of the TTS, for which the thermal oscillations of the MFT with $a_{0}\sim 0.1$ and $\beta_{0}=1$ at $r<0.2$ au were found by Dudorov et al. (2019).
As in the case of TTS, MFTs transport the excess of the disk's magnetic flux into its corona. The pressure of the magnetic field outside the MFT halts the upward motion of the MFT near the point where the internal and external magnetic fields are nearly equal. This point of zero buoyancy, $z_{\rm o}$, typically lies near the surface of the disk, $z_{\rm s}\sim 2.5-3$ $H$, where $H$ is the local isothermal scale height. The greater the initial magnetic field strength of the MFT, the higher the point $z_{\rm o}$ lies. After the MFT rises to this point, its motion becomes oscillatory. The MFT moves up and down around the point $z_{\rm o}$ and pulsates. The magnitude of these magnetic oscillations decreases with time because of the loss of the MFT's kinetic energy due to friction with the external gas. The period of the oscillations increases with radial distance and ranges from a few hours at the inner boundary of the disk, $r=0.012$ au, up to several months at $r=1$ au in the case of typical $a_{0}=0.1\,H$ and $\beta_{0}=1$. The oscillation period increases with $\beta_{0}$ at a given $r$.

Correspondingly, the temperature of the MFT experiences decaying oscillations around the value of the local external temperature at the point $z_{\rm o}$. During the first few periods of the oscillations, the temperatures inside and outside the MFT are not equal to each other, i.e. the radiative heat exchange is not efficient and the MFT is practically adiabatic. But ultimately the heat exchange equalizes the temperatures of the MFT and the external gas. The maximum magnitude of the temperature variations ranges from several thousand Kelvin at the inner edge of the disk to several hundred Kelvin at $r=1$ au. The temperature variations during each period of the oscillations may be non-harmonic and asymmetric, since the oscillations take place near the surface of the disk, where the dependence of the disk's temperature on the $z$-coordinate is complex and non-monotonic.

Following the original idea of Khaibrakhmanov et al. (2018), we propose that the oscillations of MFTs can be a source of the emission variability as well as of the variable circumstellar extinction observed in young stars with accretion disks. Such variability is a widespread feature of the accretion disks of TTS and HAeBeS (see Kóspál et al., 2012; Flaherty et al., 2016), and it has also been found for the MWC 480 star considered as a reference in our modeling. Generally speaking, periodically rising and oscillating MFTs could contribute to the variability of the emission in different spectral ranges emanating from the innermost region of the disk, where the magnetic field is frozen into the gas: $r=0.012-1$ au in the case of the considered HAeBeS. The MFTs formed beyond the dust sublimation radius, $r=0.5$ au, could contain dust grains. In this case, temperature fluctuations of the oscillating MFT may cause the IR-variability of the disk. This assumption is supported by the observations of MWC 480 demonstrating the variations in the IR-flux over the $1-13\mu$m wavelength range (Sitko et al, 2008; Fernandez et al., 2018). This radiation emanates from the dust sublimation zone and points to changes of the disk's structure in this region. It should be noted that inhomogeneities in the disk's centrifugal winds containing both gas and dust can also cause the variability of young stars' emission, in particular the variations in circumstellar extinction observed in young stars (Tambovtseva and Grinin, 2008).
Application of both models to specific systems with well-established variability is needed in order to determine relative role of various variability mechanisms. In general, our results have shown that the accretion disks of HAeBeS have more extended region of the efficient generation of the magnetic field than in the case of TTS. The temperature of their corona is higher due to more intense stellar radiation. As a consequence, temperature variations in the oscillating MFTs has larger magnitude. Therefore, the IR-variability caused by oscillating MFTs would be more intense in the case of accretion disks of HAeBeS as compared to TTS. In order to investigate the connection between magnetic oscillations of MFTs and IR-variability of TTS and HAeBeS, we plan to calculate spectral energy distributions (SEDs) of the accretion disks taking into account variations of their structure due to the effect of rising MFTs. Interesting task is to model the synthetic light-curves of the accretion disks taking into account contribution of periodically rising MFTs into the IR flux of the disk. ### Acknowledgements Authors thank anonymous referee for useful comments. The work is supported by the Russian Science Foundation (project 19-72-10012). ## References * Akimkin et al. (2012) Akimkin VV, Pavlyuchenkov YN, Launhardt R, Bourke T. 2012. Structure of CB 26 protoplanetary disk derived from millimeter dust continuum maps. Astron Rep. 56(12):915–930. * Donati et al. (2005) Donati J-F, Paletou F, Ferreira J. 2005. Direct detection of a magnetic field in the innermost regions of an accretion disk. Nature. 438:466–469. * Donehew et al. (2011) Donehew B, Brittain S. 2011. Measuring the Stellar Accretion Rates of Herbig Ae/Be Stars. Astron J. 141(2):46. * Dudorov and Sazonov (1987) Dudorov AE, Sazonov YV. 1987. Hydrodynamical collapse of interstellar clouds. IV. The ionization fraction and ambipolar diffusion. Nauchnye Inform. 63:68–86. * Dudorov (1995) Dudorov AE. 1991. Fossil magnetic fields in T Tauri stars. Astron Rep. 39(6):790–798. * Dudorov and Khaibrakhmanov (2014) Dudorov AE, Khaibrakhmanov SA. 2014. Fossil magnetic field of accretion disks of young stars. Astrophys Space Sci. 352:103–121. * Dudorov and Khaibrakhmanov (2015) Dudorov AE, Khaibrakhmanov SA. 2015. Theory of fossil magnetic field. Adv Space Res. 55:843–850. * Dudorov et al. (2019) Dudorov AE, Khaibrakhmanov SA, Sobolev AM. 2019. Dynamics of magnetic flux tubes in accretion discs of T Tauri stars. Mon Not R Astron Soc. 487(4):5388–5404. * Fernandez et al. (2018) Fernandez RB, Long ZC, Pikhartova M, Sitko ML, Grady CA, Russel RW, et al. 2018. Variability of Disk Emission in Pre-main-sequence and Related Stars. IV. Investigating the Structural Changes in the Inner Disk Region of MWC 480. Astrophys J. 856:103. * Flaherty et al. (2016) Flaherty KM, DeMarchi L, Muzerolle J, Balog Z, Herbst W, Thomas S, et al. 2016. Spitzer Observations of Long-term Infrared Variability among Young Stellar Objects in Chamaeleon I. Astrophys J. 833(1):104. * Frank et al. (2014) Frank A, Ray TP, Cabrit S, Hartigan P, Arce HG, Bacciotti F, et al. 2014. Jets and Outflows from Star to Cloud: Observations Confront Theory. In: Beuther H, Klessen RS, Dullemond CP, Henning Th, Editors. Protostars and Planets VI. Tucson: University of Arizona Press. p.451–474. * Hubrig et al. (2011) Hubrig S, Schöller M, Ilyin I, Cowley CR, Mikulášek Z, Stelzer B, et al. 2011. Characterising the magnetic fields of the Herbig Ae/Be stars HD 97048, HD 150193, HD 176386, and MWC 480. 
Astron Astrophys. 536:A45. * Khaibrakhmanov et al. (2017) Khaibrakhmanov SA, Dudorov AE, Parfenov SYu, Sobolev AM. 2017. Large-scale magnetic field in the accretion discs of young stars: the influence of magnetic diffusion, buoyancy and Hall effect. Mon Not R Astron Soc. 464:586–598. * Khaibrakhmanov and Dudorov (2017) Khaibrakhmanov SA, Dudorov AE. 2017. Magnetic field buoyancy in accretion disks of young stars. Phys Part Nuclei Lett. 14:882–875. * Khaibrakhmanov et al. (2018) Khaibrakhmanov SA, Dudorov AE, Sobolev AM. 2018. Dynamics of magnetic flux tubes and IR-variability of young stellar objects. Res Astron Astrophys. 18:090. * Kóspál et al. (2012) Kóspál A, Ábrahám P., Acosta-Pulido JA, Dullemond CP, Henning Th, Kun M, et al. 2021. Mid-Infrared Spectral Variability Atlas of Young Stellar Objects. Astrophys J Suppl S. 201:11. * Levi (1978) Levi EH. 1978. Magnetic field in the primitive solar nebula. Nature. 276:481. * Li et al. (2016) Li D, Pantin E, Telesco CM, Zhang H, Wright CM, Barnes PJ, et al. 2016. An Ordered Magnetic Field in the Protoplanetary Disk of AB Aur Revealed by Mid-Infrared Polarimetry. Astrophys J. 832:18. * Mendigutía et al. (2013) Mendigutía I, Brittain S, Eiroa C, Meeus G, Montesinos B, Mora A, et al. 2013. Accretion variability of Herbig Ae/Be stars observed by X-Shooter. HD 31648 and HD 163296. Astrophys J. 776:44. * Parker (1979) Parker E. 1979. Cosmical magnetic fields: Their origin and their activity. Oxford: Clarendon Press. 858 p. * Semenov et al. (2003) Semenov D, Henning T, Helling C, Ilgner M, Sedlmayr E. 2003. Rosseland and Planck mean opacities for protoplanetary discs. Astron Astrophys. 410:611-621. * Shakura and Sunyaev (1973) Shakura NI, Sunyaev RA. 1973. Black holes in binary systems. Observational appearance. Astron Astrophys. 24:337–355. * Sitko et al (2008) Sitko ML, Carpenter WJ, Kines RL, Wilde JL, Lynch DK, Russel RW, et al. 2008. Variability of Disk Emission in Pre-Main Sequence and Related Stars. I. HD 31648 and HD 163296 — Isolated Herbig Ae Stars Driving Herbig-Haro Flows. Astrophys J. 678:1070–1087. * Takasao et al. (2018) Takasao S, Tomida K, Iwasaki K, Suzuki TK. 2018. A Three-dimensional Simulation of a Magnetized Accretion Disk: Fast Funnel Accretion onto a Weakly Magnetized Star. Astrophys J. 857:4. * Tambovtseva and Grinin (2008) Tambovtseva LV, Grinin VP. 2008. Dust in the disk winds from young stars as a source of the circumstellar extinction. Astron Lett. 34:231. * Tambovtseva et al. (2016) Tambovtseva LV, Grinin VP, Potravnov IS, Mkrtichian DE. 2016. Disk wind and magnetospheric accretion in emission from the Herbig Ae star MWC 480. Astron Lett. 42:583–597. * Turner et al (2010) Turner NJ, Carballido A, Sano T. 2010. Dust transport in protostellar disks through turbulence and settling. Astrophys J. 708:188. * Vasil and Brummell (2008) Vasil GM, Brummell NH. 2008. Magnetic Buoyancy Instabilities of a Shear-generated Magnetic Layer. Astrophys J. 686:709–730. * Vlemmings et al. (2019) Vlemmings W, Lankhaar B, Cazzoletti P, Ceccobello C, DallÓlio D, van Dishoek EF, et al. 2019. Stringent limits on the magnetic field strength in the disc of TW Hya. Astron Astrophys. L7:10. * Vorobyov and Pavlyuchenkov (2017) Vorobyov EI, Pavlyuchenkov YN. 2017. Improving the thin-disk models of circumstellar disk evolution. The 2+1–dimensional model. Astron Astrophys. 606:A5.
# CloudShield: Real-time Anomaly Detection in the Cloud Zecheng He Princeton University <EMAIL_ADDRESS>Ruby Lee Princeton University <EMAIL_ADDRESS> ###### Abstract In cloud computing, it is desirable if suspicious activities can be detected by automatic anomaly detection systems. Although anomaly detection has been investigated in the past, it remains unsolved in cloud computing. Challenges are: characterizing the normal behavior of a cloud server, distinguishing between benign and malicious anomalies (attacks), and preventing alert fatigue due to false alarms. We propose CloudShield, a practical and generalizable real-time anomaly and attack detection system for cloud computing. Cloudshield uses a general, pretrained deep learning model with different cloud workloads, to predict the normal behavior and provide real-time and continuous detection by examining the model reconstruction error distributions. Once an anomaly is detected, to reduce alert fatigue, CloudShield automatically distinguishes between benign programs, known attacks, and zero-day attacks, by examining the prediction error distributions. We evaluate the proposed CloudShield on representative cloud benchmarks. Our evaluation shows that CloudShield, using model pretraining, can apply to a wide scope of cloud workloads. Especially, we observe that CloudShield can detect the recently proposed speculative execution attacks, e.g., Spectre and Meltdown attacks, in milliseconds. Furthermore, we show that CloudShield accurately differentiates and prioritizes known attacks, and potential zero-day attacks, from benign programs. Thus, it significantly reduces false alarms by up to 99.0%. ## I Introduction The importance of cloud computing has grown significantly in the past years. Cloud customers can lease virtual machines from the cloud providers economically, sharing the physical resources provided by the cloud computing servers. Large cloud providers, like Amazon AWS [1], Google Cloud Platform [2], and Microsoft Azure [3], have proliferated this trend. There have been various attacks against cloud computing, especially on shared resources. For example, security-critical information, e.g., encryption keys, can be leaked by cache side-channel attacks. Previous work have revealed that many types of cache side-channel attacks can successfully obtain secret or private cryptographic keys [12, 50, 26, 63, 66, 25, 38, 45, 39]. Recently, speculative execution attacks [42, 44, 41] exploit performance optimization features of modern processors to breach the user-user, user-kernel or user- hypervisor isolation. Besides, zero-day attacks introduce challenges as they do not have known code nor known behavior. Anomaly detection techniques are perhaps the only viable solution for detecting unknown zero-day attacks. By its nature, anomaly detection does not look for specifics of an attack but models the normal behavior of a system. Deviation from normal behavior indicates anomalies: either an attack or a benign anomaly. However, existing anomaly detection systems in the cloud have challenges. First, a model of cloud server behavior is usually scenario-specific and is not easy to extend [13, 11, 23]. Multiple models have to be built to cover various cloud workloads. Second, false alarms in anomaly detection systems are very common in practice. The large volume of false alarms overwhelms the security analysts and causes alert fatigue, potentially causing real attacks to be missed. In this work, we investigate three questions. 
First: Can we make an anomaly detection system generalizable to different scenarios in cloud computing? We hypothesize that the normal behavior of a cloud server, although different from workload to workload running on it, consists of a major predictable part, and a minor unpredictable part that follows a certain probability distribution. If we pre-train a general model to predict a cloud system’s behavior, an anomaly can be detected by subtracting the predictable part from the original behavior markers and identifying the distribution of the remaining unpredictable part. To this end, we propose that the distribution of the unpredicted part denoted Reconstruction Error Distribution (RED), can capture the characteristics of any cloud workload. Thus, we show that rather than deploying an individual model for each workload, a general pretrained predictor model is leveraged, and anomalies are identified by statistically comparing the REDs. The second question we investigate is: How to select appropriate behavior markers to detect anomalous behavior in the cloud in real-time? Quick detection of anomalies and attacks can prevent further damage. To support real-time anomaly detection in the cloud, we need an approach to select appropriate behavior markers that can be measured at high frequency and can reliably represent the system’s behavior. To this end, we propose a principal component analysis (PCA)-based behavior marker selection method, and leverage the hardware performance counters, which are originally designed to monitor system performance and can be measured at high frequency, as exemplary markers to support real-time protection. The third question we explore is: How to deal with false-alarm fatigue? In practice, the “benign anomalous” behavior of a cloud system is quite common. For example, a cloud server used for database applications may be scheduled a different task when its workload is low. The missing piece in the past anomaly detection is the ability to correctly recognize the new tasks as benign. Otherwise, a large number of false alarms are raised, causing the system to be no longer usable. In this work, we refine each detected anomaly with the identification of benign anomalies and known attacks as a second step. This can significantly alleviate the false alarm problem in anomaly detection. Section II describes the background. Section III presents the threat model. Section IV discusses key challenges for anomaly detection in cloud computing. Section V describes our CloudShield methodology and Section VI evaluates our design. ## II Background ### II-A Attacks in Cloud Computing There have been many attacks on cloud computing. We focus on the rapidly growing and representative class of software attacks on shared hardware resources in cloud servers. Two main types are speculative execution attacks and cache-based side-channel attacks, which we use as example attacks in the evaluation of our anomaly detection system. We also include software attacks, e.g., buffer overflow, in our evaluation. Our system is not tailored at all to defeat these attacks, and the goal of our system is to detect even zero-day attacks, which are attacks that have never been seen before. #### II-A1 Speculative Execution Attacks Since their first appearance in January 2018, speculative execution attacks [42, 44, 4, 41, 5, 43, 61] have bombarded the world, with new variants continuously popping up. 
These attacks can leak the entire memory and break the software isolation provided by different virtual machines in the cloud, different virtual address spaces, and even by secure enclaves provided by SGX [61, 15]. Speculative attacks misuse the hardware performance optimization features in modern processors, e.g., Out-of-Order (OoO) execution, speculative execution, hardware prediction, and caching. These attacks allow transient instructions to execute, illegally access a secret, and change the microarchitectural state based on the secret [28]. When the transient instructions abort, architectural changes are discarded but the microarchitectural changes, e.g., cache state changes, remain. This leaves an opportunity for obtaining the secret by monitoring the microarchitectural state. #### II-A2 Cache Side-channel Attacks Cache-based side-channel attacks are timing attacks that have traditionally been used to leak the secret key of symmetric-key ciphers or the private key of public-key ciphers, thus nullifying any security provided by such cryptographic protections [29]. They can be classified based on cache “hit” or “miss”, “access” or “operation”. The access-based attacks leverage the difference in timing between a hit and a miss to indicate a “1” or “0” based on single memory access, while the operation-based attacks leverage the time difference for a whole encryption operation. These attacks target different levels of the cache hierarchy, e.g., L1 cache and last-level cache (LLC). Two representative cache side-channel attacks are the flush-reload attack and prime-probe attack. In the flush-reload attack, the initial state of a shared cacheline is set to absent by a clflush instruction. After waiting for a while for the victim to execute, if the victim did use the cacheline, the attacker will find the cacheline present (indicated by a fast cache hit). A variant of the flush-reload attack, i.e., the flush-flush attack [24], exploits the early abort if the cacheline to be flushed is not in the cache. In the prime-probe attack, the attacker first loads his data to fill the cache. After waiting for the victim to execute, the attacker checks (probe) if his cache lines are now absent, i.e., a slow cache miss, because the victim has brought in his data that evicted the attacker’s cache lines. #### II-A3 Buffer Overflow Attacks A buffer overflow attack [58] occurs when the written data exceeds the size of an allocated buffer. Buffer overflow attacks can be exploited by an attacker to insert code and data. A buffer overflow attack is usually triggered by malformed input to write executable code or malicious data to a destination that exceeds the size of the buffer. If the malicious code or wrong data is used in the program, erratic program behavior would occur, e.g., system crash, incorrect results, or incorrect privilege escalation. ### II-B Hardware Performance Counters Hardware performance counters (HPCs) are special registers that record hardware events. HPCs are widely available in commodity processors, including Intel, ARM, AMD, and PowerPC. Processors have been equipped with a Performance Monitor Unit (PMU) to manage the recording of hardware events. HPCs measure hardware events like the number of cache references, the number of instructions executed, and the number of branch mis-predictions; they also measure system events, like the number of page faults and the number of context switches. 
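As an illustration of how such event counts might be collected at a fixed interval on a Linux cloud server, the sketch below shells out to the standard perf tool; the event list, sampling interval and system-wide scope are illustrative choices, not CloudShield's exact collection setup.

```python
import subprocess

# Hardware and software events to sample every 100 ms (an illustrative choice).
EVENTS = "instructions,cache-references,cache-misses,branch-misses,page-faults,context-switches"

def sample_hpcs(duration_s=10, interval_ms=100):
    """Collect interval counts with the Linux `perf stat` tool.

    Requires perf to be installed and sufficient perf_event permissions;
    with -I and -x, the CSV interval output is written to stderr.
    """
    cmd = ["perf", "stat", "-a", "-x", ",", "-I", str(interval_ms),
           "-e", EVENTS, "--", "sleep", str(duration_s)]
    proc = subprocess.run(cmd, capture_output=True, text=True)
    rows = []
    for line in proc.stderr.splitlines():
        fields = line.split(",")
        if len(fields) >= 4 and not line.lstrip().startswith("#"):
            # fields: interval time, count, unit, event name, ...
            rows.append((float(fields[0]), fields[3], fields[1]))
    return rows

if __name__ == "__main__":
    for t, event, count in sample_hpcs(duration_s=2)[:12]:
        print(f"{t:8.3f}s  {event:20s} {count}")
```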
Although the HPCs were designed for system performance monitoring and software debugging, previous work have also shown the feasibility of using hardware performance counters in security, e.g., detecting malware [51, 19], firmware- modification [60] and kernel root-kits [59]. Zhou et al.[67] and Das et al.[18] cautioned using HPCs for security. Unlike these existing work, CloudShield leverages the reconstruction error distribution of HPCs, rather than directly using the noisy HPCs for anomaly detection. [51], [64] and [53] exploited hardware performance counters and supervised neural networks for malware and intrusion detection, respectively. A major drawback of using supervised deep learning for attack detection is that they require attack examples for training. Thus the zero-day attacks which are not seen in the training set can not be detected at runtime. ## III Threat Model The target system is a cloud-based Infrastructure-as-a-Service (IaaS) system, where programs share hardware resources. The programs running on the IaaS platform may interfere with each other. As is commonly done, important and frequently used cloud services are scheduled one main task per machine, or per processor core, e.g., machine learning training, database query, MapReduce, or being used as a web or stream server. New tasks can be scheduled on the same core if the workload of the main task is low. Our threat model covers attacks that breach the confidentiality and integrity of the cloud computing system. Notably, the side-channel attacks and the recently proposed speculative execution attacks are considered in this threat model. We assume the attacker can launch attack programs in the cloud. We assume that an attack program can hide by switching between running and sleeping. Our threat model particularly includes zero-day attacks. Unlike signature- based attack detection, we do not make particular assumptions about the attacks. We assume that there is no prior knowledge of attack code and the way the adversary interferes with the system. Furthermore, once an anomaly is detected, we explicitly consider reducing false alarms caused by other benign programs that concurrently run. These benign programs need to be distinguished from attacks, otherwise, they can cause a large number of false alarms. Consequently, cyber analysts can be overwhelmed by false alarms and miss real attacks, making the detection system ineffective in practice. Therefore, discriminating benign programs, known attacks, and zero-day attacks is an important component in this work. From the system design perspective, the rapidity and generalizability of the detection system are important. First, real-time anomaly detection is crucial, because an attack, e.g., the side-channel attacks and the speculative execution attacks, can quickly achieve their goal of leaking secrets in less than a minute. Quick detection and mitigation of the attacks prevent more damage to the system. We assume the hardware performance counters (HPCs) can be measured in the system. This assumption is reasonable because most of the modern processors used in cloud computing servers have been equipped with HPCs, and the mainstream operating systems support collecting HPC measurements. Second, generalizability plays an essential role in our design. We would like as few models as possible, ideally, a single model, to cover various cloud scenarios and workloads with minimum changes. This reduces the cost of switching models between workload changes. 
## IV CloudShield Challenges

We first identify three challenges of anomaly detection in the cloud, and how they can be handled:

1. How to model the different cloud workloads?
2. How to select appropriate behavior markers?
3. Anomaly detection can cause false-alarm fatigue. How to deal with the false alarms of anomaly detection in the cloud?

### IV-A How to Model Different Cloud Workloads?

Our intuition for modeling cloud workloads, which may vary a lot in functionality, scale, and required resources, is that the behavior of a cloud server running a common cloud workload can be decomposed into two parts: a major predictable component and a minor unpredictable component. The predictable component can be predicted by a pre-trained model. The unpredictable component follows an unknown but fixed distribution. We will validate this hypothesis in Section VI. With this assumption, rather than an individual model for each workload, we can pre-train a general program behavior predictor model $M$ for the predictable component, and subtract the prediction from the observed measurement of the system. The distribution of the remaining unpredictable component, i.e., the reconstruction error distribution (RED), can distinguish normal behavior from abnormal behavior. We leverage RED as the key to anomaly detection. Stealthy attacks can be subtle and hide within normal measurements. However, subtracting the major predictable component of the measurements from the total observed measurements amplifies the anomalous behavior and provides a robust way of detecting sneaky anomalies. We present a running example to illustrate this idea in Figure 1. Two sine curves plus subtle perturbations are shown in the top row, marked green and yellow, respectively. By looking at only these two raw measurements, one may not be able to tell the difference. We then subtract the predictable signal (the blue sine curves in the second row) to get the remaining part in the third row and examine the distribution of this remaining unpredictable part (bottom row). It shows that the probability distribution of the remaining part, which we call the reconstruction error distribution (RED) of a prediction model, amplifies the difference.

Figure 1: A running example of the reconstruction error distribution. Our intuition is that the behavior of a system can be decomposed into two parts: a major predictable component and a minor unpredictable component. If we can separate the predictable from the unpredictable component using a prediction model, the difference between normal and anomalous behavior is more clearly revealed.

### IV-B How to Select Appropriate Behavior Markers?

Modern processors usually provide counts of various events that can be used as behavior markers. Monitoring all of them is inefficient, if not impossible. Therefore, we need a method to choose the appropriate behavior markers from all possible markers that can represent the normal behavior of a system. Our key idea for selecting behavior markers is to quantify the relative importance of the selected events in representing the normal behavior of a system. Given the set of all possible behavior markers $b_{1},b_{2},\ldots,b_{n}$, we can define a metric $f$ to evaluate the relative contribution of a marker in representing the normal behavior of the system. Then, the behavior markers are sorted in descending order according to $f(b)$. The markers that exceed a certain threshold of importance are selected as candidate markers.
In our implementation, we define $f$ based on principal component analysis (PCA). Other metrics can also be leveraged to automatically select behavior markers. ### IV-C How to Distinguish Benign Anomalies and Malicious Attacks? The ultimate goal of CloudShield is to detect attacks, i.e., malicious anomalies. Once an anomaly is detected, the next step is to determine if it is a benign anomaly or a malicious attack. Without loss of generality, we simplify the discussion by making the assumption that a processor core runs one cloud workload, e.g., a stream server or a web server. A malicious anomaly can be a known attack or a zero-day attack. A benign anomaly can be benign programs that run concurrently with the cloud workload, where their interference could potentially cause false alarms. It could also be a stealthy attack that looks like a benign program. Note that the key difference between a cloud workload and a benign program is that the cloud workload, as is commonly done, is the one main task per cloud server, or per processor core, while benign programs are relatively small programs that can be scheduled on the same core if the workload of the main task (cloud workload) is low. While anomaly detection systems typically fall short of detecting benign versus malicious anomalies, Cloudshield can detect not just anomalies, but also the subset of anomalies that are attacks. Specifically, CloudShield builds two detectors, one is to identify known benign programs, and the other is to identify attacks. Also, while actual attack detection tends to be very domain-specific, our new contribution is to show that it is possible to use a general framework based on a pre-trained model to do attack detection. We are even able to detect stealthy attacks and potential zero-day attacks. ## V CloudShield ### V-A Overview We show an overview of CloudShield in Figure 2. There are three phases for learning and detecting anomalies and attacks in the cloud: 1) offline training and profiling, 2) online anomaly detection and mitigation and 3) online attack versus benign program detection. Figure 2: CloudShield methodology for anomaly and attack detection. The offline training and profiling phase consists of four steps: ① constructing three sets of programs: normal cloud workloads, known attacks, and certified benign programs. A _Certificate Validation Module_ is responsible for verifying the certificates of the workload and benign programs. The certificates are generated by trusted entities, e.g., companies that create these programs, and organizations or labs that verify the correctness and security of the programs. The certificate must contain the hash of the program binary and the public key signature of the trusted entity. ② Executing the workloads and programs in an offline clean environment and collecting their behavior markers. A _Program Behavior Collection Module_ is designed for this. ③ Training a default program-behavior predictor model $M$ in a _Training Module_. ④ Calculating the corresponding REDs $RD_{n}$, $RD_{a}$, and $RD_{b}$ as the reference Reconstruction Error Distributions (REDs) for normal cloud workloads, known attacks, and benign programs, respectively. We use the distribution $RD_{n}$ as the normal behavior of the processor core running a cloud workload, while $RD_{a}$ and $RD_{b}$ are used to further distinguish between known attacks and benign programs when an anomaly is identified. 
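To make step ④ concrete, the following is a minimal sketch (our own illustration, not the authors' released code) of fitting the three reference detectors from reconstruction errors that have already been produced by the pre-trained predictor $M$; the use of scikit-learn's KernelDensity, the Gaussian bandwidth value, and the file names are placeholders.

```python
# Hypothetical sketch of offline profiling (step 4): fit one KDE per program set
# over reconstruction errors E(t) = R^{t+1} - P^{t+1} gathered in a clean environment.
import numpy as np
from sklearn.neighbors import KernelDensity

def fit_red_detector(errors, bandwidth=0.5):
    """errors: array of shape (num_samples, num_events) of reconstruction errors."""
    return KernelDensity(kernel="gaussian", bandwidth=bandwidth).fit(errors)

# Placeholder files holding reconstruction errors for each set of programs.
errors_normal = np.load("red_cloud_workloads.npy")   # samples of RD_n
errors_attack = np.load("red_known_attacks.npy")     # samples of RD_a
errors_benign = np.load("red_benign_programs.npy")   # samples of RD_b

rd_n = fit_red_detector(errors_normal)   # normal cloud workload detector
rd_a = fit_red_detector(errors_attack)   # known attack detector
rd_b = fit_red_detector(errors_benign)   # certified benign program detector
```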
Note that the cloud workload needs to be paused before collecting HPCs and calculating REDs for attacks and benign programs. The normal cloud workload detector, known attack detector, and benign program detector are also computed at this time. The online anomaly detection and mitigation phase has two steps: ⑤ An _Online Detection Module_ collects runtime behavior markers of each processor core in a cloud server from the Performance Monitor Unit (PMU) in the host OS. These markers are input into the pre-trained model $M$ for the inference phase, to generate the run-time observed RED $D_{1}$. ⑥ Comparing the run-time RED $D_{1}$ to the reference RED $RD_{n}$. If $D_{1}$ does not follow the distribution of normal cloud workloads $RD_{n}$, an anomaly is detected and the cloud workload is paused to avoid further security breaches. Once an anomaly is detected, the online attack versus benign program detection phase (step-2 detection) is performed to distinguish benign programs from known attacks. This phase has three steps: ⑦ Collecting behavior markers when the cloud workload is no longer running. This step is necessary to eliminate the interference from the cloud workload, which is usually heavy, and to increase the detection accuracy. The new measurements are inferred through the pre-trained model $M$ and a new RED $D_{2}$ is gathered. ⑧ Comparing $D_{2}$ to the distribution of known attacks $RD_{a}$ to identify if the anomaly is caused by a known attack. ⑨ Comparing $D_{2}$ to the distribution of certified benign programs $RD_{b}$ to identify if the anomalous behavior is a false alarm. Note that steps ⑧ and ⑨ can be performed in parallel. As a complementary check, the cloud provider can confirm that benign programs are scheduled on this machine. In the above discussion, we have assumed that a single pre-trained model of normal cloud workloads is sufficient, that different known attacks can be detected with a single known attack detector, and that all benign programs added to a cloud workload can be identified with a single benign program detector. This significantly simplifies the implementation of CloudShield, and we will show that this results in excellent anomaly and attack detection in practice. More cloud workloads, attacks, and benign programs can always be added to the three sets of programs to retrain the model $M$ and the three detectors. The CloudShield implementation consists of four modules: a certificate validation module, a program behavior collection module, a training module, and an online detection module. The servers can share a set of the first three modules, as they are used during the training phase. Only the last module needs to run on each cloud server.

### V-B Pre-training Program Behavior Predictor

Feature selection. Modern processors usually provide various events that can be monitored using hardware performance counters. However, due to the limited number of hardware registers in the PMU, only a few of them can be monitored at the same time. While round-robin scheduling of HPC measurements is feasible, it increases overhead. Therefore, it is important to select the appropriate events from all possible events as behavior markers. We propose a principal component analysis (PCA) based selection method to help determine the events to monitor. Our key idea is that the selected events should be important for representing normal behavior. Specifically, the first principal component $PCA_{1}$ can be represented as a linear combination of all features.
The coefficient of the corresponding HPC measurement represents the contribution of that feature to the principal component. Formally,

$PCA_{1}=\|x^{T}w\|^{2}$ (1)

$\phantom{PCA_{1}}=\sum_{i}|w_{i}|^{2}x_{i}^{2}$ (2)

where $x=(x_{1},x_{2},\ldots,x_{n})$ is an HPC reading of $n$ events and $|w_{i}|$ is the coefficient of $x_{i}$ in the first principal component. It represents the importance of event $x_{i}$ in the first principal component. We collect 34 HPC events (Table XII in the Appendix) from five representative cloud benchmarks, i.e., ML training (PyTorch), stream server (FFserver), database server (Mysql), web server (Nginx), and Hadoop MapReduce. We collect the event measurements for an entire processor core, to provide system-level monitoring, rather than just monitoring a specific process or thread. We observe that although the benchmarks are different, they show consistency in the events’ importance. We use $\eta_{i}=\frac{|w_{i}|}{\sum_{j}|w_{j}|}$ as the importance of the corresponding event for a workload. We average $\eta$ over the five representative benchmarks as the final importance score $\overline{\eta}$ of the corresponding event. We show the features with $\overline{\eta}\geq 1\%$ in Table I. We use the thirteen selected events throughout the experiments. In fact, these are also the thirteen distinct events among the top-10 events of the five cloud workloads (Table XII).

TABLE I: HPC features with $\overline{\eta}\geq 1\%$.

Rank | Event | $\overline{\eta}$ | Rank | Event | $\overline{\eta}$
---|---|---|---|---|---
1 | Instruction | 0.267 | 8 | BPU read | 0.030
2 | Stall during issue | 0.189 | 9 | DTLB write | 0.025
3 | Stall during retirement | 0.178 | 10 | Branch | 0.023
4 | Cycles | 0.106 | 11 | L1D read miss | 0.020
5 | Load | 0.067 | 12 | L1I read miss | 0.018
6 | DTLB read | 0.043 | 13 | Context switch | 0.015
7 | Store | 0.037 | | |

Model selection. Recurrent Neural Networks (RNNs) and their variant, Long Short-Term Memory (LSTM), have become popular models for sequential data. To balance model complexity and prediction power, in the proof-of-concept implementation, we start from a single-cell LSTM as the behavioral model of the system. An LSTM cell has three gates that control information flow: the forget gate, the input gate, and the output gate. The LSTM automatically determines what information to “remember” and “forget”. Alternative models, e.g., Gated Recurrent Units (GRUs) [17] and BERT [20], can also be used as behavioral models of the system. As the main focus of this work is not to find the best model, but to show the feasibility of using the RED of HPCs to detect anomalies in the cloud system, without loss of generality, we simply show that LSTM models are sufficient for this anomaly detection.

Model training. Our goal is to train a model that can capture the predictable component of the behavior of a program. The program behavior markers $\{S_{i}\}_{i=1}^{N}$ (in our case, HPC measurements of cloud workloads) are obtained from a clean environment. $N$ is the total number of time frames over which HPCs are collected. In our experiments, each behavior measurement $S_{i}^{t}$ is a vector consisting of the thirteen monitored hardware events. At time $t$, the deep learning model is trained to predict $S_{i}^{t+1}$ using the behavior history $[S_{i}^{1},\ldots,S_{i}^{t}]$. Intuitively, since $\{S_{i}\}_{i=1}^{N}$ are normal behavior markers collected in the clean environment, the loss penalizes incorrect prediction of normal behavior; a minimal sketch of such a predictor is given below.
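The following is our own minimal sketch of such a single-layer LSTM next-step predictor in PyTorch, not the authors' released code; the hidden size and the use of a mean-squared prediction error as the loss are assumptions.

```python
# Hypothetical sketch: LSTM that predicts the next HPC measurement S^{t+1}
# from the history [S^1, ..., S^t]; the reconstruction error used later is the
# difference between the true and predicted next measurement.
import torch
import torch.nn as nn

NUM_EVENTS = 13          # the thirteen selected HPC events (Table I)
HIDDEN_SIZE = 64         # placeholder

class BehaviorPredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=NUM_EVENTS, hidden_size=HIDDEN_SIZE, batch_first=True)
        self.head = nn.Linear(HIDDEN_SIZE, NUM_EVENTS)

    def forward(self, history):                  # history: (batch, t, NUM_EVENTS)
        out, _ = self.lstm(history)
        return self.head(out[:, -1, :])          # predicted S^{t+1}: (batch, NUM_EVENTS)

def training_step(model, optimizer, history, target):
    """One SGD step on the next-step prediction loss (mean-squared error assumed)."""
    optimizer.zero_grad()
    pred = model(history)
    loss = nn.functional.mse_loss(pred, target)  # penalizes incorrect prediction of normal behavior
    loss.backward()
    optimizer.step()
    return loss.item()

model = BehaviorPredictor()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
```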
We train this model to minimize the loss function with Stochastic Gradient Descent (SGD). ### V-C RED Profiling RED generation of cloud workloads. We generate a profile of the normal cloud workloads in terms of reconstruction error distribution (RED), illustrated as $RD_{n}$ in Figure 2. First, reference sequences of the behavior measurement, $R=[R^{1},...,R^{T^{\prime}}]$, are collected in a clean environment. For this cloud server setting, each time frame $R^{i}$ is a vector of thirteen dimensions (the number of monitored events) in our experiment. Second, at time frame $t$, we use the trained model to predict time frame $t+1$ using the corresponding history behavior. We denote the prediction as $P^{t+1}$. The reconstruction error is defined as: $\displaystyle E(t)=R^{t+1}-P^{t+1}$ (3) Each reconstruction error sample $E(t)$ is a vector of dimension $n$, where $n$ is the number of monitored events. We gather the prediction errors of each cloud workload and define the overall distribution of {E(1), E(2), E(3)…} from all workloads as $RD_{n}$. KDE profiling of cloud workload. We use Kernel Density Estimation (KDE), a non-parametric estimation approach that better handles high-dimensional data, to profile the high-dimensional distribution of reconstruction errors from reference samples, denoted ④-a in Figure 2. We use non-parametric estimation because the formula of the RED of normal workloads is unknown, and its formula can be too complex to assume. KDE represents the distribution from elementary kernels. It assumes a small high probability area (Gaussian in our implementation) within a bandwidth around the observed samples, and sums them up as the probability distribution. Formally, KDE is defined as: $\displaystyle\hat{f}(x)=\frac{1}{nb}\sum_{i=1}^{n}K(\frac{x-x_{i}}{b})$ (4) where $\hat{f}(x)$ is the estimated probability density. $K(\cdot)$ is a kernel function, whose value drops rapidly outside a bandwidth $b$. $x_{i}$s are the samples from the distribution, i.e., E(t) in our case. $n$ is the total number of samples. In Figure 3, we show examples of reconstruction error distribution (RED) of normal cloud workloads (first five in green), benign programs (next six in blue), and attacks (last nine in red). To illustrate the high-dimensional distribution, we calculate the magnitude of REDs in Eq. 3 and observe that the normal cloud workloads, in general, have the smallest REDs (distributions to the left). Figure 3 shows clear difference between the cloud workloads, the benign programs, and the attacks. The cloud workloads have the smallest REDs (leftmost). The REDs of different benign programs are distinct, showing that RED can be used to distinguish benign programs. Most of the benign programs have larger REDs than cloud workloads, except the gpg-rsa program whose RED is similar to the cloud workloads. Moreover, the REDs of all evaluated attacks are to the right side, meaning larger reconstruction errors than cloud workloads and benign programs. We observe that one spectre attack (spectre v3) induces significantly larger RED than the other workloads, benign programs, and attacks. This shows that the spectre v3 attack’s behavior is unique compared to the other attacks. Figure 3: Reconstruction error distribution (RED) of normal cloud workloads (first five in green), benign programs (next six in blue) and attacks (last nine in red). RED of all attacks are different from the normal behavior of the cloud workload. 
The x-axis is the magnitude of RED, and y-axis is the probability density of that RED magnitude. Profiling for benign programs and known attacks. Similarly, we profile the RED of the benign programs and known attacks. We collect their behavior data in a clean execution environment from the Program Behavior Collection Module. Interestingly, we observe that it is not necessary to train another program behavior predictor model for benign programs and attacks. The pre-trained one on cloud workloads can be reused to profile the benign programs and known attacks. We hypothesize that it is because pre-training on different workloads improves the generalizability of the model, by suppressing potential overfitting. At last, two KDE estimations are performed on the RED of known attacks and benign programs, shown as ④-b and ④-c in Figure 2, respectively. We illustrate an example of kernel density estimation of benign programs in Figure 4. To illustrate, we first use t-SNE [35] to map the thirteen HPCs to a 2-D plane and build a KDE estimator of benign programs (gcc, gpg, and libquantum) using the REDs from the pre-trained model. The high-density regions (likely to be benign programs) are colored red while the low-density areas (unlikely to be benign programs) are colored blue. We plot three benign programs, i.e., gcc (green square), gpg (green diamond) and libquantum (green triangle) in Figure 4. We also depict four attacks, i.e., l3pp (red cross), fr (red square), spectre v1 (red diamond), and buffer overflow (red triangle), in Figure 4 and observe that they are all in the low-density area, where the benign program detector can identify them as non-benign programs. Figure 4 explains why KDE works, specifically the benign programs form high-density clusters while the attacks are outside the clusters. Figure 4: Illustration of kernel density estimation of benign programs. The high-density regions (likely to be benign programs) are colored red while the low-density areas (unlikely to be benign programs) are marked blue. We observe that benign programs (gcc, gpg, libquantum) are in the high-density area and attacks (l3pp, fr, spectre v1, and buffer overflow) are in the low-density areas. ### V-D Runtime Anomaly Detection and Mitigation The online detection module is responsible for detecting anomalies and distinguishing attacks and benign programs at runtime. A processor core’s behavior, in terms of hardware event measurements, is dynamically monitored at runtime. Anomaly detection based on RED. Similar to the offline profiling phase, the runtime gathered HPC sequences are sent through the pre-trained model (⑤ in Figure 2) to obtain the runtime observed RED $D_{1}$. The likelihood of the observed reconstruction error following the RED of normal cloud workloads ($RD_{n}$) is computed using the KDE normal workload detector ($\hat{f}(x)$ in Eq. 4) 111Tree-based structures, e.g., KD tree, can be used to find the $x_{i}$s close to $x$ and accelerate the computation because the effect of $x_{i}$s outside the bandwidth $b$ is negligible.. If the likelihood $\hat{f}(x)$ is lower than a pre-defined threshold, i.e., the prediction error does not follow the distribution of $RD_{n}$, an anomaly is detected. Based on the results of the anomaly detection, different response actions can be taken. If no anomaly is detected, no further actions are required. Once an anomaly is detected, CloudShield triggers different responses (⑥ in Figure 2). 
First, the cloud workload running on the machine is temporarily paused to avoid further damage. This also eliminates the interference between the cloud workload and other tasks that concurrently run (attacks or benign programs). Second, access to the most security-critical data and resources is temporarily turned off. Attacks against data confidentiality, e.g., side-channels, can target these secret data. Thus, cutting access to the security-critical data prevents these data from being leaked out. Third, the known attack detector and benign program detector are woken up, to identify if the anomaly is malicious (an attack) or benign (a false alarm). This can further reduce false-alarm fatigue in practice, as discussed below. ### V-E Distinguishing Benign Programs and Attacks A detected anomaly can be caused by benign programs. Thus, CloudShield attempts to distinguish “benign anomalies” caused by benign programs versus real attacks. As discussed in Section V-D, the cloud workload is paused once an anomaly is detected (⑥ in Figure 2). Now the monitored core is possibly running attacks. Moreover, other benign programs (can be a victim program) that concurrently run with the attack may hide the attack and make identifying attacks even harder. We will show CloudShield can detect an attack in both scenarios, with and without benign programs running. Attacks and benign programs identification. To distinguish the attacks and benign programs, firstly, hardware events’ measurements are monitored through the PMU after the main cloud workload is switched off. Then the PMU sends the newly measured data (without cloud workload) to the same pre-trained program behavior predictor $M$ for inference. Similar to anomaly detection, we compute the RED $D_{2}$ in the form of Eq. 3. The KDE attack detector (④-b) and the KDE benign program detector (④-c) were loaded into the online detection module from the training module 222Note that here we only need two KDE estimators, one for attacks and the other for benign programs, rather than an individual detector for each attack or benign program.. The attack detector computes the likelihood of the observed prediction errors following the RED of known attacks ($RD_{a}$), using Eq. 4. If a high likelihood is observed, the attack detector reports an attack. Similarly, the benign program detector computes the likelihood of the observed prediction error following the RED of benign programs ($RD_{b}$). If a high likelihood is observed, the benign program detector reports a benign program. Based on the decisions of the two detectors, we list the four possible final decisions in Table II. TABLE II: Benign program and attack decisions and responses. | | Known Attack --- Detector | Benign Program --- Detector Decision | Response Case 1 | Y | Y | Stealthy attack | Alarm (high priority) Case 2 | Y | N | Attack | Alarm (high priority) Case 3 | N | Y | Benign program | Resume cloud workload Case 4 | N | N | Zero-day attack or new benign programs | Alarm (medium priority) Case 1: The attack detector recognizes it as a known attack, and the benign program detector recognizes it as a benign program. In this case, CloudShield reports it as a stealthy attack where the attack program hides by mimicking the behavior of a benign program. Another possible scenario of this case is that a benign program, which could be a victim program, is concurrently running with the attack program. 
We will show in the experiments that attacks can still be detected even when they run together with benign programs. A high-priority alarm is raised and a detailed report is sent for inspection. Case 2: The attack detector recognizes it as an attack, and the benign program detector does not report it as a benign program. This case indicates clear attacks and a high-priority alarm is raised and a detailed report is sent for inspection. Case 3: The attack detector does not report it as an attack, and the benign program detector recognizes it as a benign program. In this case, the previously detected anomaly is caused by a benign program. The cloud workload is resumed to execute and no alarm is raised. Case 4: The attack detector does not report it as a known attack, and the benign program detector does not report it as a benign program. In this case, a potential zero-day attack or an unknown benign program is possible. A medium-priority alarm is raised by CloudShield. The cyber analysts can handle these alarms after the high-priority alarms. In fact, in our experiments, we show that case 4 is very unlikely. Response. Once an anomaly is detected (step 1), CloudShield has already paused the normal cloud workload to shield it from the attacks. Access to highly sensitive data, code, and resources can also be denied, depending on the server’s security response policy. If in the second step, an attack is detected, an alarm will be raised. Further responses can be taken to protect the system, and the code and data on it. CloudShield can also stop all processes running on the core. Meanwhile, CloudShield records the relative information into logs for further investigation. ### V-F System Update We discuss possible system updates of CloudShield. Specifically, CloudShield can update itself if new types of cloud workloads are added, new attacks are discovered or new benign programs are certified. A new model has to be trained only if new cloud workloads are added. For new attacks and benign programs, only the KDE detectors for attacks and benign programs need to be updated. New types of cloud workloads. The commonly used cloud workloads in practice share common characteristics [40, 49], thus this re-training process only needs to be performed when a new type of cloud workload is added. This kind of update is not frequent. Moreover, the whole update procedure can be performed during low usage time. CloudShield loads the updated models and detectors to the processor cores. New certified benign programs. Update of new certified benign programs is relatively lightweight, compared to cloud workload update, because the pre- trained model does not need to change. CloudShield then executes the new benign program, collects its behavior measurements in a clean execution environment, and calculate the REDs. As shown in the formula of KDE estimator (Eq. 4), the estimated likelihood $\hat{f}(x)$ is summed over all reference prediction errors $x_{i}$. Therefore, CloudShield only needs to append the new prediction errors of the new certified program to the existing prediction errors to form the new RED. New discovered attacks. This follows the same procedure of updating certified benign programs. It is also lightweight as the pretrained model does not need to be updated. ## VI Evaluation ### VI-A Experimental Settings Platform. We perform our evaluation of CloudShield on a server equipped with 2 Intel Xeon E5-2667 CPUs, each with 6 physical processor cores. 
Each core has a 32KB L1D (Level-1 Data) cache and a 32KB L1I (Level-1 Instruction) cache. Each package of six cores shares a 256KB L2 (Level-2) cache and a distributed last-level cache of 15MB (2.5MB × 6). The server has 64GB memory and a 2TB hard disk. The machine is also equipped with an Nvidia 1080Ti GPU. The HPC values are collected every 10 milliseconds using Perf [8] supplied with Ubuntu 14.04.6.

Cloud workload benchmarks. We choose five representative cloud benchmarks, as shown in Table III.

TABLE III: Cloud workload benchmarks.

Cloud workload | Description
---|---
Web server (Nginx) | Serving 1000 remote connections requesting webpages using the WRK benchmark [6]
Database server (Mysql) | Performing 128 concurrent queries using SysBench [7]
Stream server (FFserver) | Streaming an MPEG video in real time to a remote user with FFserver and FFmpeg
ML training (Pytorch) | Training an LSTM model using an Nvidia 1080Ti GPU
Hadoop | Performing Terasort [9] using MapReduce

Evaluated attacks. We select nine representative runtime attacks against cloud computing systems for evaluation (Table IV). The evaluated attacks are cache side-channel attacks, speculative execution attacks, and buffer overflow attacks. The cache side-channel attacks silently leak information. The four recently discovered speculative execution attacks cover the main hardware resources exploited by the different speculative attack variants. We also evaluate a representative software attack, i.e., the buffer overflow attack.

TABLE IV: Three categories of nine attacks are evaluated: cache side-channel attacks, speculative execution attacks, and the buffer overflow attack.

Category | Attack
---|---
Cache side-channel attacks | L1 cache prime-probe attack (l1pp) [26]; L3 cache prime-probe attack (l3pp) [45]; Flush-reload (fr) [63]; Flush-flush (ff) [24]
Speculative execution attacks | Speculative boundary bypass (spectre v1) [42]; Indirect branch mis-prediction (spectre v2) [42]; Meltdown (spectre v3) [44]; Speculative store bypass (spectre v4) [5]
Buffer overflow | Stack overflow attack [58]

Benign programs. We choose representative benign programs from the SPEC2006 benchmark suite [34]. The evaluated benign programs cover a broad range of programs: crypto software (gpg-rsa), a compiler (gcc), file and video compression tools (bzip2, h264ref), scientific computation (mcf, milc, namd, libquantum), statistics and machine learning (soplex, hmmer), and gaming (gobmk).

Data collection. Data were collected in different scenarios. To evaluate the first step, i.e., anomaly detection, we collected data when ① only the cloud workload is running; ② the cloud workload is running with the benign programs listed above; ③ the cloud workload is running with the attacks listed above; and ④ the cloud workload is running with both benign programs and attacks. To evaluate the second step, i.e., the detection of attacks and benign programs, which we do when the cloud workload is not running, we collected data when ① only an attack is running; ② only a benign program is running; and ③ an attack is running together with a benign program. Due to the large number of combinations of cloud workloads, attacks, and benign programs, we run each combination for six minutes on a server, and split the data equally into training, validation, and testing sets.

### VI-B Metrics

We first compute an anomaly score for each behavior measurement and then use a threshold to determine the False Positive Rate (FPR) and False Negative Rate (FNR). Anomaly score.
An anomaly score is $-log(\hat{f}(x))$, where $\hat{f}(x)$ is the KDE density in Eq. 4. Low density $f(x)$ indicates a high anomaly score. During inference, an anomaly score is computed for each behavior measurement, and the score is compared to a threshold to make a binary decision whether it is normal or abnormal (for the normal cloud workload detector), or whether it is a benign program (for the benign program detector), or whether it is an attack (for the attack detector). Threshold. The threshold calculation for the cloud workload detector is different from the attack and benign program detectors. The threshold of the cloud workload detector is obtained such that 80% of the validation normal measurements during the training phase are correctly classified as normal (no attack data are used to construct the normal cloud workload detector and to determine the threshold). For the benign program detector and attack detector, the threshold is obtained such that the FPR (False Positive Rate) and FNR (False Negative Rate) are equal, i.e., the equal error rate is achieved, on the validation set. False Positive Rate (FPR) and False Negative Rate (FNR). We also report the standard FPR and FNR as metrics. Based on the anomaly scores and thresholds, the cloud workload detector makes binary decisions (normal or abnormal). Similarly, the benign program detector and known attack detector determine if a benign program or a known attack is running. We report the FPR when the cloud workload or benign programs are running, and the FNR if an attack is running (with and without cloud workloads or benign programs). ### VI-C Anomaly+Attack Evaluation As CloudShield first detects anomalies (step 1) and then identifies attacks and benign programs (step 2), we first illustrate the end-to-end (anomaly detection + attack detection) results in Figure 5 and show the numerical results in Table V. Separated results and analysis of each step are discussed in Section VI-D-Section VI-F. We evaluate different window sizes: if the window size is $w$, in step 1, $w$ contiguous anomalous behavior marker measurements are identified as an anomaly. Similarly, $w$ contiguous behavior marker measurements are collected before an attack or benign program can be identified. For a specific cloud workload, we report the average FPR for that cloud workload + each benign program. We report the average FNR for that cloud workload + each attack + each benign program we evaluated. A higher FPR increases the number of false alarms, while a higher FNR increases the chance that an attack will go undetected. Low rates of both are desired. We observe that the CloudShield indeed has very low FNRs for all 5 workloads for all window sizes - less than 0.3%, indicating excellent detection accuracy and hence, excellent security. FPRs are slightly higher but also less than 0.6%. When $w$=1, the webserver workload has the highest FPR (0.51%), while stream server achieves the lowest FPR (0.26%). For all five cloud workloads, the FPR decreases as $w$ becomes larger, however, the FNR increases accordingly. When $w=100$, FPR decreases to 0.13% (for stream server) and 0.24% (for webserver). FNR increases to 0.09% and 0.19%. When $w=200$, FNR tends to exceed FPR for all five cloud workloads. Note that a larger window size can increase the detection delays (evaluated in Section VI-G). Because of the low FPR and FNR, a window size of 5-10 should be sufficient. Figure 5: End-to-end (anomaly detection + attack detection) results for 5 different cloud workloads. 
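To make the scoring, thresholding, and windowed decision just described concrete, here is a minimal sketch of the step-1 online check (our own illustration); it assumes a scikit-learn KernelDensity detector fit as in Section V-C, and the variable names are placeholders.

```python
# Hypothetical sketch of the online step-1 decision: anomaly score -log f_hat(x),
# a threshold chosen so that 80% of validation normal measurements are classified
# as normal, and an anomaly raised after w contiguous anomalous measurements.
import numpy as np

def anomaly_scores(kde_detector, errors):
    """-log f_hat(x) per reconstruction-error sample; higher means more anomalous."""
    return -kde_detector.score_samples(np.atleast_2d(errors))   # score_samples returns log-density

def calibrate_threshold(kde_detector, validation_normal_errors, keep_normal=0.80):
    scores = anomaly_scores(kde_detector, validation_normal_errors)
    return np.percentile(scores, keep_normal * 100)              # 80% of normal scores fall below

def detect_anomaly(kde_detector, runtime_errors, threshold, w=5):
    """Return True if w contiguous runtime measurements all score above the threshold."""
    flags = anomaly_scores(kde_detector, runtime_errors) > threshold
    run = 0
    for anomalous in flags:
        run = run + 1 if anomalous else 0
        if run >= w:
            return True
    return False
```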
TABLE V: Quantitative end-to-end (anomaly detection + attack detection) evaluation results.

We compare the proposed CloudShield to four representative anomaly detection methods in the literature, i.e., Isolation Forest (IF) [46], One-class SVM (OCSVM) [55], Local Outlier Factor (LOF) [14], and Principal Component Analysis (PCA) [36]. We show the end-to-end results in Table VI. For the existing anomaly detection methods, we replace the pretrained model + KDE in steps 1 and 2 of CloudShield with the corresponding method. We average the FPR and FNR across each combination of cloud workload, benign program, and attack. We observe that, with $w$=5 or $w$=10, CloudShield achieves lower FPR and FNR compared to the other methods. Specifically, when $w$=5, the best FPR and FNR of the existing methods are 1.41% (OCSVM) and 6.95% (PCA), respectively, while CloudShield has a much lower (better) FPR of 0.34% and FNR of 0.06%. Similar results are shown when $w$=10.

TABLE VI: Comparing CloudShield to existing anomaly detection methods.

| | False Positive Rate (FPR) | False Negative Rate (FNR)
---|---|---|---
w=5 | Isolation Forest (IF) | 0.1728 | 0.442
| One-class SVM (OCSVM) | 0.0141 | 0.1011
| Local Outlier Factor (LOF) | 0.0518 | 0.0956
| PCA | 0.0587 | 0.0695
| CloudShield | 0.0034 | 0.0006
w=10 | Isolation Forest (IF) | 0.1539 | 0.416
| One-class SVM (OCSVM) | 0.01571 | 0.1031
| Local Outlier Factor (LOF) | 0.0516 | 0.0990
| PCA | 0.0519 | 0.1150
| CloudShield | 0.0033 | 0.0007

### VI-D Can CloudShield Detect Anomalous Behavior in Realtime?

A key challenge for real-time anomaly detection is short or stealthy attacks. Attacks can hide by switching between running and sleeping. A good anomaly detection system should be able to capture the attack once it is running. We evaluate CloudShield against such attacks and show it can detect them almost immediately. We schedule each of the nine attacks to run and then sleep for a random period (10s-40s) before the next attack runs. The experiment is performed while the ML training workload is running. We show the attack scheduling and the anomaly scores ($-log(\hat{f}(x))$, with $\hat{f}$ from Eq. 4) output by CloudShield in Figure 6. It is clear that once an attack is running, possibly after sleeping, CloudShield captures it (indicated by a large anomaly score). Once the attack program is suspended, the anomaly score quickly returns to a low value. Therefore, the proposed CloudShield can detect anomalies in real time. We also observe that two last-level cache attacks, i.e., the flush-flush attack (ff) and the LLC prime-probe attack (l3pp), and two speculative execution attack variants, i.e., spectre v2 and spectre v3, result in higher anomaly scores than the other attacks, indicating their distinctive behavior.

Figure 6: Real-time detection of anomalies. We schedule each of the nine attacks to run for 10 seconds and sleep for a random period of time (10-40s).

### VI-E Can CloudShield Detect Zero-day Attacks?

We evaluate the anomaly detection (step 1) on the nine attacks, including the four recently proposed speculative execution attacks. Note that in the anomaly detection step, CloudShield is only trained on the normal behavior of the cloud workloads, and has not seen code or data of any of the nine attacks, so they are like zero-day attacks to CloudShield in this experiment. We consider the model predictions for the four scenarios:

1. Normal workload
2. Normal workload and a benign program running
3. Normal workload and an attack running
4.
Normal workload, a victim program, and an attack running We first illustrate a real example of anomaly detection in Figure 7 with the ML training workload. We run a flush-reload attack and a victim program, i.e., gpg-rsa. In period ①, the flush-reload attack is activated. We observe that the anomaly score quickly increases significantly. In period ②, the victim program gpg-rsa is running and it was not recognized as an anomaly. In the period ③, the flush-reload attack is executed and the anomaly score again quickly jumps to a high value. In period ④, the victim program ends and the anomaly score remains high as the attack is still running. Figure 7: An example of anomaly detection of the flush-reload attack with the ML training workload. We show quantitative results of anomaly detection (step 1) in Table VII. The first line of the results is scenario 1 where only the normal cloud workload is running. The next four lines show scenario 2, i.e., the normal workload and an additional benign program are running. The next nine lines present the results of scenario 3, where an attack is running concurrently with the normal cloud workload. Finally, representatives of scenario 4 are shown in the remaining (27) lines of Table VII where an attack and a victim program are running together. TABLE VII: Results of anomaly detection (step 1) with different cloud workloads. We find that when only the normal workload is running (scenario 1), CloudShield almost always correctly recognizes it as normal (the first line) for the ML training, database, stream server, and web server benchmarks with a 0.1%-0.5% false positive rate (predict abnormal column). When MapReduce is running, CloudShield misrecognizes 1.7% of normal workloads as anomalous – still a small level. When a benign program is running concurrently with the cloud benchmark (scenario 2), we observe that the results highly depend on the cloud workload and the benign program. For example, the GPG-RSA is recognized as normal with less than 1% false positive rate in the database, web server, and MapReduce workloads. However, large false positives, i.e., 6.4% and 47.6% of the GPG- RSA, are observed in the stream server and ML training workloads, respectively. These false alarms can cause false alarm fatigue. Thus it requires the next step to further distinguish benign programs versus malicious anomalies, and reduce the number of false alarms. Note that CloudShield distinguishes certified benign programs from attacks (step 2) to reduce false alarms after an anomaly is identified (results discussed in Section VI-F). In scenario 3, once an attack is running with the cloud workload, it can be detected with zero false negatives in the database, web server, and MapReduce workloads. For the ML training workload, the Spectre v1 and v2 attacks cause 1.2% and 0.5% false-negative rates, respectively. For the stream server workload, the Spectre v1 and v2 attacks introduce 1.6% and 0.3% false-negative rates, respectively. These results show that CloudShield is capable of detecting zero-day attacks, since the normal cloud workload detector in step1 has not been trained with any attack. We also evaluate scenario 4 where an attack program runs concurrently with a benign or victim program and the cloud workload. Similar to scenario 3, we observe that the attacks can be detected with zero false negatives with the database, web server, and MapReduce workloads. 
For the ML training workload, the worst case is when spectre v1 and libquantum are running concurrently, the attack is missed by 3.1%, slightly higher than scenario 3 where the spectre v1 attack is running alone (1.2%). For the stream server workload, the highest false-negative rate is 4.5% when a flush-reload attack is executed with gpg- rsa. Next is when Spectre v1 and libquantum are running with the stream server, the FNR is 2.8%. Although these results from just step 1 for anomaly detection are very good for not missing attacks (low FNRs), the FPRs in scenario 2 when benign programs cause false alarms seem higher than we would like to see. Hence, we propose step 2, to detect benign anomalies from real attacks. ### VI-F Can CloudShield Distinguish Benign Anomalies from Attacks? Anomalies can be caused by benign programs, i.e., benign anomalies. Therefore, once an anomaly is detected, CloudShield takes the next step to figure out whether it is a benign anomaly or an attack. As shown earlier, CloudShield implements two detectors to identify known attacks and certified benign programs, respectively. These two detectors can reduce false alarms by 99.0%. We show a real example of CloudShield reducing false alarms by distinguishing known attacks and certified benign programs in Figure 8. We run an attack (spectre v3) and a benign program (gcc), both with the ML training workload. The periods ① and ③ indicate that the attack is running, and the period ② means the benign program is running. Figure 8 (a) illustrates the anomaly scores in the anomaly detection step. We observe that while both attacks are correctly identified (periods ① and ③), the beginning of gcc execution is incorrectly recognized as attacks (false alarms). Then the ML training workload is paused and the behavior measurements are re-collected as input to the two step 2 detectors. Figure 8 (b) shows the result of the attack detector. High values indicate an attack and low values mean no attack. It correctly identifies periods ① and ③ as attacks, while ② is not an attack. Figure 8 (c) shows the result of the benign program detector. High values represent a benign program and low values indicate a program that is not in the set of certified benign programs. We find that the certified benign program detector reports high values in period ② (and idle periods), while the values in periods ① and ③ are low (not certified benign programs). Jointly considering the two detectors, CloudShield correctly determines that ② is a certified benign program, while ① and ③ are real attacks. Figure 8: An example of reducing false alarms by indentifying attacks and certified benign programs. We show quantitative results of attacks and certified benign program detection (step 2) in Table VIII. We select eleven representative benign programs from the SPEC benchmark suite and the same nine attacks as in previous sections for evaluation. For the benign program detection, we observe that six benign programs (gpg-rsa, bzip2, namd, soplex, hmmer, and libquantum) can be recognized correctly with no false alarms. The milc program introduces the highest but acceptable FPR of 3.5%. Of this, 2.6% were identified as stealthy attacks and 0.9% as zero-day attacks or unknown benign programs. On average, 99.0% of the benign programs can be identified correctly, i.e., the false alarms raised by benign programs in the anomaly detection is suppressed by 99.0%. 
Within the remaining false alarms (1.0%), we observe that 0.6% are recognized as case 4 (zero-day attacks or unknown benign programs), which results in a medium-priority alarm, and 0.4% are recognized as high-priority attacks (cases 1 and 2). For attack detection, we observe that all attacks are correctly identified. A detailed analysis shows that 99.8% of attacks are identified as high-priority attacks (case 2) and 0.2% are recognized as stealthy attacks.

TABLE VIII: Results of benign programs/attacks detection (step 2).

TABLE IX: Results of benign programs/attacks detection (step 2), when attacks and benign programs run concurrently.

We consider a more difficult scenario where an attack is running concurrently with a certified benign program. We run three benign programs: gpg-rsa, gcc, and libquantum with the nine evaluated attacks in Table IX. First, on average, 99.9% of attacks, when they are running concurrently with benign programs, are correctly recognized as attacks. A detailed analysis shows that, when an attack program is running concurrently with a benign program, 96.4% are identified as high-priority attacks (case 2), 3.1% are recognized as high-priority stealthy attacks (case 1), and only 0.4% are classified as medium-priority zero-day attacks (case 4). These results show that CloudShield can still detect attacks even if they hide in benign programs.

TABLE X: Results of zero-day (unknown) attacks in step 2.

Zero-day attack detection in step 2. We conduct another experiment by putting only the L1 prime-probe (l1pp), LLC prime-probe (l3pp), spectre v1, spectre v2, and buffer overflow attacks in the set of known attacks. This means that the flush-reload (fr), flush-flush (ff), spectre v3, and spectre v4 attacks are unknown zero-day attacks. We show the known and zero-day attack detection results in Table X. We observe that CloudShield can still correctly recognize unknown attacks. The flush-flush and spectre v3 attacks are classified as case 4 (zero-day attacks). The other two attacks, i.e., the flush-reload and spectre v4 attacks, are detected as known attacks, probably because their behavior is similar to the known attacks.

Necessity of the two-step method. We have also investigated detecting attacks together with detecting anomalies in the first step, when the cloud workloads are running. The benefit of doing this is that the attacks can be identified more quickly than in the second step. However, the downside of detecting attacks in the first step is that the attacks and cloud workloads interfere with each other, which makes the behavior markers collected in the first step less able to identify the attacks. Hence, our two-step method is much better.

### VI-G Detection Latency and Overhead

Detection latency. The detection latency is defined as the period from the time the attack starts running to the time an attack alarm is raised. We present the overhead of robust detection using more than one set of behavior marker measurements, e.g., with a sequence of $w=5$ sets of measurements. The timeline for detecting an attack is shown in Figure 9 (similar for attack and benign program detection). $t_{B}$ denotes the time interval for collecting $w$ behavior marker measurements. $t_{RED}$ represents the time needed for computing the RED by running inference on the pre-trained model. $t_{KDE}$ is the time to evaluate the KDE detector. The computation of RED and KDE can overlap with the behavior marker collection if $w>1$ (Figure 9).

Figure 9: Illustration of the timeline for anomaly detection.
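As a small worked illustration of this timeline (our own arithmetic, not a formula stated by the authors), the step-1 latency when no anomaly is found is simply the collection window plus the two inference terms, since the RED and KDE computations for the last window cannot be overlapped with further collection:

```python
# Hypothetical sketch: compose the step-1 detection latency from the timeline components
# described above (values in milliseconds, taken from the w=1 row of Table XI below).
def step1_latency_ms(w, sample_interval_ms=10.0, t_red_ms=0.02, t_kde_ms=0.76):
    t_b = w * sample_interval_ms          # collecting w behavior marker measurements
    return t_b + t_red_ms + t_kde_ms      # inference on the final window is not overlapped

for w in (1, 5, 10):
    print(w, step1_latency_ms(w))         # ~10.78, ~50.78, ~100.78 (Table XI: 10.78, 50.78, 100.79)
```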
Table XI presents the detection latency when $w=$1, 5, 10, 50 and 100. As the HPCs are sampled every 10ms, $t_{B}$=10ms when $w$=1. We measure $t_{RED}$ and $t_{KDE}$ on the server. Specifically, the calculation of RED ($t_{RED}$) is performed on the GPU and the calculation of KDE ($t_{KDE}$) is performed on the CPU of the server. The two overall numbers in the parenthesis are detection time when there is no anomaly (thus no step 2) and there is an attack, respectively. We show that CloudShield can detect anomalies and identify the attacks and benign programs in 32 to 112 milliseconds if $w=1$ or $w=5$. Considering the attack usually takes seconds to succeed, e.g., several encryption operations for side-channel attacks, this latency can achieve our design goal of real-time detection. Larger window sizes can reduce the false- positive rate, however, the false-negative rate is slightly increased (Figure 5) and detection time is increased to seconds. We suggest $w$=5 is sufficient. TABLE XI: Detection latency (ms) versus window sizes. (ms) | Anomaly Detection | Benign program/Attack detection | Overall (no anomaly, attack) ---|---|---|--- | $t_{B}$ | $t_{RED}$ | $t_{KDE}$ | $t_{B}$ | $t_{RED}$ | $t_{KDE}$ | w=1 | 10.0 | 0.02 | 0.76 | 10.0 | 0.02 | 1.58 | (10.78, 32.38) w=5 | 50.0 | 0.02 | 0.76 | 50.0 | 0.02 | 1.58 | (50.78, 112.38) w=10 | 100.0 | 0.02 | 0.77 | 100.0 | 0.02 | 1.60 | (100.79, 212.41) w=50 | 500.0 | 0.02 | 0.78 | 500.0 | 0.02 | 1.62 | (500.80, 1012.44) w=100 | 1000.0 | 0.02 | 0.79 | 1000.0 | 0.02 | 1.65 | (1000.81, 2012.48) Performance overhead. We evaluate the performance overhead of CloudShield. We use the benchmarks in Table III. We use completion time as the metric for ML training and MapReduce, average time per query for Database and Webserver, and processing time per frame for Stream Server. All the metrics are normalized to the cloud workload running without Cloudshield. Figure 10 reports the normalized metrics without CloudShield (blue solid) and with CloudShield running (orange dashed). Results are averaged over five runs). We see that CloudShield only introduces a small performance overhead. The maximum overhead is 6.3% for MapReduce and the minimum is 0.5% for database. In our experiments, we observe that on average CloudShield consumes 17.1% CPU time on the server. Figure 10: Performance overhead of CloudShield with different cloud workloads. ### VI-H Discussion: Evasion Attacks There have been many attacks against deep learning systems [31, 33, 32, 62]. Previous work [54] revealed that attackers can effectively generate adversarial examples in the black-box setting to evade deep learning based intrusion detection systems. However, generating adversarial examples against our system is generally harder. First, our system monitors the dynamic behavior of a program. Generating dynamic adversarial examples that can both interact with other programs and escape detection in the black-box setting remains challenging. Second, the behavior markers monitored in our system are HPC measurements. As HPC measurements highly depend on the context of the executing environment, this introduces an extra obstacle for the attacker to construct the same execution environment when generating evasion adversarial examples. How to design and develop efficient evasive attacks and how to detect these attacks are worth exploring as future work. ## VII Past Work Past work on anomaly detection in the cloud mainly focused on machine performance degradation. 
Liu et al.[47] proposed self-organizing maps for detecting anomalies in machine performance. Vallis et al.[56] leveraged statistical measurement, e.g., median, and median absolute deviation, to detect machine performance degradation. Pannu et al.[52] implemented adaptive anomaly detection (AAD) based on non-linear transformation for detecting failures in cloud infrastructures. These work detected performance anomalies to provide reliable cloud service, however, unlike our work, they did not consider anomalies caused by attacks. Another line of research detected specific attacks in the cloud. For example, Zhang et al.[65] developed CloudRadar for side-channel attack detection in the cloud using hardware performance counters. Guo et al.[27] detected cache side- channel leakage with symbolic execution. Wang et al.[57] leveraged symbolic execution to detect speculative execution attacks. However, each of these detected a specific type of attack, unlike our work, which covers a broader scope of attacks, including zero-day attacks. Recent work used deep learning for anomaly detection. Alam et al.[10] proposed AutoPerf based on an autoencoder to detect hardware performance anomalies. DeepLog [22] leveraged system event logs to detect system failures. Sucheta et al.[16] and Malhotra et al.[48] proposed LSTM for sequential anomaly detection. He et al.[30] leveraged LSTM for anomaly detection in critical infrastructures. Du et al.[21] updated anomaly detection model through unlearning. Hu et al.[37] developed a deep learning hardware module for impostor detector in smartphone systems. As stated in their work, unlearning may introduce a higher false-positive rate. In contrast, CloudShield significantly reduces false positives by distinguishing benign and malicious anomalies. ## VIII Conclusion In this paper, we proposed CloudShield, a real-time anomaly and attack detection system for cloud computing. CloudShield leverages a single pre- trained deep learning model and leverages the reconstruction error distribution (RED) of hardware performance counters to model the normal behavior of a system using kernel density estimation (KDE). It is worth noting that CloudShield explicitly takes false-alarm reduction into account, a critical problem in anomaly detection systems. Once an anomaly is detected, CloudShield automatically distinguishes benign programs, known attacks, and zero-day attacks by investigating the different attack and benign program reconstruction error distributions, using the pre-trained model and kernel density estimators. We evaluate CloudShield on various cloud workloads, attacks, and benign programs. Experimental results show that CloudShield can reliably detect various attacks in real-time with high accuracy and very low FNR and FPR. Moreover, experiments show that it can correctly identify unknown zero-day attacks and stealthy attacks that are running concurrently with benign programs. CloudShield achieves very low 0.3% FNR and 0.6% FPR for overall anomaly-attack detection. Especially, we find that CloudShield can detect the recently proposed speculative execution attacks in 32-112ms, and it can reduce false alarms by up to 99.0%. ## References * [1] https://aws.amazon.com, 2018. * [2] http://cloud.google.com, 2018. * [3] https://azure.microsoft.com/en-us/services/machine-learning-studio/, 2018\. * [4] https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2018-3640, 2018. 
* [5] https://msrc-blog.microsoft.com/2018/05/21/analysis-and-mitigation-of-speculative-store-bypass-cve-2018-3639/, 2018\. * [6] https://github.com/wg/wrk, 2019. * [7] https://github.com/akopytov/sysbench, 2019. * [8] https://perf.wiki.kernel.org/index.php/Main_Page, 2020. * [9] https://hadoop.apache.org/docs/current/api/org/apache/hadoop/examples/terasort/package-summary.html, 2020\. * [10] M. Alam, J. Gottschlich, N. Tatbul, J. S. Turek, T. Mattson, and A. Muzahid, “A zero-positive learning approach for diagnosing software performance regressions,” in _Advances in Neural Information Processing Systems (NeurIPS)_ , 2019. * [11] E. Asselin, C. Aguilar-Melchor, and G. Jakllari, “Anomaly detection for web server log reduction: A simple yet efficient crawling based approach,” in _IEEE Conference on Communications and Network Security_ , 2016. * [12] J. Bonneau and I. Mironov, “Cache-collision timing attacks against aes,” in _International Workshop on Cryptographic Hardware and Embedded Systems (CHES)_ , 2006. * [13] L. Bossi, E. Bertino, and S. R. Hussain, “A system for profiling and monitoring database access patterns by application programs for anomaly detection,” _IEEE Transactions on Software Engineering_ , 2016. * [14] M. M. Breunig, H.-P. Kriegel, R. T. Ng, and J. Sander, “Lof: identifying density-based local outliers,” in _ACM SIGMOD International Conference on Management of Data (SIGKDD)_ , 2000. * [15] M. S. Brunella, S. Turco, G. Bianchi, and N. B. Melazzi, “Foreshadow-vmm: on the practical feasibility of l1 cache terminal fault attacks,” 2018. * [16] S. Chauhan and L. Vig, “Anomaly detection in ecg time signals via deep long short-term memory networks,” in _IEEE International Conference on Data Science and Advanced Analytics_ , 2015. * [17] K. Cho, B. Van Merriënboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, and Y. Bengio, “Learning phrase representations using rnn encoder-decoder for statistical machine translation,” _arXiv preprint arXiv:1406.1078_ , 2014. * [18] S. Das, J. Werner, M. Antonakakis, M. Polychronakis, and F. Monrose, “Sok: The challenges, pitfalls, and perils of using hardware performance counters for security,” in _IEEE Symposium on Security and Privacy (S &P)_, 2019. * [19] J. Demme, M. Maycock, J. Schmitz, A. Tang, A. Waksman, S. Sethumadhavan, and S. Stolfo, “On the feasibility of online malware detection with performance counters,” _ACM SIGARCH Computer Architecture News_ , 2013. * [20] J. Devlin, M.-W. Chang, K. Lee, and K. N. Toutanova, “Bert: Pre-training of deep bidirectional transformers for language understanding,” _arXiv preprint arXiv:1810.04805_ , 2018. * [21] M. Du, Z. Chen, C. Liu, R. Oak, and D. Song, “Lifelong anomaly detection through unlearning,” in _ACM Conference on Computer and Communications Security (CCS)_ , 2019. * [22] M. Du, F. Li, G. Zheng, and V. Srikumar, “Deeplog: Anomaly detection and diagnosis from system logs through deep learning,” in _ACM Conference on Computer and Communications Security (CCS)_ , 2017. * [23] S. Garg, K. Kaur, S. Batra, G. S. Aujla, G. Morgan, N. Kumar, A. Y. Zomaya, and R. Ranjan, “En-abc: An ensemble artificial bee colony based anomaly detection scheme for cloud environment,” _Journal of Parallel and Distributed Computing_ , 2020. * [24] D. Gruss, C. Maurice, K. Wagner, and S. Mangard, “Flush+flush: a fast and stealthy cache attack,” in _International Conference on Detection of Intrusions and Malware, and Vulnerability Assessment_ , 2016. * [25] D. Gruss, R. Spreitzer, and S. 
Mangard, “Cache template attacks: Automating attacks on inclusive last-level caches,” in _USENIX Security Symposium_ , 2015. * [26] D. Gullasch, E. Bangerter, and S. Krenn, “Cache games–bringing access-based cache attacks on aes to practice,” in _IEEE Symposium on Security and Privacy (S &P)_, 2011. * [27] S. Guo, Y. Chen, P. Li, Y. Cheng, H. Wang, M. Wu, and Z. Zuo, “Specusym: Speculative symbolic execution for cache timing leak detection,” in _International Conference on Software Engineering_ , 2020. * [28] Z. He, G. Hu, and R. B. Lee, “New models for understanding and reasoning about speculative execution attacks,” in _IEEE International Symposium on High-Performance Computer Architecture (HPCA)_ , 2021. * [29] Z. He and R. B. Lee, “How secure is your cache against side-channel attacks?” in _Annual IEEE/ACM International Symposium on Microarchitecture_ , 2017. * [30] Z. He, A. Raghavan, G. Hu, S. Chai, and R. Lee, “Power-grid controller anomaly detection with enhanced temporal deep learning,” in _IEEE International Conference On Trust, Security And Privacy In Computing (TrustCom)_ , 2019. * [31] Z. He, T. Zhang, and R. Lee, “Sensitive-sample fingerprinting of deep neural networks,” in _IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , 2019. * [32] Z. He, T. Zhang, and R. B. Lee, “Model inversion attacks against collaborative inference,” in _Annual Computer Security Applications Conference (ACSAC)_ , 2019. * [33] ——, “Attacking and protecting data privacy in edge-cloud collaborative inference systems,” _IEEE Internet of Things Journal_ , 2020. * [34] J. L. Henning, “Spec cpu2006 benchmark descriptions,” _ACM SIGARCH Computer Architecture News_ , 2006. * [35] G. E. Hinton and S. Roweis, “Stochastic neighbor embedding,” _Advances in Neural Information Processing Systems (NeurIPS)_ , 2002. * [36] H. Hotelling, “Analysis of a complex of statistical variables into principal components.” _Journal of educational psychology_. * [37] G. Hu, Z. He, and R. B. Lee, “Smartphone impostor detection with behavioral data privacy and minimalist hardware support,” in _TinyML Symposium_ , 2021\. * [38] G. Irazoqui, T. Eisenbarth, and B. Sunar, “S $ a: A shared cache attack that works across cores and defies vm sandboxing–and its application to aes,” in _IEEE Symposium on Security and Privacy (S &P)_, 2015. * [39] M. Kayaalp, N. Abu-Ghazaleh, D. Ponomarev, and A. Jaleel, “A high-resolution side-channel attack on last-level cache,” in _Design Automation Conference (DAC)_ , 2016. * [40] A. Khan, X. Yan, S. Tao, and N. Anerousis, “Workload characterization and prediction in the cloud: A multiple time series approach,” in _IEEE Network Operations and Management Symposium_ , 2012. * [41] V. Kiriansky and C. Waldspurger, “Speculative buffer overflows: Attacks and defenses,” _arXiv preprint arXiv:1807.03757_ , 2018. * [42] P. Kocher, J. Horn, A. Fogh, D. Genkin, D. Gruss, W. Haas, M. Hamburg, M. Lipp, S. Mangard, T. Prescher _et al._ , “Spectre attacks: Exploiting speculative execution,” in _IEEE Symposium on Security and Privacy (S &P)_, 2019. * [43] E. M. Koruyeh, K. N. Khasawneh, C. Song, and N. Abu-Ghazaleh, “Spectre returns! speculation attacks using the return stack buffer,” in _USENIX Workshop on Offensive Technologies_ , 2018. * [44] M. Lipp, M. Schwarz, D. Gruss, T. Prescher, W. Haas, A. Fogh, J. Horn, S. Mangard, P. Kocher, D. Genkin _et al._ , “Meltdown: Reading kernel memory from user space,” in _USENIX Security Symposium_ , 2018. * [45] F. Liu, Y. Yarom, Q. Ge, G. Heiser, and R. B. 
Lee, “Last-level cache side-channel attacks are practical,” in _IEEE Symposium on Security and Privacy (S &P)_, 2015. * [46] F. T. Liu, K. M. Ting, and Z.-H. Zhou, “Isolation forest,” in _IEEE International Conference on Data Mining (ICDM)_ , 2008. * [47] J. Liu, S. Chen, Z. Zhou, and T. Wu, “An anomaly detection algorithm of cloud platform based on self-organizing maps,” _Mathematical Problems in Engineering_ , 2016. * [48] P. Malhotra, L. Vig, G. Shroff, and P. Agarwal, “Long short term memory networks for anomaly detection in time series,” in _European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning_ , 2015. * [49] A. K. Mishra, J. L. Hellerstein, W. Cirne, and C. R. Das, “Towards characterizing cloud backend workloads: insights from google compute clusters,” _ACM SIGMETRICS Performance Evaluation Review_ , 2010. * [50] D. A. Osvik, A. Shamir, and E. Tromer, “Cache attacks and countermeasures: the case of aes,” in _Cryptographers’ Track at the RSA conference_ , 2006. * [51] M. Ozsoy, K. N. Khasawneh, C. Donovick, I. Gorelik, N. Abu-Ghazaleh, and D. Ponomarev, “Hardware-based malware detection using low-level architectural features,” _IEEE Transactions on Computers_ , 2016. * [52] H. S. Pannu, J. Liu, and S. Fu, “Aad: Adaptive anomaly detection system for cloud computing infrastructures,” in _IEEE Symposium on Reliable Distributed Systems_ , 2012. * [53] N. Patel, A. Sasan, and H. Homayoun, “Analyzing hardware based malware detectors,” in _Design Automation Conference (DAC)_ , 2017. * [54] H. Qiu, T. Dong, T. Zhang, J. Lu, G. Memmi, and M. Qiu, “Adversarial attacks against network intrusion detection in iot systems,” _IEEE Internet of Things Journal_ , 2020. * [55] B. Schölkopf, R. C. Williamson, A. J. Smola, J. Shawe-Taylor, and J. C. Platt, “Support vector method for novelty detection,” in _Advances in Neural Information Processing Systems (NeurIPS)_ , 2000. * [56] O. Vallis, J. Hochenbaum, and A. Kejariwal, “A novel technique for long-term anomaly detection in the cloud,” in _USENIX Workshop on Hot Topics in Cloud Computing_ , 2014. * [57] G. Wang, S. Chattopadhyay, A. K. Biswas, T. Mitra, and A. Roychoudhury, “Kleespectre: Detecting information leakage through speculative cache attacks via symbolic execution,” _ACM Transactions on Software Engineering and Methodology_ , 2020. * [58] X. Wang, C.-C. Pan, P. Liu, and S. Zhu, “Sigfree: A signature-free buffer overflow attack blocker,” _IEEE Transactions on Dependable and Secure Computing_ , 2008. * [59] X. Wang and R. Karri, “Detecting kernel control-flow modifying rootkits,” in _Network Science and Cybersecurity_ , 2014. * [60] X. Wang, C. Konstantinou, M. Maniatakos, and R. Karri, “Confirm: Detecting firmware modifications in embedded systems using hardware performance counters,” in _International Conference on Computer-Aided Design (ICCAD)_ , 2015. * [61] O. Weisse, J. Van Bulck, M. Minkin, D. Genkin, B. Kasikci, F. Piessens, M. Silberstein, R. Strackx, T. F. Wenisch, and Y. Yarom, “Foreshadow-ng: Breaking the virtual memory abstraction with transient out-of-order execution,” Tech. Rep., 2018. * [62] Q. Yao, Z. He, H. Han, and S. K. Zhou, “Miss the point: Targeted adversarial attack on multiple landmark detection,” in _International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI)_ , 2020. * [63] Y. Yarom and K. Falkner, “Flush+reload: a high resolution, low noise, l3 cache side-channel attack,” in _USENIX Security Symposium_ , 2014. * [64] C. Yin, Y. Zhu, J. 
Fei, and X. He, “A deep learning approach for intrusion detection using recurrent neural networks,” _IEEE Access_ , 2017.
* [65] T. Zhang, Y. Zhang, and R. B. Lee, “Cloudradar: A real-time side-channel attack detection system in clouds,” in _International Symposium on Research in Attacks, Intrusions, and Defenses (RAID)_ , 2016.
* [66] Y. Zhang, A. Juels, M. K. Reiter, and T. Ristenpart, “Cross-tenant side-channel attacks in paas clouds,” in _ACM Conference on Computer and Communications Security (CCS)_ , 2014.
* [67] B. Zhou, A. Gupta, R. Jahanshahi, M. Egele, and A. Joshi, “Hardware performance counters can detect malware: Myth or fact?” in _Asia Conference on Computer and Communications Security (AsiaCCS)_ , 2018.

TABLE XII: List of all hardware performance counters.

HPC | Description
---|---
Instruction | Number of instructions
Load | Number of memory loads
Store | Number of memory stores
L1D read miss | Number of L1 data cache read misses
L1D write miss | Number of L1 data cache write misses
L1D prefetch miss | Number of L1 data cache prefetch misses
L1I read miss | Number of L1 instruction cache read misses
LLC read access | Number of last level cache read accesses
LLC read miss | Number of last level cache read misses
LLC write access | Number of last level cache write accesses
LLC write miss | Number of last level cache write misses
LLC prefetch access | Number of last level cache prefetch accesses
LLC prefetch miss | Number of last level cache prefetch misses
DTLB read access | Number of data translation lookaside buffer read accesses
DTLB read miss | Number of data translation lookaside buffer read misses
DTLB write access | Number of data translation lookaside buffer write accesses
DTLB write miss | Number of data translation lookaside buffer write misses
ITLB read access | Number of instruction translation lookaside buffer read accesses
ITLB read miss | Number of instruction translation lookaside buffer read misses
BPU read access | Number of branch prediction unit read accesses
BPU read miss | Number of branch prediction unit read misses
Cache node read access | Number of cache node read accesses
Cache node read miss | Number of cache node read misses
Cache node write access | Number of cache node write accesses
Cache node write miss | Number of cache node write misses
Cache node prefetch access | Number of cache node prefetch accesses
Cache node prefetch miss | Number of cache node prefetch misses
Cycles | Number of cycles
Branch instructions | Number of branch instructions
Branch prediction miss | Number of branch prediction misses
Page faults | Number of page faults
Context switch | Number of context switches
Stall during issue | Number of stalled cycles during instruction issue
Stall during retirement | Number of stalled cycles during instruction retirement

TABLE XIII: Top-10 important events for each cloud benchmark. Bold means the event ranks top 10 for all benchmarks.
Rank | ML Training (Pytorch) | Stream Server (FFserver) | Database (Mysql) | Web Server (Nginx) | MapReduce
---|---|---|---|---|---
1 | Instruction | Cycles | Cycles | Stall during issue | Instruction
2 | Load | Stall during issue | Stall during issue | Stall during retirement | Stall during issue
3 | Stall during retirement | Stall during retirement | Stall during retirement | Cycles | Cycles
4 | Store | Instruction | Instruction | Load | Stall during retirement
5 | Stall during issue | Load | Load | DTLB read | Load
6 | DTLB read | BPU read | BPU read | Branch | DTLB read
7 | BPU read | DTLB read | DTLB read | L1I read miss | Branch
8 | DTLB write | Store | Store | Instruction | BPU read
9 | Cycles | DTLB write | DTLB write | BPU read | DTLB write
10 | L1D read miss | L1I read miss | L1I read miss | Context Switch | Store
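As a concrete illustration of the detection pipeline summarized in the conclusion (reconstruction errors of hardware performance counter windows scored against a kernel density estimate of the normal reconstruction error distribution, RED), the following minimal sketch shows one way such a scorer could be wired together. It is illustrative only: the window shape, the Keras-style `model.predict` call, the density floor, and the thresholds are assumptions rather than CloudShield's actual implementation.

```python
import numpy as np
from scipy.stats import gaussian_kde

def reconstruction_errors(model, windows):
    """Mean-squared reconstruction error per HPC window.

    `model` is assumed to be a pre-trained autoencoder with a Keras-style
    predict(); `windows` has shape (n_windows, time_steps, n_counters).
    """
    recon = model.predict(windows)
    return np.mean((windows - recon) ** 2, axis=(1, 2))

def fit_red(errors_from_normal_runs):
    """Kernel density estimate of the normal reconstruction error distribution (RED)."""
    return gaussian_kde(errors_from_normal_runs)

def anomaly_score(normal_red, err):
    """Low density under the normal RED means anomalous; use negative log-density."""
    return -np.log(normal_red([err])[0] + 1e-12)

def label_anomaly(err, normal_red, benign_red, attack_reds,
                  detect_thr=20.0, min_density=1e-6):
    """Detect, then attribute: benign program, a known attack, or zero-day."""
    if anomaly_score(normal_red, err) < detect_thr:
        return "normal"
    densities = {"benign": benign_red([err])[0]}
    densities.update({name: red([err])[0] for name, red in attack_reds.items()})
    best = max(densities, key=densities.get)
    # If neither the benign nor any known-attack RED explains the error, call it zero-day.
    return best if densities[best] > min_density else "zero-day"
```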
JUST Team<EMAIL_ADDRESS>

Affiliations: 1: Tsung-Dao Lee Institute, Shanghai Jiao Tong University, Shanghai 200240, China; 2: Department of Astronomy, School of Physics and Astronomy, Shanghai Jiao Tong University, Shanghai 200240, China; 3: Nanjing Institute of Astronomical Optics & Technology, Chinese Academy of Sciences, Nanjing 210042, China; 4: University of Chinese Academy of Sciences, Nanjing 211135, China; 5: University of Chinese Academy of Sciences, Beijing 101408, China; 6: Lenghu Technology Innovation Industrial Park Management Committee, Lenghu 817400, China; 7: Shanghai Astronomical Observatory, Chinese Academy of Sciences, Shanghai 200030, China; 8: Department of Astronomy, University of Science and Technology of China, Hefei 230026, China; 9: National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100012, China; 10: Purple Mountain Observatory, Chinese Academy of Sciences, Nanjing 210023, China.

# The Jiao Tong University Spectroscopic Telescope Project

JUST Team (1,2): Chengze Liu (2,1), Ying Zu (2,1), Fabo Feng (1,2), Zhaoyu Li (2), Yu Yu (2), Hua Bai (3,4), Xiangqun Cui (3,5), Bozhong Gu (3), Yizhou Gu (1,2), Jiaxin Han (2), Yonghui Hou (3,5), Zhongwen Hu (3,5), Hangxin Ji (3), Yipeng Jing (1,2), Wei Li (6), Zhaoxiang Qi (7), Xianyu Tan (1), Cairang Tian (6), Dehua Yang (3), Xiangyan Yuan (3,4), Chao Zhai (8), Congcong Zhang (7), Jun Zhang (2), Haotong Zhang (9), Pengjie Zhang (1,2), Yong Zhang (9), Yi Zhao (6), Xianzhong Zheng (10), Qingfeng Zhu (8), Xiaohu Yang (1,2)

(Received 2023 month day; accepted 2023 month day)

###### Abstract

The Jiao Tong University Spectroscopic Telescope (JUST) is a 4.4-meter $f/6.0$ segmented-mirror telescope dedicated to spectroscopic observations. The JUST primary mirror is composed of 18 hexagonal segments, each with a diameter of 1.1 m. JUST provides two Nasmyth platforms for placing science instruments. One Nasmyth focus fits a field of view of $10^{\prime}$ and the other has an extended field of view of $1.2^{\circ}$ with correction optics. A tertiary mirror is used to switch between the two Nasmyth foci. JUST will be installed at a site at Lenghu in Qinghai Province, China, and will conduct spectroscopic observations with three types of instruments to explore the dark universe, trace the dynamic universe, and search for exoplanets: (1) a multi-fiber (2000 fibers) medium-resolution spectrometer (R=4000-5000) to spectroscopically map galaxies and large-scale structure; (2) an integral field unit (IFU) array of 500 optical fibers and/or a long-slit spectrograph dedicated to fast follow-ups of transient sources for multi-messenger astronomy; (3) a high-resolution spectrometer (R$\sim$100000) designed to identify Jupiter analogs and Earth-like planets, with the capability to characterize the atmospheres of hot exoplanets.

###### keywords: Astronomical instrumentation(799) — Optical telescopes(1174) — Large-scale structure of the universe(902) — Redshift surveys(1378) — Time domain astronomy(2109) — Exoplanet astronomy (486)

## 1 Introduction

Observing facilities play a fundamental role in advancing our understanding of the universe.
These facilities, including ground-based telescopes, space observatories, and specialized instruments, provide astronomers with the tools needed to gather data from distant celestial objects and phenomena, and to explore the properties, compositions, and behaviors of objects such as stars and galaxies, leading to remarkable discoveries and profound insights into the nature of the universe. Moreover, long-term observations with these facilities enable monitoring of transient events and probe the cosmos across various wavelengths, which is essential for unveiling cosmic mysteries. Observing facilities are indispensable for pushing the boundaries of astronomical knowledge and fostering scientific breakthroughs. The development of powerful observing facilities, whether for general-purpose use or dedicated surveys, has become a critical requirement for astronomers to achieve groundbreaking advancements.

The progress of astronomy hinges on the construction of large telescopes, which are currently evolving to possess large apertures, wide fields of view, and high spatial and spectral resolution. Given the disparity in cost between acquiring images and spectra, there is a notable shortfall in high-quality spectroscopic observational facilities compared with the plentiful availability of image-based observational facilities. This gap highlights the need for further attention and investment in advancing spectroscopic capabilities to complement the existing observational landscape. Astronomical spectroscopy enables the precise measurement of redshift, the identification of specific chemical elements, and the determination of the kinematics of celestial objects. It leads to a deeper understanding of the nature and characteristics of the observed objects. Spectroscopic observations offer a wealth of information that complements and enhances the insights gained from imaging observations.

Figure 1: Bird’s-eye view of Saishiteng Mountain. The largest dome at Position C (at an altitude of 4200 m) houses the WFST, which is dedicated to imaging surveys. JUST will be placed at Position B (at 4322 m). Photo credit: Bin Chen.

To fulfill the scientific needs for spectroscopic observations, as well as owing to the great success of spectroscopic projects such as the Sloan Digital Sky Survey (SDSS, https://sdss.org/; York et al., 2000), multiple new projects are thriving. The Dark Energy Spectroscopic Instrument (DESI, https://www.desi.lbl.gov; DESI Collaboration et al., 2016) is the first stage-IV dark energy survey project, comprising a 4-meter telescope with 5 000 robotic fiber positioners to feed a collection of spectrographs covering the 360-980 nm wavelength range. It has reportedly finished over 50% of its survey before the planned 5 years of run time, demonstrating its high efficiency in observation. Near-future 4-meter-class telescope projects include the WHT Enhanced Area Velocity Explorer (WEAVE; Jin et al., 2023) and the 4-metre Multi-Object Spectroscopic Telescope (4MOST, https://www.4most.eu; de Jong et al., 2019). They will provide the spectroscopic follow-up required for full scientific exploitation of other projects, such as the Gaia, LOFAR and Apertif surveys. The MegaMapper (Schlegel et al., 2019) will be a dedicated cosmology facility with highly efficient redshift measurements on a 6.5 m telescope.
8-meter-class projects include the Subaru Prime Focus Spectrograph (PFS, https://pfs.ipmu.jp; Tamura et al., 2022) project, and the Multi-Object Optical and Near-IR Spectrograph (MOONS, https://vltmoons.org; Cirasuolo et al., 2020). With increasing telescope size, the Maunakea Spectroscopic Explorer (MSE, https://mse.cfht.hawaii.edu; Hill et al., 2018), SpecTel (Ellis & Dawson, 2019) and the Fiber-Optic Broadband Optical Spectrograph (FOBOS, https://fobos.ucolick.org/; Bundy et al., 2019) are 10-meter-class projects. All of these large telescopes will be equipped with instruments carrying thousands to tens of thousands of optical fibers, enabling spectroscopic observations of many objects simultaneously.

In China, optical telescopes currently fall behind world-class standards. However, with improved funding availability and technological capabilities, observatories and universities have initiated the construction of optical telescopes with diameters exceeding 2 meters. This initiative is driven by diverse scientific objectives and aims to facilitate distinctive observational research, enabling universities within China to make substantial progress with medium-sized observing facilities. The Wide Field Survey Telescope (WFST, or “Mocius”, https://wfst.ustc.edu.cn) is one of them and is currently in its commissioning phase (Wang et al., 2023). For spectroscopic observations, there are several telescopes, either proposed or under construction, such as the 4.4-meter Jiao Tong University Spectroscopic Telescope (JUST, https://just.sjtu.edu.cn), the 6.5-meter MUltiplexed Survey Telescope (MUST, https://must.astro.tsinghua.edu.cn), as well as stage II of the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST-II, https://www.lamost.org). Construction has already begun on JUST, and first light observations are expected within three years. This telescope will be equipped with dedicated spectroscopic instruments to explore the dark universe, trace the dynamic universe, and search for exoplanets.

In this paper, we provide basic information on the site, structure, optical system, and science motivations of JUST. The structure of this paper is outlined as follows. We first introduce the site condition in Section 2, followed by the conceptual design of the JUST telescope in Section 3. An overview of the planned instruments and science motivations is presented in Section 4. We summarize the JUST project in Section 5.

## 2 Site and dome

Mauna Kea in Hawaii and certain summits and plateaus in northern Chile are among the best observing sites on the Earth. Over the past two decades, much effort has been dedicated to the search for excellent astronomical sites in China. Recently, the summit of Saishiteng Mountain in Lenghu, located on the Tibetan Plateau, was identified as possessing favorable observing conditions. Site monitoring has shown that the summit of Saishiteng Mountain, situated at an altitude of 4 200 to 4 500 meters, experiences clear nights for approximately 70% of the year and boasts good median seeing of 0.75 arcsec (Deng et al., 2021). The climate in the surrounding area of the site is extremely arid, and the sky background is exceptionally dark due to minimal light pollution. Furthermore, the time zone of the Lenghu site is distinct from that of nearly all observatories worldwide, facilitating complementary time-domain astronomy. The planned installation of the JUST telescope is shown in Fig.
1, at Position B, situated at an altitude of 4322 m on the summit of Saishiteng Mountain, close to Position C where WFST is located. Spectroscopic observations with JUST will complement imaging observations made with WFST, providing essential photometric and spectroscopic data for advancing research across various domains of astronomy.

Fig. 2 illustrates the design concept of the telescope dome for JUST. It incorporates a classical semi-spherical dome, featuring a shutter that can be opened to allow the telescope to observe. The dome will also integrate ventilation systems to regulate temperature and air circulation, improving dome seeing. Additionally, it will include lighting and other equipment to support telescope operation and maintenance. The construction of the site infrastructure and telescope dome is scheduled to commence in 2024.

Figure 2: The conceptual design of the dome for JUST.

## 3 JUST main part conceptual design

### 3.1 General parameters

The telescope project encompasses three main functional subsystems: telescope optics, support and structure, and telescope control. The telescope optics subsystem comprises optical mirrors and mirror support with active optics. The support and structure subsystem includes a tracking mount and telescope tube. The control subsystem incorporates the telescope control subsystem (TCS), observation control subsystem, and active optics control subsystem. A lightweight telescope design is achieved through the selection of a horizontal tracking mount and a truss-type telescope tube structure. The overall conceptual view of the telescope is illustrated in Fig. 3.

Figure 3: The conceptual design of the structure of JUST.

JUST has two Nasmyth foci with a focal ratio of $f/6.0$. One Nasmyth focus has a field of view (FoV) of $10^{\prime}$ and the other Nasmyth focus has an extended FoV of $1.2^{\circ}$ with correction optics, as shown in Fig. 4. The two Nasmyth foci can be switched by rotating the tertiary mirror (M3). Nasmyth focus 1 is a purely reflecting system, in which high-resolution wide-wavelength instruments or infrared instruments can be mounted. The image quality is defined as the full width at half maximum (FWHM) of the image profile. The intended image quality of Nasmyth focus 1 is $0.09^{\prime\prime}$; allowing for manufacturing, alignment, and control errors, the delivered value is expected to be $0.35^{\prime\prime}$. Nasmyth focus 2 is used for the Multi-Object Fiber Spectroscopic Survey. The diameter of the focal plane is 570 mm, and about 2 000 optical fibers can be accommodated. At a zenith distance of $60^{\circ}$, an observation site altitude of 4 200 m, and a wavelength range of $0.35-1.3$ $\mu$m, the atmospheric dispersion is $3.1^{\prime\prime}$. In addition, astigmatism rapidly increases with the square of the FoV, so it is essential to include a corrector for widening the FoV and compensating for atmospheric dispersion. The corrector consists of four silica lenses, two of which are the atmospheric dispersion correctors (ADCs). The target image quality of Nasmyth focus 2 is $0.51^{\prime\prime}$. With errors, the delivered image quality will be $0.7^{\prime\prime}$, which is close to the value of the median seeing at the Lenghu site. As a reference, we list in Table 1 the main optical parameters of JUST.

Table 1: Optical parameters.
Parameters | Values
---|---
Primary mirror diameter | 4.4 m (segmented)
M1 focal length | $6.4$ m
System $F$ ratio | $6.0$
Focal scale of Focus 1 & 2 | $7.8$ arcsec mm$^{-1}$
Focus 1 | Nasmyth focus (high precision)
Field of view 1 | $10$ arcmin
Wavelength range 1 | Purely reflecting
Focus 2 | Nasmyth focus (large field of view)
Field of view 2 | $1.2$ degree
Wavelength range 2 | $0.35-1.3\,\mu$m

Figure 4: Optical design of JUST. Left: Nasmyth focus 1 with field of view of $10^{\prime}$; Right: Nasmyth focus 2 with an extended field of view of $1.2^{\circ}$.

### 3.2 The mirrors

The primary mirror (M1) is composed of 18 hexagonal segments, with an effective aperture of 4.4 m. The detailed configuration is depicted in Fig. 5. Each segment is equipped with its own support system to maintain the correct optical surface. The axial support system utilizes an 18-point whiffletree support for the segments, while the radial support employs a central flexible support. This support system serves the following functions:

* • Accurate installation of the segments onto the main truss;
* • Support for the segments to meet the requirements for the mirror surface shape;
* • Active optical technology to control the closed-loop segmented system of the primary mirror, mitigating the effects of temperature and gravity, and achieving co-focusing/co-phasing of the segments.

Figure 5: Configuration of the segments of M1, showing 18 hexagonal segments. The central area of the primary mirror is vacant, where M3 will be installed.

The secondary mirror (M2) support module is designed to preserve its original machining accuracy and stabilize its spatial position. The support system for the secondary mirror includes a bottom support, lateral support, and centering mechanism. The bottom support features a suspended whiffletree support structure, while the lateral support uses a lever-balanced weight support structure. The centering mechanism employs a bi-directional membrane structure in both radial and axial directions. The tertiary mirror (M3) uses a whiffletree floating support structure, with the centering mechanism also adopting a bi-directional membrane structure to serve the radial and axial directions, in addition to functioning as the lateral support.

### 3.3 Active optics

M1 is designed to use active optics technology for real-time closed-loop control that co-phases the segments into a single mirror surface. The active optical system primarily comprises core devices such as the segment surface support, the displacement actuators, and an active optical wavefront sensor.

* • Segment surface support system: Ensures that the support surface meets and exceeds the technical requirements for the segment surface shape.
* • Displacement actuator: Utilizes nano-electromechanical displacement actuators controlled in parallel by multiple active optical intelligent controllers to achieve nanometer-level displacement resolution and millimeter-level displacement range output accuracy and stroke under the full load of the segments.
* • Active optical wavefront sensor: Uses a Shack-Hartmann wavefront sensor based on physical optics. It measures the surface shape and imaging quality of individual segments and of the segmented primary mirror as a whole. It uses the central star as a target source, providing precise feedback to continuously drive the displacement actuators for segmented mirror calibration and maintenance (a schematic sketch of this closed loop is given after this list).
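The control cycle implied by the three components above (wavefront sensing, reconstruction, actuation) can be summarized in a short schematic sketch. This is purely illustrative: the `wavefront_sensor` and `actuators` interfaces, the loop gain, and the convergence tolerance are assumptions, not the actual JUST active optics software.

```python
import numpy as np

def active_optics_loop(wavefront_sensor, actuators, influence_matrix,
                       gain=0.3, tol_nm=20.0, max_iter=100):
    """Schematic closed-loop co-focusing/co-phasing of the 18 segments.

    influence_matrix maps actuator commands (piston/tip/tilt per segment)
    to Shack-Hartmann slope measurements; its pseudo-inverse is the
    reconstructor that converts measured slopes back into corrections.
    """
    reconstructor = np.linalg.pinv(influence_matrix)
    for _ in range(max_iter):
        slopes = wavefront_sensor.measure()          # spot displacements from the central star
        correction = -gain * reconstructor @ slopes  # actuator commands, in nanometres
        actuators.move_relative(correction)          # nanometre-resolution displacement actuators
        if np.max(np.abs(correction)) < tol_nm:      # segments effectively co-phased
            break
```

In practice the gain and tolerance would be tuned against the wind, temperature, and gravity-induced drifts that the closed loop is meant to absorb.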
Upon implementation of active optics technology, the telescope will achieve co-focusing of the primary mirror and obtain image quality close to the seeing limit of the telescope’s location, in conjunction with the optical system design.

Table 2: Key parameters of three types of spectrographs.

Parameters | Fibers | Resolution
---|---|---
multi-object spectrograph | 2 000 | $4~{}000-5~{}000$
IFU array | 500 | $4~{}000-5~{}000$
long-slit spectrograph | N/A | $4~{}000-5~{}000$
high-resolution spectrograph | 1-3 | $\sim 100~{}000$

## 4 Science cases and scientific instruments

JUST has two Nasmyth platforms, on which three types of spectrographs will be installed. The basic parameters of these spectrographs are listed in Table 2.

* • Galaxies and large-scale structures: JUST will be equipped with multiple fiber positioners and medium-resolution spectrometers to conduct spectral surveys of a large number of galaxies.
* • Multi-messenger astronomy: JUST will be equipped with hundreds of optical fibers to form an integral field unit (IFU) array and/or a long-slit spectrograph for follow-up observation of a large number of transient sources.
* • Exoplanet detection and characterization: JUST will use advanced high-resolution spectrometers to detect cold giant planets and Earth-like terrestrial planets, and to provide detailed atmospheric characterization for hot exoplanets.

### 4.1 Exploring the dark Universe

More than 95% of the Universe remains dark to humanity, whether in the form of dark matter or dark energy (Planck Collaboration et al., 2020). The first step toward understanding the dark Universe requires the accurate measurement of the growth of cosmic structures on scales ranging from a few kiloparsecs to hundreds of megaparsecs, with highly-multiplexed spectroscopic surveys of galaxies (Weinberg et al., 2013). To complement the current stage-IV surveys that focus on the galaxy distribution at linear scales, JUST will dedicate its multi-object spectroscopic (MOS) capability to the mapping of structures from quasi-linear to highly non-linear scales, centered on massive galaxy clusters at $z{<}0.6$. JUST aims for complete spectroscopic coverage of the galaxies at $r<20$ mag in the cosmic web surrounding clusters below $z<0.6$, complementing DESI spectra. The JUST spectroscopic cluster survey (SCS) will improve cluster cosmology as one of the most sensitive probes of cosmic growth, through the mitigation of systematic uncertainties in the cluster redshifts, satellite membership assignment, and various projection effects associated with photometric cluster finders (Erickson et al., 2011; Noh & Cohn, 2012; Zu et al., 2017; Costanzi et al., 2019). The spectroscopic cluster catalogue will provide stringent constraints on key cosmological parameters, including the matter density, the amplitude of matter clustering, the equation of state of dark energy, and the sum of neutrino masses (Sartoris et al., 2016). The combination of the redshift-space distortion (RSD) of infalling galaxies (Lam et al., 2013; Zu & Weinberg, 2013; Hamabata et al., 2019; Shirasaki et al., 2021) and the weak lensing of background sources (Johnston et al., 2007; Simet et al., 2017; Wang et al., 2022) by galaxy clusters will enable stringent tests of theories of cosmic acceleration and distinguish between dark energy and modified gravity on inter-cluster scales (Zu et al., 2014; Koyama, 2016; Joyce et al., 2016; Baker et al., 2021).
Meanwhile, JUST-SCS will fully sample cluster galaxies in both the velocity phase space (cluster-centric radius vs. line-of-sight velocity) and the color-magnitude diagram, from the infall to the splashback regions, and into the virialized cores of clusters (Fillmore & Goldreich, 1984; Bertschinger, 1985; Kravtsov & Borgani, 2012; Diemer & Kravtsov, 2014; More et al., 2016; Walker et al., 2019). Such spectroscopic coverage of the cosmic web will provide a comprehensive picture of galaxy formation in different environments surrounding galaxy clusters (Kauffmann et al., 2004).

In recent decades, Chinese astronomers have made significant contributions to revealing the nature of the dark universe, such as measuring and quantifying large-scale structure, elucidating the galaxy-halo connection, and constraining cosmological parameters. Among these efforts, representative work includes establishing the halo occupation distribution model (Jing et al., 1998) based on the Las Campanas Redshift Survey (Shectman et al., 1996), establishing the conditional luminosity function model (Yang et al., 2003) based on the 2-degree Field Galaxy Redshift Survey (2dFGRS) (Colless et al., 2001), establishing the halo-based group finder (Yang et al., 2005, 2007) based on 2dFGRS and the Sloan Digital Sky Survey (York et al., 2000), and constraining dark energy models (Zhao et al., 2017) based on the Baryon Oscillation Spectroscopic Survey (Alam et al., 2015). Most of these achievements were made based on either public data releases or through international collaborations of large galaxy redshift surveys. With JUST-SCS, we will have a greater opportunity to explore the dark universe with our own observational data set.

To maximize the science return of the MOS survey on cluster cosmology and galaxy evolution, JUST-SCS will include three layers, as summarized by Figure 6. We will discuss each of the three in the subsections below.

Figure 6: Illustration of the JUST-SCS program. Panel (a): Distribution of the photometric galaxies within a simulated lightcone. Galaxies from the same cluster are dispersed over a large line-of-sight distance due to photo-z uncertainties. Panel (b): Distribution of the spectroscopic clusters (red circles) that will be observed by the JUST cluster cosmology survey within the same lightcone. Panel (c): A cluster at $z=0.1$ within the JUST field of view (white circle) targeted by the JUST cluster galaxy evolution survey. The background image is Abell 1689. Panel (d): The cosmic web structure centered on a cluster at $z{\simeq}0.3$ targeted by the JUST cluster infall survey. The background image is from the Millennium Simulation.

#### 4.1.1 JUST Cluster Cosmology Survey

The upcoming China Space Survey Telescope (CSST; Miao et al., 2023) will detect approximately 300,000 photometric halo-based cluster candidates with halo mass above $10^{14}\,M_{\odot}/h$ up to $z<1.5$ (Yang et al., 2021), serving as the basis of target selection for JUST-SCS. In particular, the JUST cluster cosmology survey will target ${\simeq}50,000$ clusters over $10,000$ deg$^{2}$ at $z{<}0.6$, producing an unprecedented spectroscopic cluster sample for cosmological analysis. For each cluster, JUST will be used to obtain spectra for the brightest cluster galaxy (BCG) and the bright member galaxy candidates down to $r{=}20$, incorporating, but not re-observing, the spectra from the full DESI survey.
This program will not only provide secure spectroscopic redshifts for a cosmologically significant volume of individual clusters, but also improve the centering of clusters both perpendicular to and along the line of sight (Sohn et al., 2021). Such an accurate localization of individual clusters in three dimensions enables cosmological analyses using massive dark matter haloes, instead of galaxies, as spectroscopic tracers of the large-scale structure. With spectroscopic redshifts for up to 20 member galaxy candidates, JUST will be able to disentangle the chance alignment of structures along the line of sight, and mitigate interlopers from any correlated structures on the velocity phase diagram. The spectroscopically confirmed satellite galaxies will enable mass estimates of individual haloes through the velocity dispersion (Evrard et al., 2008; Wu et al., 2013; Ntampaka et al., 2015) and the caustic boundary (Diaferio, 1999; Gifford et al., 2013; Rines et al., 2013), improving the calibration of the cluster mass-observable relation beyond the optical richness (Rozo et al., 2009). Meanwhile, JUST will probe the properties of the intracluster medium, particularly the circumgalactic medium of cluster galaxies, by measuring metal absorption lines recorded in the DESI spectra of background quasars in the cluster fields (Zhu & Ménard, 2013; Lee et al., 2021; Zu, 2021; Anand et al., 2022; Napolitano et al., 2023).

#### 4.1.2 JUST Cluster Infall Survey

In the intermediate redshift range of $0.1{<}z{<}0.4$, JUST-SCS aims to achieve a complete spectroscopic coverage of galaxies within an approximately 20 Mpc/h radius surrounding each cluster down to r = 20, on top of the existing spectra from the DESI Bright Galaxy Survey (BGS; Hahn et al., 2023). In addition, JUST will spectroscopically cover a large number of non-cluster fields to the same depth, as the control sample of field galaxies for the cluster-galaxy cross-correlation measurements and the galaxy evolution study. The target selection of the non-cluster fields will be optimized based on the signal-to-noise forecast of the cluster RSD analysis. The JUST cluster infall survey will push the $E_{G}$ method (Zhang et al., 2007) from the linear regime to the infall region around clusters, where the potential imprint of modified gravity remains unscreened and the signal-to-noise of the RSD and weak lensing measurements is high. In particular, JUST will accurately measure the cluster-galaxy cross-correlation function in redshift space on projected scales below 20 Mpc/h, allowing high-fidelity reconstruction of the galaxy infall kinematics (GIK) as a function of distance to the cluster center. The GIK reconstruction provides a unique probe of the average dynamical mass profile of clusters in the infall region, which will enable stringent tests of the theories of cosmic acceleration when compared with the cluster mass profile measured from weak lensing (Zu et al., 2014). In addition, the dense spectroscopic sampling of the infall region allows individual measurements of the cluster dynamical mass using the caustics technique. One of the primary systematics in cluster cosmology is the projection effect due to the 2D aperture adopted by photometric cluster catalogues, leading to the correlation between cluster richness and large-scale overdensity, hence the bias in the large-scale weak lensing signals of clusters (McClintock et al., 2019; Sunayama, 2023; Salcedo et al., 2023).
The JUST cluster infall survey will mitigate this projection effect by adopting a 3D aperture in the velocity phase space for measuring cluster mass observables. Meanwhile, this program will provide a panorama of the star formation, chemical enrichment, and dynamical evolution of galaxies across the cosmic web. Spectral stacking at different cosmic web environments will allow a robust reconstruction of the average histories of star formation and chemical evolution, as galaxies are funneled through the filaments into clusters (Andrews & Martini, 2013; Lin & Zu, 2023). By comparing the galaxy population surrounding clusters with those observed in the non-cluster fields, JUST will provide the key observational evidence on the concept of “nature versus nurture” in galaxy formation. #### 4.1.3 JUST Cluster Galaxy Evolution Survey In the nearby universe below $z{<}0.1$, JUST will obtain spectra for galaxies within the virial radius of each SDSS galaxy group (Yang et al., 2007) above $10^{13}\,M_{\odot}/h$ down to a stellar mass of ${\sim}10^{8}\,M_{\odot}$. Focusing on the faint end of the conditional luminosity function of groups (Lan et al., 2016; Golden-Marx et al., 2023), the JUST cluster galaxy evolution survey will explore the star-forming histories of dwarf galaxies inside the group and cluster-size haloes, and ascertain the existence of a characteristic stellar mass of quenching among the satellites (Meng et al., 2023). With the accurate measurement of the group/cluster masses, JUST will provide strong constraints on the stellar-to-halo mass relation of the dwarf satellites via abundance matching and satellite weak lensing (Li et al., 2014; Niemiec et al., 2017; Sifón et al., 2018; Dvornik et al., 2020; Danieli et al., 2023). The JUST cluster galaxy evolution survey will reveal the co-evolution between cluster galaxies and dark matter haloes, by connecting the spectroscopic observations to the individual halo assembly histories predicted by ELUCID, a state-of-the-art constrained simulation that accurately reconstructed the initial density perturbations within the SDSS volume below $z{=}0.1$ (Wang et al., 2014, 2016). Another unique aspect of this program is the exciting synergy with the FAST All Sky HI Survey (FASHI) (Zhang et al., 2023), which will provide the largest extragalactic HI catalogue at $z<0.1$ using the Five- hundred-meter Aperture Spherical radio Telescope (FAST; Nan et al., 2011). Meanwhile, JUST will reserve a fixed set of fiber assignment for a sample of low-surface brightness targets(e.g., ultra-compact dwarfs) to allow spectral coverage down to ${\simeq}23$ magnitudes per arcsec2 in the r-band (Liu et al., 2020; Wang et al., 2023). For extended sources of interest(e.g., including the outskirts of BCGs and bar galaxies), MOS-mode observations can be supplemented by follow-up observations with the IFU instrument (Gu et al., 2020; Chen et al., 2022). Taking advantage of the synergy with ELUCID and FAST, the versatility of JUST will present an exquisite view of cluster galaxy evolution in the local universe. ### 4.2 Tracing dynamical universe The Universe is not static. It is in motion and constantly changing. Time domain astronomy, which focuses on dynamic astronomical events, is a promising method to study this in greater detail. In the 2020 NASA decadal survey for astronomy and astrophysics, it is considered an important research frontier in astronomy (National Academies of Sciences, Engineering, and Medicine, 2021). 
Rapid follow-up observations of unexpected events are crucial in the era of multi-messenger astronomy, allowing astronomers to combine messengers such as neutrinos, electromagnetic waves, and gravitational wave signals, which is of great significance for understanding important high-energy astrophysical processes such as black hole and neutron star mergers. The main targets of time-domain astronomy are sporadic events (such as supernova explosions and tidal disruption events), and the follow-up spectroscopic observation of these events can help to understand the specific physical processes in these transient sources.

There are currently dozens of time-domain astronomical survey projects, such as the Catalina Survey, PanSTARRS, iPTF, ASASSN, ATLAS and ZTF. In the past decade, the number of transient sources discovered has increased tenfold (Gal-Yam et al., 2013). The first gravitational wave electromagnetic counterpart was discovered in 2017 (Abbott et al., 2017) and confirmed to be a kilonova (Coulter et al. 2017). At present, the number of supernovae discovered is increasing year by year, exceeding one thousand per year. Based on large sample studies, new types of supernovae and explosive physical processes have been discovered. At the same time, new processes of active galactic nucleus explosions and tidal disruption events are also being continuously discovered. Time-domain astronomy has evidently become one of the fastest developing frontier astrophysical research fields. The study of time-domain astronomy can answer the following important questions: What is the explosive process of the evolution of massive stars to their final stages? What are the precursor stars of Type Ia supernovae? How did they erupt? Why does the universe accelerate its expansion? What determines the mass, spin, and radius of a dense star? How do supermassive black holes accrete and grow? Although astronomers have made some progress in addressing these issues, they are still far from fully understanding the physical reasons behind these phenomena.

In the future, surveys like LSST, CSST and WFST will obtain larger transient source samples. It is expected that hundreds or thousands of supernovae and other explosive phenomena will be discovered every night. These future surveys will significantly expand the redshift coverage of transient sources and expand the observation wavelength ranges. Space telescopes such as the Einstein Probe (EP), Swift, and the Wide-field Infrared Survey Explorer (WISE) will observe the transient sources in the X-ray, ultraviolet, and infrared bands, respectively. It can be expected that these larger samples will bring higher statistical significance, reveal systematic differences among different types of transient sources, and uncover extreme cases in each category. For example, in the past decade, the increasing number of transient source events has spawned research on the relationship between Type Ia supernovae and the star formation rate in their host galaxies (Jones et al., 2018), the discovery of Type II supernovae that lasted for a year (Arcavi et al., 2017), as well as a new type of thermonuclear explosion supernova (SNe Iax; Foley et al. 2013; Jha 2017).

The photometric detection of supernovae or other transient sources is only the first step. Only by completing the second step of spectral observation can the physical origin of these transient sources be clearly explained. According to Blagorodnova et al.
(2018), compared to photometric observations, the spectroscopic observation of transient sources is still insufficient. With the development of more photometric surveys, this difference will only become more severe in the future. JUST, with hundreds of optical fibers, can effectively carry out spectroscopic observations of a large number of transient sources by assigning each target with a fiber. It will also provide information on the two-dimensional kinematics and chemical properties of the host galaxy of the transient source with the fibers forming an IFU array, providing first-hand data for studying its triggering environment mechanism. The transient sources may be induced from many different high-energy phenomena. Among them, events such as gamma-ray bursts, supernovae, and tidal disruption events are generated by cataclysmic processes. AGN flares, X-ray binary bursts, and rapid radio bursts involve periodic and intense physical processes near black holes or compact objects with strong magnetic field. The study of these phenomena not only reveals the specific physical mechanisms, but also helps to test basic theories such as relativity under extreme conditions. JUST will primarily focus on follow-up spectral observations of various transient sources, which is crucial for revealing the driving mechanisms of transient sources. #### 4.2.1 Supernova identification and classification Important for cosmological research, Type Ia supernovae can serve as standard candles for cosmological distance determination, ultimately leading to the discovery of accelerated expansion of the Universe. High redshift supernovae are mainly discovered through photometric methods, and subsequent spectral analysis helps to distinguish different types of supernovae. On one hand, distinguishing the different types of supernovae can reduce the impact of other types of supernovae on the distance measurement of high redshift galaxies, improving the accuracy of galaxy distance measurement, to better constrain on the accelerated expansion of the universe. On the other hand, analyzing supernova subclasses can help in understanding the basic parameters of precursor stars, the physical processes of explosions, and the interaction between the outflow material and the interstellar medium. JUST is capable of rapid response and subsequent spectral observations of supernovae at moderate redshifts($z\sim 0.1-0.3$). Within this redshift range, the magnitude of Type Ia supernovae ranges from 18 to 22 magnitudes. The aperture of this telescope is large enough to accomplish this, and the observation conditions at its location(Lenghu) are excellent, which can allow the recording of high signal-to-noise ratio spectra of these sources (Deng et al., 2021). This will significantly increase the number of supernova observations in the medium redshift range and may potentially discover new supernova types. #### 4.2.2 Gravitational wave electromagnetic counterpart properties The discovery of gravitational wave GW150914 was a milestone event in gravitational wave astronomy, which confirmed the existence of a black hole merger for the first time (Abbott et al., 2016). However, the electromagnetic wave counterpart of the gravitational event was not discovered until 2017, when global synchronous observations of GW170817 confirmed its electromagnetic counterpart for the first time as a binary neutron star merger event (Abbott et al., 2017a, b). Within minutes to hours, Chile’s Swope telescope confirmed an optical flare event in NGC 4993 galaxy. 
In the following weeks, observatories around the world conducted follow-up observations of the event in different wavelengths, providing a panoramic view of the physical process of the binary neutron star merger event (Cho, 2017). The visual magnitude of the optical counterpart of this binary neutron star merger event varies between 17.5 and 23 magnitudes, and JUST can also perform spectral observations of this source and others like it. In the future, more gravitational wave events will be detected. Timely follow-up spectroscopic observation of the source is very important to provide additional information(such as chemical abundance, redshift, and kinematics) to reveal the physical properties of gravitational wave sources. #### 4.2.3 The physical process of tidal disruption events If a star is too close to a supermassive black hole, it will be disrupted by tidal forces, causing about half of the material to be accreted, resulting in flares at optical, infrared, ultraviolet, X-ray, and other wavelengths. This is known as a tidal disruption event(TDE), which was theoretically proposed in 1970s (Hills, 1975; Lidskii & Ozernoi, 1979; Rees, 1988; Phinney, 1989; Evans & Kochanek, 1989; Ulmer, 1999) and observationally confirmed in 1990s (Bade et al., 1996; Grupe et al., 1999; Komossa & Greiner, 1999; Greiner et al., 2000). It has become one of the most important targets in time-domain astronomy. With the advancement of various photometric surveys(such as South Sky LSST and North Sky WFST), a large number of TDEs will be discovered. For example, WFST in China expects to discover tens to hundreds of TDEs annually and to obtain complete light curves, including the early brightening phase. TDEs are one of the main targets of the Einstein Probe X-ray telescope in China. TDE detection is an important method to observe supermassive black holes(including quiescent ones) and provides information on black hole mass and spin, accretion disk physics, strong field gravity, and black hole environment(gas, dust environment, and stellar properties). TDEs are also useful to identify intermediate mass black holes, with the potential to resolve the mass gap between stellar mass black holes and supermassive black holes, completing the evolutionary landscape of black holes. JUST can efficiently perform follow-up spectroscopic observations of TDEs detected by WFST and EP. Its 4.4-meter aperture, fast pointing adjustment, same observation location, and medium resolution spectrograph make it perfectly compatible with WFST(a 2.5-meter telescope) to carry out joint measurements of TDEs. The typical brightness of TDEs is $\sim 20-23$ mag(with a redshift range of $0.3-1$), and the light curve variation period is on the order of months, allowing a considerable success rate in obtaining TDE spectra with redshifts below 1. The acquisition of TDE spectra can provide important information such as accretion disk wind properties, stellar/accretion and disk chemical composition (Dai et al., 2018; Parkinson et al., 2020). Combined with the light profile curves of other photometric surveys, it will significantly improve the understanding of the physical mechanisms of TDEs, as well as the strong gravitational field properties. JUST can also provide key spectroscopic evidence for the tidal disruption of white dwarfs in intermediate mass black holes that have been discovered. 
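The reason white-dwarf disruptions single out intermediate-mass black holes can be seen from the standard order-of-magnitude tidal radius, $r_{t}\approx R_{*}(M_{\rm BH}/M_{*})^{1/3}$: a disruption is only observable if $r_{t}$ lies outside the event horizon. The short sketch below works through this comparison; the stellar parameters and the Schwarzschild-radius approximation are standard textbook numbers used purely for illustration.

```python
R_SUN_KM = 6.957e5          # solar radius in km
RS_KM_PER_MSUN = 2.95       # Schwarzschild radius of 1 Msun, in km

def tidal_radius_km(m_bh_msun, m_star_msun, r_star_km):
    """Order-of-magnitude tidal disruption radius r_t ~ R_* (M_BH / M_*)**(1/3)."""
    return r_star_km * (m_bh_msun / m_star_msun) ** (1.0 / 3.0)

def disruption_is_visible(m_bh_msun, m_star_msun, r_star_km):
    """The flare is observable only if the star is torn apart outside the horizon."""
    return tidal_radius_km(m_bh_msun, m_star_msun, r_star_km) > RS_KM_PER_MSUN * m_bh_msun

# A Sun-like star is visibly disrupted by a 1e6 Msun black hole (r_t >> r_s),
# but a compact white dwarf (~0.6 Msun, ~0.01 R_sun) falls inside the horizon
# of such a black hole; only intermediate-mass black holes (roughly 1e4-1e5 Msun)
# can produce an observable white-dwarf TDE.
print(disruption_is_visible(1e6, 1.0, R_SUN_KM))          # True
print(disruption_is_visible(1e6, 0.6, 0.01 * R_SUN_KM))   # False
print(disruption_is_visible(1e5, 0.6, 0.01 * R_SUN_KM))   # True
```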
The IFU observation mode is expected to obtain spectroscopic information on host galaxies, measuring their redshift, dispersion velocity, and chemical composition, to provide further observational constraints on the coevolution of galaxies and supermassive black holes. #### 4.2.4 Long term monitoring of active galactic nuclei(reverberation mapping) JUST can also monitor the long-term spectral variability of active galactic nuclei(AGN). A considerable number of AGN with redshift less than 1 have $r$-band magnitudes brighter than 22 mag, suitable for future observation with JUST. By analyzing the time delay between the variability of the emission lines and the continuum, the reverberation mapping method can be used to analyze the structural characteristics of the broad line region(BLR) near the black hole, and to estimate the mass of the black hole. In addition, by observing the post spectral variability of some sudden flare phenomena in AGN, we can understand the physical reasons behind the changes in the continuum, broad line structure, and kinematics of AGN with the variation of the accretion rate, to better understand the physical processes of accretion by supermassive black holes. ### 4.3 Detection and characterization of exoplanets The third category of science motivations for JUST is exoplanet detection and characterization. With its high-resolution spectrometer, JUST will enable the discovery of a substantial number of cold giant planets by employing a combination of radial velocity(RV) and astrometric analyses. In its upgraded phase, JUST will feature an exceptionally high-precision spectrograph designed for detecting Earth-like planets. Leveraging these advanced capabilities, JUST will further enable the characterization of the atmospheres of hot exoplanets, contributing valuable insights into their formation and evolution. #### 4.3.1 Detection of cold giants The planets in our Solar System and most of the known exoplanets are thought to form in a bottom-up fashion through collisions of dust, pebbles, and planetesimals. This so-called “core accretion”(CA) mechanism is able to form Jupiter-like planets through the processes of core formation, envelope formation and contraction. However, this formation channel is probably not efficient to form substellar companions on wider orbits before the dispersion of a protoplanetary disk in $\sim$10 Myr (Kratter & Lodato, 2016). These objects are more likely to form like stars in a top-down fashion through the so-called “gravitational instability”(GI) mechanism (Boss, 1997). However, due to the flexibility and ambiguity of the features of substellar companions predicted by CA and GI, it is challenging to determine which formation channel is responsible for specific giant companions such as the four directly imaged giant planets around HR 8799 (Marois et al., 2008). Hence, a statistically significant sample of giant planets on wide orbits(or cold giants) would be essential to statistically distinguish between GI and CA and draw a boundary between these two formation channels. 
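For orientation on the radial-velocity signals involved, the standard semi-amplitude approximation for $m_{p}\sin i\ll M_{*}$ on a near-circular orbit, $K\approx 28.4\,{\rm m\,s^{-1}}\,(P/1\,{\rm yr})^{-1/3}(m_{p}\sin i/M_{\rm Jup})(M_{*}/M_{\odot})^{-2/3}$, already explains the precision requirements quoted below. The numerical constant is the commonly used approximation and should be treated as indicative; the sketch is illustrative rather than part of the JUST pipeline.

```python
def rv_semi_amplitude(period_yr, mp_mjup, mstar_msun=1.0, sin_i=1.0, ecc=0.0):
    """Stellar reflex RV semi-amplitude in m/s (m_p << M_*, standard approximation)."""
    return (28.4 * sin_i * mp_mjup
            * mstar_msun ** (-2.0 / 3.0)
            * period_yr ** (-1.0 / 3.0)
            / (1.0 - ecc ** 2) ** 0.5)

# A Jupiter analog (P ~ 11.9 yr) induces ~12 m/s, within reach of a ~1 m/s spectrograph;
# an Earth twin (P = 1 yr, 1 M_Earth ~ 1/318 M_Jup) induces only ~0.09 m/s,
# which is why sub-0.1 m/s precision is the goal for Earth-twin searches.
print(round(rv_semi_amplitude(11.9, 1.0), 1))          # ~12.4
print(round(rv_semi_amplitude(1.0, 1.0 / 318.0), 3))   # ~0.089
```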
Thanks to the high precision astrometry catalogs released by Gaia (Gaia Collaboration et al., 2016, 2018, 2021, 2023) and the long baseline formed by Gaia and its precursor, Hipparcos (Perryman et al., 1997; van Leeuwen, 2007), many substellar companions detected by the radial velocity method have been confirmed and their absolute masses determined by combined analyses of radial velocity (RV) and astrometry data (Snellen & Brown, 2018; Brandt et al., 2019; Kervella et al., 2022; Feng et al., 2022). However, these detections are limited to super-Jupiters or more massive companions due to the limited precision and time span of the current Gaia data and the limited number of stars with high precision RV data. While the precision and time span of Gaia data will be significantly improved in Gaia DR4, it is hard to significantly increase the current sample of stars with high precision RVs because of the limited number of high resolution spectrographs and the low efficiency of current high precision RV surveys.

To facilitate the detection of a large number of cold giants with the combined RV and astrometry method, JUST will be equipped with the High Resolution Spectrograph (HRS), which can measure RVs with a precision of about 1 m s$^{-1}$. HRS will be a fiber-fed, white-pupil spectrograph with a design resolution of R=60,000-80,000 and a wavelength coverage of 380-760 nm. The instrument design will be based on the successful high resolution spectrograph on LAMOST (Zhang et al., 2019) and HARPS-N (Cosentino et al., 2012) on the TNG telescope. In order to obtain precision radial velocities (PRV), HRS will be environmentally stabilized in a vacuum enclosure and will provide, via two optical fibers, simultaneous measurement of the science source and a spectral calibration source. Like other PRV instruments, the HRS will include three main subsystems: 1) a front-end module to correct for atmospheric dispersion, reimage the telescope beam onto the science fiber, and stabilize the image with fast tip-tilt corrections; 2) a calibration unit to enable the injection of different light sources; and 3) a spectrograph that is vibrationally and thermally isolated from the room. To ensure optimal optical performance and superior angular resolution, HRS will be integrated with the first Nasmyth focus of JUST.

#### 4.3.2 Detection of Earth twins

One holy grail of exoplanetology is to find the most Earth-like planets. These so-called Earth twins are Earth-sized planets located in the habitable zones of Sun-like stars (Kasting et al., 1993). These temperate worlds can sustain liquid water on their surface and probably also have other habitable conditions such as plate tectonics, magnetic fields, and stable orbits. The Earth twins are perfect targets for future missions such as LUVOIR, HabEx (The LUVOIR Team, 2019; Gaudi et al., 2020) and the Habitable Worlds Observatory (HWO; Mamajek & Stapelfeldt 2023). However, it is challenging to detect Earth twins due to limited instrumental precision and stellar activity. The measurement error of single RVs is typically $>$0.3 m s$^{-1}$ for second-generation spectrometers such as ESPRESSO on VLT (Pepe et al., 2010), Maroon-X on Gemini-North (Seifahrt et al., 2022), NEID on WIYN 3.5m (Schwab et al., 2016), and the Keck Planet Finder (Gibson et al., 2016).
With advanced data analysis techniques, we are able to detect RV signals as small as 0.3 m s$^{-1}$ (Feng et al., 2017; Faria et al., 2022). While instruments like ESPRESSO and KPF have achieved sub-m s$^{-1}$ RV precision for detecting habitable Earths, stellar activity introduces noise reaching several m s$^{-1}$, surpassing the planetary signal. The challenge in using RVs to detect habitable Earths lies in effectively distinguishing this time-correlated “red noise” from the planetary signal. Advanced noise modeling techniques such as Gaussian processes have been used to mitigate such red noise (Haywood et al., 2014; Rajpaul et al., 2015). However, these techniques may lead to false negatives due to over-fitting (Feng et al., 2016; Ribas et al., 2018). To mitigate the impact of wavelength-dependent stellar activity noise on radial velocities, traditional methods measure the intensity of spectral lines that trace stellar magnetic fields and remove the velocity variations linearly correlated with these so-called “activity indicators” (Dumusque, 2016; Dumusque et al., 2017; Zechmeister et al., 2018). However, different types of stars respond differently to the various activity indicators, and the linear removal of velocity correlated with these indicators introduces additional noise. Therefore, recent research favors directly selecting spectral lines that are less “contaminated” by stellar activity (Dumusque, 2018; Lisogorskyi et al., 2019). In the upgraded phase of JUST instrumentation, an ESPRESSO-like spectrograph, the Extremely high Resolution Spectrograph (ERS), will be built for the detection of Earth twins. ERS will have a resolution of at least 100,000 and will measure RVs with a precision of about 0.1 m s$^{-1}$. It will be built following the design of CHORUS on GTC (https://www.nao.cas.cn/gtc/hrs/gkoverview/). With this spectrometer, JUST will survey a sample of 20-40 nearby Sun-like stars over 5 years to discover Earth twins. Given the uncertainty in the current occurrence rate of Earth twins (Ge et al., 2022), we expect to discover at least 1-3 Earth twins as golden samples for future direct imaging missions such as LUVOIR and HabEx (The LUVOIR Team, 2019; Gaudi et al., 2020). #### 4.3.3 Characterization of hot extrasolar giant planets One of the primary goals of exoplanet science is to characterize exoplanetary atmospheres and thereby inform the formation and evolution history of the diverse planetary systems (Madhusudhan, 2019). High-resolution spectroscopy has offered a unique means to measure chemical species in the atmospheres of close-in hot Jupiters, because this type of exoplanet so far offers the best signal-to-noise ratio (see the review by Birkby, 2018). Using the same framework as precise RV measurements of the planet-hosting stars, this method can be applied to phase-resolved planetary spectral lines, which can be identified through the Doppler shifts induced by the orbiting planets. For typical hot Jupiters, the orbital speed is a few orders of magnitude larger than that of the star, and thus the stellar and telluric spectral features are relatively unchanged compared to the planetary spectral lines and can be removed by various detrending methods. The time-varying components of the planetary spectra then reveal the compositions of the planetary atmospheres.
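As a schematic illustration of this cross-correlation analysis, the following minimal sketch stacks the correlation power over orbital phase on a grid of planetary RV semi-amplitudes $K_{p}$ and systemic velocities $v_{\mathrm{sys}}$; it assumes a circular orbit and a pre-computed model template, and the function names are illustrative rather than JUST pipeline code.

```python
import numpy as np

C_KMS = 299792.458  # speed of light [km/s]

def kp_vsys_map(wave, resid_spectra, phases, tpl_wave, tpl_flux, kp_grid, vsys_grid):
    """Stack cross-correlation power over orbital phase on a grid of planetary
    RV semi-amplitudes (kp) and systemic velocities (vsys), both in km/s.
    `resid_spectra` are the stellar/telluric-detrended spectra, one per phase."""
    power = np.zeros((len(kp_grid), len(vsys_grid)))
    for i, kp in enumerate(kp_grid):
        for j, vsys in enumerate(vsys_grid):
            for spec, phase in zip(resid_spectra, phases):
                # Planet radial velocity at this phase (circular orbit assumed).
                v_planet = vsys + kp * np.sin(2.0 * np.pi * phase)
                # Doppler-shift the model template to that velocity and resample.
                shifted = np.interp(wave, tpl_wave * (1.0 + v_planet / C_KMS), tpl_flux)
                # Accumulate the (mean-subtracted) cross-correlation power.
                power[i, j] += np.dot(spec - spec.mean(), shifted - shifted.mean())
    return power
```

A detection appears as a peak near the planet's true $(K_{p}, v_{\mathrm{sys}})$; in a real analysis the quasi-static stellar and telluric features are first removed (e.g., with PCA or SYSREM) and the correlation is normalized, but the phase-dependent Doppler shift exploited above is the core of the method.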
Typical constituents expected in hot Jupiters’ atmospheres include major oxygen- and carbon-bearing species such as ${\rm H_{2}O}$, ${\rm CH_{4}}$, ${\rm CO}$, and ${\rm CO_{2}}$, which are most easily detected at near-IR and IR wavelengths. At visible wavelengths, various key heavy elements, including Si, Ti, V, and Fe, have rich spectral features and have been observed in the atmospheres of a dozen hot Jupiters (e.g., Yan et al., 2022). These refractory elements offer a valuable window to probe the formation and migration history of hot Jupiters (Lothringer et al., 2021). High-resolution spectroscopy has also been applied to the atmospheric characterization of directly imaged exoplanets, i.e., giant planets that are hot, self-luminous, and on wide orbits. Key molecules including CO and ${\rm H_{2}O}$ have been identified in several directly imaged exoplanets; isotopes are also within reach for some of the best targets (Currie et al., 2023). In addition to composition measurements, the rotationally broadened spectral line shapes allow us to determine the rotation periods of directly imaged exoplanets, an important piece of information for tracking how planets accreted their angular momentum as they grew within the disk (Snellen et al., 2014). The ERS in the upgraded phase of JUST instrumentation should be able to carry out spectroscopic surveys of dozens of hot Jupiters, yielding statistical trends of metallicity and carbon-to-oxygen ratio for the hot Jupiter population. Equipped with extreme adaptive optics, we expect to characterize several directly imaged exoplanets and measure their atmospheric chemical inventories and spin states. ## 5 Summary JUST is a 4.4-meter telescope equipped with a segmented primary mirror and a lightweight framework, allowing for reduced construction costs and rapid switching between observation targets. It features two Nasmyth foci, offering fields of view of 10 arcmin and 1.2 degrees, respectively, with the ability to alternate between them by rotating the tertiary mirror (M3). The telescope also boasts three types of spectrographs: a multiple-fiber medium-resolution spectrometer, an IFU array and/or a long-slit spectrograph, and a multiple-fiber high-resolution spectrometer. JUST will be installed and operated at a high-quality site at an altitude of 4322 meters on Saishiteng Mountain in Lenghu town, Qinghai province. Expected to achieve first light in 2026, it is poised to become the most powerful telescope for spectroscopic observations in China for a considerable period. Upon completion, JUST will focus on research in three main directions: (1) Exploring the dark universe through spectroscopic surveys of numerous galaxies in the cosmic web; (2) Tracking the dynamic universe by conducting follow-up spectroscopic observations of various transient sources; (3) Detecting and characterizing exoplanets through the acquisition of high-resolution stellar spectra and the precise measurement of sub-m s$^{-1}$ RVs. The JUST project is anticipated to produce impactful research outcomes in the fields of dark matter, dark energy, transient astronomy, and exoplanet searches. ###### Acknowledgements. We thank Shanghai Jiao Tong University for its support in building the JUST telescope, and the Qinghai provincial government and Haixi prefecture for their support in providing the site, dome, and infrastructure. This work is supported by “the Fundamental Research Funds for the Central Universities”, the 111 Project No.
B20019, and Shanghai Natural Science Foundation, grant No.19ZR1466800. ## References * York et al. (2000) York, D. G., Adelman, J., Anderson, John E., J., et al. 2000, The Sloan Digital Sky Survey: Technical Summary, _AJ_, 120(3), 1579–1587 * DESI Collaboration et al. (2016) DESI Collaboration, Aghamousa, A., Aguilar, J., et al. 2016, The DESI Experiment Part I: Science,Targeting, and Survey Design, _arXiv e-prints_ , , arXiv:1611.00036 * Jin et al. (2023) Jin, S., Trager, S. C., Dalton, G. B., et al. 2023, The wide-field, multiplexed, spectroscopic facility WEAVE: Survey design, overview, and simulated implementation, _MNRAS_ , * de Jong et al. (2019) de Jong, R. S., Agertz, O., Berbel, A. A., et al. 2019, 4MOST: Project overview and information for the First Call for Proposals, _The Messenger_ , 175, 3–11 * Schlegel et al. (2019) Schlegel, D., Kollmeier, J. A., & Ferraro, S. 2019, in Bulletin of the American Astronomical Society, Vol. 51, 229 * Tamura et al. (2022) Tamura, N., Moritani, Y., Yabe, K., et al. 2022, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 12184, Ground-based and Airborne Instrumentation for Astronomy IX, ed. C. J. Evans, J. J. Bryant, & K. Motohara, 1218410 * Cirasuolo et al. (2020) Cirasuolo, M., Fairley, A., Rees, P., et al. 2020, MOONS: The New Multi-Object Spectrograph for the VLT, _The Messenger_ , 180, 10–17 * Hill et al. (2018) Hill, A., Flagey, N., McConnachie, A., et al. 2018, The Maunakea Spectroscopic Explorer Book 2018, _arXiv e-prints_ , , arXiv:1810.08695 * Ellis & Dawson (2019) Ellis, R., & Dawson, K. 2019, in Bulletin of the American Astronomical Society, Vol. 51, 45 * Bundy et al. (2019) Bundy, K., Westfall, K., MacDonald, N., et al. 2019, in Bulletin of the American Astronomical Society, Vol. 51, 198 * Wang et al. (2023) Wang, T., Liu, G., Cai, Z., et al. 2023, Science with the 2.5-meter Wide Field Survey Telescope (WFST), _Science China Physics, Mechanics, and Astronomy_ , 66(10), 109512 * Deng et al. (2021) Deng, L., Yang, F., Chen, X., et al. 2021, Lenghu on the Tibetan Plateau as an astronomical observing site, _Nature_ , 596(7872), 353–356 * Planck Collaboration et al. (2020) Planck Collaboration, Aghanim, N., Akrami, Y., et al. 2020, Planck 2018 results. VI. Cosmological parameters, _A &A_, 641, A6 * Weinberg et al. (2013) Weinberg, D. H., Mortonson, M. J., Eisenstein, D. J., et al. 2013, Observational probes of cosmic acceleration, _Phys. Rep._ , 530(2), 87–255 * Erickson et al. (2011) Erickson, B. M. S., Cunha, C. E., & Evrard, A. E. 2011, Influence of projection in cluster cosmology studies, _Phys. Rev. D_ , 84(10), 103506 * Noh & Cohn (2012) Noh, Y., & Cohn, J. D. 2012, Disentangling correlated scatter in cluster mass measurements, _MNRAS_ , 426(3), 1829–1844 * Zu et al. (2017) Zu, Y., Mandelbaum, R., Simet, M., Rozo, E., & Rykoff, E. S. 2017, On the level of cluster assembly bias in SDSS, _MNRAS_ , 470(1), 551–560 * Costanzi et al. (2019) Costanzi, M., Rozo, E., Rykoff, E. S., et al. 2019, Modelling projection effects in optically selected cluster catalogues, _MNRAS_ , 482(1), 490–505 * Sartoris et al. (2016) Sartoris, B., Biviano, A., Fedeli, C., et al. 2016, Next generation cosmology: constraints from the Euclid galaxy cluster survey, _MNRAS_ , 459(2), 1764–1780 * Lam et al. (2013) Lam, T. Y., Schmidt, F., Nishimichi, T., & Takada, M. 2013, Modeling the phase-space distribution around massive halos, _Phys. Rev. D_ , 88(2), 023012 * Zu & Weinberg (2013) Zu, Y., & Weinberg, D. H. 
2013, The redshift-space cluster-galaxy cross-correlation function - I. Modelling galaxy infall on to Millennium simulation clusters and SDSS groups, _MNRAS_ , 431(4), 3319–3337 * Hamabata et al. (2019) Hamabata, A., Oguri, M., & Nishimichi, T. 2019, Constraining cluster masses from the stacked phase space distribution at large radii, _MNRAS_ , 489(1), 1344–1356 * Shirasaki et al. (2021) Shirasaki, M., Egami, E., Okabe, N., & Miyazaki, S. 2021, Stacked phase-space density of galaxies around massive clusters: comparison of dynamical and lensing masses, _MNRAS_ , 506(3), 3385–3405 * Johnston et al. (2007) Johnston, D. E., Sheldon, E. S., Wechsler, R. H., et al. 2007, Cross-correlation Weak Lensing of SDSS galaxy Clusters II: Cluster Density Profiles and the Mass–Richness Relation, _arXiv e-prints_ , , arXiv:0709.1159 * Simet et al. (2017) Simet, M., McClintock, T., Mandelbaum, R., et al. 2017, Weak lensing measurement of the mass-richness relation of SDSS redMaPPer clusters, _MNRAS_ , 466(3), 3103–3118 * Wang et al. (2022) Wang, J., Yang, X., Zhang, J., et al. 2022, Halo Properties and Mass Functions of Groups/Clusters from the DESI Legacy Imaging Surveys DR9, _ApJ_ , 936(2), 161 * Zu et al. (2014) Zu, Y., Weinberg, D. H., Jennings, E., Li, B., & Wyman, M. 2014, Galaxy infall kinematics as a test of modified gravity, _MNRAS_ , 445(2), 1885–1897 * Koyama (2016) Koyama, K. 2016, Cosmological tests of modified gravity, _Reports on Progress in Physics_ , 79(4), 046902 * Joyce et al. (2016) Joyce, A., Lombriser, L., & Schmidt, F. 2016, Dark Energy Versus Modified Gravity, _Annual Review of Nuclear and Particle Science_ , 66(1), 95–122 * Baker et al. (2021) Baker, T., Barreira, A., Desmond, H., et al. 2021, Novel Probes Project: Tests of gravity on astrophysical scales, _Reviews of Modern Physics_ , 93(1), 015003 * Fillmore & Goldreich (1984) Fillmore, J. A., & Goldreich, P. 1984, Self-similar gravitational collapse in an expanding universe, _ApJ_ , 281, 1–8 * Bertschinger (1985) Bertschinger, E. 1985, Self-similar secondary infall and accretion in an Einstein-de Sitter universe, _ApJS_ , 58, 39–65 * Kravtsov & Borgani (2012) Kravtsov, A. V., & Borgani, S. 2012, Formation of Galaxy Clusters, _ARA &A_, 50, 353–409 * Diemer & Kravtsov (2014) Diemer, B., & Kravtsov, A. V. 2014, Dependence of the Outer Density Profiles of Halos on Their Mass Accretion Rate, _ApJ_ , 789(1), 1 * More et al. (2016) More, S., Miyatake, H., Takada, M., et al. 2016, Detection of the Splashback Radius and Halo Assembly Bias of Massive Galaxy Clusters, _ApJ_ , 825(1), 39 * Walker et al. (2019) Walker, S., Simionescu, A., Nagai, D., et al. 2019, The Physics of Galaxy Cluster Outskirts, _Space Sci. Rev._ , 215(1), 7 * Kauffmann et al. (2004) Kauffmann, G., White, S. D. M., Heckman, T. M., et al. 2004, The environmental dependence of the relations between stellar mass, structure, star formation and nuclear activity in galaxies, _MNRAS_ , 353(3), 713–731 * Jing et al. (1998) Jing, Y. P., Mo, H. J., & Börner, G. 1998, Spatial Correlation Function and Pairwise Velocity Dispersion of Galaxies: Cold Dark Matter Models versus the Las Campanas Survey, _ApJ_ , 494(1), 1–12 * Shectman et al. (1996) Shectman, S. A., Landy, S. D., Oemler, A., et al. 1996, The Las Campanas Redshift Survey, _ApJ_ , 470, 172 * Yang et al. (2003) Yang, X., Mo, H. J., & van den Bosch, F. C. 2003, Constraining galaxy formation and cosmology with the conditional luminosity function of galaxies, _MNRAS_ , 339(4), 1057–1080 * Colless et al. 
(2001) Colless, M., Dalton, G., Maddox, S., et al. 2001, The 2dF Galaxy Redshift Survey: spectra and redshifts, _MNRAS_ , 328(4), 1039–1063 * Yang et al. (2005) Yang, X., Mo, H. J., van den Bosch, F. C., & Jing, Y. P. 2005, A halo-based galaxy group finder: calibration and application to the 2dFGRS, _MNRAS_ , 356(4), 1293–1307 * Yang et al. (2007) Yang, X., Mo, H. J., van den Bosch, F. C., et al. 2007, Galaxy Groups in the SDSS DR4. I. The Catalog and Basic Properties, _ApJ_ , 671(1), 153–170 * York et al. (2000) York, D. G., Adelman, J., Anderson, John E., J., et al. 2000, The Sloan Digital Sky Survey: Technical Summary, _AJ_ , 120(3), 1579–1587 * Zhao et al. (2017) Zhao, G.-B., Raveri, M., Pogosian, L., et al. 2017, Dynamical dark energy in light of the latest observations, _Nature Astronomy_ , 1, 627–632 * Alam et al. (2015) Alam, S., Albareti, F. D., Allende Prieto, C., et al. 2015, The Eleventh and Twelfth Data Releases of the Sloan Digital Sky Survey: Final Data from SDSS-III, _ApJS_ , 219(1), 12 * Miao et al. (2023) Miao, H., Gong, Y., Chen, X., et al. 2023, Cosmological constraint precision of photometric and spectroscopic multi-probe surveys of China Space Station Telescope (CSST), _MNRAS_ , 519(1), 1132–1148 * Yang et al. (2021) Yang, X., Xu, H., He, M., et al. 2021, An Extended Halo-based Group/Cluster Finder: Application to the DESI Legacy Imaging Surveys DR8, _ApJ_ , 909(2), 143 * Sohn et al. (2021) Sohn, J., Geller, M. J., Hwang, H. S., et al. 2021, The HectoMAP Cluster Survey: Spectroscopically Identified Clusters and their Brightest Cluster Galaxies (BCGs), _ApJ_ , 923(2), 143 * Evrard et al. (2008) Evrard, A. E., Bialek, J., Busha, M., et al. 2008, Virial Scaling of Massive Dark Matter Halos: Why Clusters Prefer a High Normalization Cosmology, _ApJ_ , 672(1), 122–137 * Wu et al. (2013) Wu, H.-Y., Hahn, O., Evrard, A. E., Wechsler, R. H., & Dolag, K. 2013, Virial scaling of galaxies in clusters: bright to faint is cool to hot, _MNRAS_ , 436(1), 460–469 * Ntampaka et al. (2015) Ntampaka, M., Trac, H., Sutherland, D. J., et al. 2015, A Machine Learning Approach for Dynamical Mass Measurements of Galaxy Clusters, _ApJ_ , 803(2), 50 * Diaferio (1999) Diaferio, A. 1999, Mass estimation in the outer regions of galaxy clusters, _MNRAS_ , 309(3), 610–622 * Gifford et al. (2013) Gifford, D., Miller, C., & Kern, N. 2013, A Systematic Analysis of Caustic Methods for Galaxy Cluster Masses, _ApJ_ , 773(2), 116 * Rines et al. (2013) Rines, K., Geller, M. J., Diaferio, A., & Kurtz, M. J. 2013, Measuring the Ultimate Halo Mass of Galaxy Clusters: Redshifts and Mass Profiles from the Hectospec Cluster Survey (HeCS), _ApJ_ , 767(1), 15 * Rozo et al. (2009) Rozo, E., Rykoff, E. S., Evrard, A., et al. 2009, Constraining the Scatter in the Mass-richness Relation of maxBCG Clusters with Weak Lensing and X-ray Data, _ApJ_ , 699(1), 768–781 * Zhu & Ménard (2013) Zhu, G., & Ménard, B. 2013, The JHU-SDSS Metal Absorption Line Catalog: Redshift Evolution and Properties of Mg II Absorbers, _ApJ_ , 770(2), 130 * Lee et al. (2021) Lee, J. C., Hwang, H. S., & Song, H. 2021, Searching for Mg II absorbers in and around galaxy clusters, _MNRAS_ , 503(3), 4309–4319 * Zu (2021) Zu, Y. 2021, Kinematics of Mg II absorbers from the redshift-space distortion around massive quiescent galaxies, _MNRAS_ , 506(1), 115–127 * Anand et al. (2022) Anand, A., Kauffmann, G., & Nelson, D. 
2022, Cool circumgalactic gas in galaxy clusters: connecting the DESI legacy imaging survey and SDSS DR16 Mg II absorbers, _MNRAS_ , 513(3), 3210–3227 * Napolitano et al. (2023) Napolitano, L., Pandey, A., Myers, A. D., et al. 2023, Detecting and Characterizing Mg II Absorption in DESI Survey Validation Quasar Spectra, _AJ_ , 166(3), 99 * Hahn et al. (2023) Hahn, C., Wilson, M. J., Ruiz-Macias, O., et al. 2023, The DESI Bright Galaxy Survey: Final Target Selection, Design, and Validation, _AJ_ , 165(6), 253 * Zhang et al. (2007) Zhang, P., Liguori, M., Bean, R., & Dodelson, S. 2007, Probing Gravity at Cosmological Scales by Measurements which Test the Relationship between Gravitational Lensing and Matter Overdensity, _Phys. Rev. Lett._ , 99(14), 141302 * McClintock et al. (2019) McClintock, T., Varga, T. N., Gruen, D., et al. 2019, Dark Energy Survey Year 1 results: weak lensing mass calibration of redMaPPer galaxy clusters, _MNRAS_ , 482(1), 1352–1378 * Sunayama (2023) Sunayama, T. 2023, Observational constraints of an anisotropic boost due to the projection effects using redMaPPer clusters, _MNRAS_ , 521(4), 5064–5076 * Salcedo et al. (2023) Salcedo, A. N., Wu, H.-Y., Rozo, E., et al. 2023, Dark Energy Survey Year 1 Clusters are Consistent with Planck, _arXiv e-prints_ , , arXiv:2310.03944 * Andrews & Martini (2013) Andrews, B. H., & Martini, P. 2013, The Mass-Metallicity Relation with the Direct Method on Stacked Spectra of SDSS Galaxies, _ApJ_ , 765(2), 140 * Lin & Zu (2023) Lin, Y., & Zu, Y. 2023, Constraints on galactic outflows from the metallicity-stellar mass-SFR relation of EAGLE simulation and SDSS galaxies, _MNRAS_ , 521(1), 411–432 * Lan et al. (2016) Lan, T.-W., Ménard, B., & Mo, H. 2016, The galaxy luminosity function in groups and clusters: the faint-end upturn and the connection to the field luminosity function, _MNRAS_ , 459(4), 3998–4019 * Golden-Marx et al. (2023) Golden-Marx, J. B., Zu, Y., Wang, J., et al. 2023, Satellite content and halo mass of galaxy clusters: comparison between red-sequence and halo-based optical cluster finders, _MNRAS_ , 524(3), 4455–4471 * Meng et al. (2023) Meng, J., Li, C., Mo, H. J., et al. 2023, Galaxy Populations in Groups and Clusters: Evidence for a Characteristic Stellar Mass Scale at M ∗ 109.5 M ⊙, _ApJ_ , 944(1), 75 * Li et al. (2014) Li, R., Shan, H., Mo, H., et al. 2014, First galaxy-galaxy lensing measurement of satellite halo mass in the CFHT Stripe-82 Survey, _MNRAS_ , 438(4), 2864–2870 * Niemiec et al. (2017) Niemiec, A., Jullo, E., Limousin, M., et al. 2017, Stellar-to-halo mass relation of cluster galaxies, _MNRAS_ , 471(1), 1153–1166 * Sifón et al. (2018) Sifón, C., Herbonnet, R., Hoekstra, H., van der Burg, R. F. J., & Viola, M. 2018, The galaxy-subhalo connection in low-redshift galaxy clusters from weak gravitational lensing, _MNRAS_ , 478(1), 1244–1264 * Dvornik et al. (2020) Dvornik, A., Hoekstra, H., Kuijken, K., et al. 2020, KiDS+GAMA: The weak lensing calibrated stellar-to-halo mass relation of central and satellite galaxies, _A &A_, 642, A83 * Danieli et al. (2023) Danieli, S., Greene, J. E., Carlsten, S., et al. 2023, ELVES. IV. The Satellite Stellar-to-halo Mass Relation Beyond the Milky Way, _ApJ_ , 956(1), 6 * Wang et al. (2014) Wang, H., Mo, H. J., Yang, X., Jing, Y. P., & Lin, W. P. 2014, ELUCID—Exploring the Local Universe with the Reconstructed Initial Density Field. I. Hamiltonian Markov Chain Monte Carlo Method with Particle Mesh Dynamics, _ApJ_ , 794(1), 94 * Wang et al. (2016) Wang, H., Mo, H. 
J., Yang, X., et al. 2016, ELUCID - Exploring the Local Universe with ReConstructed Initial Density Field III: Constrained Simulation in the SDSS Volume, _ApJ_ , 831(2), 164 * Zhang et al. (2023) Zhang, C.-P., Zhu, M., Jiang, P., et al. 2023, The FAST all sky HI survey (FASHI): The first release of catalog, _arXiv e-prints_ , , arXiv:2312.06097 * Nan et al. (2011) Nan, R., Li, D., Jin, C., et al. 2011, The Five-Hundred Aperture Spherical Radio Telescope (fast) Project, _International Journal of Modern Physics D_ , 20(6), 989–1024 * Liu et al. (2020) Liu, C., Côté, P., Peng, E. W., et al. 2020, The Next Generation Virgo Cluster Survey. XXXIV. Ultracompact Dwarf Galaxies in the Virgo Cluster, _ApJS_ , 250(1), 17 * Wang et al. (2023) Wang, K., Peng, E. W., Liu, C., et al. 2023, An evolutionary continuum from nucleated dwarf galaxies to star clusters, _Nature_ , 623(7986), 296–300 * Gu et al. (2020) Gu, M., Conroy, C., Law, D., et al. 2020, Spectroscopic Constraints on the Buildup of Intracluster Light in the Coma Cluster, _ApJ_ , 894(1), 32 * Chen et al. (2022) Chen, X., Zu, Y., Shao, Z., & Shan, H. 2022, The sphere of influence of the bright central galaxies in the diffuse light of SDSS clusters, _MNRAS_ , 514(2), 2692–2706 * National Academies of Sciences, Engineering, and Medicine (2021) National Academies of Sciences, Engineering, and Medicine. 2021, Pathways to Discovery in Astronomy and Astrophysics for the 2020s * Gal-Yam et al. (2013) Gal-Yam, A., Mazzali, P. A., Manulis, I., & Bishop, D. 2013, Supernova Discoveries 2010-2011: Statistics and Trends, _PASP_ , 125(929), 749 * Abbott et al. (2017) Abbott, B. P., Abbott, R., Abbott, T. D., et al. 2017, GW170817: Observation of Gravitational Waves from a Binary Neutron Star Inspiral, _Phys. Rev. Lett._ , 119(16), 161101 * Coulter et al. (2017) Coulter, D. A., Foley, R. J., Kilpatrick, C. D., et al. 2017, Swope Supernova Survey 2017a (SSS17a), the optical counterpart to a gravitational wave source, _Science_ , 358(6370), 1556–1558 * Jones et al. (2018) Jones, D. O., Riess, A. G., Scolnic, D. M., et al. 2018, Should Type Ia Supernova Distances Be Corrected for Their Local Environments?, _ApJ_ , 867(2), 108 * Arcavi et al. (2017) Arcavi, I., Hosseinzadeh, G., Howell, D. A., et al. 2017, Optical emission from a kilonova following a gravitational-wave-detected neutron-star merger, _Nature_ , 551(7678), 64–66 * Foley et al. (2013) Foley, R. J., Challis, P. J., Chornock, R., et al. 2013, Type Iax Supernovae: A New Class of Stellar Explosion, _ApJ_ , 767(1), 57 * Jha (2017) Jha, S. W. 2017, in Handbook of Supernovae, ed. A. W. Alsabti & P. Murdin, 375 * Blagorodnova et al. (2018) Blagorodnova, N., Neill, J. D., Walters, R., et al. 2018, The SED Machine: A Robotic Spectrograph for Fast Transient Classification, _PASP_ , 130(985), 035003 * Abbott et al. (2016) Abbott, B. P., Abbott, R., Abbott, T. D., et al. 2016, Properties of the Binary Black Hole Merger GW150914, _Phys. Rev. Lett._ , 116(24), 241102 * Abbott et al. (2017a) Abbott, B. P., Abbott, R., Abbott, T. D., et al. 2017a, Multi-messenger Observations of a Binary Neutron Star Merger, _ApJ_ , 848(2), L12 * Abbott et al. (2017b) Abbott, B. P., Abbott, R., Abbott, T. D., et al. 2017b, GW170817: Observation of Gravitational Waves from a Binary Neutron Star Inspiral, _Phys. Rev. Lett._ , 119(16), 161101 * Cho (2017) Cho, A. 2017, Cosmic convergence, _Science_ , 358(6370), 1520–1521 * Hills (1975) Hills, L. D. 1975, Plants, _Nature_ , 255(5504), 102 * Lidskii & Ozernoi (1979) Lidskii, V. 
V., & Ozernoi, L. M. 1979, Tidal triggering of stellar flares by a massive black hole, _Soviet Astronomy Letters_ , 5, 16–19 * Rees (1988) Rees, M. J. 1988, Tidal disruption of stars by black holes of 106-108 solar masses in nearby galaxies, _Nature_ , 333(6173), 523–528 * Phinney (1989) Phinney, E. S. 1989, in The Center of the Galaxy, ed. M. Morris, Vol. 136, 543 * Evans & Kochanek (1989) Evans, C. R., & Kochanek, C. S. 1989, The Tidal Disruption of a Star by a Massive Black Hole, _ApJ_ , 346, L13 * Ulmer (1999) Ulmer, A. 1999, Flares from the Tidal Disruption of Stars by Massive Black Holes, _ApJ_ , 514(1), 180–187 * Bade et al. (1996) Bade, N., Komossa, S., & Dahlem, M. 1996, Detection of an extremely soft X-ray outburst in the HII-like nucleus of NGC 5905., _A &A_, 309, L35–L38 * Grupe et al. (1999) Grupe, D., Thomas, H. C., & Leighly, K. M. 1999, RX J1624.9+7554: a new X-ray transient AGN, _A &A_, 350, L31–L34 * Komossa & Greiner (1999) Komossa, S., & Greiner, J. 1999, Discovery of a giant and luminous X-ray outburst from the optically inactive galaxy pair RX J1242.6-1119, _A &A_, 349, L45–L48 * Greiner et al. (2000) Greiner, J., Schwarz, R., Zharikov, S., & Orio, M. 2000, RX J1420.4+5334 - another tidal disruption event?, _A &A_, 362, L25–L28 * Dai et al. (2018) Dai, J., Yang, J., Li, L., & Zhang, J. 2018, Current Sheets in the Wake of an Eruption of Two Crossing Filaments, _ApJ_ , 869(2), 118 * Parkinson et al. (2020) Parkinson, E. J., Knigge, C., Long, K. S., et al. 2020, Accretion disc winds in tidal disruption events: ultraviolet spectral lines as orientation indicators, _MNRAS_ , 494(4), 4914–4929 * Kratter & Lodato (2016) Kratter, K., & Lodato, G. 2016, Gravitational Instabilities in Circumstellar Disks, _ARA &A_, 54, 271–311 * Boss (1997) Boss, A. P. 1997, Giant planet formation by gravitational instability., _Science_ , 276, 1836–1839 * Marois et al. (2008) Marois, C., Macintosh, B., Barman, T., et al. 2008, Direct Imaging of Multiple Planets Orbiting the Star HR 8799, _Science_ , 322(5906), 1348 * Gaia Collaboration et al. (2016) Gaia Collaboration, Prusti, T., de Bruijne, J. H. J., et al. 2016, The Gaia mission, _A &A_, 595, A1 * Gaia Collaboration et al. (2018) Gaia Collaboration, Brown, A. G. A., Vallenari, A., et al. 2018, Gaia Data Release 2. Summary of the contents and survey properties, _A &A_, 616, A1 * Gaia Collaboration et al. (2021) Gaia Collaboration, Brown, A. G. A., Vallenari, A., et al. 2021, Gaia Early Data Release 3. Summary of the contents and survey properties, _A &A_, 649, A1 * Gaia Collaboration et al. (2023) Gaia Collaboration, Vallenari, A., Brown, A. G. A., et al. 2023, Gaia Data Release 3. Summary of the content and survey properties, _A &A_, 674, A1 * Perryman et al. (1997) Perryman, M. A. C., Lindegren, L., Kovalevsky, J., et al. 1997, The HIPPARCOS Catalogue, _A &A_, 323, L49–L52 * van Leeuwen (2007) van Leeuwen, F. 2007, Validation of the new Hipparcos reduction, _A &A_, 474, 653–664 * Snellen & Brown (2018) Snellen, I. A. G., & Brown, A. G. A. 2018, The mass of the young planet Beta Pictoris b through the astrometric motion of its host star, _Nature Astronomy_ , 2, 883–886 * Brandt et al. (2019) Brandt, T. D., Dupuy, T. J., & Bowler, B. P. 2019, Precise Dynamical Masses of Directly Imaged Companions from Relative Astrometry, Radial Velocities, and Hipparcos-Gaia DR2 Accelerations, _AJ_ , 158(4), 140 * Kervella et al. (2022) Kervella, P., Arenou, F., & Thévenin, F. 2022, Stellar and substellar companions from Gaia EDR3. 
Proper-motion anomaly and resolved common proper-motion pairs, _A &A_, 657, A7 * Feng et al. (2022) Feng, F., Butler, R. P., Vogt, S. S., et al. 2022, 3D Selection of 167 Substellar Companions to Nearby Stars, _ApJS_ , 262(1), 21 * Zhang, Tianyi and Zhu, Yongtian and Hou, Yonghui and Zhang, Kai and Hu, Zhongwen and Wang, Lei and Chen, Yi and Jiang, Haijiao and Tang, Zhen and XU, Mingming and Jiang, Mingda (2019) Zhang, Tianyi and Zhu, Yongtian and Hou, Yonghui and Zhang, Kai and Hu, Zhongwen and Wang, Lei and Chen, Yi and Jiang, Haijiao and Tang, Zhen and XU, Mingming and Jiang, Mingda. 2019, Construction of a LAMOST high resolution spectrograph, _Chinese Optics_ , 12, 148 * Cosentino et al. (2012) Cosentino, R., Lovis, C., Pepe, F., et al. 2012, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 8446, Ground-based and Airborne Instrumentation for Astronomy IV, ed. I. S. McLean, S. K. Ramsay, & H. Takami, 84461V * Kasting et al. (1993) Kasting, J. F., Whitmire, D. P., & Reynolds, R. T. 1993, Habitable Zones around Main Sequence Stars, _Icarus_ , 101(1), 108–128 * The LUVOIR Team (2019) The LUVOIR Team. 2019, The LUVOIR Mission Concept Study Final Report, _arXiv e-prints_ , , arXiv:1912.06219 * Gaudi et al. (2020) Gaudi, B. S., Seager, S., Mennesson, B., et al. 2020, The Habitable Exoplanet Observatory (HabEx) Mission Concept Study Final Report, _arXiv e-prints_ , , arXiv:2001.06683 * Mamajek & Stapelfeldt (2023) Mamajek, E., & Stapelfeldt, K. 2023, in American Astronomical Society Meeting Abstracts, Vol. 55, American Astronomical Society Meeting Abstracts, 116.07 * Pepe et al. (2010) Pepe, F. A., Cristiani, S., Rebolo Lopez, R., et al. 2010, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 7735, Ground-based and Airborne Instrumentation for Astronomy III, ed. I. S. McLean, S. K. Ramsay, & H. Takami, 77350F * Seifahrt et al. (2022) Seifahrt, A., Bean, J. L., Kasper, D., et al. 2022, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 12184, Ground-based and Airborne Instrumentation for Astronomy IX, ed. C. J. Evans, J. J. Bryant, & K. Motohara, 121841G * Schwab et al. (2016) Schwab, C., Rakich, A., Gong, Q., et al. 2016, in Proc. SPIE, Vol. 9908, Ground-based and Airborne Instrumentation for Astronomy VI, 99087H * Gibson et al. (2016) Gibson, S. R., Howard, A. W., Marcy, G. W., et al. 2016, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 9908, Ground-based and Airborne Instrumentation for Astronomy VI, ed. C. J. Evans, L. Simard, & H. Takami, 990870 * Feng et al. (2017) Feng, F., Tuomi, M., Jones, H. R. A., et al. 2017, Color Difference Makes a Difference: Four Planet Candidates around $\tau$ Ceti, _AJ_ , 154, 135 * Faria et al. (2022) Faria, J. P., Suárez Mascareño, A., Figueira, P., et al. 2022, A candidate short-period sub-Earth orbiting Proxima Centauri, _A &A_, 658, A115 * Haywood et al. (2014) Haywood, R. D., Collier Cameron, A., Queloz, D., et al. 2014, Planets and stellar activity: hide and seek in the CoRoT-7 system, _MNRAS_ , 443(3), 2517–2531 * Rajpaul et al. (2015) Rajpaul, V., Aigrain, S., Osborne, M. A., Reece, S., & Roberts, S. 2015, A Gaussian process framework for modelling stellar activity signals in radial velocity data, _MNRAS_ , 452, 2269–2291 * Feng et al. (2016) Feng, F., Tuomi, M., Jones, H. R. A., Butler, R. P., & Vogt, S. 
2016, A Goldilocks principle for modelling radial velocity noise, _MNRAS_ , 461(3), 2440–2452 * Ribas et al. (2018) Ribas, I., Tuomi, M., Reiners, A., et al. 2018, A candidate super-Earth planet orbiting near the snow line of Barnard’s star, _Nature_ , 563, 365–368 * Dumusque (2016) Dumusque, X. 2016, Radial velocity fitting challenge. I. Simulating the data set including realistic stellar radial-velocity signals, _A &A_, 593, A5 * Dumusque et al. (2017) Dumusque, X., Borsa, F., Damasso, M., et al. 2017, Radial-velocity fitting challenge. II. First results of the analysis of the data set, _A &A_, 598, A133 * Zechmeister et al. (2018) Zechmeister, M., Reiners, A., Amado, P. J., et al. 2018, Spectrum radial velocity analyser (SERVAL). High-precision radial velocities and two alternative spectral indicators, _A &A_, 609, A12 * Dumusque (2018) Dumusque, X. 2018, Measuring precise radial velocities on individual spectral lines. I. Validation of the method and application to mitigate stellar activity, _A &A_, 620, A47 * Lisogorskyi et al. (2019) Lisogorskyi, M., Jones, H. R. A., & Feng, F. 2019, Activity and telluric contamination in HARPS observations of Alpha Centauri B, _MNRAS_ , 485(4), 4804–4816 * Ge et al. (2022) Ge, J., Zhang, H., Zang, W., et al. 2022, ET White Paper: To Find the First Earth 2.0, _arXiv e-prints_ , , arXiv:2206.06693 * Madhusudhan (2019) Madhusudhan, N. 2019, Exoplanetary Atmospheres: Key Insights, Challenges, and Prospects, _ARA &A_, 57, 617–663 * Birkby (2018) Birkby, J. L. 2018, Exoplanet Atmospheres at High Spectral Resolution, _arXiv e-prints_ , , arXiv:1806.04617 * Yan et al. (2022) Yan, F., Reiners, A., Pallé, E., et al. 2022, Detection of iron emission lines and a temperature inversion on the dayside of the ultra-hot Jupiter KELT-20b, _A &A_, 659, A7 * Lothringer et al. (2021) Lothringer, J. D., Rustamkulov, Z., Sing, D. K., et al. 2021, A New Window into Planet Formation and Migration: Refractory-to-Volatile Elemental Ratios in Ultra-hot Jupiters, _ApJ_ , 914(1), 12 * Currie et al. (2023) Currie, T., Biller, B., Lagrange, A., et al. 2023, in Astronomical Society of the Pacific Conference Series, Vol. 534, Protostars and Planets VII, ed. S. Inutsuka, Y. Aikawa, T. Muto, K. Tomida, & M. Tamura, 799 * Snellen et al. (2014) Snellen, I. A. G., Brandl, B. R., de Kok, R. J., et al. 2014, Fast spin of the young extrasolar planet $\beta$ Pictoris b, _Nature_ , 509(7498), 63–65
# Simfluence: Modeling the Influence of Individual Training Examples by Simulating Training Runs Kelvin Guu∗,1 Albert Webson∗,⋄,2 Ellie Pavlick1,2 Lucas Dixon1 Ian Tenney1 Tolga Bolukbasi$\enskip{}^{,1}$ <EMAIL_ADDRESS>1Google Research <EMAIL_ADDRESS>2Brown University Lead contributors. Please see Contributions section for details. ⋄ Work done during an internship at Google Research. ###### Abstract Training data attribution (TDA) methods offer to trace a model’s prediction on any given example back to specific influential training examples. Existing approaches do so by assigning a scalar _influence score_ to each training example, under a simplifying assumption that influence is additive, whereby the total influence of a training set is the sum of its parts. But in reality, we observe that training examples interact in highly non-additive ways due to factors such as inter-example redundancy, training order, and curriculum learning effects. To study such interactions, we propose _Simfluence_ , a new paradigm for TDA where the goal is not to produce a single influence score per example, but instead a _training run simulator_ : the user asks, _“If my model had trained on example $z_{1}$, then $z_{2}$, …, then $z_{n}$, how would it behave on $z_{\text{test}}$?”_; the simulator should then output a _simulated training run_ , which is a time series predicting the loss on $z_{\text{test}}$ at every step of the simulated run. This enables users to answer counterfactual questions about what their model would have learned under different training curricula, and to directly see where in training that learning would occur. Under the Simfluence paradigm, we present a simulator (Simfluence-Linear) that captures important non-additive interactions using a Markov process. It is often able to predict the spiky trajectory of individual example losses with surprising fidelity, while matching the interpretability of prior TDA work and running in milliseconds. Furthermore, we show that existing TDA methods such as TracIn and influence functions can be viewed as special cases of Simfluence-Linear. This enables us to directly compare methods in terms of their simulation accuracy, subsuming several prior TDA approaches to evaluation. In experiments on large language model (LLM) fine-tuning, we show that our method predicts loss trajectories with much higher accuracy than existing TDA methods (doubling Spearman’s correlation and reducing mean- squared error by 75%) across several tasks, models, and training methods. Figure 1: Training data attribution (TDA) methods seek to understand the effect of individual training examples. Simfluence is a new paradigm for TDA, where the goal is to develop _training run simulators_ that can accurately predict how any given sequence of training examples would affect the model’s loss on any particular test example. Here, we plot the loss of three different test examples over the course of a training run. We compare the true observed loss trajectories (blue) with our simulator’s predicted trajectories (green). Surprisingly, many of the ups and downs in the true loss trajectories are not “random” but can be anticipated by our simulator, showing the extent to which our simulator understands the effect of each training example. ## 1 Introduction Important advances in machine learning are often made possible by better or more data (Halevy et al., 2009; Deng et al., 2009; Kaplan et al., 2020). 
But which training examples are actually responsible for specific model successes or failures? Training data attribution (TDA) methods seek to answer this question. Many existing TDA methods share the following formulation: given any test example of interest, $z_{\text{test}}$, they aim to identify which training examples caused the biggest change in the model’s loss on $z_{\text{test}}$. If a training example $z$ helped reduce the loss on $z_{\text{test}}$, it is called a _proponent_ , while increasing the loss makes it an _opponent_. The amount of change in loss on $z_{\text{test}}$ is called the _influence_ of $z$ on $z_{\text{test}}$, which we denote $\mathcal{I}(z,z_{\text{test}})$. This description encompasses some of the most well-known methods for TDA, such as influence functions (Koh & Liang, 2017; Hampel, 1974; Cook & Weisberg, 1980) and TracIn (Pruthi et al., 2020). Both aforementioned methods make a simplifying assumption that influence is _additive_ : the combined influence of several examples should equal the sum of their individual influence scores. Influence functions make this assumption within a local neighborhood of the model’s parameters, while TracIn assumes this when integrating over training steps. But this common assumption does not adequately capture important non- additive aspects of real training, which we discuss next. ##### Non-additive effects. Many examples in a training set may provide similar or redundant information. This is generally hard to express when using a single influence score per training example. For example, influence functions (Koh & Liang, 2017) evenly “split the credit” among all redundant examples, meaning that 100 examples teaching a very common but essential piece of information could all rank lower than one example teaching something less important but unique. Likewise, TracIn-Ideal (Pruthi et al., 2020) tends to handle the issue by assigning credit to whichever example arbitrarily appeared earlier in training — hiding the otherwise equivalent value of later examples. These limitations were noted by Søgaard et al. (2021), which found that existing TDA methods tend to over- or underestimate influence in such situations, as well as by Koh et al. (2019) who found that influence systematically underestimated the effect of group interventions. Redundancy between two examples ($z_{1}$ and $z_{2}$) is a special case of _submodular interaction_ (Fujishige, 2005): informally speaking, $\text{information}(z_{1}\cup z_{2})<\text{information}(z_{1})+\text{information}(z_{2})$. Conversely, if something can only be learned through the combination of multiple examples (e.g., as observed in curriculum learning (Bengio et al., 2009)), then we have a _supermodular interaction_ , which also is not modeled by existing TDA methods. ##### Influence as counterfactual simulation. To account for these important phenomena, we propose _Simfluence_ , a new paradigm for TDA where the output is a _training run simulator_ , rather than a single score per example. In this setup, the user is interested in the model’s performance on some example $z_{\text{test}}$, and they would like to simulate various counterfactual training scenarios, such as: “What if I had removed _< X>_ group of examples from my training data?” (Koh et al., 2019), or “What if I had trained on _< Y>_ first, then _< X>_?” (Søgaard et al., 2021), or “What if I duplicated _< Z>_ ten times?” (Han & Tsvetkov, 2022). 
To explore these questions, the user first poses a training curriculum (which examples are seen, and in what order). Then, the simulator generates a time series that predicts what the loss on $z_{\text{test}}$ would be after each step of the training run (a _loss trajectory_). Such a loss trajectory can directly show the user which training steps are helping or hurting, and what the final loss will be. It can also change depending on the order and combination of training examples, thereby modeling non-additive effects. Finally, the accuracy of this simulated trajectory can be directly validated by comparing it to the true loss trajectory during an actual training run using the same curriculum; for instance, Figure 1 shows several examples of how well our simulator can predict true trajectories. Inspired by the framing of simulation, we propose a new simulator, Simfluence-Linear, that models non-additive effects caused by training order and redundancy using a Markov process, while preserving the interpretability of prior TDA methods. Furthermore, we show that both TracIn and influence functions can be reproduced as special cases of this simulator. By framing both prior and proposed methods as simulators, we are now able to directly compare them, evaluating each in terms of their simulation accuracy against real loss trajectories. We find that Simfluence-Linear significantly outperforms existing methods (doubling Spearman’s correlation and reducing mean squared error by 75%) across several tasks and models in the setting of large language model (LLM) fine-tuning (both standard full-model tuning and parameter-efficient tuning). ## 2 Task Here, we formally define Simfluence, the task of training run simulation. Let $z$ denote a single example. Let $\mathcal{Z}_{\text{train}}$ and $\mathcal{Z}_{\text{test}}$ denote the set of all training examples and test examples (or validation examples, depending on the use case), respectively. We use $i=1,\dots,n$ to index over training examples. We use $c=(c_{1},c_{2},\dots,c_{t},\dots,c_{T})$ to specify a training curriculum: for each training step $t=1,\dots,T$, let $c_{t}\subset\{1,\dots,n\}$ be a set of integers specifying the batch of training examples consumed at step $t$. Then, let $L_{t}(z)$ denote the loss on example $z$ after taking training step $t$. A training run simulator, $\mathcal{S}$, should be able to simulate the loss of any given test example, $z_{\text{test}}$, over the course of training on any arbitrary curriculum. Formally, it takes two inputs: 1) the curriculum, $c$, and 2) the initial loss before training begins, $L_{0}(z_{\text{test}})$. It then outputs a predicted loss, $\hat{L}_{t}(z_{\text{test}})$, for every time step (the loss trajectory): $\mathcal{S}(c,L_{0}(z_{\text{test}}))\mapsto\hat{L}_{1}(z_{\text{test}}),\hat{L}_{2}(z_{\text{test}}),\dots,\hat{L}_{T}(z_{\text{test}})$ We are mainly interested in how this loss trajectory would change as a function of the curriculum, $c$. Finally, we note that there is generally a trade-off between simulation accuracy and simulation speed. At one extreme, we could always obtain a perfect simulation of training by just _actually running training_, but this is generally too slow for quickly probing and exploring the effect of different training examples and curricula. A useful simulator should be orders of magnitude faster to run; ideally taking seconds instead of hours or days.
Whenever possible, we also prefer simulators with interpretable structure, provided this does not compromise simulation accuracy. ### 2.1 Training a simulator In this work, we consider simulators that are themselves also _learned models_. We will learn our simulator from previously conducted training runs. For each previous run, we record what curriculum was used ($c$, the input to our simulator) and the true observed loss trajectory of a given example $z_{\text{test}}$, which we denote $L_{1:T}(z_{\text{test}})$; this trajectory is the desired output of the simulator. Hence, each run provides an (input, output) pair for us to train a simulator via supervised learning. To formalize this, we will define a run as $r=(c,L_{1:T}(z_{\text{test}}))$. Once we have trained the simulator, we can then use it to predict loss trajectories for future planned runs, where we know what curriculum we will use, but have not observed the true loss trajectory. We will use $\mathcal{R}_{\text{past}}$ to denote the set of previously observed runs, and $\mathcal{R}_{\text{future}}$ to denote future planned runs. For each run in $\mathcal{R}_{\text{past}}$, we assume that we have recorded the loss for any test example of interest ($z_{\text{test}}$) for some subset of time steps. These requirements can be readily met in many practical scenarios. Model developers will often re-train their model multiple times before obtaining a version that meets their goals, and usually already track key metrics such as validation set losses over the course of each run. Hence, it often does not add much marginal cost to conduct a few extra runs for the purpose of learning a simulator. (Large language model pre-training is currently an exception to this, as even single training runs may require months.) Early work in TDA often assumed access to only one run (Pruthi et al., 2020), or even just the final checkpoint of one run (Koh & Liang, 2017), while more recent work has obtained better results by leveraging multiple runs (Søgaard et al., 2021), which we also demonstrate in our experiments. ### 2.2 Evaluating a simulator To evaluate a simulator, $\mathcal{S}$, we can check how well its simulation of a future run, $r\in\mathcal{R}_{\text{future}}$, predicts what happens when we actually perform that training run, by comparing our simulator’s predicted losses at each time step, $\hat{L}_{1:T}(z_{\text{test}})=[\hat{L}_{1}(z_{\text{test}}),\dots,\hat{L}_{T}(z_{\text{test}})]$, with the true observed losses, $L_{1:T}(z_{\text{test}})=[L_{1}(z_{\text{test}}),\dots,L_{T}(z_{\text{test}})]$. We measure this using the mean squared error (MSE), which is averaged over test examples and training steps: $\text{MSE}(\mathcal{S})=\frac{1}{|\mathcal{Z}_{\text{test}}|}\frac{1}{T}\sum_{z\in\mathcal{Z}_{\text{test}}}\sum_{t=1}^{T}(L_{t}(z)-\hat{L}_{t}(z))^{2}$ Second, we wish to evaluate some simulators that do not provide a good absolute prediction of an example’s loss, but may predict a good relative ordering of final losses among test examples at time $T$. For this, we use Spearman’s correlation, which is only sensitive to the ranking among losses: $\text{Spearman}_{\text{final}}(\mathcal{S})=\text{Spearman}(\{(\hat{L}_{T}(z),L_{T}(z))\text{ for }z\in\mathcal{Z}_{\text{test}}\})$ By casting TDA as a simulation task, we now have an unambiguous goal and ground-truth: all simulators should strive to match real observed training runs.
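Both metrics are simple to compute once the true and simulated trajectories have been recorded; the following minimal sketch (the array layout and function names are our own, for illustration) assumes both are stored as arrays of shape (number of test examples, number of steps):

```python
import numpy as np
from scipy.stats import spearmanr

def simulation_mse(true_traj, sim_traj):
    """MSE between true and simulated loss trajectories, averaged over
    test examples and training steps."""
    return float(np.mean((true_traj - sim_traj) ** 2))

def spearman_final(true_traj, sim_traj):
    """Spearman correlation between true and simulated final-step losses."""
    rho, _ = spearmanr(sim_traj[:, -1], true_traj[:, -1])
    return float(rho)
```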
In Section 4, we show how existing TDA methods can also be formulated as simulators, enabling us to directly compare our approach and prior work all by the same yardstick. In contrast, prior TDA research has not always agreed on a shared ground-truth or evaluation setup. For example, Koh & Liang (2017) used leave-one-out retraining (LOO) as their ground-truth. In this setup, a practitioner removes exactly one training example $z$ from the training set and then re-trains their model. They then evaluate whether the resulting change in loss on a test example $z_{\text{test}}$ correlates well with the predicted influence score, $\mathcal{I}(z,z_{\text{test}})$. This can be considered a special case of our simulation task, where $\mathcal{R}_{\text{past}}$ and $\mathcal{R}_{\text{future}}$ differ by only one example. Recent work by Søgaard et al. (2021) has argued that LOO retraining tends to result in small or negligible changes in loss (due to redundancy among training examples), and hence argue that it is not the most important quantity that all TDA methods should aspire to predict. ## 3 Approach ### 3.1 Simulator We now present a simple training run simulator which models the loss trajectory of a single test example, $z$, as a _linear Markov process_. Given the loss at time $t-1$, we model the loss at time $t$ to be: $L_{t}(z)={\alpha(c_{t})}L_{t-1}(z)+{\beta(c_{t})}$ (1) where multiplicative factor $\alpha(c_{t})$ and additive factor $\beta(c_{t})$ are learned functions of $c_{t}$ (the batch of training examples consumed at step $t$). We call this model _Simfluence-Linear_. This model expresses our simplifying assumption that the change in loss at step $t$ is a linear function of the training examples consumed at that step. For example, if $\alpha(c_{t})=0.5$ and $\beta(c_{t})=-2.1$, this would mean that the training examples at step $t$ cause a 50% relative reduction in the loss, followed by a 2.1 absolute reduction. Performing simulation with this model is straightforward. We start with the known initial loss, $L_{0}(z)$. We then recursively apply Equation 1 to predict subsequent time steps: from $L_{0}(z)$ we predict $L_{1}(z)$, which is then used to predict $L_{2}(z)$, and so on. Next, we elaborate on the learned functions $\alpha(c_{t})$ and $\beta(c_{t})$. Recall that $c_{t}\subset\\{1,\dots,n\\}$ is a set of integers indicating the batch of training examples consumed at step $t$, and let the learnable parameters of Simfluence-Linear be $\Omega=(A,B)$, where $A\in\mathbb{R}^{n}$ and $B\in\mathbb{R}^{n}$. Then, we define $\alpha(c_{t})$ and $\beta(c_{t})$ as: $\alpha(c_{t})=\sum_{i\in c_{t}}A_{i}\qquad\quad\beta(c_{t})=\sum_{i\in c_{t}}B_{i}$ Under this model, $A_{i}$ represents how much training example $z_{i}$ multiplicatively reduces the loss on test example $z$ — we call this _multiplicative influence_. Likewise, $B_{i}$ reflects the _additive influence_ of $z_{i}$ on $z$. Note that we have two learned parameters ($A_{i}$ and $B_{i}$) for each training example, meaning that the entire simulator has $2n$ parameters total. Also, note that this simulator only models the loss of a single test example, $z$. To model $m$ test examples, we would learn $m$ separate simulators, each with $2n$ parameters. In Section 7, we discuss future work that could be more parameter-efficient. In our experiments, we wish to study the relative importance of multiplicative and additive influence. 
To do so, we introduce two ablations of Simfluence- Linear: * • Simfluence-Additive: we only model additive influence, disabling multiplicative influence by setting $\alpha(c_{t})=1$ for all $c_{t}$. * • Simfluence-Multiplicative: we only model multiplicative influence, disabling additive influence by setting $\beta(c_{t})=0$ for all $c_{t}$. In Section 4, we will show that prior TDA methods can be viewed as special cases of Simfluence-Additive: they model additive influence, but not multiplicative influence. The multiplicative factor $\alpha(c_{t})$ in Simfluence-Linear is important because it enables us to model redundancy between training examples, and the effect of training order. For example, consider two training examples, $z_{1}$ and $z_{2}$, which both have the same multiplicative influence: $A_{1}=A_{2}=0.5$. For simplicity, let us also assume they both have 0 additive influence, $B_{1}=B_{2}=0$. Under this model, both examples have the same effect: each time we encounter them, they cut the loss in half. Now, suppose the initial loss is $L_{0}(z_{\text{test}})=100$. If we take a training step on $z_{1}$ first, then our simulator predicts $z_{1}$ will reduce the loss from 100 to 50. If we then take a training step on $z_{2}$, it will further reduce the loss from 50 to 25. Both examples reduce the loss by 50%, but the second example causes less absolute loss reduction (25 rather than 50), because it came second. If the examples were reversed, the reverse would be true. Such a phenomenon cannot be modeled using additive influence, which always assumes the same effect regardless of order. ### 3.2 Learning the simulator Let $\mathcal{T}$ denote the subset of training steps in $\mathcal{R}_{\text{past}}$ where we recorded the loss of test example $z$ (both _before_ and _after_ the training step). To learn the parameters of the simulator, $\Omega=(A,B)$, we simply minimize the following L2-regularized regression objective: $\displaystyle\mathcal{O}(A,B)$ $\displaystyle=\sum_{t\in\mathcal{T}}\left(L_{t}(z)-\hat{L}_{t}(z)\right)^{2}+\lambda(\|A\|^{2}+\|B\|^{2})$ (2) $\displaystyle\hat{L}_{t}(z)$ $\displaystyle=\alpha(c_{t})L_{t-1}(z)+\beta(c_{t})$ (3) where $\lambda$ is a hyperparameter controlling the amount of L2 regularization. In Appendix A.1, we show that this reduces to a standard multivariate linear regression problem, and provide the closed form solution. When the training batch size is 1, we can simplify further. Let $\mathcal{T}_{i}$ denote the subset of training steps where training example $z_{i}$ was encountered. Then, we can reduce $\mathcal{O}(A,B)$ to only the terms involving $z_{i}$: $\mathcal{O}(A_{i},B_{i})=\sum_{t\in\mathcal{T}_{i}}(L_{t}(z)-A_{i}L_{t-1}(z)-B_{i})^{2}+\lambda(A_{i}^{2}+B_{i}^{2})$ (4) This objective is now a _univariate_ linear regression problem (again with a closed form solution), which only depends on data from the time steps where example $z_{i}$ was observed, and has no dependence on any other parameters $A_{j}$ or $B_{j}$ for $j\neq i$. This is convenient: for each example $z_{i}$ of interest, we can estimate $A_{i}$ and $B_{i}$ without modeling any other examples. ### 3.3 Data requirements We now analyze how much data is needed to learn Simfluence-Linear. For the sake of building intuition, we start by deriving the bare minimum amount of data needed for Simfluence-Linear’s training objective to have a unique solution, for the simple case where the training batch size is 1 and L2-regularization is disabled ($\lambda=0$). 
As noted in the previous section, we need to solve Equation 4 for each training example of interest. Since Equation 4 is a univariate linear regression problem with two parameters ($A_{i}$ and $B_{i}$), it is necessary and sufficient to observe two training steps on $z_{i}$ for Equation 4 to have a unique solution. If we apply this requirement to all training examples, then we need $\mathcal{R}_{\text{past}}$ to include at least two training steps for every training example of interest. This requirement can be met by a single training run of two epochs, or two training runs of one epoch each. For the more general case of batch size > 1, a similar conclusion holds: under reasonable conditions, we need roughly $2n$ training steps to estimate $2n$ parameters — see Appendix A.2 for details and important requirements. In some settings, it may be computationally undesirable to compute losses after every training step. If so, we could still meet the requirement by only computing losses every $H$ steps, while doing $H$ times more runs or epochs to meet the overall $2n$ data requirement. Finally, note that $2n$ is the bare minimum needed for a unique solution. We present this result mainly to show that data requirements scale linearly with $n$. In our experiments, we find that $20n$ to $60n$ training steps are more than sufficient. Note that we can always obtain more data for Simfluence by performing more training runs — hence, the real limiting factor is the computational cost of training runs. Prior TDA methods were not formulated as learned models and therefore do not have a “data requirement” per se. However, we can compare their computational cost with Simfluence-Linear — see Appendix A.3 for details. ### 3.4 Generality beyond gradient-based learning So far, we have presented Simfluence-Linear as a method to simulate gradient descent on a loss function. But it is worth noting that Simfluence-Linear contains no assumptions that are specific to gradient-based learning: unlike most other TDA methods, it does not require any access to model gradients or model parameters; it simply needs to know what examples are consumed at each step, and how losses change. Furthermore, instead of predicting test example losses, Simfluence-Linear could just as well be used to predict the trajectory of any arbitrary metric, such as the model’s average accuracy across multiple examples, the L2-norm of a model’s parameters, the gap between train and test loss, etc. In the most generic terms, Simfluence-Linear is designed to simulate algorithms that incrementally consume examples, and to determine how those examples affect any given metric. This description encompasses numerous other algorithms of interest, such as in-context learning (Brown et al., 2020) (where examples are consumed by the forward pass of a large language model) or reinforcement learning (where the RL agent consumes “episodes”, and the metric to track is expected reward). We hope to explore these applications in future work. This framing also connects the Simfluence paradigm to research on credit assignment, such as Shapley values (Shapley et al., 1953). However, unlike the general credit assignment problem, we make the useful extra assumption that the metric of interest can be measured after each step of the process, rather than just at the end.
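To make Sections 3.1 and 3.2 concrete, the following minimal sketch (the data layout and function names are illustrative, not a released implementation) fits the per-example influences by solving the ridge regression of Equation 2, where each observed step contributes one row with one column per $A_{i}$ (scaled by $L_{t-1}$) and one column per $B_{i}$, and then rolls out Equation 1 on a new curriculum:

```python
import numpy as np

def fit_simfluence_linear(observed_steps, n_examples, lam=1e-3):
    """Fit per-example multiplicative (A) and additive (B) influences.
    `observed_steps` is a list of (batch, loss_before, loss_after) triples,
    pooled over all past runs, for a single test example z."""
    X, y = [], []
    for batch, loss_prev, loss_next in observed_steps:
        row = np.zeros(2 * n_examples)
        row[list(batch)] = loss_prev                    # alpha(c_t) * L_{t-1} terms
        row[[n_examples + i for i in batch]] = 1.0      # beta(c_t) terms
        X.append(row)
        y.append(loss_next)
    X, y = np.array(X), np.array(y)
    # Closed-form solution of the L2-regularized least squares in Eq. (2).
    theta = np.linalg.solve(X.T @ X + lam * np.eye(2 * n_examples), X.T @ y)
    return theta[:n_examples], theta[n_examples:]       # A, B

def simulate(A, B, curriculum, loss0):
    """Roll out Eq. (1): L_t = alpha(c_t) * L_{t-1} + beta(c_t)."""
    trajectory, loss = [], loss0
    for batch in curriculum:
        loss = A[list(batch)].sum() * loss + B[list(batch)].sum()
        trajectory.append(loss)
    return trajectory
```

Fitting is a single linear solve and simulation is a loop over the curriculum, so probing a candidate curriculum takes milliseconds rather than a full training run.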
## 4 Connections to prior TDA methods Here, we show how prior TDA methods, TracIn (Pruthi et al., 2020), and influence functions (Koh & Liang, 2017), can all be viewed as simulators under the Simfluence paradigm. In particular, we show that they are all special cases of Simfluence-Additive as they do not model multiplicative influence. ### 4.1 TracIn-Ideal TracIn-Ideal was first introduced by Pruthi et al. (2020). It assumes that we have performed a single training run, where each step consumes a single example (batch size 1). TracIn-Ideal measures the net amount of loss reduction caused by a particular example $z_{i}$: $\mathcal{I}_{\text{TracIn- Ideal}}(z_{i},z)=\sum_{t\in\mathcal{T}_{i}}L_{t-1}(z)-L_{t}(z)$ (5) where $\mathcal{T}_{i}$ denotes the subset of time steps where example $z_{i}$ was encountered. Here, we will show that TracIn-Ideal is equivalent to Simfluence-Additive, a simple simulator that only models additive influence: $L_{t}(z)=L_{t-1}(z)+\beta(c_{t})\qquad\beta(c_{t})=\sum_{i\in c_{t}}B_{i}$ Let us consider the training objective for Simfluence-Additive when batch size equals 1 and L2 regularization is disabled ($\lambda=0$). We start with Equation 4 and remove the multiplicative influence terms to get: $\mathcal{O}(B_{i})=\sum_{t\in\mathcal{T}_{i}}(L_{t}(z)-L_{t-1}(z)-B_{i})^{2}$ This objective has a simple closed-form solution — $B_{i}$ should equal the negative average of all loss reductions where training example $z_{i}$ was consumed: $\hat{B}_{i}=-\frac{1}{\mathcal{T}_{i}}\sum_{t\in\mathcal{T}_{i}}L_{t-1}(z)-L_{t}(z)$ (6) We see that $\hat{B}_{i}$ is equivalent to the influence score defined by TracIn-Ideal, up to a normalization term ($-1/\mathcal{T}_{i}$), which is constant when all examples are encountered equally often. Hence, TracIn-Ideal can be viewed as a simple simulator that only models the additive influence of $z_{i}$. Note that TracIn-Ideal only uses data from a single training run. Søgaard et al. (2021) proposed _Expected TracIn-Ideal_, where the TracIn-Ideal score is averaged over multiple runs, rather than just one. This too is encompassed by Simfluence-Additive, corresponding to the case where we have $\mathcal{R}_{\text{past}}>1$. ### 4.2 TracIn-CP TracIn-CP (“CP” stands for “checkpoint”) was proposed by Pruthi et al. (2020) as a computationally cheaper approximation to TracIn-Ideal. TracIn-CP makes two approximations. First, instead of summing loss reductions over all steps where training example $z_{i}$ is encountered ($\mathcal{T}_{i}$), it only sums over steps where model checkpoints are saved, which we denote $\mathcal{T}_{\text{CP}}$. Second, it replaces the actual observed loss reduction at each step $t\in\mathcal{T}_{\text{CP}}$ with the following approximation: $L_{t}(z)-L_{t+1}(z)\approx\eta_{t}\nabla_{\theta}L_{t}(z_{i})^{\top}\nabla_{\theta}L_{t}(z)$ (7) where $\eta_{t}$ is the learning rate at step $t$ and $\nabla_{\theta}L_{t}(\cdot)$ denotes the gradient of the loss function w.r.t. model parameters $\theta$. 
The final measure of influence is: $\mathcal{I}_{\text{TracIn- CP}}(z_{i},z)=\sum_{t\in\mathcal{T}_{\text{CP}}}\eta_{t}\nabla_{\theta}L_{t}(z_{i})^{\top}\nabla_{\theta}L_{t}(z)$ (8) In words, this approximation computes what the loss reduction _would have been_ at each $t\in\mathcal{T}_{\text{CP}}$ under 1) the _simplifying assumption_ that the loss function is locally linear in a neighborhood around $\theta_{t}$, and 2) _if_ we had taken a gradient step on example $z_{i}$ at step $t$ (note that $t$ is an arbitrary checkpoint, so the actual training example encountered at step $t$ was probably not $z_{i}$). We therefore call this approximation the _hypothetical loss reduction_ , to contrast it with the _actual loss reduction_. We offer a derivation of Equation 7 to support this interpretation. First, we write the loss as $L_{t}(z)=L(z,\theta_{t})$ to explicitly acknowledge the loss’s dependence on $\theta_{t}$, the model parameters at step $t$. Next, we approximate $L(z,\theta)$ as a linear function of $\theta$ in the neighborhood of $\theta_{t}$, using the standard first-order Taylor series approximation: $L(z,\theta)\approx L(z,\theta_{t})+(\theta-\theta_{t})^{\top}\nabla L(z,\theta_{t})$ (9) Then, we use Equation 9 to approximate $L(z,\theta_{t+1})$, the loss at time $t+1$. If we had hypothetically taken a gradient step on example $z_{i}$ at time $t$, then $\theta_{t+1}=\theta_{t}-\eta_{t}\nabla_{\theta}L(z_{i},\theta_{t})$. Plugging this into Equation 9, we get: $\displaystyle L(z,\theta_{t+1})$ $\displaystyle=L(z,\theta_{t}-\eta_{t}\cdot\nabla_{\theta}L(z_{i},\theta_{t}))$ $\displaystyle\approx L(z,\theta_{t})-\eta_{t}\cdot\nabla_{\theta}L(z_{i},\theta_{t})^{\top}\nabla_{\theta}L(z,\theta_{t})$ $\displaystyle\overset{\text{def}}{=}\tilde{L}_{t+1}(z)$ We call this approximation the _hypothetical loss_ at time $t+1$, and denote it $\tilde{L}_{t+1}(z)$ (note the tilde). Returning to the original formula for TracIn-Ideal in Equation 5, we simply replace the true loss $L_{t}(z)$ with the hypothetical loss $\tilde{L}_{t}(z)$, and we see that the result is TracIn-CP: $\displaystyle L_{t-1}(z)-L_{t}(z)$ $\displaystyle\approx L_{t-1}(z)-\tilde{L}_{t}(z)$ $\displaystyle=L_{t-1}(z)-\left(L_{t-1}(z)-\eta_{t-1}\nabla_{\theta}L_{t-1}(z_{i})^{\top}\nabla_{\theta}L_{t-1}(z)\right)$ $\displaystyle=\eta_{t-1}\nabla_{\theta}L_{t-1}(z_{i})^{\top}\nabla_{\theta}L_{t-1}(z)$ In conclusion, TracIn-CP is the same as TracIn-Ideal, but where the loss $L_{t}(z)$ has been replaced by the hypothetical loss $\tilde{L}_{t}(z)$. In isolation, the hypothetical loss is not actually cheaper to compute than the actual loss. However, it has one key advantage. The actual loss $L_{t}(z)$ must be recorded _while training is happening_ , at every step $t$ where training example $z_{i}$ is actually encountered. In contrast, the hypothetical loss only requires us to save model checkpoints at regular intervals. Then, at any later time, we can use each checkpoint to compute the hypothetical loss for any test example $z$, after a training step on any example $z_{i}$. In Appendix A.3, we provide a deeper analysis of the difference in computational cost between computing actual and hypothetical losses. Hypothetical losses can also be incorporated into Simfluence-Linear. 
Starting with the Simfluence-Linear objective in Equation 2, we can replace the actual loss $L_{t}(z)$ with $\tilde{L}_{t}(z)$, and also only sum over time steps where checkpoints were saved: $\mathcal{O}_{\text{hypo}}(A,B)=\sum_{t\in\mathcal{T}_{\text{CP}}}\left(\tilde{L}_{t}(z)-\hat{L}_{t}(z)\right)^{2}+\lambda(\|A\|^{2}+\|B\|^{2})$ (10) We call this Hypothetical Simfluence-Linear, and it shares the same advantages as TracIn-CP. We can apply the same change to our ablation, Simfluence-Additive. Recall the closed-form solution to Simfluence-Additive in Equation 6. If we replace true losses with hypothetical losses and only sum over saved checkpoints, it becomes: $\hat{B}_{i}=-\frac{1}{\mathcal{T}_{\text{CP}}}\sum_{t\in\mathcal{T}_{\text{CP}}}L_{t}(z)-\tilde{L}_{t+1}(z)=-\frac{1}{\mathcal{T}_{\text{CP}}}\sum_{t\in\mathcal{T}_{\text{CP}}}\eta_{t}\nabla_{\theta}L_{t}(z_{i})^{\top}\nabla_{\theta}L_{t}(z)$ (11) Note that $\hat{B}_{i}$ matches $\mathcal{I}_{\text{TracIn-CP}}(z,z_{i})$ up to a normalization constant. Hence, TracIn-CP is equivalent to Hypothetical Simfluence-Additive. For this reason, TracIn-CP can be viewed as a purely additive simulator with parameters $B_{i}$ as defined above. ### 4.3 Influence functions Influence functions were first developed in the context of robust statistics (Hampel, 1974; Cook & Weisberg, 1980) and later adapted to deep learning by Koh & Liang (2017). They model the influence of example $z_{i}$ on example $z$ as: $\mathcal{I}_{\text{inf- fns}}(z,z_{i})=\nabla_{\theta}L(z_{i},\theta_{T})^{\top}H^{-1}_{\theta_{T}}\nabla_{\theta}L(z,\theta_{T})$ where $H_{\theta_{T}}$ is the Hessian of the training loss at the final checkpoint, $\theta_{T}$. To draw a connection between Simfluence and influence functions, we continue to build on the notion of _hypothetical training steps_ from the previous section. Let $\theta_{T}$ be the final model checkpoint. Then, imagine a hypothetical training step on example $z_{i}$. For first-order gradient descent, the new parameters would be $\theta_{T+1}=\theta_{T}-\eta_{T}\nabla_{\theta}L(z_{i},\theta_{T})$. But for second-order gradient descent (e.g. Newton’s method), the parameters would be: $\theta_{T+1}=\theta_{T}-H^{-1}_{\theta_{T}}\nabla_{\theta}L(z_{i},\theta_{T})$ If we plug this into the Taylor series approximation from Equation 9, then the hypothetical loss at time $T+1$ would be: $\tilde{L}_{T+1}(z)=L(z,\theta_{T})-\nabla_{\theta}L(z_{i},\theta_{T})^{\top}H^{-1}_{\theta_{T}}\nabla_{\theta}L(z,\theta_{T})$ (12) Similar to the previous section, we can replace true observed losses with second-order hypothetical losses in our objective function for Simfluence- Additive. Then, the closed form solution (following Equation 6) is: $\hat{B}_{i}=-\nabla_{\theta}L(z_{i},\theta_{T})^{\top}H^{-1}_{\theta_{T}}\nabla_{\theta}L(z,\theta_{T})$ Each $-\hat{B}_{i}$ is exactly equal to the influence score defined by influence functions, $\mathcal{I}_{\text{inf-fns}}(z,z_{i})$. Hence, Influence functions are equivalent to Hypothetical Simfluence-Additive when using second-order hypothetical losses. As such, Hessian-based influence also models only the additive terms in our simulation. For our evaluations, we focus on comparisons to first-order methods (TracIn), as these have been shown to scale better with model and dataset size (Yeh et al., 2018; Koh & Liang, 2017; Søgaard et al., 2021) and the integration over timesteps aligns more closely with our time-series simulation paradigm. 
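As a concrete summary of the correspondences drawn in this section, the following minimal sketch (illustrative only, not the original implementations) computes the three scores from recorded quantities, following Equations 5, 6, and 8; the gradient vectors passed to `tracin_cp` are assumed to be precomputed and flattened.

```python
import numpy as np

def tracin_ideal(loss_before, loss_after):
    """TracIn-Ideal (Equation 5): total actual loss reduction over the steps in T_i."""
    return float(np.sum(np.asarray(loss_before) - np.asarray(loss_after)))

def simfluence_additive_B(loss_before, loss_after):
    """Closed-form B_i of Simfluence-Additive (Equation 6): the negative mean loss
    reduction over T_i, i.e. TracIn-Ideal rescaled by -1/|T_i|."""
    return float(-np.mean(np.asarray(loss_before) - np.asarray(loss_after)))

def tracin_cp(train_grads, test_grads, learning_rates):
    """TracIn-CP (Equation 8): sum over checkpoints of the hypothetical loss reduction
    eta * grad L(z_i) . grad L(z), using cached per-checkpoint gradients."""
    return float(sum(eta * np.dot(g_i, g_z)
                     for eta, g_i, g_z in zip(learning_rates, train_grads, test_grads)))
```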
## 5 Experiments Now that we have presented our proposed approach (Simfluence-Linear) and prior TDA methods as training run simulators, we can evaluate and compare their simulation accuracy using the metrics defined in Section 2.2: all-steps MSE (which measures a simulator’s accuracy in predicting an example’s loss at each step of training) and final-step Spearman’s $\rho$ (which measures a simulator’s ability to predict the relative ordering of losses among test examples at the end of training). Our experiments simulate large language model (LLM) fine-tuning on different datasets and training methods, described below. ##### LLM fine-tuning methods and models. We consider two different LLM fine-tuning methods: standard full-model tuning where all model parameters are updated, and parameter-efficient tuning where only a subset of model parameters are updated. For standard full-model tuning, we use either T0 3B (Sanh et al., 2022) or T5 LM-Adapted XL (T5-LMA; Lester et al., 2021). For few-shot parameter-efficient tuning, we apply the IA3 method of Liu et al. (2022) to T0 3B. We use a batch size of 4 for all training runs. ##### Datasets. We perform LLM fine-tuning on three datasets: Recognizing Textual Entailments (RTE; Dagan et al., 2006), Choice of Plausible Alternatives (COPA; Roemmele et al., 2011), and Winogrande (Sakaguchi et al., 2021). These datasets are well- studied in the literature, and all tasks of their categories (natural language inference, next sentence prediction, and coreference resolution) are specifically held out from the instruction tuning mixture of T0. For RTE, the model must generate either “Yes” or “No”. For COPA and Winogrande, the model is presented with multiple free-text options, and must generate the correct option (which sentence should be the next sentence or which referent does a pronoun refer to) — unlike RTE, these options do not come from a fixed set of labels. ##### Training runs. For RTE and COPA, we consider a few-shot setting. For each run, we train on 64 examples randomly selected from a fixed pool of 100 examples, such that each training run involves a different set of 64 examples. Each run is 4 epochs, and examples are randomly shuffled for each epoch. For Winogrande, we study a non-few-shot setting. For each run, we train on 1024 examples randomly selected from a fixed pool of 1536 examples. Each run is 1.5 epochs, with examples randomly shuffled for each epoch. For each dataset and fine-tuning method, we perform 32 training runs (as described above). We then randomly split these training runs into $\mathcal{R}_{\text{past}}$ (22 runs) and $\mathcal{R}_{\text{future}}$ (10 runs). We use $\mathcal{R}_{\text{past}}$ for training our simulators: 20 runs for fitting simulator parameters $\Omega=(A,B)$, and 2 runs as a validation set for finding the best setting of $\lambda$, the L2-regularization hyperparameter. We use $\mathcal{R}_{\text{future}}$ as held-out runs for evaluation: we report simulation accuracy metrics on this set in Tables 1 and 2. ##### Adjustments for TracIn-CP. In Section 4, we noted that TracIn-CP uses the approximation $L_{t}(z)-L_{t+1}(z)\approx\eta_{t}\nabla_{\theta}L_{t}(z_{i})^{\top}\nabla_{\theta}L_{t}(z)$. 
This approximation assumes that each training step applies vanilla gradient descent with learning rate $\eta_{t}$ — this assumption is actually false in most LLM fine-tuning setups, where training steps typically use Adam (Kingma & Ba, 2014) or Adafactor (Shazeer & Stern, 2018), which produce parameter updates whose magnitudes differ significantly from those of vanilla gradient descent. For this reason, we empirically observed that the loss trajectories predicted by TracIn-CP would often have a reasonable “shape”, but the wrong scale. This did not affect our Spearman’s $\rho$ results (which are only sensitive to relative orderings), but did result in very high MSE. To strengthen TracIn-CP, we rescale its predicted loss trajectories by an optimal factor $\sigma$, chosen to minimize $\sum_{t=1}^{T}(\sigma\hat{L}_{t}(z)-L_{t}(z))^{2}$. Note that this gives TracIn-CP an unfair advantage over other methods, since the scaling factor depends on the ground truth losses, $L_{1:T}$. Our experiments show that Simfluence-Linear still outperforms TracIn-CP, even when it is optimally rescaled.

Another issue with TracIn-CP stems from its memory requirements. TracIn-CP requires the user to compute and store model gradients for every example at every checkpoint. For LLMs with billions of parameters, saving full model gradients for every training example requires a prohibitive amount of storage. Hence, most applications of TracIn-CP resort to approximations such as only saving gradients for one particular layer of the LLM. To avoid error introduced by such approximations, we only evaluate TracIn-CP for parameter-efficient tuning. In this setting, only a small number of parameters are being updated, so the gradients to store are orders of magnitude smaller. In contrast, Simfluence does not depend on model checkpoints or anything that scales with model size, so it is equally applicable to all fine-tuning methods.

| Task | TDA Method | All-steps Mean Squared Error | Final-step Spearman’s $\rho$ |
| --- | --- | --- | --- |
| COPA | TracIn-CP (10 ckpts) | $5.873_{\pm 0.307}$ | $0.448_{\pm 0.079}$ |
| COPA | TracIn-CP (all steps) | $5.792_{\pm 0.331}$ | $0.469_{\pm 0.077}$ |
| COPA | Simfluence-Additive (TracIn-Ideal) | $2.506_{\pm 0.491}$ | $0.786_{\pm 0.065}$ |
| COPA | Simfluence-Multiplicative | $1.557_{\pm 0.469}$ | $0.763_{\pm 0.046}$ |
| COPA | Simfluence-Linear | $\mathbf{1.503_{\pm 0.494}}$ | $\mathbf{0.886_{\pm 0.033}}$ |
| RTE | TracIn-CP (10 ckpts) | $3.317_{\pm 0.323}$ | $0.040_{\pm 0.202}$ |
| RTE | TracIn-CP (all steps) | $3.733_{\pm 0.275}$ | $0.034_{\pm 0.200}$ |
| RTE | Simfluence-Additive (TracIn-Ideal) | $3.096_{\pm 3.161}$ | $0.543_{\pm 0.288}$ |
| RTE | Simfluence-Multiplicative | $5.818_{\pm 14.311}$ | $0.599_{\pm 0.182}$ |
| RTE | Simfluence-Linear | $\mathbf{0.819_{\pm 1.222}}$ | $\mathbf{0.887_{\pm 0.080}}$ |

Table 1: Comparison between Simfluence and TracIn-CP. Results are averaged over 10 held-out test runs. Lower MSE is better, and higher Spearman’s $\rho$ is better. See Section 2.2 for the full definitions of the metrics.
| Task | Fine-tuning Method | Language Model | Simfluence Model | All Steps Mean Squared Error | Final Step Spearman’s $\rho$ |
| --- | --- | --- | --- | --- | --- |
| COPA | FMT | T0 | additive | $0.296_{\pm 0.163}$ | $0.278_{\pm 0.141}$ |
| COPA | FMT | T0 | multiplicative | $1.582_{\pm 4.274}$ | $\mathbf{0.629_{\pm 0.058}}$ |
| COPA | FMT | T0 | linear | $\mathbf{0.272_{\pm 0.425}}$ | $\mathbf{0.628_{\pm 0.065}}$ |
| COPA | FMT | T5-LMA | additive | $0.801_{\pm 0.216}$ | $0.368_{\pm 0.117}$ |
| COPA | FMT | T5-LMA | multiplicative | $0.240_{\pm 0.158}$ | $\mathbf{0.672_{\pm 0.079}}$ |
| COPA | FMT | T5-LMA | linear | $\mathbf{0.175_{\pm 0.113}}$ | $0.565_{\pm 0.085}$ |
| COPA | IA3 | T0 | additive | $2.506_{\pm 0.491}$ | $0.786_{\pm 0.065}$ |
| COPA | IA3 | T0 | multiplicative | $1.557_{\pm 0.469}$ | $0.763_{\pm 0.046}$ |
| COPA | IA3 | T0 | linear | $\mathbf{1.503_{\pm 0.494}}$ | $\mathbf{0.886_{\pm 0.033}}$ |
| RTE | FMT | T0 | additive | $7.385_{\pm 7.696}$ | $0.451_{\pm 0.374}$ |
| RTE | FMT | T0 | multiplicative | $6.818_{\pm 5.063}$ | $0.609_{\pm 0.233}$ |
| RTE | FMT | T0 | linear | $\mathbf{6.709_{\pm 14.756}}$ | $\mathbf{0.813_{\pm 0.116}}$ |
| RTE | FMT | T5-LMA | additive | $14.531_{\pm 8.214}$ | $0.122_{\pm 0.193}$ |
| RTE | FMT | T5-LMA | multiplicative | $2.866_{\pm 3.141}$ | $0.086_{\pm 0.087}$ |
| RTE | FMT | T5-LMA | linear | $\mathbf{2.290_{\pm 2.575}}$ | $\mathbf{0.399_{\pm 0.266}}$ |
| RTE | IA3 | T0 | additive | $3.096_{\pm 3.161}$ | $0.543_{\pm 0.288}$ |
| RTE | IA3 | T0 | multiplicative | $5.818_{\pm 14.311}$ | $0.599_{\pm 0.182}$ |
| RTE | IA3 | T0 | linear | $\mathbf{0.819_{\pm 1.222}}$ | $\mathbf{0.887_{\pm 0.080}}$ |
| Winogrande | FMT | T5-LMA | additive | $7.060_{\pm 1.961}$ | $0.256_{\pm 0.156}$ |
| Winogrande | FMT | T5-LMA | multiplicative | $1.496_{\pm 0.400}$ | $0.339_{\pm 0.190}$ |
| Winogrande | FMT | T5-LMA | linear | $\mathbf{0.910_{\pm 0.104}}$ | $\mathbf{0.383_{\pm 0.151}}$ |

Table 2: Quality of fit between our simulator’s predicted losses and the ground-truth losses of 10 held-out runs. Standard deviation is reported after the $\pm$ sign. Simfluence-Additive is equivalent to TracIn-Ideal (Pruthi et al., 2020; see Section 4 for a proof). FMT = full-model tuning. IA3 = parameter-efficient tuning (Liu et al., 2022).

##### Results.

Table 1 shows our primary results: a comparison of our proposed method (Simfluence-Linear) and its ablations (Simfluence-Additive and Simfluence-Multiplicative) to existing methods (TracIn-CP and TracIn-Ideal), evaluated on how well their simulated loss trajectories match the loss trajectories of real training runs. For these results, we used IA3 parameter-efficient tuning on T0 3B. Following the typical usage of TracIn-CP, we select 10 checkpoints for TracIn-CP over which to accumulate losses. However, as noted in Section 4.2, accumulating losses over just 10 checkpoints (rather than all steps) is an approximation that may introduce error. Therefore, we also evaluate TracIn-CP in its best-case scenario where we accumulate losses over all training steps of all 20 training runs. Table 1 yields several important insights. First, all Simfluence variants outperform all TracIn variants. They have smaller MSE, meaning they predict loss trajectories better, and they have much higher Spearman’s $\rho$, meaning they predict the final loss of each example better. As noted in Section 4, the only difference between Simfluence-Additive and TracIn-CP (all steps) is that the former uses _actual observed losses_ while the latter uses _hypothetical losses_ estimated from saved model checkpoints. Both are purely additive models of influence. Therefore, the significant gap in quality between these two approaches can be attributed to the approximation error introduced by hypothetical losses. Second, we see that Simfluence-Linear strongly outperforms both Simfluence-Additive (equivalent to TracIn-Ideal) and Simfluence-Multiplicative.
This shows that it is important to model _both additive and multiplicative_ influence. This improvement is also qualitatively visible: Figure 2 shows that Simfluence-Linear (green lines) predicts the shape of true held-out loss trajectories (blue lines) better than either Simfluence-Additive or Simfluence-Multiplicative (gray lines with plus/multiply signs). Table 2 continues to evaluate Simfluence on a wider range of tasks (RTE, COPA, Winogrande), language models (T0, T5), fine-tuning methods (standard full-model tuning, parameter-efficient tuning), and numbers of examples (few-shot, many-shot). Again, Simfluence-Linear gives significant gains over both Simfluence-Multiplicative and Simfluence-Additive in nearly all settings, reinforcing the importance of modeling both additive and multiplicative influence. Interestingly, when we compare the two ablations (Simfluence-Additive and Simfluence-Multiplicative), neither is strictly better than the other across all setups, suggesting that influence may be more additive or more multiplicative depending on the task or dataset. Next, we restrict our attention to just Simfluence-Linear. We find that simulation accuracy varies across setups. If we focus on all-steps mean squared error, there does not appear to be a clear trend. On the other hand, if we focus on final-step Spearman’s $\rho$, we find that Simfluence-Linear appears to be best at simulating T0 IA3 fine-tuning (roughly 0.88 Spearman on both COPA and RTE), does a little worse on T0 full-model tuning (Spearman ranges between 0.628 and 0.813), and does worst on T5-LMA full-model tuning (Spearman ranges between 0.383 and 0.565). Further research is needed to study and explain these differences.

Figure 2: Qualitative examples of Simfluence’s predicted loss trajectories on the loss of one random test example in one run; panel (a) shows COPA and panel (b) shows RTE.

## 6 Related work

As described in Section 4, our method is closely related to other TDA methods for estimating the influence of training examples on test predictions. Prior methods use gradients for tractability; in this work, we explore settings where that approximation is not needed and where we can directly measure loss deltas from repeated training runs. Because our method is based on the full loss trajectory over the course of training, it bears the closest relationship to TracIn (Pruthi et al., 2020), in contrast to Koh & Liang (2017), Schioppa et al. (2022), and Guo et al. (2021), which use Hessian-based approximations on the final trained model. Recently, these methods have been applied to explain predictions and identify data artifacts in a number of NLP tasks, including NLI (Koh et al., 2019; Han et al., 2020; Pezeshkpour et al., 2022) and language model pretraining (Han & Tsvetkov, 2022). Very closely related to our work is Søgaard et al. (2021), which observes that existing influence methods are a poor approximation of leave-one-out accuracy and do not account for training data order, to which they are highly sensitive. Additionally, Koh et al. (2019) study influence in the group setting where multiple training points are left out, and find high correlation between predicted and actual effects, although the actual errors are large and the predicted effects tend to be an underestimate. Han & Tsvetkov (2021) evaluate using influence of specific examples as a training objective, in order to guide the model away from reliance on spurious correlations. In work contemporaneous to ours, Yang et al.
(2023) also use influence functions to guide targeted interventions, using them as a heuristic to identify training examples to remove in order to flip a target prediction, although they only evaluate with a convex model (logistic regression). While the above training data attribution methods start from an interpretability perspective, Simfluence can also be viewed as a simulation of the training process, and in this respect it is related to more general work on the learning dynamics of deep networks. Specifically, Simfluence bears resemblance to prior work on curriculum design; see Wang et al. (2021) for an overview. For example, Kim & Choi (2018) and Fan et al. (2017) use deep reinforcement learning in order to select an optimal training curriculum, and Jiang et al. (2018) train an auxiliary neural network model in order to improve the primary model’s generalization performance. Similar to our approach, Swayamdipta et al. (2020) use a model’s loss on and confidence in individual training examples in order to “map” the training set as a whole, and provide insights regarding which examples best support, e.g., out-of-distribution generalization. In general, prior work on curricula focuses on characterizing “easy” and “hard” examples, with the goal of improving overall model performance. In contrast, our focus is on interpretability and training data attribution for specific predictions at test time.

## 7 Limitations and future work

##### Parameter-efficiency.

In Section 3, we noted that Simfluence-Linear is not particularly data-efficient: each simulator must learn two parameters ($A_{i}$ and $B_{i}$) for every training example of interest, which ultimately requires us to observe at least $2n$ training steps if we wish to simulate $n$ training examples. In future work, we hope to explore more data-efficient, featurized simulators — for example, instead of learning a separate simulator for every test example, and learning separate parameters $(A_{i},B_{i})$ for every training example, we could model $\alpha(c_{t})$ and $\beta(c_{t})$ as: $\alpha(c_{t})=\sum_{i\in c_{t}}\Phi(z_{i})^{\top}\Phi(z)\qquad\beta(c_{t})=\sum_{i\in c_{t}}\Psi(z_{i})^{\top}\Psi(z)$ where $\Phi(z)$ and $\Psi(z)$ are neural network encoders that map any example to a low-dimensional vector. (Like TracIn-CP, we are now modeling influence as a dot-product of low-dimensional vectors, but the vectors are no longer loss gradients.) More generally, any kind of learned model can be used to parameterize both $\alpha(c_{t})$ and $\beta(c_{t})$. This may increase the generalization of the simulator, as well as its sample efficiency.

##### Handling previously unseen examples.

The above proposal also overcomes another current limitation of Simfluence-Linear: it cannot simulate training runs that include examples which have never been seen before in $\mathcal{R}_{\text{past}}$. This problem is eliminated if we switch to a featurized representation of examples, proposed above.

##### Expressive power.

We note that while Simfluence-Linear can model redundancy, it still cannot model other _supermodular interactions_ between examples. For example, if example $z_{1}$ teaches a model that “Paris is in France”, and a later example $z_{2}$ teaches a model that “France is in Europe”, then the two examples combined may be enough for a model to learn that “Paris is in Europe”, while neither alone would be sufficient.
For this, we would need to move beyond a simple Markov model to something that allows for longer-range interactions between examples — a direction we leave for future work. There is also the more general point that Simfluence-Linear oversimplifies the real dynamics of training. In reality, the loss at each time step is governed by the model’s underlying parameters, optimizer variables and learning rate — aspects that we have ignored in Simfluence-Linear. While these are limitations of Simfluence-Linear, they are not limitations of the general _training run simulation_ paradigm (Simfluence): future work could explore a wide range of time series models to more accurately simulate training. The only requirements are that the simulator should run much faster than actual training, and should be learnable from a modest number of previously observed runs. #### Acknowledgments We would like to thank Daphne Ippolito, Deepak Ramachandran, Kathy Meier- Hellstern, Arun Chaganty and Raphael Hoffmann for their insightful feedback on the paper. ## References * Bengio et al. (2009) Yoshua Bengio, Jérôme Louradour, Ronan Collobert, and Jason Weston. Curriculum learning. In _Proceedings of the 26th annual international conference on machine learning_ , pp. 41–48, 2009. * Brown et al. (2020) Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. _Advances in neural information processing systems_ , 33:1877–1901, 2020. * Cook & Weisberg (1980) R Dennis Cook and Sanford Weisberg. Characterizations of an empirical influence function for detecting influential cases in regression. _Technometrics_ , 22(4):495–508, 1980. * Dagan et al. (2006) Ido Dagan, Oren Glickman, and Bernardo Magnini. The pascal recognising textual entailment challenge. In _Machine Learning Challenges. Evaluating Predictive Uncertainty, Visual Object Classification, and Recognising Tectual Entailment: First PASCAL Machine Learning Challenges Workshop, MLCW 2005, Southampton, UK, April 11-13, 2005, Revised Selected Papers_ , pp. 177–190. Springer, 2006. * Deng et al. (2009) Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In _2009 IEEE conference on computer vision and pattern recognition_ , pp. 248–255. Ieee, 2009. * Fan et al. (2017) Yang Fan, Fei Tian, Tao Qin, Jiang Bian, and Tie-Yan Liu. Learning what data to learn. _arXiv preprint arXiv:1702.08635_ , 2017. * Fujishige (2005) Satoru Fujishige. _Submodular functions and optimization_. Elsevier, 2005. * Guo et al. (2021) Han Guo, Nazneen Rajani, Peter Hase, Mohit Bansal, and Caiming Xiong. Fastif: Scalable influence functions for efficient model interpretation and debugging. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pp. 10333–10350, 2021. * Halevy et al. (2009) Alon Halevy, Peter Norvig, and Fernando Pereira. The unreasonable effectiveness of data. _IEEE intelligent systems_ , 24(2):8–12, 2009\. * Hampel (1974) Frank R Hampel. The influence curve and its role in robust estimation. _Journal of the american statistical association_ , 69(346):383–393, 1974. * Han & Tsvetkov (2021) Xiaochuang Han and Yulia Tsvetkov. Influence tuning: Demoting spurious correlations via instance attribution and instance-driven updates. In _Findings of the Association for Computational Linguistics: EMNLP 2021_ , pp. 
4398–4409, Punta Cana, Dominican Republic, November 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.findings-emnlp.374. URL https://aclanthology.org/2021.findings-emnlp.374. * Han & Tsvetkov (2022) Xiaochuang Han and Yulia Tsvetkov. Orca: Interpreting prompted language models via locating supporting data evidence in the ocean of pretraining data. _arXiv preprint arXiv:2205.12600_ , 2022. * Han et al. (2020) Xiaochuang Han, Byron C Wallace, and Yulia Tsvetkov. Explaining black box predictions and unveiling data artifacts through influence functions. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pp. 5553–5563, 2020. * Hoerl & Kennard (1970) Arthur E Hoerl and Robert W Kennard. Ridge regression: Biased estimation for nonorthogonal problems. _Technometrics_ , 12(1):55–67, 1970. * Jiang et al. (2018) Lu Jiang, Zhengyuan Zhou, Thomas Leung, Li-Jia Li, and Li Fei-Fei. Mentornet: Learning data-driven curriculum for very deep neural networks on corrupted labels. In _International conference on machine learning_ , pp. 2304–2313. PMLR, 2018. * Kaplan et al. (2020) Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models. _arXiv preprint arXiv:2001.08361_ , 2020. * Kim & Choi (2018) Tae-Hoon Kim and Jonghyun Choi. Screenernet: Learning self-paced curriculum for deep neural networks. _arXiv preprint arXiv:1801.00904_ , 2018. * Kingma & Ba (2014) Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. _arXiv preprint arXiv:1412.6980_ , 2014. * Koh & Liang (2017) Pang Wei Koh and Percy Liang. Understanding black-box predictions via influence functions. In _International conference on machine learning_ , pp. 1885–1894. PMLR, 2017. * Koh et al. (2019) Pang Wei W Koh, Kai-Siang Ang, Hubert Teo, and Percy S Liang. On the accuracy of influence functions for measuring group effects. _Advances in neural information processing systems_ , 32, 2019. * Lester et al. (2021) Brian Lester, Rami Al-Rfou, and Noah Constant. The power of scale for parameter-efficient prompt tuning. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pp. 3045–3059, Online and Punta Cana, Dominican Republic, November 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.emnlp-main.243. URL https://aclanthology.org/2021.emnlp-main.243. * Liu et al. (2022) Haokun Liu, Derek Tam, Mohammed Muqeeth, Jay Mohta, Tenghao Huang, Mohit Bansal, and Colin Raffel. Few-shot parameter-efficient fine-tuning is better and cheaper than in-context learning. _arXiv preprint arXiv:2205.05638_ , 2022. * Pezeshkpour et al. (2022) Pouya Pezeshkpour, Sarthak Jain, Sameer Singh, and Byron C Wallace. Combining feature and instance attribution to detect artifacts. In _Findings of the Association for Computational Linguistics: ACL 2022_ , pp. 1934–1946, 2022. * Pruthi et al. (2020) Garima Pruthi, Frederick Liu, Mukund Sundararajan, and Satyen Kale. Estimating training data influence by tracking gradient descent. _CoRR_ , abs/2002.08484, 2020. URL https://arxiv.org/abs/2002.08484. * Roemmele et al. (2011) Melissa Roemmele, Cosmin Adrian Bejan, and Andrew S Gordon. Choice of plausible alternatives: An evaluation of commonsense causal reasoning. In _AAAI spring symposium: logical formalizations of commonsense reasoning_ , pp. 90–95, 2011. * Sakaguchi et al. 
(2021) Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. Winogrande: An adversarial winograd schema challenge at scale. _Communications of the ACM_ , 64(9):99–106, 2021\. * Sanh et al. (2022) Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Fevry, Jason Alan Fries, Ryan Teehan, Tali Bers, Stella Biderman, Leo Gao, Thomas Wolf, and Alexander M. Rush. Multitask prompted training enables zero-shot task generalization. 2022\. * Schioppa et al. (2022) Andrea Schioppa, Polina Zablotskaia, David Vilar, and Artem Sokolov. Scaling up influence functions. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 36, pp. 8179–8186, 2022. * Shapley et al. (1953) Lloyd S Shapley et al. A value for n-person games. 1953\. * Shazeer & Stern (2018) Noam Shazeer and Mitchell Stern. Adafactor: Adaptive learning rates with sublinear memory cost. In _International Conference on Machine Learning_ , pp. 4596–4604. PMLR, 2018. * Søgaard et al. (2021) Anders Søgaard et al. Revisiting methods for finding influential examples. _arXiv preprint arXiv:2111.04683_ , 2021. * Swayamdipta et al. (2020) Swabha Swayamdipta, Roy Schwartz, Nicholas Lourie, Yizhong Wang, Hannaneh Hajishirzi, Noah A. Smith, and Yejin Choi. Dataset cartography: Mapping and diagnosing datasets with training dynamics. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pp. 9275–9293, Online, November 2020\. Association for Computational Linguistics. doi: 10.18653/v1/2020.emnlp-main.746. URL https://aclanthology.org/2020.emnlp-main.746. * Wang et al. (2021) Xin Wang, Yudong Chen, and Wenwu Zhu. A survey on curriculum learning. _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , 44(9):4555–4576, 2021. * Yang et al. (2023) Jinghan Yang, Sarthak Jain, and Byron C. Wallace. How many and which training points would need to be removed to flip this prediction? 2023\. doi: 10.48550/ARXIV.2302.02169. URL https://arxiv.org/abs/2302.02169. * Yeh et al. (2018) Chih-Kuan Yeh, Joon Sik Kim, Ian En-Hsu Yen, and Pradeep Ravikumar. Representer point selection for explaining deep neural networks. In _Proc. NeurIPS_ , 2018. ## Appendix A Appendix ### A.1 Closed form solution for learning the parameters of Simfluence-Linear Here, we provide a closed form solution to the Simfluence-Linear learning objective in Equation 2, reproduced here: $\mathcal{O}(A,B)=\sum_{t\in\mathcal{T}}\left(L_{t}(z)-\hat{L}_{t}(z)\right)^{2}+\lambda(\|A\|^{2}+\|B\|^{2})$ First, let us concatenate parameters $A\in\mathbb{R}^{n}$ and $B\in\mathbb{R}^{n}$ into a single vector, $\textbf{w}\in\mathbb{R}^{2n}$. Our approach will then be to rewrite the objective as a standard L2-regularized multivariate linear regression problem: $\mathcal{O}(\textbf{w})=\|\textbf{y}-\textbf{Xw}\|^{2}+\lambda\|\textbf{w}\|^{2}$ (13) where y is a vector, X is a matrix, and $\lambda$ is the same L2 regularization hyperparameter as in Equation 2. 
Once we have it in this form, the optimal value for w is just the standard ridge estimator (Hoerl & Kennard, 1970): $\hat{\textbf{w}}=(\textbf{X}^{\top}\textbf{X}+\lambda\textbf{I})^{-1}\textbf{X}^{\top}\textbf{y}$ where $\textbf{I}$ is a $2n\times 2n$ identity matrix. We now define X and y. First, recall that $\mathcal{T}$ denotes the subset of training steps in $\mathcal{R}_{\text{past}}$ where we recorded the loss of test example $z$ (both before and after the training step). Let $s=1,\dots,S$ index over those steps, so that $\mathcal{T}=\{t_{s}\ \text{for}\ s=1,\dots,S\}$. Now, define $\textbf{y}\in\mathbb{R}^{S}$ to be a vector where each entry $y_{s}=L_{t_{s}}(z)$, and $\textbf{X}=[\textbf{X}^{\alpha}\ \textbf{X}^{\beta}]\in\mathbb{R}^{S\times 2n}$ to be a matrix with submatrices $\textbf{X}^{\alpha}\in\mathbb{R}^{S\times n}$ and $\textbf{X}^{\beta}\in\mathbb{R}^{S\times n}$. The submatrices are defined as follows: $\textbf{X}^{\alpha}_{s,i}=\begin{cases}L_{t_{s}-1}(z)&\text{if }i\in r_{t_{s}}\\ 0&\text{otherwise.}\end{cases}$ $\textbf{X}^{\beta}_{s,i}=\begin{cases}1&\text{if }i\in r_{t_{s}}\\ 0&\text{otherwise.}\end{cases}$ Recall that $i\in r_{t_{s}}$ indicates whether training example $z_{i}$ was consumed at training step $t_{s}$. With these values, one can verify that Equation 13 is equal to Equation 2.

### A.2 Data requirements for Simfluence-Linear

In Section 3.3, we studied the data requirements for Simfluence-Linear when the training batch size is 1. We now turn to the general case of batch size > 1. Again, we approach this question by asking what conditions are necessary to guarantee a unique solution to the Simfluence-Linear training objective. We will use our observations from the previous section, where we formulated Simfluence-Linear as a multivariate linear regression problem (Equation 13). When L2 regularization is enabled ($\lambda>0$), the conditions for a unique solution are simple but not enlightening: Equation 13 always has a unique solution, even when zero training steps are observed. To develop better intuitions, we will study the case where L2 regularization is disabled ($\lambda=0$). Then, Equation 13 has a unique solution if and only if the matrix X has linearly independent column vectors. Note that X has $2n$ column vectors: $n$ column vectors comprising $\textbf{X}^{\alpha}$ and $n$ comprising $\textbf{X}^{\beta}$. Each vector has dimension $S$, the number of training steps that we observed. We will use $\textbf{X}^{\alpha}(i)$ to denote the $i^{th}$ column of $\textbf{X}^{\alpha}$ and similarly for $\textbf{X}^{\beta}(i)$. Finally, note that $\textbf{X}^{\alpha}(i)$ and $\textbf{X}^{\beta}(i)$ are both sparse vectors with the same sparsity pattern: the $s^{th}$ entry of each vector is zero unless example $i$ was present at training step $t_{s}$. We can now describe a few conditions that determine whether X has linearly independent columns (hence guaranteeing a unique solution):

1. We always need to observe at least $S=2n$ training steps. If $S<2n$, then the column vectors could not be linearly independent, since we have more vectors than dimensions.
2. For every training example of interest, we need to observe at least _two_ training steps involving that example (just as in the batch size 1 setting). If not, then $\textbf{X}^{\alpha}(i)$ would be a multiple of $\textbf{X}^{\beta}(i)$, creating linear dependence.
3. We cannot obtain a unique solution if $L_{t}(z)$ is constant over time (for example if training does not change the loss).
This is because column $\textbf{X}^{\alpha}(i)$ would be equal to column $\textbf{X}^{\beta}(i)$ times a scaling factor, $L_{t}(z)$ — again, violating linear independence. The intuition: if the loss does not change, we cannot distinguish between additive versus multiplicative effects. Next, we present a manually designed training curriculum with exactly $S=2n$ training steps, and show that this curriculum is sufficient to obtain a unique solution. We will define a curriculum of just $n$ steps, and then repeat that exact same curriculum to obtain $2n$ steps. To define our $n$-step curriculum, we introduce a “batching matrix”, $\mathbf{Q}$: it is an $n\times n$ binary matrix, where $\mathbf{Q}_{ij}=1$ if the batch at training step $i$ contains example $j$, and 0 otherwise. Furthermore, let $k$ denote the batch size, and let us assume that $n$ is evenly divisible by $k+1$. Then, we define $\mathbf{Q}$ to have the following block-diagonal structure: $\mathbf{Q}=\begin{bmatrix}\mathbf{U}&\mathbf{0}&\cdots&\mathbf{0}\\\ \mathbf{0}&\mathbf{U}&\cdots&\mathbf{0}\\\ \vdots&\vdots&\ddots&\vdots\\\ \mathbf{0}&\mathbf{0}&\mathbf{\cdots}&\mathbf{U}\end{bmatrix}$ where the submatrix $\mathbf{U}$ is a $(k+1)\times(k+1)$ binary matrix with the following structure: $\mathbf{U}=\mathbf{1}\mathbf{1}^{\top}-\mathbf{I}=\begin{bmatrix}0&1&\cdots&1\\\ 1&0&\cdots&1\\\ \vdots&\vdots&\ddots&\vdots\\\ 1&1&\cdots&0\end{bmatrix}.$ where $\mathbf{1}$ denotes a vector of all ones, and $\mathbf{I}$ denotes an identity matrix. In other words, $\mathbf{U}$ is a matrix where all non- diagonal entries are 1, and all diagonal entries are 0. By this construction, each row of $\mathbf{Q}$ sums to $k$, our desired batch size. Furthermore, we can show that $\mathbf{Q}$ is full rank. First, note that $\mathbf{U}$ is full rank, because we can use rank-preserving elementary column operations to convert it into an identity matrix: first, we can sum all columns and divide by $k$ to get a vector of all ones, $\mathbf{1}$. Then, if we multiply each column in $\mathbf{U}$ by $-1$ and add the vector of ones, the result is an identity matrix. Finally, recall that any block-diagonal matrix is full rank if each sub-matrix on its diagonal is full rank (one can apply elementary column operations to convert each sub-matrix into an identity matrix, without affecting any other rows or columns). Hence, $\mathbf{Q}$ is full rank. Now that we have defined our $n$-step curriculum using $\mathbf{Q}$, we can write $\mathbf{X}$ as follows: $\text{{X}}=\begin{bmatrix}\mathbf{X}^{\alpha}&\mathbf{X}^{\beta}\end{bmatrix}=\begin{bmatrix}\text{diag}(L_{1:n})\mathbf{Q}&\mathbf{Q}\\\ \text{diag}(L_{n+1:2n})\mathbf{Q}&\mathbf{Q}\end{bmatrix}$ where $\text{diag}(L_{1:n})$ denotes a diagonal matrix whose diagonal entries are equal to $L_{1}(z),L_{2}(z),\dots,L_{n}(z)$. Next, we will perform several rank-preserving transformations on $\mathbf{X}$. First, we left-multiply by $\begin{bmatrix}\mathbf{Q}^{-1}&\mathbf{0}\\\ \mathbf{0}&\mathbf{Q}^{-1}\end{bmatrix}$: $\displaystyle\begin{bmatrix}\text{diag}(L_{1:n})\mathbf{Q}&\mathbf{Q}\\\ \text{diag}(L_{n+1:2n})\mathbf{Q}&\mathbf{Q}\end{bmatrix}$ $\displaystyle\rightarrow\begin{bmatrix}\text{diag}(L_{1:n})&\mathbf{I}\\\ \text{diag}(L_{n+1:2n})&\mathbf{I}\end{bmatrix}$ This is rank-preserving because the left-multiplier is full rank (it is block- diagonal and $\mathbf{Q}$ is full rank). 
Then, we subtract the lower rows from the upper rows (rank-preserving elementary row operations): $\begin{bmatrix}\text{diag}(L_{1:n})&\mathbf{I}\\ \text{diag}(L_{n+1:2n})&\mathbf{I}\end{bmatrix}\rightarrow\begin{bmatrix}\text{diag}(L_{1:n}-L_{n+1:2n})&\mathbf{0}\\ \text{diag}(L_{n+1:2n})&\mathbf{I}\end{bmatrix}$ Now, let us consider the upper-left corner, $\text{diag}(L_{1:n}-L_{n+1:2n})$. This is a diagonal matrix, so it is full rank if $L_{t}(z)\neq L_{t+n}(z)$ for all $t$. This typically holds, since an example’s loss almost always changes after $n$ steps. Under this additional assumption, we can apply further elementary row operations to convert the upper-left corner into an identity matrix: $\begin{bmatrix}\text{diag}(L_{1:n}-L_{n+1:2n})&\mathbf{0}\\ \text{diag}(L_{n+1:2n})&\mathbf{I}\end{bmatrix}\rightarrow\begin{bmatrix}\mathbf{I}&\mathbf{0}\\ \text{diag}(L_{n+1:2n})&\mathbf{I}\end{bmatrix}$ With identity in the upper-left corner, we can apply more elementary row operations to cancel out the lower-left corner, to obtain a full rank identity matrix: $\begin{bmatrix}\mathbf{I}&\mathbf{0}\\ \text{diag}(L_{n+1:2n})&\mathbf{I}\end{bmatrix}\rightarrow\begin{bmatrix}\mathbf{I}&\mathbf{0}\\ \mathbf{0}&\mathbf{I}\end{bmatrix}$ Since all our operations were rank-preserving, this shows that $\mathbf{X}$ is full rank.

### A.3 Comparing the computational cost of Simfluence-Additive versus TracIn-CP

As noted in Section 4, both Simfluence-Additive and TracIn-CP represent the same additive model of influence. The main difference is that Simfluence-Additive fits a model to _actual observed losses_, while TracIn-CP uses _hypothetical losses_. Let $V_{L}$ denote the computational cost of computing the loss, $L_{t}(z)$, and let $V_{G}$ denote the cost of computing the loss gradient, $\nabla_{\theta}L_{t}(z)$. For many modern neural network implementations, $V_{G}\approx 2V_{L}$. In Simfluence-Additive, we compute an actual loss reduction, $L_{t}(z)-L_{t+1}(z)$, which costs $2V_{L}$. In TracIn-CP, we compute a hypothetical loss reduction, $\nabla_{\theta}L_{t}(z)^{\top}\nabla_{\theta}L_{t}(z_{i})$, which costs $2V_{G}$. As noted in Section 3, Simfluence-Additive requires us to observe roughly one loss reduction per simulator parameter. Each simulator has $n$ parameters, one for each training example. Furthermore, we fit a separate simulator for each test example of interest ($m$ total). Hence, to get a simulator for every test example, modeling every training example, the total cost of Simfluence-Additive is $2nmV_{L}$. To achieve the same goal with TracIn-CP, we must compute a hypothetical loss reduction caused by every training example ($n$) on every test example ($m$), at every checkpoint ($C$). Hence, naively, the total cost of TracIn-CP is $2nmCV_{G}$. However, we can do better than this. Instead of computing the gradient $\nabla_{\theta}L_{t}(z)$ from scratch each time we calculate a hypothetical loss reduction, we can precompute and cache the gradient for every train and test example, just once for each checkpoint. This precomputation costs $(n+m)CV_{G}$. Then, let us assume that the actual dot product between gradients has negligible cost compared to computing the gradient itself. So, TracIn-CP costs approximately $(n+m)CV_{G}$. Now we can compare Simfluence-Additive, $2nmV_{L}$, and TracIn-CP, $(n+m)CV_{G}$.
Making the common assumption that $V_{G}=2V_{L}$, we can see that the cost of the two approaches is the same when $C=nm/(n+m)$: $\text{TracIn-CP}=(n+m)CV_{G}=nmV_{G}=2nmV_{L}=\text{Simfluence-Additive}$ In the common situation where $n\gg m$, we have that $nm/(n+m)\approx m$. So the general conclusion is that Simfluence-Additive becomes more expensive than TracIn-CP when the number of test examples that you wish to simulate ($m$) grows larger than the number of checkpoints $C$ used by TracIn-CP. Finally, Simfluence-Multiplicative has the same cost as Simfluence-Additive, and Simfluence-Linear is just twice the cost of Simfluence-Additive.
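As a quick numeric illustration of this crossover (purely illustrative values for $n$, $m$, and $C$, using the cost formulas above with $V_{G}=2V_{L}$):

```python
# Illustrative cost comparison in units of V_L; n, m, and the C values are made up.
n, m = 10_000, 100                       # training examples, test examples
simfluence_additive = 2 * n * m          # 2nm * V_L
for C in (50, 99, 100, 200):             # number of checkpoints; crossover at nm/(n+m) ~ 99
    tracin_cp = (n + m) * C * 2          # (n+m) * C * V_G = 2(n+m)C * V_L
    cheaper = "TracIn-CP" if tracin_cp < simfluence_additive else "Simfluence-Additive"
    print(f"C={C}: TracIn-CP={tracin_cp:,}  Simfluence-Additive={simfluence_additive:,}  -> {cheaper} is cheaper")
```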
# KGSecConfig: A Knowledge Graph Based Approach for Secured Container Orchestrator Configuration

Mubin Ul Haque1, M. Mehdi Kholoosi2, and M. Ali Babar3

Centre for Research on Engineering Software Technologies (CREST), School of Computer Science and Engineering, The University of Adelaide, Adelaide, Australia; Cyber Security Cooperative Research Centre<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>

###### Abstract

Container Orchestrator (CO) is a vital technology for managing clusters of containers, which may form a virtualized infrastructure for developing and operating software systems. Like any other software system, securing CO is critical, but can be quite a challenging task due to the large number of configurable options. Manual configuration is not only knowledge-intensive and time-consuming, but also error-prone. For automating the security configuration of CO, we propose a novel Knowledge Graph based Security Configuration approach, KGSecConfig. Our solution leverages keyword and learning models to systematically capture, link, and correlate heterogeneous and multi-vendor configuration space in a unified structure for supporting automation of security configuration of CO. We implement KGSecConfig on Kubernetes, Docker, Azure, and VMWare to build a secured configuration knowledge graph. Our evaluation results show 0.98 and 0.94 accuracy for keyword and learning-based secured configuration option and concept extraction, respectively. We also demonstrate the utilization of the knowledge graph for automated misconfiguration mitigation in a Kubernetes cluster. We assert that our knowledge graph based approach can help in addressing several challenges, e.g., security misconfiguration, associated with manually configuring the security of CO.

Keywords- Container, Configuration, Security, Knowledge Graph

## I Introduction

Container Orchestrators (CO), such as Kubernetes [1], Docker-Swarm [2], and Nomad [3], enable system administrators to perform multiple tasks, e.g., scaling and interconnecting a large number of containers seamlessly [4]. Furthermore, organizations gain a significant benefit in deploying containers in clouds across different production environments [5] for rapid software delivery using continuous software engineering. Despite the reported benefits of using CO, security is one of the key concerns while deploying containers in CO [6, 7]. A CO is typically used to manage thousands of containers hosting business-critical services, whose security can be compromised if the CO is not secured. That is why CO security misconfigurations have emerged as one of the biggest concerns of system administrators and software professionals [8]. In a recent survey [9], CO security misconfiguration has been mentioned as the top concern (59%). By exploiting a misconfigured identity management role of a CO in a Capital One system in 2019 [10], hackers accessed 30 GB of application data containing valuable financial and personal information of 106 million people. If the security of a CO is not appropriately configured, it cannot enforce security policies (e.g., restriction of capabilities, accessibility of root files, privileges to run containers) for the underlying clusters on its own, as the default options are not always security-focused [11]. Therefore, system administrators need to manually perform security configuration tasks, such as identifying and updating the default options and implementing the configuration arguments and options involved in the security policy [12].
Moreover, when deploying containers in a cluster, system administrators may need to maintain diverse configuration files for different purposes, such as deployment (e.g., replicas, namespaces), managing credentials (e.g., roles and resources), and observing performance (e.g., audit and log). Each such configuration file consists of several arguments. For example, a deployment configuration file in Kubernetes may consist of 40 configuration arguments [13] and more than 100 options. Besides, a CO has a distributed architecture constituting different components with a vast configuration space, e.g., arguments, options, default values, and types. For example, Kubernetes consists of eight different components with more than 1K configuration arguments across all components, which require manual intervention for configuration [12]. Thus, the process of manually identifying and configuring a vast configuration space and different configuration files from the security perspective in CO is effort-intensive, time-consuming, and error-prone [11, 12]. Therefore, automation is highly desirable for securing the configuration of CO to reduce the required effort and minimize the risk of misconfiguration [7, 14]. However, automation support may need to overcome several challenges, such as configuration data scatteredness, dynamicity, and overload, for automating the security configuration task of CO. We provide a brief explanation of each of these challenges below to help better understand our research’s motivators.

Automation of CO security configuration is a knowledge-intensive task; it is important to know ‘what’ options are required for ‘which’ arguments for each component of a CO. This task is not trivial due to the large number of available configuration options and arguments. In addition, CO is typically integrated with multi-vendor software systems [9], such as Continuous Integration and Delivery or Deployment (CI/CD) tools, e.g., Jenkins; Development and Operations (DevOps) platforms, e.g., Docker; and cloud resource providers, e.g., Azure. Therefore, administrators need to investigate each of the scattered tools’, platforms’, or providers’ configuration spaces for identifying and capturing the secured configuration arguments and options.

CO security configuration options data may dynamically change due to new vulnerabilities, malware, security flaws, and patch releases. For example, the configuration option for the argument ‘imagePullPolicy’ in Kubernetes was expected to change after a Distributed Denial of Service (DDoS) attack on DataDog [11]. Hence, to mitigate the rapidly changing threat landscape, the configuration space needs to be updated continuously, which requires frequent manual monitoring and searching of diverse data sources (e.g., HTML, JSON, or XML).

Cyber security information overload arises as the number and sources of threat advisories keep increasing. Whilst threat advisories contain in-depth information on how attackers target the existing configuration to compromise software systems, it is a cognitively challenging task to separate relevant from irrelevant information. For example, to obtain the secured configuration options for kube api server [15] (one of the eight constituent components of Kubernetes), the relevant documents contain 400 sentences on average, whereas the relevant concept sentences, e.g., reasons and implementation details, are only 30 sentences on average.
There is a need for a unified solution for secured CO configuration, constituting configuration space associated with their secured arguments, options, and concepts of heterogeneous, multi-vendor, and diverse tools, platforms, providers for enabling automation of secured CO configuration. There are a few studies that have discussed the best security practices [7] and defects [14] for Kubernetes. Moreover, Kubernetes usage for monitoring resources and performances had also been reported in [16, 17]. However, there is a general lack of solutions for automating CO security configuration. To address the above identified problem, we propose to build a KG for Secured Configuration (KGSecConfig) that provides a fundamental support for automatic security configuration of CO. KG can provide the unification of configuration space associated with their secured arguments, options, and concepts in terms of entities and relations. For example, configuration options, arguments, types, default values, code snippets (e.g., Infrastructure-as-Code (IaC)) represent entities. Relations connect entities using phrases that describe relationship among entities, such as ‘is’ or ‘has’ relationship. Besides, we propose a keyword and learning-based model to address the dynamicity and overload barriers, which can automatically extract secured configuration knowledge in terms of secured arguments, options, and concepts hidden in large documents. These types of knowledge are populated in the KG for organized structure, so that the encapsulated knowledge can be utilized for automating the security configuration of CO. Our KGSecConfig can capture and link the configuration knowledge through real- time monitoring of multiple, diverse, and heterogeneous configuration space from different data sources to reflect the intricate and evolving security threats. Our KGSecConfig can be utilized for various downstream configuration tasks, such as automated misconfiguration mitigation and interpretation of secured configuration options which can increase the understanding of the reason of enabling/disabling particular arguments. Furthermore, our KGSecConfig can provide a way for visualizing configuration arguments where default values are not security-focused and can help administrators to prioritize their efforts in configuration task. Our paper makes the following three main contributions: * • We are the first, to the best of our knowledge, to propose a keyword and learning-based approach for CO to capture, store and correlate configuration knowledge automatically. * • We build a secured configuration knowledge graph using KGSecConfig on Kubernetes [1], Docker [18], Azure [19], and VMWare [20]. * • Our evaluation shows high accuracy to build the KGSecConfig. Moreover, we demonstrated the effectiveness of KGSecConfig in automated mitigation of misconfiguration of a Kubernetes cluster. The remainder of our paper proceeds as follows. Section II describes the related work. Section III and IV discuss the approach and implementation. The process of the evaluation and results are discussed in Section V. We report the implications and limitations in Sections VI and VII. Finally, our paper concludes in Section VIII with some future directions. ## II Related Work Our research is related to the prior studies that have investigated the security of CO, the configuration studies in software engineering, and cyber security KG. ### II-A Security in Container Orchestrator Shamim et al. 
[7] reported a study of grey literature, e.g., blog posts and tutorials to identify security best practices for Kubernetes. They stated secure usage of Kubernetes requires the implementation of security practices applicable to multiple components within Kubernetes, such as containers [21] and pods [22]. They also advocated for a need of a deep understanding of Kubernetes configurations to implement security practices. Bose et al. [14] conducted a study on open-source software repositories, e.g., GitLab, and identified commits for updating security-related defects in Kubernetes manifests [7]. They applied closed coding [23], a qualitative analysis technique, on the collected commits to determine commits related to a security defect. Moreover, there were studies on the usage of Kubernetes in creating monitoring systems [16] and comparison of performance and resource management [17]. However, our research is different to these studies as we aim to develop an approach which can automatically capture, link, and encapsulate the configuration knowledge of diverse configuration space in a unified KG which can be used for secured configuration of CO. ### II-B Configuration in Software Engineering Sayagh and Hassan [24] identified the appropriate options to configuration related user questions by mining already answered configuration questions on online forums. Jin et al. [25] and Wen et al. [26] identified the appropriate options to a configuration-related question by extracting options whose option names are textually similar to the new user question. Xia et al. [27] predicted if a bug report is related to configuration or not. However, the study [24] focused on the already answered question; thus, there would be a long trail of less discussed configuration options. Moreover, all the configuration options might not be discussed on online forums, and the answered options might not be reliable, as shown by prior studies [28, 29]. Besides, given a bug report, the study [27] determined whether the bug report was a configuration or not, and the studies [25, 26] extracted configuration options from the bug reports. However, our research goal is different as our motivation is not only to extract configuration options but also realize and extract configuration concepts (e.g., reason, implementation details). We specifically intend to encapsulate such extracted configuration knowledge in a structured way in the unified knowledge graph. This unified knowledge graph allows us to correlate the vast configuration space of diverse CO software systems and, afterward, enables us to automate the secured configuration and compliance. ### II-C Cyber Security Knowledge Graph Rastogi et al. [30] proposed a KG for predicting missing information for malware. Pingle et al. [31] proposed a system, RelExt, that would detect relationships and create semantic triples over cyber security text using a deep-learning approach. Piplai et al. [32] extracted the information from malware After Action Reports (AAR) that can be merged to create a KG using RelExt. Mendsaikhan et al. [33] identified the significance of the cyber security text using the deep-learning approach. Whilst the prior studies focused on detecting cyber security text, e.g., attack patterns, vulnerabilities, malware, our study, in contrast, aims to build a KG automatically for secured configuration knowledge for CO. 
In summary, our research, to the best of our knowledge, has provided a unique solution, KGSecConfig, for building a secured configuration knowledge graph automatically for diverse software systems in CO. Our KGSecConfig is expected to minimize the manual efforts required to extract and utilize configuration knowledge for securing CO configuration. ## III Approach Figure 1: Conceptual schema for configuration knowledge Figure 2: Approach for constructing KGSecConfig We propose a Knowledge Graph (KG) based approach to overcome the above- mentioned barrier, i.e., data scatteredness, dynamicity and overload, to automating the configuration of CO. The conceptual schema of our KGSecConfig is shown in Fig. 1; and Fig. 2 shows the different stages of envisioned process for automatically constructing KGSecConfig. In particular, we present how KGSecConfig can be constructed from the natural language text sources by retrieving the entities and relations to form the triples [31, 32] of KGSecConfig. Our approach contains three main modules named Configuration Knowledge Graph (KGConfig) Construction, Security Document Relevancy Estimation, and Security Configuration Concept Classifier. The details of each module are described as follows. ### III-A Configuration Knowledge Graph (KGConfig) Construction The purpose of this module is to create a Knowledge Graph for Configuration (KGConfig) of heterogeneous platforms, cloud resource providers, and tools used in CO to mitigate data scatteredness. We encapsulated configuration arguments, options, types, default values, and descriptions. Configuration arguments and options are indispensable knowledge for performing the configuration task (e.g., ‘RBAC’ is an option in ‘–authorization-mode’ argument for configuring kube api-server). Types and default values are necessary knowledge to verify whether the used types (e.g., strings for ‘–audit-policy-file’ argument, or integer for ‘MinReadySeconds’ argument) and values in the configuration file (e.g., Kubernetes manifest file) are aligned with the required setting. Descriptions are essential knowledge for discovering and understanding the functionality of the configuration argument. In this module, we used different adapters to extract the configuration information, e.g., arguments, options, descriptions, and default options, from official documentation of diverse orchestrators, tools, platforms and built a configuration corpus with the extracted information. Then we applied our schema as shown in Fig. 1 to the configuration corpus to develop KGConfig. We need different adapters (e.g., data scrapers) since the documentation content is distributed in diverse formats such HTML, XML, or JSON. One example of extracting configuration information from documentation and constructing triples to populate KGConfig is illustrated in Fig. 3 and Fig. 4. The extracted arguments, options, types, and default options are considered as the entities and we defined ‘has’ relationship, e.g., ‘hasArgument’, ‘hasOption’, ‘hasType’, ‘hasDefault’ and ‘hasDescription’ among the entities. These entities and relations are important for identifying the configuration syntax and used to formulate keyword-based rules for estimating the relevancy of security documents with configuration (Section III-B). In summary, the input of this module is a set of official documentations, and the output is a KGConfig consisting of configuration entities and corresponding relations. 
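To make this construction step concrete, the following minimal Python sketch shows how one scraped configuration record could be mapped to KGConfig triples using the ‘has’ relations of Fig. 1. The record values, the component name, and the function name are illustrative only and do not reproduce our adapters; the actual schema instantiation proceeds as described above.

```python
# Illustrative sketch: mapping one scraped configuration record to KGConfig
# triples with the 'has*' relations of Fig. 1 (all values are hypothetical).
from typing import Dict, List, Tuple

Triple = Tuple[str, str, str]  # (subject, relation, object)

def record_to_triples(component: str, record: Dict[str, object]) -> List[Triple]:
    """Turn one configuration argument extracted from official docs into triples."""
    arg = str(record["argument"])
    triples: List[Triple] = [(component, "hasArgument", arg)]
    for option in record.get("options", []):
        triples.append((arg, "hasOption", str(option)))
    for relation, key in [("hasType", "type"), ("hasDefault", "default"),
                          ("hasDescription", "description")]:
        if key in record:
            triples.append((arg, relation, str(record[key])))
    return triples

# Hypothetical record mirroring the kube api-server example in the text.
record = {
    "argument": "--authorization-mode",
    "options": ["AlwaysAllow", "RBAC", "Node", "Webhook"],
    "type": "strings",
    "default": "AlwaysAllow",
    "description": "Ordered list of plug-ins to perform authorization.",
}

for triple in record_to_triples("kube-apiserver", record):
    print(triple)
```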
Figure 3: Example of configuration space extraction Figure 4: Triples corresponding to Fig. 3 ### III-B Security Document Relevancy Estimation (SDRE) The purpose of this module is to filter out security documents from configuration point of view to mitigate data dynamicity barrier. Our KGSecConfig extracts security configuration information from publicly available security information-sharing sources. While security information- sharing sources (e.g., National Vulnerability Database (NVD), Security Tool White Paper, Security Bulletin) have grown in popularity [33], the amount of shared security information has also increased tremendously. This security information is shared in text documents and all the documents may not be relevant to configuration. Fig. 5 and Fig. 6 show the examples of two excerpts from Kubernetes Blog [34], where a document is relevant to configuration and other is not relevant to configuration. In Fig. 6, bold text (e.g., configuration argument) represents the reason of a sentence being relevant to configuration. In particular, we labeled a sentence that is relevant with configuration if any word in the sentence is matched with the extracted configuration argument, which is obtained from the previous step (Section III-A). In the same way, we considered a security document is relevant to configuration if there is any sentence being labelled as configuration. We used configuration argument and its associated entities as the keyword for estimating the relevancy of security documents to configuration. In summary, the input of this module is a set of security documents and the output is a set of security configuration documents. Figure 5: Example of a non-configuration document Figure 6: Example of a configuration document ### III-C Security Configuration Concept Classifier (SCCC) The purpose of this module is to realize and extract configuration concepts from security configuration documents to mitigate data overload barrier. We proposed four main classes for security configuration concepts as (i) ‘statement’ concept describes to perform a security configuration task (e.g., Ensure that the ‘–authorization-mode’ argument includes ‘RBAC’); (ii) ‘goal’ concept describes the reason behind the security configuration task (e.g., ‘RBAC’ ensures fine-grained control over the operations…cluster, ‘RBAC’ restricts unauthorized access…server); (iii) ‘action’ concept describes the actionable steps to implement the task (e.g., Edit the API server pod specification file…node); and (iv) ‘other’ concept describes the rest of sentences in the document (e.g., In this article, we will take a deep dive into key Kubernetes security configurations and the recommended best practices). A secured configuration ‘statement’ concept is essential to correlate statements with actionable steps in the knowledge graph. For example, Security Announcement Kubernetes [35] did not provide the actionable steps to mitigate the security issues due to CVE-2020-8557, however, it mentioned the statement (e.g., force containers to drop CAP-DAC-OVERRIDE capabilities in PodSecurityPolicy). In this regard, storing a statement in the knowledge graph can help find actionable steps for implementing secured configuration tasks. Moreover, ‘goal’ concept is essential to demonstrate ‘why’ a particular option should be used or not for security purposes. Dietrich et al. described system administrators suffer from a lack of knowledge in configuration, which is one of the primary reasons for misconfiguration [36]. 
Therefore, the ‘goal’ concept provides a way to interpret the cause of a particular configuration, which can be influential in broadening the configuration knowledge. The ‘action’ concept provides the required low-level implementation details, which are necessary for the machine execution of configuration tasks. Besides, the ‘other’ concept is necessary to filter out irrelevant sentences from the security configuration document. We leveraged learning-based techniques to classify the sentences in security configuration documents automatically. Learning-based techniques are used since they are able to independently adapt to new data by learning from previous computations, e.g., historical data, to produce repeatable and data-driven decisions [37, 38, 39]. From a learning perspective, we designed the problem as a multi-class supervised text classification problem. The components of building a learning model include text pre-processing, model selection, model building, and prediction. Pre-processing. A security configuration document may contain noise (e.g., punctuation, stop-words), which can make the learning model overfit [37], [40]. Therefore, we used state-of-the-art approaches [41] for pre-processing the sentences, e.g., noise removal, stop-word removal, lower casing, and lemmatization. Lemmatization is used to reduce the inflectional and derivational forms of a word to a standard base form [38]. We left the configuration arguments and options intact, e.g., ‘–anonymous-auth’, to preserve the configuration syntax. Model Selection. We used the pre-processed text to perform stratified k-fold cross-validation. Stratification ensures the ratio of each input source is kept throughout the cross-validation [37], avoiding different data distributions across the folds. Our model selection component has two steps: (i) feature engineering, and (ii) model training and validation. Feature engineering is the process whereby textual data is transformed into features to improve the performance of the learning models [42]. In the model training and validation steps, (k-1) folds are used for feature engineering and training a model, while the remaining one is used for validation. The validation performance of a model is the average over the k runs. The model configuration with the highest performance metric is selected as the optimal classifier for the following model building process. Model Building. The model building process uses the pre-processed data to generate a feature model based on the identified feature configuration. The feature model is saved to transform the data for future prediction. Prediction. The prediction process is used both for testing the trained model and for classifying new sentences. In this process, the sentences are first pre-processed and then transformed into a feature set using the saved feature model. Finally, the feature set is used by the saved trained model to determine the class of the sentences. Our SCCC module reduces the manual analysis required for realizing concepts from security configuration documents. In summary, the input of this module is a security configuration document and the output is the sentences classified according to the four configuration concepts. Once we have the classified sentences, we consider all the concepts except the ‘other’ concept for updating the initial KGConfig. We used a graph-traversal algorithm [43] to locate the configuration argument and then added concepts as entities with a ‘has’ relationship as described in Fig.
2 to construct a secure configuration knowledge graph. Moreover, we identified the required options for security configuration mentioned in the concepts by a rule-based approach. The rules are formulated from the available options associated with the particular arguments that had been initially encapsulated in the KGConfig. Fig. 7 shows an example to construct a secure configuration knowledge graph from the identified concepts. Figure 7: Configuration concepts added to KGConfig (Fig. 4) from the text of Fig. 6 To mitigate data scatteredness barrier, we proposed a configuration schema which is used to capture the configuration arguments, descriptions, default, and available options from diverse, heterogeneous, and multi-vendor tools documentation using entities and relations. While the schema can be used for instantiating KG, it can also map data of different formats (e.g., structured, unstructured, or semi-structured) from multiple sources into a common structure (Section III-A). Secondly, we proposed a keyword-based configuration knowledge localization method (Section III-B) to overcome data dynamicity barrier. Thirdly, to mitigate the data overload barrier, we proposed a learning-based model (Section III-C). The proposed learning-based model can automatically adopt new data from diverse sources to keep the configuration updated for mitigating security incidents. ## IV Implementation We applied our knowledge graph construction approach to Kubernetes [1] (container orchestrator), Docker [18] (DevOps platform), Azure [19] (cloud resource provider), and VMware [20] (infrastructure manager) which can be integrated with Kubernetes [44]. We selected the above four software systems due to their large-scale adoption for integrated orchestration service in a software-defined network [44]. We used Beautiful Soup [45], a Python library for parsing HTML and XML documents to crawl the web pages of official documentation for knowledge extraction. In addition, NLTK [46], a Python library for text processing, was used to implement the text pre-processing in descriptive knowledge extraction. Beautiful Soup and NLTK were widely used in software engineering studies involving information retrieval process [47, 48]. We leveraged following security information-sharing sources for secured configuration knowledge extraction. * • Security Announcement Kubernetes: Official site [35] for announcing security adversaries discovered or reported for Kubernetes and its integrated software systems, e.g., Docker or Azure. This site is maintained by Kubernetes security experts and recommended for the Kubernetes users for security task information [1]. * • Internet Artifacts: Shamim et al. proposed a taxonomy for secured installation of Kubernetes clusters and shared a dataset of 101 blog posts [7] from internet. Researchers have acknowledged the value of internet artifacts in deriving security tasks in various domains, such as DevOps [49], CI/CD [50], and testing [51]. * • Security Tool White Paper: Security tools, such as SYSDIG [52] release white papers to secure deployment of container workloads and contain valuable information. The previous researches showed the importance of analyzing white papers for realizing security tasks [53]. We used Vulners database [54], a rich source of security task information for diverse and heterogeneous software systems from more than 50 security tools. Vulners database was also used by other researches [39, 55, 56]. 
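As a concrete illustration of the keyword-based relevancy estimation of Section III-B, a minimal Python sketch is given below: a sentence is labelled configuration-relevant when it contains a known configuration argument with its exact syntax, and a document is relevant when any of its sentences is. It uses the NLTK sentence tokenizer mentioned above; the argument list and document text are hypothetical.

```python
# Minimal sketch of the keyword-based relevancy check (Section III-B),
# preserving exact argument syntax (camel case, hyphenation, dotted names).
from typing import Iterable, List
import nltk  # requires the 'punkt' tokenizer data: nltk.download('punkt')

def configuration_sentences(document: str, arguments: Iterable[str]) -> List[str]:
    """Return the sentences that mention at least one known configuration argument."""
    args = list(arguments)
    return [sentence for sentence in nltk.sent_tokenize(document)
            if any(arg in sentence for arg in args)]  # exact-syntax match

def is_configuration_document(document: str, arguments: Iterable[str]) -> bool:
    """A security document is configuration-relevant if any of its sentences is."""
    return bool(configuration_sentences(document, arguments))

# Hypothetical usage with two arguments taken from KGConfig (Section III-A).
known_arguments = ["--authorization-mode", "imagePullPolicy"]
document = ("Ensure that the --authorization-mode argument includes RBAC. "
            "Attackers scanned exposed clusters last week.")
print(is_configuration_document(document, known_arguments))   # True
print(configuration_sentences(document, known_arguments))     # first sentence only
```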
TABLE I: Description of data set
Source | Official | Internet | Vulners
---|---|---|---
Security-related Document | 42 | 101 | 2187
Security Configuration Document | 39 | 91 | 205

Table I shows the distribution of the documents related to security and configuration. We used our 5,172 arguments as keywords to search for and identify security configuration-related documents. The search was performed in August 2021. We required a labelled dataset for the supervised learning model to extract configuration concepts from the security configuration documents. To the best of our knowledge, there is no labelled dataset for security configuration concept classification. Manual labelling of data is time-consuming and labour-intensive [38]. Therefore, we randomly selected 3,300 sentences from the security configuration documents for manual labelling. Two authors manually and independently labelled the sentences based on our four configuration concepts (Section III-C). We used Cohen’s Kappa [57] to measure the agreement between the two labelers. Cohen’s Kappa is used since the same two labelers rated the set of selected sentences, and it is also used in software engineering studies to report agreement [58, 59]. We obtained a Kappa value of 0.7, indicating substantial agreement [57]. We used the 3,032 sentences that were agreed upon by both labelers for training the model, to reduce labeling bias. Five traditional machine learning classifiers, Logistic Regression (LR), Naive Bayes (NB), Support Vector Machines (SVM), Random Forest (RF), and Extreme Gradient Boosting (XGB), were selected as learning-based models. These classifiers were chosen due to their common use in the literature [60, 61, 62] and in real data science competitions (e.g., Kaggle [63]). The first three (LR, NB, SVM) are single models, whereas the other two (RF, XGB) are ensemble models [37]. In addition, we considered Term Frequency-Inverse Document Frequency (TF-IDF)-based word-level, character-level, and combined word- and character-level features. We also considered NLP features, such as word count, character count, counts of nouns, verbs, adjectives, adverbs, and pronouns, and word density. These features are common and have also been used in previous software engineering studies that involve a classification process [27, 37, 60]. To select the optimal traditional ML models, we applied Bayesian optimization [64] using the hyperopt library [65]. We chose Bayesian optimization due to its robustness against noisy objective function evaluations [66]. We utilized the average Matthews Correlation Coefficient (MCC) of 10-fold cross-validation with stratified sampling [37] and early stopping criteria to select the optimal hyper-parameters. MCC was used to select the optimal model since MCC explicitly considers all classes and has been proclaimed as the best metric for error consideration in prior work [67]. Besides, we implemented our models using scikit-learn [68] and gensim [69]. After concept extraction from the corpus, we used the Breadth-First-Search (BFS) [43] algorithm to identify the configuration argument and then update the KGConfig. BFS is a traditional graph traversal algorithm and is also commonly used in prior knowledge graph research [70].

## V Results and Evaluation

Our KGSecConfig built upon Kubernetes, Docker, Azure, and VMWare consists of 5,172 arguments, 2,774 options, and 23,177 descriptions. We obtained 1,463 arguments and 1,793 options, where 984 arguments are not secured by default and 479 arguments are secured by default.
We conducted a series of experiments to evaluate the effectiveness and feasibility of our KGSecConfig by answering the following Research Questions (RQs). * • RQ1: How effective is a keyword-based method for estimating relevancy of security documents to configuration? Our keyword-based method considers configuration arguments with the exact syntax as mentioned in the software systems. The answer to RQ1 will inform whether the method returns the accurate configuration documents. Accurate configuration documents are important since they are the primary source of secure options and their implementation details. * • RQ2: How effective is our concept classifier module based on machine-learning? Our classifier module aims to classify sentences of the security configuration documents to identify concepts. The answer to RQ2 will reveal how accurately the module predicts in terms of classification. An accurate model is important since the predicted concepts will be further used to build KGSecConfig on top of KGConfig by adding concepts. Besides, it will help organizations to select suitable models in their orchestration service in a software-defined network. * • RQ3: What is the intrinsic quality of the knowledge captured in KGSecConfig? Our KGSecConfig aims to capture, link, and correlate the configuration knowledge of multi-vendor software systems from diverse data sources, which can be further utilized for securing the CO configuration automatically. The answer to RQ3 will reveal the quality, i.e., whether the captured knowledge precisely convey the completeness and correctness of the configuration knowledge. Ascertaining the completeness and correctness is crucial for determine KGSecConfig’s purpose for various downstream utilization. ### V-A Protocol for answering RQs #### V-A1 RQ1 We adopted a sampling method [58] similar to the prior studies [71, 72] to ensure that the ratios observed in the sample are generalized to the population within a specific confidence interval at a certain confidence level. The required sample size is 385 for a confidence interval of 5 at 95% confidence level. Therefore we randomly selected 385 security documents and identified how accurately the keyword-based method could select configuration- related documents. TABLE II: Evaluation metrics Metric | Description ---|--- Accuracy | Percentage of correctly classified instances. Recall | The correctly identified proportion of positive instances. Precision | | The percentage of the detected positive instances that --- were correct. f1-score | The harmonic mean of recall and precision. | Matthews correlation --- coefficient (MCC) | It is correlation coefficient that depicts the performance --- of the classifier by considering all four dimensions in the confusion metric. #### V-A2 RQ2 We used our manually curated dataset of 3,032 sentences (‘statement’ concept 790 sentences, ‘goal’ concept 860 sentences, ‘action’ concept 751 sentences, and ‘other’ concept 631 sentences) to evaluate the ML-models. Besides, we used the evaluation metrics defined in Table II. 80% of the dataset was randomly selected for model selection and building, and 20% for model prediction on unseen sentences. To select the optimal hyper parameter for each model, we performed stratified 10-fold cross-validation. Stratified sampling ensures that the proportion of each source would be kept. 
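For illustration, a minimal scikit-learn sketch of this evaluation setup (combined word- and character-level TF-IDF features, stratified cross-validation, and MCC scoring) is given below. The example sentences, labels, and hyper-parameters are hypothetical and do not correspond to the tuned values reported later in Table III.

```python
# Sketch of the RQ2 setup: word + character TF-IDF features, Logistic Regression,
# stratified k-fold cross-validation scored with the Matthews Correlation Coefficient.
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.metrics import matthews_corrcoef, make_scorer

# Hypothetical labelled sentences for the four concepts (statement/goal/action/other).
sentences = [
    "Ensure that the --authorization-mode argument includes RBAC",      # statement
    "RBAC restricts unauthorized access to the API server",             # goal
    "Edit the API server pod specification file on the control plane",  # action
    "In this article we take a deep dive into Kubernetes security",     # other
] * 10  # repeated so every fold contains all four classes
labels = ["statement", "goal", "action", "other"] * 10

features = FeatureUnion([
    ("word", TfidfVectorizer(analyzer="word")),
    ("char", TfidfVectorizer(analyzer="char", ngram_range=(2, 4))),
])
model = Pipeline([("features", features),
                  ("clf", LogisticRegression(max_iter=500))])

folds = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(model, sentences, labels, cv=folds,
                         scoring=make_scorer(matthews_corrcoef))
print("mean MCC over folds:", scores.mean())
```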
We ran our approach (i.e., KGConfig construction, security configuration document estimation, and security configuration concept classification) with the different ML-models five times to calculate the time required to build KGSecConfig. Moreover, the one-tailed non-parametric Mann-Whitney U-test [73] and Cohen’s $d$ effect size [74] were calculated to compare the statistical significance of the observed samples. These tests were selected since they are not affected by the data distribution and have also been used in prior studies to confirm observations on the selection of ML-models [37, 60].

#### V-A3 RQ3

We used a statistical sampling method similar to RQ1 to examine randomly sampled instances of entities in KGSecConfig. Our statistical sampling ensures the evaluation is within a 0.05 error margin at a 95% confidence level. We used accuracy as the evaluation metric for assessing the quality of KGSecConfig since accuracy is the most used and well-known evaluation metric for the quality assessment of KGs, as reported in prior research [58, 59, 70, 71]. Two authors independently evaluated the accuracy and discussed to reach a consensus where their independent assessments differed. We followed the criterion of prior research [58, 59, 71], that is, whether an extracted instance is correct and meaningful (e.g., complete). We computed Cohen’s Kappa [57] to measure the inter-rater agreement. Our KGSecConfig is built upon KGConfig (Section III-A) by adding our three secured configuration concepts (Section III-C) as ‘statement’, ‘goal’, and ‘action’ entities using the learning models. As we already evaluated the learning models in RQ2, we did not repeat the evaluation of these entities in RQ3. We focused our quality assessment on the other configuration entities, e.g., argument, option, types, default values, and description.

### V-B Evaluation Results

#### V-B1 RQ1

We achieved an accuracy of 0.98 for our keyword-based method. Besides, we obtained 1.00, 0.95, and 0.97 precision, recall, and f1-score, respectively. Our approach could not identify the configuration documents where the syntax was not preserved in the document, e.g., ‘admissionControl’, ‘–azure-container-registry’, or ‘show.hidden.metrics’ violating camel case, hyphenation, or dotted syntax. However, relaxing the syntax preservation generates a huge number of false positives, e.g., providing irrelevant documents as configuration documents. For example, the sentences ‘The azure container registry is Microsoft’s own hosting platform for Docker images’ [75], or ‘How to Best Secure Azure Container Registry’ [76] would be labelled as configuration sentences, even though the documents do not provide any configuration information. Thus, relaxing syntax preservation may generate many irrelevant documents to process and a large search space. In this regard, our keyword-based method with syntax preservation keeps the search space of configuration knowledge localization minimized, yet representative, and provides high-precision results for retrieving secured configuration documents.
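The statistical comparison used in the RQ2 results that follow (Section V-A2), i.e., a one-tailed Mann-Whitney U-test together with Cohen’s $d$ over per-fold scores, can be sketched as follows; the per-fold MCC values below are hypothetical.

```python
# Sketch of the significance testing from Section V-A: one-tailed Mann-Whitney
# U-test and Cohen's d over per-fold MCC scores of two classifiers (values hypothetical).
import numpy as np
from scipy.stats import mannwhitneyu

mcc_lr = np.array([0.91, 0.93, 0.90, 0.94, 0.92, 0.93, 0.91, 0.95, 0.92, 0.94])
mcc_nb = np.array([0.80, 0.82, 0.79, 0.83, 0.81, 0.80, 0.84, 0.78, 0.82, 0.81])

# One-tailed test: is the first classifier stochastically larger than the second?
u_stat, p_value = mannwhitneyu(mcc_lr, mcc_nb, alternative="greater")

# Cohen's d with a pooled standard deviation.
pooled_sd = np.sqrt((mcc_lr.var(ddof=1) + mcc_nb.var(ddof=1)) / 2)
cohens_d = (mcc_lr.mean() - mcc_nb.mean()) / pooled_sd

print(f"U={u_stat:.1f}, p={p_value:.4f}, d={cohens_d:.2f}")
```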
Figure 8: MCC for each of the ML-models and features TABLE III: Optimal hyper parameters for the studied ML-models Model | Hyper Parameter | Model | Hyper Parameter | Model | Hyper Parameter ---|---|---|---|---|--- LR | | C: 2.60 --- fit_intercept: True max_iter: 582 solver: liblinear tol: 9.95 e-05 warm_start: False scale: 0 normalize: 0 SVM | | C: 8.68 --- gamma: 3.56 kernel: linear RF | | criterion: gini --- max_depth: 34 max_features: auto n_estimators: 100 NB | | alpha: 0.52 --- fit_prior: False XGB | | max_depth: 92 --- tree_method: exact | TABLE IV: Accuracy achieved by the studied ML-models Model | LR | NB | SVM | RF | XGB ---|---|---|---|---|--- Accuracy | 0.94 | 0.82 | 0.88 | 0.76 | 0.93 TABLE V: Precision, Recall, f1-score for each concept using the studied ML- models | LR | NB | SVM | RF | XGB ---|---|---|---|---|--- concept | Precision | Recall | f1-score | Precision | Recall | f1-score | Precision | Recall | f1-score | Precision | Recall | f1-score | Precision | Recall | f1-score statement | 0.98 | 0.93 | 0.96 | 0.94 | 0.77 | 0.85 | 0.93 | 0.93 | 0.93 | 0.97 | 0.82 | 0.89 | 0.97 | 0.9 | 0.93 goal | 0.9 | 0.88 | 0.89 | 0.81 | 0.7 | 0.75 | 0.83 | 0.88 | 0.85 | 0.9 | 0.42 | 0.57 | 0.9 | 0.84 | 0.88 action | 0.86 | 0.93 | 0.9 | 0.7 | 0.91 | 0.79 | 0.88 | 0.83 | 0.85 | 0.58 | 0.96 | 0.72 | 0.86 | 0.91 | 0.89 other | 0.9 | 0.88 | 0.89 | 0.82 | 0.79 | 0.8 | 0.9 | 0.89 | 0.89 | 0.81 | 0.74 | 0.73 | 0.9 | 0.87 | 0.88 #### V-B2 RQ2 Fig. 8 shows a combination of word and character level feature performs better than other features in terms of MCC. For the character level feature, we chose $n$-gram in the range of 2 $\leq n$ $\leq$ 4 since vocabulary size does not increase after character size 4. A combination of word and character level features considers both word and character to learn the semantic representation of the sentence to give better results. NLP features (e.g., count of word, character, noun, verb, adjective, adverb, and pronoun) perform similarly for all single models except ensemble models. One of the reasons could be that ensemble models learn the overlapping of NLP features among classes due to their intrinsic characteristics of individual model aggregation. We verified NLP features perform statistically significant different than other features with Mann-Whitney U-test ($z$-score is 2.506, $p$-value is .006, significant at $p$-value $<$ .05) and large Cohen’s d effect size (4.324). Table III represents the optimal hyper-parameter of each model for its best feature, e.g., a combination of word and character level features. Besides, Table IV presents the accuracy achieved by different ML- model. It is observed that LR and XGB outperform other models. We verified our observation with Mann-Whitney U-test ($z$-score is 3.741, $p$-value is 9 $\times$ 10${}^{-}3$) and Cohen’s d effect size. Table V shows the precision, recall, and f1-score obtained for each model. We found that LR and XGB also perform better than other models in terms of precision, recall, and f1-score. Table VI shows the average time requirement to build KGSecConfig with different ML models. Our approach with the LR model takes the least amount of time (4.4 minutes on average) to build KGSecConfig. The difference among the time requirements is due to the different training times for ML models since the KGConfig construction, configuration documents estimation, and BFS algorithm execution required similar time to generate their respective output. XGB and RF take longer time to train than LR, NB, and SVM. 
We verified our observation with Mann-Whitney U-test, where we obtained statistical significant difference ($z$-score$<$3.185, $p$-value$<$9.1 $\times$10${}^{-}3$, significant at $p$-value $<$ 0.05) and larger Cohen’s $d$ effect size ($d>$174.1). XGB and RF are tree-based methods and traditionally reported to take longer time [77]. Calero and Pattini [78] mentioned current commercial designs are motivated by arguments based on sustainability (i.e., using fewer resources to achieve results). In particular, they asserted organizations used sustainability-based redesigns to motivate extensive cost-cutting opportunities. Therefore, we suggest LR should be preferred in building KGSecConfig. TABLE VI: Time (s) required to build KGSecConfig by ML-models Model | LR | NB | SVM | RF | XGB ---|---|---|---|---|--- Time (in seconds) | 264 | 313 | 438 | 773 | 694 #### V-B3 RQ3 Table VII shows the results of our quality assessment of the extracted knowledge, where ACC-1, ACC-2, and ACC-F denotes the accuracy by two annotators independently, and final accuracy after resolving disagreement. The accuracies for all the configuration entities are above 90% and Cohen’s Kappa are above 0.6, indicating substantial to almost-perfect agreement between annotators. Our configuration entities argument and options achieved 100% accuracy, which is not surprising since such entities are extracted from structured document contents (e.g., enclosed in html, xml tags) with careful implementation and verification. Null values, empty fields, and empty strings were the common problems for default values and types extraction. For example, ‘–allow-metric- labels’ argument in kube api server provided a square bracket notation to present empty fields in its default values, whereas ‘–cgroup-root’ provided double quotes, ‘–system-reserved’ provided backslash, and ‘–kube-reserved- cgroup’ kept the space blank after mentioning types (e.g., ‘–kube-reserved- cgroup’, ‘Type:’ $<$space$>$) in Kubernetes, which caused erroneous extraction. Besides, incomplete sentences due to erroneous splitting is the common problem for the description entity. For example, ‘–container-runtime’ argument had the description ‘The container, e.g., docker, remote runtime to use’. Our sentence tokenizer based on NLTK provided two separate sentences as ‘The container, e.g.,’, and ‘docker, remote runtime to use’ causing incomplete sentences for the description entity. Our KGSecConfig contains highly accurate configuration knowledge, which can support practical downstream applications, such as automated misconfiguration detection, verification, and mitigation. The common problems with the quality of extracted knowledge include text processing errors and meaningless description due to incomplete sentences. These problems can be minimized by developing more rules to enhance the text processing techniques, which can be further leveraged for accurate configuration entity extraction. TABLE VII: Accuracy of knowledge in KGSecConfig Entities | ACC-1 | ACC-2 | ACC-F | Agreement ---|---|---|---|--- argument | 1.00 | 1.00 | 1.00 | 1.00 options | 1.00 | 1.00 | 1.00 | 1.00 default values | 0.97 | 0.98 | 0.98 | 0.62 type | 0.96 | 0.94 | 0.94 | 0.77 description | 0.90 | 0.91 | 0.91 | 0.86 ## VI Implication and Discussion In this Section, we provide how KGSecConfig can be utilized for secured configuration of CO and some implications for researchers and practitioners. 
Figure 9: An example of Kubernetes manifest Figure 10: Automated misconfiguration mitigation using KGSecConfig ### VI-A Secured Configuration-as-a-Service #### VI-A1 Automated mitigation of misconfiguration Our KGSecConfig can be used to mitigate the misconfiguration of CO clusters automatically. Fig. 9 shows an example of a configuration file to deploy a containerized application [79]. We built a Kubernetes cluster with two nodes [80], a control plane [81], and a worker node [82] to explore the potential security threat actors with the configuration file as shown in Fig. 9. We leveraged our KGSecConfig to identify and mitigate the threat actors automatically, as shown in Fig. 10. A Compliance Parse Tree (CPT) as shown in Fig. 10(b) was built to tokenize configuration argument and its options, where the root of the tree is the configuration file type, e.g., ‘Deployment’, non- leaf nodes are arguments, and leaf nodes are options. We developed a compliance checker using subgraph matching [70] whose aim is to compare CPT with our KGSecConfig for possible misconfiguration detection. Our KGSecConfig identified any user in the cluster could get access to the container as the security policy (e.g., ‘SecurityContext’) argument is missing. A user can also access the container as a root user since ‘runAsNonRoot’ was not set. Moreover, a user can perform the privilege escalation as the ‘allowPrivilegeEscalation’ argument is missing. Besides, DDoS attack can be mounted due to configuring ‘imagePullPolicy’ as ‘Always’ and undefined host networking policy. All of the mentioned threat actors are automatically generated from our compliance checker, as shown in the red circle in Fig. 10(c). In particular, our ‘goal’ concept provided the knowledge of such threat actors where missing or undefined arguments were matched. Our KGSecConfig also has the capability to automatically mitigate the misconfiguration by replacing (e.g., replacing ‘Always’ option with ‘IfNotPresent’ in ‘imagePullPolicy’ argument) or adding (e.g., adding ‘allowPrivilegeEscalation’ argument with ‘false’ option) secured argument and corresponding options as a node or set of nodes as shown in green circles in Fig. 10(c) by using the knowledge in KGSecConfig. Finally, it can generate secured configuration files with YAML as shown in Fig. 10(d) or JSON format for automated execution due to the structured storage of relationships, e.g., parent-child relationship among arguments as shown in arrows in Fig. 10. #### VI-A2 Automated Verification of Extracted Configuration Our KGConfig (Section III-A) can serve as foundation support in terms of verification, e.g., whether the extracted configuration knowledge for security purposes either exists (e.g., if ‘–authorization-mode’ argument is consistent with the current documentation or ‘RBAC’ is a valid option for ‘–authorization-mode’) according to official documentation. For instance, our KGSecConfig has detected some inconsistencies in the NSA published report [83]. For instance, the NSA report stated Kubelet [84] and Kube-Scheduler [85] run Transmission Control Protocol (TCP) on port number 10251, however, we identified it is actually 10250 for read/write port and 10255 read-only port. Similarly, the same inconsistencies were found in other components of Kubernetes, such as Kubelet [84] and control-manager [86]. Our KGSecConfig can automatically detect inconsistencies without manual navigation of the large configuration space. 
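To illustrate the mitigation step of Section VI-A1, the following minimal Python sketch checks a parsed Deployment manifest (cf. Fig. 9) against secured argument/option pairs that would be drawn from KGSecConfig and patches the insecure ones. The rule table below contains only the examples named in the text, and the file name and function names are hypothetical; it is not our compliance checker, which uses subgraph matching over the CPT.

```python
# Sketch of the compliance check and automated mitigation of Section VI-A1:
# a parsed manifest is compared against secured options drawn from KGSecConfig.
import yaml  # PyYAML, assumed available for reading/writing manifests

# Secured container-level argument/option pairs as they might be read from the
# knowledge graph (only the examples mentioned in the text are shown here).
SECURED_CONTAINER_OPTIONS = {
    "imagePullPolicy": "IfNotPresent",
    "securityContext": {"runAsNonRoot": True, "allowPrivilegeEscalation": False},
}

def mitigate(container: dict) -> list:
    """Report and fix missing or insecure options in one container spec."""
    findings = []
    for argument, secured in SECURED_CONTAINER_OPTIONS.items():
        if isinstance(secured, dict):  # nested arguments such as securityContext
            current = container.setdefault(argument, {})
            for key, value in secured.items():
                if current.get(key) != value:
                    findings.append(f"set {argument}.{key} to {value}")
                    current[key] = value
        elif container.get(argument) != secured:
            findings.append(f"set {argument} to {secured!r} "
                            f"(was {container.get(argument)!r})")
            container[argument] = secured
    return findings

manifest = yaml.safe_load(open("deployment.yaml"))  # e.g. the manifest of Fig. 9
for container in manifest["spec"]["template"]["spec"]["containers"]:
    for finding in mitigate(container):
        print(finding)
print(yaml.safe_dump(manifest))  # secured manifest, cf. Fig. 10(d)
```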
In addition, we also detected some deprecated arguments (e.g., ‘–repair-malformed-updates’ in kube api-server) mentioned in some white papers (e.g., RedHat). Thus our KGSecConfig ensures real-time monitoring and updated configuration knowledge aligned with official documentation by automatic verification of extracted knowledge in terms of configuration space. ### VI-B Automated Interpretation of Configuration Options Our KGSecConfig can provide the reasoning why we need to set/enable or disable a particular set of arguments that can increase the understanding of the configuration knowledge. For example, ‘false’ option in ‘–profiling’ argument reduces the potential attack surface since the default ‘true’ option generates a significant amount of program data that could potentially be exploited to uncover system and program details. Our KGSecConfig systematically organizes such essential knowledge by ‘hasGoal’ relationship with respective options which can help administrators’ reasoning in configuring CO. Besides, our KGSecConfig minimizes the manual traversal of multiple data sources required for an investigation to change the configuration options due to the disclosure of new security issues, e.g., vulnerabilities or malware. For example, a denial-of-service vulnerability of kube api-server, CVE-2019-11254 is revealed in one security information sharing-source (i.e., in a vulnerability database NVD [87]). This vulnerability can be fixed by restricting unauthorized access to kube api-server, which had been mentioned in another source, i.e., Security Announcement Kubernetes (SAK) [35]. However, neither NVD nor SAK described how to restrict unauthorized access to kube api server in terms of configuration options and arguments. Our KGSecConfig can correlate the encapsulated knowledge (e.g., ‘goal’ concepts associated with arguments of kube api server and the query ‘restrict unauthorized access to kube api-server’) and return the mitigation approach by providing necessary arguments and options (e.g., ‘–anonymous-auth’ will be ‘false’ and ‘–authorization-mode’ will be ‘RBAC’) for restricting the unauthorized access. Therefore, KGSecConfig can automatically provide the knowledge required for secured configuration to fix security issues. However, we acknowledge that the returned results will depend on the query formulation (e.g., query other than ‘restrict unauthorized access to kube api-server’). In future research, we will perform the embedding-based textual similarity [24] on the encapsulated knowledge and the given query to reduce the semantic and lexical gap. ### VI-C Visualization of Configuration hot-spot Our KGSecConfig can support to visualize the configuration hot-spot, i.e., the arguments which need to change their default options for security purposes. The default configurations in CO and its integrated software are not always security-focused and require changes to harden the CO. Multiple cyber-security attacks were launched due to default configuration in Kubernetes. For instance, Tesla went through a cyber breach because of misconfiguration in 2018 [10]. Unauthorized users got access to secret resources exploiting an administrative CO which was being operated with default configuration in Tesla. Besides, the diversity of tools, platforms, and resource providers’ configuration space makes it challenging to locate and then update the configuration manually. For example, the lead engineer of Target [88], a retail corporation, reviewed Kubernetes as “..no document versioning. 
Stuff is all over. It is difficult to find the right stuff..” [89]. In this regard, our KGSecConfig can be a potential approach to mitigate this challenge by visualizing a unified knowledge base. Fig. 11 represents a sub-graph (built using Neo4j [90]) providing the configuration hot-spots of the studied software systems. Fig. 11 can show which software system needs to adjust its default arguments more than the other software systems and can help practitioners prioritize their security configuration tasks. For instance, Fig. 11(a) has more nodes compared to the others (Fig. 11(b), 11(c), and 11(d)), indicating that it requires more adjustment of default arguments. Moreover, our KGSecConfig summarizes relationships effectively (RQ1) and efficiently (RQ2 and RQ3), and it can scale to adopt new knowledge automatically using the learning-based model.

Figure 11: A subgraph visualizing the configuration hotspot of the studied CO software systems

## VII Threats to validity

#### VII-1 Construct Validity

Labelled data are necessary for training a supervised learning model. Our manually labelled dataset may be biased and subjective. We followed prior studies [30, 32, 42] to mitigate the bias. Two authors independently annotated the dataset and reported the disagreements. Moreover, we only used the data that both authors agreed on to train the learning model, minimizing the effect of bias. Our labelled dataset is also comparable with prior studies that build cyber security knowledge graphs. For example, Islam et al. [42] labelled 1.7K, Rastogi et al. [30] labelled 3K, and Piplai et al. [32] labelled 3.6K sentences for building security tools, predicting malware entities, and constructing a malware KG, respectively.

#### VII-2 Internal Validity

Our configuration options collected from official documentation might be incomplete. However, prior studies on configuration mining from question/answering sites [24] and bug reports [25, 26] also leveraged official documentation. Besides, our security configuration arguments are derived from publicly available security information-sharing sources, which might miss some secured configuration arguments. However, we used security information-sharing sources reported by prior researchers on the security of orchestrators [7] or software systems. In future work, we will investigate other sources of security information. Another threat to internal validity is the subjective judgment in the RQs, for example the evaluation of the quality of extracted configuration knowledge. To alleviate this threat, we have reported the agreement for each subjective judgment.

#### VII-3 External Validity

Our evaluation results consider four case-study software systems and five ML models. We cannot generalize our findings to other software systems or ML models. However, we provided an approach that can be applied to any other software system. Besides, our selected software systems have a large configuration space and are popular in orchestration services [34]. Moreover, our goal is not to compare different ML models, and any learning-based model can be plugged into our approach to build the KG.

## VIII Conclusion and future work

85% of global organizations are forecast to relocate their legacy applications to container-based development and deployment [8]. Container orchestrators, e.g., Kubernetes, are playing a vital role in managing container clusters in both development and production environments.
Container orchestrators require automated configuration for secured operation of container clusters. The diversity of software systems used in orchestrators, their vast configuration space, and information overload cause barriers for automating the secured configuration. We proposed a novel knowledge graph-based approach, KGSecConfig to aggregate and organize disparate silos of security configuration knowledge. We built a secured configuration knowledge graph with Kubernetes, Docker, Azure, and VMWare using KGSecConfig, which provide essential configuration knowledge, such as implementation details and reasoning. We demonstrated how KGSecConfig can be utilized for various downstream task, such as automated misconfiguration mitigation, inconsistency identification, and configuration hot-spot visualization. We plan to extend our KGSecConfig to explore the potential of secure configuration migration from one orchestrator to another orchestrator for the clusters. ## Acknowledgment The work has been supported by the Cyber Security Research Centre Limited whose activities are partially funded by the Australian Government’s Cooperative Research Centres Programme. This work has also been supported with super-computing resources provided by the Phoenix HPC service at The University of Adelaide. ## References * [1] (2021) Kubernetes documentaiton. Access Date August, 2021. [Online]. Available: http://kubernetes.io * [2] (2021) Swarm mode overview. Access Date August, 2021. [Online]. Available: https://docs.docker.com/engine/swarm/ * [3] (2021) Nomad documentation. Access Date August, 2021. [Online]. Available: https://www.nomadproject.io/docs * [4] D. Bernstein, “Containers and cloud: From lxc to docker to kubernetes,” _IEEE Cloud Computing_ , vol. 1, no. 3, pp. 81–84, 2014. * [5] B. Burns, J. Beda, and K. Hightower, _Kubernetes: up and running: dive into the future of infrastructure_. O’Reilly Media, 2019. * [6] Portworx and A. Security. (2019) 2019 container adoption survey. October 20, 2020\. [Online]. Available: https://portworx.com/wp-content/uploads/2019/05/2019-container-adoption-survey.pdf * [7] M. S. I. Shamim, F. A. Bhuiyan, and A. Rahman, “Xi commandments of kubernetes security: A systematization of knowledge related to kubernetes security practices,” in _2020 IEEE Secure Development (SecDev)_. IEEE, 2020, pp. 58–64. * [8] StackRox. (2020) The state of container and kubernetes security. Access Date August, 2021. [Online]. Available: https://security.stackrox.com/rs/219-UEH-533/images/State_of_Container_and_Kubernetes_Report.pdf * [9] RedHat. (2021) State of kubernetes security report. Access Date August, 2021. [Online]. Available: https://www.redhat.com/en/engage/state-kubernetes-security-s-202106210910 * [10] T. Taylor. (2020) 5 kubernetes security incidents and what we can learn from them. Access Date August, 2021. [Online]. Available: https://techgenix.com/5-kubernetes-security-incidents * [11] E. Zilberman. (2021) Top kubernetes configuration mistakes to avoid. Access Date August, 2021. [Online]. Available: https://www.datree.io/resources/kubernetes-configuration-mistakes * [12] R. Wilson. (2021) Why is kubernetes so hard - 4 reasons why and what to do about it. Access Date August, 2021. [Online]. Available: https://releasehub.com/blog/why-kubernetes-is-so-hard * [13] Kubernetes. (2021) Kubernetes deployments. Access Date August, 2021. [Online]. Available: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/ * [14] D. B. Bose, A. Rahman, and S. I. 
Shamim, “‘under-reported’security defects in kubernetes manifests,” in _2021 IEEE/ACM 2nd International Workshop on Engineering and Cybersecurity of Critical Systems (EnCyCriS)_. IEEE, 2021, pp. 9–12. * [15] Kubernetes. (2021) kube api-server. Access Date August, 2021. [Online]. Available: https://kubernetes.io/docs/reference/command-line-tools-reference/kube-apiserver/ * [16] C.-C. Chang, S.-R. Yang, E.-H. Yeh, P. Lin, and J.-Y. Jeng, “A kubernetes-based monitoring platform for dynamic cloud resource provisioning,” in _GLOBECOM 2017-2017 IEEE Global Communications Conference_. IEEE, 2017, pp. 1–6. * [17] A. Modak, S. Chaudhary, P. Paygude, and S. Ldate, “Techniques to secure data on cloud: Docker swarm or kubernetes?” in _2018 Second International Conference on Inventive Communication and Computational Technologies (ICICCT)_. IEEE, 2018, pp. 7–12. * [18] (2021) Docker documentation. Access Date August, 2021. [Online]. Available: https://docs.docker.com * [19] (2021) Micorsoft azure documentaiton. Access Date August, 2021. [Online]. Available: https://docs.microsoft.com/en-us/azure/aks/ * [20] (2021) Vmware tanzu kubernetes grid documentation. Access Date August, 2021. [Online]. Available: https://docs.vmware.com/en/VMware-Tanzu-Kubernetes-Grid/index.html * [21] (2021) Container concept kubernetes. Access Date August, 2021. [Online]. Available: https://kubernetes.io/docs/concepts/containers/ * [22] (2021) Pods in kubernetes. Access Date August, 2021. [Online]. Available: https://kubernetes.io/docs/concepts/workloads/pods/ * [23] J. Saldaña, _The coding manual for qualitative researchers_. sage, 2021. * [24] M. Sayagh and A. E. Hassan, “Configminer: Identifying the appropriate configuration options for config-related user questions by mining online forums,” _IEEE Transactions on Software Engineering_ , 2020. * [25] D. Jin, M. B. Cohen, X. Qu, and B. Robinson, “Preffinder: Getting the right preference in configurable software systems,” in _Proceedings of the 29th ACM/IEEE international conference on Automated software engineering_ , 2014, pp. 151–162. * [26] W. Wen, T. Yu, and J. H. Hayes, “Colua: Automatically predicting configuration bug reports and extracting configuration options,” in _2016 IEEE 27Th international symposium on software reliability engineering (ISSRE)_. IEEE, 2016, pp. 150–161. * [27] X. Xia, D. Lo, W. Qiu, X. Wang, and B. Zhou, “Automated configuration bug report prediction using text mining,” in _2014 IEEE 38th Annual Computer Software and Applications Conference_. IEEE, 2014, pp. 107–116. * [28] D. van der Linden, E. Williams, J. Hallett, and A. Rashid, “The impact of surface features on choice of (in) secure answers by stackoverflow readers,” _IEEE Transactions on Software Engineering_ , no. 01, pp. 1–1, 2020. * [29] M. Verdi, A. Sami, J. Akhondali, F. Khomh, G. Uddin, and A. K. Motlagh, “An empirical study of c++ vulnerabilities in crowd-sourced code examples,” _IEEE Transactions on Software Engineering_ , 2020. * [30] N. Rastogi, S. Dutta, R. Christian, M. Zaki, A. Gittens, and C. Aggarwal, “Information prediction using knowledge graphs for contextual malware threat intelligence,” _arXiv preprint arXiv:2102.05571_ , 2021. * [31] A. Pingle, A. Piplai, S. Mittal, A. Joshi, J. Holt, and R. Zak, “Relext: Relation extraction using deep learning approaches for cybersecurity knowledge graph improvement,” in _Proceedings of the 2019 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining_ , 2019, pp. 879–886. * [32] A. Piplai, S. Mittal, A. 
# Rigidity theorems for constant weighted mean curvature hypersurfaces

Saul Ancari, Instituto de Matemática e Estatística, Universidade Federal Fluminense, Campus Gragoatá, Rua Alexandre Moura 8 - São Domingos 24210-200 Niterói, Rio de Janeiro, Brazil<EMAIL_ADDRESS>and Igor Miranda, Instituto de Matemática e Estatística, Universidade Federal Fluminense, Campus Gragoatá, Rua Alexandre Moura 8 - São Domingos 24210-200 Niterói, Rio de Janeiro, Brazil<EMAIL_ADDRESS>

###### Abstract.

In this article, we study hypersurfaces $\Sigma\subset\mathbb{R}^{n+1}$ with constant weighted mean curvature. Recently, Wei and Peng proved a rigidity theorem for CWMC hypersurfaces that generalizes the Le-Sesum classification theorem for self-shrinkers. More specifically, they showed that a complete CWMC hypersurface with polynomial volume growth and bounded norm of the second fundamental form that satisfies $|A|^{2}H(H-\lambda)\leq H^{2}/2$ must be either a hyperplane or a generalized cylinder. We generalize this result by removing the boundedness assumption on the norm of the second fundamental form. Moreover, we prove that, under some conditions, if the reverse inequality holds, then the hypersurface must be either a hyperplane or a generalized cylinder. As an application of one of the results proved in this paper, we obtain another version of a classification theorem previously obtained by the authors: under some conditions, a complete CWMC hypersurface with $H\geq 0$ must be either a hyperplane or a generalized cylinder.

###### Key words and phrases: weighted mean curvature, polynomial volume growth, self-shrinkers, $\lambda$-hypersurfaces

###### 2010 Mathematics Subject Classification: 53C42, 53C44

This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) Finance Code 001

## 1\. Introduction

One of the main interests in mean curvature flow theory is to understand the possible singularities that the flow develops. The singularity models for these flows can be associated with hypersurfaces $\Sigma\subset\mathbb{R}^{n+1}$ that satisfy the mean curvature condition

$\displaystyle H=\frac{\langle x,\nu\rangle}{2},$

where $H$, $x$ and $\nu$ denote the mean curvature of $\Sigma$, the position vector in $\mathbb{R}^{n+1}$ and the unit normal vector of $\Sigma$, respectively. Such hypersurfaces are known as self-shrinkers. Another characterization of self-shrinkers is that they are critical points of the weighted area functional

(1.1) $\displaystyle F(\Sigma)=\int_{\Sigma}e^{-\frac{|x|^{2}}{4}}dv.$

There is great interest in studying two-sided smooth hypersurfaces $\Sigma\subset\mathbb{R}^{n+1}$ which are critical points of the functional (1.1) for variations $G:(-\varepsilon,\varepsilon)\times\Sigma\rightarrow\mathbb{R}^{n+1}$ that preserve enclosed weighted volume. These variations can be represented by functions $u:\Sigma\rightarrow\mathbb{R}$ defined by $u(x)=\langle\partial_{t}G(0,x),\nu(x)\rangle$ such that $\int_{\Sigma}u\ e^{-|x|^{2}/4}dv=0$, where $\nu$ is the unit normal vector of $\Sigma$. It is well known that these hypersurfaces satisfy the condition

$\displaystyle H=\frac{\langle x,\nu\rangle}{2}+\lambda,$

where $\lambda\in\mathbb{R}$. Such hypersurfaces are known as $\lambda$-hypersurfaces or as constant weighted mean curvature hypersurfaces. Throughout this paper, whenever $\Sigma$ satisfies the mean curvature condition above, it will be called a CWMC hypersurface.
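As a quick check of this definition (a sketch on our part, under the sign convention common in the self-shrinker literature that the round sphere has positive mean curvature $H=n/r$ with respect to the outward unit normal), every round sphere $S^{n}_{r}(0)$ centered at the origin is a CWMC hypersurface for a suitable value of $\lambda$:

$\displaystyle\nu=\frac{x}{r},\qquad\langle x,\nu\rangle=r,\qquad H=\frac{n}{r}=\frac{r}{2}+\lambda\quad\text{ for }\quad\lambda=\frac{n}{r}-\frac{r}{2}.$

In particular, $\lambda=0$ recovers the self-shrinking sphere of radius $\sqrt{2n}$.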
Self-shrinkers, hyperplanes, spheres and cylinders are some examples of CWMC hypersurfaces. Such hypersurfaces arise in geometry and probability as solutions to the Gaussian isoperimetric problem. They were first studied by Cheng-Wei [cheng2018complete] and McGonagle-Ross [mcgonagle2015hyperplane]. Since then, there has been much interest in classification results for CWMC hypersurfaces, for instance [sun2018compactness], [guang2018gap], [cheng2016rigidity], etc.

Le and Sesum [le2011blow] proved a gap theorem showing that if a complete embedded self-shrinker $\Sigma\subset\mathbb{R}^{n+1}$ has polynomial volume growth (i.e., there exist constants $C>0$ and $r_{0}>0$ such that $V(B_{r}(0)\cap\Sigma)\leq Cr^{\alpha}$ for all $r\geq r_{0}$ and for some $\alpha>0$) and the norm of its second fundamental form satisfies $|A|^{2}<1/2$, then $\Sigma$ must be a hyperplane. Cao and Li [cao2013gap] extended this theorem to arbitrary codimension, proving that a complete embedded self-shrinker $\Sigma^{n}\subset\mathbb{R}^{n+p}$ with polynomial volume growth and $|A|^{2}\leq 1/2$ has to be a generalized cylinder. Later, Guang [guang2018gap] and Cheng, Ogata and Wei [cheng2016rigidity] proved rigidity theorems for CWMC hypersurfaces that generalize the Le-Sesum theorem. Recently, Wei and Peng proved another generalization for CWMC hypersurfaces. More specifically, they showed the following result.

###### Theorem 1.1.

[wei2019note] Let $\Sigma\subset\mathbb{R}^{n+1}$ be a complete embedded CWMC hypersurface with polynomial volume growth. If the norm of the second fundamental form is bounded and $|A|^{2}H(H-\lambda)\leq\frac{H^{2}}{2},$ then $\Sigma$ is either a hyperplane or a generalized cylinder $S^{k}_{r}(0)\times\mathbb{R}^{n-k}$, $1\leq k\leq n$.

In this paper, we prove a similar result without the boundedness assumption on the norm of the second fundamental form. In fact, we prove the following theorem.

###### Theorem 1.2.

Let $\Sigma\subset\mathbb{R}^{n+1}$ be a complete embedded CWMC hypersurface. If the following properties hold:

(i) $|A|^{2}H(H-\lambda)\leq\frac{H^{2}}{2}$,

(ii) $\frac{1}{k^{2}}\int_{B^{\Sigma}_{2k}(p)\setminus B^{\Sigma}_{k}(p)}H^{2}e^{-\frac{|x|^{2}}{4}}\rightarrow 0$ as $k\rightarrow\infty$, for a fixed point $p\in\Sigma$,

then $\Sigma$ is either a hyperplane or a generalized cylinder $S^{k}_{r}(0)\times\mathbb{R}^{n-k}$, $1\leq k\leq n$.

###### Remark 1.1.

Notice that if $\Sigma$ has polynomial volume growth, then condition (ii) is satisfied; a short justification is sketched below.
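Here is a short sketch of the justification (our outline, using only the CWMC equation and the definition of polynomial volume growth; constants are not optimized). Since $H=\frac{\langle x,\nu\rangle}{2}+\lambda$, one has the pointwise bound $|H|\leq\frac{|x|}{2}+|\lambda|$, and a standard dyadic-annulus estimate shows that the weighted integral of any polynomially growing function is finite on a hypersurface with polynomial volume growth. Hence

$\displaystyle C_{0}:=\int_{\Sigma}H^{2}e^{-\frac{|x|^{2}}{4}}dv\leq\int_{\Sigma}\Big(\frac{|x|}{2}+|\lambda|\Big)^{2}e^{-\frac{|x|^{2}}{4}}dv<\infty,$

and therefore

$\displaystyle\frac{1}{k^{2}}\int_{B^{\Sigma}_{2k}(p)\setminus B^{\Sigma}_{k}(p)}H^{2}e^{-\frac{|x|^{2}}{4}}dv\leq\frac{C_{0}}{k^{2}}\longrightarrow 0\quad\text{ as }\quad k\rightarrow\infty.$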